Chris Mattmann

Greenhorn
since Mar 09, 2021
Chris Mattmann is an experienced IT executive, CTO, and Division Manager of the AI, Analytics and Innovative Development Organization in the Information Technology and Solutions Directorate at NASA JPL. At JPL, Mattmann is the Chief Technology and Innovation Officer; he reports to the CIO and Director for IT and manages advanced IT research, open-source and technology evaluation, and user-infusion capabilities. Mattmann is JPL's first Principal Scientist in the area of Data Science; the designation of Principal recognizes sustained, outstanding individual contributions to advancing scientific or technical knowledge, or to advancing the implementation of technical and engineering practices on projects, programs, or the Institution. He has over 20 years of experience at JPL, where he conceived, realized, and delivered the architecture for the next generation of reusable science data processing systems for NASA's Orbiting Carbon Observatory, NPP Sounder PEATE, and Soil Moisture Active Passive (SMAP) Earth science missions. Mattmann's work has been funded by NASA, DARPA, DHS, NSF, NIH, NLM, and by private industry and commercial partnerships. He was the first Vice President (VP) of Apache OODT (Object Oriented Data Technology), the first NASA project at the Apache Software Foundation (ASF), and he led the project's transition from JPL to the ASF.

He contributes to open source and was a member of the Board of Directors of the Apache Software Foundation (2013-18). He was one of the initial contributors to Apache Nutch, the predecessor to Apache Hadoop, and a member of its project management committee. Mattmann is the progenitor of Apache Tika, the digital "babel fish" and the de facto framework for content analysis and detection. Today Mattmann contributes to TensorFlow, Google's platform for all things machine learning, and recently finished Machine Learning with TensorFlow, Second Edition, published by Manning Publications.

Mattmann is the Director of the Information Retrieval & Data Science (IRDS) group and an Adjunct Research Professor at USC. He teaches graduate courses in Content Detection & Analysis and in Search Engines & Information Retrieval. Mattmann has materially contributed to the understanding of the Deep Web and Dark Web through the DARPA MEMEX project. His work helped uncover the Panama Papers scandal, whose reporting won the Pulitzer Prize in Journalism in 2017.
Cows and Likes
Cows: 0 received (0 in last 30 days), 0 given
Likes: 1 received (0 in last 30 days), 3 given (0 in last 30 days)

Recent posts by Chris Mattmann

Hi R Castro! Yes! Ch12 (AutoEncoders with CIFAR), Ch14 (CNNs) and Ch15 (building VGG-Face and CIFAR-10 CNNs) are all about computer vision! You build up to it first with explainable models and then get into neural networks. You will find a lot in there to help you in your great work!

--Chris
Turab, thank you so much; I sincerely appreciate it. Please do hit me up on Instagram (http://instagram.com/chrismattmann) and/or Twitter (http://twitter.com/chrismattmann/) if I can help answer any questions.

--Chris
Thank you Laura! For me, I think the biggest coming things are: 1) AutoML - here today and available, e.g., see DataRobot and Google AutoML; 2) edge/IoT ML, e.g., using NVIDIA Jetson or TX2 and TF Lite, and even onto space!; and 3) Learning with Less Labels, i.e., requiring fewer labels to train machine learning models.

I cover elements of all three of these in the book!

--Chris
Laura, thank you for your question! Yes, see ch18 on Seq2Seq models, and see the recent TF2 branch and an implementation using a similar notebook/approach to the TensorFlow Neural MT example (Spanish to English). This is an encoder/decoder model, like a Q&A system where the answer to the Q is a translation. See: https://www.tensorflow.org/tutorials/text/nmt_with_attention

Ch18 in my book covers Seq2Seq which was the predecessor to this. You build a chat bot using Seq2Seq. In the TF2 repo in my GitHub: http://github.com/chrismattmann/MLwithTensorFlow2ed/tree/master/TFv2/ you can find an updated version of the notebook for ch18 that uses TFv2 similar to the Neural MT example to implement the chat bot.
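The encoder/decoder flow described above can be sketched in a few lines of plain NumPy (a toy illustration with untrained random weights, not code from the book or the TF tutorial): the encoder folds the input token sequence into a single state vector, and the decoder greedily emits tokens from that state until a stop token.

```python
import numpy as np

rng = np.random.default_rng(0)
VOCAB, EMBED, HIDDEN = 12, 8, 16
START, END = 0, 1  # special token ids (toy convention)

# randomly initialized toy parameters (untrained; shapes/flow only)
E = rng.normal(size=(VOCAB, EMBED))             # embedding table
Wx = rng.normal(size=(EMBED, HIDDEN)) * 0.1     # input-to-hidden
Wh = rng.normal(size=(HIDDEN, HIDDEN)) * 0.1    # hidden-to-hidden
b = np.zeros(HIDDEN)
Wout = rng.normal(size=(HIDDEN, VOCAB)) * 0.1   # hidden-to-vocab logits

def step(tok, h):
    """One tanh-RNN cell update for a single token id."""
    return np.tanh(E[tok] @ Wx + h @ Wh + b)

def encode(tokens):
    """Fold the whole input sequence into one state vector."""
    h = np.zeros(HIDDEN)
    for t in tokens:
        h = step(t, h)
    return h

def decode(h, max_len=10):
    """Greedily emit tokens from the encoder state until END."""
    out, tok = [], START
    for _ in range(max_len):
        h = step(tok, h)
        tok = int(np.argmax(h @ Wout))
        if tok == END:
            break
        out.append(tok)
    return out

state = encode([3, 5, 7, 2])   # "question" token ids
reply = decode(state)          # "answer" token ids
```

A real chat bot learns these weights end to end from paired sequences; the TF tutorial linked above adds an attention layer on top of the same encoder/decoder skeleton.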

--Chris
Hi Yury it sure did! I got into Google Show and Tell from doing Mars Image Captioning. Also lots of NLP and sentiment analysis e.g., for mining web data, and the work that I did on MEMEX with DARPA for the Deep and the Dark Web!

--Chris
Thank you for your question, Yury. I know more about Amazon AWS, mostly from my own experience, but I have been learning Google Cloud Platform (GCP) and am a fan of TPUs and learning more and more about them. If you standardize on Docker or K8s, then almost all of the platforms will run those natively, so you can mix and match.

Cheers!
Chris
Claude, thank you! In my opinion, TensorFlow does a good job of not just jumping to elegance: it includes all model possibilities, including regression, classification, and explainable models. It presents them as easily as it does neural networks and forces you to think about the lower-level API aspects; when ready, you can jump to the elegance, e.g., using Keras or the native TF API.
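As a concrete example of the explainable end of that spectrum, here is linear regression fit by a hand-rolled gradient-descent loop (my own toy NumPy sketch, not code from the book): every update is visible, which is exactly what the higher-level Keras optimizers automate away.

```python
import numpy as np

# toy data: an exact line y = 2x + 1 (no noise, to keep the fit obvious)
x = np.linspace(0.0, 1.0, 50)
y = 2.0 * x + 1.0

w, b, lr = 0.0, 0.0, 0.5  # slope, intercept, learning rate
for _ in range(2000):
    err = (w * x + b) - y             # prediction error
    w -= lr * 2.0 * (err * x).mean()  # gradient of MSE w.r.t. w
    b -= lr * 2.0 * err.mean()        # gradient of MSE w.r.t. b
```

After the loop, w and b have converged to roughly 2 and 1. The same loop written against TF's low-level API (tensors plus an optimizer) is where you see what Keras's model.fit is doing for you.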

Cheers,
Chris
Thank you Yury and Campbell and Jeanne!

Looking forward to being here at the ranch and helping with any outstanding questions.

Most sincerely,
Chris Mattmann