
Jeff Smith

Author
since Jul 15, 2018
I'm the author of Machine Learning Systems from Manning. I build artificial intelligence systems and teach others what I know. I'm interested in the ideas behind intelligence architectures and the tools to make them real. As part of this work, I coined the term reactive machine learning to describe an ideal for machine learning architectures to strive towards. Currently, my professional work is focused on deep learning technology.
New York, NY

Recent posts by Jeff Smith

I don't know that I fully understand your question, but let me take a shot at answering what I think you're asking.

The reactive manifesto led to a few books applying the principles of reactive systems design to various contexts, including this excellent book on Reactive Design Patterns. My book is absolutely influenced by material in that book, but the two books are really about different things. RDP is all about very general design patterns for systems design. Machine Learning Systems can be viewed as a collection of design patterns for machine learning systems specifically. You don't have to have read RDP to read my book. But certainly, I would expect some readers to find value in both.

If your question is more generally about design patterns and not that specific book, then I'm a bit fuzzier on what the question is. The main topic of my book is how to build machine learning systems. I specifically talk about all of the different components that will need to be implemented; that's what Part 2, the bulk of the book, is about. None of those components are really skippable if you want a working, production-grade machine learning system. Along the way, I present some ways that you could implement those components, using some reactive design patterns but also a lot of things that could just be described as programming techniques or design principles. The tools in the book, Scala, Akka, and Spark, are really there just to make it possible to implement these ideas, but I'm not particularly concerned with selling any readers on the use of those specific tools. I'm trying to help build your understanding of how to build whole machine learning systems.
3 years ago
Your question is somewhat addressed by this other thread, but let me answer in a bit more detail for your specifics.

I'm familiar with Weka. I've used it before with folks I was teaching aspects of machine learning. It and the accompanying textbook are great. You've definitely got a head start on some of the core concepts.

As for Scala, it's just another tool. I presume that the reader has a level of proficiency in some programming language. Experience in Java, Python, JavaScript, Ruby, etc. is all useful. But as long as you're a proficient software developer, all of the Scala specifics will be introduced in the text as they're used. I presume that a substantial fraction of the readers will not have deep experience in functional programming languages (Erlang, Haskell, LISPs like Clojure, etc.), so I spend a lot of time on those concepts. Additionally, if you've not worked with a static type system before, like the ones Java, C++, and OCaml have, then a lot of that material will be new, but I explain it as it's introduced. At the level we work at in this book, the static type material boils down to describing the shape of our data and encoding that in our program.
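To make that concrete, here's a minimal sketch of my own (not code from the book) of what "describing the shape of our data" with static types can look like; the `Review` record here is purely hypothetical:

```scala
// A hypothetical data shape, encoded as a case class.
// The compiler now checks this shape everywhere the data flows.
case class Review(userId: Long, text: String, rating: Int)

// A function whose signature documents exactly what it consumes.
def isPositive(review: Review): Boolean = review.rating >= 4

val sample = Review(42L, "Great read", 5)
println(isPositive(sample)) // prints true
```

Once the shape is encoded like this, a typo in a field name or a misplaced argument becomes a compile error rather than a runtime surprise.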
3 years ago
I talk about the choice to use Scala a bit in the book, and I get asked about it so much that I'm probably going to write a blog post on the topic. Let me try giving you a fairly broad answer.

First, I think it's important to acknowledge that learning generally applicable skills is usually the goal of a reader of a technical book like mine. If you're learning, then the choice of language for that learning is only of secondary importance.

But let's get into the question of why someone would choose to use Scala for a machine learning book. Here are some of my reasons:

1. Large portions of a production machine learning system need to be able to support high concurrency. This is usually a requirement of the model server, but it can come up in data collection as well. This means that it's useful to have a multi-threaded runtime like the JVM, the BEAM, the CLR, etc.
2. Beyond single-node concurrency, it's often necessary to build distributed data processing pipelines for things like feature generation and, increasingly, for model learning as well. Since my book isn't primarily about distributed systems infrastructure, I wanted a straightforward answer for how to distribute computation, such as a framework like Spark or language-native capabilities as in Distributed Erlang.
3. A lot of the techniques used in distributed systems rely upon techniques common to functional programming, such as immutable data and pure functions as first-class citizens. FP languages like Scala, Clojure, Haskell, F#, and others make using those techniques easy, but so do libraries and language features in multi-paradigm languages like JavaScript and Python.
4. The book is all about machine learning systems, so I really needed access to good library implementations of common bits of machine learning functionality. This really wasn't optional; I wanted every chapter to be self-contained, only using code from that specific chapter. Languages with good-enough machine learning libraries include Python, Scala, R, and not too many others.
5. A lot of the book is about data modeling and data engineering. Specifically, I tried to introduce a fair amount of material around uncertain data engineering. To teach all of that material, I needed some way of describing data structures. The simplest way to do this is with static types as in Scala or Haskell, but it could also be done using optional type annotations as in Erlang or some dialects of JavaScript.
6. The concepts of supervision and message passing are closely intertwined with the actor model. Ideally, I needed a robust actor model implementation that I could use for several different aspects of the machine learning system. This requirement is fulfilled by Erlang, Akka, and a few other less commonly used implementations.
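To put a little code behind the third point above, here's a small illustration of my own (not from the book): immutable data plus pure functions are what make it safe to retry or relocate a computation in a distributed setting.

```scala
// An immutable record of feature values.
case class Features(values: Vector[Double])

// A pure function: no mutation, no side effects, so it can be
// rerun or shipped to another node without changing the outcome.
def normalize(f: Features): Features = {
  val max = f.values.max
  Features(f.values.map(_ / max))
}

val raw = Features(Vector(2.0, 4.0, 8.0))
println(normalize(raw)) // Features(Vector(0.25, 0.5, 1.0))
assert(raw == Features(Vector(2.0, 4.0, 8.0))) // the input is untouched
```

Because `normalize` never modifies its input, a framework like Spark can apply it to partitions on different machines, or re-run it after a failure, without any coordination.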

Let's score a few languages against these criteria.

R
1. Not natively.
2. Via libraries written in other languages.
3. Not by default.
4. Some good ML libraries.
5. Not easy to express.
6. No implementation I'm aware of.

Python
1. Not natively. C/C++ systems and libraries are often used to mitigate this.
2. Via libraries, usually written in other languages.
3. Only optionally and via libraries.
4. The best ML libraries of any language.
5. Recently added and only rarely used.
6. Only via libraries, none in common use.

Erlang
1. Arguably the canonical concurrency-oriented programming language.
2. Support built directly into the language as well as several commonly used libraries.
3. Very FP-oriented language allowing for only immutable data.
4. No widely used libraries.
5. Optional type annotations via Dialyzer.
6. The canonical actor model implementation.

Scala
1. Several approaches to concurrency, thanks to the JVM.
2. Spark is the biggest project in all of big data.
3. Very FP-oriented language that allows for some limited use of non-FP techniques (e.g. vars).
4. MLlib in Spark is very complete and scales to arbitrary workloads.
5. Incredibly rich and powerful type system.
6. Akka is the second most widely used actor model implementation.

Looking at all of this, Scala really was the only language that fulfilled all of my needs. I could have written a similar book about a problem other than machine learning using Erlang. Or I could have dropped a lot of the material and written something far narrower around machine learning using Python, relying upon things like Spark or TensorFlow Serving to fill the gaps that Python would have left. Or I could have done some mixing and matching, hoping that readers would be able to follow along across toolchains.

I chose to use Scala because I wanted readers to use pretty much the same tools to explore all of these areas. I think it's a fun trip, going through every phase of the machine learning process and layering in new techniques on top of the same tools.

Final caveat: that's an answer specifically about my book. My answer would be totally different if you were trying to get a job in ML or were building your first ML application. That said, I think the book will do a good job of preparing you for future ML development, regardless of what toolchain you choose to use.
3 years ago
I worked on developing the materials for Machine Learning Systems over quite a while, in a number of different formats. My personal site, jeffsmith.tech, brings a lot of those together.

Breaking it down in a bit more detail, here's a guided tour of places you can dive in:

All of the code in the book can be found on the GitHub repo. I have some minor updates to post there, so star that repo if you want to get the latest.

I concurrently developed all of the code examples in the book as conference and meetup talks as well. This gave me the ability to work out some of the bugs in my ideas and get direct, real-time feedback from audience members. You can find a listing of these talks on the speaking section of my site. Since many of these talks were recorded, you can use the videos to mix up the modality of your studying and hear a bit more from me about the real-world context of some of these techniques.

I've also done a few interviews that touch on the materials in the book. In particular, see this written interview with Manning and this podcast interview with Defrag This.

I also sporadically blog on Medium and will probably be posting more supplementary thoughts in support of the book.

3 years ago
I personally learned Scala while applying it to the implementation of machine learning systems, so I think that this is a pretty doable goal.

The book presumes no prior Scala knowledge, although experience with Java or Python or similar would be helpful. I try never to introduce a Scala language concept without formally calling it out and explaining it.

Also, compared to the limit of what's possible in Scala, the FP techniques I use in the book stay solidly in the basic-to-intermediate range. There's basically no type-level programming, no type classes, no macros (IIRC), and only the bare minimum required use of implicits.

Of course, I absolutely hope that folks will be curious about how to use Scala and FP even more than I show, so this book may be a sort of gateway drug to deeper study in FP, as I talk about in this talk, Spark as the Gateway Drug to Typed Functional Programming. But that's really all up to you.
3 years ago
There's really very little mathematics. Any level of mathematical preparation is probably sufficient.

Chapter 5 has a section on implementing model learning algorithms that goes through an implementation of naive Bayes. That's about as complex as things get, and it's really just a bit of multiplication and division. It's also a totally optional deep dive for readers interested in understanding how model learning algorithms work. It could easily be skipped with little impact on your reading experience. All other model learning algorithms used are either library implementations or simple dummy/stub implementations.
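To give a sense of the scale of that arithmetic, the core of naive Bayes really is just products of probabilities. Here's a toy sketch of my own (not the book's implementation), with made-up numbers:

```scala
// Naive Bayes scoring: P(class | features) is proportional to
// the class prior times the product of per-feature likelihoods.
def score(prior: Double, likelihoods: Seq[Double]): Double =
  likelihoods.foldLeft(prior)(_ * _)

// Made-up priors and likelihoods for a two-class toy problem.
val spamScore = score(0.4, Seq(0.8, 0.7)) // 0.4 * 0.8 * 0.7
val hamScore  = score(0.6, Seq(0.1, 0.2)) // 0.6 * 0.1 * 0.2

assert(spamScore > hamScore) // the model would predict "spam"
```

A real implementation adds bookkeeping around counting and smoothing, but multiplication and comparison like this is genuinely the mathematical heart of it.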

Beyond that, Chapter 6 has a section on model metrics that involves some basic multiplication and division. I think this stuff is pretty important, but I stick solidly to some core fundamentals. I've taught this material to a wide range of software developers face-to-face, and no one has ever struggled to get the core concepts down, with a bit of coaching.
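As an illustration of the kind of ratios involved (my own example, not code from Chapter 6), metrics like precision and recall are just divisions over counts of outcomes:

```scala
// Precision: of everything predicted positive, how much was right?
def precision(tp: Int, fp: Int): Double = tp.toDouble / (tp + fp)

// Recall: of everything actually positive, how much did we find?
def recall(tp: Int, fn: Int): Double = tp.toDouble / (tp + fn)

println(precision(tp = 8, fp = 2)) // prints 0.8
println(recall(tp = 8, fn = 8))    // prints 0.5
```

The counts (true positives, false positives, false negatives) come from comparing model predictions against known labels; the metrics themselves are nothing more than these ratios.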
3 years ago
Howdy, folks. Jeff here.

I'm looking forward to talking with all of you about the book, Scala, and machine learning in general.
3 years ago