Hadoop's MapReduce model is very similar to Google's MapReduce model. Given that Hadoop's creators say they started designing Hadoop after reading the GFS and MapReduce papers published by Google, it is essentially the same model.
Of course, there are some differences. Not long ago I read Data-Intensive Text Processing with MapReduce by Jimmy Lin and Chris Dyer, and that book often points out small differences between Hadoop and Google's MapReduce implementation.
I'd like to ask the author of Hadoop in Action: does that book also cover such comparisons, or mention differences in how Google's MapReduce implementation was designed compared to Hadoop's?
Hadoop is based on a couple of research papers published by Google explaining Google's data processing model, so the *conceptual model* can be considered the same. Of course, the details are very different.
To start out, Google's MapReduce programs are generally written in C/C++, while Hadoop's are generally Java-based. Given that both models have evolved separately over the years to target different communities, it shouldn't be surprising that the details are very different. Having said that, Google engineers I've talked to claim that learning Hadoop is relatively easy for them.
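To make the shared conceptual model concrete, here is a minimal sketch of the classic word-count example in plain Java. This is not the actual Hadoop API (which involves `Mapper`, `Reducer`, and `Job` classes from `org.apache.hadoop.mapreduce`); it only simulates the three phases, map, shuffle/group, and reduce, that both Google's and Hadoop's implementations share. Class and method names are illustrative.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.Map;
import java.util.TreeMap;
import java.util.stream.Collectors;

// Conceptual MapReduce word count in plain Java — no Hadoop dependency,
// illustrative only. Real Hadoop code would subclass Mapper and Reducer.
public class WordCount {

    // "Map" phase: emit a (word, 1) pair for each word in an input line.
    static List<Map.Entry<String, Integer>> map(String line) {
        return Arrays.stream(line.toLowerCase().split("\\s+"))
                .filter(w -> !w.isEmpty())
                .map(w -> Map.entry(w, 1))
                .collect(Collectors.toList());
    }

    // "Shuffle" groups emitted pairs by key; "reduce" sums values per key.
    static Map<String, Integer> mapReduce(List<String> lines) {
        Map<String, List<Integer>> grouped = new TreeMap<>();
        for (String line : lines) {
            for (Map.Entry<String, Integer> kv : map(line)) {
                grouped.computeIfAbsent(kv.getKey(), k -> new ArrayList<>())
                       .add(kv.getValue());
            }
        }
        Map<String, Integer> result = new TreeMap<>();
        grouped.forEach((word, counts) ->
                result.put(word, counts.stream().mapToInt(Integer::intValue).sum()));
        return result;
    }

    public static void main(String[] args) {
        System.out.println(mapReduce(List.of("the quick brown fox", "the lazy dog")));
    }
}
```

In a real Hadoop job, the map and reduce functions run distributed across a cluster and the framework performs the shuffle over the network, but the programmer-visible contract, write a map function and a reduce function, is the same idea in both systems.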