Regarding where it shines, it's the classic situation: if you have large volumes of structured or semi-structured data, and analytics that need to touch a lot of that data, then it's possibly a good fit.
I suspect I'll make this point multiple times this week -- I view Hadoop as one component of the data processing systems I build, but I use it alongside traditional databases and data warehouses. If your use case requires you to pull specific items from a well-structured data set, then odds are you'll be much better off with a traditional RDBMS. Can you do it in Hadoop? Sure, but pick the best tool for the job. If your queries on the RDBMS turn into table scans because of how much data you need to process to generate your results, then I'd consider Hadoop.
I find the Java APIs in Hadoop very well designed and easy to pick up. The biggest learning curve is more conceptual: learning how to take a particular problem and express it as a series of MapReduce jobs. You can find yourself with a series of MR jobs where the code in each map and reduce method is literally only a few lines, yet put together in the MapReduce framework the processing chain can do extremely sophisticated things. This is where the real experience needs to develop.
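To make the "only a few lines" point concrete, here's a sketch along the lines of the standard word-count example from the Hadoop tutorials (class names are just illustrative). The map and reduce bodies are each a handful of lines; the framework supplies the shuffle, sort, and grouping in between, and chaining several jobs like this back to back is where the sophistication comes from.

```java
import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {

  public static class TokenizerMapper
      extends Mapper<Object, Text, Text, IntWritable> {
    private static final IntWritable ONE = new IntWritable(1);
    private final Text word = new Text();

    // The entire map step: split each input line into words
    // and emit a (word, 1) pair for each one.
    @Override
    public void map(Object key, Text value, Context context)
        throws IOException, InterruptedException {
      StringTokenizer itr = new StringTokenizer(value.toString());
      while (itr.hasMoreTokens()) {
        word.set(itr.nextToken());
        context.write(word, ONE);
      }
    }
  }

  public static class IntSumReducer
      extends Reducer<Text, IntWritable, Text, IntWritable> {
    private final IntWritable result = new IntWritable();

    // The entire reduce step: the framework has already grouped
    // the values by word, so just sum the counts.
    @Override
    public void reduce(Text key, Iterable<IntWritable> values,
        Context context) throws IOException, InterruptedException {
      int sum = 0;
      for (IntWritable val : values) {
        sum += val.get();
      }
      result.set(sum);
      context.write(key, result);
    }
  }

  public static void main(String[] args) throws Exception {
    // Wire the mapper and reducer into a job; input and output
    // paths come from the command line.
    Job job = Job.getInstance(new Configuration(), "word count");
    job.setJarByClass(WordCount.class);
    job.setMapperClass(TokenizerMapper.class);
    job.setReducerClass(IntSumReducer.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
    FileInputFormat.addInputPath(job, new Path(args[0]));
    FileOutputFormat.setOutputPath(job, new Path(args[1]));
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}
```

The map and reduce methods are trivial on their own; the conceptual work is in deciding what your keys and values should be at each stage, and that's the skill that only develops with practice.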
Garry