Hello, I'm new to Hadoop MapReduce; so far I've only modified a basic WordCount program. I need my MapReduce program to first read an input file (in some kind of initialization phase?) to produce multiple <key, value> pairs that are sent to the mapper, have the mapper output <key, value> pairs, and then do some aggregation on them in the reducer. After that, I want to feed those results back into the mapper and repeat for a set number of iterations. I've read a bit about InputSplits, but I'm not really sure how to go about doing this. Any help is appreciated.
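To be concrete, here is a plain-Java sketch of the data flow I have in mind. This is not real Hadoop code (no `org.apache.hadoop` classes); the `map` and `reduce` bodies are made-up stand-ins, and `runIterations` plays the role of the driver loop I would write around `Job` submissions on a real cluster, where each iteration's output path becomes the next iteration's input path.

```java
import java.util.*;

public class IterativeSketch {
    // map: emit one or more (key, value) pairs per input pair.
    // Stand-in logic: emit the same pair twice (a placeholder for real work).
    static List<Map.Entry<String, Integer>> map(String key, int value) {
        List<Map.Entry<String, Integer>> out = new ArrayList<>();
        out.add(Map.entry(key, value));
        out.add(Map.entry(key, value));
        return out;
    }

    // reduce: aggregate all values that arrived for one key (here: sum them)
    static int reduce(String key, List<Integer> values) {
        int sum = 0;
        for (int v : values) sum += v;
        return sum;
    }

    // Driver loop: run map -> shuffle -> reduce, then feed the reducer's
    // output back in as the next iteration's mapper input.
    static Map<String, Integer> runIterations(Map<String, Integer> input, int iterations) {
        Map<String, Integer> data = new HashMap<>(input);
        for (int i = 0; i < iterations; i++) {
            // "shuffle": group all mapper output by key
            Map<String, List<Integer>> grouped = new HashMap<>();
            for (Map.Entry<String, Integer> pair : data.entrySet()) {
                for (Map.Entry<String, Integer> emitted : map(pair.getKey(), pair.getValue())) {
                    grouped.computeIfAbsent(emitted.getKey(), k -> new ArrayList<>())
                           .add(emitted.getValue());
                }
            }
            // reduce each group, and make the result the next round's input
            Map<String, Integer> next = new HashMap<>();
            for (Map.Entry<String, List<Integer>> g : grouped.entrySet()) {
                next.put(g.getKey(), reduce(g.getKey(), g.getValue()));
            }
            data = next;
        }
        return data;
    }

    public static void main(String[] args) {
        // "initialization phase": read the input into initial (key, value) pairs
        Map<String, Integer> seed = new HashMap<>();
        seed.put("a", 1);
        seed.put("b", 3);
        // with this placeholder map/reduce, each iteration doubles every value
        System.out.println(runIterations(seed, 3)); // a -> 8, b -> 24
    }
}
```

Is a driver loop like this (submitting one Job per iteration and swapping input/output paths) the standard way to do it, or is there a better mechanism?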