Monday, April 28, 2008

Parallelism with Map/Reduce

We explore the Map/Reduce approach to turning a sequential algorithm into a parallel one.

Map/Reduce Overview

Since the "reduce" operation needs to accumulate results for the whole job, and there is communication overhead in sending and collecting data, the Map/Reduce model is more suitable for long-running, batch-oriented jobs.

In the Map/Reduce model, "parallelism" is achieved via a "split/sort/merge/join" process, described as follows.
  • A MapReduce job starts from a predefined set of input data (usually sitting in some directory of a distributed file system). A master daemon (which is a central co-ordinator) is started and gets the job configuration.
  • According to the job config, the master daemon will start multiple Mapper daemons as well as Reducer daemons on different machines. It then starts the input reader to read data from some DFS directory. The input reader will chunk the data accordingly and send each chunk to a "randomly" chosen Mapper. This is the "split" phase and begins the parallelism.
  • After getting its data chunks, the Mapper daemon will run a "user-supplied map function" and produce a collection of (key, value) pairs. Each item within this collection will be sorted according to the key and then sent to the corresponding Reducer daemon. This is the "sort" phase.
  • All items with the same key will come to the same Reducer daemon, which collects all the items of that key, invokes a "user-supplied reduce function", and produces a single entry (key, aggregatedValue) as a result. This is the "merge" phase.
  • The output of each Reducer daemon will be collected by the output writer, which is effectively the "join" phase and ends the parallelism (see the sketch below).
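
To make the split/sort/merge/join flow concrete, here is a minimal single-process sketch in Python. The function and parameter names (run_map_reduce, map_fn, reduce_fn, num_mappers) are illustrative assumptions, not part of any particular framework; a real Map/Reduce implementation would run the Mapper and Reducer daemons on separate machines and move the intermediate data over the network.

from collections import defaultdict

def run_map_reduce(input_records, map_fn, reduce_fn, num_mappers=4):
    # "Split" phase: the input reader chunks the data and hands each
    # chunk to a mapper (chosen round-robin here instead of randomly).
    chunks = [input_records[i::num_mappers] for i in range(num_mappers)]

    # Each mapper runs the user-supplied map function and emits
    # (key, value) pairs.
    intermediate = []
    for chunk in chunks:
        for record in chunk:
            intermediate.extend(map_fn(record))

    # "Sort" phase: sort by key and group, so that all items with the
    # same key end up at the same reducer.
    groups = defaultdict(list)
    for key, value in sorted(intermediate, key=lambda kv: kv[0]):
        groups[key].append(value)

    # "Merge" phase: each reducer runs the user-supplied reduce function
    # and produces one (key, aggregatedValue) entry per key.
    results = {}
    for key, values in groups.items():
        results[key] = reduce_fn(key, values)

    # "Join" phase: the output writer collects the reducers' results.
    return results

This sketch only shows the data flow; in a real deployment the phases overlap and the mappers and reducers run concurrently on many machines.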
Here is a simple word-counting example ...
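
Below is a minimal sketch of the user-supplied map and reduce functions for word counting, written against the hypothetical run_map_reduce driver above: the map function emits a (word, 1) pair for every word it sees, and the reduce function sums the counts for each word and emits the total once, after the loop over the values.

def word_count_map(line):
    # Map: emit a (word, 1) pair for every word in the input line.
    return [(word, 1) for word in line.split()]

def word_count_reduce(word, counts):
    # Reduce: sum all counts for this word and emit the total once,
    # after the loop over the values (not inside it).
    total = 0
    for count in counts:
        total += count
    return total

lines = ["the quick brown fox", "the lazy dog", "the fox"]
print(run_map_reduce(lines, word_count_map, word_count_reduce))
# {'brown': 1, 'dog': 1, 'fox': 2, 'lazy': 1, 'quick': 1, 'the': 3}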

3 comments:

Aslam Khan said...

Hi Ricky,

I thoroughly enjoyed reading your articles.

I am the zone leader for the Architects zone at architects.dzone.com. I would be keen to discuss reposting your blog on dzone as well as you writing specific articles for dzone.

Please email me on aslam.khan _at_ dzone.com

homeycat said...

Hi Ricky,

I have to say that your explanation of the MapReduce concept and its implementation is the most "understandable" that I've ever read. (I am not good at expressing myself, but I mean that your articles are really great! They helped me so much to understand MapReduce.) I just want to say "Thank You!" and hope that you can tell us more about parallel computing.

Abhinav said...

Hi Ricky,
Thanks for this nice post. It seems there is a mistake in the reduce function given in the diagram. The "emit(key, count)" step should come after the for loop, since we need to sum all the values corresponding to a given key in this example.