Although it provides the same map/reduce programming model to the application developer, the underlying implementation architecture of Cloud MR is drastically different from Hadoop's. For a description of Hadoop internals, see here.
Built on top of a Cloud OS (in this case Amazon AWS), Cloud MR enjoys the inherent scalability and resiliency of the underlying platform, which greatly simplifies its architecture.
- Cloud MR doesn't need a central coordinator component (like the NameNode and JobTracker in the Hadoop environment). It simply stores job progress status in a distributed metadata store (SimpleDB).
- Cloud MR doesn't need to worry about scalability of the communication path or how data can be moved efficiently between nodes; all of this is taken care of by the underlying Cloud OS.
- Cloud MR doesn't need to worry about disk I/O issues because all storage is effectively remote and taken care of by the Cloud OS.
On the other hand, Cloud MR has to deal with the following constraints of the Cloud OS:
- Network latency and throughput: 20 - 100 ms for SQS access; SimpleDB domain write throughput is 30 - 40 items/sec
- Eventual consistency: two simultaneous requests to dequeue from SQS can both get the same message, and SQS sometimes reports empty when there are still messages in the queue.
Cloud MR uses a "double check" technique to overcome the consistency issue. The writer writes status into multiple places, and the reader also reads from multiple places. If the reader gets inconsistent results from the different places, the eventually consistent state hasn't arrived yet, so it needs to retry later. When the states read from the different places agree with each other, eventual consistency has arrived and the state is valid.
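Here is a minimal Python sketch of the double-check idea (not the actual Cloud MR code); the `store` object with `put`/`get` methods and the `DOMAINS` list are hypothetical stand-ins for writing to and reading from multiple SimpleDB locations.

```python
import time

DOMAINS = ["status_a", "status_b", "status_c"]  # multiple places holding the status

def write_status(store, key, value):
    # Writer: record the same status in every place.
    for domain in DOMAINS:
        store.put(domain, key, value)

def read_status(store, key, retry_interval=1.0, max_retries=30):
    # Reader: poll until every place reports the same value.
    for _ in range(max_retries):
        values = [store.get(domain, key) for domain in DOMAINS]
        if all(v == values[0] for v in values):
            return values[0]  # all copies agree: eventual consistency has arrived
        time.sleep(retry_interval)  # inconsistent: state hasn't converged; retry later
    raise TimeoutError("status for %s never converged" % key)
```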
The following describes the technical details of Cloud MR.
Cloud MR Architecture
SimpleDB is used to store job status. The client submits jobs to SimpleDB, and the Map and Reduce workers update and extract job status from SimpleDB. The actual data of each job is stored in SQS (a message can also point to an object stored in S3).
The job progresses in the following way.
Job Client Processing Cycle
- Store data in many S3 file objects
- Create a Mapper task request for each file split (each map task request contains a reference to the S3 object and the byte range).
- Create an input queue in SQS and enqueue each Mapper task request to it.
- Create a master reducer queue, a result output queue, as well as multiple partition queues.
- Create one reducer task request for each partition queue. Each reducer task request contains a pointer to the partition queue.
- Enqueue the reducer task requests to the master reducer queue
- Create a job request that contains the mapper task count S as well as references to all the SQS queues created so far.
- Add the job request into SimpleDB
- Invoke AWS commands to start the EC2 instances for Mappers and Reducers, passing along queue and SimpleDB locations as "user data" to the EC2 instances.
- From this point onwards, poll SimpleDB for the job progress status
- When the job is complete, download the results from the output queue and S3 (a condensed sketch of this client cycle follows)
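The whole client cycle condenses to something like the sketch below. All the `cloud` helpers (`upload`, `create_queue`, `enqueue`, `put_item`, `query_item`, `start_instances`, `drain_queue`) are illustrative names for the corresponding S3/SQS/SimpleDB/EC2 operations, not a real API.

```python
import time

def submit_job(cloud, splits, num_partitions, job_id):
    # 1. Store data in S3 and create one mapper task request per file split.
    input_q = cloud.create_queue(job_id + "-input")
    for i, split in enumerate(splits):
        s3_ref = cloud.upload(job_id, split)            # S3 object + byte range
        cloud.enqueue(input_q, {"map_id": i, "s3_ref": s3_ref})

    # 2. Create the master reducer queue, output queue, and partition queues.
    master_q = cloud.create_queue(job_id + "-master")
    output_q = cloud.create_queue(job_id + "-output")
    partition_qs = [cloud.create_queue(job_id + "-part-%d" % j)
                    for j in range(num_partitions)]
    for q in partition_qs:                              # one reducer task per partition
        cloud.enqueue(master_q, {"partition_queue": q})

    # 3. Record the job request in SimpleDB, then start the EC2 workers,
    #    passing the queue and SimpleDB locations as "user data".
    cloud.put_item("jobs", job_id,
                   {"mapper_count": len(splits),
                    "queues": [input_q, master_q, output_q] + partition_qs})
    cloud.start_instances(user_data={"job_id": job_id})

    # 4. Poll SimpleDB until the job completes, then collect the results.
    while cloud.query_item("jobs", job_id).get("status") != "done":
        time.sleep(5)
    return cloud.drain_queue(output_q)
```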
Mapper Processing Cycle
- Take a mapper task request from the SQS input queue
- Fetch the file split and parse out each record
- Invoke the user-defined map() function; for each emitted intermediate key, compute hash(k1) % no_of_partitions to pick a partition. Enqueue the intermediate record to the corresponding partition queue.
- When done with the mapper task request, write a commit record containing the worker id, the map request id, and the number of records processed per partition (in other words, R[i][j] where i is the map request and j is the partition number).
- Remove the map task request from the SQS input queue
It is possible that a Mapper worker crashes before it finishes the mapper task, in which case another mapper will re-process the map task request (after the SQS timeout). Or, due to the eventual consistency model, it is possible to have two simultaneous mappers working on the same file split. In both cases, duplicates can end up in the partition queues.
To facilitate duplicate elimination, each intermediate record emitted by the mapper is tagged with [map request id, worker id, a unique number]. A sketch of the mapper worker loop is shown below.
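This sketch uses the same illustrative `cloud` helpers as above; `user_map` is the user-defined map() function, and the tag carries [map request id, worker id, unique number] for later duplicate elimination.

```python
import itertools

def run_mapper(cloud, worker_id, input_q, partition_qs, user_map):
    seq = itertools.count()                        # unique numbers for dedup tags
    while True:
        task = cloud.dequeue(input_q)              # a mapper task request
        if task is None:
            break
        counts = [0] * len(partition_qs)           # R[i][j] for this map request i
        for record in cloud.read_split(task["s3_ref"]):  # fetch the split, parse records
            for k1, v1 in user_map(record):
                j = hash(k1) % len(partition_qs)   # pick the partition (illustrative hash)
                tag = (task["map_id"], worker_id, next(seq))
                cloud.enqueue(partition_qs[j],
                              {"key": k1, "value": v1, "tag": tag})
                counts[j] += 1
        # Commit record: worker id, map request id, per-partition counts R[i][j].
        cloud.put_item("commits",
                       "map-%d-%s" % (task["map_id"], worker_id),
                       {"counts": counts})
        cloud.delete_message(input_q, task)        # remove the task from the input queue
```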
Reducer Processing Cycle
- Monitor SimpleDB and wait until commit records from all mappers have been seen.
- Dequeue a reducer task request from the master reducer queue
- Go to the corresponding partition queue, dequeue each intermediate record
- Invoke the user-defined reduce() function and write the reducer output to the output queue
- When done with the reducer task request, write a commit record in the same way as the Mapper worker
- Remove the reduce task request from the master reducer queue
Note that Reducer j will not stop dequeuing from the partition queue even when it appears empty (recall that SQS can falsely report an empty queue); it keeps reading messages until it has received the sum of R[i][j] over all i. A sketch of this termination rule is shown below.
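The termination rule might look like the following sketch; `get_commit_counts` (returning a mapper's per-partition counts from its commit record) is an assumed helper, not part of the original design.

```python
def expected_records(cloud, j, num_map_requests):
    # Sum R[i][j] over all map requests i, taken from the mappers' commit records.
    return sum(cloud.get_commit_counts(i)[j] for i in range(num_map_requests))

def drain_partition(cloud, part_q, j, num_map_requests):
    target = expected_records(cloud, j, num_map_requests)
    records, seen_tags = [], set()
    while len(records) < target:      # don't trust an "empty" response
        msg = cloud.dequeue(part_q)
        if msg is None or msg["tag"] in seen_tags:
            continue                  # false empty, or a duplicate delivery
        seen_tags.add(msg["tag"])
        records.append(msg)
    return records
```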
Due to eventual consistency, it is possible that multiple reducers dequeue the same reducer task request from the master reducer queue and then take messages from the same partition queue. Since they are competing for the same partition queue, at least one of them will find the queue empty before reaching the sum of R[i][j] over i. After a certain timeout period, such a reducer writes a "suspect conflict" record (containing its worker id) into SimpleDB. If it finds that another reducer has written such a record, it knows there is another reducer working on the same partition. The worker with the lowest id wins: a reducer keeps reading until it sees a conflict record with an id lower than its own, at which point it drops its existing processing and picks up another reducer task. All the records read by the loser come back to the queue after the timeout period and will be picked up by the winner.
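A sketch of the conflict-resolution check; the `conflicts` table, the `query_conflicts` helper, and the comparable worker ids are assumptions for illustration.

```python
def check_conflict(cloud, j, worker_id):
    # Called after the reducer has stalled on an apparently empty partition
    # queue without reaching the sum of R[i][j] over i.
    cloud.put_item("conflicts", "part-%d-%s" % (j, worker_id),
                   {"partition": j, "worker": worker_id})
    other_ids = cloud.query_conflicts(j)  # worker ids of all conflict records for j
    if any(w < worker_id for w in other_ids):
        return "lose"    # a lower id exists: abandon this partition, take another task
    return "continue"    # lowest id seen so far: keep reading the partition queue
```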
Network Latency and Throughput
One SimpleDB-specific implementation constraint is that read and write throughput are very asymmetric: reads respond quickly, while writes are slow. To mitigate this asymmetry, Cloud MR uses multiple domains in SimpleDB. When it writes to SimpleDB, it randomly picks one domain and writes to it; this way, the write workload is spread across multiple domains. When it reads from SimpleDB, it reads every domain and aggregates the results (since exactly one domain holds the item).
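A sketch of this scatter-write / gather-read pattern; the `sdb` object with `put`/`select` methods and the domain names are illustrative stand-ins for the actual SimpleDB calls.

```python
import random

DOMAINS = ["mr_meta_0", "mr_meta_1", "mr_meta_2", "mr_meta_3"]

def scatter_write(sdb, item_name, attributes):
    # Spread the slow writes: pick one domain at random and write there.
    sdb.put(random.choice(DOMAINS), item_name, attributes)

def gather_read(sdb, query):
    # Reads are fast: query every domain and merge the results, since any
    # given item lives in exactly one domain.
    results = []
    for domain in DOMAINS:
        results.extend(sdb.select(domain, query))
    return results
```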
To overcome the latency issue of SQS, Cloud MR on the Mapper side uses a buffering technique to batch up intermediate messages destined for the same partition. Each message buffer is 8 KB (the maximum size of an SQS message). When a buffer is full (or after some timeout period), a designated thread flushes it by writing a composite message (which contains all the buffered intermediate records) into SQS.
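A simplified sketch of the mapper-side buffer; JSON serialization, the size accounting, and the one-shot flush timer are stand-ins for the real encoding and flushing thread.

```python
import json
import threading

MAX_MSG_BYTES = 8 * 1024                # SQS message size limit (8 KB)

class PartitionBuffer:
    def __init__(self, cloud, part_q, flush_secs=5.0):
        self.cloud, self.part_q = cloud, part_q
        self.records, self.size = [], 2  # 2 bytes for the enclosing JSON brackets
        self.lock = threading.Lock()
        # Flush once after flush_secs; a real implementation would reschedule.
        timer = threading.Timer(flush_secs, self.flush)
        timer.daemon = True
        timer.start()

    def add(self, record):
        encoded = json.dumps(record)
        with self.lock:
            if self.size + len(encoded) + 1 > MAX_MSG_BYTES:
                self._flush_locked()     # buffer full: write the composite message
            self.records.append(record)
            self.size += len(encoded) + 1

    def flush(self):
        with self.lock:
            self._flush_locked()

    def _flush_locked(self):
        if self.records:
            # One composite SQS message carries all buffered intermediate records.
            self.cloud.enqueue(self.part_q, json.dumps(self.records))
            self.records, self.size = [], 2
```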
The reducer side works in a similar way: multiple read threads dequeue messages from the partition queue and put them into a read buffer, from which the Reducer reads the intermediate messages. Notice that it is possible for two threads to read the same message from the partition queue (recall the eventual consistency scenario described above). To eliminate such potential duplicates, the reducer examines the unique number tagged onto each message and discards any message it has seen before.
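A sketch of the reducer-side read threads and tag-based duplicate elimination, again with illustrative helper names:

```python
import queue
import threading

def start_read_threads(cloud, part_q, n_threads=4):
    # Several threads dequeue from the partition queue into a shared buffer.
    buffer = queue.Queue()
    def pump():
        while True:
            msg = cloud.dequeue(part_q)
            if msg is not None:
                buffer.put(msg)
    for _ in range(n_threads):
        threading.Thread(target=pump, daemon=True).start()
    return buffer

def read_unique(buffer, seen_tags):
    # Discard any message whose [map request id, worker id, unique number]
    # tag has been seen before: it is a duplicate delivery.
    while True:
        msg = buffer.get()
        if msg["tag"] not in seen_tags:
            seen_tags.add(msg["tag"])
            return msg
```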
Difference with Hadoop
Map/Reduce developers familiar with the Hadoop implementation will find that Cloud MR behaves in a similar way, but there are a number of differences that I want to highlight here.
- Reducer keys are not sorted: unlike Hadoop, which guarantees that keys arriving at the same partition (or reducer) are in sorted order, Cloud MR doesn't provide such a feature. Applications need to do their own sorting if they require a sort order (see the sketch below).
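For example, an application could impose the ordering itself just before reducing; a minimal sketch, assuming the intermediate records have already been grouped into a `groups` dict of key to value list:

```python
def reduce_sorted(groups, user_reduce):
    # Sort the keys explicitly, since Cloud MR delivers them unordered.
    return [user_reduce(key, groups[key]) for key in sorted(groups)]
```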
For a detailed technical description of Cloud MR, as well as how it compares with Hadoop, read the original paper by Huan Liu and Dan Orban.