Load Balancer
In this model, a dispatcher determines which worker instance will handle each request based on some policy (round-robin, least connections, etc.). The application should ideally be "stateless" so that any worker instance can handle the request.
This pattern is deployed in almost every medium-to-large web site setup.
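As a concrete illustration, here is a minimal round-robin dispatcher sketch in Python; the worker callables are hypothetical stand-ins for real worker instances:

```python
import itertools

# Minimal round-robin dispatcher sketch. Because the workers are stateless,
# any of them can serve any request.
class RoundRobinDispatcher:
    def __init__(self, workers):
        self._workers = itertools.cycle(workers)

    def dispatch(self, request):
        worker = next(self._workers)   # pick the next worker in rotation
        return worker(request)

# Usage: three identical stateless workers behind one dispatcher.
dispatcher = RoundRobinDispatcher([lambda r: f"worker1 handled {r}",
                                   lambda r: f"worker2 handled {r}",
                                   lambda r: f"worker3 handled {r}"])
print(dispatcher.dispatch("GET /"))
```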
Scatter and Gather
In this model, the dispatcher multicasts the request to all workers in the pool. Each worker computes a local result and sends it back to the dispatcher, which consolidates the partial results into a single response and sends it back to the client.
This pattern is used by search engines like Yahoo! and Google to handle users' keyword search requests.
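A minimal sketch of the idea in Python, with two hypothetical workers each searching its own shard of an index:

```python
from concurrent.futures import ThreadPoolExecutor

# Scatter the query to every worker, then gather and consolidate the
# partial results into a single response.
def scatter_gather(query, workers):
    with ThreadPoolExecutor(max_workers=len(workers)) as pool:
        partials = list(pool.map(lambda w: w(query), workers))  # scatter
    return sorted(hit for partial in partials for hit in partial)  # gather

# Hypothetical shard workers returning local matches.
shard1 = lambda q: [f"{q}: hit A", f"{q}: hit C"]
shard2 = lambda q: [f"{q}: hit B"]
print(scatter_gather("cats", [shard1, shard2]))
```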
Result Cache
In this model, the dispatcher first looks up whether the same request has been made before; if a previous result is found, it is returned directly, saving the cost of the actual execution.
This pattern is commonly used in large enterprise applications. Memcached is a very commonly deployed cache server.
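A toy in-process sketch of the lookup-before-execute logic; a plain dict stands in for a real cache server such as Memcached, and `expensive_query` is a hypothetical backend call:

```python
# In-process result cache sketch: check the cache before doing real work.
cache = {}

def handle(request):
    if request in cache:               # seen before: skip execution
        return cache[request]
    result = expensive_query(request)  # cache miss: do the real work
    cache[request] = result            # remember the result for next time
    return result

def expensive_query(request):
    return f"result for {request}"

print(handle("q1"))   # computed
print(handle("q1"))   # served from cache
```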
Shared Space
This model is also known as the "Blackboard": all workers monitor information in the shared space and contribute partial knowledge back to the blackboard. The information is continuously enriched until a solution is reached.
This pattern is used in JavaSpaces and also in the commercial product GigaSpaces.
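A minimal blackboard sketch in Python; the three agents and the goal condition are hypothetical, with each thread contributing one piece of partial knowledge to a shared space:

```python
import threading

# Shared space ("blackboard") that workers enrich with partial knowledge.
blackboard = {}
lock = threading.Lock()

def contribute(key, value):
    with lock:
        blackboard[key] = value        # add partial knowledge

def solved():
    with lock:                         # goal: all three pieces are present
        return {"parsed", "translated", "verified"} <= blackboard.keys()

# Hypothetical agents, each supplying one piece of the solution.
agents = [threading.Thread(target=contribute, args=(k, f"{k} done"))
          for k in ("parsed", "translated", "verified")]
for t in agents: t.start()
for t in agents: t.join()
print(solved(), blackboard)
```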
Pipe and Filter
This model is also known as "Data Flow Programming": all workers are connected by pipes through which data flows.
This is a very common EAI (Enterprise Application Integration) pattern.
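In Python, generators make a convenient stand-in for the pipes; here is a minimal sketch of a two-filter pipeline (the filters themselves are hypothetical):

```python
# Pipe-and-filter sketch: each filter reads from its input "pipe" and
# writes to its output "pipe" as data flows through the chain.
def source():
    yield from ["10", "oops", "30"]

def parse(pipe):                 # filter 1: parse raw records
    for item in pipe:
        try:
            yield int(item)
        except ValueError:
            pass                 # drop malformed records

def scale(pipe, factor=2):       # filter 2: transform parsed records
    for n in pipe:
        yield n * factor

# Wire the filters together; generator chaining plays the role of the pipes.
for value in scale(parse(source())):
    print(value)                 # -> 20, 60
```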
Map Reduce
This model targets batch jobs where disk I/O is the major bottleneck. It uses a distributed file system so that disk I/O can be done in parallel.
This pattern is used in many of Google's internal applications, and it is implemented in the open source Hadoop parallel processing framework. I also find this pattern useful in many application design scenarios.
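The classic word-count example, as a toy in-process sketch; a real framework such as Hadoop would run the map and reduce phases in parallel across machines, pulling input from the distributed file system:

```python
from collections import defaultdict

def map_phase(document):                 # map: emit (word, 1) pairs
    for word in document.split():
        yield word, 1

def shuffle(pairs):                      # shuffle: group values by key
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(key, values):           # reduce: consolidate each group
    return key, sum(values)

docs = ["the quick fox", "the lazy dog"]
pairs = [kv for doc in docs for kv in map_phase(doc)]
counts = dict(reduce_phase(k, vs) for k, vs in shuffle(pairs).items())
print(counts)  # {'the': 2, 'quick': 1, 'fox': 1, 'lazy': 1, 'dog': 1}
```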
Bulk Synchronous Parallel
This model is based on lock-step execution across all workers, coordinated by a master. Each worker repeats the following steps until the exit condition is reached, i.e. when there are no more active workers (a sketch follows the list).
- Each worker reads data from its input queue
- Each worker performs local processing based on the data it read
- Each worker pushes its local result along its direct connections
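A minimal BSP sketch in Python using `threading.Barrier` for the lock-step; the worker count, superstep count, and seed data are all hypothetical, and for simplicity the loop runs a fixed number of supersteps rather than checking an exit condition:

```python
import threading

NUM_WORKERS, SUPERSTEPS = 3, 2
barrier = threading.Barrier(NUM_WORKERS)
inboxes = [[i] for i in range(NUM_WORKERS)]      # hypothetical seed data

def worker(i):
    for _ in range(SUPERSTEPS):
        data = list(inboxes[i])                  # 1. read from input queue
        result = [x + 1 for x in data]           # 2. local processing
        barrier.wait()                           # all reads done before writes
        inboxes[(i + 1) % NUM_WORKERS] = result  # 3. push along connection
        barrier.wait()                           # all writes done; next superstep

threads = [threading.Thread(target=worker, args=(i,))
           for i in range(NUM_WORKERS)]
for t in threads: t.start()
for t in threads: t.join()
print(inboxes)   # each value incremented once per superstep and rotated around
```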
Execution Orchestrator
This model is based on an intelligent scheduler/orchestrator that schedules ready-to-run tasks (based on a dependency graph) across a cluster of dumb workers.
This pattern is used in Microsoft's Dryad project.
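A toy single-process sketch of the scheduling loop; the dependency graph and the `run_task` worker call are hypothetical, and a real orchestrator would dispatch the ready tasks to cluster workers in parallel:

```python
# Dependency graph: task -> list of prerequisite tasks (hypothetical).
deps = {"extract": [], "clean": ["extract"],
        "train": ["clean"], "report": ["clean", "train"]}

def run_task(name):                      # stand-in for a dumb cluster worker
    print(f"running {name}")

done = set()
while len(done) < len(deps):
    # A task is ready to run once all of its prerequisites have finished.
    ready = [t for t, reqs in deps.items()
             if t not in done and all(r in done for r in reqs)]
    for task in ready:                   # these could run in parallel
        run_task(task)
        done.add(task)
```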
Although I have tried to cover the set of commonly used design patterns for building large-scale systems, I am sure I have missed some important ones. Please drop me a comment with your feedback.
Also, there is a whole set of scalability patterns around the data tier that I haven't covered here, including some very basic patterns underlying NoSQL. It is worth taking a deep look at some leading implementations.
13 comments:
super dope. nice graphs, sums it up without over-technicalizing.
I think it is important to make a distinction between blackboard and tuple spaces. Blackboard is a pattern for incrementally improving data using multiple agents. Javaspaces and other tuple spaces are mechanisms for implementing most of the patterns on this page.
Simple and straight forward; beautifully illustrated. Thanks. :)
Can you tell me what you use to create those nice diagrams? I'm trying to become an architect :-)
Thanks
So is MapReduce a special case of Scatter and Gather? Could you share any links to posts/papers describing Scatter and Gather?
Great summary, thanks!
Here is the difference between Map Reduce and Scatter & Gather ...
1) Scatter and Gather has a single point of consolidation; Map Reduce can have multiple reducers working in parallel to do the consolidation.
2) In Scatter/Gather, the point of consolidation is often the same point where the request is split. In Map Reduce, the two are separate.
3) Scatter and Gather usually works as a general request/reply pattern where data is pushed in with the request. In Map Reduce, data is typically pulled from the DFS.
Thanks for sharing this great information.
Thanks for collecting this information for the benefit of the general IT community.
I think a bit of grouping would help one understand the features of the individual patterns better.
For example: single-step vs. multi-step processing, mediated vs. non-mediated flow control, etc.
My take on it:
- Single Step Processors
  - Load Balancer
  - Load Balancer with Sticky Session
  - Scatter and Gather
- Multi Step Processors
  - Mediated
    - Execution Orchestrator
    - Bulk Synchronous Parallel
  - Non-mediated
    - Pipe and Filter
    - Shared Space
Interestingly, as far as I can see, the roles Dispatcher/Orchestrator/Master (which are arguably the same) and the queues in Pipe and Filter* have in common that you can have an arbitrary number of Dispatchers and therefore scale out horizontally, as long as either the requests are stateful, or the application logic is stateless and only one dispatcher handles each request.
(The ability to arbitrarily multiply elements of the system may be a sufficient condition of scalability.)
I don't think this is true of Shared Space - if you had multiple Shared Spaces you'd need a Scatter-and-Gather mechanism to ensure consistency in the final result.
*Come to think of it, the most basic possible Dispatcher consists of a request queue/response queue pair.
Hi, thanks for your excellent article. Are there open source implementations of these patterns?
If not, it would be very interesting to implement them.
What do you think about that?
Great post! This is a great example of how a good architect can explain complex technical matters to a wide audience. Keep it up.
I really like this general idea, but I think that what is missing here is a description of how each pattern handles failure. Much of the difference between a system which is theoretically scalable and one which is scalable in the real world is that the real-world scalable system can cope with the failure of individual components without knocking over the entire system.
James Hamilton once wrote: "Once the service has scaled beyond 10,000 servers and 50,000 disks, failures will occur multiple times a day. If a hardware failure requires any immediate administrative action, the service simply won’t scale cost-effectively and reliably. The entire service must be capable of surviving failure without human administrative interaction. Failure recovery must be a very simple path and that path must be tested frequently."