While "cloud computing" provides an obviously cost-attractive operating model for technology startups and SMEs, the question is often raised whether the same argument applies to large enterprises that already have an army of highly skilled system administrators running multiple geographically distributed 24x7 data centers. Why would they believe that Amazon can run a more effective data center operation than they can?
I believe such large enterprises can still benefit from leveraging a "cloud provider", but under very different circumstances from the startups.
"Cloud" as an overflow buffer
For most enterprise applications, workload fluctuates according to seasonal traffic patterns or user behavioral changes. To ensure satisfactory performance over the whole period, enterprises need to provision equipment for the peak load, even though the excess equipment will sit idle most of the time.
By leveraging a cloud provider, enterprises no longer need to budget for the peak load. They just need to provision equipment for the average load, so that this equipment is fully utilized most of the time. When a peak load arrives that exceeds existing capacity, the enterprise can programmatically start up new machines in the cloud and redirect the extra traffic to them.
Dropping the equipment budget from what the peak load requires to what the average load requires can be a big saving in the IT budget.
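The "overflow buffer" idea above can be sketched in a few lines. This is a minimal illustration, not any real provider's API: the router, the capacity figures, and the per-instance throughput are all made-up assumptions.

```python
IN_HOUSE_CAPACITY = 100   # requests/sec the average-load equipment can absorb (assumed)
CLOUD_VM_CAPACITY = 25    # requests/sec one cloud instance can absorb (assumed)

class OverflowRouter:
    """Serve traffic in house up to capacity; burst the excess to the cloud."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.cloud_instances = 0

    def route(self, current_load):
        """Split the load (requests/sec) between in-house and cloud equipment."""
        in_house = min(current_load, self.capacity)
        overflow = current_load - in_house
        # Scale the number of cloud instances to match the overflow
        # (ceiling division), and back to zero when the peak passes.
        self.cloud_instances = -(-overflow // CLOUD_VM_CAPACITY) if overflow else 0
        return in_house, overflow

router = OverflowRouter(IN_HOUSE_CAPACITY)
print(router.route(80))    # normal load: everything stays in house
print(router.route(160))   # peak load: 100 in house, 60 redirected to the cloud
```

The point is that the peak-load decision becomes a runtime policy rather than a capital-budget line item: the extra machines exist (and are paid for) only while the overflow lasts.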
"Cloud" as a new idea playground
In most large enterprises, the IT department struggles to support the rapidly changing needs of the business departments. To stay competitive, product and service groups have to come up with new ideas quickly and be able to test them with their customers just as quickly. However, making any change to existing IT infrastructure that has production applications running on it is very risky. A lot of impact analysis, change planning, equipment purchasing and approvals has to happen first, which significantly delays the testing of a new idea. In fact, unless the idea has matured enough to get significant management buy-in, it won't even pass the approval stage and simply dies before being tested.
By leveraging a cloud provider, a business department no longer needs to go to its IT department to set up a testing environment. It can just run its new "experimental" services in the cloud and gauge customer acceptance in a completely separated environment, without worrying about any impact on existing production applications.
After the new idea is tested and proven, the business department then provisions equipment in internal IT and migrates the service from the cloud back to the in-house data center. At this stage the approval is much easier, as the idea is already proven.
Therefore, for a large enterprise it will not be an "all in-house" or "all cloud" setup. In most cases, I believe it will be a hybrid environment where some parts of an application run in the data center while other parts run inside the cloud. These different components need to interact with each other transparently, without sacrificing security, performance or control. In fact, the ability to migrate application components seamlessly between the in-house data center and the public cloud lets the enterprise place each piece of functionality wherever it costs least to run, further optimizing its cost efficiency.
Unfortunately, cloud providers are not motivated to encourage their customers to run in a hybrid environment (they want all your business, not just part of it). Nor are they interested in providing an easy way for customers to migrate applications from the cloud back to internal IT. This is a gap that I think new cloud technology startups may fill.
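The "lowest-cost placement" argument can be made concrete with a toy example. The component names and monthly cost figures below are entirely illustrative assumptions; the sketch only shows that, once components can migrate freely, placement reduces to a per-component cost comparison.

```python
# Hypothetical monthly running cost (in dollars) of each application
# component in each location. The numbers are invented for illustration.
COST = {
    "web_frontend":  {"in_house": 400, "cloud": 250},
    "database":      {"in_house": 300, "cloud": 900},  # e.g. data-transfer heavy
    "batch_reports": {"in_house": 700, "cloud": 150},  # e.g. bursty, pay-per-use
}

def place(costs):
    """Assign every component to whichever location is cheaper for it."""
    return {component: min(locations, key=locations.get)
            for component, locations in costs.items()}

print(place(COST))
# {'web_frontend': 'cloud', 'database': 'in_house', 'batch_reports': 'cloud'}
```

In practice the decision also has to weigh security, latency between components, and data-transfer charges, which is exactly why seamless migration and transparent interaction matter.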
Virtualizing internal data center
How about running some virtualization software (e.g. VMWare, Xen, etc.) in house to make your data center look very similar to the cloud?
While "virtualization" (which encourages sharing) is useful in a generic sense regardless of whether the equipment resides in house or in the cloud, the cost dynamics are quite different, so a different scheduling policy is needed.
For example, because a cloud provider charges whenever a VM instance is running, it probably doesn't make sense to start a VM too early. Therefore I expect most cloud deployment scenarios for large enterprises will be about launching multiple machines, doing the heavy-duty processing, and then shutting all the machines down. In other words, machines in the cloud won't be kept idle for long.
However, for in-house equipment there is no such cost involved. The enterprise would like all available resources (e.g. employees' desktops after they have gone home) to be registered to the pool as soon as they become available. So the deployment scenario becomes: register resources whenever they are available, let them sit idle waiting for job allocation, do the allocated job, and then go back to the pool to wait for the next one.
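The two scheduling policies above can be contrasted in a small sketch. All class and method names here are hypothetical; the point is only the lifecycle difference, not any real scheduler's interface.

```python
class CloudPolicy:
    """Metered billing: launch instances for the work at hand, then shut down."""

    def run(self, jobs):
        instances = [f"vm-{i}" for i in range(len(jobs))]   # launch on demand
        results = [f"{job} done on {vm}" for job, vm in zip(jobs, instances)]
        instances.clear()                                   # shut down: stop paying
        return results

class InHousePolicy:
    """Sunk cost: machines stay registered in a pool, idling between jobs."""

    def __init__(self, machines):
        self.pool = list(machines)      # register resources as soon as available

    def run(self, jobs):
        results = []
        for job in jobs:
            vm = self.pool.pop(0)       # take an idle machine from the pool
            results.append(f"{job} done on {vm}")
            self.pool.append(vm)        # return it to the pool for the next job
        return results

print(CloudPolicy().run(["report"]))
print(InHousePolicy(["desk-1", "desk-2"]).run(["report", "backup", "index"]))
```

The cloud policy minimizes billed hours; the in-house policy maximizes utilization of equipment that is paid for whether or not it runs.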