"What makes a cloud a cloud is three things: technology (dynamic pools of virtual resources), operations (low-touch and zero-touch service delivery) and consumption (convenient consumption models, including pay as you go)... What makes a cloud "private" is that IT exerts control over the resources and associated service delivery."Let's take a look at today's dynamic datacenter, especially in an organization where private cloud is being pursued.
- You have a very high virtualization rate. Because the private cloud lowers the friction of acquiring resources, you can assume that more and more systems will be virtualized over time.
- You have a variable cost model, allowing costs to change based on actual consumption and utilization (a minimal chargeback sketch follows this list).
- You have an automation engine to drive processes and systems through the private cloud.
- Regardless of technology, you're hopefully pursuing loosely coupled systems that don't have strict low-latency requirements and that provide rich web interfaces.
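To make the variable cost point concrete, here is a minimal chargeback sketch in Python. The rate card, metric names and usage figures are all invented for illustration; they don't come from any real billing system.

```python
# Hypothetical pay-as-you-go chargeback: rates and usage figures are
# illustrative only, not tied to any real metering or billing product.
RATES = {
    "vcpu_hour": 0.04,         # $ per vCPU-hour
    "gb_ram_hour": 0.01,       # $ per GB-hour of RAM
    "gb_storage_month": 0.10,  # $ per GB-month of storage
}

def monthly_charge(vcpu_hours, ram_gb_hours, storage_gb):
    """Charge a consumer only for what was actually used."""
    return (vcpu_hours * RATES["vcpu_hour"]
            + ram_gb_hours * RATES["gb_ram_hour"]
            + storage_gb * RATES["gb_storage_month"])

# Example: a small VM that ran for roughly half the month
print(monthly_charge(vcpu_hours=2 * 360, ram_gb_hours=4 * 360, storage_gb=50))
```

The point is simply that the charge is a pure function of measured consumption, which is what makes a pay-as-you-go model possible in the first place.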
On the technology side, you have most (if not all) of these in play:
- VMware - VMs are moving among hosts based on dynamic workload decisions - "where" something is running becomes less important.
- Intelligent Storage Optimization - placing the right data in the right place without sacrificing performance (a tiering sketch follows this list).
- Replication - ensuring production data is recoverable in a remote location.
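As a rough illustration of "the right data in the right place," here is a toy tiering policy driven purely by access frequency. The tier names and thresholds are assumptions, not any vendor's actual auto-tiering algorithm.

```python
# Toy storage-tiering policy: thresholds and tier names are assumptions,
# not how any particular array decides placement.
TIERS = [
    ("ssd",  1000),   # blocks touched >= 1000 times/day stay on flash
    ("fc",   100),    # warm data lands on fast spinning disk
    ("sata", 0),      # everything else goes to capacity-optimized disk
]

def place_block(accesses_per_day: int) -> str:
    """Return the tier a block should live on, given its access frequency."""
    for tier, threshold in TIERS:
        if accesses_per_day >= threshold:
            return tier
    return TIERS[-1][0]

print(place_block(5000))  # -> "ssd"
print(place_block(12))    # -> "sata"
```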
Virtualization allows IT organizations to break down silos and drive utilization up while controlling costs. Most large organizations maintain several datacenters, and resources are not easily shared between them. That's the next silo to be knocked down... and by leveraging the existing investment in virtualization and storage technologies, it could become possible in the near future. Two things get you there:
1. You have extremely high visibility into utilization, data traffic, response times and access frequency... basically, everything that drives the physical location of a VM. The main reason not to move a workload to a different datacenter typically has to do with latency between users and the application layer, or between the application layer and the data. By hooking into the hypervisor, you could identify likely candidates that can be moved without massively disrupting the user experience (a candidate-selection sketch follows this list).
- The "heavy lifting" of migrating large portions of production data is already taken care of. You have an asynchronous mirror of the data at the remote site, probably hooked up to an existing VMware Cluster. The remain "system state" information could be replicated with a brief outage at a predefined window and then promoted to production at the remote site (flipping the replication to maintain recoverability).
Given the end-to-end knowledge from #1 and the data proximity of #2, you can theoretically "warm" migrate a VM from one datacenter to another, keeping response times the same or better and increasing the flexibility of both environments.
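Here is a sketch of that "warm" migration sequence as a runbook. Every step is a logging stub standing in for a real storage or hypervisor operation; none of these calls correspond to an actual product API.

```python
# Sketch of the "warm" cross-datacenter migration described above. Each step
# just logs; in a real environment it would wrap whatever replication and
# hypervisor operations your particular stack exposes.

def step(msg: str) -> None:
    print(f"[migrate] {msg}")

def warm_migrate(vm: str, source_dc: str, target_dc: str) -> None:
    # Bulk data is already at the target thanks to the async mirror, so only
    # the recently changed "system state" needs a final sync.
    step(f"verify async mirror of {vm} is healthy at {target_dc}")

    # Brief, pre-announced outage window.
    step(f"quiesce {vm} at {source_dc}")
    step(f"replicate remaining system state {source_dc} -> {target_dc}")

    # Promote the replica and bring the VM up at the remote site.
    step(f"promote replica of {vm} to production at {target_dc}")
    step(f"power on {vm} at {target_dc}")

    # Flip replication so the old site becomes the recovery copy.
    step(f"reverse replication {target_dc} -> {source_dc} for {vm}")

warm_migrate("erp-app-03", source_dc="dc-east", target_dc="dc-west")
```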
So, in the end, it comes down to what percentage of applications are eligible for this type of workload distribution based on their network and performance requirements. By optimizing at that level, you can spread your workload more evenly across geographies. Distributed cache coherence comes into play for applications that don't behave well in a higher-latency location. Finally, once that technology is in place, disaster recovery becomes much simpler - instead of vMotioning between hosts, you vMotion to an alternate datacenter.
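And a quick, purely illustrative way to estimate that eligibility percentage, assuming you know each application's latency budget and the extra round trip an inter-datacenter hop would add (every figure below is made up):

```python
# Illustrative estimate of how much of the fleet tolerates running from the
# alternate site; the applications, budgets and penalties are assumptions.
apps = {            # application -> max tolerable response time (ms)
    "web-01": 100,
    "reporting": 250,
    "oltp-db": 10,
}
BASELINE_MS = 30           # assumed current in-datacenter response time
INTER_DC_PENALTY_MS = 20   # assumed extra latency from the remote site

eligible = [name for name, budget in apps.items()
            if BASELINE_MS + INTER_DC_PENALTY_MS <= budget]
print(f"{len(eligible) / len(apps):.0%} eligible: {eligible}")
```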
Sure, none of this is available right now... but looking forward, you can see how an entirely fluid, geographically dispersed IT infrastructure is possible.