Monday, January 26, 2009

Utility computing syndication -- evolutionary future

Owning and operating data centers is an extremely complex and expensive proposition. The result: data center complexes, often in multiple fixed locations, each with a life span of 15 years or so, and thus tied to locality considerations of power generation, labor pools, real estate, taxes and many others. This is hardly the grand vision of a dynamic compute-and-storage fabric whereby capacity can be added, subtracted or moved around as needed. The fact that so many organizations throughout the world have to roll out their own data centers ought to be the first clue that a huge amount of inefficiency exists in organizations owning and running their own data centers.

However, wherever significant inefficiencies exist, opportunities are created. Having each organization build its own data center is an unsustainable notion, even for smaller shops with only a handful of machines. Besides being tied to the locality issues already enumerated, an individual organization cannot come close to the economies of scale of a large virtualization provider, nor to the same degree of statistical multiplexing of usage: a pooled provider only has to build for the peak of the aggregate demand, not for the sum of every customer's individual peak. Which is why I see data centers moving toward outsourcing to large providers that house workloads and data for many, many companies at once.
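
To see why pooling wins, here's a minimal simulation sketch (all numbers are invented for illustration): 200 tenants with bursty, independent demand, comparing the capacity needed if each builds for its own peak versus one provider building for the peak of the sum.

    import random

    random.seed(1)
    N, T = 200, 1000   # 200 tenants, 1000 time samples

    # Each tenant's instantaneous demand: exponentially distributed with
    # a mean of 10 units, i.e. mostly modest but occasionally bursty.
    demand = [[random.expovariate(1 / 10.0) for _ in range(T)] for _ in range(N)]

    # If every tenant owns a data center, each must build for its own peak.
    sum_of_peaks = sum(max(d) for d in demand)

    # A shared provider only builds for the peak of the aggregate demand.
    peak_of_sum = max(sum(d[t] for d in demand) for t in range(T))

    print("capacity, one data center per tenant: %.0f" % sum_of_peaks)
    print("capacity, one pooled provider:       %.0f" % peak_of_sum)

On a typical run the pooled figure comes out several times smaller, simply because independent bursts rarely coincide.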

Amazon, with its Elastic Compute Cloud (EC2) and Simple Storage Service (S3), offers a first-order outsourced data center service, for example, though it is currently geared mostly toward server workloads. It doesn't support Windows XP and Vista VMs, so it's not yet an on-demand VDI provider. The same goes for the UK-based Flexiscale, and for GoGrid. But looking forward, why can't utility computing handle most or all of your data center needs? It has to. There's just too much inefficiency in the old model. Kicking and screaming, dragged into the next decade perhaps, but all the attendant security, compliance, networking and other issues have to be solved.
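
As a concrete example of the server-workload case, here's a minimal sketch of renting capacity from EC2 via the boto Python library; the AMI id and keypair name are placeholders, and credentials are assumed to be in the environment.

    import time
    import boto

    # Assumes AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY are set in the
    # environment; the AMI id and keypair name below are placeholders.
    conn = boto.connect_ec2()
    reservation = conn.run_instances('ami-12345678',
                                     instance_type='m1.small',
                                     key_name='my-keypair')
    instance = reservation.instances[0]

    while instance.state != 'running':   # poll until the VM is up
        time.sleep(5)
        instance.update()

    print("server ready at %s" % instance.public_dns_name)

Renting a server this way takes minutes; building a data center to house it takes years.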

Before the world catches up to utility computing, I'm proposing that we think about the next step in the evolution -- because it requires (re-)factoring so many facets of the utility fabric (virtualization, networking, storage, management, security, ...). Ultimately, we will need to allow utility providers to syndicate -- to borrow and lend capacity among one another -- for a myriad of reasons (see the sketch after this list):
  • Capacity management
  • Power management and rate optimizations
  • Catastrophe management
  • Follow the Sun operations
  • Eliminating single vendor lock-in
  • VDI optimizations (see below)
  • ...
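
To make syndication concrete, here's a hypothetical sketch of the placement logic a syndication broker might run; the Provider class, its fields and the cost-based tie-break are all invented for illustration.

    class Provider(object):
        """One utility provider in the syndication network (hypothetical model)."""
        def __init__(self, name, free_slots, cost_per_hour):
            self.name = name
            self.free_slots = free_slots
            self.cost_per_hour = cost_per_hour

    def place_workload(home, partners):
        """Run on the home provider if it has capacity; otherwise borrow
        the cheapest free slot from a syndication partner."""
        if home.free_slots > 0:
            return home
        available = [p for p in partners if p.free_slots > 0]
        if not available:
            raise RuntimeError("no capacity anywhere in the syndication network")
        return min(available, key=lambda p: p.cost_per_hour)

    # Example: the home provider is full, so the workload spills over.
    home = Provider('home-dc', free_slots=0, cost_per_hour=0.10)
    partners = [Provider('partner-a', 50, 0.12), Provider('partner-b', 8, 0.09)]
    print(place_workload(home, partners).name)   # -> 'partner-b'

The same spill-over logic covers several of the bullets above: capacity overflow, catastrophe failover, and escaping a single vendor.
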
VDI is worth explaining further. One of the big problems with streaming a desktop session from a VM running on a server is that, in a mobile and distributed-workforce world, a user can be literally anywhere on the planet. And VDI just doesn't work that well across longer latencies, especially if you want a high-quality desktop experience. But how can any organization expect to serve up a low-latency VDI session anywhere? Doing so means placing the running VM somewhere near the user. The answer is syndication. By borrowing capacity from other providers, the number of physical locations where a VM can run is bounded only by the size of the syndication network. Until then, I don't see VDI as a viable solution for true road-warriors.
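
As a toy illustration of the placement problem, here's a sketch that picks the syndicated site closest to a roaming user, using great-circle distance as a crude stand-in for network latency; the site list and coordinates are invented.

    from math import radians, sin, cos, asin, sqrt

    def distance_km(a, b):
        """Great-circle distance between two (lat, lon) points, in km."""
        lat1, lon1, lat2, lon2 = map(radians, (a[0], a[1], b[0], b[1]))
        h = sin((lat2 - lat1) / 2) ** 2 + \
            cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
        return 2 * 6371 * asin(sqrt(h))

    # Hypothetical syndicated sites where a desktop VM could be hosted.
    SITES = {'virginia': (38.9, -77.0),
             'dublin': (53.3, -6.3),
             'singapore': (1.35, 103.8)}

    def nearest_site(user_latlon):
        """Pick the syndicated site closest to the roaming user; a real
        broker would measure actual latency rather than distance."""
        return min(SITES, key=lambda name: distance_km(SITES[name], user_latlon))

    print(nearest_site((48.8, 2.3)))   # a user in Paris -> 'dublin'

The more providers in the syndication network, the shorter that worst-case hop becomes.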

To use an analogy with the airline business: if unexpected events take capacity out of one airline, it can re-book its passengers on other airlines' flights. Similar analogies exist with the power grid, the network fabric, silicon fabrication and many others. That's where we need to get to with utility computing.

And I mention this not only because there are so many additional opportunities for related products and services once one realizes syndication is the future, but also to urge companies of all types to push their products and road-maps to accommodate it.

Disclosure: no positions
