
Wednesday, April 21, 2010

Obligatory Stack Wars Post

Two days ago, Hitachi announced a preconfigured stack that is seen as competing with VMware/Cisco/EMC's vBlock.  On one hand, this is pretty significant, since it is, as far as I can remember, only the second enterprise announcement from Hitachi since the current USP-V was announced in 2007 (itself a speeds-and-feeds announcement).  The first was, of course, the over-hyped announcement that you could mirror USP-Vs and fail over between them via MPIO.

On the other hand, being a pretty serious Block-Head, I have difficulty seeing much significance in this new announcement.  HDS bloggers are posting more on virtualization and general IT topics and less on storage.  Even this press release focuses more on Hyper-V and Hitachi Blades than on the storage offering.

Historically Did Storage, indeed.

However, mostly due to the recent GestaltIT Tech Field Day, VCE's vBlock is getting a lot more attention.  There are a lot of really good posts out there on the architecture and what it provides.

When it comes to vBlock, there appear to be three standard responses:
  1. I don't see the point.  Many excellent technical minds share this response.  After all, vBlock is an extremely inflexible product that isn't customizable to the nth degree.  It isn't meant to be, but from a technical point of view, I can see this argument having a lot of validity.
  2. This could cause me to lose my job.  I haven't seen this online, but typically this response is from people who see their entire value proposition as being able to develop and support heterogeneous configurations.  Honestly, if that is someone's only value proposition, they'll be in trouble eventually regardless of technology shifts.
  3. VCE made the wrong configuration decisions/is too expensive.  If I had developed it, I would have done things differently.  Basically a variant on response 1, this pushes on the details of the implementation and why different choices could be better/cheaper in certain situations.
The consistent argument made by VCE proponents (typically employed by one of the companies) is that this isn't a product for technical people... the target market is the C-level executive.  This is typically followed by analogies about cars and garages and why people don't build their own cars anymore.

First off, that is slightly insulting... all good technical people should be able to show the business benefit of what they support.  I'd say a more apt way of putting it would be "focus less on the technology, and more on the benefits."

What benefits?
  • It frees up IT staff time that would otherwise go into building this infrastructure.  While I'm sure there are people who really enjoy analyzing compatibility matrices and getting different vendors' technologies working together, there is very little business benefit in those hours (and, I'd argue, most high-level technical minds don't particularly enjoy this... anything that saves me from the tedious portions of my job is a good thing).  Additionally, at this scale (1000s of VMs), the setup time for the VMware clusters, storage provisioning, and networking is far from trivial.
  • A single vendor point of contact to work with when things don't go well.
  • The technology trade-offs are typically made to ensure the vBlock is balanced regardless of what is running on it.  Most of the arguments around the amount of memory per blade are this trade-off in effect.  vBlock strives to be a bottleneck-free architecture.
  • Any cost benefits are built on the massive reduction in IT hours spent installing and configuring virtualization environments at this scale.  The "too expensive" argument may go away in many cases after an honest assessment of what a 500 VM farm costs in time and materials (a rough sketch of that arithmetic follows this list).
  • This product is targeted towards large enterprises.  It definitely doesn't make sense at smaller scales (yet).
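To make the time-and-materials point concrete, here is a minimal back-of-the-envelope sketch in Python.  Every figure in it (the hourly rate and the per-task hours) is a hypothetical placeholder I made up for illustration, not a VCE number or a measured assessment; the point is the shape of the comparison, not the totals.

    # Back-of-the-envelope labor comparison for standing up a ~500-VM farm.
    # All hours and rates below are hypothetical placeholders, not vendor
    # figures or measured data.

    HOURLY_RATE = 120  # assumed fully loaded cost per engineer-hour (USD)

    # Assumed effort for a build-it-yourself stack: design against
    # compatibility matrices, rack and cable, storage provisioning/zoning,
    # networking, VMware cluster build, and integration testing.
    diy_hours = {
        "compatibility matrices / design": 120,
        "rack, cable, firmware updates": 160,
        "storage provisioning / zoning": 200,
        "network configuration": 120,
        "VMware cluster build / templates": 240,
        "integration testing / burn-in": 160,
    }

    # Assumed effort when the stack arrives pre-integrated and pre-validated.
    preintegrated_hours = {
        "site prep / power / uplinks": 60,
        "environment-specific configuration": 80,
        "acceptance testing": 40,
    }

    def labor_cost(hours_by_task):
        """Total labor cost for a dict of task -> hours."""
        return sum(hours_by_task.values()) * HOURLY_RATE

    diy_cost = labor_cost(diy_hours)
    stack_cost = labor_cost(preintegrated_hours)

    print(f"DIY build labor:            ${diy_cost:,}")
    print(f"Pre-integrated stack labor: ${stack_cost:,}")
    print(f"Hypothetical labor delta:   ${diy_cost - stack_cost:,}")

Swap in your own rates and task estimates; the interesting part is how quickly the integration hours dominate once you multiply them across clusters, arrays, and fabrics at this scale.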
Does vBlock have a place?  Definitely.  Is it perfect?  No.  I'd argue that it is likely VCE released the current version more to generate the 'stack' conversation with customers and convert a lot of those customers with version 2 than to necessarily sell a ton of v1 vBlocks.  When you're trying to change a fairly deeply rooted mindset, it is going to take some time.

What could v2Block look like?  If the rumors of a converged CLARiiON / Celerra offering running on a customized VMware kernel are true, there is quite a bit of potential for extending that into the vBlock and gaining efficiencies... especially depending on what happens with the CLARiiON RecoverPoint splitter after the convergence.    

The two questions I had after reading this Register article were:
  1. When will Enginuity be rolled into the converged platform?
  2. Would that allow tighter integration (non-switch-based) between RecoverPoint and VMAX?