Wednesday, February 24, 2010

The Tiers are Drying Up

A few days ago, NetApp CEO Tom Georgens pronounced that storage tiering is dead. Simon Sharwood has a good post linking to the existing commentary... rather than re-link all the sites, I suggest you read his take (and the associated links). The general consensus is that tiering is not dead.

A sentiment that I agree with, for the most part.  Let's think about why most people tier, and what this pronouncement could possibly indicate.
  • Many organizations tier storage to reduce the TCO of their environment.  SSDs are expensive, SATA disks aren't, and not everything requires IOPS ahead of bulk capacity.  All things being equal, if SSDs were the cheapest storage option for capacity and IO, then this reason for tiering would not be relevant (best performance in most cases for the cheapest cost... who wouldn't like that?).
  • Many organizations tier storage to meet unique requirements such as WORM.  Of course, it'd be silly to enact WORM policies on 100% of all storage in an enterprise.  Since NetApp offers SnapVault, it is obvious he doesn't consider this tiering at all... they still offer SnapVault, right?
  • So, what it basically comes down to is the direction the industry is already heading: the adoption of SSDs + SATA drives for all storage, colloquially referred to as "Flash and Trash." Tom Georgens seemed to be advocating placing 100% of all data on SATA disks and using PAM (Performance Acceleration Module) cards as an extremely large and fast cache to maintain performance requirements. As discussed on Twitter, the main drawback of this is that if you lose PAM, your storage response time is impacted until the replacement PAM is primed with the data.  If you want more information, Mike Riley and Dimitris Krekoukias were the two NetApp resources on Twitter who were most engaged.
  • Additionally, the NetApp stance is that WAFL and RAID-DP provide a RAID-6 implementation that is faster than traditional vendors' RAID-10 implementations... so basically even the SATA drives should perform admirably.  The NetApp resources took this as an assumed fact and most of the other people involved didn't, which made some of the retorts nonsensical at times.  NetApp can show benchmarks that indicate this performance level is realistic (well, as realistic as a benchmark can get), and other vendors can show benchmarks indicating that WAFL performance degrades as it fragments.  As someone in the "not-vendor" space, I can't weigh in on how much of an advantage this is, nor whether it actually degrades as time goes on.  My gut feeling is that there isn't some magic algorithm that removes the parity impacts of RAID-6 after the volumes start filling up.
  • Probably the best information presented on Data ONTAP over the last few months was over at Marc Farley's blog, StorageRap.  The comments have a lot of good, understandable information on how WAFL works.
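The TCO argument in the first bullet comes down to simple arithmetic: only a small fraction of data needs SSD-class IOPS, so splitting capacity across tiers is far cheaper than going all-SSD. A minimal sketch, using purely hypothetical 2010-era prices and a made-up hot-data fraction (none of these figures come from any vendor):

```python
# Sketch of the tiering TCO argument. All prices and the hot-data
# fraction are illustrative assumptions, not vendor quotes.

def tier_cost(capacity_gb, price_per_gb):
    """Raw capacity cost for one tier, in dollars."""
    return capacity_gb * price_per_gb

SSD_PRICE_PER_GB = 15.0    # hypothetical $/GB for enterprise SSD
SATA_PRICE_PER_GB = 0.15   # hypothetical $/GB for bulk SATA

total_gb = 100_000         # 100 TB of data
hot_fraction = 0.05        # assume 5% of data drives most of the IOPS

# Option 1: everything on SSD.
all_ssd = tier_cost(total_gb, SSD_PRICE_PER_GB)

# Option 2: hot data on SSD, the rest on SATA.
tiered = (tier_cost(total_gb * hot_fraction, SSD_PRICE_PER_GB)
          + tier_cost(total_gb * (1 - hot_fraction), SATA_PRICE_PER_GB))

print(f"All-SSD: ${all_ssd:,.0f}")
print(f"Tiered:  ${tiered:,.0f}")
```

With these assumed numbers the tiered layout costs a small fraction of the all-SSD one, which is exactly why "this reason for tiering" only disappears if SSD becomes the cheapest option for both capacity and IO.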
Going forward, my opinion is that storage tiering is going to remain important, and grow to include migrating data to private/public cloud and even cheaper storage platforms, making intelligent decisions along the way, such as removing DR replication links post-migration to free up expensive licensing.  Basically, the right data on the right platform protected the right way... at all times.
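The "right data, right platform, right protection" idea above can be sketched as a placement policy. The tier names, thresholds, and protection labels here are all hypothetical, invented only to illustrate the decision structure (including dropping the DR replication link once data migrates out):

```python
# Hypothetical placement policy illustrating "right data on the right
# platform protected the right way". Names and thresholds are made up.

def place(iops_per_gb, days_since_access, worm_required):
    """Return a (tier, protection) pair for a dataset."""
    if worm_required:
        # Unique requirements (e.g. WORM) override performance placement.
        return ("worm-archive", "no-replication")
    if iops_per_gb > 1.0:
        return ("ssd", "sync-replication")        # hot data
    if days_since_access < 90:
        return ("sata", "async-replication")      # warm data
    # Cold data migrates to cheap/cloud storage; the DR replication
    # link is removed post-migration to free up licensing.
    return ("cloud-archive", "no-replication")

print(place(2.5, 1, False))    # hot database volume
print(place(0.1, 400, False))  # stale project share
```

A real policy engine would weigh many more inputs (RPO/RTO, compliance class, cost ceilings), but the shape is the same: every dataset gets a deliberate platform and protection decision rather than a single default tier.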


Unknown said...

Hi, could you post a few of your thoughts concerning this subject and IBM XIV? Here is another vendor claiming that a single tier is all you need. I'm not advocating a single tier, I'd just like your perspective.

Karl Dohm

Hector Servadac said...

Matt, you said you don't use iSeries (actually IBM i), but take a look at this:

Automatic tiering according to usage, performed by the operating system. If you add SSDs, it decides which data to move to and from these devices.