
Monday, October 18, 2010

Integrating Instapaper with TTYtter

If you're using TTYtter as a command-line Twitter client in a shell environment and you want /url to add the link in a tweet to your Instapaper account, the following entry in .ttytterrc will accomplish this:
urlopen=curl -s -d username="USERNAME" -d password="PW" -d url=%U https://www.instapaper.com/api/add
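If you want to sanity-check the same API call outside of TTYtter, the equivalent request is only a few lines of Python (a rough sketch of mine; USERNAME and PW are placeholders for your own credentials, and the example URL is arbitrary):

import urllib.parse
import urllib.request

def add_to_instapaper(url, username="USERNAME", password="PW"):
    # same parameters the curl command above sends
    data = urllib.parse.urlencode({
        "username": username,
        "password": password,
        "url": url,
    }).encode("ascii")
    req = urllib.request.Request("https://www.instapaper.com/api/add", data=data)
    with urllib.request.urlopen(req) as resp:
        return resp.status  # 201 should indicate the URL was saved

if __name__ == "__main__":
    print(add_to_instapaper("https://example.com/some-article"))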

Not This Leg Either

Late in Season 1 of House MD, House discloses the story behind his limp and his addiction to painkillers... he was diagnosed with a blood clot in his leg, and rather than follow the doctor's advice to amputate the leg, he demanded a different, more risky procedure that involved circumventing the clot with a vein.  House cited an "irrational attachment" to his leg for this illogical stance.

As he was preparing for the surgery, he wrote on his bad leg... "Not This Leg Either."

Two things in this episode stood out to me.  First off, I understand a common caution given to medical students (and doctors, for that matter) is not to self-diagnose or self-treat whatever affliction you think you have.

Very sage advice, since being personally involved prejudices your thinking.  Why is it, then, that so many IT professionals try to self-diagnose when it comes to their environments?  In many cases, those people either caused or contributed to the issues in the environment they are supporting!  As Einstein said, “We can't solve problems by using the same kind of thinking we used when we created them.”

Too often I've seen people attack recommendations for rational, low risk improvements simply because they had too much emotional investment in the current design to realize that the suggestions are for the best.  When you're too close to an issue, it becomes easy to fall into "ditch digger's syndrome."

The second thing that resonated with me in that episode was House's admission that his decision not to amputate was irrational and due to his attachment to his leg.  How many irrational beliefs are holding IT organizations back today?

  • Server Huggers (i.e. "You Can't Virtualize Exchange/SAP/etc") - Actually, quite often, you can.
  • Conversely, Virtualization Huggers (i.e. "You must virtualize everything!") - Ultimately, technical decisions have to come down to what provides the best return to the business on the investment.  Many times, that may be virtualization... but not every time.  Are you sure that your virtualized environments are saving you money?
  • We absolutely need five 9s of availability - Have you actually determined whether or not all systems can justify the infrastructure and cost that five 9s requires?
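For context on that last point, a quick bit of math (my own back-of-the-envelope illustration) on what each availability target leaves as a yearly downtime budget:

MINUTES_PER_YEAR = 365.25 * 24 * 60

for nines in (2, 3, 4, 5):
    availability = 1 - 10 ** -nines          # e.g. five 9s = 99.999%
    downtime = MINUTES_PER_YEAR * (1 - availability)
    print(f"{availability:.5%} availability allows ~{downtime:.1f} minutes of downtime per year")

Five 9s leaves roughly five minutes a year for all planned and unplanned outages combined.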
IT people shouldn't be primarily technology advocates... they should be business advocates who understand how technology can be used to increase their business's profitability and ability to deliver.  Otherwise, they are just trumpeting whatever buzzword vendors are currently hyping as the cure for all the issues in their environment.

Saturday, August 7, 2010

CIO Cloud Initiatives

With VMworld just around the corner, I've been reflecting on EMC World's focus of "the journey to the private cloud." Even today, some of the statements made during the opening keynote by Joe Tucci still don't sit well with me.

In the keynote, Joe showed a slide that had a list of the top ten CIO initiatives for 2010.  The third initiative was Cloud Computing, followed closely by Storage Virtualization.  People can argue endlessly about how accurate surveys such as this one are, but from a hype standpoint, Cloud Computing is definitely up there.

I question how many of those CIOs have an accurate understanding of the nature and skills of their IT organization and how that impacts the feasibility of internally implementing a private cloud... successfully, at least.  I've always pictured the "journey to the private cloud" as a model similar to Maslow's hierarchy of needs.

On the lowest level, you have the technical competence of staff.  Is your organization properly staffed with the right type of IT people to architect, implement, and administer a private cloud?  Or are there a substantial number of fiefdoms staffed by "next-next-finish" administrators?  Any gaps here would need to be resolved before a private cloud could be successfully implemented.  The resulting architecture is most likely going to be more complex and automated than the current environment, and this will amplify any skills gaps.

The next level is the maturity of workflows and processes.  Are the steps for implementing new IT systems well defined?  Is there a good understanding on where time is being spent for new system implementations?  More importantly, are the processes relevant and actually followed, with appropriate budget and technical approval in place to ensure that new systems are properly financed and designed?  How is chargeback handled?  Without good processes and workflows, all private cloud provides is the automation of broken processes and a further lack of fiscal responsibility.

The third level is the organization's current state of virtualization.  What is the current percentage of systems virtualized?  What barriers exist to increase this percentage?  Is the environment stable, performant, and well managed?  If your organization is bad at virtualization, it will be terrible at private cloud.

If the organization is successful on those three levels, then the private cloud is likely the next step to take to further increase the business value of IT.  In fact, organizations in that state will gradually adopt a private cloud model without any external influence as they continue to automate and consolidate the routine architectures and implementations.  

However, many organizations aren't there yet... at least, not enough to make Cloud Computing the third most popular CIO initiative.  I'd argue that many times when CIOs say that Cloud Computing is important, what they're really saying is that their current IT infrastructure is painful to work with, expensive, and slow to change.  The hope is that once this new buzzword architecture is implemented, their IT will suck less.

In a way, this is similar to the "server consolidation" CIO initiatives during the infancy of virtualization.  Lots of organizations had initiatives to reduce the server footprint.  The resultant architecture ended up being Windows VMs running on VMware GSX Server running on Windows 2000/2003 (since, after all, the current footprint was primarily Windows so those departments were assigned this initiative).  While meeting the "requirement" of server consolidation, these implementations resulted in little to negative business benefit.

Let's make sure today's private cloud implementations don't end up the same way.

Friday, August 6, 2010

Required Reading: Storage 101

Over the past few months, I've collected links that I felt were good information for people who are new to storage networking or interested in storage networking as a field.  I consider these resources to be some of the best writing available on storage, both for fundamentals and for architecture.
Any good resources I missed? Link to them in the comments!

XIV Posts Updated

One of the main concerns with XIV early on was the shroud of secrecy that surrounded its architecture and the lack of IBM response to publicly posted questions.  Since Tony Pearson recently remedied that with a series of posts and comments on my blog, I have updated all XIV posts with the following:
UPDATE [8/6/2010]:  If you're interested in more current XIV information, I recommend reading Tony Pearson's recent posts here and here.  He also provided additional information in the comments to one of my posts here.
This was not by anyone's request.  Rather, it was the right thing to do.  My blog is not intended to serve as FUD and since IBM posted a response, I felt ethically compelled to link to it on each post.

Wednesday, April 21, 2010

Obligatory Stack Wars Post

Two days ago, Hitachi announced a preconfigured stack that is perceived to compete with VMware/Cisco/EMC's vBlock.  On one hand, this is pretty significant since it is only the second enterprise announcement I can remember since the current USP-V was announced in 2007 (itself a speeds-and-feeds announcement).  The first was, of course, the over-hyped announcement that you could mirror USP-Vs and fail over between them via MPIO.

On the other hand, being a pretty serious Block-Head, I have difficulty seeing much significance in this new announcement.  HDS bloggers are posting more on virtualization and general IT topics and less on storage.  Even this press release focuses more on Hyper-V and Hitachi Blades, and not so much on the storage offering.

Historically Did Storage, indeed.

However, mostly due to the recent GestaltIT Tech Field Day, VCE's vBlock is getting a lot more attention.  There are a lot of really good posts on the architecture and what it provides:

When it comes to vBlock, there appear to be three standard responses:
  1. I don't see the point.  Many excellent technical minds share this response.  After all, vBlock is an extremely inflexible product that isn't customizable to the nth degree.  It isn't meant to be, but from a technical point of view, I can see this argument having a lot of validity.
  2. This could cause me to lose my job.  I haven't seen this online, but typically this response is from people who see their entire value proposition as being able to develop and support heterogeneous configurations.  Honestly, if that is someone's only value proposition, they'll be in trouble eventually regardless of technology shifts.
  3. VCE made the wrong configuration decisions/is too expensive.  If I had developed it, I would have done the following differently.  Basically a variant on response 1, this pushes on the details of the implementation and why different choices could be better/cheaper in certain situations.
The consistent argument being made by VCE proponents (typically employed by one of the companies) is that this isn't a product for technical people... the target market is the C-level executive.  This typically follows with analogies about cars and garages and why people don't build their own cars anymore.

First off, that is slightly insulting... all good technical people should be able to show the business benefit of what they support.  I'd say a more apt way of putting it would be "focus less on the technology, and more on the benefits."

What benefits?
  • It frees IT staff time from building this infrastructure.  While I'm sure there are people who really enjoy analyzing compatibility matrices and getting different vendor technologies working together, there is very little business benefit to these hours (and, I'd argue, most high-level technical minds don't particularly enjoy this... anything that saves me from having to do the tedious portions of my job is a good thing).  Additionally, at this scale (1000s of VMs), the setup time for the VMware clusters, storage provisioning, and networking is not trivial.
  • One vendor point of contact to work with when things don't go well.
  • The technology trade-offs are typically made to ensure the vBlock is balanced regardless of what is running on it.  Most of the arguments around the amount of memory per blade are this trade-off in effect.  vBlock strives to be a bottleneck-free architecture.
  • Any cost benefits are built on the massive reduction of IT hours installing and configuring virtualization environments on this scale.  The argument of "too expensive" may go away in many cases after an honest assessment of what a 500 VM farm costs in time and materials.
  • This product is targeted towards large enterprises.  It definitely doesn't make sense at smaller scales (yet).
Does vBlock have a place?  Definitely.  Is it perfect?  No.  I'd argue that it is likely VCE released the current version more to generate the 'stack' conversation with customers and convert a lot of those customers with version 2 than to necessarily sell a ton of v1 vBlocks.  When you're trying to change a fairly deeply rooted mindset, it is going to take some time.

What could v2Block look like?  If the rumors of a converged CLARiiON / Celerra offering running on a customized VMware kernel are true, there is quite a bit of potential for extending that into the vBlock and gaining efficiencies... especially depending on what happens with the CLARiiON RecoverPoint splitter after the convergence.    

The two questions I had after reading this Register article were:
  1. When will Enginuity be rolled into the converged platform?
  2. Would that allow tighter integration (non-switch based) between RecoverPoint and VMAX?

Wednesday, March 24, 2010

XIV Final Thoughts- Drive Failures a Red Herring?

UPDATE [8/6/2010]:  If you're interested in more current XIV information, I recommend reading Tony Pearson's recent posts here and here.  He also provided additional information in the comments to one of my posts here.
--

Over the past few weeks, between the Wave and the blog posts, I've been thinking about XIV quite a bit. It has taken IBM quite a while to attempt to explain the impact and risk of double drive failures on XIV.

IBM definitely has an explanation, one that could have been told quite a while ago.  In fact, I'd assume that this is the same explanation they've been giving customers who pushed the point: that the risk is less than it seems due to quick rebuilds and the way parity is distributed between interface and data nodes.  I realize that UREs are a very large concern, but to be honest, I bet less than 5% of customers even think about storage at that level.  Perhaps the double drive failure issue is just a red herring that draws attention away from other issues.

One thing that continues to stick out in my mind is the ratio of interface nodes to data nodes.  On the Google Wave, one of the IBM VARs made the following statement:
Remember there is more capacity in the data modules than in the interface modules.  (9 data, 6 interface)  Why they couldn't make this easy and have an equal number of both module types, I'll never know!  :) 
The interface nodes are only 40% of the array.  Even IBM VARs can't explain why this is a 40:60 ratio rather than 50:50.  It increases the probability of double drive faults causing data loss at high capacity and it is a pretty specific design decision.

I wonder if it is related to the Gig-E interconnect and driving out "acceptable" performance from non-interface nodes.  Jesse over at 50 Micron shares similar thoughts.  Thinking this through (and this is all simply a hypothesis)... perhaps the latency and other limitations of the Gig-E interconnect are somewhat offset by having additional spindles (IOPS+throughput) on the "remote data nodes."  I'd like to load an XIV frame to 50% utilization, run a 100% read workload at it, and see if the interface nodes are hit much harder than the data nodes (in effect, performing like a RAID 0, not a RAID 1).  If that were true, for optimal performance you'd never want to load a frame past the point where new volumes would be allocated solely from data nodes.
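As a back-of-the-envelope version of that hypothesis (my own made-up workload numbers, purely illustrative): with 72 interface drives and 108 data drives, and every partition mirrored between the two pools, the per-drive read load looks very different depending on whether reads are pinned to the interface copy or spread across both copies.

INTERFACE_DRIVES = 6 * 12    # 6 interface modules x 12 drives
DATA_DRIVES = 9 * 12         # 9 data modules x 12 drives
READ_IOPS = 100_000          # arbitrary aggregate read workload

# "RAID 0-like": every read is served by the interface-node copy
pinned = READ_IOPS / INTERFACE_DRIVES

# "RAID 1-like": reads are split evenly across both copies of each partition
split_interface = (READ_IOPS / 2) / INTERFACE_DRIVES
split_data = (READ_IOPS / 2) / DATA_DRIVES

print(f"pinned to interface copy: {pinned:.0f} IOPS per interface drive, 0 per data drive")
print(f"split across both copies: {split_interface:.0f} per interface drive, {split_data:.0f} per data drive")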

I am not claiming this is true (no way for me to test it), but if XIV changes the interconnect to a different type (Infiniband, for example), I will find it interesting if "suddenly" there is a 50:50 ratio of interface to data nodes.

Monday, March 22, 2010

XIV Recap

UPDATE [8/6/2010]:  If you're interested in more current XIV information, I recommend reading Tony Pearson's recent posts here and here.  He also provided additional information in the comments to one of my posts here.
--

A few weeks ago, I created a Google Wave to discuss the architecture surrounding XIV and the related FUD (some of it fact-based) that this architecture attracted.  I intended to post a recap after the wave had died down.

This is not that recap.  The recap was about 80% complete, but more reputable resources have posted much of the same information.  For anyone interested in the actual Wave information, contact me and I'll send a PDF (provided there is some mechanism to decently print the Wave).  I also participated in a podcast Nigel hosted last week, which is available in his podcast archives.

New Zealand IBMer the Storage Buddhist wrote this post discussing the disk layout and points of failure in IBM's XIV array... which generated this response by NetApp's Alex McDonald.  Both posts, especially the comments, are interesting and show both sides of the argument around disk reliability for XIV.

This post is meant to bridge a few gaps on both sides, and requires a little disclaimer.  Most of the technical information below came from the Google Wave, primarily from IBM-badged employees and VARs.  I have been unable to independently guarantee accuracy - even the IBM Redbook on XIV has diagrams of data layout that contradict these explanations, but with disclaimers that basically say the diagrams are for illustrative purposes and don't actually show how it really works.  So, caveat emptor - make sure you go over the architecture's tradeoffs with your sales team.

Hosts are connected to the XIV through interface nodes.  Interface nodes are 6 of the 15 servers in an XIV system and have FC and iSCSI (Ethernet) interfaces providing host connectivity.  Prior to an unspecified capacity threshold, each incoming write is written to an interface node (most likely the one it came in on) and mirrored to a data node (one of the 9 other servers in an XIV system).

At this point, you can have drive failures in multiple interface nodes without data loss.  In fact, one person claimed that you could lose all of the interface nodes without losing any data (of course, this would halt the array).  The "data-loss" risk in this case is losing one drive in an interface module (40% of the disks) followed by one drive in a data module (60% of the disks) prior to a rebuild being complete (approximated at, worst case, 30-40 minutes).  Or, as it was put in the wave:
"If I lose a drive from any of a pool of 72 drives, and then I lose a second disk from a separate pool of 108 drives before the rebuild completes for the first drive, I'm going to have a pretty huge problem." 
Past a certain unknown threshold, incoming writes start getting mirrored between two data nodes rather than an interface node and a data node.  At that point, double disk failures between different data nodes can also cause a pretty huge problem.
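To make the failure mode concrete, here's a toy model of the layout described above (my own sketch, not IBM's code, assuming 12 drives per module and writes below the capacity threshold):

import random

DRIVES_PER_MODULE = 12
interface_drives = [(m, d) for m in range(6) for d in range(DRIVES_PER_MODULE)]    # 72 drives
data_drives = [(m, d) for m in range(6, 15) for d in range(DRIVES_PER_MODULE)]     # 108 drives

# global distribution table: partition id -> (interface copy, data copy)
distribution_table = {}
for pid in range(10_000):            # 10,000 x 1 MB partitions, roughly 10 GB written
    distribution_table[pid] = (random.choice(interface_drives), random.choice(data_drives))

# a double failure only loses the partitions whose two copies sit exactly on the
# two failed drives - one drive out of the 72, plus one drive out of the 108
failed = {interface_drives[0], data_drives[0]}
lost = [pid for pid, copies in distribution_table.items() if set(copies) == failed]
print(f"{len(lost)} of {len(distribution_table)} partitions had both copies on the failed pair")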

From a 'hot spare' perspective, the XIV has spare capacity to cover 15 drive failures.  When you hear XIV resources discuss "sequential failures," they typically mean drive failures that occur after the previous one has rebuilt, but prior to the replacement of the failed drive.  This is an important statistic from the perspective of double drive failures that occur because the failed drive was never detected (have you verified YOUR phone home lately?).

A couple of final thoughts.  First off, the effect of an uncorrectable error during a rebuild was never fully explained.  I have heard in passing that "the lab" can tell you what the affected volume is and that it shouldn't cause the same impact as two failed drives.  Secondly, Hector Servadac mentions the following on the Storage Buddhist's post:
2 disk failures in specific nodes each one, during a 30 min windows, is likely as 2 controller failure
Unless I'm not understanding the impact of a 2-controller failure, there is no data loss with that type of 'unlikely' failure... with the double drive failure, there is significant data loss.  But as a measure of how likely XIV/IBM feels this outage scenario is, it serves as a decent yardstick.

I tried to make this as unbiased as possible.  I am positive I will be brutally corrected in the comments :-).

Tuesday, March 16, 2010

Breaking Datacenter Boundaries

Chuck Hollis (EMC) has an interesting post up regarding the future of workload optimization and fluid architectures.  First off, he has one of the clearest definitions of cloud architecture and private clouds I've seen recently:
"What makes a cloud a cloud is three things: technology (dynamic pools of virtual resources), operations (low-touch and zero-touch service delivery) and consumption (convenient consumption models, including pay as you go)... What makes a cloud "private" is that IT exerts control over the resources and associated service delivery."
Let's take a look at today's dynamic datacenter, especially in an organization where private cloud is being pursued.

  1. You have a very high virtualization rate.  Due to less friction for resource acquisition, you can assume that more and more systems will become virtualized on the private cloud as time goes on.
  2. You have a variable cost model, allowing for changing costs based off of actual consumption and performance utilization.
  3. You have an automation engine, to drive processes/systems through the private cloud.
  4. Regardless of technology, you're hopefully pursuing loosely coupled systems that do not have low latency requirements and provide rich web interfaces.
From a technology standpoint, you have at least most of these in play:
  1. VMware - VMs are moving among hosts based on dynamic workload decisions - "where" something is running becomes less important.
  2. Intelligent Storage Optimization - placing the right data in the right place without sacrificing performance.
  3. Replication - ensuring production data is recoverable in a remote location.
Virtualization allows IT organizations to break down silos and drive utilization up while controlling costs.  Most large organizations maintain several data centers, and resources are not easily shared between them.  That's the next silo to be knocked down... by leveraging the investment in virtualization and storage technologies, it could be possible in the near future.
  1. You have extremely high visibility into utilization, data traffic, response times, frequency... basically, what drives the physical location of a VM.  The main reason not to move a workload to a different data center typically has to do with latency between users and the application layer, or between the application layer and the data.  By hooking into the hypervisor, you could determine likely candidates that can be moved without massively disrupting the user experience.
  2. The "heavy lifting" of migrating large portions of production data is already taken care of.  You have an asynchronous mirror of the data at the remote site, probably hooked up to an existing VMware Cluster.  The remain "system state" information could be replicated with a brief outage at a predefined window and then promoted to production at the remote site (flipping the replication to maintain recoverability).
Given the end-to-end knowledge from #1, and the data proximity of #2, you can theoretically "warm" migrate a VM from one datacenter to another, keeping response times the same or better, and increase the flexibility of the environments.

So, in the end, it comes down to what percentage of applications are eligible for this type of workload distribution based on network and performance requirements.  By optimizing at that level, you can more evenly spread out your workload requirements geographically.  The notion of distributed cache coherence comes into play for applications that don't behave well in a higher latency location.  Finally, once you have that technology in place, disaster recovery becomes much simpler - instead of vMotioning between hosts, you vMotion to an alternate datacenter.

Sure, none of this is available right now... but looking forward, you can see how an entirely fluid, geographically dispersed IT infrastructure is possible.

Thursday, March 11, 2010

Odds and Ends - Tiering, and Performance Planning

A few articles I wanted to briefly highlight:
  • Storagebod posted a brief article on automated storage tiering.  To briefly summarize, imperfect automated storage tiering is better than nothing... it is an easy way to get value out of SSDs in an existing environment and it provides a mechanism to move less-used data off of FC drives and onto SATA drives.  One thing is certain... the importance of manual data layouts is decreasing.  Between array architectures that don't 'allow' it (XIV being the most notorious example), don't 'need' it (NetApp FAS), and traditional architectures getting performance-driven automated storage tiering, using Excel to mismanage storage layouts could finally be over.  Dimitris makes the point that due diligence still needs to be applied to allocations that require high performance a few times a month to ensure the volumes don't get migrated to the wrong tier (among other comments).  There are excellent comments on that post from EMC and NetApp discussing the two approaches.
  • Dimitris also has a good post on vendor competition and under-sizing proposals to get the sale.  It is worth reading just for the 'basics' explanation of performance-sizing small arrays - it also has some good information on Compellent's architecture.  My comments regarding this vendor comparison are attached to that post.  As always, prior to storage acquisitions, make sure you understand how the vendor determined their bid's sizing and get guarantees on performance/capacity if you are at all concerned about meeting your requirements.
  • Chuck Hollis (EMC) and Marc Farley (3PAR) have excellent posts up on storage caching.

Monday, March 1, 2010

FAST & PAM Contrasted

** Updates Appended Below **

Over the past few days, I've been thinking about storage tiering... both in general, and specifically FAST and PAM II.  Each takes a very different approach to providing better storage performance without highly specific tuning.  This is an outsider's view based off of publicly available information (so, in cases where I'm wrong, both vendors have shown that they aren't shy in correcting misconceptions).  First, some general definitions:

FASTv1:  Released in December, it is the first version of EMC's Fully Automated Storage Tiering.  It works at the LUN level, and requires identical LUN sizes across tiers.  It is not compatible with Thin Provisioned/Virtual Provisioned LUNs.

FASTv2:  Scheduled to be released in the second half of 2010, it is the next version of FAST that works at a sub-LUN level.  It requires Thin Provisioning/Virtual Provisioning to manage the allocations since it utilizes that functionality to provide the granularity of migration.

PAM II:  NetApp's Flash solution, the Performance Acceleration Module.  It acts as an additional layer of cache memory and does not have specific layout requirements.

Architecture Differences
FAST runs as a task on the processors of the DMX/VMAX.  At user-specified windows, it will determine volumes/sub-volumes that would benefit from relocation and perform a migration to a different tier.  This requires some IO capacity to migrate the data, so off-hours/weekends are ideal window candidates.  It does a semi-permanent relocation so all reads/writes are serviced by the new location post-migration (semi-permanent since FAST can relocate an allocation back to the prior tier if the performance data indicates it is a good swap).  Since RAID protection is maintained throughout the migration, the loss of components does not substantially affect response time.
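As a rough illustration of that style of window-based relocation (my own toy code, not EMC's algorithm): rank extents by recent IO during the swap window, move the hottest up a tier and the coldest down.

TIERS = ["SATA", "FC", "SSD"]

def plan_swaps(extent_stats, promote_count, demote_count):
    """extent_stats maps extent id -> (current tier, IOPS observed since the last
    window).  Returns (extent, from_tier, to_tier) moves to run during the window."""
    ranked = sorted(extent_stats.items(), key=lambda kv: kv[1][1], reverse=True)
    moves = []
    for extent, (tier, _) in ranked[:promote_count]:          # hottest extents move up
        if TIERS.index(tier) < len(TIERS) - 1:
            moves.append((extent, tier, TIERS[TIERS.index(tier) + 1]))
    for extent, (tier, _) in ranked[-demote_count:]:          # coldest extents move down
        if TIERS.index(tier) > 0:
            moves.append((extent, tier, TIERS[TIERS.index(tier) - 1]))
    return moves

stats = {"ext1": ("FC", 900), "ext2": ("SATA", 750), "ext3": ("FC", 5), "ext4": ("SSD", 2)}
print(plan_swaps(stats, promote_count=2, demote_count=2))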
PAM II is treated as an extremely large read cache.  Basically, as a given read-block is expired in memory, it trickles down to PAM until it is finally flushed and resides solely on disk.  This gives PAM II a few nice features.  First of all, there is no performance hit during the 'charging' of the PAM - since it is fed by expired 'tier-1' cache, there is no additional performance impact after the un-cached block is read.  Secondly, it does not cache any writes.  This is a giant assumption on my part, but I assume that due to the 'virtualization' WAFL provides, PAM does not need to track changed blocks on the disk.  Since everything is pointer based (think of NetApp snaps), when the track is changed on disk, future reads hit the new disk location then get migrated through the cache levels like 'new' reads since the location has changed (the old location/data gets expired fairly quickly).  The downside to this approach is that the loss of PAM requires all reads to be serviced by disk+tier 1 memory until it is replaced and recharged.
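And a similarly simplified sketch of the "expired blocks trickle down" behavior (again my own illustration, not NetApp's implementation): a small LRU cache standing in for controller DRAM, whose evictions feed a much larger LRU standing in for PAM.

from collections import OrderedDict

class TwoLevelReadCache:
    def __init__(self, dram_blocks, pam_blocks):
        self.dram = OrderedDict()       # block id -> data, kept in LRU order
        self.pam = OrderedDict()
        self.dram_blocks = dram_blocks
        self.pam_blocks = pam_blocks

    def read(self, block_id, read_from_disk):
        if block_id in self.dram:                   # tier-1 DRAM hit
            self.dram.move_to_end(block_id)
            return self.dram[block_id], "dram"
        if block_id in self.pam:                    # PAM hit
            data = self.pam.pop(block_id)
            self._insert(block_id, data)
            return data, "pam"
        data = read_from_disk(block_id)             # miss: go to disk
        self._insert(block_id, data)
        return data, "disk"

    def _insert(self, block_id, data):
        self.dram[block_id] = data
        if len(self.dram) > self.dram_blocks:
            old_id, old_data = self.dram.popitem(last=False)
            self.pam[old_id] = old_data             # expired block trickles down to PAM
            if len(self.pam) > self.pam_blocks:
                self.pam.popitem(last=False)        # finally flushed; disk-only again

    def invalidate(self, block_id):
        # a write lands in a new location on disk, so the old cached copy just expires
        self.dram.pop(block_id, None)
        self.pam.pop(block_id, None)

cache = TwoLevelReadCache(dram_blocks=4, pam_blocks=16)
print(cache.read(42, lambda b: f"block-{b}")[1])    # "disk" the first time
print(cache.read(42, lambda b: f"block-{b}")[1])    # "dram" the second time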

One thing that the NetApp resources on Twitter kept repeating was the benefit of PAM as an extension of cache.  I assume the main benefit of taking this approach to Flash is that it is accessed via memory routines (fewer layers/translations to execute through) rather than disk routines.  Whether or not this is a significant performance benefit, I really can't say.

From the initial implementation, PAM will provide almost immediate benefit as data expires from cache.  FAST will require a few iterations of the swap window before things have optimized.  Taking a longer view, FAST will work best with consistent workloads... after a few weeks, the migrations should hit an equilibrium and response times should be stable and fast.  Component failures should not adversely affect response time.  PAM, as an extension of cache, will continuously optimize whatever blocks are getting hit hardest at any given moment.  While this is more flexible day-to-day than data migrations, consistent performance could be an issue.  Additionally, the IO hit of losing PAM would degrade response times, but the impact of this is somewhat reduced by the fact that ramping up PAM is much faster than the data migrations that FAST requires.

Both solutions make various trade offs between performance, stability, and consistency.  Understanding these trade offs will benefit the customer as they choose which tools to leverage in their environment.  Following are a few considerations...

Considerations
  1. Many customers have performance testing environments.  Since both of these approaches optimize as tests run, what relationship can be drawn between the 3rd-5th week of integration/performance testing and the production implementation?  Theoretically, if the data is identical between performance testing and production, NetApp dedup could leverage performance testing optimizations during the production implementation.
  2. Can customers run both FASTv1 and FASTv2 simultaneously since they have mutually exclusive volume requirements?  Are both separately licensed?  There are implementations where LUN level optimization may be preferred over sub-LUN.
  3. NetApp can simulate the benefits of PAM II in an environment.  Can similar benefits be simulated for FAST prior to implementation?
  4. I assume that FAST will promote as much into SSD as possible to increase response time.  How can customers determine when to grow that tier of storage?
  5. If a customer is using PAM II to meet a performance requirement, what can they do to reduce the impact of a PAM II failure?
  6. For both FASTv2 and PAM II, how can a customer migrate to a new array while keeping the current performance intact?  With FASTv1, it is a simple LUN migration since it is determinable what tier a LUN is on.  With FASTv2 and PAM II, it gets tricky (please note, I'm not talking about migrating the data, which is a standard procedure, I'm talking about making sure you hit performance requirements post-migration).
** Updates - 03/02/2010 AM **
To be clear, this is an "apples to oranges" comparison.  Each solution takes a completely different approach to implementing flash into an array, and the two solutions behave very differently.

Additionally, since I was focusing on Flash in particular, I neglected to compare cache capacity directly.  DMX/VMAX has a much higher cache capacity than the NetApp arrays.  Per Storagezilla on Twitter:  "Symmetrix already has acres of globally accessible DRAM for read/write and doesn't need anything like PAM."

Finally, cost does play into comparing the two approaches, but I don't have access to any sort of real-world pricing.

    Wednesday, February 24, 2010

    The Tiers are Drying Up

    A few days ago, NetApp CEO Tom Georgens pronounced that storage tiering is dead. Simon Sharwood has a good post linking to existing commentary... rather than re-link all the sites, I suggest you read his take on it (and associated links). The general consensus is that tiering is not dead.

    A sentiment that I agree with, for the most part.  Let's think about why most people tier, and what this pronouncement could possibly indicate.
    • Many organizations tier storage to reduce the TCO of their environment.  SSDs are expensive, SATA disks aren't, and not everything requires IOPS ahead of bulk capacity.  All things being equal, if SSDs were the cheapest storage option for capacity and IO, then this reason for tiering would not be relevant (best performance in most cases for the cheapest cost... who wouldn't like that?).
    • Many organizations tier storage to meet unique requirements such as WORM.  Of course, it'd be silly to enact WORM policies on 100% of all storage in an enterprise.  Since NetApp offers SnapVault, it is obvious he doesn't consider this tiering at all... they still offer SnapVault, right?
    • So, what it basically comes down to is the standard direction that the industry is going towards: the adoption of SSDs + SATA drives for all storage. This is colloquially referred to as "Flash and Trash." Tom Georgens seemed to be advocating placing 100% of all data on SATA disks and using PAM SSD cards as an extremely large and fast cache to maintain performance requirements. As discussed on Twitter, the main drawback of this is that if you lose PAM, your storage response time is impacted until the replacement PAM is primed with the data.  If you want more information, Mike Riley and Dimitris Krekoukias were the two NetApp resources on Twitter who were most engaged.
    • Additionally, the NetApp stance is that WAFL and RAID-DP provide a RAID-6 implementation that is faster than traditional vendors' RAID-10 implementations... so basically even the SATA drives should perform admirably.  The NetApp resources took this as an assumed fact and most of the other people involved didn't, which made some of the retorts nonsensical at times.  NetApp can show benchmarks that indicate this performance level is realistic (well, as realistic as a benchmark can get), and other vendors can show benchmarks indicating that WAFL performance degrades as it fragments.  As someone in the "not-vendor" space, I can't weigh in on how much of an advantage this is, nor whether or not it actually degrades as time goes on.  My gut feeling is that there isn't some magic algorithm that removes the parity impacts of RAID-6 after the volumes start filling up.
    • Probably the best information presented on Data ONTAP over the last few months was over at Marc Farley's blog, StorageRap.  The comments have a lot of good, understandable information on how WAFL works.
    Going forward, my opinion is that storage tiering is going to remain important, and grow to include migrating data to private/public cloud and even cheaper storage platforms - while doing so, making intelligent decisions such as removing DR replication links post-migration to free up expensive licensing.  Basically, the right data on the right platform protected the right way... at all times.

    Thursday, February 18, 2010

    Cost per $metric - Part 2

    Previously, I discussed storage costs and that, while cost per usable GB is perfectly fine in capacity driven tiers, it has less use in IOPS constrained tiers.  There were a few aspects of storage costing that were not covered in that post.

    First off, most arrays come with additional management software that is often licensed per TB threshold; you only incur additional license costs at certain points of capacity (20TB, then again at 40TB for example).  You would need to work with the vendor to ensure that these upgrades are rated into the flat cost per GB for it to be a true "total cost."
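    As a small illustration of folding those license steps into a single number (hypothetical prices, not any vendor's actual figures):

def blended_cost_per_gb(usable_tb, disk_cost_per_gb, license_steps):
    """license_steps: list of (capacity threshold in TB, license cost) pairs;
    each step's cost is incurred once usable capacity passes its threshold."""
    usable_gb = usable_tb * 1024
    disk_cost = usable_gb * disk_cost_per_gb
    license_cost = sum(cost for threshold_tb, cost in license_steps if usable_tb > threshold_tb)
    return (disk_cost + license_cost) / usable_gb

# hypothetical array: $5/GB for disk, management licenses re-bought at 20 TB and 40 TB
steps = [(0, 25_000), (20, 25_000), (40, 25_000)]
for tb in (15, 25, 45):
    print(f"{tb} TB usable -> ${blended_cost_per_gb(tb, 5.00, steps):.2f} per usable GB")

    The per-GB number jumps right after each threshold and then settles back down as capacity grows, which is why those steps need to be baked into the quoted flat rate up front.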

    Similarly, Storagewonk (Xiotech) had a post discussing the sometimes prohibitive cost of the next TB after a customer has reached the capacity of the current array.  It is a very valid point.  Even with a good cost per GB pricing model, it is doubtful that it covers the cost of the next array simply due to the substantial initial implementation costs and lack of guarantee of future growth.  Xiotech's spin on this is that since they're such a modular platform, the ramp-up cost for additional footprints is substantially lower.  Since I have no idea what an ISE costs, I'm not going to comment on how much of an expense advantage this is (I assume that it is pretty substantial).  But it did get me thinking... what other aspects can impact TCO and storage costs?  For some of these, I'll use Xiotech's ISE as an example since it really is a unique solution that demonstrates the necessity of thinking through the impacts of decisions.
    • Xiotech's ISEs are all attached to the FC fabric.  This could potentially increase the number of fabric ports that are used servicing array requests and should be accounted for - make sure to consider the density of the director blade that you use for storage ports.
    • If hosts need storage from 2 distinct ISEs, I assume that they'll need to be zoned to each ISE, which increases administration time.
    • If a host is spanned across multiple ISEs, how does that affect replication?  Is there a method to guarantee consistency across the ISEs?
    • Many up-and-coming solutions leverage host-level software to accomplish what used to be handled at the array (compression, replication, etc).  How does that affect VMware consolidation ratios?  Does it affect IO latency?  Make sure that you understand the cost of placing yet-another-host-agent on SAN attached hosts.  David Merrill (HDS) goes into this a little more on his blog.
    • Similarly, are there any per-host costs (such as management or MPIO software) that affect a solution's TCO?
    • What does migration look like if you have a lot of smaller footprint arrays to replace in 3-5 years?
    Any good IT architect will look at the total impact of implementing new technology into the environment.  In a lot of cases (and I'd pick on Exchange on DAS here), short term cost savings can be quickly eroded by long-term sustainability issues... especially in shared environments.

    Wednesday, February 3, 2010

    Guarantees and Lowest Common Denominator

    In the last few days, 3PAR, Pillar, and HDS joined NetApp as vendors offering guarantees around storage efficiencies.  Chuck Hollis (EMC) posted why he feels that EMC (not including VMware, natch) won't offer blanket guarantees like this in the near future.  The comments showed that a lot of people were passionate about the topic, especially vendors.  It also showed that people who post on Chuck's blog apparently like to talk like press releases.

    Honestly, I find this entire topic unnecessary and a little boring.  I don't think that guarantees necessarily mean that the vendor is selling snake oil, nor do I think that not having a guarantee shows the vendor is hiding something.  I'm still not sure how having an optional guarantee available for customers could ever be seen as being a "negative."

    In a previous post, I discussed various ways to evaluate the cost of storage... cost per GB and cost per IOP.  Certain vendors (NetApp, 3PAR, etc.) rely on software functionality such as primary storage deduplication and thin provisioning for competitive advantages.  These features allow them to propose fewer disks/less capacity to meet a customer's IOP or GB requirements.  The guarantees show that the vendor will stand behind these numbers.

    If a customer is allowing a vendor to propose fewer spindles due to "secret sauce software," then I'd expect those terms to be written into the contract regardless of whether or not the vendor offers a guarantee.  Other than marketing, I don't see a ton of value that the guarantees provide that a decent purchasing contract wouldn't.  Yes, my opinions have shifted a little bit since the original NetApp guarantee... it is still a great marketing instrument, but outside of that, not a ton of actual value.

    Various other notes from the comments...
    "Since the SSD on V-max thru gateway CIFS debacle^W benchmark, it's not even apparent that a workaday NAS solution from EMC can crawl north of 45% storage efficiencies"
    You shouldn't claim that SPC benchmarks have any validity and then bash EMC's SPECsfs entry.  Not many of the SPC entries have any more real-world relevance than EMC's entry here.
    "Customers don’t want to have to bring in a team of neurologists to build a storage and data protection solution. NetApp offers simplicity and a great efficiency story."
     Last time I checked, NetApp's guarantee required neurologists^W professional services.
     "If a vendor is getting into my environment by selling some executive a useless empty guarantee we've started on the wrong foot from square one."
    Hate to say it, the problem here really isn't the vendor with the guarantee... it's upper management not listening to their people.
    "When I'm buying a car (infrequently, thank goodness) I am interested in the warranties and guarantees; it's a seller's mark of confidence in his product."
    Which is why everyone buys Hyundai right now.  Or, in the storage realm, Xiotech.

    NetApp has a great solution, as does 3PAR, HDS, and EMC.  Conversations like this really don't help anyone involved, least of all the customers.  I'd much rather see debates around various approaches to solving real-world problems than arguments like this, which seem to be about "who has the biggest contract."

    Sunday, January 31, 2010

    Odds and Ends - 01/31/10

    • The Hot Aisle has an article showing the mathematical inevitability of storage arrays moving to Flash and SATA (AKA Flash and Trash).  While SSD adoption was slow initially, almost every vendor is offering it in some fashion.  I agree that to reap the full benefits, it will eventually have to stop looking like a standard "spindle."
    • Storagezilla had a nice post on Oracle's declaration of war on NetApp.  It is the second time Oracle has declared war on an established vendor in recent memory, the first being their release of rebranded Red Hat.  It doesn't look like it affected Red Hat in the long term, and I doubt it'll affect NetApp much.  During storage purchases, you're relying on the vendor's ability to deliver as much as on what they're delivering, and it'll be some time before Oracle has proven itself in the storage realm.
    • EMC obliterates the competition in SPECsfs_cifs and posts extremely competitive numbers in SPECsfs_nfs.  The cifs benchmark originally looked like the result of some bored engineer in an EMC lab trying to see how much he could destroy the existing rankings - the benchmark was run on all SSDs (well, 4 FC disks for Celerra information).  I wonder if this will cause some of the other vendors to post updated cifs numbers.  Storagezilla claims they won't because of how bad their implementations are.  It could be due to the few vendors that can offer that amount of SSD storage.  I have to ask, does CIFS really make sense for this type of workload?
    • Storagebod posted on the cloud angle of Apple's iPad announcement.  I thought something very similar when I saw the announcement, except for a few things.  First of all, the bulk of storage on most consumers' computers is media.  iTunes already has most of that content available, so pushing that storage into the iTunes cloud is more a function of scaling IO/access than of 'having sufficient storage'.  In fact, if Apple could talk Big Content into allowing them to detect non-iTunes media and offer, free of charge, the equivalent iTunes media, this would be even easier.  Pirates won't buy songs they already 'have', so there isn't a lot of money left on the table AND it reduces the availability of completely wide-open music/movie files.
    • Cleversafe posted a good primer on silent errors.  This is the main reason why details matter when it comes to RAID implementation, and why you need more than 1 piece of parity for large drives.
    • If you have the chance to try it, New Belgium's Spring Seasonal, the Mighty Arrow, is quite tasty.  A nice pale ale that is light on hops and extremely drinkable.

    Thursday, January 28, 2010

    vBlock and Private Cloud

    I'll be honest... when EMC announced the vBlock architecture alongside the VCE initiative, I didn't quite get its importance.  In my mind, there was very little benefit to these preconfigured stacks, especially at the price points that I heard rumored.

    After a few weeks, I think I've got a handle on it and why this is potentially a big deal.

    When technologies first come out, implementations are fairly complex and require quite a bit of trial and error alongside a fairly good breadth of skillsets.  VMware was no different... it required people who understood virtualization technology, networking, storage, Windows/guest OSes, and security.  As time passed, the implementations became easier due to the toolsets becoming better and the availability of knowledge increasing.

    After that, the difficulty shifted.  Beyond the politics of getting signoff for virtualizing as much of your environment as possible, the next challenge was taking the architecture and scaling it big.  As the technology progressed, this became easier (it is still not what I'd call "simple") and things start shifting towards "how do we backup/recover these large environments, and how do we leverage the technologies in play for Disaster Recovery/Business Continuity?"

    But what strikes me is, as the challenges become greater, the importance of a good fundamental implementation remains.  Compatibility matrices still need to be kept up to date, documented, and tested.  Research needs to be done on new server hardware models and processor models, alongside updating any documentation/procedures that change as new VM farms are built on the new hardware.

    What is really the kicker, though, is that typically the people who originally brought in VMware are still at that level, making sure the implementations are solid rather than spending their time on the more difficult "next big thing."


    So how does vBlock fit into this?  Simple.  If you are an organization where there is a large virtualized environment and you aren't allowed headcount increases, vBlock offers an opportunity... namely to take some of your best technical employees and allow them to be repositioned to where they can provide the most value.  Let's face it, an architect with 4+ years of experience is wasted on validating firmware levels.  Similarly, in these large environments where vBlock makes sense, churning VMware farms to stay supported isn't a great use of highly skilled resources' time.

    If you notice, the vBlock architecture does not cover the current cutting edge portions of leveraging the virtual infrastructure as much as possible to benefit DR/BC.

    How does this play into Private Clouds?  Simple.  There are a lot of private cloud definitions floating around, but for the purpose of this, I'm going to drastically simplify it.

    A private cloud is "self service IT."

    A lot of people get edgy when you start discussing private clouds.  The foremost argument typically runs along these lines:  A private cloud, implemented properly, greatly reduces the time to deploy a server/system, increases the accuracy of the build process, and dramatically reduces the "friction" of implementation.  By "friction," what I'm really getting at are those things that take a 1 day server build and turn it into a 2 week process.  By reducing the friction and difficulty of implementing new systems, the total cost will go up because there will be a lower barrier of entry (easier/quicker build process = more systems being built = more money goes to VMware, Microsoft, and your storage vendor).

    Not quite.  A good private cloud still requires strong processes.  Systems have to be sized, priced, signed off on, and someone with a budget has to agree to fund it.  None of that really changes (granted, depending on many factors, sizing exercises may be reduced in many cases).  All that changes is, once all of the appropriate approvals are attained, systems can be deployed in days instead of weeks.  Money still should not be spent without a good business and cost justification - but in either model, if someone gets the appropriate signoffs and funding, the environment is going to get built.

    The second argument is fairly simple:  In order to implement a good private cloud, it takes automation, standardization, and virtualization.  If things get to the point where it is extremely simple to deploy systems, then people could lose jobs.  Let's be honest with ourselves... if the only value IT resources provide is the ability to install a server, then their days of employment are probably numbered anyway.  If their only function is to ask "small, medium, large, or super-sized server," they are probably in the wrong profession.

    The final bit is this.  The vBlock provides a couple things:
    • A standardized, compliant "plug and play" architecture for virtualized environments
    • The ability to free up valuable time to work on areas that provide true business value.
    • A decent building block for private clouds, alongside software to (supposedly) streamline the administration of the cloud architecture and increase the number of systems a given admin can support.
    VMware/EMC/Cisco were first to market with these preconfigured building blocks.  NetApp recently announced something similar, and I assume Oracle will be coming out with a competitor eventually.  A good systems administrator automates as much as possible.  Fundamentally, all this does is take automation to the next (massive, cross-vendor) level.

    Monday, January 25, 2010

    XIV Disk Fault Questions [Updated]

    UPDATE [8/6/2010]:  If you're interested in more current XIV information, I recommend reading Tony Pearson's recent posts here and here.  He also provided additional information in the comments to one of my posts here.
    --
    Today, I came across an XIV RAID-X post by IBMer KT Stevenson: RAID in the 21st Century.  It is a good overview of the XIV disk layout/RAID algorithm.  I have limited my questions in this post to ones raised by KT’s post since this post is already a bit lengthy.
    In fact, DS8000 Intelligent Write Caching makes RAID-6 arrays on the DS8000 perform almost as well as pre-Intelligent Write Caching RAID-5 arrays.
    Any array that does caching for all incoming writes should be able to claim the same (from a host perspective).  For large writes where the entire stripe is resident in memory for parity computations, there should be almost NO performance degradation.  It is great that the DS8000 performs well with RAID-6, but that is rapidly becoming “table stakes” if it isn’t already.
    When data comes in to XIV, it is divided into 1 MB “partitions”.  Each partition is allocated to a drive using a pseudo-random distribution algorithm.  A duplicate copy of each partition is written to another drive with the requirement that the copy not reside within the same module as the original.  This protects against a module failure.  A global distribution table tracks the location of each partition and its associated duplicate.
    [..]
    The most common ways to mitigate the risk of data loss are to decrease the probability that a critical failure combination can occur, and/or decrease the window of time where there is insufficient redundancy to protect against a second failure.  RAID-6 takes the former approach.  RAID-10 and RAID-X take the combination approach.  Both RAID-10 and RAID-X reduce the probability that a critical combination of failures will occur by keeping two copies of each bit of data.  Unlike RAID-5, where the failure of any two drives in the array can cause data loss, RAID-10 and RAID-X require that a specific pair of drives fail.  In the case of RAID-X, there is a much larger population of drives than a typical RAID-10 array can handle, so the probability of having the specific pair of failures required to lose data is even lower than RAID-10.
    While at first this paragraph made perfect sense to me, there was something that just didn't seem to sit right.  Namely, this portion:
    In the case of RAID-X, there is a much larger population of drives than a typical RAID-10 array can handle, so the probability of having the specific pair of failures required to lose data is even lower than RAID-10.

    The following is based off of what I've read online... allocations are divided up into 1 MB partitions, which are then distributed across the frame.  For the purpose of this question, I will assume 100% of all disks are available for distribution (which is untrue, but it is the absolute best case scenario) and that the data is perfectly evenly distributed.

    In a fully loaded XIV frame, there are 180 physical disks.  What I'm interested in is the number of chunks that can be mirrored among the 180 disks without repeating the pair – once a ‘unique’ pair is repeated, you are vulnerable to a double disk failure with every allocation past that point.  So, 180 C 2 = 16110.  With 1MB per chunk, that is 16 GB.  From an array perspective, you run out of uniqueness after 16GB of utilization.  From an allocation perspective, any allocation larger than 16GB would be impacted by a double disk fault.  I assume XIV doesn’t “double up” on 1MB allocations (going for a less wide stripe for reducing the chances of a double fault) simply because I've always heard that hotspots aren't an issue.  This is best case assuming a perfect distribution, as near as I can reason - I'm sure any XIVers out there will correct this if I'm making an invalid assumption.

    If you look closer, though, it's a little worse than that.  Every single disk is not a candidate as a mirror target - as noted above, XIV does not mirror data within a module.  With 15 modules in a 180-disk system, that means for each mirror position there are 11 disks that cannot be used.  The math gets beyond me at this point, so if anyone wants to comment on what that actually equates to, I'd be interested.
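    That said, treating it purely as a counting problem (and assuming 12 drives per module, with mirrors never allowed within a module), here's a rough stab at it:

from math import comb

total_pairs = comb(180, 2)              # any two of the 180 drives: 16110
same_module_pairs = 15 * comb(12, 2)    # pairs sharing a module are ineligible: 990
eligible_pairs = total_pairs - same_module_pairs

print(eligible_pairs)                            # 15120 unique cross-module pairs
print(round(eligible_pairs / 1024, 1), "GB")     # ~14.8 GB at 1 MB per mirrored chunk

    So, under those assumptions, the module restriction drops the ~16GB of 'uniqueness' down to roughly 14.8GB before pairs have to start repeating.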

    1. What is the “blast radius” of a double drive fault with the two drives on different modules?  Is it just the duplicate 1MB chunks that are shared between the two drives, or does it have broader impacts (at the array level)?
    2. At what size of allocation does a double drive fault guarantee data loss (computed as roughly 16GB above)?
    3. What is the impact of a read error during a rebuild of a faulted disk?  How isolated is it?
    4. Does XIV varyoff the volumes that are affected by data loss incurred by a double drive issue, or is everything portrayed as “ok” until the invalid bits get read?
    5. If there is data loss due to a double drive issue, are there reports that can identify which volumes were affected?
    Update (01/26/2010):
    I realize that the math part of this post is a little hard to understand, especially with 180 spindles in play, so I went ahead and drew it out with only 5 spindles (5 C 2 = 10).

    This shows the 5 spindle example half utilized.  In this diagram it is possible to lose 2 spindles without data loss... for example, you can lose spindle 2 and spindle 4 - since neither of them have both copies of a mirror, no data is lost.



    This shows the 5 spindle example with all unique positions utilized.  In this diagram, it is impossible to lose 2 spindles without losing both sides of one of the mirrors.
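    The same example can also be enumerated directly (a quick sketch of mine) - with 5 spindles there are only 10 unique pairs, and each pair of spindles appears exactly once:

from itertools import combinations

spindles = [1, 2, 3, 4, 5]
mirror_pairs = list(combinations(spindles, 2))
print(mirror_pairs)                      # the 10 unique pairs: (1, 2), (1, 3), ... (4, 5)

# half utilized: say only the first 5 pairs hold data - losing spindles 2 and 4
# is survivable because no chunk has both of its copies on that pair
used = set(mirror_pairs[:5])
print((2, 4) in used)                    # False: that double failure loses nothing

# fully utilized: every pair holds data, so ANY two failed spindles share a mirror
used = set(mirror_pairs)
print(all(pair in used for pair in combinations(spindles, 2)))    # True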

    The 16GB number quoted above is based off of a 1MB chunk size, which is what has been documented online.  If the chunk size was larger, then that amount would be higher before guaranteed loss.  Of course, if you lose the wrong two drives prior to 16GB, you'll still lose data.  The percentage chance of data loss increases as you get closer to 16GB.

    I know KT is working on a response to this, I'm looking forward to being shown where the logic above is faulty (or where my assumptions went south).

    Tuesday, January 19, 2010

    Storage Costs - Cost per GB? Not Always

    Recently, using "cost per GB" as a metric to rank the acquisition cost for storage platforms has come up quite a bit... starting with a conversation amongst the storage fanatics on Twitter ('bod, myself, sysadmjim, and peglarr) followed by a post from tommyt (Xiotech) and then Storagebod.

    First off, the general consensus seems to be that "cost per raw GB" is not ideal, simply because, due to architectural differences (e.g. DIBs), RAID overhead, and other factors, the amount of usable capacity per "raw GB" can vary greatly.  Most people have settled on "cost per usable GB."  Over at Storagebod's blog, an unnamed vendor (but obviously one with dedup, etc) claims that "cost per used GB" is a better, albeit harder to measure, metric.
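
    To make the distinction concrete, here's a toy comparison - every number below, including the dedup ratio, is made up purely for illustration:

    use strict;
    use warnings;
    my $price       = 500_000;                    # hypothetical array price
    my $raw_gb      = 100_000;                    # raw capacity
    my $usable_gb   = 70_000;                     # after RAID, spares, vault, etc.
    my $dedup_ratio = 1.5;                        # the vendor-claimed reduction
    my $used_gb     = $usable_gb * $dedup_ratio;  # logical data stored if the claim holds
    printf "Cost per raw GB:    \$%.2f\n", $price / $raw_gb;
    printf "Cost per usable GB: \$%.2f\n", $price / $usable_gb;
    printf "Cost per used GB:   \$%.2f (only if the ratio materializes)\n", $price / $used_gb;

    The gap between the last two lines is exactly the number the vendor is asking you to bet on.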

    When migrating from a legacy environment, it is difficult to quantify how much primary storage dedup can save you.  From a career standpoint, it is a little risky to assume that you can purchase 80% of your currently allocated storage to meet your current and future needs based on what a vendor claims you can squeeze out of it.  While there are tools and benchmarks to help with this, all things being equal, I think it's safe to say people would be more comfortable recommending buying sufficient spinning rust to meet their capacity requirements than relying on dedup/TP.  While dedup/TP can save money (especially on future orders, after you've seen how it behaves in a particular environment), for the initial outlay it is a little risky.  Additionally, you need to be careful that you're not spending more on dedup licenses than you would have spent on disk (after maintenance, etc).

    Another issue with cost per GB is that it can fluctuate based on which components need to be upgraded with any given order.  With the upfront costs of the cabinets, storage processors, cache, and connectivity being fairly significant, the ongoing cost of adding disk can be higher (or lower) depending on what you need to "scale" on the array.

    In short, cost per GB is not always a good metric.  While it works when capacity is the primary requirement, it falls short when there are specific IOPS requirements that outstrip physical capacity.  For the sake of argument, let's assume a given corporation has standardized on 3 tiers of storage:
    • Tier 1:  High performance requirements - such that the capacity required does not fulfill the IOPS requirement (typical architecture:  RAID 10, 15k drives or SSDs).
    • Tier 2:  General storage requirements - generally, the capacity required will fulfill the IOPS requirement, especially when spread over enough hosts and disks (typical architecture:  RAID 5, 10k drives).
    • Tier 3:  Archive storage requirements - generally, IOPS is not a major consideration, and fat cheap disks are more important than speed (typical architecture:  RAID 6, 1-2TB SATA drives).
    With these tiers, cost per GB is a fairly good metric for Tiers 2 and 3, but it falls down in Tier 1, where additional drives are allocated beyond the capacity requirement just to meet performance.  The use of SSDs can reduce the cost of Tier 1, but in the past it has been difficult to lay out the allocations to get the best use out of them.  Technologies such as FAST (EMC) and PAM (NetApp) help with the layout issue.  Wide striping can help performance in all tiers, but in Tier 1 you have to be careful if you need to guarantee response time.
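
    As a quick illustration of how an IOPS-driven tier skews the metric - all the per-drive numbers below are made-up round figures, not benchmarks:

    use strict;
    use warnings;
    use POSIX qw(ceil);
    my $capacity_gb    = 5_000;    # what actually needs to be stored
    my $required_iops  = 20_000;   # sustained IOPS requirement
    my $drive_gb       = 300;      # usable GB per 15k drive after RAID 10
    my $drive_iops     = 180;      # rough IOPS per 15k spindle
    my $cost_per_drive = 1_500;    # hypothetical street price
    my $for_capacity = ceil($capacity_gb / $drive_gb);      # 17 drives
    my $for_iops     = ceil($required_iops / $drive_iops);  # 112 drives
    my $drives       = $for_capacity > $for_iops ? $for_capacity : $for_iops;
    printf "Capacity needs %d drives, IOPS needs %d - buying %d\n",
        $for_capacity, $for_iops, $drives;
    printf "Cost per GB actually required: \$%.2f (vs \$%.2f if capacity alone drove the buy)\n",
        ($drives * $cost_per_drive) / $capacity_gb,
        ($for_capacity * $cost_per_drive) / $capacity_gb;

    Once the spindle count is driven by IOPS instead of capacity, the same drives look several times more expensive per GB even though nothing about the hardware changed - which is exactly where SSDs and the layout technologies mentioned above earn their keep.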

    FAST v2 is an interesting technology.  Basically, EMC is betting that, with extremely granular, intelligent data movement, most customers can eliminate the Tier 2 requirement, spread all the allocations over a lot more 'Tier 3' spindles, and handle the hot spots by migrating hot blocks to SSDs.  This will make internal chargeback extremely difficult since it is dynamic and self-tuning.  It will also make performance testing on non-production servers difficult since, to my knowledge, there isn't a good way to "apply" a given layout to a different environment.

    All of which basically says, going forward, straight "Cost per usable GB" is going to become less important for determining the total cost of a storage environment.  My recommendations?
    • Work with all vendors very closely to make sure that they (and you) have a good understanding of the requirements of the proposed environment.  Make sure the estimates are realistic, and in the cases of dedup and TP, make sure that the vendor will stand by the ratios that the solution depends on.
    • Make sure that you understand how the storage environment will grow - namely maintenance costs and stair-step upgrades (when purchasing additional disks requires more cache, directors, additional licensing, etc).  Make sure it is all in the contract, and pre-negotiate as much as possible.
    • Maintain competition between vendors.
    Typically, someone will bring up using virtualization as a mechanism to ensure fair competition among vendors for capacity growth.  While there is some validity to that for existing environments where multiple vendors are already on the floor and under maintenance, for new environments the "startup costs" of new arrays tend, in my opinion, to negate any "negotiating benefit" you could get from virtualizing the environment.

    Friday, January 15, 2010

    Giving the Right Answers

    Storagebod has an interesting post up about the questions customers ask vendors.  It is definitely a good read, and the comments are good food for thought.  The main question I think should be asked during RFPs is "give me customer references that have gone through a migration very similar to the one we would face if we chose your solution."  While customer references are always glowing and hand-picked, these types of references can (and do) surface common gotchas.

    What I'd like to touch on is vendor responses to questions.  I'd like to see more blunt answers that actually help the customer with the implementation.  Answers such as:
    • "That is a supported configuration.  However, we'd recommend that you do it this way..." - Too often, I've seen responses that just indicate whether or not a given configuration is supported... not if it makes sense with the equipment in question.
    • "While with your current architecture, having that type of drive configuration made sense... in the proposed solution, it does not and this is why." - No two storage platforms behave the same... questions are driven by past experience with whatever vendor is currently in house.  The responses should guide the customer to better solutions... not simply forklifting what is currently installed with a differently badged environment.  Of course, explain why there are deviations, but don't allow for a tray of 15 10k 73GB drives if it isn't optimum on the suggested architecture, for example.
    • "No.  We do not provide that functionality.  We actually do not plan on providing that functionality due to." - If there is functionality that the proposed solution does not provide, and there isn't a concrete date for implementation, don't even respond with "it's coming."  One or two missing "nice to haves" probably isn't going to make a difference during vendor selection (cost will come into play typically before that).
    Basically, I'd rather see blunt responses that indicate where the customer is being stupid/misguided than a glowing RFP response that doesn't quite paint an accurate picture.

    Sunday, January 10, 2010

    Perl Script to Add or Remove Colons from WWNs

    Some days it seems like a large portion of my job is pasting WWNs from Cisco MDS switches into SYMCLI (and back).  MDS switches require the WWNs to have colons in them; SYMCLI requires them without.

    I wrote a quick and dirty script in Perl (with a little help from Aaron on the backreference) to add or remove colons from what is currently on the clipboard. 

    Requires Microsoft Windows and ActiveState Perl.
    use strict;
    use warnings;
    use Win32::Clipboard;
    # Grab whatever is currently on the clipboard.
    my $CLIP = Win32::Clipboard();
    my $x = $CLIP->Get();
    if ($x =~ m|:|) {
        # Colon-delimited (MDS style): strip the colons for SYMCLI.
        $x =~ s|:||g;
    }
    else {
        # Bare WWN (SYMCLI style): insert a colon after every two characters,
        # then drop the trailing one.
        $x =~ s|(.{2})|$1:|g;
        $x =~ s|:$||;
    }
    # Put the converted WWN back on the clipboard.
    $CLIP->Set($x);
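
    For example (with a made-up WWN): copy 500604843d0fca57, run the script, and the clipboard now holds 50:06:04:84:3d:0f:ca:57, ready to paste into the MDS.  Run it again and the colons are stripped back out for SYMCLI.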

    Thursday, January 7, 2010

    Ripping DVDs to Other Formats in Windows 7 64-Bit

    After I migrated my main laptop to Windows 7 64-bit, I noticed that most DVD ripping applications no longer worked.  After trying several replacements, I finally found a free app that rips DVDs into MKV format, which HandBrake can then transcode into an iPhone-compatible format.  There are several tutorials online if you need to convert the MKV files back to DVDs, but for my purposes that was unnecessary.

    MakeMKV - "MakeMKV is your one-click solution to convert video that you own into free and patents-unencumbered format that can be played everywhere. MakeMKV is a format converter, otherwise called "transcoder". It converts the video clips from proprietary (and usually encrypted) disc into a set of MKV files, preserving most information but not changing it in any way. The MKV format can store multiple video/audio tracks with all meta-information and preserve chapters. There are many players that can play MKV files nearly on all platforms, and there are tools to convert MKV files to many formats, including DVD and Blu-ray discs."

    HandBrake - "HandBrake is an open-source, GPL-licensed, multiplatform, multithreaded video transcoder, available for MacOS X, Linux and Windows."

    Hot Migrate Root Volumes in AIX

    AIX is one of the few OSes that, out of the box, can replace boot disks while the OS is running, without an outage.  Of course, the standard disclaimers apply... namely, test this on a non-production LPAR and make sure you have backups.  This is extremely helpful for storage array migrations (bringing an LPAR under an SVC, for example) and general maintenance.

    If you’re running under VIO Servers, it even allows complete cleanup to occur online.  I recommend a reboot at the end to be safe, but it really isn't necessary.

    Assumptions:  All the target disks are currently configured on the LPAR and are not in any volume groups.

    Step 0: MAKE SURE TO HAVE A CURRENT MKSYSB AND BACKUP.

    Step 1: Replace an old root hdisk with the new one.  If this fails because the destination disk is smaller, go to the alternate instructions below.
    $ replacepv OLDDISK1 NEWDISK1
    0516-1011 replacepv: Logical volume hd5 is labeled as a boot logical volume.
    0516-1232 replacepv:
    NOTE: If this program is terminated before the completion due to
    a system crash or ctrl-C, and you want to continue afterwards
    execute the following command
    replacepv -R /tmp/replacepv385038
    0516-1011 replacepv: Logical volume hd5 is labeled as a boot logical volume.
    Step 2: Verify that the old disk is no longer defined to any volume group:
    $ lspv
    OLDDISK1 00007690a14xxxxx None
    NEWDISK1 00007690a14xxxxx rootvg  active
    OLDDISK2 00007690913xxxxx rootvg  active
    Step 3: Add the boot image to the new disk:
    $ bosboot -ad NEWDISK1
    bosboot: Boot image is 30441 512 byte blocks.
    Step 4: Repeat steps 1-3 for the second root disk (if replacing both root disks).

    Step 5: Adjust the bootlist
    $ bootlist -om normal  NEWDISK1 NEWDISK2
    $ bootlist -om service NEWDISK1 NEWDISK2
    Step 6: Remove the old hdisks.
    $ rmdev -dl OLDHDISK
    Step 7: Remove the old disk mappings from the VIO Server if applicable.
    $ rmdev -dev OLDMAPPING
    Step 8: Run savebase
    $ savebase
    Alternate Instructions

    Step A1: Place the replacement hdisks into the volume group:
    $ extendvg rootvg NEWDISK
    Step A2: Migrate the data off the old disk (the new disk must have enough free PPs to hold it):
    $ migratepv OLDDISK NEWDISK
    Step A3: Validate that there is no data left on the old disk:
    $ lspv -l OLDDISK
    Step A4: Remove the old disk from the volume group:
    $ reducevg rootvg OLDDISK
    Step A5: Add the boot image to the new disk:
    $ bosboot -ad NEWDISK1
    Step A6: Repeat steps A1-A5 for the second root disk.

    Step A7: Continue with step 5 above (namely, adjust the bootlist, clean up the leftover disks, and run savebase).

    Monday, January 4, 2010

    SSH over VPN on the iPhone - Why Not?

    Recently, Nigel Poulton tweeted a YouTube video that showcased an application to manage the Xsigo I/O Directors from the iPhone... I responded that, at one time, I had done something 'similar' using SYMCLI and SSH.  He posted a follow-up discussing whether or not an iPhone admin function is something that enterprise customers would be comfortable with:
    While I think the idea is cool, I’m not sure how interested companies would be –> management and configuration changes to production kit from an iPhone ….. sounds a bit ahead of its time to me.   Cool, yes.  But is cool what major companies and managers of large Data Centres are looking for?  Remember that Xsigo kit is pretty squarely pitched at enterprise customers.  Would such applications cause more worries and concerns than they would solve problems?
    I don't think anyone would argue that administering any sort of production kit primarily using an iPhone is a good idea.  But certainly most IT folks have had production situations arise where they're away from a computer and just need to check a few things out quickly.  This type of software is perfect for that.  In any case, it is optional software, so if a given customer has an issue with providing this type of access then they can simply not deploy this interface.
    Think about it this way……. Matt Davis pinged me back saying that he had once done “symcli over ssh over VPN ….. via my iPhone” to administer a Symmetrix DMX!!  Not sure what your initial thoughts are on hearing that, but mine were trepidation.  Sure, that’s pretty damn cool, but pretty flipping scary too!  Kudos to Matt, but more scary than cool in my books
    More scary than cool?  In my opinion, not really.
    1. GNU Screen provides protection against connection hiccups.  If the VPN or SSH connection drops in the middle, I can re-attach the terminal as it was running.
    2. I've written perl scripts around the majority of changes... as part of making a change, those scripts generate 'undo scripts' that can easily revert everything to the way it was previously (see the sketch after this list).
    3. I'm extremely familiar with SYMCLI (to the point that I tend to know more than the support people I work with) and I would only ever run processes I was comfortable with via this type of connection.  I'm enough of a geek that I have the entire Solutions Enabler PDF collection synced to my iPhone via DropBox (along with Cisco documentation and other array documentation). 
    4. I would never run any procedure that would generate a lock on the array or take a long time to run.  But some symdev or symmaskdb queries?  Readying a device or kicking off a symrcopy command?  Why not?
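
    For what it's worth, the "undo script" idea in point 2 can be sketched generically - this is a simplified illustration of the concept, not my actual tooling:

    use strict;
    use warnings;
    use POSIX qw(strftime);
    # Usage: wrap.pl '<change command>' '<command that reverses it>'
    my ($do_cmd, $undo_cmd) = @ARGV;
    die "usage: $0 '<do command>' '<undo command>'\n" unless $do_cmd && $undo_cmd;
    my $undo_file = strftime("undo-%Y%m%d.sh", localtime);
    # Record the reverse command *before* making the change, so there is a
    # written path back even if the SSH session drops mid-change.
    open my $fh, '>>', $undo_file or die "can't append to $undo_file: $!";
    print $fh "$undo_cmd\n";
    close $fh;
    system($do_cmd) == 0 or warn "'$do_cmd' exited non-zero: $?\n";
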
    I could argue that this method is more stable than most Web interfaces since it isn't subject to JVM crashes and browser hangs.  As with most CLIs, you need to know exactly what you are doing though.  A little knowledge and root access is a dangerous thing.