No, I'm not going anywhere; well, not yet! I was hoping for a billion-dollar take-over bid from someone in light of some of the goings-on in the market. Hey EMC or NetApp, whichever one of you loses out; just throw a billion dollars my way!
Actually, what I'd like to talk about is data migration. With HDS' HAM announcement and the promise of seamless migrations forever, easy and smooth, it seems a killer feature; indeed, EMC are talking about the same capabilities for the V-MAX. As long as you're going from V-MAX to V-MAX, USP-V to USP-V or even IBM SVC to IBM SVC, migrations should be outage-free and relatively easy; that is, as long as you meet all the pre-requisites around firmware, driver levels, multi-pathing software and probably operating-system levels, migration will be relatively simple, outage-free and automagic!
Of course, as soon as you want to go out of family, let's say USP-V -> V-MAX, you've got a problem; but as long as you keep your disk controllers the same, you are fine. Yes, I know you can use the USP-V or the SVC to bring in external arrays, but I am talking about fundamental changes to the storage architecture.
Block-based migration can also be achieved at the host level with minimal to no outage by using host-based tools such as volume managers. It is this technique which is probably most commonly used; it is laborious but it is a well-travelled path. So when it comes to block-level migrations, you have options.
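For what it's worth, the volume-manager approach usually boils down to something like the following Linux LVM2 sketch; the volume-group and device names are hypothetical, and it assumes the new array's LUN is already zoned and visible to the host:

```shell
# Host-based, online block migration with Linux LVM2 (illustrative only).
# /dev/mapper/old_lun and /dev/mapper/new_lun are hypothetical multipath devices;
# datavg is a hypothetical volume group. Filesystems stay mounted throughout.

pvcreate /dev/mapper/new_lun                     # label the new LUN as a physical volume
vgextend datavg /dev/mapper/new_lun              # add it to the existing volume group
pvmove  /dev/mapper/old_lun /dev/mapper/new_lun  # copy extents online, old -> new
vgreduce datavg /dev/mapper/old_lun              # drop the emptied old LUN from the VG
pvremove /dev/mapper/old_lun                     # wipe its PV label; the LUN can be reclaimed
```

The pvmove step mirrors extents in the background while applications keep running, which is why this path is outage-free; the laborious part is simply repeating it LUN by LUN across an estate.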
I am assuming that you are not taking the opportunity to re-tier, re-lay-out for performance, stack LUNs, remove dead data and generally tidy up your storage environment; you are simply going to move one LUN to another LUN.
However, as we are aware, we don't just have block storage these days; NAS is becoming the default option for many companies. Management of NAS is generally easier, it can certainly be quicker to provision and its TCO is often lower. It is an attractive option but… it's a pain to migrate seamlessly and without outage!
In the past, the data we had on NAS probably did not have the availability requirements of the data sitting on our Tier-1 arrays; it was not mission-critical. But this is no longer the case; mission-critical data sitting on NAS is becoming more common, and availability requirements for this data are at the five- and six-nines levels. Taking outages for migration will not be acceptable to businesses, and we need to come up with strategies for seamless migration of NAS data.
There are tools such as Acopia from F5 and Rainfinity from EMC which virtualise at the file level. Isilon promise no more fork-lift upgrades: you simply upgrade and migrate incrementally; as do others. Or clustered file systems might be the answer? Perhaps using the facilities in the hypervisor, almost akin to what we do on a host for block?
But this is not yet a mature and well-understood discipline for most people. And as NAS becomes the de facto choice, it will need to be. Also, as NAS has been sold on simplicity, reduced management costs and the like, it is going to have to be easy.
Martin
You’ve hit on a very important TCO factor when deciding what platform to put data on – the cost of getting off. I’ve always used the analogy of buying a house or car. Yes, I/we might like it, but what will be the resale value in the future? Storage deployment is the same – if you deploy a technology, in 3/4 years’ time, how easy will it be to get out? Block-based arrays tend to be reasonably easy, but products like NAS, and in particular NetApp with SnapVault, can prove a real headache – partly because they’re proprietary, partly because understanding exactly what and how data is stored is difficult.
Of course the cost of decommissioning tends to get forgotten at the time of purchase. Perhaps we should look at the Nuclear Power industry for some pointers on decommissioning costs and forward planning….