
Storage Virtualisation – Not Magic!

Okay, I am getting increasingly irritated by the argument that storage virtualisation somehow magically reclaims large amounts of disk!

Implementing storage virtualisation alongside a consolidation and data-classification project enables storage managers to reclaim large amounts of disk. Putting in any new storage technology should allow you to reclaim capacity as you identify and re-classify data, taking the opportunity to re-tier where appropriate and to archive files which are no longer accessed.

Storage virtualisation can make the task easier, but it is not a magic bullet in itself.

Moving to a thin-provisioned environment is probably a more useful way of managing your storage growth, but if we are being honest, get your management and your processes right and you will find that your utilisation rates improve. Capacity-plan properly and get realistic views of data growth.
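
To put a number on the thin-provisioning point, here is a minimal sketch; the capacity figures and the 10% overhead are invented for illustration, not drawn from any real pool:

```python
# A minimal sketch, with invented capacity figures, of why thin
# provisioning reports far better utilisation than thick allocation:
# thick carves out the full request up front, thin only consumes
# what has actually been written.

def utilisation(written_tb: float, allocated_tb: float) -> float:
    """Fraction of allocated capacity actually holding data."""
    return written_tb / allocated_tb

requested_tb = 100.0  # what the application teams asked for "to be safe"
written_tb = 35.0     # what they have actually written so far

# Thick provisioning: the whole request is allocated immediately.
print(f"thick: {utilisation(written_tb, requested_tb):.0%}")  # 35%

# Thin provisioning: only written blocks consume pool capacity, plus an
# assumed 10% overhead for metadata and allocation granularity.
thin_allocated_tb = written_tb * 1.10
print(f"thin:  {utilisation(written_tb, thin_allocated_tb):.0%}")  # ~91%
```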

Stop short-stroking disks for performance; this will drive up utilisation. That means you might consider SSDs, or wide-striping across the whole array (note: all the spindles in the array, not a subset).
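
As a toy illustration of why striping across all the spindles matters (the stripe size and spindle counts below are made-up numbers, not any vendor's actual layout):

```python
# A toy model of the wide-striping point: round-robin stripe placement
# across 8 spindles versus all 128 in the array.

from collections import Counter

STRIPE_KB = 256

def spindle_for(lba_kb: int, spindles: int) -> int:
    """Which spindle serves this logical address under round-robin striping."""
    return (lba_kb // STRIPE_KB) % spindles

# A sequential 1 GiB workload, one I/O per stripe.
workload = range(0, 1024 * 1024, STRIPE_KB)

narrow = Counter(spindle_for(lba, 8) for lba in workload)   # short-stroked subset
wide = Counter(spindle_for(lba, 128) for lba in workload)   # whole array

# The busiest spindle does 16x less work when striped across the array.
print("busiest of 8:  ", narrow.most_common(1))   # ~512 I/Os
print("busiest of 128:", wide.most_common(1))     # ~32 I/Os
```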

Deduplication will also drive up utilisation.
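
A bare-bones sketch of what block-level deduplication buys you; SHA-256 hashing and a 4 KiB block size are assumptions for the example, not any particular product's design:

```python
# Identical blocks are stored once, keyed by content hash, so N logical
# copies cost one physical block.

import hashlib

def dedupe(blocks):
    """Return (unique block store keyed by hash, per-block reference list)."""
    store, refs = {}, []
    for block in blocks:
        digest = hashlib.sha256(block).hexdigest()
        store.setdefault(digest, block)  # keep only the first physical copy
        refs.append(digest)
    return store, refs

# Ten logical copies of the same 4 KiB block...
store, refs = dedupe([b"\x00" * 4096] * 10)
print(f"logical blocks: {len(refs)}, physical blocks: {len(store)}")  # 10 vs 1
```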

Archiving, and architecting archive-aware and archive-capable applications; this means ensuring that you can get data out as well as in. This too will drive up utilisation.
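
And because the "out as well as in" half is the one most often forgotten, here is a hedged sketch of an age-based archive policy with its matching recall path; the paths and the 180-day threshold are purely illustrative:

```python
# A real archive tier would also need indexing, auditing and retention
# handling; this only shows the two directions of data movement.

import shutil
import time
from pathlib import Path

ARCHIVE_ROOT = Path("/archive")   # hypothetical archive tier mount
AGE_LIMIT = 180 * 24 * 3600       # 180 days, in seconds

def archive_stale(primary: Path) -> None:
    """Move files not accessed within AGE_LIMIT off the primary tier."""
    now = time.time()
    for f in primary.rglob("*"):
        if f.is_file() and now - f.stat().st_atime > AGE_LIMIT:
            dest = ARCHIVE_ROOT / f.relative_to(primary)
            dest.parent.mkdir(parents=True, exist_ok=True)
            shutil.move(str(f), str(dest))

def recall(primary: Path, relpath: str) -> Path:
    """Bring an archived file back: the 'out' half of 'out as well as in'."""
    dest = primary / relpath
    dest.parent.mkdir(parents=True, exist_ok=True)
    shutil.move(str(ARCHIVE_ROOT / relpath), str(dest))
    return dest
```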

What is needed is process change and mind-set change; without these, you will find that even your virtualised environment has extremely poor utilisation rates. Implementing virtualisation may well be the kicker which gets you to address your problems, but it is not the answer in itself.

Instead of selling us more technology widgets, sell us management tools which work quickly, reliably and without a data-centre of their own, so that we can get information about utilisation and the like; I bet a lot of the historic utilisation issues would start to get resolved without resorting to storage virtualisation. But hey, then you lot couldn't brag about year-on-year record growth.


10 Comments

  1. Tony Asaro says:

    I don’t think anyone is claiming that external storage virtualization reclaims disks in and of itself. In the case of Hitachi, what they are saying is that by using external storage virtualization you can use HDP for thin provisioning and wide striping on 3rd-party storage systems that don’t support these capabilities. And those are two points that you agree with. I also agree that it isn’t just technology; process and planning in conjunction with the technology are a good thing.
    External storage virtualization is unique and valuable technology for HDS within the USP V – their competitors don’t have it. I think that it enables true storage networking (see my blog http://blogs.hds.com/tony/2009/05/true-storage-networking.html ). Educating customers on the value of it is important (just like Data Domain educating customers on data dedupe). Clearly there is some confusion about the message since it didn’t come across to you – but I think every time you hear Hitachi say “storage virtualization” you react violently. It is like that Abbott and Costello bit where the guy freaks out when Costello says “Susquehanna Hat Company”.

  2. Hi!
    Thing that confuses me:
    What is storage virtualization?
    Is a LUN a form of storage virtualization?
    Must it be a separate device sitting on top of a storage device?
    What?
    kostadis

  3. Tony Asaro says:

    That is a good question. There are many forms of storage virtualization. Martin and I are both referring to external storage virtualization – the ability of an intelligent storage controller to extend its intelligence to external storage systems.

  4. Martin G says:

    Repeating a terminological inexactitude doesn’t make it any truer. IBM, NetApp and HDS have very similar approaches to external storage virtualisation, and hence those technologies are no longer unique. There are other approaches too, such as those taken by LSI (resold by HP), EMC Invista and Incipient. I actually believe that external storage virtualisation has some value; however, I believe that value is very much overstated and it is not a panacea for all of our problems.
    I would also like to pick up on some of your points: how does HDP enable wide striping on arrays which do not support it? If, and correct me if I’m wrong, a USP simply virtualises LUNs which are presented to it, then trying to wide-stripe across those LUNs is a potential recipe for disaster, as you have no idea what the layout of those LUNs is in the array being virtualised. So attempting to wide-stripe could actually make things worse. Thin provisioning, yes; wide striping, I would suggest you would have to be extremely careful.
    External virtualisation has some potential issues precisely because it is external and has no underlying knowledge of what is going on in the array it is virtualising. This means that implementing something like a FAST-type optimiser is harder to do, certainly if you want to do it at the sub-LUN level. And *shiver*: imagine trying to virtualise an array which has a FAST-type optimiser built in and then applying your own FAST-type optimiser on top. Would you put a Compellent array behind a USP?
    Also, as things like thin provisioning and wide striping become simply another feature, does the value of the externalising approach to enabling these features go down? I don’t know! But maybe external storage virtualisation is a concept with a limited lifespan? Certainly with VMware beginning to offer some of the benefits of external storage virtualisation, but without the added complexity, it becomes an interesting debate.
    Perhaps all we really need is an uber-storage-migration tool which allows us to quickly migrate between heterogeneous arrays? XIV has an interesting approach to this problem in that it claims (and I’ve not looked in detail at this) that you can present your existing array’s LUNs to it, suck the data from those LUNs onto XIV LUNs, present the XIV LUNs to the host and remove the existing LUNs from the host without losing access. As I say, I’ve not looked in detail at how it does it or whether it works.
    And I react violently to the term storage virtualisation because I see it used, along with hand-waving, to imply that magic can be done. The process of putting in virtualisation should drive people to think, and it is this thinking which probably has the most dramatic impact on utilisation rates. If a CIO bets on a technology driving up utilisation rates and reducing costs, most of the time it will appear to do so. Cynical? I prefer the term realist!
    Kostadis, indeed! Every modern array out there supports virtualisation of storage (spinning or not); it abstracts the physical and presents a logical view. An array controller does precisely that; I suppose you could go mad and present each physical disk with a 1-to-1 mapping, but storage virtualisation is pretty much as old as the hills.
    Blimey, I wonder if this should have been a separate blog entry in itself!

  5. Tony Asaro says:

    The argument I make is that the USP V is an Enterprise-class storage system that also supports external storage virtualization. It is the only Enterprise-class storage system that supports this functionality. That is unique and it does make a difference to the customers that acquire the USP V. Since the USP V competes against the DMX and the DS8000, and they don’t support this capability, it is unique in its class. Additionally, the NetApp FAS does not support this function and the IBM SVC is not a storage system.
    Because the USP V is a storage system with its own internal capacity, it can perform optimized functions internally and store data as a tier, and it can also use external storage systems as tiers. This is the best of both worlds, and it isn’t an all-or-nothing value proposition. Therefore the USP V could potentially perform any function that any other storage system can, and on top of that it can also perform functions via external storage virtualization that its competition cannot.
    You do need to look at a storage system holistically and not just feature by feature. But that is the point. The USP V is a great storage system, and external storage virtualization is just one more valuable thing that it does.
    I never accused you of being cynical. As for being realistic, there are thousands of IT pros out there that have leveraged the USP V and external storage virtualization. You are right, it isn’t magic, but the USP V has a rich set of features, including external storage virtualization, that when thinking is applied can be leveraged for greater optimization and utilization.
    Btw – I never said that it was magic, so I don’t intend to defend a position I never took.
    And come on – you have to admit that Abbott and Costello reference was pretty funny 😉

  6. Martin G says:

    Suggest you update your NetApp knowledge and look at the v-Series! v-Series supports both NetApp’s own disk and external arrays. We have one which has both DMX and NetApp disk behind it.
    BTW, AFAIK, you can buy a USP-VM without any disk. So is that a storage system? If IBM bundled disk with SVC, would it magically become a storage system!? I think the distinction that HDS and you draw around the position of the USP-V is quite frankly nonsense!
    I always preferred Laurel and Hardy!
    Sorry, the magic comment wasn’t aimed at you; more at the sales droids who try to sell virtualisation.
    And yes, there are thousands of IT Pros who have put in a new storage system who have taken the chance to drive up utilisation and optimisation rates. I replaced a number of EMC arrays with some new EMC arrays driving up optimisation and utilisation rates! It’s funny that…often putting in something new has that sort of impact.

  7. Tony Asaro says:

    That’s why I specifically said FAS and not V-Series. Come on, man – give it a break, as if that is some deep esoteric knowledge. Maybe you should work on your reading comprehension.
    The point is that existing FAS customers can’t add this functionality to a storage system they already invested in but have to get another product. However, any USP customers can add this functionality at any point they want. That is also why I specifically said the USP V and not the USP VM. I don’t discount appliances but they are a different category of product. I see a pattern of you not understanding the position being presented, leading to the wrong conclusion and making the wrong argument.
    The point is that customers who budget for a storage system and want to use it as one can also get external storage virtualization with the USP V. The other point is that existing storage system customers can just use the USP V as a storage system, and they can’t do that with the SVC. And yes, if IBM turned the SVC into a storage system with its own internal disk then I think that would be valuable. Why they haven’t done that is beyond me – probably internal politics. Additionally, the SVC doesn’t compete head to head with the DMX or DS8000 – the NetApp V-Series doesn’t either. Another point is that the USP V is Hitachi’s flagship product, and that matters in terms of sales, support and R&D focus.

  8. Martin G says:

    Tony, you are still wrong! I said NetApp, IBM and HDS have very similar approaches to virtualisation; I did not name products, and you went specific to support your argument. Appliances, integrated systems; it makes no difference to me. Sorry, I just don’t split the market up like you.
    Can I turn an AMS into a USP? No, because they are different products! Would it be relatively easy to migrate from a FAS to a v-Series? Actually, I suspect it would, and it could probably be done without a huge amount of disruption.
    The USP is not unique; if anything the v-Series is unique, in that it can virtualise third-party arrays and present those LUNs out as either block or file. HDS can’t do this, and their NAS strategy is a shambles. Now EMC et al can argue the merits of the NetApp approach, but I don’t think at present they would dispute that the v-Series is unique. Although it is no particular secret that you will be able to do something very similar, if you so wish, with the Sun product line.
    And if you like to think that the v-Series does not compete head-to-head with the USP, you carry on thinking like that. I’ve sat on RFP/RFI boards where the USP consistently competes with the v-Series.
    There are large banks where SVC is deployed in Tier-1, mission-critical, six-nines-available, high-transactional-throughput situations. Is that not an Enterprise situation?
    It matters not one jot to me that the USP is HDS’ flagship product; what actually matters is that it meets a need at the right price point. The line between Enterprise and non-Enterprise gets more and more blurry every day; this is actually causing the high-end vendors more and more problems, as customers are beginning to ask all vendors, ‘Why, precisely, do I need your expensive Enterprise tin? I can do everything I need with the ‘non-Enterprise’ tin at half the cost!’ This is particularly HDS’ and EMC’s challenge: how do you continue to sell expensive tin in a market which is driving toward commodity?
    Stamping feet and wittering about why your products are unique does not help your cause! I acknowledge that external storage virtualisation has value; I believe that its value is very much overstated and may well only be of a transitional nature. There is a lot more value to be driven out of process, procedure and decent management tools which allow me to see clearly into my storage environment and manage it; without having to get a vendor in to tell me the time with my own watch, or even worse, tell me the time and then try to sell me a new watch!

  9. Barry Whyte says:

    As I commented on my own blog in my first few posts a few years ago – where I compared the (then) three approaches to what we all think of as “storage virtualization”, the abstraction of a virtual volume from the physical volume(s) it resides on – the HDS and IBM approaches are almost identical in the functionality they can provide. I think the point Martin is making is that what virtualization devices of any type provide can also be achieved with much closer attention to, and management of, existing resources.
    OK, so you get some major benefits – like non-disruptive array-to-array migration and common copy services across all vendors’ products – that cannot be had without a virtualizer, but the increased utilisation and centralised management aspects can be solved in other ways, possibly just not as easily. You need a way to get a truly centralised view of all your resources (free and used), and adding a virtualizer helps in that respect; it also makes it easier to use up the free capacity because of the pooling it can provide across multiple arrays. However, host-based migrations and some very fine-grained management of your devices can provide the same end result; it may just take longer and be less flexible on a day-to-day basis.
    I never claim that SVC or other products are the answer to world hunger, and I think that’s what Martin was getting at in the first place.
    My take is to present the true facts, present the questions customers should be asking, acknowledge any limitations and let the customer decide.

  10. Matt Cross says:

    Hi there – you might consider vOptimizer Pro from Vizioncore as a good management tool, or, if you are running a large-scale SAN, some kind of I/O measurement such as Virtual Instruments can really help with bottleneck discovery.
    Best,
    Matt
