
Storage Virtualisation 2.Oh…put some meat on the bones!

Okay, so I like picking on HDS's Hu; I don't know why, but his blogs drive me to distraction (I should probably stop reading them). And I'm a sarcastic, cynical and miserable bugger at the best of times! I like challenging vendors and I want them to come up with answers. At the moment, I'm not seeing a lot of answers from HDS; undeniably they have some great products, but so do their competitors. I suspect that they may have a very interesting roadmap; their competitors certainly do.

Tony Asaro addresses some of my points here and I'd like to respond:

1) Would I deploy SVC into an Enterprise environment? I know of Enterprise environments, including financial institutions, which have. Would I do it? I'd want to test it in my environment; I have no fundamental objection to it virtualising a mix of tier-1 and tier-2 if the commercial model works.

2) Okay, the USP-V means you could claim to have bought an array and then get virtualisation in by stealth. To this day, I find it hard to believe that IBM have not built an array based on the SVC; technically there is no reason why they couldn't. I suspect internal politics may have prevented them. But the fact that the USP-V can get virtualisation in by stealth is not a feature in itself.

3) See 2

4) Now this is true: the whole focus of HDS' organisation is on the USP-V and USP, and that's great. They've bet the farm on it and committed to it. That doesn't in itself make it any better than anyone else's product. And HDS' smartest might not be as smart as a competitor's average; I have no way of telling.

But Tony is right: HDS need to turn up the volume, but in a useful way! Talk about how you might use virtualisation; talk about it as a practical thing, not a religious thing. Enough about how wonderful it is!

I suspect that a lot of the vendors have some very similar ideas about the next generation of disk arrays. Many of them are in line with what I have been asking for/positing for the last couple of years. Some of them, frankly, I am amazed have taken this long to happen.

1) Looser coupling between the spinning rust (going to need a new phrase to cope with SSDs) and the controllers.

2) Virtualisation in the controllers, potentially allowing things like:

  • Domainable arrays

  • Virtual appliances such as back-up appliances, dedupe appliances and database accelerators running in the array

3) Commodity hardware usage

4) Greater connectivity options

5) Sub-LUN, block-level optimisation (there's a rough sketch of this below)

6) Highly automated management

Now some of these already exist, some of these should exist already. None of them exist in a single product…yet!
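To put a bit of flesh on points 5 and 6, here is a rough, purely illustrative sketch of how policy-driven, sub-LUN tiering might hang together: the array samples I/O heat per extent, and a policy decides placement against observed demand rather than media type. This is nobody's actual implementation; every class name and threshold below is invented for the sake of the example.

# Hypothetical sketch of policy-driven sub-LUN tiering.
# Names and thresholds are invented for illustration; no vendor's
# real implementation is represented here.

from dataclasses import dataclass


@dataclass
class Extent:
    """A fixed-size slice of a LUN; the unit at which tiering decisions are made."""
    lun: str
    offset_mb: int
    io_count: int = 0        # I/Os observed in the current sampling window
    tier: str = "capacity"   # current placement


@dataclass
class TieringPolicy:
    """Placement driven by observed demand, not by the underlying media."""
    hot_iops: int = 1000     # promote extents busier than this
    cold_iops: int = 50      # demote extents quieter than this
    fast_tier: str = "ssd"
    slow_tier: str = "capacity"

    def placement(self, extent: Extent) -> str:
        if extent.io_count >= self.hot_iops:
            return self.fast_tier
        if extent.io_count <= self.cold_iops:
            return self.slow_tier
        return extent.tier   # leave borderline extents where they are


def rebalance(extents: list[Extent], policy: TieringPolicy) -> list[tuple[Extent, str]]:
    """Return the (extent, target tier) moves the policy currently demands."""
    return [(ext, policy.placement(ext)) for ext in extents
            if policy.placement(ext) != ext.tier]


if __name__ == "__main__":
    lun0 = [Extent("LUN0", off, io) for off, io in ((0, 4200), (1024, 30), (2048, 400))]
    for ext, target in rebalance(lun0, TieringPolicy()):
        print(f"move {ext.lun}@{ext.offset_mb}MB from {ext.tier} to {target}")
        ext.tier = target

The thresholds don't matter; what matters is that the decision is made per extent against a policy, which is what would make point 6 realistic rather than yet another management promise.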

I'll stop picking on Hu and being horrible to him! I'd just ask him to put some humour and personality into his blog. Let's have a real discussion on the value of virtualisation, and a real discussion on the future of storage.


4 Comments

  1. Tony Asaro says:

    I am a New Yorker so I am also a sarcastic, cynical and miserable bugger. I do think there is a legitimate discussion to be had here. It frustrates me that HDS hasn’t done the kind of education that is needed but I know that is changing now.
    Here are some responses to your responses:
    1. I didn’t ask if you would put SVC in an Enterprise environment – I think it has a legitimate place in Enterprise data centers. I asked if you would seriously consider using it for a mission-critical application with just tier 2 storage behind it.
    2. and 3. Not a feature but an architectural difference that is important and valuable. A product’s architecture is absolutely a legitimate part of the discussion and decision making.
    4. The whole focus of HDS is not on the USP-V, but it is their flagship product. The point I was making here is that, as a company, external storage virtualization is their top priority – that can’t be said of any other leading billion-dollar storage vendor.
    We agree that disaggregation or decoupling is where things need to go. The devil, as they say, is in the details – and I am all for a real discussion.

  2. Martin G says:

    Firstly, you have to define your Tiers; for example, in my world, there is no difference in the availability characteristics of Tier 1 & 2. So I would put mission-critical applications on Tier 2, and indeed we do. So if I deployed SVC and I was happy with the end-to-end performance and availability characteristics of the solution, I wouldn’t have a problem deploying on SVC with Tier 2 or whatever!

  3. ianhf says:

    Firstly I need to declare that I am a large enterprise HDS customer (also a v large EMC & NetApp customer – sorry Barry W, not much IBM though).
    Secondly I’ve been in the fortunate position to meet with Hu and most of the snr HDS team on regular occasions, and find them all highly professional, knowledgeable, entertaining and industry-aware. I do however agree that the public and commercial corporate persona of HDS appears somewhat different in culture to others (but sometimes it’s good not to have a sales guy hold a gun to your head, right? Or not to have to fend off the constant marketing hype barrage of “we’re the best, the others are rubbish regardless of what you want to discuss”? – neither of these has ever happened with the HDS folks I work with). Lastly, HDS’ communications are changing and, as previously promised Martin, I’ll send you over the contacts of a couple of snr HDS technical folks that you will very much enjoy engaging f2f with on these topics…
    Do I have products that can do ‘virtualisation’ today? Yes. Do I use them? Rarely – generally for migrations. We’re at too big a scale to take risks. We’ve looked at all the products repeatedly and we’ve a list of RFEs with all the providers about our scale, refresh, availability & OSS reqs before we’ll get our feet any wetter. It’s getting better – but slowly, too slowly.
    A question I’d ask is “Is the current virtualisation an ‘enterprise product’, or a point product in the enterprise?” My feeling is very much that it’s a point product. Hence I agree that virtualisation (server or storage) today is mainly used for point product consolidation, which is purely the first step on the benefits chain. To be of real & required value it needs to truly operate at the dynamic level, abstracting the enterprise rather than parts of the enterprise, and it needs to be federated over wide parts of the enterprise.
    Again my feeling on this is that ‘virtualisation 2.0’ will actually be ‘improved policy-based dynamic abstraction of the logical presentation from the physical instantiation’ – it will mean storage tiers truly defined by QoS & performance logical attributes rather than the underlying persistence technology. This will still be deployed as ‘point products’ within the enterprise (i.e. independent instances), but giving a richer, more flexible vertical pipe if you like – as the technology development on OSS tools will need time to mature, and the scale point of abstracting 10s of millions of LUNs will take some time to work through. Hence I raise the spectre of ‘virtualisation 3.0’ – with it being the widening / consolidation of the vertical deployments into a few enterprise-wide logical domains of abstracted storage. Thus finally (maybe) giving rise to the properly connected ‘storage network’ that will deliver on the promises (most) storage vendors have been making for far, far too long…
    V1 = logical domains of a few enclosures presented as a single enclosure, managed at the static LUN level
    V2 = logical domains of a few enclosures presented as a single enclosure with improved enclosure resilience, fully automated policy based dynamic sub LUN object tiers
    V3 = (significant) improvements in scale (ie 10s PB of capacity, 10m+ LUNs, 100k+ connectivity) and OSS tools to enable consolidated enterprise wide deployments
    And of course, in order for this to make any sense, the TCO model must be beneficial in terms of real hard €s and not in softer enabling or future benefits; in today’s climate the TCO model has far greater weight on the outgoing capex & opex than on the incoming benefits. So those suppliers that wish to require additional revenue to enable v2/3 will be stuck: the revenue lines have to get cheaper, simpler and easier to manage, understand and predict, and additional functionality cannot be seen as a way for suppliers to boost falling rev streams – the rev streams are falling due to massive budget cuts, so there simply is less money to go round.
    Of course this has to happen a lot quicker than people think, because the rise of the object storage platforms has started – and for sure that will replace much of the underlying storage that we worry about today.
    Oh and did I mention that I loathe the word virtualisation? It’s a storage network – it’s supposed to be connected, logical and dynamic… ‘Abstracted storage’ feels so much better; just as we have an FR/NFR relationship with our application architects, shouldn’t we be having an FR/NFR relationship with our persistency layers? Now if only we could standardise on the FR/NFR language for storage persistency in a meaningful way…

  4. Tony Asaro says:

    Nice – that should be a blog entry.
