Okay, I think I know the answer to this but here's a quick question to those of you reading who are using in-band virtualisation (or any of the virtualisation tools). If I virtualise my estate using something like a USP-V or SVC, when it comes time to migrate/upgrade due to accounting reasons, support issues etc., is this disruptive? Will HDS enable me to go to USP-V++ with no disruption? Will SVC? Will Invista?
At the moment, array migrations are a pain but they aren't actually very disruptive in a lot of our estate. Bring the new LUNs in, use the volume manager to move the data, drop the old LUNs, and maybe a reboot at some point to completely remove them. We can schedule the reboot around our normal maintenance windows, so it's not a big hassle; it is time-consuming though, as we recently consolidated 10 arrays down to 2 and it took about three months.
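For the Linux hosts, the flow is basically the standard LVM dance; a rough sketch follows (the device names and volume group are made up for illustration, and the details will vary with your volume manager):

```python
# Rough sketch of the volume-manager migration flow on a Linux host.
# Device names and the volume group are purely illustrative.
import subprocess

OLD_LUN = "/dev/mapper/old_array_lun1"   # LUN on the array being retired
NEW_LUN = "/dev/mapper/new_array_lun1"   # replacement LUN on the new array
VG = "datavg"                            # volume group holding the data

def run(*cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# 1. Bring the new LUN into the volume group.
run("pvcreate", NEW_LUN)
run("vgextend", VG, NEW_LUN)

# 2. Let the volume manager move the extents while applications stay online.
run("pvmove", OLD_LUN, NEW_LUN)

# 3. Drop the old LUN from the volume group and from LVM.
run("vgreduce", VG, OLD_LUN)
run("pvremove", OLD_LUN)

# The final unmap/unzone of the old LUN (and the tidy-up reboot) still
# happens by hand in a maintenance window.
```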
How does this work when I'm moving between virtualisation devices?
What would the flow be?
Do I need swing disk to achieve this?
Can a virtualised LUN be presented to two different virtualisation appliances at different levels so that I can do a path at a time?
Once I've finished virtualising my estate, do I need to start planning to migrate to the next iteration of the virtualisation device?
We already artificially constrain the size of our arrays for a variety of reasons; one of those is the sheer terror of having to migrate a petabyte of data at array refresh time. If I've got all my storage virtualised, I could be looking at migrating multiple petabytes at virtualisation device refresh time.
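To put a rough number on that terror (the copy rate below is just an assumed figure for illustration; plug in whatever your kit actually sustains):

```python
# Back-of-an-envelope migration time; both numbers are assumptions.
petabytes = 2                # data sitting behind the virtualisation device
rate_gb_per_sec = 1.0        # assumed sustained copy throughput
seconds = petabytes * 1_000_000 / rate_gb_per_sec
print(f"{seconds / 86_400:.0f} days of copying")   # roughly 23 days
```

And that's flat-out, with no throttling to protect production I/O and no re-runs.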
I am intrigued as to how this is all going to be handled. Do I virtualise the virtualisation device? Arrrrrghhh…my brain hurts!! So Tony A, Barry W? How does it work in your world?
I am not sure specifically about the SVC or Invista, but I do know that HDS promises that the next-generation USP-V will be a simple migration from the current USP-V: You dual-attach the LUNs, move them over, and retire the old one. I saw a presentation on just this topic on Tuesday!
I imagine that SVC clusters can be upgraded similarly, by adding new SVC nodes and pulling out the old ones.
EMC – Chuck? Barry? Mark?
Lots of the same questions in this area (as I frankly don't subscribe to the virtualised storage game as delivered today: premise yes, delivery no). Fancy doing a joint vendor briefing / questioning?
Certainly do, and I'm very much in agreement with you; the premise is interesting but the pudding doesn't yet appear to be fully cooked. I'll let someone else go through the refresh cycle first; actually, some SVC customers must be at the refresh stage by now.
SVC supports both concurrent software and hardware updates. I’ve done both. Since the SVC consists of node pairs, you update one of the nodes in the pair, then the other.
For example, I assisted a customer in upgrading his first generation 2145-4F2 SVC nodes to the current 2145-8G4 nodes. We did the replacement on-line with applications still accessing storage.
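In toy form, the sequence per node pair looks like this (purely illustrative Python, not the actual SVC command set):

```python
# Illustrative sketch of replacing one SVC node pair (I/O group) while
# applications keep running. These objects are toy stand-ins, not the
# real SVC commands; the point is the one-node-at-a-time sequence.
from dataclasses import dataclass

@dataclass
class Node:
    name: str
    generation: str

io_group = [Node("node1", "2145-4F2"), Node("node2", "2145-4F2")]
replacements = [Node("node1-new", "2145-8G4"), Node("node2-new", "2145-8G4")]

for i, old in enumerate(io_group):
    # 1. Take one node out of the cluster; hosts carry on doing I/O
    #    through the partner node via their multipathing drivers.
    print(f"removing {old.name} ({old.generation})")

    # 2. Swap the hardware and let the new node join the cluster.
    new = replacements[i]
    print(f"adding {new.name} ({new.generation}) and waiting for it to come online")
    io_group[i] = new

    # 3. Only once this node is back and healthy do we touch its partner.
```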
(Appropriate disclaimer, I am an IBM Storage employee)
How quick is it? And how automagic can you make it, i.e. transferring the config from one SVC node to the other? How much damn work is it for the poor storage admins?
http://storagearchitect.blogspot.com/2008/10/replacing-virtualisation-component-ii.html
All of the nodes in the SVC cluster contain the full configuration. When a new node joins the cluster (even as a hardware replacement), it downloads the current level of cluster firmware and the configuration from the other nodes in the cluster.
The key to making it work correctly is to make absolutely certain that the multipathing drivers in the host OSes are properly configured. I've found that process to be far more time-consuming than the actual hardware or software upgrades.
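A trivial pre-flight check saves a lot of grief here; something like the following for Linux dm-multipath hosts. The string matching is an assumption about what `multipath -ll` reports on a given host, so adjust it (or the whole check) to your multipathing driver of choice:

```python
# Quick pre-flight check that paths are healthy before any node work.
# Assumes Linux dm-multipath; the "active ready" / "failed faulty" string
# matches are assumptions about this host's `multipath -ll` output, so
# adapt them to whatever your multipathing driver actually reports.
import subprocess

out = subprocess.run(["multipath", "-ll"], capture_output=True, text=True).stdout
healthy = out.count("active ready")
failed = out.count("failed faulty")
print(f"{healthy} healthy paths, {failed} failed paths")
if failed or not healthy:
    raise SystemExit("fix the multipathing before touching any nodes")
```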
The other key thing to mention is that we re-use the old WWNN and WWPNs with the new hardware, so NO switch zone changes are needed. A single node can be replaced in a little over 5 minutes (most of that time is the physical rack removal and replacement).
Even if HDS promise dual-active USP-V / USP-V+ access, without the ability to also re-use the old WWNN you will still have to disrupt your applications, unless they can solve this problem as we have.
We also support mixtures of SVC hardware in a single cluster, so if you only need to upgrade one pair (the other pairs still performing well) then you can upgrade piecemeal – although most customers upgrade all their nodes in the course of a day.
One customer I supported in this effort went from ~80% node utilisation to ~20% after the upgrade from gen1 to gen4 nodes.
Once you are virtual, there can be no excuse for causing outages; HDS are getting beaten up over this, hence their obvious need to tackle it in the next-gen USP.
Thanks to all the SVC guys for their answers; you've obviously thought this through. I'm disappointed that no-one from HDS has popped up; I know from the IP ranges that there are some HDS peeps who read this.
And I know the guys from BlueArc keep an eye on it too; so how about it, guys?
OK, OK, OK…HDS has been following this post with interest; our reluctance to chime in is based more on the fact that new technology and solutions are NDA material. We obviously continue to innovate, and this is an area of major concern for us. Rather than give detail on future products in a public forum, I would invite anyone to get an NDA briefing from HDS on the plans that have already been hinted at in this forum.