My turn to rag on XIV; this time I'm not going to rag on the architectural issues related to availability. Instead, I want to pick up on something from their marketing blurb, and I quote:
Pardon me for taking exception to this. Capacity? In what world does XIV scale in capacity? It scales to 79TB usable! Okay, you could fit 2TB drives and roughly double that, but realistically how useful is this? Many enterprises are going to need tens of XIV arrays to meet their requirements.
Of course, XIV is really easy to manage. It's going to need to be, because your admins are going to be busy enough managing that many arrays. Still, I can think of some plus points: the array is so small that secure multi-tenancy never becomes an issue; if you are a hosting provider, simply provision an array per user!
But array size is important; as companies deal with larger and larger data sets, they need larger arrays. Limited capacity causes all kinds of problems and makes processes like upgrades, data refreshes and migrations more complex.
Even migrating some of our 'smaller' arrays would require migrating to multiple XIVs. Even if thin provisioning were to gain us 40-50% improvements in capacity utilisation, we would be filling the XIV to the gills, leaving no room for growth.
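To make the arithmetic concrete, here is a rough back-of-the-envelope sketch. The 79TB usable figure is from the post; the 500TB dataset size and the thin-provisioning saving are purely hypothetical numbers for illustration.

```python
import math

XIV_USABLE_TB = 79  # usable capacity of a single XIV frame, per the figure above

def arrays_needed(dataset_tb: float, thin_saving: float = 0.0) -> int:
    """Arrays required to hold a dataset, optionally assuming thin
    provisioning reclaims some fraction of the allocated capacity."""
    effective_tb = dataset_tb * (1.0 - thin_saving)
    return math.ceil(effective_tb / XIV_USABLE_TB)

# A hypothetical 500TB estate: even granting a generous 50% thin-provisioning
# saving, you still need several arrays, each filled close to the gills.
print(arrays_needed(500))       # no thin provisioning
print(arrays_needed(500, 0.5))  # 50% saving
```

Even in the optimistic case, the arrays left over have little headroom for growth, which is the point being made above.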
IBM have so many issues with XIV at present; some might be perception as opposed to reality. I think it's time for them to have a very good look at the architecture and its limitations. IBM have said in the past that there is no reason why XIV cannot scale larger, but at the moment it seems that the strategy for scaling is to stick bigger spindles in the array.
So I have to ask: will XIV GEN3 and GEN4 scale the number of spindles and nodes? If not, why not? Is it
1) Customer demand, i.e. no-one wants a bigger XIV
2) Architectural issues, i.e. it can't scale any further