Virtual Openness
I don’t always agree with Trevor Pott, but his piece on ServerSAN, VSAN and storage acceleration is spot on. The question about VSAN running in the kernel and the advantages that supposedly brings in performance (and, indeed, I’ve heard similar claims about reliability, support and the like) over competing products is one that has left me scratching my head and feeling very irritated.
If running VSAN in the kernel is so much better, and it almost feels as though it should be, it raises another question: perhaps I would be better off running all my workloads on bare-metal, or as close to it as I can get.
Or perhaps VMware need to allow a lot more access to the kernel, or provide a pluggable architecture that allows various infrastructure services to run at that level. There are a number of vendors who would welcome that move, and it might actually hasten the adoption of VMware yet further, or at least remove some of the more entrenched resistance around it.
I do hope more competition in the virtualisation space will bring more openness to the VMware hypervisor stack.
And it does seem that we are moving towards data-centres which host competing virtualisation technologies, so it would be good if, at a certain level, these became more infrastructure agnostic. From a purely selfish point of view, it would be good to have the same technology present storage to VMware, Hyper-V, KVM and anything else.
I would like to share data easily between systems that run on different technologies and hypervisors; if I use VSAN, I can’t do this without layering some other technology on top.
Perhaps VMware don’t really want me to have more than one hypervisor in my data-centre, in the same way that EMC would prefer that all my storage came from them…but EMC have begun to learn to live with reality, and perhaps they need to encourage VMware to live in the real world as well. I certainly have use-cases that utilise bare-metal for some specific tasks, but that data does find its way into virtualised environments.
Speedy Storage
There are many products that promise to speed up your centralised storage, and they work very well, especially in simple use-cases. Trevor calls this Centralised Storage Acceleration (CSA); some are software products, some come with hardware devices and some are a mixture of both.
They can have a significant impact on the performance of your workloads; databases especially can benefit from them (although most databases benefit more from decent DBAs and developers); they are a quick fix for many performance issues and remove the bottleneck that is spinning rust.
But as soon as you start to add complexity (clustering, availability, anything beyond basic write-cache functionality) they stop being a quick fix and become yet another system to go wrong and to manage.
Fairly soon, that CSA becomes something a lot closer to a ServerSAN, and you are sticking it in front of your expensive SAN infrastructure.
The one place where a CSA becomes interesting is as Cloud Storage Acceleration: a small amount of flash storage in the server, with the bulk of the data sitting in a cloud of some sort.
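To make that concrete, here is a minimal sketch of the “basic write-cache” idea that sits underneath most CSA products: a slab of server-local flash acting as an LRU read cache with write-through to the slow, centralised (or cloud) tier. The class names and the simple block interface are my own invention for illustration; real products work at the block or filesystem layer and add the persistence, clustering and failover that, as noted above, is exactly where the trouble starts.

```python
# A minimal, single-node sketch of CSA-style caching (illustrative only).
from collections import OrderedDict


class BackingStore:
    """Stands in for the slow, centralised SAN/NAS or cloud tier."""

    def __init__(self):
        self._blocks = {}

    def read(self, block_id):
        return self._blocks.get(block_id)

    def write(self, block_id, data):
        self._blocks[block_id] = data


class LocalFlashCache:
    """Server-local flash acting as an LRU read cache with write-through.

    Single node only: the moment the cache must be shared or highly
    available across hosts, you are building a distributed system.
    """

    def __init__(self, backend, capacity=1024):
        self._backend = backend
        self._capacity = capacity
        self._cache = OrderedDict()  # LRU order: oldest entry first

    def read(self, block_id):
        if block_id in self._cache:          # hit: serve from local flash
            self._cache.move_to_end(block_id)
            return self._cache[block_id]
        data = self._backend.read(block_id)  # miss: go to the slow tier
        if data is not None:
            self._insert(block_id, data)
        return data

    def write(self, block_id, data):
        self._backend.write(block_id, data)  # write-through keeps the
        self._insert(block_id, data)         # backend authoritative

    def _insert(self, block_id, data):
        self._cache[block_id] = data
        self._cache.move_to_end(block_id)
        if len(self._cache) > self._capacity:
            self._cache.popitem(last=False)  # evict least recently used
```

The write-through choice is what keeps this a “quick fix”: the backend always holds the truth, so losing the cache loses nothing. Switch to write-back for better write latency and you immediately inherit the consistency and availability problems described above.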
So what is going on?
It is unusual to have so many competing deployment models for a piece of infrastructure, yet in storage the number keeps increasing:
- Centralised Storage – the traditional NAS and SAN devices
- Direct Attached Storage – Local disk with the application layer doing all the replication and other data management services
- Distributed Storage – ServerSAN; think VSAN and its competitors
And we can layer an acceleration infrastructure on top of any of those; it could be local to the server or perhaps an appliance sitting in the ‘network’.
All of these have their use-cases, and the answer may well be that, to run a ‘large’ infrastructure, you need a mixture of them all.
Storage was supposed to get simple; we were supposed to focus on the data and on providing data services. I think people forgot that just calling something a service doesn’t make it simple or make the problems go away.