A lot of posts and talks from people involved with VMware, especially once the conversation turns to the Private Cloud, talk about 100% virtualised data centres. And there are always the nay-sayers like me who point out that there are niche applications which currently can't be virtualised: applications which run specialist hardware and applications which have real-time requirements; in my world of Broadcast Media, these are often one and the same.
But there are a whole bunch of other applications, often niche and often from small vendors, which can't be virtualised for no other reason than that the vendor says they can't. And the reason? It's not been tested. These applications often have very restrictive hardware requirements, dictated largely by the vendor's ability to test against multiple hardware variants, and VMware (like other virtualisation technologies) is really just another hardware variant. I have a whole bunch of these where people swear blind that they can't be virtualised; I don't believe them.
So I'm going to have a go. Fortunately, as well as starting to build a new storage team, I have another job which involves running a test and integration department, so I already have the test cases for a lot of these applications built. It should just be a case of opportunistically running those tests against a non-virtualised environment and a virtualised one and comparing the results; a rough sketch of what that might look like follows below. It's going to be a case of fitting it in when we can, but we've managed to scrounge some fairly meaty hardware to build our new virtual environment on.
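The comparison itself doesn't need to be clever. Here's a minimal Python sketch of the kind of harness I have in mind; the host names, the run_suite.sh script and its JSON output format are all hypothetical placeholders, not anything we've actually built:

```python
#!/usr/bin/env python3
"""Run the same test suite on a physical and a virtual host, then diff results.

Hypothetical sketch: the host names, the remote run_suite.sh script and its
JSON summary format are assumptions for illustration, not a real deployment.
"""
import json
import subprocess

HOSTS = {
    "physical": "app-phys-01",  # bare-metal reference box (hypothetical name)
    "virtual": "app-vm-01",     # same app inside a VMware guest (hypothetical)
}


def run_suite(host: str) -> dict:
    """SSH to the host, run the suite, and parse its JSON summary.

    Assumes run_suite.sh prints something like
    {"passed": 98, "failed": 2, "wall_seconds": 412.5} on stdout.
    """
    out = subprocess.run(
        ["ssh", host, "./run_suite.sh", "--json"],
        capture_output=True, text=True, check=True,
    )
    return json.loads(out.stdout)


def main() -> None:
    results = {label: run_suite(host) for label, host in HOSTS.items()}
    phys, virt = results["physical"], results["virtual"]

    # Functional differences matter more than raw speed: a test that passes
    # on tin but fails in the VM is exactly what we're hunting for.
    print(f"pass/fail physical: {phys['passed']}/{phys['failed']}")
    print(f"pass/fail virtual:  {virt['passed']}/{virt['failed']}")

    # The runtime delta gives a crude feel for any 'virtualisation tax'.
    overhead = (virt["wall_seconds"] / phys["wall_seconds"] - 1) * 100
    print(f"runtime overhead in the VM: {overhead:+.1f}%")


if __name__ == "__main__":
    main()
```

Real-time behaviour obviously needs more than a wall-clock number, but for the "the vendor says no" class of application, pass/fail parity between tin and VM is most of the argument.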
I still don't think you can virtualise everything, especially in an environment with specialist requirements. In the same way it would be very hard for some environments to get rid of their mainframes, it will be hard for some environments to get rid of all the non-virtualised kit and replace all their non-x86 hardware with x86. But with some work, we might be able to get rid of more than we can today.
One thing I’ve never understood about niche or small-shop software publishers is their reluctance to embrace server virtualization. You would think they would be pushing customers towards virtual, hardware-independent platforms in an effort to reduce their own development and support costs.
Actually, many niche software houses in Broadcast also bundle ‘special’ hardware with their products. They don’t want people to be able to just run on anything! Of course, when you peel back the labels, you might find that it’s generic white-box kit.
Personally, I agree with you; targeting a virtualised environment would make a lot of sense…
The point you raise is valid even where no hardware is involved; I’ve heard many opinions that a critical database is still better off on a dedicated box. But back to hardware – in a lot of cases it would be simpler for the software vendor to just certify against VMware and let VMware take care of the never-ending task of updating and testing against the ongoing stream of hardware and driver releases. Sort of the flip side of having to stay non-virtualized due to a specialized environment. I know this has been the path forward for many legacy canned environments, such as those built around SCO OpenServer. Plus it gains access to new options such as SAN storage (I work for HDS).
As the horsepower and memory of x86 machines increase, the ‘virtualization tax’ becomes increasingly irrelevant, and the compelling value of (at least) encapsulation will win out (whatever the hypervisor).
In the meantime, you’re right – and oversimplifying the problem statement isn’t helping understanding between IT & user communities.
As for adoption barriers – they have more to do with changing operating practices than with technology, because you cannot run a virtualized data center the way you run a physical environment.