I studiously avoided blogging about the EMC World announcements last week; I wasn't there, and there was enough verbiage from those who were attending. Chad appeared to have a never-ending stream of blog entries, all of which I suspect were prepared in advance to cover the various announcements. But it was this one about Project Lightning which especially caught the eye:
Then, we vMotioned a bandwidth-constrained workload to a vSphere cluster which was running co-resident on the same hardware running Isilon, increasing the amount of bandwidth dramatically. Yes, this idea (vSphere running on the arrays) does indeed exist within the walls of EMC, as does vSphere running co-resident on VMAX hardware. If you think about a big Isilon cluster, with 100+ nodes of Intel x86 based power, or a future generation VMAX with 16+ similarly Intel-powered storage engines, it makes all the sense in the world – particularly for workloads where bandwidth and the parameters of the dataset make it easier to move the compute closer to the data rather than the other way around.
So EMC have vSphere running on Isilon nodes and on VMAX; I suspect we could also add VNX to that list. And I completely agree with Chad that it does make all the sense in the world to do so, but is it a good idea?
Now the techie in me says yes; IBM have had the unused and unexplored capability to run AIX/Linux workloads on the DS8k for some time, and it has always bugged me that they have never leveraged it. There are simply some workloads that you might want to run as close to the storage as possible.
But there is another part of me which says no! This is not the techie but the person who cares about the complex ecosystem that has built up around VMware. VMware has thrived because of the support of various other companies, and those companies have grown with VMware's support in turn; companies such as NetApp have been part of VMware's journey to dominance in the server virtualisation marketplace.
The server companies have also embraced VMware and, as VMware was not competing at a hardware level, it was allowed to become the de facto standard.
If EMC utilise VMware to give their storage platforms a serious competitive advantage and a unique capability, do we risk a splintering of the server virtualisation marketplace? Vendors such as IBM, HP and Dell might well look at producing their own hypervisors to allow workloads to run in their arrays. As I say, IBM already have the capability, but only for Power-based workloads; could IBM leverage their technical expertise in hypervisor technologies to build an x86-based hypervisor, allowing them to run workloads on the V7000, SONAS, SVC and even XIV?
And of course, if EMC start to muscle in on the server market as well, where does that leave VCE? Will EMC even need Cisco at that point? I've said to people before that Cisco need EMC more than EMC need Cisco.
As I say, technically a good idea…but I would suggest the jury is out on whether it is a great idea for EMC/VMware in the long term.
I assume someone in IBM has already built an x86 AS/400-style “application server”, if only as a proof of concept, but I think the time isn't right for this kind of fixed-block concept.
But I find the idea of a storage platform spread across hundreds of servers, each one running a virtual storage appliance with a scale-out filesystem alongside local computing power running the application workloads, to be compelling; it is probably a lot closer to a modern cloud design than the V-MAX concept.
Ewan,
I've been wondering if that was a possible Isilon scenario: run OneFS in a virtual appliance, or even as a pluggable file-system for vSphere. That would make a lot of sense. We shall see.
I also assume that IBM have at least one x86 hypervisor of their own running in the labs…
Martin