

Don’t shoot the Messenger’s Friends….

Word has reached me that EMC Marketing may not be reacting so well to my previous post; yet if truth be told, I actually toned down what I really wanted to write because I wanted to ensure that people I like and have time for didn’t catch so much flak. Although I speak only for myself, I know that I am not the only person who feels this way about the current strain of EMC Marketing.

What I found so disappointing about the Mega-Launch is that within the hype and general hullabaloo there were some interesting pearls, but they got lost.

The rewrite of the VNX2 code is long overdue and, from what I can see, gives EMC a solid base for their mid-range offering; it should allow them to support their current user-base whilst they work out how to move them into the scale-out world.

It will allow them to take advantage of the tick-tock releases from Intel, and if they have done serious work on the RAID code, it would surprise me if they haven’t at least enabled the possibility of a different data-protection method in the future; for example, a micro-RAID scheme that distributes protection across all disks and improves rebuild times.
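A rough back-of-the-envelope sketch of why distributing rebuild work matters; all the figures below are assumptions for the sake of illustration, not anything EMC have published:

```python
# Back-of-the-envelope sketch: traditional RAID rebuild vs a distributed
# ("micro-RAID") rebuild. All figures are illustrative assumptions.

DRIVE_TB = 4                  # capacity of the failed drive (TB)
SPARE_WRITE_MBS = 100         # sustained write rate of a single hot spare (MB/s)
POOL_DRIVES = 100             # drives sharing the rebuild in a distributed scheme
PER_DRIVE_REBUILD_MBS = 20    # rebuild bandwidth each pool drive can donate (MB/s)

drive_mb = DRIVE_TB * 1_000_000

# Traditional RAID: the whole rebuild funnels into one spare drive.
traditional_hours = drive_mb / SPARE_WRITE_MBS / 3600

# Distributed scheme: rebuild writes are spread across many drives at once.
distributed_hours = drive_mb / (POOL_DRIVES * PER_DRIVE_REBUILD_MBS) / 3600

print(f"Traditional rebuild: ~{traditional_hours:.1f} hours")
print(f"Distributed rebuild: ~{distributed_hours:.1f} hours")
```

With those made-up numbers, the single-spare rebuild takes the thick end of half a day while the distributed one finishes in well under an hour; which is exactly why rebuild windows are the thing to watch as drives get bigger.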

Moving to a more modular software architecture has to be sensible and should allow them to transition to a software-only virtual array in the future.

If they’d talked about such things instead of concentrating on the hype, putting the VNX2 into a context of future innovation…that would have been far more interesting.

Of course, EMC continue to talk very fast about VNX being a unified platform when in reality we know it’s not really…not in the way that NetApp’s is. That’s fine, but it still grates that Marketing smoke and mirrors are involved.

But the VNX2 announcement is not without problems either; can I take an existing VNX and migrate non-disruptively to this new code? Do I have to take services and products such as VPLEX to enable this?

And then there was the ViPR GA announcement; much more detail and context could have been put around this, especially when aligned with the Nile ‘product’. I can see the beginnings of a platform strategy emerging, and an interesting one at that. I’d be interested to know how some of their partners’ products fit into the picture; companies such as Panzura, for example?

Yet where are the blogs, the context-setting for these announcements? This side of EMC ‘marketing’ has sadly vanished, only to be replaced by glitz. I think if the announcements had been accompanied by blogs and commentary more akin to Storagezilla’s, much could have been forgiven and the announcement could have been put to one side as the carnival it was.

It is sad that I miss Chuck’s take on these announcements; I know that Chuck was a real drum-beater for EMC, but there would have been technical content and interesting pearls in his blog. These days, it seems that the real detail has to be obtained face-to-face, where most of the crap can be cut through.

So with a VMAX announcement probably due next year, probably at EMC World…I would hope for a more considered and balanced approach, but I shan’t be holding my breath. Breathless seems to be the current EMC Marketing approach.

EMC have some good products, some great products and some products with serious challenges…I know from my day-to-day dealings with EMC that some people there are really trying to shift the culture and convince customers that they are different.

Today’s Megalaunch leads me to question that.

 

Speed2Pb is Heavy Going…

EMC Marketing have done it again and managed to turn what might be an interesting refresh of a product into something that just irritates me and many others.

It started off badly when they put the Sneak Preview video up and decided to have a dig at their lead competitor; then some tweets where they used the tag #NotAppy.

And then the hype cycle started to ramp up. So we got a ridiculously overblown launch with some tenuous link to F1; tyre-changing competitions and the like, which appear to be fun but just break up the presentations and destroy the flow.

EMC are just very bad at this sort of launch; speeds, feeds, marketing mumbo-jumbo linked in with videos/events which trash competitors, bore the audience and add little value. But all with the highest production values.

So what did the event feel like? It felt like an internal kick-off, an event where EMC high-five themselves and pretty much ignore the customers. This felt more like an EMC event of eons ago, along with a smattering of cheer-leading from Social Media.

There was little about the value, the use-case and what it will allow customers to do.

Death by PowerPoint; overly complex and busy slides.

And no humour…no humour! Make me laugh, make me smile!

Obviously I’m sure that it all played well to everyone else…and I’m not the target audience.

However, I think the technologies launched might be interesting; if the VNX2 code has undergone a rewrite, it’s long overdue and an achievement. It deserved better…

Such Fun…

With EMC allegedly putting the VMAX into the capacity tier and suggesting that performance demands cannot be met by the traditional SAN, are we finally beginning to look at the death of the storage array?

The storage array as a shared monolithic device came about almost directly as a result of distributed computing; the necessity for a one-to-many device was not really there when the data-centre was dominated by the mainframe. And yet, as computing has become ever more distributed, the storage array has begun to struggle more and more to keep up.

Magnetic spinning platters of rust have hardly increased in speed in a decade or more; their capacity has got ever bigger tho’; storage arrays have got denser and denser from a capacity point of view, yet real-world performance has just not kept pace. More and more cache has helped to hide some of this; SSDs have helped but to what degree?
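To put some very rough numbers on that gap, a quick sketch; the drive figures are generic assumptions for illustration, not any vendor’s spec sheet:

```python
# Rough sketch of why capacity has outrun performance for spinning disk.
# Drive figures are generic assumptions for illustration only.

drives = {
    # name: (capacity_tb, random_iops)
    "146GB 15k drive of a decade ago": (0.146, 180),
    "4TB 7.2k nearline drive of today": (4.0, 80),
}

for name, (tb, iops) in drives.items():
    print(f"{name}: ~{iops / tb:.0f} random IOPS per TB")
```

Per terabyte stored, the random-access performance of the spinning disk has fallen by well over an order of magnitude; that is the gap which cache and SSD tiers are being asked to paper over.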

It also has not helped that the plumbing for most SANs is Fibre Channel; esoteric, expensive and ornery, it does the image of the storage array no favours.

Throw in ever-increasing compute power and the incessant demands for more data processing, coupled with an attitude to data-hoarding at a corporate scale which would make even the most OCD amongst us look relatively normal.

And add the potential for storage arrays to become less reliable and more vulnerable to real data loss as RAID becomes less and less of a viable data-protection methodology at scale.
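The usual way to illustrate why RAID struggles at scale is the chance of hitting an unrecoverable read error (URE) during the rebuild of a large RAID-5 group; the figures below are commonly quoted assumptions rather than any particular drive’s datasheet:

```python
# Sketch: probability of completing a RAID-5 rebuild without hitting an
# unrecoverable read error (URE). Figures are commonly quoted assumptions,
# not a specific drive's datasheet.
import math

URE_PER_BIT = 1e-14        # often quoted for nearline/consumer drives
DRIVE_TB = 4
SURVIVING_DRIVES = 7       # e.g. a 7+1 RAID-5 group after one failure

bits_to_read = SURVIVING_DRIVES * DRIVE_TB * 1e12 * 8

# For a tiny per-bit error rate, (1 - p)^n is well approximated by exp(-p*n).
p_clean_rebuild = math.exp(-URE_PER_BIT * bits_to_read)

print(f"Bits read during rebuild: {bits_to_read:.2e}")
print(f"Chance of a clean rebuild: {p_clean_rebuild:.0%}")
```

On those assumptions you have roughly a one-in-ten chance of getting through the rebuild cleanly, which is not a number anyone wants to bet their data on; hence the interest in alternative data-protection schemes.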

Cost and complexity with a sense of unease about the future means that storage must change. So what are we seeing?

A rebirth in DAS? Or perhaps simply a new iteration of DAS?

From Pernix to ScaleIO to clustered file-systems such as GPFS, the heart of the new DAS is the Shared-Nothing Cluster. Ex-Fusion-io’s David Flynn appears to be doing something to pool storage attached to servers; you can bet that there will be a flash part to all this.

We are going to have a multitude of products; interoperability issues like never before, implementation and management headaches…do you implement one of these products or many? What happens if you have to move data around between these various implementations? Will they present as a file-system today? Are they looking to replace current file-systems; I know many sys-admins who will cry if you try to take VxFS away from them.

What does data protection look like? I must say that the XIV data-protection methods which were scorned by many (me included) look very prescient at the moment (still no software XIV tho’? What gives, IBM…).

And then there is the application-specific nature of much of this storage; so many start-ups are focused on VMware and providing storage in clever ways to vSphere…when VMware’s storage roadmap looks so rich and so clearly aimed at taking that market, is this wise?

The noise and clamour from the small and often quite frankly under-funded start-ups is becoming deafening…and I’ve yet to see a compelling product which I’d back my business on. The whole thing feels very much like the early days of the storage-array; it’s kind of fun really.

You Will be Assimilated.

So why are the small Flash vendors innovating and the big boys not? Why are they leaving them for dust? And do the big boys care?

Innovation in large companies is very hard; you have all the weight of history pressing down on you and few large companies are set up to allow their staff to really innovate. Even Google’s famous 20% time has probably not borne the fruit that one would expect.

Yet innovation does happen in large companies; they all spend a fortune on R&D; unfortunately most of that tends to go on making existing products better rather than coming up with new ones.

Even when a new concept threatens to produce a new product; getting an existing sales-force to sell a new product…well, why would they? Why would I as a big-tin sales-droid try and push a new concept to my existing customer base? They probably don’t even want to talk about something new; it’s all about the incremental business.

I have seen plenty of concepts squashed which then pop up in new start-ups having totally failed to gain traction in the large company.

And then there are those genuinely new ideas that the large vendor has a go at implementing themselves; often with no intention of releasing their own product, they are just testing the validity of the concept.

Of course, then there is the angel funding that many larger vendors quietly carry out; if you follow the money it is not uncommon to find a large name sitting somewhere in the background.

So do the big boys really care about the innovation being driven by start-ups? I really don’t think so. Get someone else to take the risk and pick up the ones which succeed at a later date.

Acquisition is a perfectly valid R&D and Innovation strategy. Once these smaller players start really taking chunks of revenue from the big boys…well, it’s a founder with real principles who won’t take a large exit.

Of course, seeing new companies IPO is cool but it’s rarely the end of the story.

 

 

Excessive Sorrow Laughs….

So you’ve managed to get yourself a job in storage? Commiserations; why did you do something so crazy? I hope you enjoy pain and misery, because this is now your world. Perhaps if you do a great job, you’ll come back as something higher up the food chain, such as desktop support.

Anyway, here are some hints and tips:

1) You will never have the right amount and type of storage, but it is probably better to have too much than too little. Applications fall over when they run out of storage, and adding new storage into an environment at a run is never a great deal of fun. Don’t believe vendors when they tell you that it is non-disruptive; even if it is non-disruptive technically, it will always be disruptive in ways that you do not expect.

Learn to be economical with the truth about what you actually have available; keep something in your back-pocket for a rainy day but don’t save the day too often. Miracles need to be the exception as opposed to the rule.

2) Related to this is the fact that no end-user has any actual idea of how much storage they will use. They will glaze over when you start talking about terabytes, gigabytes, exabytes; my recent experience is that they under-estimate, but this is probably a factor of the sector I’m in.

3) Every best practice document appears to have been written by someone who has shares in a storage company. This is especially true for databases; you have various options…

  • smile and allocate what they ask for
  • smile and tell them that you’ve allocated what they’ve asked for
  • frown and have an argument
I’ve been around for long enough to know that the last option may be the most tempting but it only leads to pain.

4) Non-disruptive upgrades are rarely so; read the small print as to what non-disruptive means. Code upgrades will always result in more work for every other team than for the Storage team, as they struggle to bring their environments up to scratch to meet the crazed requirements of your chosen storage vendors.

5) Fibre-channel is not a standard; it is a vague suggestion of how things should work. Hence point 4)! But Fibre-channel scares the crap out of people; start waffling on about FLOGIs and you can get away with murder. (Serious hint: don’t mix up WWPNs and WWNNs…understand the difference, please! There’s a small sketch after this list.)

6) Of course you will be tempted to head down the NAS route; whatever you do, don’t mix NFS and SMB shares…every vendor claims that they have a solution to the inherent problems with the mixed security model. They don’t! It breaks in subtle ways, and never underestimate the power of a Mac user to put some very strange things in a filename.

7) ‘But I can buy a 1TB USB disk in PC World for £50’; learn to tune this statement out or you will be committed or jailed.

8) Everyone can do your job better than you can…until it goes wrong. In your darkest hours, remember point 4); there is nothing more joyful than realising that a single storage upgrade can mean many hours of disrupted lives for every other team.

9) There is always a better technology; you just didn’t buy it. Don’t worry about it; what you’ve got will do most of what you want, probably most of the time. This is why the same sales-guy who sold you NetApp will later turn up selling you EMC; they aren’t clever enough to understand the subtle differences between technologies…so basically, they are selling you a different badge.

10) Storage is actually quite easy…everything which surrounds it is hard…

11) Learn to laugh…
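On the WWNN/WWPN point in hint 5, here is a minimal sketch of the relationship; the identifiers below are invented for illustration and are not real devices. A node (an HBA or an array controller) has one World Wide Node Name, each of its ports has its own World Wide Port Name, and it is the WWPNs that you zone and mask against.

```python
# Minimal sketch of the WWNN/WWPN relationship. All identifiers are invented
# for illustration; they are not real devices.
from dataclasses import dataclass, field

@dataclass
class FcNode:
    wwnn: str                                        # one Node Name per HBA/array
    wwpns: list[str] = field(default_factory=list)   # one Port Name per port

host_hba = FcNode(
    wwnn="20:00:00:25:b5:aa:00:01",
    wwpns=["20:00:00:25:b5:aa:00:02",   # port 0
           "20:00:00:25:b5:aa:00:03"],  # port 1
)

# Zoning and LUN masking are done against the port names, not the node name.
print("Zone these WWPNs:", host_hba.wwpns)
print("Do NOT zone the WWNN:", host_hba.wwnn)
```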

Keep On Syncing…Safely…

Edward Snowden’s revelations about the activities of the various Western security organisations have been both no surprise and yet also a wake-up call to how the landscape of our own personal data security has changed. Multiple devices and increased mobility have meant that we have looked for ways to ensure that we have access to our data wherever and whenever; gone are the days when even the average household has a single computing device, and it is increasingly uncommon to find a homogeneous household in terms of manufacturer or operating system. It is now fairly common to find Windows, OSX, Android, iOS and even Linux devices all within a single house; throw in digital cameras and smart-TVs and it is no wonder that sharing data in a secure fashion is more and more complex for the average person.

So file-syncing and sharing products such as Dropbox, Box, SkyDrive and GoogleDrive are pretty much inevitable consequences, and if you are anything like me, you have a selection of these, some free and some paid for; but pretty much all of them are insecure, some terribly so. Of course it would be nice if the operating system manufacturers could agree on a standard which included encryption of data in-flight and at rest with a simple and easy-to-use key-sharing mechanism. Even then we would probably not trust it, but it might at least provide an initial level of defence.

I have started to look at ways of adding encryption to the various cloud services I use; in the past, I made fairly heavy use of TrueCrypt, but it is not especially seamless and can be clunky. However, this is becoming more feasible as apps such as Cryptonite and DiskDecipher appear for mobile devices. Recently I started to play with BoxCryptor and EncFS; BoxCryptor seems nice and easy to use, certainly on the desktop. It supports multiple Cloud providers, although the free version only supports a single one; if you want to encrypt your multiple cloud stores, you will have to pay. There are alternatives such as Cloudfogger, but development for BoxCryptor seems to be ongoing.

And then there is the option of building your own ‘Sync and Share’ service; Transporter recently kickstarted successfully and looks good; Plug is in the process of kickstarting. Synology devices have Cloud Station; QNAP have myQNAPcloud. You can go totally build-your-own and use ownCloud. In the Enterprise, you have a multitude of options as well.

But there is one thing: you do not need to store your stuff in the Cloud in an insecure manner. You have lots of options now, from keeping it local to using a Cloud service provider; encryption is still not as user-friendly as it could be, but it has got easier. You can protect your data; you probably should…

The Landscape Is Changing

As the announcements and acquisitions which fall into the realms of Software Defined Storage (or Storage, as I like to call it) continue to come, one starts to ponder how this is all going to work, and work practically.

I think it is extremely important to remember that, firstly, you are going to need hardware to run this software on, and although this is trending towards a commodity model, there are going to be subtle differences that need accounting for. And as we move down this track, there is going to be a real focus on understanding workloads and the impact of different infrastructure and infrastructure patterns on them.

I am seeing more and more products which enable DAS to work as shared-storage resource; removing the SAN from the infrastructure and reducing the complexity. I am going to argue that this does not necessarily remove complexity but it shifts it. In fact, it doesn’t remove the SAN at all; it just changes it.

It is not uncommon now to see storage vendor presentations that show Shared-Nothing-Cluster architectures in some form or another; often these are software and hardware ‘packaged’ solutions but as end-users start to demand the ability to deploy on their own hardware, this brings a whole new world of unknown behaviours into play.

Once vendors relinquish control of the underlying infrastructure; the software is going to have to be a lot more intelligent and the end-user implementation teams are going to have to start thinking more like the hardware teams in vendors.

For example, the East-West traffic models in your data-centre become even more important and here you might find yourself implementing low-latency storage networks; your new SAN is no longer a North-South model but Server-Server (East-West). This is something that the virtualisation guys have been dealing with for some time.

Understanding performance and failure domains; do you protect the local DAS with RAID or move to a distributed RAIN model? If you do something like aggregate the storage on your compute farm into one big pool, what is the impact if one node in the compute farm starts to come under load? Can it impact the performance of the whole pool?

Anyone who has worked with any kind of distributed storage model will tell you that a slow-performing node or a failing node can have impacts which far exceed what you would believe possible. At times, it can feel like the good old days of Token Ring, where a single misconfigured interface could kill the performance for everyone. Forget about the impact of a duplicate IP address; that is nothing.
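A toy illustration of the slow-node effect, with entirely made-up numbers: if every write is striped across all nodes in the pool, the pool runs at the pace of its slowest member.

```python
# Toy sketch of the "one slow node drags the pool" effect in a shared-nothing,
# wide-striped pool. All numbers are invented for illustration.

node_write_mbs = [500] * 9 + [50]   # nine healthy nodes, one degraded node

# If each write is striped across every node, the stripe completes only when
# the slowest node has finished its chunk.
striped_pool_mbs = len(node_write_mbs) * min(node_write_mbs)

# Compare with the naive expectation of simply summing node bandwidth.
naive_sum_mbs = sum(node_write_mbs)

print(f"Naive aggregate bandwidth: {naive_sum_mbs} MB/s")
print(f"Effective striped bandwidth: {striped_pool_mbs} MB/s")
```

One node running at a tenth of its normal speed doesn’t cost you a tenth of the pool; on this simple model it costs you almost ninety per cent of it.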

What is the impact of the failure of a single compute/storage node? Multiple compute/storage nodes?

In the past, this has all been handled by the storage hardware vendor, pretty much invisibly to the local Storage team at implementation phase. But you will now need to make decisions about how data is protected and understand the impact of replication.

In theory, you want your data as close to the processing as you can get it, but data has weight and persistence; it will have to move. Or do you come up with a method that, in a dynamic infrastructure, identifies where data is located and spins up or moves the compute to it?

The vendors are going to have to improve their instrumentation as well; let me tell you from experience, at the moment understanding what is going on in such environments is deep magic. Also, the software’s ability to cope with the differing capabilities and vagaries of a large-scale commodity infrastructure is going to have to be a lot more robust than it is today.

Yet I see a lot of activity from vendors, open-source and closed-source, and I see a lot of interest from the large storage consumers; this all points to a large prize to be won. But I’m expecting to see a lot of people fall by the wayside.

It’s an interesting time…

 

 

Monomyth

It really doesn’t matter which Back-Up technology you use; the myths are pretty much all the same, and unless you are aware of them, life will be more exciting for you than it should be…but perhaps that’s the point of myths…they bring excitement to a mundane existence.

1) 99% Back-Up Completion is great. I’ve been guilty of this in the past when telling people how great my back-up team is…look, 99% success rate; we’re awesome. Actually, it’s a good job that some of my customers in the past have not realised what I was saying. Depending on what has failed, I might not be able to restore a critical service yet still have a great back-up completion rate (there’s a small sketch of this at the end of the list).

2) Design Back-Up Policies. No, don’t do that; build restore policies and then work out what needs to be backed-up to restore the service.

3) Everything Needs to be Backed-Up. Closely related to the above; if you feel the need to back up an operating system several thousand times…feel free, I guess, but you’ll probably never use it to restore a system, and in these days of automated build servers, Chef, Puppet and the like, you are likely wasting your time. Yes, they can probably be de-duped, but you are putting extra load on your back-up infrastructure for no reason.

4) Replication is Back-Up. Nope, synchronous replication is not a back-up; if I delete a file, that change will be replicated in real-time to the synchronous copy. It’s gone.

5) Snapshots are a Back-Up. Only if your snapshots are kept remotely; a snapshot on the same storage device can give you a fast recovery option, but if you have lost the array or even a RAID rank, you are screwed.

6) RAID is a Back-Up. Yes, people still believe this; some people still believe that the world is flat.

7) Your Back-Up is Good. No back-up is good unless you have restored it; until you have, you potentially have no back-up.

8) Back-Up is IT’s Responsibility. No, it is a shared responsibility; it can only work well if the Business and IT work in partnership. Businesses need to work with IT to define data protection and recovery targets. IT needs to provide a service to meet these, but they do not know what your retention/legal/business objectives are.

9) Back-Up Teams are Not Important. Back-up teams are amongst the most important teams in your IT organisation. They can destroy your Business, steal your data and get access to almost any system they want…if they are smart and you are stupid!
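Going back to myth 1, a tiny sketch of why a raw completion percentage can flatter you; the services and figures are invented for illustration. The point is to weight the result by what you could actually restore, not by how many jobs ran.

```python
# Sketch: raw back-up completion rate vs "can I actually restore the service?"
# All services and figures are invented for illustration.

# 99 unimportant back-up jobs succeed; the one critical database job fails.
jobs = [(f"scratch-vm-{i:02d}", False, True) for i in range(99)]
jobs.append(("billing-database", True, False))   # (service, critical?, succeeded?)

completion_rate = sum(ok for _, _, ok in jobs) / len(jobs)
critical_ok = all(ok for _, critical, ok in jobs if critical)

print(f"Back-up completion rate: {completion_rate:.0%}")   # looks great: 99%
print(f"Critical services restorable: {critical_ok}")      # tells the truth: False
```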

 

From Servers to Service?

Should Enterprise Vendors consider becoming Service Providers? When Rich Rogers of HDS tweeted this question, my initial response was a negative one.

This got me thinking: why does everyone think that Enterprise Vendors shouldn’t become Service Providers? Is this a reasonable response or just a knee-jerk ‘get out of my space and stick to doing what you are good at’?

It is often suggested that you should not compete with your customers; if Enterprise Vendors move into the Service Provider space, they compete with some of their largest customers, the Service Providers and potentially all of their customers; the Enterprise IT departments.

But the Service Providers are already beginning to compete with the Enterprise Vendors, more and more of them are looking at moving to a commodity model and not buying everything from the Enterprise Vendors; larger IT departments are thinking the same. Some of this is due to cost but much of it is that they feel that they can do a better job of meeting their business requirements by engineering solutions internally.

If the Enterprise Vendors find themselves squeezed by this, is it really fair that they should stay in their little box and watch their revenues dwindle away? They can compete in different ways: they can compete by moving their own products to more of a commodity model, and many are already beginning to do so; or they could compete by building a Service Provider model and moving into that space.

Many of the Enterprise Vendors have substantial internal IT functions; some have large services organisations; some already play in the hosting/outsourcing space.  So why shouldn’t they move into the Service Provider space? Why not leverage the skills that they already have?

Yes, they change their business model; they will have to be careful to ensure that they compete on a level playing field and that they are not utilising their internal influence on pricing and development to drive an unfair competitive advantage. But if they feel that they can do a better job than the existing Service Providers, driving down costs and improving capability in this space…more power to them.

If an online bookstore can do it; why shouldn’t they? I don’t fear their entry into the market, history suggests that they have made a bit of a hash of it so far…but guys fill your boots.

And potentially, it improves things for us all; as the vendors try to manage their kit at scale, as they try to maintain service availability, as they try to deploy and develop an agile service; we all get to benefit from the improvements…Service Providers, Enterprise Vendors, End-Users…everyone.

 

The Reptile House

I was fortunate enough to spend an hour or so with Amitabh Srivastava of EMC; Amitabh is responsible for the Advanced Software division in EMC and one of the principal architects behind ViPR. It was an open discussion about the inspiration behind ViPR and where storage needs to go. And we certainly tried to avoid the ‘Software Defined’ meme.

Amitabh is not a storage guy; in fact, his previous role with Microsoft sticks him firmly in the compute/server camp, but it was his experience in building out the Azure Cloud offering which brought him an appreciation of the problems that storage and data face going forward. He has some pretty funny stories about how the Azure Cloud came about and the learning experience it was; how he came to realise that this storage stuff was pretty interesting and more complex than just allocating some space.

Building dynamic compute environments is pretty much a solved problem; you have a choice of solutions and fairly mature ones. Dynamic networks are well on the way to being solved.

But building a dynamic and agile storage environment is hard and it’s not a solved problem yet. Storage and more importantly the data it holds has gravity or as I like to think of it, long-term persistence. Compute resource can be scaled up and down; data rarely has the idea of scaling down and generally hangs around. Data Analytics just means that our end-users are going to hug data for longer. So you’ve got this heavy and growing thing…it’s not agile but there needs to be some way of making it appear more agile.

You can easily move compute workloads and it’s relatively simple to change your network configuration to reflect these movements but moving large quantities of data around, this is a non-trivial thing to do…well at speed anyway.

Large Enterprise Storage environments are heterogeneous; dual-supplier strategies are common, sometimes to keep vendors honest, but often there is an acceptance that different arrays have different capabilities and use-cases. Three or four years ago, I thought we were heading towards general-purpose storage arrays; we now have more niche and siloed capabilities than ever before. Driven by developments in all-flash arrays, commodity hardware and new business requirements, the environment is getting more complex, not simpler.

Storage teams need a way of managing these heterogeneous environments in a common and converged manner.

And everyone is trying to do things better, cheaper and faster; operational budgets remain pretty flat, headcounts are frozen or shrinking. Anecdotally, talking to my peers; arrays are hanging around longer, refresh cycles have lengthened somewhat.

EMC’s ViPR is an attempt to solve some of these problems.

Can you lay a new access protocol on top of already existing and persistent data? Can you make it so that you don’t have to migrate many petabytes of data to enable a new protocol? And can you ensure that your existing applications and new applications can use the same data without a massive rewrite? Can you enable your legacy infrastructure to support new technologies?

The access protocol in this case is Object; for some people Object Storage is a religion…all storage should be object, so why the hell would you want some kind of translation layer? But unfortunately, life is never that simple; if you have a lot of legacy applications running and generating useful data, you probably want to protect your investment and continue to run those applications, but you might want to mine that data using newer applications.

This is heresy to many but reflects today’s reality; if you were starting with a green-field, all your data might live in an object-store but migrating a large existing estate to an object-store is just not realistic as a short term proposition.

ViPR enables your existing file-storage to be accessible as both file and object. Amitabh also mentioned block, but I struggle to see how you would be able to treat a raw block device as an object in any meaningful manner. Perhaps that’s a future conversation.
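To make the dual-access idea concrete, here is a purely hypothetical sketch of ‘the same data, two addressing schemes’; this is not ViPR’s API, just an illustration of one file surfacing both as a filesystem path and as a bucket/key pair with metadata attached.

```python
# Hypothetical sketch of multi-protocol access: the same underlying file is
# addressable as a filesystem path (NFS/SMB style) and as a bucket/key pair
# (object style). This illustrates the concept only; it is not ViPR's API.
from dataclasses import dataclass

@dataclass
class DataItem:
    export: str      # e.g. an NFS export
    relpath: str     # path within the export
    metadata: dict   # rich metadata carried with the object view

    def file_path(self) -> str:
        return f"{self.export}/{self.relpath}"

    def object_address(self) -> tuple[str, str]:
        # Derive a bucket/key pair from the same underlying location.
        bucket = self.export.strip("/").replace("/", "-")
        return bucket, self.relpath

asset = DataItem("/exports/media", "2013/ep01/master.mxf",
                 {"programme": "ep01", "status": "approved-for-playout"})

print("File view:  ", asset.file_path())
print("Object view:", asset.object_address(), asset.metadata)
```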

But in the world of media and entertainment, I could see this capability being useful; in fact I can see it enabling some workflows to work more efficiently, so an asset can be acquired and edited in the traditional manner; then ‘moving’ into play-out as an object with rich-metadata but without moving around the storage environment.

Amitabh also discussed the possibility of being able to HDFS your existing storage, allowing analytics to be carried out on data in place without moving it. I can see this being appealing, but issues around performance, locking and the like remain challenging.

But ultimately moving to an era where data persists but is accessible in appropriate ways without copying, ingesting and simply buying more and more storage is very appealing. I don’t believe that there will ever be one true protocol; so multi-protocol access to your data is key. And even in a world where everything becomes objects, there will almost certainly be competing APIs and command-sets.

The more ‘real’ part of ViPR (when I say real, I mean it is the piece I can see a huge need for today) is the abstraction of the control-plane, making it look and work the same for all the arrays that you manage. Yet after the abomination that is Control Center, can we trust EMC to make Storage Management easy, consistent and scalable? Amitabh has heard all the stories about Control Center, so let’s hope he’s learnt from our pain!

The jury doesn’t even really have any hard evidence to go on yet but the vision makes sense.

EMC have committed to open-ness around ViPR as well; I asked the question…if someone implements your APIs and makes a better ViPR than ViPR? Amitabh was remarkably relaxed about that, they aren’t going to mess about with APIs for competitive advantage and if someone does a better job than them; then that someone deserves to win. They obviously believe that they are the best; if we move to a pluggable and modular storage architecture, where it is easy to drop-in replacements without disruption; they better be the best.

A whole ecosystem could be built around ViPR; EMC believe that if they get it right; it could be the on-ramp for many developers to build tools around it. They are actively looking for developers and start-ups to work with ViPR.

Instead of writing tools to manage a specific array, it should be possible to write tools that manage all of the storage in the data-centre. Obviously this is reliant on either EMC or other storage vendors implementing the plug-ins to enable ViPR to manage a specific array.
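As a sketch of the kind of plug-in model this implies (a hypothetical interface, not the actual ViPR southbound API): tooling codes against one abstract driver interface and each array family supplies an implementation behind it.

```python
# Hypothetical sketch of a control-plane plug-in model: management tools code
# against one abstract driver interface, and each array family supplies an
# implementation. This is the general shape only, not the actual ViPR API.
from abc import ABC, abstractmethod

class ArrayDriver(ABC):
    @abstractmethod
    def create_volume(self, name: str, size_gb: int) -> str:
        """Provision a volume and return its identifier."""

    @abstractmethod
    def list_volumes(self) -> list[str]:
        """Return identifiers of existing volumes."""

class DemoArrayDriver(ArrayDriver):
    """Stand-in driver for illustration; a real one would call the array's own API."""
    def __init__(self) -> None:
        self._volumes: list[str] = []

    def create_volume(self, name: str, size_gb: int) -> str:
        vol_id = f"{name}-{size_gb}GB"
        self._volumes.append(vol_id)
        return vol_id

    def list_volumes(self) -> list[str]:
        return list(self._volumes)

def provision(driver: ArrayDriver, name: str, size_gb: int) -> str:
    # The tooling only ever sees the abstract interface.
    return driver.create_volume(name, size_gb)

print(provision(DemoArrayDriver(), "oracle-data", 512))
```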

Will the other storage vendors enable ViPR to manage their arrays and hence increase the value of ViPR? Or will it be left to EMC to do it? Well, at launch, NetApp is already there. I didn’t have time to drill into which versions of OnTap, however, and this is where life could get tricky; the ViPR control layer will need to keep up with the releases from the various vendors. But as more and more storage vendors look at how their storage integrates with the various virtualisation stacks, consistent and early publication of their control functionality becomes key. EMC can use this as enablement for ViPR.

If I were a start-up, for example, ViPR could enable me to fast-track the management capability of my new device. I could concentrate on the storage functionality and capability of the device and not on the peripheral management functionality.

So it’s all pretty interesting stuff, but it’s certainly not a foregone conclusion that this will succeed, and it relies on other vendors coming to play. It is something that we need; we need the tools that will enable us to manage at scale, keeping our operational costs down and not having to rip and replace.

How will the other vendors react? I have a horrible suspicion that we’ll just end up with a mess of competing attempts and it will come down to the vendor who ships the widest range of support for third party devices. But before you dismiss this as just another attempt from EMC to own your storage infrastructure; if a software vendor had shipped/announced something similar, would you dismiss it quite so quickly? ViPR’s biggest strength and weakness is……EMC!

EMC have to prove their commitment to open-ness and that may mean that in the short term, they do things that seriously assist their competitors at some cost to their business. I think that they need to almost treat ViPR like they did VMware; at one point, it was almost more common to see a VMware and NetApp joint pitch than one involving EMC.

Oh, they also have to ship a GA product. And probably turn a tanker around. And win hearts and minds, show that they have changed…

Finally, let’s forget about Software Defined Anything; let’s forget about trying to redefine existing terms; it doesn’t have to be called anything…we are just looking for Better Storage Management and Capability. Hang your hat on that…