
You Will be Assimilated.

So why are the small Flash vendors innovating and the big boys not? Why are the start-ups leaving the incumbents for dust? And do the big boys care?

Innovation in large companies is very hard; you have all the weight of history pressing down on you and few large companies are set up to allow their staff to really innovate. Even Google’s famous 20% time has probably not borne the fruit that one would expect.

Yet innovation does happen in large companies; they all spend a fortune on R&D; unfortunately most of that tends to go on making existing products better rather than coming up with new ones.

Even when a new concept does make it towards product, getting an existing sales-force to sell it…well, why would they? Why would I as a big-tin sales-droid try to push a new concept to my existing customer base? They probably don’t even want to talk about something new; it’s all about the incremental business.

I have seen plenty of concepts squashed after totally failing to gain traction in a large company, only to pop up later in new start-ups.

And then there are those genuinely new ideas that the large vendor has a go at implementing themselves; often with no intention of releasing their own product, they are just testing the validity of the concept.

Of course, then there is the angel funding that many larger vendors quietly carry out; if you follow the money it is not uncommon to find a large name sitting somewhere in the background.

So do the big boys really care about the innovation being driven by start-ups…I really don’t think so. Get someone else to take the risk and pick up the ones which succeed at a later date.

Acquisition is a perfectly valid R&D and Innovation strategy. Once these smaller players start really taking chunks of revenue from the big boys…well, it’s a founder with real principles who won’t take a large exit.

Of course, seeing new companies IPO is cool but it’s rarely the end of the story.


Excessive Sorrow Laughs….

So you’ve managed to get yourself a job in storage? Commiserations; why did you do something so crazy? I hope you enjoy pain and misery, because this is now your world. Perhaps if you do a great job, you’ll come back as something higher up the food chain, such as desktop support.

Anyway, here are some hints and tips:

1) You will never have the right amount and type of storage, but it is probably better to have too much than too little. Applications fall over when they run out of storage, and adding new storage into an environment at a run is never a great deal of fun. Don’t believe vendors when they tell you that it is non-disruptive; even if it is non-disruptive technically, it will always be disruptive in other ways that you do not expect.

Learn to be economical with the truth about what you actually have available; keep something in your back-pocket for a rainy day but don’t save the day too often. Miracles need to be the exception as opposed to the rule.

2) Related to this is the fact that no end-user has any actual idea of how much storage they will use. They will glaze over when you start talking about terabytes, gigabytes and exabytes; my recent experience is that they underestimate, but this is probably a function of the sector I’m in.

3) Every best-practice document appears to have been written by someone who has shares in a storage company. This is especially true for databases; you have various options…

  • smile and allocate what they ask for
  • smile and tell them that you’ve allocated what they’ve asked for
  • frown and have an argument
I’ve been around for long enough to know that the last option may be the most tempting, but it only leads to pain.

4) Non-disruptive upgrades are rarely so; read the small print on what non-disruptive actually means. Code upgrades will always result in more work for every other team than for the Storage team, as they struggle to bring their environments up to scratch to meet the crazed requirements of your chosen storage vendor.

5) Fibre Channel is not a standard; it is a vague suggestion of how things should work. Hence point 4! But Fibre Channel scares the crap out of people; start waffling on about FLOGIs and you can get away with murder. (Serious hint: don’t mix up WWPNs and WWNNs…understand the difference, please! There’s a quick sketch after this list.)

6) Of course you will be tempted to head down the NAS route; whatever you do, don’t mix NFS and SMB shares…every vendor claims that they have a solution to the inherent problems with the mixed security model. They don’t! It breaks in subtle ways, and never underestimate the power of a Mac user to put some very strange things in a filename.

7) ‘But I can buy a 1TB USB disk in PC World for £50’; learn to tune this statement out or you will end up committed or jailed.

8) Everyone can do your job better than you can…until it goes wrong. In your darkest hours, remember point 4; there is nothing more joyful than realising that a single storage upgrade can mean many hours of disrupted lives for every other team.

9) There is always a better technology; you just didn’t buy it. Don’t worry about it; what you’ve got will do most of what you want, most of the time. This is why the same sales-guy you bought NetApp from will later turn up selling you EMC; they aren’t clever enough to understand the subtle differences in technologies…so basically, they are selling you a different badge.

10) Storage is actually quite easy…everything that surrounds it is hard…

11) Learn to laugh…
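Since point 5 catches so many people out, here is a throwaway Python sketch of the distinction; the WWNs are invented for illustration and are not from any real fabric. One WWNN names the HBA (the node); each port on it has its own WWPN; zoning and masking are done against the WWPNs.

```python
# Illustrative only: the WWNs below are invented, not from any real fabric.
# A WWNN names the node (the HBA/host); each port on that node has its own WWPN.
# Zoning and LUN masking are done against WWPNs, not WWNNs.

hba = {
    "wwnn": "20:00:00:11:22:33:44:55",        # one node name for the dual-port HBA
    "wwpns": [
        "21:00:00:11:22:33:44:55",            # port 0
        "21:01:00:11:22:33:44:55",            # port 1
    ],
}

def build_zone(initiator_wwpns, target_wwpns):
    """A zone is a set of port WWPNs; using WWNNs here is the classic mistake."""
    return sorted(set(initiator_wwpns) | set(target_wwpns))

print(build_zone(hba["wwpns"], ["50:00:aa:bb:cc:dd:ee:01"]))
```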

Keep On Syncing…Safely..

Edward Snowden’s revelations about the activities of the various Western security organisations have been both no surprise and a wake-up call to how the landscape of our personal data security has changed. Multiple devices and increased mobility mean that we look for ways to ensure we have access to our data wherever and whenever; gone are the days when the average household had a single computing device, and it is increasingly uncommon to find a household that is homogeneous in terms of manufacturer or operating system. It is now fairly common to find Windows, OSX, Android, iOS and even Linux devices all within a single house; throw in digital cameras and smart TVs and it is no wonder that sharing data securely is more and more complex for the average person.

So file-syncing and sharing products such as Dropbox, Box, SkyDrive and GoogleDrive are pretty much inevitable consequences, and if you are anything like me, you have a selection of these, some free and some paid-for; but pretty much all of them are insecure, some terribly so. Of course it would be nice if the operating system manufacturers could agree on a standard which included encryption of data in flight and at rest with a simple, easy-to-use key-sharing mechanism. Even then we would probably not trust it any more, but it might at least provide an initial level of defence.

I have started to look at ways of adding encryption to the various cloud services I use; in the past I made fairly heavy use of TrueCrypt, but it is not especially seamless and can be clunky. However, this is becoming more feasible as apps such as Cryptonite and DiskDecipher appear for mobile devices. Recently I started to play with BoxCryptor and EncFS; BoxCryptor seems nice and easy to use, certainly on the desktop. It supports multiple cloud providers, although the free version only supports one; if you want to encrypt your multiple cloud stores, you will have to pay. There are alternatives such as Cloudfogger, but development of BoxCryptor seems to be ongoing.

And then there is the option of building your own ‘Sync and Share’ service; Transporter recently kickstarted successfully and looks good, and Plug is in the process of kickstarting. Synology devices have Cloud Station; QNAP have myQNAPcloud. You can go the fully build-your-own route and use ownCloud. In the Enterprise, you have a multitude of options as well.

But there is one thing: you do not need to store your stuff in the Cloud in an insecure manner. You have lots of options now, from keeping it local to using a cloud service provider; encryption is still not as user-friendly as it could be, but it has got easier. You can protect your data; you probably should…
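For what it’s worth, adding a basic layer of client-side encryption yourself is not hard. Below is a minimal Python sketch using the cryptography library’s Fernet recipe; the paths and key handling are illustrative only, and a real setup needs proper key management, with the key kept well away from the synced folder.

```python
# A minimal sketch of client-side encryption before a file ever reaches a sync folder.
# Assumes the 'cryptography' package is installed; paths and key handling are
# illustrative only -- a real setup needs proper key storage and backup.
from pathlib import Path
from cryptography.fernet import Fernet

KEYFILE = Path.home() / ".sync_key"            # hypothetical location; keep it OUT of the synced folder
SYNC_DIR = Path.home() / "Dropbox" / "vault"   # any provider's sync folder will do

def load_or_create_key() -> bytes:
    """Reuse an existing key, or generate one the first time."""
    if KEYFILE.exists():
        return KEYFILE.read_bytes()
    key = Fernet.generate_key()
    KEYFILE.write_bytes(key)
    return key

def encrypt_into_sync(src: Path) -> Path:
    """Encrypt a local file and drop only the ciphertext into the sync folder."""
    f = Fernet(load_or_create_key())
    SYNC_DIR.mkdir(parents=True, exist_ok=True)
    dest = SYNC_DIR / (src.name + ".enc")
    dest.write_bytes(f.encrypt(src.read_bytes()))
    return dest

def decrypt_from_sync(enc: Path, out: Path) -> None:
    """Pull the ciphertext back and recover the plaintext locally."""
    f = Fernet(load_or_create_key())
    out.write_bytes(f.decrypt(enc.read_bytes()))
```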

The Landscape Is Changing

As the announcements and acquisitions falling into the realm of Software Defined Storage (or Storage, as I like to call it) continue to come, one starts to ponder how this is all going to work, and work practically.

I think it is extremely important to remember that, firstly, you are going to need hardware to run this software on; and although this is trending towards a commodity model, there are going to be subtle differences that need accounting for. And as we move down this track, there is going to be a real focus on understanding workloads and the impact of different infrastructures and infrastructure patterns on them.

I am seeing more and more products which enable DAS to work as a shared-storage resource, removing the SAN from the infrastructure and reducing complexity. I am going to argue that this does not necessarily remove complexity; it shifts it. In fact, it doesn’t remove the SAN at all; it just changes it.

It is not uncommon now to see storage vendor presentations that show Shared-Nothing-Cluster architectures in some form or another; often these are software and hardware ‘packaged’ solutions but as end-users start to demand the ability to deploy on their own hardware, this brings a whole new world of unknown behaviours into play.

Once vendors relinquish control of the underlying infrastructure; the software is going to have to be a lot more intelligent and the end-user implementation teams are going to have to start thinking more like the hardware teams in vendors.

For example, the East-West traffic models in your data-centre become even more important and here you might find yourself implementing low-latency storage networks; your new SAN is no longer a North-South model but Server-Server (East-West). This is something that the virtualisation guys have been dealing with for some time.

Understanding performance and failure domains matters: do you protect the local DAS with RAID, or move to a distributed RAIN model? If you aggregate the storage on your compute farm into one big pool, what is the impact if one node in the farm comes under load? Can it impact the performance of the whole pool?

Anyone who has worked with any kind of distributed storage model will tell you that a slow-performing node or a failing node can have impacts which far exceed what you would believe possible. At times, it can feel like the good old days of token ring, where a single misconfigured interface could kill the performance for everyone. Forget about the impact of a duplicate IP address; that is nothing.
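To make that concrete, here is a toy simulation rather than a benchmark, with made-up latencies: reads are striped across a shared-nothing pool and each I/O completes only when its slowest stripe does, so one degraded node out of thirty-two drags down a surprising fraction of all reads.

```python
# A toy model (not a benchmark) of why one slow node hurts a shared-nothing pool:
# if a read is striped across N nodes, it finishes when the slowest stripe does.
import random

def striped_read_latency(node_latencies_ms, stripe_width):
    """Latency of one read that touches 'stripe_width' randomly chosen nodes."""
    touched = random.sample(node_latencies_ms, stripe_width)
    return max(touched)          # the slowest node gates the whole I/O

random.seed(1)
healthy = [random.uniform(1.0, 3.0) for _ in range(32)]   # 32 nodes, 1-3 ms each
degraded = healthy[:]
degraded[0] = 50.0                                         # one struggling node

for pool, label in ((healthy, "healthy"), (degraded, "one slow node")):
    reads = [striped_read_latency(pool, stripe_width=8) for _ in range(10_000)]
    slow = sum(1 for r in reads if r > 10.0) / len(reads)
    print(f"{label:>14}: avg {sum(reads)/len(reads):5.2f} ms, "
          f"{slow:5.1%} of reads over 10 ms")
```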

What is the impact of the failure of a single compute/storage node? Multiple compute/storage nodes?

In the past, this has all been handled by the storage hardware vendor, pretty much invisibly to the local Storage team at implementation phase. But you will now need to make decisions about how data is protected, and understand the impact of replication.

In theory, you want your data as close to the processing as you can get it, but data has weight and persistence; it will have to move. Or do you come up with a method that, in a dynamic infrastructure, identifies where data is located and spins up or moves the compute to it?

The vendors are going to have to improve their instrumentation as well; let me tell you from experience, at the moment understanding what is going on in such environments is deep magic. Also, the software’s ability to cope with the differing capabilities and vagaries of a large-scale commodity infrastructure is going to have to be a lot more robust than it is today.

Yet I see a lot of activity from vendors, open-source and closed-source, and I see a lot of interest from the large storage consumers; this all points to a large prize to be won. But I’m expecting to see a lot of people fall by the wayside.

It’s an interesting time…


Monomyth

It really doesn’t matter which back-up technology you use; the myths are pretty much all the same, and unless you are aware of them, life will be more exciting for you than it should be…but perhaps that’s the point of myths…they bring excitement to a mundane existence.

1) 99% Back-Up Completion is Great. I’ve been guilty of this in the past when telling people how great my back-up team is…look, 99% success rate; we’re awesome. Actually, it’s a good job that some of my customers in the past have not realised what I was saying. Depending on what has failed, I might not be able to restore a critical service and yet still have a great back-up completion rate. (There are some quick sums after this list.)

2) Design Back-Up Policies. No, don’t do that; build restore policies and then work out what needs to be backed up to restore the service.

3) Everything Needs to be Backed Up. Closely related to the above; if you feel the need to back up an operating system several thousand times…feel free, I guess, but if you’ll never use it to restore a system, then in these days of automated build servers, Chef, Puppet and the like, you are probably wasting your time. Yes, it can probably be de-duped, but you are putting extra load on your back-up infrastructure for no reason.

4) Replication is a Back-Up. Nope, synchronous replication is not a back-up; if I delete a file, that deletion is replicated in real time to the synchronous copy. It’s gone.

5) Snapshots are a Back-Up. Only if your snapshots are kept remotely; a snapshot on the same storage device can give you a fast recovery option, but if you have lost the array or even a RAID rank, you are screwed.

6) RAID is a Back-Up. Yes, people still believe this; some people also still believe that the world is flat.

7) Your Back-Up is Good. No back-up is good until you have restored from it; until you have, you potentially have no back-up.

8) Back-Up is IT’s Responsibility. No, it is a shared responsibility; it can only work well if the Business and IT work in partnership. Businesses need to work with IT to define data protection and recovery targets. IT needs to provide a service to meet these, but they do not know what your retention/legal/business objectives are.

9) Back-Up Teams are Not Important. Back-up teams are amongst the most important teams in your IT organisation. They can destroy your Business, steal your data and get access to almost any system they want…if they are smart and you are stupid!
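And because myth number 1 is the one I hear most, here are the quick sums behind it; the estate size below is invented for illustration. At any reasonable scale, 99% success is still a lot of failed jobs every night, and over a month the odds of your one critical client getting a clean run are worse than people assume.

```python
# Quick sums behind the "99% is great" myth: at scale, 1% failure is a lot of jobs,
# and over a month the chance your one critical client never missed is not great.
jobs_per_night = 5_000            # illustrative estate size, not from any real site
success_rate = 0.99

failed_tonight = jobs_per_night * (1 - success_rate)
print(f"Failed jobs tonight: {failed_tonight:.0f}")        # 50 jobs, every single night

# Chance that one specific critical client had at least one failed backup in 30 nights,
# assuming (generously) that failures are independent and uniformly spread.
nights = 30
p_clean_month = success_rate ** nights
print(f"Chance the critical client missed at least one night: {1 - p_clean_month:.0%}")
```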


From Servers to Service?

Should Enterprise Vendors consider becoming Service Providers? Rich Rogers of HDS tweeted this question, and my initial response was a fairly dismissive one.

This got me thinking: why does everyone think that Enterprise Vendors shouldn’t become Service Providers? Is that a reasonable response, or just a knee-jerk ‘get out of my space and stick to doing what you are good at’?

It is often suggested that you should not compete with your customers; if Enterprise Vendors move into the Service Provider space, they compete with some of their largest customers (the Service Providers) and potentially with all of their customers (the Enterprise IT departments).

But the Service Providers are already beginning to compete with the Enterprise Vendors; more and more of them are looking at moving to a commodity model and not buying everything from the Enterprise Vendors, and larger IT departments are thinking the same. Some of this is due to cost, but much of it is that they feel they can do a better job of meeting their business requirements by engineering solutions internally.

If the Enterprise Vendors find themselves squeezed by this, is it really fair that they should stay in their little box and watch their revenues dwindle away? They can compete in different ways: they can compete by moving their own products to more of a commodity model, as many are already beginning to do, or they could compete by building a Service Provider model and moving into that space.

Many of the Enterprise Vendors have substantial internal IT functions; some have large services organisations; some already play in the hosting/outsourcing space.  So why shouldn’t they move into the Service Provider space? Why not leverage the skills that they already have?

Yes, it changes their business model; they will have to be careful to compete on a level playing field and to ensure that they are not using their internal influence on pricing and development to gain an unfair competitive advantage. But if they feel that they can do a better job than the existing Service Providers, driving down costs and improving capability in this space…more power to them.

If an online bookstore can do it, why shouldn’t they? I don’t fear their entry into the market; history suggests that they have made a bit of a hash of it so far…but guys, fill your boots.

And potentially, it improves things for us all; as the vendors try to manage their kit at scale, as they try to maintain service availability, as they try to deploy and develop an agile service; we all get to benefit from the improvements…Service Providers, Enterprise Vendors, End-Users…everyone.


The Reptile House

I was fortunate enough to spend an hour or so with Amitabh Srivastava of EMC; Amitabh is responsible for the Advanced Software division at EMC and is one of the principal architects behind ViPR. It was an open discussion about the inspiration behind ViPR and where storage needs to go. And we certainly tried to avoid the ‘Software Defined’ meme.

Amitabh is not a storage guy; in fact, his previous role with Microsoft sticks him firmly in the compute/server camp, but it was his experience in building out the Azure Cloud offering which brought him an appreciation of the problems that storage and data face going forward. He has some pretty funny stories about how the Azure Cloud came about and the learning experience it was; how he came to realise that this storage stuff was pretty interesting and more complex than just allocating some space.

Building dynamic compute environments is pretty much a solved problem; you have a choice of solutions and fairly mature ones. Dynamic networks are well on the way to being solved.

But building a dynamic and agile storage environment is hard, and it’s not a solved problem yet. Storage, and more importantly the data it holds, has gravity, or as I like to think of it, long-term persistence. Compute resource can be scaled up and down; data rarely has any notion of scaling down and generally hangs around. Data analytics just means that our end-users are going to hug data for longer. So you’ve got this heavy and growing thing…it’s not agile, but there needs to be some way of making it appear more agile.

You can easily move compute workloads, and it’s relatively simple to change your network configuration to reflect these movements, but moving large quantities of data around is a non-trivial thing to do…well, at speed anyway.

Large Enterprise Storage environments are heterogeneous environments; dual-supplier strategies are common, sometimes to keep vendors honest, but often there is an acceptance that different arrays have different capabilities and use-cases. Three or four years ago, I thought we were heading towards general-purpose storage arrays; we now have more niche and siloed capabilities than ever before. Driven by developments in all-flash arrays, commodity hardware and new business requirements, the environment is getting more complex, not simpler.

Storage teams need a way of managing these heterogeneous environments in a common and converged manner.

And everyone is trying to do things better, cheaper and faster; operational budgets remain pretty flat, and headcounts are frozen or shrinking. Anecdotally, talking to my peers, arrays are hanging around longer and refresh cycles have lengthened somewhat.

EMC’s ViPR is an attempt to solve some of these problems.

Can you lay a new access protocol on top of already existing and persistent data? Can you make it so that you don’t have to migrate many petabytes of data to enable a new protocol? Can you ensure that your existing applications and new applications can use the same data without a massive rewrite? And can you enable your legacy infrastructure to support new technologies?

The access protocol in this case is Object; for some people, Object Storage is religion…all storage should be object, so why the hell would you want some kind of translation layer? But unfortunately, life is never that simple; if you have a lot of legacy applications running and generating useful data, you probably want to protect your investment and continue to run those applications, but you might also want to mine that data using newer applications.

This is heresy to many but reflects today’s reality; if you were starting with a green-field, all your data might live in an object-store but migrating a large existing estate to an object-store is just not realistic as a short term proposition.

ViPR enables your existing file-storage to be accessible as both file and object. Amitabh also mentioned block, but I struggle to see how you would be able to treat a raw block device as an object in any meaningful manner. Perhaps that’s a future conversation.
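To be clear about what dual access means here, the toy Python sketch below is emphatically not ViPR’s implementation; it is just an illustration of the idea of a thin gateway that maps object keys onto paths in an existing file export, so the same bytes are reachable as a file over NFS or SMB and as an object over a GET/PUT-style interface, without being moved.

```python
# Not ViPR's implementation -- just a toy gateway showing the idea of exposing an
# existing file tree through an object-style GET/PUT interface without moving the data.
from pathlib import Path

class FileBackedObjectStore:
    """Maps object keys onto paths in an existing filesystem export."""

    def __init__(self, export_root: str):
        self.root = Path(export_root)

    def _path(self, key: str) -> Path:
        return self.root.joinpath(*key.split("/"))

    def put(self, key: str, data: bytes, metadata=None) -> None:
        p = self._path(key)
        p.parent.mkdir(parents=True, exist_ok=True)
        p.write_bytes(data)                                # the 'object' is just the file
        if metadata:                                       # sidecar metadata, purely illustrative
            (p.parent / (p.name + ".meta")).write_text(repr(metadata))

    def get(self, key: str) -> bytes:
        return self._path(key).read_bytes()                # NFS/SMB clients still see a plain file

store = FileBackedObjectStore("/mnt/nas_export")           # hypothetical mount point
```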

But in the world of media and entertainment, I could see this capability being useful; in fact, I can see it enabling some workflows to run more efficiently, so an asset can be acquired and edited in the traditional manner and then ‘move’ into play-out as an object with rich metadata, without moving around the storage environment.

Amitabh also discussed the possibility of presenting your existing storage as HDFS, allowing analytics to be carried out on data in place without moving it. I can see this being appealing, but issues around performance, locking and the like remain challenging.

But ultimately moving to an era where data persists but is accessible in appropriate ways without copying, ingesting and simply buying more and more storage is very appealing. I don’t believe that there will ever be one true protocol; so multi-protocol access to your data is key. And even in a world where everything becomes objects, there will almost certainly be competing APIs and command-sets.

The more real part of ViPR (when I say real, I mean the piece I can see a huge need for today) is the abstraction of the control plane, making it look and work the same for all the arrays that you manage. Yet after the abomination that is Control Center, can we trust EMC to make storage management easy, consistent and scalable? Amitabh has heard all the stories about Control Center, so let’s hope he’s learnt from our pain!

The jury doesn’t even really have any hard evidence to go on yet but the vision makes sense.

EMC have committed to openness around ViPR as well; I asked the question…what if someone implements your APIs and makes a better ViPR than ViPR? Amitabh was remarkably relaxed about that: they aren’t going to mess about with APIs for competitive advantage, and if someone does a better job than them, then that someone deserves to win. They obviously believe that they are the best; if we move to a pluggable and modular storage architecture, where it is easy to drop in replacements without disruption, they had better be the best.

A whole ecosystem could be built around ViPR; EMC believe that if they get it right; it could be the on-ramp for many developers to build tools around it. They are actively looking for developers and start-ups to work with ViPR.

Instead of writing tools to manage a specific array, it should be possible to write tools that manage all of the storage in the data-centre. Obviously this relies on either EMC or the other storage vendors implementing the plug-ins that enable ViPR to manage a specific array.
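The plug-in idea is easier to see in code; the sketch below is an assumption-laden illustration rather than EMC’s actual API, with an invented adapter and invented endpoints. The point is simply that the management tool is written once against an abstraction, and each array (via its vendor’s plug-in, or ViPR’s own) sits behind it.

```python
# A sketch of the plug-in idea, not EMC's actual API: tools talk to one abstract
# control plane, and each vendor supplies an adapter behind it.
from abc import ABC, abstractmethod

class ArrayAdapter(ABC):
    """What a management tool needs from any array, regardless of badge."""

    @abstractmethod
    def create_volume(self, name: str, size_gb: int) -> str:
        ...

    @abstractmethod
    def map_volume(self, volume_id: str, host: str) -> None:
        ...

class HypotheticalVendorAdapter(ArrayAdapter):
    """One vendor's plug-in; the endpoints and calls here are invented for illustration."""

    def __init__(self, endpoint: str):
        self.endpoint = endpoint

    def create_volume(self, name, size_gb):
        # In reality this would call the vendor's own management API.
        print(f"POST {self.endpoint}/volumes name={name} size={size_gb}GB")
        return f"vol-{name}"

    def map_volume(self, volume_id, host):
        print(f"POST {self.endpoint}/maps volume={volume_id} host={host}")

def provision(adapter: ArrayAdapter, name, size_gb, host):
    """The tool's logic is written once, against the abstraction."""
    vol = adapter.create_volume(name, size_gb)
    adapter.map_volume(vol, host)

provision(HypotheticalVendorAdapter("https://array.example"), "lun01", 512, "esx01")
```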

Will the other storage vendors enable ViPR to manage their arrays and hence increase the value of ViPR? Or will it be left to EMC to do it? Well, at launch, NetApp is already there. I didn’t have time to drill into which versions of OnTap, however, and this is where life could get tricky; the ViPR control layer will need to keep up with the releases from the various vendors. But as more and more storage vendors look at how their storage integrates with the various virtualisation stacks, consistent and early publication of their control functionality becomes key. EMC can use this as enablement for ViPR.

If I were a start-up, for example, ViPR could enable me to fast-track the management capability of my new device. I could concentrate on the storage functionality and capability of the device and not on the peripheral management functionality.

So it’s all pretty interesting stuff, but it’s certainly not a foregone conclusion that this will succeed, and it relies on other vendors coming to play. It is something that we need; we need the tools that will enable us to manage at scale, keeping our operational costs down and not having to rip and replace.

How will the other vendors react? I have a horrible suspicion that we’ll just end up with a mess of competing attempts and it will come down to the vendor who ships the widest range of support for third party devices. But before you dismiss this as just another attempt from EMC to own your storage infrastructure; if a software vendor had shipped/announced something similar, would you dismiss it quite so quickly? ViPR’s biggest strength and weakness is……EMC!

EMC have to prove their commitment to openness, and that may mean that in the short term they do things that seriously assist their competitors, at some cost to their business. I think that they need to treat ViPR almost like they did VMware; at one point, it was almost more common to see a VMware and NetApp joint pitch than one involving EMC.

Oh, they also have to ship a GA product. And probably turn a tanker around. And win hearts and minds, show that they have changed…

Finally, let’s forget about Software Defined Anything; let’s forget about trying to redefine existing terms; it doesn’t have to be called anything…we are just looking for Better Storage Management and Capability. Hang your hat on that…


Apple Defaults to Windows Standard

I was looking at the Apple documentation around Mavericks, as I was interested to see how they were intending to make more use of the extensive metadata they have long had available for stored files; the keynote made me wonder whether they were beginning to transition to something more object-like. That would make a lot of sense in my world; it’d certainly give some of the application vendors a decent kick in the right direction.

And I came across this snippet which will upset some die-hard Mac-fans but make some people who integrate Macs into corporate environments pretty happy.

SMB2

SMB2 is the new default protocol for sharing files in OS X Mavericks. SMB2 is superfast, increases security, and improves Windows compatibility.

It seems that Apple are finally beginning to deprecate AFP and wholeheartedly embrace SMB2; yes, I know some of us might have preferred NFS, but it is a step in the right direction. And Apple changing a default protocol to improve Windows compatibility; who’d have thunk it. Still, it appears that Apple are continuing with the horrible resource forks!

And the big storage vendors will be happy…because they can finally say that they support the default network file system on OSX.

No, I can’t see evidence for a whole-hearted embracing of Object Storage yet..


More Thoughts On Change…

This started as a response to comments on my previous blog but grew into something which felt like a blog entry in its own right. And it allowed me to rethink a few things and crystallise some ideas.

Enterprise Storage is done; that sounds like a rash statement (how can a technology ever be done?), so I’d better explain what I mean. Pretty much all the functionality that you might expect to be put into a storage array has been done, and it is now done by pretty much every vendor.

Data Protection – yep, all arrays have this.

Clones, Snaps – yep, all arrays have this and everyone has caught up with the market-leader.

Replication – yep, everyone does this, but interestingly enough I am beginning to see this abstracted away from the array.

Data Reduction – mostly, dedupe and compression are on almost every array; slightly differing implementations, some architectural limitations showing.

Tiering – mostly, yet again varying implementations but fairly comparable.

And of course, there is performance and capacity. This is good enough for most traditional Enterprise scenarios; if you find yourself requiring something more, you might be better off looking at non-traditional Enterprise storage: Scale-Out for capacity and All-Flash for performance. Now, the traditional Enterprise Vendors are having a good go at hacking in this functionality, but there is a certain amount of round pegs, square holes and big hammers going on.

So the problem for the Enterprise Storage vendors, as their arrays head towards functional completeness, is how they compete. Do we end up in a race to the bottom? And what is the impact of this? Although their technology still has value, its differentiation is very hard to quantify. It has become commodity.

And as we hit functional completeness, it is more likely that open-source technologies will ‘catch up’; then you end up competing with free. How does one compete with free?

You don’t ignore it for starters, and you don’t pretend that free can’t compete on quality; that did not work out so well for some of the major server vendors as Linux ate into their install base. But you can look at how Red Hat compete with free; they compete on service and support.

You no longer compete on functionality; CentOS pretty much has the same functionality as Red Hat. You have to compete differently.

But firstly you have to look at what you are selling; the Enterprise Storage vendors are selling software running on what is basically commodity hardware. Commodity should not be taken to mean some kind of second-rate thing; it really means that we’ve hit a point where it is pretty standard and there is little differentiation.

Yet this does not necessarily mean cheap; diamonds are a commodity. However, customers can see this, and they can compare the price you charge for the commodity hardware that your software runs on against the spot price of that hardware on the open market.

In fact, if you were open and honest, you might well split out the licensing costs of your software from the cost of the commodity hardware.

This is the very model that Nexenta use. Nexenta publish an HSL (hardware support list) of components that they have tested NexentaStor on; there are individual components and also complete servers. This enables customers to white-box if they want, or to leverage existing server support contracts. If you go off piste, they won’t necessarily turn you away, but there will be a discussion. The discussion may result in something new going onto the support list; it may end up establishing that something definitively does not work.

We also have VSAs popping up in one form or another; these piggy-back on the VMware HCL generally.

So is it really a stretch to suggest that the Enterprise Storage vendors might take it a stage further; a fairly loose hardware support list that allows you to run the storage personality of your choice on the hardware of your choice?

I suspect that there are a number of vendors who are already considering this; they might well be waiting for someone to break formation first. There are quite a few who already have; they don’t talk about it, but there are some hyper-scale customers who are already running storage personalities on their own hardware. If you’ve built a hyper-scale data-centre based around a standard build of rack, server and so on, you might not want a non-standard bit of kit messing up your design.

If we get some kind of standardisation in the control-plane APIs; the real money to be made will be in the storage management and automation software. The technologies which will allow me to use a completely commoditised Enterprise Storage Stack are going to be the ones that are interesting.

Well, at least until we break away from an array-based storage paradigm; another change which will eventually come.


Change Coming?

Does your storage sales rep have a haunted look yet? Certainly if they work for one of the traditional vendors, they should be beginning to look hunted and concerned about their long-term prospects; not this year’s figures, and probably not next year’s, but the year after? If I were working in storage sales, I’d be beginning to wonder what my future holds. Of course, most sales look no further than the next quarter’s target, but perhaps it’s time to worry them a bit.

Despite paying lip-service to storage as software, very few of the traditional vendors (and surprisingly few start-ups) have really embraced this and taken it to its logical conclusion: commoditisation of high-margin hardware sales is going to come, and despite all the vendors’ efforts to hold it back, it is going to change their business.

Now I’m sure you’ve read many blogs predicting this and you’ve even read vendor blogs telling you how they are going to embrace this; they will change their market and their products to match this movement. And yet I am already seeing mealy-mouthed attempts to hold this back or slow it down.

Roadmaps are pushing commoditisation further off into the distance; rather than whole-hearted endorsement, I am hearing HCLs and limited support. Vendors are holding back on releasing virtual editions because they are worried that customers might put them into production. Is the worry that they won’t work, or perhaps that they might work too well?

Products which could be used to commoditise an environment are being hamstrung by only running on certified equipment, and for very poor reasons, unless the reason is to protect a hardware business. I can point to examples in every major vendor; from EMC to IBM to HDS to HP to NetApp to Oracle.

So what is going to change this? I suspect customer action is the most likely vector for change. Cheap and deep for starters; you’d probably be mad not to seriously consider looking at a commodity platform and open-source. Of course vendors are going to throw a certain amount of FUD, but as with Linux before, there is momentum beginning to grow and lots of little POCs popping up.

And there are other things round the corner which may well kick this movement yet further along. 64-bit ARM processors have been taped out; we’ll begin to see servers based on those over the next couple of years. Low-power 64-bit servers running Linux and one of a multitude of open-source storage implementations will become two-a-penny; as we move to scale-out storage infrastructure, these will start to infiltrate larger data-centres and will rapidly move into the appliance space.

Headaches not just for the traditional storage vendors but also for Chipzilla; Chipzilla has had the storage market sewn up for a few years, but I expect ARM-based commodity hardware to push it hard in this space.

Yet with all the focus on flash-based storage arrays, hybrid arrays and the like, everyone is currently focusing on the high-margin hardware business. No vendor is really showing their hand in the cheap-and-deep space; they talk about big data, they talk about software defined storage…and they all hug those hardware revenues.

No, many of us aren’t engineering companies like Google and Facebook but the economics are beginning to look very attractive to some of us. Data growth isn’t going away soon; the current crop of suppliers have little strategy apart from continuing to gouge…many of the start-ups want to carry on gouging whilst pretending to be different.

Things will change.