2013 – The Year of Stew!

Bubble, bubble…there are lots of things bubbling away in the storage pot at the moment and it appears to be almost ready to serve. Acquisitions are adding ingredients to the stew and we will see a spate of them in early 2013 as well; the fleshing out of the next generation of storage arrays will continue.

Yes, we will see some more tick-tock refreshes; storage roadmaps have become tied to the Intel/AMD roadmap as arrays have become commoditised. More IOPS, more capacity and more features that you will possibly never use. And the announcements will make you yawn; certainly the roadmap presentations that I have had are not exactly stimulating.

It is the Flash announcements, and finally shipping product, that will generate the most interest; latency, long the enemy of performance and utilisation, will be slain or at least have GBH visited upon it.

The question is going to be how to implement Flash and the options are going to be myriad; there is going to be significant focus on how to bring this low-latency device closer to the server. I would expect to see an explosion in cache devices, both in the server and in appliance format.

And we will finally see some all-Flash arrays starting to ship from the big boys; this will bring credibility to some of the smaller players. It is easier to compete with something than to try to introduce a completely new class of array.

But I think the really interesting stuff is going to be happening in the file-system space; Ceph will grow in maturity and, with OpenStack gaining traction, expect it to mature fast. This is going to force some of the object storage vendors to move away from their appliance model and also encourage some more mature vendors to look at their file-systems and see them as potential differentiators.

CDMI also appears to be actually beginning to happen; I have been very sceptical about this but momentum is building, with a growing number of vendors beginning to ship CDMI-compatible product.

Another trend I am seeing is the deployment of multiple storage solutions within a data-centre; few people are currently standardising, there’s a lot of experimentation and there is an acknowledgement that one size really does not fit all.

Expect a lot of pain as infrastructure teams try to make things just work; Dev-Ops teams will continue to forge ahead and traditional infrastructure teams will be playing catch-up until new ways of working can be put in place. This is not one-way traffic though; expect some fun and games in 2014/2015 as some chickens come home to roost.

Management tools are going to be big again…expect lots of attempts to build single-pane-of-glass management tools which cater for everything. APIs and automation will be held up as some kind of universal magic toolset; expect that cauldron to bubble over and cause a mess as the Sorcerer gets more apprentices who try to short-cut.

I see a year of fun and change….and some tasty bowls of nourishment with some really soggy horrible dumplings floating about.

More Things Change….

Seeing the latest spat involving EMC regarding XtremIO is kind of nice; it feels like the arguments of days gone by; swap NAS for Flash and you’ll probably find the same blog entries work and the same arguments made.

Storage seems to be more cult-like in nature, which leads to these vigorous debates and some rather amusing tantrums. It is probably closer to the ‘Linux distribution’ model as opposed to the ‘Operating System’ model.

In the server world, it is a pretty large investment in time and money to move from one operating system to another; it is certainly pretty disruptive. Yet in the storage world, we can change vendors with some ease; we understand migrating workloads in a non-disruptive manner. It would be fairly unusual to find a Storage Manager who can’t describe, at least in theory, how to do this. This leaves many vendors feeling a little nervous and tense; customers do have a lot more power and choice in this space.

It also means that there is space in the market for newcomers to come in and disrupt; it is probably ironic that EMC own the company and technology that actually allows their core storage products to be most disrupted. VMware allowed NetApp to get a massive foothold in some of EMC’s backyard and it seems that it may also allow some of the flash vendors to get a foothold too.

It used to be fairly common to find a homogeneous storage environment with EMC owning a whole data-centre; in talking to my peers in the industry, this is less common now. Multiple storage vendors are becoming the norm despite the management headaches that this does bring at times. Many of the headaches are overstated, mind you, and as more people come to realise this, it will put yet further pressure on the likes of EMC.

I wonder if this is why EMC get particularly sensitive about the whole subject? They can’t out-innovate the myriad of small storage start-ups and indeed they enable many of them; this will mean that they will be slower to market and will have to rely on their engineering being very solid and spot-on.

All sounds strangely familiar mind you!

Scale-Out Fun For Everyone?

Recently I’ve been playing with a new virtual appliance; well new to me in that I’ve only just got my hands on it. It’s one of the many that our friends in EMC have built and it is one which could do with a wider audience.

A few years ago Chad Sakac managed to make the Celerra virtual appliance available to one and all; a little sub-culture built up around it and many VMware labs have been built on it; when the Celerra and Clariion morphed into the VNX range, the virtual appliance followed. Nicholas Weaver further enhanced it and made it better and easier to use. It’s a great way for the amateur to play with an enterprise-class NAS and get some experience; I suspect it is also a great way for EMC to get community feedback and input on the usability and features in the VNX. A win/win as we like to say.

But EMC have another NAS product, one that I suspect over the long term will become the foundation of their NAS offerings; it is certainly important to their Big Data aspirations; yes, the Isilon range of Scale-Out NAS. I’d always suspected that there must be an appliance version kicking around; I mean, anyone who has ever played with an Isilon box will have realised that it really is just an Intel server. You can order the SuperMicro motherboard which it is built on and pretty much build your own if you wanted.

At a recent meeting, I was talking about the need for a training/test system for some of my guys to play on and lamenting that I probably could not justify the cost; our Isilon TC said ‘Why don’t I send you links to the Virtual Appliances?’

I bit his hand off and now I have a little virtual Scale-Out NAS to play with. It’s pretty much as easy to set up as the real thing without all the hassles of racking and stacking; I’ve got it running with 5 virtual nodes and a small amount of disk and can mess around with it to my heart’s content.

I wish that you guys could also have a play but perhaps the guys from the Isilon team are a bit nervous that we might do some silly things like put it into a production environment. I guess some of you might be that stupid but it didn’t stop them putting out the Celerra/Clariion version. So, EMC, can you give the community an early Christmas present and get the Isilon appliance out there?

Scale-Out NAS is going to be a really important growth sector; OneFS is a great product that takes away a lot of the pain of building scale-out NAS, and an appliance would help to demystify the whole thing.

At worst, a few geeks like me get to have some fun and you get some interesting feedback; but I suspect you might find some people doing some interesting things with it and building a decent community.

And IBM, perhaps you could do the same and build a SONAS appliance and get that out as well?

I’d love to see EMC make the Enginuity appliance generally available but that does have stupid memory and CPU requirements, so I’m not holding my breath for that….

Flash is dead but still no tiers?

Flash is dead; it’s an interim technology with no future and yet it continues to be a hot topic and technology. I suppose I really ought to qualify the statement: Flash will be dead in the next 5-10 years and I’m really thinking about the use of Flash in the data-centre.

Flash is important as it is the most significant improvement in storage performance since the introduction of the RAMAC in 1956; disks really have not improved that much and although we have had various kickers which have allowed us to improve capacity, at the end of the day they are mechanical devices and are limited.

15k RPM disks are pretty much as fast as you are going to get and although there have been attempts to build faster-spinning stuff, reliability, power and heat have really curtailed these developments.

But we now have a storage device which is much faster and has very different characteristics to disk and as such, this introduces a different dynamic to the market. At first, the major vendors tried to treat Flash as just another type of disk; then various start-ups questioned that and suggested that it would be better to design a new array from the ground-up and treat Flash as something new.

What if they are both wrong?

Storage tiering has always been something that has had lip-service paid to it but no-one has ever really done it with a great deal of success. And when all you had was spinning rust, the benefits were less realisable; it was hard work and vendors did not make it easy. They certainly wanted to encourage you to use their more expensive Tier 1 disk and moving data around was hard.

But Flash came along with an eye-watering price-point; the vendors wanted to sell you Flash but even they understood that this was a hard sell at the sort of prices they wanted to charge. So Storage Tiering became hot again; we have the traditional arrays with Flash in them and the ability to automatically move data around the array. This appears to work with varying degrees of success but there are architectural issues which mean you never get the complete performance benefit of Flash.

And then we have the start-ups who are designing devices which are Flash-only; tuned for optimal performance and with none of the compromises which hamper the more traditional vendors. Unfortunately, this means building silos of fast storage and everything ends up sitting on this still-expensive resource. When challenged about this, the general response you get from the start-ups is that tiering is too hard and you should just stick everything on their arrays. Well, obviously they would say that.

I come back to my original statement: Flash is an interim technology and will be replaced in the next 5-10 years with something faster and better. It seems likely that spinning rust will hang around for longer and we are heading to a world where we have storage devices with radically different performance characteristics; we have a data explosion and putting everything on a single tier is becoming less feasible and sensible.

We need a tiering technology that sits outside of the actual arrays; so that the arrays can be built optimally to support whatever storage technology comes along. Where would such a technology live? Hypervisor? Operating System? Appliance? File-System? Application?

I would prefer to see it live in the application and have applications handle the life of their data correctly but that’ll never happen. So it’ll probably have to live in the infrastructure layer and ideally it would handle a heterogeneous multi-vendor storage environment; it may well break the traditional storage concepts of a LUN and other sacred cows.
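To make the idea concrete, here is a minimal sketch of what policy-driven tiering in the infrastructure layer might look like, assuming two hypothetical tiers simply mounted as directories (/mnt/flash and /mnt/disk) and using last-access time as the placement policy; this is purely an illustration of the concept, not any vendor’s implementation.

```python
# Hypothetical sketch: a naive, access-driven tiering policy applied
# outside the arrays. The mount points and threshold are assumptions.
import os
import shutil
import time

FLASH_TIER = "/mnt/flash"    # assumed mount point of the fast tier
DISK_TIER = "/mnt/disk"      # assumed mount point of the capacity tier
HOT_THRESHOLD = 24 * 3600    # data touched within a day counts as "hot"

def migrate(src_dir, dst_dir, keep_hot):
    """Move files out of src_dir when their temperature no longer matches it."""
    now = time.time()
    for name in os.listdir(src_dir):
        src = os.path.join(src_dir, name)
        if not os.path.isfile(src):
            continue
        is_hot = (now - os.stat(src).st_atime) < HOT_THRESHOLD
        if is_hot != keep_hot:
            shutil.move(src, os.path.join(dst_dir, name))

migrate(FLASH_TIER, DISK_TIER, keep_hot=True)    # demote cold data to disk
migrate(DISK_TIER, FLASH_TIER, keep_hot=False)   # promote hot data to flash
```

A real implementation would of course have to preserve the namespace across moves (stubs, symlinks or a virtualised mount) so that applications never notice the migration, which is exactly where the traditional LUN-centric view starts to break down.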

But in order to support a storage environment that is going to look very different, or at least should look very different, we need someone to come along and start again. There are various stop-gap solutions in the storage virtualisation space but these still enforce many of the traditional tropes of today’s storage.

I can see many vendors reading this and muttering ‘HSM, it’s just too hard!’ Yes it is hard but we can only ignore it for so long. Flash was an opportunity to do something; mostly squandered now but you’ve got five years or so to fix it.

The way I look at it, that’s two refresh cycles; it’s going to become an RFP question soon.

Software Sucks!

Every now and then, I write a blog article that could probably get me sued, sacked or both; this started off as one of those and has been heavily edited so as to avoid naming names…

Software Quality Sucks; the ‘Release Early, Release Often’ meme appears to have permeated every level of the IT stack; from the buggy applications to the foundational infrastructure, it appears that it is acceptable to foist beta-quality code on your customers as a stable release.

Running a test team for the past few years has been eye-opening; by the time my team gets its hands on your code…there should be no P1s and very few P2s, but the amount of fundamentally broken code that has made it to us is scary.

And running an infrastructure team as well, this goes beyond scary and heads into the realms of terror. Just to keep things nice and frightening, every now and then I ‘like’ to search vendor patch/bug databases for terms like ‘data corruption’, ‘data loss’ and other such cheery phrases; don’t do this if you want to sleep well at night.

Recently I have come across such wonderful phenomena as a performance monitoring tool which slows your system down the longer it runs; clocks that drift for no explicable reason and can lock out authentication; reboots which can take hours; non-disruptive upgrades which are only non-disruptive if run at a quiet time; errors that you should ignore most of the time but which sometimes might be real; files that disappear on renaming; updates replacing an update and making a severity 1 problem worse…even installing fixes seems to be fraught with risk.

Obviously no-one in their right mind ever takes a new vendor code release straight into production; certainly your sanity needs questioning if you put a new product which has less than two years’ GA into production. Yet often the demands are that we do so.

But it does leave me wondering: has software quality really got worse? It certainly feels like it has. So what are the possible reasons, especially in the realms of infrastructure?

Complexity? Yes, infrastructure devices are trying to do more; nowhere is this more obvious than in the realms of storage, where both capabilities and integration points have multiplied significantly. It is no longer enough to support the FC protocol; you must support SMB, NFS, iSCSI and integration points with VMware and Hyper-V. And with VMware on pretty much a 12-month refresh cycle, it is getting tougher for vendors and users to decide which version to settle on.

The Internet? How could this cause a reduction in software quality? Actually, the Internet as a distribution method has made it a lot easier and cheaper to release fixes; before, if you had a serious bug, you would find yourself having to distribute physical media and often, in the case of infrastructure, mobilising a force of engineers to upgrade software. This cost money, took time and generally you did not want to do it; it was a big hassle. Now, you send out an advisory notice with a link and let your customers get on with it.

End-users? We are a lot more accepting of poor-quality code; we are used to patching everything from our PCs to our consoles to our cameras to our TVs; especially those of us who work in IT and find it relatively easy to do so.

Perhaps it is time to start a ‘Slow Software Movement’ which focuses on delivering things right first time?

Why So Large?

One of the most impressive demonstrations I saw at SNW Europe was from the guys at Amplidata; on their stand, they had a tiny implementation of AmpliStor with the back-end storage being USB memory sticks. This enabled a quick and effective demonstration of their erasure coding protection and the different protection levels on offer: pull one stick and both video streams kept working, pull another one and one stopped while the other kept playing.

It was a nice little demonstration of the power of their solution; well I liked it.
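For anyone who hasn’t seen erasure coding in action, here is a toy sketch of the underlying principle using a single XOR parity fragment, which is enough to survive the loss of any one ‘memory stick’; AmpliStor itself uses a far more general code that tolerates multiple simultaneous losses, so treat this purely as an illustration of the idea.

```python
# Toy illustration of erasure coding: data is cut into fragments and a
# parity fragment is added; any single lost fragment can be rebuilt.
from functools import reduce

def xor_bytes(fragments):
    """XOR equal-length byte strings together, byte by byte."""
    return bytes(reduce(lambda x, y: x ^ y, group) for group in zip(*fragments))

def encode(data_fragments):
    """Return the data fragments plus one XOR parity fragment."""
    return list(data_fragments) + [xor_bytes(data_fragments)]

def reconstruct(stored, lost_index):
    """Rebuild a single missing fragment from the surviving ones."""
    survivors = [f for i, f in enumerate(stored) if i != lost_index]
    return xor_bytes(survivors)

data = [b"vid", b"eo ", b"str"]            # three equal-sized data fragments
sticks = encode(data)                      # four "memory sticks"
assert reconstruct(sticks, 1) == data[1]   # pull any one stick; data survives
```

The real schemes spread many more fragments across drives or nodes and let you dial in how many losses to tolerate, which is exactly the ‘different protection levels’ the demonstration showed.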

But it did start me thinking: why do we assume that object stores should be large? Why do the start-ups really only target petabyte+ requirements? Certainly those who are putting together hardware appliances seem to want to play in that space.

Is there not a market for a consumer-level device? Actually, as we move to a mixed-tier environment even at the consumer level with SSD for application/operating system and SATA for content, this might start to make a lot of sense.

We could start to choose protection levels for content appropriate to the content; so we might have a much higher level of protection for our unique content, think photos and videos of the kids; we might even look at some kind of Cloud storage integration for off-site.

And then I started to think some more; is there not a market for a consumer device which talks NFS, SMB and S3? Probably not yet, but there may well be in the future as applications begin to support things like S3 natively. I can see this playing especially well for consumers who use tablets as their primary computing device; many apps already talk to the various cloud storage providers and it is not a stretch to think that they might be able to talk to a local cloud/object store as well.
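As a rough sketch of what ‘talking S3 to a local box’ might look like from an application’s point of view, here is a minimal example using a modern Python S3 client pointed at a hypothetical device on the LAN; the endpoint address, bucket name and credentials are all invented for illustration, and any S3-compatible box or service would look much the same.

```python
# Hypothetical sketch: an app writing to a local S3-compatible object
# store (a home NAS, say) instead of a public cloud. Endpoint, bucket
# and credentials are made up for illustration.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="http://nas.local:9000",    # assumed local object store
    aws_access_key_id="local-key",
    aws_secret_access_key="local-secret",
)

s3.create_bucket(Bucket="family-photos")
with open("kids-birthday.jpg", "rb") as photo:
    s3.put_object(Bucket="family-photos",
                  Key="2012/kids-birthday.jpg",
                  Body=photo)

# Point the same client at a public cloud endpoint and you have the
# off-site copy mentioned above.
```

The appeal is that the application code doesn’t care whether the bucket lives on a box under the stairs or in someone else’s data-centre.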

I have seen home NAS boxes which support S3 as a back-up target; in fact, another device that I saw at SNW, which is more an SMB device than a home NAS, supports a plethora of cloud storage options. The Imation DataGuard data protection device looks very interesting from that point of view. So when will we see the likes of Synology, Drobo and their competitors serve object storage and not just use it as a back-up target?

I think it will happen but the question is, will Microsoft, Apple etc beat the object storage vendors to the punch and integrate it into the operating system?

Good Enough Isn’t?

One of the impacts of the global slowdown has been that many companies have been focussing on services and infrastructure which are just good enough. For some time now, many of the mainstream arrays have been considered to be good enough. But the impact of SSD and Flash may change our thinking and in fact I hope it does.

So perhaps Good Enough Isn’t Really Good Enough? Good Enough is only really Good Enough if you are prepared to stagnate and not change; if we look at many enterprise infrastructures, they haven’t really changed that much over the past 20 years and the thinking behind them has not changed dramatically. Even virtualisation has not really changed our thinking; despite the many pundits and bloggers like me who witter on about service thinking and Business alignment, for many it is still just hot air.

There appears to be a lack of imagination that permeates our whole business; if a vendor turns up and says ‘I have a solution which can reduce your back-up windows by 50%’, the IT manager could think ‘Well, I don’t have a problem with my back-up windows; they all run perfectly well and everyone is happy…’. What they don’t tend to ask is ‘If my back-up windows are reduced by 50%, what can I do with the time that I have saved; what new service can be offered to the Business?’

Over the past few years, the focus has been on Good Enough; we need to get out of this rut and start to believe that we can do things better.

As storage people, we have been beaten up by everyone with regard to cost and yet I still hear, time and time again, that storage is the bottleneck in all infrastructures: time to provision, performance and capacity; yet we are still happy to sit comfortably talking about ‘Good Enough Storage’.

Well, let me tell you that it isn’t ‘Good Enough’ and we need to be a lot more vocal in articulating why it isn’t and why doing things differently would be better; working a lot more closely with our customers in explaining the impact of ‘Good Enough’ and letting them decide what is ‘Good Enough’.

Big Answers Need Big Data?

At a SNW briefing session today, X-IO (Xiotech) talked a lot of sense about Big Data; in fact it was almost the most sense that I have heard spoken about Big Data in a long time. The fact is that most Big Data isn’t really that big and the data-sets are not huge; there are exceptions but most big data-sets that many companies will use can be measured in a few terabytes and not the tens or hundreds of terabytes that the big storage vendors want to talk about.

Take sentiment data derived from social networking; these are not necessarily big data sets. A tweet, for example, is 140 characters, so roughly 140 bytes…a terabyte is 1,099,511,627,776 bytes; we can store a lot of tweets in a terabyte and within that data there is a lot of information that can be extracted.
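As a back-of-the-envelope illustration (ignoring per-tweet metadata, indexes and encoding overhead, which dominate in any real system):

```python
# How many raw 140-byte tweets fit in a terabyte?
TEBIBYTE = 2 ** 40        # 1,099,511,627,776 bytes
TWEET_BYTES = 140

print(TEBIBYTE // TWEET_BYTES)   # ~7.85 billion tweets
```

Even allowing an order of magnitude for the overhead, that is a lot of raw material for analysis sitting in a single terabyte.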

In fact, there are probably some Big Answers in that not so Big Data but we need to get rid of the noise; in order to do this, we need to be able to process this data differently and directly. The most important thing that the storage can do is to vanish and become invisible; allow data processing to be carried out in the most natural way and not require various work-arounds which hide the deficiencies of the storage.

If your storage vendor spends all their time talking about the bigness of data; then perhaps they might be the wrong vendor.

Ho Hum!

Recently we’ve had a few problems with one of our storage systems; we were getting some strange errors and didn’t know what was going on. And although the problems were irritating, they weren’t show-stopping, but as volumes increased it was becoming more of a problem. The support logs weren’t giving us any real clues and the response we got from vendor support was to go to the latest version. Now, I am very averse to this sort of problem fixing; I want to know why we should go to the latest version and where in the patch notes it mentions this particular problem and how it is going to fix it.

Anyway, this story has a kind of funny ending; firstly, the vendor turned up mob-handed to a meeting and brought an expert along. I am always cynical when a vendor turns up with an expert, as they are often nothing of the sort, but this time the guy won our respect by saying, ‘I see that the Support guys are telling you that you should upgrade your software; don’t do that, I’m not convinced that’s the problem…’ Then he arranged for us to get some performance management software on trial.

Unfortunately I have no idea whether this would have helped, as one of my colleagues noticed a pattern and decided that a reboot might help things. And it did…the problem has now gone away; unfortunately I have a horrid suspicion that it could return. But hey, the user has stopped moaning for the time being and I can go back to worrying about why our insane data growth carries on getting more insane!

And of course, we could have upgraded the software, which would have necessitated a reboot, and the problem would have gone away as well! Everyone would have believed that the problem was fixed and carried on happily…so at least I still know there’s a problem and I know the solution is a reboot…

Ho Hum!

Wellies!

I was watching the iPhone 5 announcement with a sinking feeling; I am at the stage where I am thinking about upgrading my phone, have been considering coming back to Apple, and I really wanted Apple to smash the ball over the pavilion and into the car-park (no baseball metaphors for me). But they didn’t; it’s a perfectly decent upgrade but nothing which has made my mind up for me.

I am now in the situation where I am considering another Android phone, an iPhone or even the Lumia 920 and there’s little to choose between them; I don’t especially want any of them, they’ll all do the job. I just want someone to do something new in the smartphone market but perhaps there’s nothing new to do.

And so this brings me onto storage; we are in the same place with general-purpose corporate storage; you could choose EMC, NetApp, HDS, HP or even IBM for your general-purpose environment and it’d do the job. Even price-wise, once you have been through the interminable negotiations, there is little between them. As for TCO, you choose the model which supports your decision; you can make it look as good or as bad as you want. There’s not even a really disruptive entry to the market; yes, Nexenta are getting some traction but there’s no big market swing.

I don’t get the feeling that there is a big desire for change in this space. The big boys are packaging their boring storage with servers and networking and trying to make it look interesting and revolutionary. It’s not.

And yet, there are more storage start-ups than ever before, but they are all focused around some very specific niches and we are seeing these niches becoming mainstream or gaining mainstream attention.

SSD and flash-accelerated devices aimed at the virtualisation market: there’s a proliferation of these appearing from players large and small. These are generally aimed at VMware environments; once I see them appearing for Hyper-V and other rivals, then I’ll believe that VMware is really being challenged in the virtualisation space.

Scalable bulk storage, be it Object or traditional file protocols: we see more and more players in this space. And there’s no real feeling of a winner or a dominant player; this is especially true in the Object space, where the lack of, or even the perceived lack of, a standard is hampering adoption by many who would really be the logical customers.

And then there is the real growth, where the exciting stuff is happening: the likes of Dropbox, Evernote and others. It is all about the application and the API access. This is kind of odd; people seem to be willing to build applications, services and apps around these proprietary protocols in a way that they feel unwilling to do with the Object Storage vendors. Selling an infrastructure product is hard; selling an infrastructure product masquerading as a useful app…maybe that is the way to go.

It is funny that some of the most significant changes in the way that we will do infrastructure and related services in the future are being driven from completely non-traditional spaces…but this kind of brings me back round to mobile phones: Nokia didn’t start as a mobile company and, who knows, perhaps it’ll go back to making rubber boots again.