
Pooh and the Cloud

As most people are aware, Winnie the Pooh is one of the greatest minds who ever lived and through his many adventures in 100 Acre Wood, we can learn many things about many subjects. Over the next few weeks, I shall be examining what he and his friends have to teach us about Cloud Computing. 

I think one of the first lessons that can be learnt from the 'Bear of Very Little Brain' is that calling yourself a Cloud and attempting to disguise yourself as a Cloud doesn't often fool the bees that make the honey. In fact they might indeed sting you and you could come crashing down into a prickly gorse bush. 

In fact Pooh, whilst hanging from the balloon, pretending to be a Cloud and getting stung by suspicious bees, comes to the conclusion that the bees are 'the Wrong Sort of Bees' who make the wrong type of honey.

Pooh is acting like many a vendor in the Cloud space:

1) Rolling around in mud and pretending to be a Cloud Company! 

2) When customers express suspicion about their cloud credentials, declaring that these are obviously the Wrong Sort of Customers!

And although none of them have come crashing to the ground yet, I suspect some of them are looking slightly uncomfortable. Let's hope that they have a softer landing than a prickly gorse bush.

In my next entry, I will examine what lessons we can learn from Pooh and friends about migration to and from the Cloud.

Infrastructure is Software

Chuck has just written a blog entry which is very similar to one that I was working on, and I agree with a lot of what he says. But I'd take it a lot further; there are some interesting conclusions and possibilities along the way which could open the market to interesting innovation going forward.

Chuck talks about storage as being software; go read his blog, there's little I would disagree with there at all; well, until he starts talking about EMC products! However, I would go much further and suggest that we are getting to the stage where all infrastructure, at a very real level, is becoming software. Although I am not totally enamoured with the Intel-focussed monoculture, it has allowed a common hardware platform and it is flattening the playing field when it comes to hardware differentiation.

In fact, differentiating your hardware is going to become increasingly expensive and hard, so why bother? Yes, there will be edge cases where hardware differentiation will be a key USP, but in 90% of all use-cases an Intel box assembled from bog-standard off-the-shelf parts will be good enough.

If we then factor in pervasive virtualisation in the data-centre, we have a platform which has become pretty much standardised and commoditised. I would like to see more 'standardisation' in the virtualisation arena, but it's not that bad at present and you really do not have that many choices.

So this has some interesting results: it lowers the cost of entry for new players in the market. If they no longer have to spend time developing a hardware platform, and can package their infrastructure product not as a device but simply as a 'soft appliance', they can get it out there a lot quicker. If they can rely on the virtualisation layer to provide a common way of accessing hardware services, they can develop and try things much faster, and the cost of failure is a lot less. It also allows integration with other types of infrastructure to be tested without a lot of expense; a plus both for the developer and for the interested infrastructure specialist.

The speed to market is greatly enhanced and they can get it in front of idiots like me who will download the appliance and have a play. And it's not just storage appliances: there are products like Traffic Squeezer which do WAN network traffic acceleration, and there are more open-source router projects than you can shake a stick at.

I am slowly building a virtual Data Centre out of open source or at least free products; I want to see how far it is possible to get. I'm not suggesting that anyone would do this for real today, although I can see some Cloud providers having a really good bash at it. This approach probably would not make sense for most companies as a complete strategy, but as part of a strategy it may well be worth considering. There are large companies out there who invest in start-ups simply to develop stuff for them to use internally; the advent of 'infrastructure as software', without a huge investment in tooling up to build hardware, makes this a very viable approach for the biggest companies.

Google do it, Amazon do it, but often you hear the comment, 'Well, they employ very clever people, so it's easier for them!' Well, don't you employ clever people? Or are you saying that all your employees are second-rate?

It took me a long time to come round to the idea that commoditisation and standardisation could drive innovation at all levels; I now believe that it could. It's not just about more and more Web applications; it could drive a new wave of infrastructure innovation as well. 

This leaves some interesting conflicts on the horizon for companies like VMware; perhaps VMware might want to get into infrastructure appliances but that would lead them into direct competition with the Mothership. Infrastructure as software; interesting times.

Here are some links to things worth looking at or playing with; the list possibly includes things which are not strictly infrastructure but are interesting anyway. Some are great, some not so great; some show great potential.

Traffic Squeezer

Openfiler

Samba CTDB

Vyatta

Amanda

OpenDedup

And of course there is Ubuntu Server, which will let you build your own cloud for free, and there are various ZFS-based storage appliances. You can build your own appliances as well, packaging up and integrating components in the way you want.

One area where EMC have shown great foresight is in investing in the vSpecialist team and building a team out of diverse specialities, because as infrastructure becomes software, the cross-over between the infrastructure disciplines will become ever more necessary. Now, the vSpecialist team may be very focused on the 'EMC product set', but if I were an SI or another vendor playing in this space, I would be looking at doing something very similar in the near future.

Clusters, WNPoT and Great Blogs…

I was researching a blog entry I was going to write about clustering and clustered file-systems, positing whether the future of x86 virtualisation was a Single System Image hypervisor allowing seamless automatic migration of virtual machines between hosts, whether we could see some kind of automated tiering for applications, or whether just faking SSI with clever load-balancing/migration technologies might be good enough, when I came across Greg Pfister's blog here.

Greg wrote 'In Search of Clusters', which was pretty much the Bible when I was working in HPC and clustering; unfortunately I don't know where my copy is any more, and I note that the publication date was 1997, so it's probably a little bit out of date. But his blog is a great read and I can recommend it to anyone who is interested in virtualisation and Cloud; it'll fill the gap until he writes another book.

And he points to another great blog written by Charlie Stross who has come up with the acronym 'WNPoT' which stands for Wonderful New Piece of Technology which I suspect is another way of saying 'Awesome Sauce'. 

P.S. Yes, I suspect we will see some attempts at an SSI hypervisor; IBM have a statement of direction for z/VM leading down that route, so I expect some brave soul to try to do the same for x86. But for the time being, I think faking it with some good tools might be good enough.

Atmos Offline?

So Atmos Online has become Atmos Offline; well, okay, not quite yet, but it's on the way to becoming so. There does appear to be a lot of spinning going on from EMC about precisely what Atmos Online was, and that is to be expected, but it's really okay to try something and fail, it really is. I don't think EMC need to lick their wounds too much on this, and I suspect they have learnt quite a lot from the experience, i.e. being a Service Provider is actually quite hard.

But it begs another question: I wonder how well the other EMC Atmos-based services are doing and how much traction they are getting against S3? It's funny: many of EMC's competitors in the storage world talk about EMC as some kind of marketing behemoth, but in the competition for mindshare and traction in this space, they appear to be really struggling to get any kind of message out there. I suspect many of EMC's competitors will also struggle to get traction against S3, so it is far too early for them to be crowing about EMC's failure as they haven't delivered anything in this space either.

Although it is still early days, it does appear at present that Amazon's S3 is really ruling the roost in this space but EMC's Atmos in a Box might help them in this space; if they bother to tell anyone about it. 

AIAB may allow developers to play with Atmos and develop cool services on top of it. EMC could also do with getting some books written on working with and developing for Atmos; building a development community around Atmos might also be a good thing. EMC are not a company which immediately comes to a developer's mind when they are developing cloud services; they need to work to change this. This is a new market for EMC to learn how to compete in.

But let's not forget that in the pile it high and sell it cheap world of consumer cloud storage, EMC have Mozy and Iomega. This combination is a service that people do want and with people like Asus offering cloud storage with their NetBooks, this is a growth market. 

I wouldn't count EMC out of the public Cloud market just yet; an early knock-down doesn't necessarily mean an early knock-out. If anything, they got into the ring and tripped over their shoelaces whilst swinging at someone who was either in a different ring or hadn't actually turned up yet.

Duck the bullets….

Chris Mellor's blog entry about 'The Storage Array Killing Fields' and Scott Lowe's entry on 'The Future of NetApp' touch on very similar areas: what is the future of the storage array manufacturers? In Scott's case the question is about a specific array vendor, but many of his points can be applied to any number of storage vendors.

So what is the future of the storage array? Are storage arrays even relevant today? This is a question which is going to be increasingly asked as we move towards a commodity-based approach, and it has wide-reaching implications for the storage array vendors. Of course, when we talk about the future, what horizon are we talking about? 2 years, 5 years or even 15 years?

I suspect the short-term future of the storage array, certainly the next 2 years, is probably fairly rosy. Demand for storage is strong and the general herd are not yet transitioning to dynamic, commodity-based data centres. If you are a storage array vendor, you should be able to continue to make hay. But I think that you only have to look at the high-end array market to see some interesting trends.

VMAX? VMAX almost stands by itself at the moment; there are no real competing products in the market today. Huh? What about HDS and IBM? Well, HDS are due a refresh of the USP, and then I suspect that VMAX will have a real competitor in that space. IBM, however, really don't want to sell you a DS8K unless you are a mainframe shop. IBM are no longer that interested in the high-end array; if you are a non-mainframe shop, they will try to sell you XIV, and they will do some interesting things to do so.

I do not expect HDS, EMC or IBM to vacate this space until the mainframe dies, but the reality is that for most IT shops, the high-end array is probably not where they want to be; many of them have not realised this and are still buying high-end arrays because they always have done and that is what they are comfortable doing. In reality, Clariion, AMS, FAS, XIV, 3Par and a whole raft of others are going to do the job for most purposes; probably 80-90% of the workloads which run on high-end arrays today no longer belong there.

It is in this mid-range sector that we are seeing most mainstream development; some of the major new features, such as automated tiering and thin provisioning, have come from there. It is a very busy sector, however, and as Chris points out in his blog entry, if integrated stacks become pervasive there are going to be casualties, and some of the casualties could be large.

But there is another emerging market: bulk commodity storage, 'Cloud Storage' if you will, and I'm not expecting this to be delivered as part of the integrated stack for some time. My reasoning is that it is not yet well enough defined as a concept; customers are not really ready for it, and delivering it as part of the stack adds very little value to the stack at present.

Arguably much of the data which sits on mid-range storage could probably shift down to bulk commodity storage but I think that there needs to be more clarity around the delivery model for bulk storage and how it will/should be integrated. 

So there is a window of opportunity for vendors to innovate in this space, but will the traditional vendors seize it? Or will they see 'Cloud Storage' as simply another gateway into their current cash-cow? Or perhaps paint the cow a different colour and call it something else?

I think we've seen evidence of all of these approaches from the traditional storage vendors and maybe their history prevents them from seeing that the delivery model of their product needs to change. 

In fact, this emerging market is being ignored by a great number of these traditional vendors, who are hypnotised by the current array market which, as Chris points out, is about to become a killing field. There are going to be casualties… move to somewhere where lots of people are not trying to shoot each other. The new field is getting crowded but there's still time to stake a claim.

No Longer Functional

Having worked in Corporate Infrastructure for many years, fighting the good fight, trying to get the enemy to conform to best practice and generally to think beyond the next line of code that they are writing, I surrender! I throw the towel in!

I'd like to take an example from my own experience without being too specific: currently, in the race to innovate, many corners are cut. Every screen, whether it is a TV screen or the screen on a hand-held games device, is now a potential target for a product.

Our developers build proof-of-concept systems on servers under their desks at best, and often on desktops which might be lying around spare; proof-of-concepts are demonstrated, and then proof-of-concepts become products. Cycle times for delivery of new product are often measured in weeks, not the months and years of the past.

No thought is given to non-functional requirements; the capability is all, and now it's a product with paying customers who expect it to be up and running. In fact, the mere mention of non-functional requirements is enough to send customers running for the hills; they don't want to know.

But of course, in reality we then have to retro-fit all those requirements needed to make it a supportable service. And whilst we are doing this, our people are in a constant struggle to keep the system up. At times, our people do remarkable things in this battle and they should be commended for doing so.

But there must be a better way than PCs under desks and developers acting like MacGyver to get a new service together? I think there is: a dynamic, scalable, on-demand infrastructure; call it Cloud, call it what you will. Users should be given the ability to throw up an environment quickly and easily, with almost no thought given to the availability and scalability requirements which will surely follow if the development is a success.

Yes, some use is made of public Cloud, but I think that pales into insignificance when compared to the use of commodity hardware which is just lying about. This has been going on since the PC reared its ugly head, but as PCs have become more powerful, with the ability to run server operating systems, and especially with the rise of Open Source, every developer has the ability to build a 'production environment' without permission.

Are they doing things in Cloud? Yes, they surely are, but I suspect it is a more common situation for an infrastructure team to be presented with a skunk-works system built out of commodity kit and be told to make it live…tomorrow.

And it is here that the Cloud comes into play; it would be much better to have built an environment which allows developers the flexibility and agility that they require, having them work in an environment which can be promoted to production rapidly. An environment which is as cheap and as flexible as the development teams believe their PCs are, but which gives the infrastructure teams the supportability that they require.

Non-functional requirements, whether we like it or not, are just bolt-ons; after-thoughts. They are seen as the obstacles which stop customers getting the new services they require, and it's time to move on. I still believe that developers should pay attention to non-functional requirements; I still believe that systems should be designed with availability, scalability, recoverability etc. in mind. But I think that this now needs to be provided in such a way that it is transparent to all.

Of course, this transparency should be complete and open, allowing rapid migration between providers, private and public…but that is another story!

One door closes, another opens

I find the announcements of the past few weeks from EMC and NetApp around their cloud storage offerings really interesting; they show an interesting contrast in approach at the moment: one increasingly controlling and one lessening control.

EMC finally announced a software appliance version of Atmos, allowing you to use Atmos with any storage which is certified with VMware; this is a long overdue move, as it's been pretty much an open secret that a software version of Atmos has been in existence since prior to launch. It is a software product.

And NetApp announced the repackaged Bycast StorageGrid product; a software product which supported a number of third party storage devices but now only supports NetApp storage. 

As I look at storage and the provision of storage in my working life, I am finding more and more features moving up the stack into the software layer, storage infrastructure is becoming more and more hardware agnostic. More and more the value-add sits above the storage array controller; I can buy pretty much any array and stick it behind a software layer which gives me all the features that I require. 

Beyond the basic provision of RAID, the value of the array in such an infrastructure is pretty minimal, and with object stores, even the value of RAID must be questionable. Yes, I can use third-party storage with the StorageGrid product, but I now need to put it behind a vSeries; this seems to add an unnecessary layer. EMC's move of using a hypervisor layer to ensure that they are dealing with a known environment feels much more sensible in this case.
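
To make the point about RAID concrete, here is a toy sketch (entirely hypothetical and illustrative, not any vendor's actual implementation) of why an object store at the software layer makes RAID below it less valuable: if each object is replicated across several independent nodes, the loss of an entire node loses no data, regardless of what the underlying arrays do.

```python
# Toy sketch of software-layer object replication -- hypothetical,
# illustrative only. Each object is written to several independent
# "nodes"; losing a whole node (and whatever array sat behind it)
# loses no data, so RAID beneath this layer adds little.

class ObjectStore:
    def __init__(self, node_count=4, replicas=2):
        self.nodes = [dict() for _ in range(node_count)]
        self.replicas = replicas

    def _placement(self, key):
        # Deterministically choose which nodes hold this object.
        start = sum(key.encode()) % len(self.nodes)
        return [(start + i) % len(self.nodes) for i in range(self.replicas)]

    def put(self, key, data):
        for n in self._placement(key):
            self.nodes[n][key] = data

    def get(self, key):
        # Any surviving replica will do.
        for n in self._placement(key):
            if key in self.nodes[n]:
                return self.nodes[n][key]
        raise KeyError(key)

    def fail_node(self, n):
        # Simulate losing an entire node.
        self.nodes[n] = dict()

store = ObjectStore()
store.put("invoice-2010-05", b"...")
store.fail_node(store._placement("invoice-2010-05")[0])
store.get("invoice-2010-05")  # still readable from the surviving replica
```

Real object stores layer erasure coding, rebalancing and geo-distribution on top of this idea, but the principle is the same: resilience moves up into software.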

Yes, in generalised block-level virtualisation, NetApp, IBM and HDS really are currently streets ahead in virtualising third-party storage, but if EMC start to make even more use of the hypervisor layer, they could make real strides here. Certainly in the mid-range, less performance-sensitive space, this approach may just be good enough.

So I really struggle to see the logic in the NetApp move; mass object storage is about commoditisation and moving features to a more independent layer. And it's also kind of weird to be in the position where EMC actually give you more choice about your back-end storage than NetApp; it is simply wrong! *shudder*

It's also pretty cool to be able to pretty much build a complete virtual data-centre on my desktop PC using virtual appliances and software layers of various flavours; pity none of them are NetApp's at the moment as they are simply too limited. 

And Barry W, if you read this; pull your finger out and get a virtual SVC appliance available; how hard can it be??

P.S. Hah, how many people read the title and thought…another blogger going to join EMC!?

NetApp StorageGrid – More Questions than Answers?

Okay, so NetApp have announced the NetApp StorageGrid product; however, at the moment, as far as I can see, it is a simple rebrand of the Bycast product. I am not sure whether I was expecting anything more, or whether I was expecting them to go dark with the Bycast product set for the time being whilst they work out what the hell they are going to do with it, and at least come up with an integration strategy for the products.

Like many, I wonder what this does to the whole Unified Storage message, because NetApp now have two disparate storage product sets which are not integrated. I'm sure that they are briefing the integration message under NDA, and if not, I'd ask why. But I'd be interested to see what form the integration takes: will it be at the tools level, or will it be more fundamental integration, more akin to OnTap 8?

As NetApp have announced it under the Storage Management Software product set, it appears to be the former, certainly for the short to medium term; and I suspect that NetApp are going to be very wary about going after full-blown integration, or at least a public statement on it, after the tortuous integration of Spinnaker.

The data sheet shows a software gateway layer sitting above the OnTap filers; well, I think that's what it shows. It says that the front-end app server supports NFS/CIFS/HTTP (RESTful) protocols, communicating with the back-end storage via NFSv3; so theoretically, the back-end storage could be anything supporting NFSv3? But at present the data sheet actually shows a very restricted supported storage environment, namely FAS31x0 and FAS20xx and only SATA drives, so there seems to be no way of utilising your legacy storage in your StorageGrid. This is a little disappointing but no huge surprise; if EMC decide to 'support' third-party storage with Atmos, it should be no biggie for NetApp to follow suit with StorageGrid; or perhaps vice versa.
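
As a thought experiment, the gateway pattern described in the data sheet, RESTful object access on the front end with plain NFSv3 files on the back end, boils down to a mapping from object keys to file paths. The sketch below is purely hypothetical and is not NetApp's implementation; the class name and layout are my own invention, and the backing directory stands in for what would be an NFS mount.

```python
# Hypothetical sketch of an object gateway: RESTful-style keys on the
# front end, ordinary files (e.g. an NFSv3 mount) on the back end.
# Illustrative only -- not NetApp's StorageGrid implementation.

import hashlib
import os
import tempfile

class ObjectGateway:
    def __init__(self, backing_path):
        # backing_path would be an NFS mount in the real architecture;
        # here it is just a local directory.
        self.backing_path = backing_path

    def _path_for(self, key):
        # Hash the key so millions of objects don't land in one directory.
        digest = hashlib.sha1(key.encode()).hexdigest()
        return os.path.join(self.backing_path, digest[:2], digest)

    def put(self, key, data):          # cf. HTTP PUT /objects/<key>
        path = self._path_for(key)
        os.makedirs(os.path.dirname(path), exist_ok=True)
        with open(path, "wb") as f:
            f.write(data)

    def get(self, key):                # cf. HTTP GET /objects/<key>
        with open(self._path_for(key), "rb") as f:
            return f.read()

with tempfile.TemporaryDirectory() as mount:
    gw = ObjectGateway(mount)
    gw.put("reports/2010/may.pdf", b"%PDF...")
    assert gw.get("reports/2010/may.pdf") == b"%PDF..."
```

Seen this way, the interesting question is not the gateway itself but everything the data sheet doesn't show: metadata, replication, and why the back end should be restricted to particular filers at all.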

And as Bycast StorageGrid was resold by a number of other vendors, what is the ongoing roadmap for those customers who are running StorageGrid with different vendors' storage behind it? Are these customers going to be expected to move to NetApp storage?

Also from the diagram in the data-sheet;

'NetApp StorageGRID object-based storage solution brings the best of NAS and RESTful HTTP client access together'

Now, I am willing to accept NetApp's claim that the Filer product set is the best of NAS, but to provide this 'best of breed' functionality with the StorageGrid product would imply a deeper level of integration than I can currently see. Or are they claiming that the Bycast product was actually the best NAS product out there?

Is the Filer behind the Gateway being treated as a pretty dumb, share-only Filer, not leveraging any of the OnTap features at all? Even if this is the case, it is a cute move politically, as the sales team will not see any potential Filer sales being cannibalised by this new product; a problem that I believe EMC might have had to deal with for the Atmos product set.

One of the keys will be how NetApp present the integration; will they add StorageGrid to Ops Manager? It seems to make sense to add it at that level, because Ops Manager is the preferred way of managing multiple Filers, and to get the most out of StorageGrid there will be many Filers. It also keeps it in the realms of the familiar.

If it is seen as very much a different product, it makes the Unified Storage pitch a little harder, as it becomes a mostly-Unified-Storage product; which is a bit like being a little bit unique.

So this announcement asks many more questions than it answers! 

And one final comment: what is the difference between a Storage Grid and a Storage Cloud? Is it an Object Cloud or an Object Grid? Does the Object Cloud live in the Storage Grid?

iBlock?

What can Apple teach us about Enterprise IT? Apple and Enterprise IT, words which don't really belong in the same sentence but perhaps we can learn quite a lot about the future of Enterprise IT by looking at Apple and its current strategy. 

Firstly, like many geeks I must admit to having a very uneasy relationship with Apple and its products; I still keep thinking style over substance, overpriced and under-performing kit. So why is my laptop of choice a MacBook? Why do I own an iPhone and an iPad? Why am I looking forward to June 7th and Steve's keynote, where he'll certainly announce a new iPhone?

Like it or not, Apple's stuff just works; my MacBook boots up in half the time of my Windows laptop (actually it's even faster since I put an SSD in it), applications just work, and hardware and software work in harmony because they have been designed in concert. I don't measure TCO for my home kit, but the time I save with at least one piece of kit which just works is great; it gives me the time to hack about with Linux, ESX and Windows. And of course, hidden under the covers there beats the heart of the ultimate geek operating system: Unix!

And then there's the iPhone and the iPad; Apple have taken control-freakery to extremes, even telling you what languages you can develop in and then controlling the method of distribution; and if Steve doesn't like it, it isn't coming in. But the app-store is so unbelievably convenient; installation of applications is just a tap away, and despite the fact that Steve's control-freakery is simply wrong, I still happily use the devices and ignore that nagging voice in the back of my mind.

Sure, Apple's stuff is more expensive, but it just works; it's a fairly sad indictment that to get stuff that just works we are willing to pay more, but that appears to be where we are at the moment. Apple have developed the iBlock, or various iBlocks; perhaps, quietly and subconsciously, various strategists in the Enterprise industry have been influenced by this seductive idea that things should just work?

People are getting used to the idea that there's an app for everything and that it's simply a tap away. Our users are getting used to this on their iPhones and now their iPads; we can expect them to ask why they can't get the same service, first for their desktops and eventually for their enterprise servers. And they'll just expect everything to work, and work *now*.

But a word of caution, and take this from the voice of experience: Apple's TCO in a heterogeneous environment soars; it is painful to get it to work with anything else. It wants to do everything its own way and plays very begrudgingly with others. If you need to do something slightly out of the ordinary, you will struggle.

Apple is great as long as what you do is what Apple wants you to do, in the way it wants; which is why it will always struggle in the Enterprise. Let's hope that the various Enterprise stack vendors learn the positive lessons from Apple but also take account of the downsides.

Finding Space and How You Can Help

For the last twenty years or so, anyone who has worked in an Enterprise IT department has had various traits drummed into them; and if the Business has not got the IT department that it wants, it is not just the IT department's fault.

IT Managers have accounting principles banged into them, almost more so than any other department apart from the actual Finance department. In almost no other department are the concepts of TCO, ROI etc. really understood; we deal with rapidly depreciating capital assets which also attract huge recurring costs. And unless IT is your core business, these are not seen as adding real value to the Business, but as a cost.

IT Managers also become pretty expert in the areas of contract law, leasing law and many other forms of corporate law; at various times in my career, I have been involved in drafting complex support documents and negotiating penalty clauses, get-out clauses, payment schedules and other minutiae which I had no intention of getting involved in when I decided to do IT.

Innovation within Enterprise IT often comes second to the day-to-day job of keeping the lights on, which is all kind of sad, because most IT guys I know have a ton of creativity and ideas on how to improve IT. A lot of people get into IT because it is their passion and their hobby.

However, perhaps the vendors can help with all this; perhaps, instead of coming up with fairly trite redefinitions of accepted terms and sound-bites, they could try answering those questions which they can actually help with.

So next time an Enterprise IT guy asks 'What's the TCO model?' or 'Can you help build the ROI business case?', if you answer the question you might find that you have a much happier and more loyal customer who will say good things about you. Perhaps you will find a customer who actually has time to listen to your latest product pitch? The New IT will still have to follow all the good financial governance that we are supposed to practise today, so there is no point pretending that it doesn't.

Help give your customers the space to innovate and perhaps you can help turn IT departments into the IT partners which businesses need and want now?