

Kindling the Fire

Obviously many people are going to use the same very obvious pun but hell, I’m not going to apologise. Amazon have finally launched their tablet and at first glance it does appear to be a bit of a rush job to be honest; no 3G, no GPS, no cameras and no microphone. This is not an iPad 2 replacement; if you still want an iPad, you are probably going to buy an iPad and if you already have an iPad, you will probably wait another six months for the iPad 3.

And it’s not really a Kindle eReader replacement; yes, they’ve refreshed that range and they look nice but the Fire isn’t an eReader. Colour e-Ink is a year or two away for a consumer device and that’s probably the thing which will convince me to change away from my Kindle Keyboard 3G.

Amazon know people love their Apple stuff and to try to compete with this visceral and illogical love is madness; what Amazon know is that people want to consume content quickly and easily via many devices. They also know how to use the Amazon experience to encourage stickiness and further business; I don’t think that the ‘Amazon Recommends’ algorithms are that great but I do find myself adding things to my basket which it recommends on a disturbingly frequent basis.

This is going to give Apple a real headache over time; how many iPads are only used for content consumption? How many people really use video-chat?

Yes, a camera and a microphone would be nice but I suspect that’s not a show stopper for most people; if you are in the market for a tablet, you most likely already have a smart phone with a pretty decent camera.

If the Kindle Fire allows me to use my Audible library and my Kindle library in a seamless way, that’d be a big win. And as long as it runs Spotify, that’s my music sorted for the time being.

Still, that said, I’m probably not going to rush out and buy a Fire… well, not for myself; there’s a ten-year-old who would love one. Of course there is a problem with this: when the inevitable Fire 2 comes out with the bells and whistles, she’s going to want one and I don’t want to be lumbered with a second-rate bit of kit.

Now if Amazon were really clever, they would go and buy OnLive; a tablet which came with an OnLive subscription would give everyone a headache.

Your media and your life are moving to the Cloud… it’s going to get harder and harder to resist.

Faster, Better and then Cheaper?

While CIOs and IT departments only try to compete with external service providers on cost, they are eventually going to lose by being so one-dimensional. At some point, you will be unable to reduce costs any further and to be honest, I suspect many internal IT suppliers are already at that point. For costs to fall any further, service provision is going to be impacted. And I think that we are already there, but then again perhaps the service wasn’t so great to start with.

It is time to focus on Better and Faster; if customers are getting a Better service and a Faster service, the pressure on Cheaper may abate. But whilst service is not getting Better and delivery is not getting Faster, your key metric as a supplier will continue to be cost.

How galling is it to lose business to a supplier who is charging a premium? No-one wants to lose like that but it happens all the time in IT. Many of the big suppliers would be out of business if IT was simply a race to the bottom, but we as customers often make a decision to pay more for what we believe is a better service and product. In fact, in the long term, we often believe that the decision will work out to be cheaper (believe ≠ know; this is a different problem!).

So we know this is a better way to procure IT in general. And yet, we do not often focus on the Better bit in the value proposition that we present to our internal customers.

So how do we get to this position? The answer is to take a leaf out of our vendors’ books: learn to market and sell all the advantages of the internal IT supplier; focus on the Better and Faster; be competitive but do not get involved in a race to the bottom.

But be mindful of costs; deploying Faster is often Better for our customers, but it can unnecessarily raise costs, or at least lead to infrastructure being less efficient at first deployment. And too often, the first deployment is the last deployment until something goes wrong or it is time to refresh.

In the background, there needs to be a team which is constantly working to improve effectiveness and efficiency; taking services and tuning them, improving them and driving down the cost, but not at the expense of the service.

Better and Faster; let the Cheaper follow… it almost certainly will. Experience suggests that doing Cheaper first nearly always sacrifices Better and Faster.


Three is the Magic Number?

I never thought that I’d keep this going so long but it is now three years that I’ve been writing this blog. It’s still fun to do and keeps the mind going; sometimes I think it’s getting easier and then at times, I just sit here writing and re-writing the same sentence again and again!

It amazes me that people keep coming back to read more. It also amazes me when people actually write nice things about the blog even when I’ve been very critical of their company; the vendors have been incredibly supportive (no, no money has changed hands), as have my fellow bloggers.

I look back at some posts and wonder ‘what the hell was I thinking?’ and then there are others which I can read with pleasure. There are the posts which I know have changed things; encouraging and badgering EMC to include VP as part of the standard stack with Symmetrix is something I am quietly proud to have influenced.

It’s been interesting to watch the take-over shenanigans as the tier 2 companies have been gobbled up, leaving only NetApp really retaining its independence.

And now we have a new wave of storage start-ups, many virtualisation-focused and many trying to figure out the best way to deploy SSDs. How many of these will grow into take-over targets and can any of them become the next NetApp?

Then there is the growth of Cloud and what that means; does Cloud mean anything? It certainly still seems to mean many things to many people. From the consumer cloud to the private cloud; Cloudwashing is the order of the day!

So dear reader, thank-you for reading, thank-you for commenting and thank-you for the generally nice things you say about the blog.

Here’s to the next year and beyond.

Cloud Gazing

The illustrious Stuie blogs about the latest Cloud outage here and opines about the new paradigm and how application developers need to change as well. I think it’s something worth examining; infrastructure guys are getting a kicking from all sides about how they need to change: dev-ops wants to come along and ‘steal our jobs’, users don’t believe we are responsive enough, we don’t embrace change, and so the list goes on.

But surely Cloud needs a change from everyone, and possibly a step back to ensure that what we are calling Cloud is not simply a slightly more responsive and agile deployment of infrastructure, enabling applications to be rolled out slightly faster but with little underlying change. And I am not just talking about the Cloud-wash of server virtualisation.

Applications need to become more resilient in themselves; they need to become resource-aware, able to request more infrastructure resource but also to release it when it is not needed. They may even need to become aware of their own environment and be able to query a centralised resource broker which will move them to an appropriate resource: an application might become aware that it has gone docile and can be moved to a state resembling hibernation; it might become aware that it is handling more transactions and ask for more resource, or to be moved to a larger one; it could even recognise that a single transaction, a badly written query for example, has caused a temporary spike in activity and ask the broker whether it can have temporary capacity on the infrastructure it is on.
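To make that concrete, here is a minimal sketch of the idea in Python; the broker, its API, the names and the thresholds are entirely hypothetical, but it shows an application sampling its own load and asking a centralised broker to grow it, hibernate it or grant temporary headroom for a rogue query.

```python
from dataclasses import dataclass

@dataclass
class ResourceRequest:
    app_id: str
    cpus: int
    memory_gb: int
    temporary: bool = False   # e.g. headroom for a one-off spike


class ResourceBroker:
    """Stand-in for a centralised broker that places and resizes workloads."""

    def request(self, req: ResourceRequest) -> bool:
        kind = "temporary" if req.temporary else "steady-state"
        print(f"broker: {req.app_id} -> {req.cpus} CPUs / {req.memory_gb}GB ({kind})")
        return True

    def hibernate(self, app_id: str) -> None:
        print(f"broker: moving {app_id} to a hibernation-like state")


class ResourceAwareApp:
    """An application that watches its own activity and talks to the broker."""

    def __init__(self, app_id: str, broker: ResourceBroker):
        self.app_id = app_id
        self.broker = broker

    def on_load_sample(self, tx_per_sec: float) -> None:
        if tx_per_sec < 1:            # docile: give the resource back
            self.broker.hibernate(self.app_id)
        elif tx_per_sec > 500:        # busy: ask for a larger footprint
            self.broker.request(ResourceRequest(self.app_id, cpus=8, memory_gb=32))

    def on_expensive_query(self) -> None:
        # a single badly written query: ask only for temporary capacity
        self.broker.request(ResourceRequest(self.app_id, cpus=4, memory_gb=8,
                                            temporary=True))


app = ResourceAwareApp("orders", ResourceBroker())
app.on_load_sample(650)      # spike in transactions
app.on_expensive_query()     # rogue query
app.on_load_sample(0.2)      # quiet period
```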

We may even be looking at new operating systems which are much better at handling distributed processing and instances.

And then there is the role of the user and how their expectations change. I am not convinced that some of our current user-experience metrics stand up long term and, in Enterprise IT, have we fully taken notice of what else is happening in the world around us? We have some odd dichotomies; our users seem to expect instant gratification except when they don’t.

I don’t know if you have watched your children online; it’s kind of odd: they expect excellent performance but tolerate shoddy performance. If YouTube runs slowly, they are not exactly happy but they will find something else to do instead; they will come back later and try again. Of course, if the whole environment runs slowly, then you’ve got trouble, but we have a generation of users who tolerate inconsistency in performance a whole lot better than their elders.

But we ourselves are changing as well; if Amazon is down, well, we’ll probably wait to order that book. Yes, there are other places we could buy that book, but we like the Amazon experience and, most of the time, the service is good.

I think that as we move more applications into the cloud, this experience will become common. But business processes might have to change to ensure that a user who is bottle-necked in one application is still able to progress in another application. Workflows will need to be looked at with that in mind.

Can we define roles with this flexibility? Is it even desirable? Is it even inevitable, as attention spans and a youth spent in distraction change us?

You see when people talk about Cloud being more than just technology, I’m not sure whether they realise quite how right they may be.


Virtual Angst

There appears to be a lot of angst and general uncertainty about VAAI: who is in and who is out of the Storage Cartel? Will future developments of VAAI, and VMware storage directions in general, cut out some of the start-ups who have built their business models around VMware storage? And is this an attempt to curtail innovation in storage, or will it curtail it accidentally?

Personally, I think it will only curtail innovation if you buy into the premise that the storage market is reliant on VMware and its machinations. Now VMware and the mothership EMC want you to believe that this is the case; they want you to play on their pitch under their rules but you do not have to.

And if you only focus on VMware; you could well find that killer feature moves up the stack into the hypervisor and you are suddenly in a very cold place.

No, if you are driven by innovation, you need to ensure that VMware is part of your strategy and not your whole strategy. There are plenty of customers and users who are buying storage to do more than just VMware; don’t make yourself irrelevant to them.

We are a long way away from the majority of datacentres being VMware monocultures; don’t fall into the trap…and that goes for the big boys as well.

Building for Yes

Like many of you, I have sat in meetings where the whole focus of the meeting is coming up with reasons not to do something; at times, it seems that the whole reason to hold meetings in IT is to come up with reasons not to do something.

Actually, it is amazing that we have so many meetings considering that most of them appear to revolve around the word ‘No’.  I think that is what is so scary about Cloud and building scalable, flexible, dynamic infrastructures; your default answer should become ‘Yes’…

It might be ‘Yes but…’ to start with but that has got to be a step forward from where many organisations are today. Start from a position of ‘Yes’, or at least start from the position of building the Castle before throwing rocks trying to knock it down. Stop pummelling yourself and your customers with rocks before you’ve even begun.

And stop having meetings where the whole focus is to say ‘No’!


Glistening Gluster

There seems to be more and more stuff appearing about Gluster; there’s a really nice article about Rolling Your Own Fail-Over SAN Cluster with Thin Provisioning, Deduplication and Compression using Ubuntu which just goes to show how far you can go with the DIY approach to building your own storage devices.

Please note that this article utilises iSCSI for its SAN connectivity but there’s no reason why you shouldn’t do a little more work and support FC as well; and I daresay that putting together FCoE is not beyond the realms of possibility.
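For the adventurous, here is a minimal sketch of the Gluster end of such a build, assuming two Ubuntu nodes named gluster1 and gluster2, each with a brick at /data/brick1 (the host names, paths and volume name are purely illustrative, and exact command behaviour varies by release); the article’s iSCSI export, thin provisioning, deduplication and compression all layer on top of a volume like this.

```python
import subprocess

def run(cmd: str) -> None:
    """Echo and run a shell command, raising if it fails."""
    print("+", cmd)
    subprocess.run(cmd, shell=True, check=True)

# Illustrative names only; substitute your own hosts, bricks and volume name.
NODES = ["gluster1", "gluster2"]
BRICK = "/data/brick1"
VOLUME = "vol0"

# Run on gluster1: add the second node to the trusted storage pool.
run(f"gluster peer probe {NODES[1]}")

# Create a two-way replicated volume across one brick on each node, then start it.
bricks = " ".join(f"{node}:{BRICK}" for node in NODES)
run(f"gluster volume create {VOLUME} replica 2 {bricks}")
run(f"gluster volume start {VOLUME}")

# A client (or the iSCSI target head) can now mount it over the native protocol.
run(f"mkdir -p /mnt/{VOLUME}")
run(f"mount -t glusterfs {NODES[0]}:/{VOLUME} /mnt/{VOLUME}")
```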

I’d also suggest that people have a look at the 3.3 beta stuff to see what is coming down the line.

And I am certainly not suggesting that you should run your mission-critical business applications on it, but it really goes to show how far we’ve moved; premium features are beginning to turn up in open-source systems.

A threat to the existing Storage Cabal? Not yet but for the more adventurous of you, there is a huge amount of potential.


Presumptuous Thinking

A couple of emails floated into my inbox recently which brought home to me how long the journey is going to be for many companies as they try to move to a service-oriented delivery of IT. I think many are going to be flailing around for some years to come as they try to make sense of ‘new’ paradigms; it is not just the IT function, the impact reaches beyond it.

The technological changes are important but actually, much could be achieved without changing technologies massively. All that is required is a change of mindset.

Pretty much all traditional IT is delivered on a presumption-based delivery model; everything is procured and provisioned based on presumption.

A project will look at its requirements and talk to the IT delivery teams; both teams often make the presumption that both sides know what they are talking about, and a number of presumptions are made about the infrastructure which is required. An infrastructure is procured and provisioned and this often becomes a substantial part of the project costs; it is also something which is set in stone and cannot change.

I don’t know about you, but if you look at the accuracy of these presumptions, I suspect you will find massive over-provisioning and hence that the costs of many projects are overstated. Or sometimes it is the other way round, but examining most IT estates (even those heavily virtualised), there is still lots of spare capacity.

However, you will find that once the project-funding business unit has been allocated the infrastructure, they are loath to let it go. Why should we let the other guy get his project cheap? And once a project is closed, it is often extremely hard to retrospectively return money to it.

Of course, this is nonsense and it is all money which is leaving the Business, but business units are often parochial and do not take the wider picture into account. This is even more true when costs are being looked at: you don’t want to let the other guy look more efficient by letting them take advantage of your profligacy. It is politically more astute to ensure that everyone is over-provisioning and that everyone is equally inefficient!

In IT, we make this even easier by allowing an almost too transparent view into our provisioning practices. Rate cards for individual infrastructure components may seem like a great idea but they encourage all kinds of bad practice.

‘My application is really important, it must sit on Tier 1’ has often led to a Tier 1 deployment far in excess of what is really required. However, if you are caught moving a workload to a lesser tier, all kinds of hell can break out; we’d paid for that tier and we are jolly well going to use it.

‘My budget is a little tight, perhaps I can get away with it sitting on a lower tier or not actually provisioning enough disk’; I’ve seen this happen on the grounds that by the time the application is live and the project closed, it becomes an IT Support problem. The project team has moved on and it’s not their problem.

The presumption model is broken and leads to dissatisfaction both in the IT teams and the Business teams. In fact it is probably a major factor in the overwhelming view that IT is too expensive.

The consumption model is what we need to move to, but this does mean some fundamental changes to thinking about IT by Business Leaders and IT Leaders. If you want to retain a private IT infrastructure, and many do, you almost have to take a ‘build it and they will come’ approach; the Service Provider competitor already does this, their model is based entirely on it.

You need to think about your IT department as a Business; however, you have an advantage over the external competitor or at least you should.

  • You should know your Holding company’s Business.
  • You only have to break even and cover your costs; you do not need to make a profit, and any profit you do make should be ploughed straight back into your business. This could be in the form of R&D to make yourself more efficient and effective, or it could be on infrastructure enhancement, but you do not have to return anything to your shareholders apart from better service.
  • You should have no conflicting service demands; there should be no suspicion that another company is getting a better deal or better service. You can focus! You can be transparent.

When I talk about transparency, you should beware of component-level rate cards; you should have service rate cards based on units consumed, not units presumed to be allocated. In order to do this, you will need a dynamic infrastructure that will grow to service the whole. It would be nice if the infrastructure could shrink with reduced demand, but realistically that will be harder. However, many vendors are now savvy to this and can provision burst capacity with a usage-based model; just beware of the small print.
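To make the difference concrete, here is a toy example with invented figures: the same workload billed against a component-level rate card for what was presumed and allocated, versus a service rate card for what is actually consumed.

```python
# All figures are invented for illustration; plug in your own rates and capacities.
TIER1_ALLOCATED_RATE = 800.0   # per TB per month, component rate card (presumption)
SERVICE_CONSUMED_RATE = 500.0  # per TB per month, service rate card (consumption)

allocated_tb = 50   # what the project presumed it needed and had provisioned
consumed_tb = 18    # what the application actually uses this month

presumption_bill = allocated_tb * TIER1_ALLOCATED_RATE
consumption_bill = consumed_tb * SERVICE_CONSUMED_RATE

print(f"presumption model: {allocated_tb}TB allocated -> {presumption_bill:,.0f} per month")
print(f"consumption model: {consumed_tb}TB consumed  -> {consumption_bill:,.0f} per month")
```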

There might be ways of using redundant capacity such as DR and development capacity to service peaks but this needs to be approached with caution.

And there is the Holy Grail of public Cloud-Bursting, but most sensible experts believe that this is currently not really viable except for the most trivial workloads.

If you have a really bursty workload, this might be a case where you do negotiate with the Business for some over-provisioning or pre-emptable workloads. Or you could consider that this is an appropriate workload for the Public Cloud; let the Service Provider take the investment risk in this case.

But stop basing IT on presumption and focus on consumption.


Service Partner or Service Provider?

How many Businesses are ready to consume IT from a Service Provider? And how many might be better off looking for a Service Partner which can transition them and help them transform their IT platform?

There are few enterprises who are ready to consume their IT in the same way that they consume electricity. The technology solutions are near enough there but to realise the actual benefits is going to take partnership.

We are still in the early days of Cloud and Cloud provision, and the variety of options, along with the relative immaturity of both providers and users, means that this relationship needs to be one of partnership where both sides put some skin in the game.

We are going to have to find new ways of connecting and contracting business in IT; adversarial relationships are going to have to change, moving away from cost and perceived ROI and towards true value models.

This is as true for the internal IT supplier as it is for the external IT relationships. One change which might be long overdue is to identify IT consumption metrics and unitise the costs; the internal IT supplier’s role is to ensure that unit cost is as low as possible, not to control how many units the Business consumes. Think enabler as opposed to gatekeeper.

They have a role in advising how to minimise the number of units but not in telling the Business that they cannot use that many units. Too much time is spent haggling with the Business about the cost of provision; it should simply be: you want to do ‘X’, and you want to do it that many times/that big/that quickly, so this is the cost. Yes, there will always be a sales discussion, but really it needs to move on from doing everything cheaper.

I think that many are still expecting that the consumerisation of Enterprise IT in the form of Cloud will make things cheaper, but I do not believe that this is inevitable. It might make things easier and it might make the whole IT process quicker, certainly when trying to deliver at scale with velocity.

The cost of IT units will continue to fall as well, but the total number of units will increase. Actually, if all the consumerisation of Enterprise IT does is reduce costs, it has failed to live up to its promise.

Cloud Architects?

I think at the moment, too often people reach into the server virtualisation toolkit when they start looking at Cloud and I think this leads to missed opportunity, potentially increased costs and almost certainly a degree of increased complexity.

Firstly, I must state that I believe quite strongly that anybody who deploys large quantities of servers and does not start from the default position that all operating systems should be deployed on top of a hypervisor is quite possibly starting from the wrong place.

Hypervisors, be it on mainframe, RISC-based Unix or x86, are a good thing; for starters, they put in place a degree of hardware abstraction which means the long-term support of workloads is greatly simplified.

But the hypervisor is a commodity.

However, in my mind, the hypervisor does not have to equate to mass server consolidation; every time I see someone boasting about the number of virtual servers that they have, I find myself judging. As far as I am concerned, having too many virtual servers is as bad as having too many physical servers.

I would like to see more people think about service virtualisation and defining services which can be seamlessly moved around and quickly deployed. Ironically, this sort of virtualisation does in many ways predate the current focus on server virtualisation; we were building shared database tiers before the days of commodity server virtualisation.

Clustering technologies such as Veritas and HA/CMP allowed services to be defined as a collection of resources (storage, network, data etc.) and moved between servers in the cluster. Yes, this was complex and most commonly used to fail over services, but if you were clever you could use it to move services even in a non-failover scenario. It would certainly have been feasible to do it based on load; I personally have never seen it, but I have heard third-hand that some people have used it in such a scenario.
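As a rough illustration of that older idea, here is a minimal sketch (not Veritas or HA/CMP syntax; the names and resources are purely illustrative) of a service defined as a bundle of resources that the cluster can bring online on whichever node it chooses, with the same mechanism serving both fail-over and a deliberate, load-driven move.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class ServiceGroup:
    """A service as a bundle of resources, not a particular server."""
    name: str
    resources: List[str] = field(default_factory=list)  # VIP, volumes, processes...
    node: Optional[str] = None                           # where it currently runs

    def online(self, node: str) -> None:
        print(f"bringing {self.name} online on {node}: {', '.join(self.resources)}")
        self.node = node

    def offline(self) -> None:
        print(f"taking {self.name} offline on {self.node}")
        self.node = None

    def move(self, target: str) -> None:
        # used for fail-over, but nothing stops a load-driven move
        self.offline()
        self.online(target)


db = ServiceGroup("db_service",
                  ["10.0.0.50/24", "/dev/vg_db/lv_data", "db_listener"])
db.online("node_a")
db.move("node_b")   # node_a has failed, or is simply too busy
```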

Web-services have been virtualised for many, many years and do not rely on server virtualisation to provide this virtualisation.

I want to see the Cloud Architect move from what is often a glorified VMware Architect into a role which has both a broader and a deeper understanding of what it means to architect services which can be located in the Cloud.

Cloud Architects who only have experience of a single technology and a single solution focus are possibly not what most companies require and certainly not what the industry requires.

Cloud Solution Providers also need to offer a breadth of service which, yes, certainly has hypervisors and server virtualisation as a key offering, but they need to know more than just this. They need a menu of services, but they must also be able to articulate and offer these in a consultative manner; certainly if they want to compete with the Amazons of the world, who can offer a no-frills sandbox at unbeatable prices.