

Clariion runs Windows…

Much as I enjoy reading Toigo’s blogs (his contrarian views sometimes make a lot of sense and it is always great to have someone kicking against the mainstream), his post revealing that Flare runs on top of XP is just hilarious; especially the assertion that EMC have been trying to hide this, which quite frankly is not true! I have many beefs with EMC at times but this is not one of them…

Let me see, let’s look at Steve Todd’s blog; you know, Steve Todd who works for EMC and is one of the architects behind the whole Clariion range. Back in 2008 he posted Window into the Decision, in which the reasons behind the decision to move Flare to Windows are fully disclosed. I can’t see much hiding here.

Even the Wikipedia article on the Clariion here has the details. The VNX also runs Windows Server to support VNX block; file services, I am told, run on top of Linux. Yes, the VNX runs two operating systems, something which obviously makes NetApp very happy, but at the end of the day it probably makes little difference to the customer as long as it does what it says on the label.

I’m not sure why EMC would be trying to kill a story which revealed information that has been in the public domain for years. Seems a bit odd to me…

Virtual Bubble

VMware is hot and VMware with storage seems to be really hot. Just look at the spate of announcements with regard to arrays which are specifically targeted at VMware; announcement after announcement in the past few weeks.

But are we looking at a bubble? We are certainly getting some bizarre announcements: iSCSI flash arrays which allegedly only support VMware? And is this targeting not a huge risk?

As an end-user, I would be loath to purchase something which was so locked into a specific infrastructure stack.

I am looking for devices which allow a certain amount of flexibility in deployment scenarios. And yes, I do have some storage which is specifically targeted at specialist workloads, but I am not tied to a specialist workload platform; I can change the application which generates the workload.

Building storage arrays which only target VMware seems pretty much as dumb to me as building arrays which only support Windows. There may be a short-term advantage but as a strategic play, I’m not convinced.


Presumptuous Thinking

A couple of emails floated into my inbox recently which brought home to me how long the journey is going to be for many companies as they try to move to a service-oriented delivery model for IT. I think many are going to be flailing around for some years to come as they try to make sense of ‘new’ paradigms; and this affects not just the IT function but reaches well beyond it.

The technological changes are important but actually, much could be achieved without changing technologies massively. All that is required is a change of mindset.

Pretty much all traditional IT is delivered on a presumption-based model; everything is procured and provisioned based on presumption.

A project will look at its requirements and talk to the IT delivery teams; both sides often presume that the other knows what they are talking about, and a number of presumptions are made about the infrastructure which is required. An infrastructure is procured and provisioned and this often becomes a substantial part of the project costs; it is also something which is set in stone and cannot change.

I don’t know about you, but if you look at the accuracy of these presumptions, I suspect you will find massive over-provisioning and hence that the costs of many projects are overstated. Or sometimes it is the other way round, but examining most IT estates (even those heavily virtualised), there is still lots of spare capacity.

However, you will find that once the business unit funding the project has been allocated the infrastructure, they are loath to let it go. Why should we let the other guy get his project cheap? And once a project is closed, it is often extremely hard to retrospectively return money to it.

Of course, this is nonsense and it is all money which is leaving the Business, but business units are often parochial and do not take the wider picture into account. This is even more true when costs are being looked at; you don’t want to let the other guy look more efficient by letting them take advantage of your profligacy. It is politically more astute to ensure that everyone over-provisions and that everyone is equally inefficient!

In IT, we make this even easier by allowing an almost too transparent view into our provisioning practices. Rate-cards for individual infrastructure components may seem like a great idea but they encourage all kinds of bad practice.

‘My application is really important, it must sit on Tier 1’ has often led to a Tier 1 deployment far in excess of what is really required. However, if you are caught moving a workload to a lesser tier, all kinds of hell can break loose; we paid for that tier and we are jolly well going to use it.

‘My budget is a little tight, perhaps I can get away with it sitting on a lower tier or not actually provisioning enough disk’; I’ve seen this happen on the grounds that by the time the application is live and the project closed, it becomes an IT Support problem. The project team has moved on and it’s not their problem.

The presumption model is broken and leads to dissatisfaction both in the IT teams and the Business teams. In fact it is probably a major factor in the overwhelming view that IT is too expensive.

The consumption model is what we need to move to, but this does mean some fundamental changes in how Business Leaders and IT Leaders think about IT. If you want to retain a private IT infrastructure, and many do, you almost have to take a ‘build it and they will come’ approach; the Service Provider competitor already does this, and their model is based entirely on it.

You need to think about your IT department as a Business; however, you have an advantage over the external competitor or at least you should.

  • You should know your Holding company’s Business.
  • You only have to break even and cover your costs; you do not need to make a profit, and any profit you do make should be ploughed straight back into your business. This could be in the form of R&D to make yourself more efficient and effective or it could be on infrastructure enhancement, but you do not have to return anything to your shareholders apart from better service.
  • You should have no conflicting service demands; there should be no suspicion that another company is getting a better deal or better service. You can focus! You can be transparent.

When I talk about transparency, beware of component-level rate cards; you should have service rate cards based on units consumed, not units presumed to be allocated. In order to do this, you will need a dynamic infrastructure that will grow to service the whole. It would be nice if the infrastructure could also shrink as demand reduces, but realistically that will be harder. However, many vendors are now savvy to this and can provision burst capacity with a usage-based model; just beware of the small print.
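To make that distinction concrete, here is a minimal sketch of the two charging models side by side; the rate and the capacity figures are entirely made up for illustration:

    RATE_PER_GB_MONTH = 0.30    # illustrative service rate per GB per month

    allocated_gb = 5000         # capacity the project presumed it would need
    consumed_gb = 1200          # capacity actually being used

    presumption_charge = allocated_gb * RATE_PER_GB_MONTH   # 1500.00
    consumption_charge = consumed_gb * RATE_PER_GB_MONTH    #  360.00

    print(f"Charged on presumption: {presumption_charge:,.2f} per month")
    print(f"Charged on consumption: {consumption_charge:,.2f} per month")

Under a presumption-based rate card the business unit pays for everything it asked for, whether it uses it or not; under a consumption-based one the bill tracks what is actually used, and the incentive to hoard allocated capacity largely disappears.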

There might be ways of using redundant capacity such as DR and development capacity to service peaks but this needs to be approached with caution.

And there is the Holy Grail of public Cloud-Bursting, but most sensible experts believe that this is currently not really viable except for the most trivial workloads.

If you have a really bursty workload, this might be a case where you do negotiate with the Business for some over-provisioning or for pre-emptable workloads. Or you could decide that this is an appropriate workload for the Public Cloud and let the Service Provider take the investment risk in that case.

But stop basing IT on presumption and focus on consumption.


Creeping Featuritis?

VMAX also does not (at the time of this writing) support virtualization of external storage.

Would Anarchist have written this if it was not planned for VMAX? Is VMAX like some kind of Borg Cube, absorbing and assimilating technologies from everyone?

What with the brief science-experiment ‘announcement’ from Chad about hypervisors running within the array, and now this, it makes you wonder whatever next!

It wouldn’t surprise me if a VMAX grew legs and started rampaging across the globe like a giant robot looking for trouble!!

Just answer the question

Every now and then I dive into the forums in places like LinkedIn and come across discussions where people ask for advice about which piece of kit to buy; they list some requirements and then various people dive in with answers and recommendations.

It never ceases to amaze me the way that people completely ignore the requirements and just pimp the piece of kit that they are selling.

But of course this is the way that the Internet and forums have always worked. No-one ever reads what the original question was; if they did, the forums would actually be pleasant and useful.

People talk about the Wisdom of Crowds but that only works if the crowd can read.

So next time you are on a forum trying to pimp your kit, try reading the question; and if you can’t answer the question with your kit…

Better to remain silent and be thought a fool than to speak out and remove all doubt

attr: various

Stop Buying Storage from EMC and everyone else!

Stop buying storage from EMC!

Stop buying storage from NetApp!

Stop buying storage from HP!

Stop buying storage from HDS!

Stop buying storage from IBM!

Stop buying storage from Dell!

Stop buying storage from Oracle!

Start buying storage from PC World!

Start buying storage from Best Buy!

Start buying storage from Ebay!

Start buying storage from the scruffy PC shop on the high street!

[there are other storage vendors available]

Why? To be honest, it seems that most people would be better off doing this than buying storage from any vendor who might have added some useful features in the past five years, because some of the conversations I’ve had recently suggest that people still don’t trust:

  • Deduplication
  • Thin provisioning
  • Wide Striping
  • Automated Storage Tiering
  • Compression
  • Snapshots etc, etc, etc

All of these features are useful and effective; if you are not using them, you might be better off simply buying hard disks and directly attaching them. It’s time to move on and get with the program.
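To illustrate just one of these, here is a minimal back-of-the-envelope sketch of why thin provisioning pays off; every figure in it is hypothetical:

    # (presumed_gb, actually_written_gb) per project -- illustrative numbers only
    projects = [(5_000, 900), (3_000, 1_200), (8_000, 2_500), (4_000, 600)]
    pool_physical_gb = 10_000   # real disk sitting behind a thin pool

    presumed = sum(p for p, _ in projects)    # 20,000 GB asked for
    written = sum(w for _, w in projects)     #  5,200 GB actually in use

    print(f"Thick provisioning would need {presumed:,} GB of disk")
    print(f"A {pool_physical_gb:,} GB thin pool covers it at "
          f"{written / pool_physical_gb:.0%} utilisation")
    print(f"Over-subscription ratio: {presumed / pool_physical_gb:.1f}:1")

The same sort of simple arithmetic applies to deduplication and compression ratios; none of it is magic, it just needs a little monitoring so the pool never actually runs out of physical capacity.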

Now, all of these features do need some thought and planning to get the best out of them, but the really great thing is that the vendors have worked really hard to make their arrays easier to configure and manage. This gives you time to start thinking about how best to use these new features, stop being a LUN-monkey and add value to your organisation.

Your life will be more interesting and your work will become less of a repetitive, soul-sapping drain.

Vignette From the Life of a Storage Manager

Still, people generally have little idea about how much storage they need and consistently over-estimate their capacity requirements. I had a conversation with one of our support teams about providing some storage for their internal website.

‘Yeah, sure you can have some space; how much are you thinking?’

‘500GB, something like that?’

‘Really?’ *raised eyebrow*

‘Well, maybe a terabyte then?’

‘Going to be storing a lot of videos?’

‘No, just Word documents, PDFs and maybe a few AutoCAD drawings’

‘How many?’

‘Oh a few thousand maybe?’

‘Average size?’

‘4 or 5 megabytes’

‘Okay, we’ll start you with 20-50GB…’

*confused looking customer* ‘As little as that?’

At which point, I avoid shouting ‘do the arithmetic’ and simply assure the guy that it’ll be plenty; if he needs more, we can give him more.
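For the record, the arithmetic he was being spared is hardly taxing; a quick sketch using the numbers from the conversation above:

    files = 5_000         # 'a few thousand' documents
    avg_size_mb = 5       # '4 or 5 megabytes' each

    total_gb = files * avg_size_mb / 1_000
    print(f"Roughly {total_gb:.0f} GB of data")    # about 25 GB

    # Even with generous headroom for growth, 20-50GB is plenty; the 500GB
    # to 1TB originally asked for is twenty to forty times more than needed.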

And Back To Reality…

A lot of my spare time recently has been eaten up by something that I can’t blog about; it’s very cool but it isn’t storage or even work related. Sadly it’s come to an end, so I guess I’ll have some more spare time again.

I thought I’d make a quick mention of the Celerra/OS X Lion issue; this is not the first time that OS X has caused an issue with a commercial NAS. The last time was with OnTap, where a case change on a file-name would cause a file to disappear, resulting in data loss.

If Apple are going to make more inroads into the Enterprise Desktop space, and with the current ‘Bring Your Own Device’ meme seeming to make some headway, all of the Enterprise vendors are going to have to be more aware of OS X and make sure that their test suites cover OS X cases adequately.

Apple users are some of the most unforgiving and impatient users that I have ever had to support; they generally expect everything to work and do not show the tolerance of long-suffering Windows users.

Apple have done a great job in convincing them that IT should just work and they believe it! They (Apple users) sometimes do have short memories and forget Apple’s screw-ups but they never forget a non-Apple issue.

Tape Dead? Really?

There’s an unpublished blog post sitting there whilst I wait for a certain vendor to get back to me to clear what I can actually say; life as an end-user blogger sometimes gets a little complicated because I find myself under strange NDAs and embargoes. Sometimes I find things out which would be really interesting to blog about but I can’t because of work NDAs, and sometimes I find things out which would be useful for work but I’m sworn to secrecy. And sometimes I just don’t know what I can say!

Anyway, I’ve recently been looking at LTFS and it’s certainly an interesting technology. LTFS stands for Linear Tape File System and yes, we are talking about a file system on tape. It was developed by IBM and adopted by the LTO Technology Provider companies as a self-describing tape format which allows a tape to be used as a file system with full drag-and-drop capabilities, giving applications and users direct access to files using the tools and interfaces that they are used to.

Linux, OS X and Windows implementations are available for free but you do require an LTO5 device to utilise it. Actually, IBM do support their enterprise drives as well, but in some ways that might miss the point of it. You can take an LTFS-formatted LTO5 tape and move it between IBM, HP and Quantum systems and access it via a normal Explorer interface.

Yes, if you are bonkers enough, you can edit directly from tape; there is nothing stopping you editing a Word document straight from tape, without it ever touching disk. It would be painful but doable.
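And because an LTFS tape simply appears as a mount point, applications need nothing tape-specific at all; a minimal sketch, assuming the tape has already been mounted (the /mnt/ltfs path and the file name are hypothetical):

    import shutil
    from pathlib import Path

    # Hypothetical mount point where an LTFS-formatted LTO5 tape is mounted.
    tape = Path("/mnt/ltfs")

    # List what is on the tape, exactly as you would with any other file system.
    for entry in sorted(tape.iterdir()):
        print(entry.name, entry.stat().st_size)

    # Copy a file off the tape to local disk: plain file I/O, no tape-specific API.
    shutil.copy2(tape / "report.docx", "/tmp/report.docx")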

More interesting is that both the meta-data describing what is on the tape and the data itself are held on the tape in a standard, defined format; this means that an LTFS tape volume written by one application can be read by another application. This has some significant advantages, especially with long-term archives which have potential life-spans measured in tens of years and beyond; it means that you are no longer locked into a specific archive application vendor, and if you want to change vendor, it should simply be a case of re-importing the meta-data from the tape rather than rewriting the whole tape.

Even back-up/restore applications might be able to handle foreign tapes in the future; no more worrying about whether the restore system is NetBackup, TSM or whatever, as it should be relatively simple to move tapes between these environments.

There are many other aspects and developments which might be interesting; I’m just waiting to be able to talk about them. But with both Quantum and IBM involved, both having Scale-Out file-systems, there are some interesting possibilities.

And, oh yes, IBM have LTFS Library Edition, which allows you to mount an entire library as a file-system… now that really lends itself to a Scale-Out archive.

There will be more…

Standardise on Scale Out Soon

I find it interesting and telling when a vendor tries to put another vendor’s product into a niche; sometimes it’s justified but oft-times it isn’t. It’s even more interesting when a vendor tries to put their own product into a niche to defend the market position of one of their other products.

Watching some of the current positioning of Isilon, both from NetApp and from EMC themselves, is amusing; Isilon is being positioned as a ‘Big Data’ solution by EMC, and some confused witterings from NetApp around its applicability to general-purpose storage are helping muddy the water.

And I can see similar issues around other Scale-Out products, where positioning is at times very defensive to prevent cannibalisation of existing products; for example, IBM currently find it hard to position SONAS versus nSeries. Their current positioning of nSeries for less than 100TB and SONAS for larger deployments is simply marketing masquerading as technical strategy.

Quite simply, if you are looking to put in place or refresh your file-serving environment in the next couple of years, you are doing yourself and your employer a massive dis-service if you do not look at a Scale-Out solution. At present, they do not have some of the features of the traditional dual-head designs; deduplication and compression come to mind, but those features will come. Architecturally they are designed to scale, and if we know anything today, it is that scale is going to be important to everyone.

They do this elegantly and seamlessly; adding additional capacity is simple and transparent, and migration and maintenance are straightforward; over the years, this will probably save you more money than de-duplication will today.

The next thing will be to see if someone can bring an equal level of simplicity to block…

[Edited to show that I do understand IBM’s SONAS positioning but think it’s less than credible]