Storagebod

Dry your eyes – No More Tiers

In a valiant effort to redefine storage tiers, Kostadis has come up with a very practical and usable set of differentiators for storage tiers, and a good way of getting across the differences between the levels. We can call them what we want but they are tiers!

I did some work on this before the summer but couldn't really come up with clear definitions or an explanation of what I meant; so I'm now going to plagiarise Kostadis' work.

Dedicated Performance Tier – for those applications whose performance demands mean they are not good sharers – Exchange is a good example, as are badly written Oracle databases; anything which needs spindles to provide raw I/Os. In an ideal world, you'd give them their own array. You will probably see them utilising 70%+ of the available I/Os of a spindle, but space utilisation is often dreadful and you may still find yourself hand-tuning. You are really talking about 15K Fibre Channel disks or SSDs.

Shared Performance Tier – for those applications with a fair balance between I/Os and capacity. These applications are generally fairly cache-friendly; you'll still find a fairly high percentage of the I/Os utilised, and space utilisation is better. I'd suggest things like VMware images live quite happily on this tier. You are probably talking about larger 10K Fibre Channel disks, but you might get away with 500 gig SATA/low-cost fibre. You probably don't want to go much larger, as your I/O density drops too far and your space utilisation suffers.

Capacity Tier – for those applications which are just space-hogs; file-serving etc. Here you should be aiming to drive space utilisation as high as possible, and you can probably use large SATA disks. I/O utilisation may be high but you don't really care; cheap as chips is the order of the day.
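The three tiers above boil down to a classification on I/O density. A minimal sketch of that rule – the thresholds here are my own illustrative assumptions, not figures from Kostadis:

```python
def classify_tier(iops_per_gb: float) -> str:
    """Map a workload's I/O density onto one of the three tiers.

    Thresholds are illustrative assumptions only; a real tiering
    decision would also weigh cache-friendliness, how well the
    application shares, and the cost of the underlying spindles.
    """
    if iops_per_gb > 1.0:
        # I/O-hungry and a poor sharer: 15K FC or SSD, maybe its own array
        return "Dedicated Performance"
    if iops_per_gb > 0.1:
        # Balanced I/Os vs capacity, cache-friendly (e.g. VMware images)
        return "Shared Performance"
    # Space-hog: big SATA, cheap as chips
    return "Capacity"

print(classify_tier(2.5))   # a badly behaved database
print(classify_tier(0.5))   # typical shared virtualisation workload
print(classify_tier(0.01))  # file-serving
```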

In each of the tiers, you will have service offerings: replication, snapshots, dedupe, encryption etc. Availability in my case is a given – 99.999% during the service day; however, service days may differ, so I might be able to take planned downtime, but unplanned downtime must still correlate to five-nines availability. And in each of the tiers, your presentation layer could be block or file.
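Five nines within a service window is a tiny unplanned-downtime budget. A quick back-of-envelope calculation, assuming (purely for illustration) a 12-hour service day:

```python
def downtime_budget_minutes(availability: float,
                            service_hours_per_day: float,
                            days: int = 365) -> float:
    """Unplanned downtime allowed per year, inside the service window only."""
    service_minutes = service_hours_per_day * 60 * days
    return service_minutes * (1 - availability)

# 99.999% over an assumed 12-hour service day:
budget = downtime_budget_minutes(0.99999, 12)
print(round(budget, 2))  # roughly 2.63 minutes a year
```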

You may be able to come up with a couple more tiers; I can think of a couple which are useful in my specific circumstances, one of which is tape.

Now why is this at all useful? Well, it potentially allows the infrastructure guys to really articulate the impact of poorly written applications, or at least applications which don't play nice. It also allows us to explain why at least some disk utilisation rates are so poor, and to ensure that driving down the TCO of storage is a shared responsibility. It is also important that the two utilisation metrics are articulated:

  • Available/Utilised I/Os
  • Available/Utilised Space
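Both metrics are simple ratios, but reporting them side by side is what makes the tier trade-offs visible. A sketch, with made-up numbers for a hypothetical dedicated-performance array:

```python
def io_utilisation(used_iops: float, available_iops: float) -> float:
    """Fraction of the array's available I/Os actually being consumed."""
    return used_iops / available_iops

def space_utilisation(used_gb: float, available_gb: float) -> float:
    """Fraction of the array's available capacity actually allocated."""
    return used_gb / available_gb

# Hypothetical dedicated-performance array: I/Os nearly
# exhausted while most of the space sits idle.
print(f"I/O utilisation:   {io_utilisation(14000, 20000):.0%}")
print(f"Space utilisation: {space_utilisation(3000, 20000):.0%}")
```

Seen together, 70% I/O utilisation against 15% space utilisation tells the application owner exactly why their "half-empty" array cannot simply absorb more workloads.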

To be honest, this has probably helped me more than you, dear reader, in that it has helped me crystallise some ideas in my own mind! But for the rest of today, I shan't be worrying about storage as I've got a day off!

P.S. I lied…there are still Tiers!


2 Comments

  1. Don’t you ever sleep? You beat me to posting pretty much this same article! Go take a rest and let some of us others have good ideas too! 🙂

  2. marc farley says:

    Four tiers on the layer cake seems about right. Nice post and good thoughts!
    Dedicated resources have a funny way of becoming underutilized or wasted.
    The tier structure Sr. Kostadis writes about reflects the legacy storage designs he is most familiar with (no sin in that) – but with an eye towards SSDs.
    Beating the wide striping drum (again), I’d argue that wide striping the top layer, members-only, most performance-sensitive applications across a number of high performance drives would establish a shared top performance tier, as opposed to creating a number of discrete management burdens.
    Next level down, another set of widely striped drives (high performance or not) could be used to support a larger number of applications with somewhat lower performance requirements.
    If there are two tiers with high speed drives, a third tier could be established with capacity-optimized drives.
    Then there is the “I hope we don’t have to use this layer too often” layer – which I recall you using recently. Maybe someday this will be mostly stationary disks, but it’s tape today.
