I’m still getting comments on the ‘Extreme Cash Cow’ entry I wrote last year as a diatribe against the current state of the SRM market. I feel it’s probably about time that I updated it.
Firstly, since then I have moved jobs and no longer have responsibility for day-to-day storage management. I obviously keep an eye on things, but unfortunately it means that I cannot really influence how SRM develops in my organisation. This is a bit sad, as I actually had a very positive response from EMC, who took the brunt of my diatribe, and I have been unable to take them up on their offer to work with me on understanding what I would like to see from an SRM product. I do hope that someone in the organisation I work for takes them up on it and helps them understand what a large storage end-user requires and where the issues are.
Anyway, I thought I would put some thoughts together about the challenges that SRM tools face, or at least pose some questions.
Is the problem one of scale and complexity? If you look at what we expect the SRM tool to do, we currently expect it to understand our storage environment end-to-end. So look at what an SRM tool needs to do.
1) It needs to understand the array and how that is configured – easy
2) It needs to understand the switch fabric – fairly easy
3) It needs to understand the IP fabric – moderately hard
4) It needs to understand the hosts/servers, including virtual – moderately hard
5) It needs to understand the applications – hard
6) It needs to be able to correlate all of the above information into a useable and consistent model – potentially very hard (see the sketch below)
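To make point 6 concrete, here is a minimal sketch in Python of what that correlation step involves. The classes, names and WWPNs are purely illustrative – this is not any vendor’s API – but even this toy version shows that everything hinges on each layer reporting the same identifiers consistently, which is exactly why the standards point below matters.

```python
# A toy model of the correlation problem (point 6). All classes,
# names and WWPNs are illustrative -- this is not any vendor's API.
from dataclasses import dataclass

@dataclass
class ArrayLun:
    array: str
    lun_id: str
    capacity_gb: int
    target_wwpns: set   # array front-end ports the LUN is presented on

@dataclass
class Host:
    name: str
    hba_wwpns: set      # initiator ports reported by the host

@dataclass
class Zone:
    name: str
    members: set        # WWPNs zoned together in the switch fabric

def correlate(luns, hosts, zones):
    """Join array -> fabric -> host on WWPNs.

    This join is the 'potentially very hard' part in real life:
    each layer reports identifiers in its own format, data goes
    stale, and anything outside the naming standard breaks it.
    """
    mapping = {}
    for lun in luns:
        for zone in zones:
            if not lun.target_wwpns & zone.members:
                continue
            for host in hosts:
                if host.hba_wwpns & zone.members:
                    mapping.setdefault(host.name, []).append(
                        (lun.array, lun.lun_id, lun.capacity_gb))
    return mapping

# One LUN, zoned to one host:
luns = [ArrayLun("array01", "01A2", 500, {"50:00:09:72:00:00:00:01"})]
hosts = [Host("dbserver01", {"10:00:00:00:c9:aa:bb:cc"})]
zones = [Zone("z_dbserver01_array01",
              {"50:00:09:72:00:00:00:01", "10:00:00:00:c9:aa:bb:cc"})]
print(correlate(luns, hosts, zones))
# {'dbserver01': [('array01', '01A2', 500)]}
```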
So to be fair to the SRM vendors, what they are trying to do is non-trivial, and we the end-users don’t always make their jobs easy. We have a duty to ensure that organisational standards are set, adhered to and maintained; otherwise the data consistency checking becomes horrible. We have to give them a chance.
Do we want an end-to-end management tool which allows us to understand our whole IT infrastructure, because storage and the data it holds are intrinsically linked?
What do we actually want from an SRM tool that will make it useful to us, so that we do not carry on cursing the vendors and writing our own scripts? Perhaps we should hand over the contents of our individual script/tools directories and say: we want a tool which does all this and does it reliably. Perhaps the SRM vendors should send out an investigatory team wearing red shirts to discover what the storage civilisations are up to?
We can probably say that we don’t want ECC and its ilk; perhaps SanScreen is closer to what we want. I suspect that is very much the case; we do not want an all-singing, all-dancing provisioning/configuration tool, but we do want something which gives us an immediate view of our storage environment and allows us to drill down through the layers into the individual components, getting performance, capacity and configuration details. It would be incredibly useful if it understood the reality, which is a heterogeneous storage environment with SAN, NAS and, in future, Object/Cloud.
And vendors, if you continue to expand the number of different storage families in your product range and do not standardise on your management APIs, interfaces etc., you are making your job harder. Even within a product family, 37 varieties of LUN are not making your job any easier. As part of the development track of any new feature, the question needs to be asked: how will this be managed? And it needs to be asked early in the development cycle.
So what do you want your SRM tool to do?
Martin
So, points 1 & 2 above – easy – but why are they still not achieved? Because the vendors keep on adding features to their products, turning them into bloatware before they’ve even finished getting the basics sorted.
Second, everyone has a view on what *they* want in an SRM tool; I’ve been in presentations where even our own guys argued about how we would want an SRM tool to work.
As for getting an immediate view of your storage environment, SRA from Storage Fusion does that today.
I agree we want visibility from the array to the app, but vendors need to get the basics written first before trying to do clever stuff. As I’ve previously blogged, a start would be simply having the same access method for querying all arrays – e.g. XML – not the current mix of in-band, IP, web-based GUI, CLI, service processor, management server etc. Even if vendors didn’t swap APIs, agreeing on a standard access protocol would give us a fighting chance.
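To show what that would buy us, here is a rough sketch of one client querying every array the same way. The endpoint URL and XML schema are invented for illustration – no such standard exists today, which is exactly the complaint.

```python
# Sketch of what one agreed access protocol would buy us: the same
# client code talks to every array regardless of vendor. The endpoint
# and XML schema below are invented for illustration -- no such
# standard exists, which is exactly the problem.
import urllib.request
import xml.etree.ElementTree as ET

QUERY = b"<query><resource>luns</resource></query>"

def list_luns(array_host):
    """POST one well-known XML query to one well-known endpoint."""
    req = urllib.request.Request(
        f"https://{array_host}/mgmt/v1/query",   # hypothetical endpoint
        data=QUERY,
        headers={"Content-Type": "application/xml"})
    with urllib.request.urlopen(req) as resp:
        root = ET.fromstring(resp.read())
    return [(lun.get("id"), lun.get("capacity_gb"))
            for lun in root.iter("lun")]

# One loop instead of one bespoke integration per vendor:
for array in ("dmx01.example.com", "usp02.example.com", "filer03.example.com"):
    print(array, list_luns(array))
```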
Hi Storagebod,
Yet another excellent post. From speaking with other SAN admins, we all agreed that something quick and accurate is what is needed: an alerting system, quick access to performance data (even a simplified set), accurate usage data. I don’t mind if an agent is needed, as long as one agent can perform all the data gathering and interface with the SAN, be it via a GUI or CLI. If they could ditch the massive Java installation across 6 servers then yippee!
I agree that configuration/provisioning could sit in its own package; since EMC released SMC I have never opened ECC. It’s not perfect yet, but it has the correct ideas for provisioning. SANScreen is also not perfect, but it is getting pretty damn close. The price might be too hard to justify for many organisations, especially if they have already invested in the extreme cash cow. The extreme cash cow is broken: it tries to do too much, and it does none of it well. It needs a full redesign and rebuild. It needs fewer agents, it needs to be able to sit on a robust VM, and it needs to be accurate. SYMCLI has everything in real time and is always accurate. A database and interface sitting on top of this cannot be that difficult, especially for an organisation like EMC. The fascination with Java needs to go!
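For what it’s worth, the “database and interface on top of the CLI” idea really is a small amount of plumbing. Here is a rough sketch; the command and the naive line parsing are stand-ins rather than guaranteed SYMCLI syntax, so treat it as the shape of the idea only.

```python
# Rough sketch of "a database and interface sitting on the CLI".
# The command and the naive parsing are stand-ins: real SYMCLI
# output would need a proper parser, and the command may differ.
import sqlite3
import subprocess
import time

def poll_devices(cli_cmd=("symdev", "list")):   # assumed command
    """Run the CLI and return (device, status) pairs from its output."""
    out = subprocess.run(cli_cmd, capture_output=True, text=True).stdout
    rows = []
    for line in out.splitlines():
        parts = line.split()
        if len(parts) >= 2:        # crude filter, good enough for a sketch
            rows.append((parts[0], parts[-1]))
    return rows

db = sqlite3.connect("srm.db")
db.execute("""CREATE TABLE IF NOT EXISTS devices
              (polled_at REAL, device TEXT, status TEXT)""")

while True:
    now = time.time()
    db.executemany("INSERT INTO devices VALUES (?, ?, ?)",
                   [(now, d, s) for d, s in poll_devices()])
    db.commit()
    time.sleep(300)                # poll every five minutes
```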
My issue with the EMC SRM tools is that every time they develop and market a product, they end-of-life it in favour of something more bloated and twice as expensive that still doesn’t do what you want it to (see VisualSRM and VisualSAN).
I actually have a terminal server that I use for all of my various Java-based management applications. If I tried to run all of this stuff on my desktop, I’d eventually be sitting at 100% memory utilization with a slow desktop.
I actually was looking into ControlCenter and my EMC rep pretty much told me I couldn’t afford it. He recommended that I check out Tek-Tools. It doesn’t allow me to manage my storage, but it does give me all kinds of information. I like that I’ve found a tool that is able to look at my block-level allocation, drill down to the host and tell me my file-level usage. I also like that it doesn’t set me back $100,000, and it works with pretty much everything I have in my environment.
I’m one of those medium shops, but we’re still facing an ever-increasing demand for storage that spans multiple storage devices. It’s become nearly impossible to manually trend usage, and any moderately-priced solution definitely appeals to me. A lot of the SRM vendors out there seem to forget that medium shops also face growth that shows no sign of slowing.
I equate DBAs’ and application admins’ requests for storage to fishing stories. They ask for 250 GB when in reality they only use 50 GB. I keep my margins razor-thin.
Any time I see storage allocated and not used at around 70-80%, a small part of me dies.
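Worked through, the fishing story looks like this. The 250/50 figures are the commenter’s own; the second host is made up purely to round out the example.

```python
# The fishing story in numbers. The 250/50 figures come from the
# comment above; the second host is invented to round out the example.
allocations_gb = {
    "oradb01": (250, 50),    # (allocated, actually used)
    "appsrv02": (100, 30),
}

for host, (alloc, used) in allocations_gb.items():
    stranded = alloc - used
    print(f"{host}: {stranded} GB allocated but unused ({stranded / alloc:.0%})")

# oradb01: 200 GB allocated but unused (80%)
# appsrv02: 70 GB allocated but unused (70%)
# -- the 70-80% range that makes a small part of you die
```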