After the latest 'twit-piss', mostly between NetApp and EMC although HP also chimed in, I posted a tweet which most people seemed to agree with and which got retweeted a lot.
But sadly, I don't think it's true at all! I think it really goes something along the lines of
'NTAP pretend to know shit about EMC who pretend to know shit about IBM who pretend to know shit about HDS who pretend to know….'
Well you get the gist of it.
Or perhaps
'NTAP marketing know shit about EMC, whose marketing know shit about IBM, whose…'
Well you get the gist again!
You see, there's a big game going on here, and it's not actually doing us, the customers, any particular favours: most of the big vendors know a lot about their competitors' kit, they just can't admit it. Like most of my readers, I suspect, I have visited large vendor facilities and been given the full-on guided tour; often on these tours, one is taken to the interoperability lab. What's in the interoperability lab? Well, just about every flavour of competitive kit going. All in the name of testing interoperability, but are you really telling me that more doesn't go on? Are you really telling me that performance tests are not run? Are you telling me that the kit is not put through the same type of torture tests that their own kit is put through, and that the results are not pored over?
Frankly I don't believe it! So why don't we get to see the results? Funny isn't it!?
Perhaps it's just a Mutually Assured Disinformation pact!?
(Disclosure: I work for EMC, but I speak only for myself.)
Assuming that the vendors did test and torture competitors’ gear like their own, who would believe them when they published the results of that testing? If EMC were to put a competitor’s gear through the same tests its own gear went through and then published results showing how the competitor’s gear wasn’t as good, no one would believe the results. “That’s just FUD,” everyone would respond. So what’s the point? It seems to me that as vendors—myself included—we should try to focus on our own products, our own strengths, and our own weaknesses. (And yes, all products have weaknesses.)
Scott, that would be nice, but we all know that the vendors are absolutely incapable of doing what you say. Granted, some try, but eventually they revert; so I reckon we should just make the best of a bad situation.
Publish your FUD with your methodology and be damned! Open your tests up to public scrutiny and let the fun commence. No more weasel words, no more whispering…
Put Up or Shut Up!
Martin,
We have competitive gear, but most competitors have specific license restrictions preventing IBM from publishing comparison results for performance and/or energy consumption. Interoperability labs are focused on making sure the combination works together, and not intended for side-by-side comparisons.
Rather, IBM believes in publishing results of standardized benchmarks to provide purchase and planning guidance.
Tony Pearson (IBM)
And Tony, are you telling me that you don't run performance tests etc.? You know what test engineers are like; they'll try to break anything put in front of them. And it's important for interoperability testing that you know the edge cases and how boxes react under extremes. For example, for SVC testing I would assume that you try to thrash the proverbial out of any array you support behind it. You need to know whether that performance problem a user is seeing is SVC or the back-end array. You need to know what latency the back-end array might suffer from under extreme load, and so on.
Actually, the licence restrictions that you are under tend to apply to users as well. I'm not saying that benchmark data never flows between users, but the whole opaqueness of the process leaves a feeling of dissatisfaction, to be honest.
And let's be honest: IBM and HP, with their huge managed-services groups, really do have a lot of real-world experience of how the various competing arrays perform.
The whole argument is stupid. We need a SPEC-style suite of tests with published, audited results and full disclosure of settings and cost. Then I could take my IOPS requirements, power budget and capital budget, find the best couple of solutions for me, find a VAR or two for each of the top contenders, and get to my solution more quickly and easily.
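Just to illustrate the sort of shopping exercise I mean (every model name, number and field below is invented, not a real published result), it would boil down to something like this:

# Toy illustration only: shortlisting arrays from a table of published, audited results.
# Model names, figures and field names are all made up.

candidates = [
    {"model": "Array-A", "iops": 120_000, "power_kw": 8.0,  "price": 450_000},
    {"model": "Array-B", "iops": 90_000,  "power_kw": 5.5,  "price": 300_000},
    {"model": "Array-C", "iops": 200_000, "power_kw": 12.0, "price": 700_000},
]

# My requirements and budgets.
need_iops, power_budget_kw, capital_budget = 80_000, 10.0, 500_000

# Keep anything that meets the requirement within budget, then rank by price per IOPS.
shortlist = [c for c in candidates
             if c["iops"] >= need_iops
             and c["power_kw"] <= power_budget_kw
             and c["price"] <= capital_budget]
shortlist.sort(key=lambda c: c["price"] / c["iops"])

for c in shortlist:
    print(c["model"], round(c["price"] / c["iops"], 2), "per IOPS")

With audited results and full disclosure behind that table, the shortlist is something a VAR conversation could actually start from.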
We have SPEC suites with all of that, but the problem is that no one I know actually runs SPEC as a workload. And as arrays get bigger, the mix of workloads on them gets more and more varied; SPEC will have to get more and more complex to cater for this.
And will it stop FUD flying about? No, of course not! So let's accept that there will be FUD, but let's make it more concrete…
Instead of EMC just saying that WAFL fragments over time, EMC should say 'we have run this torture test over time and found that WAFL fragments and performance degrades'. NTAP then have the right of reply and can point out what EMC are doing wrong.
Instead what we get is 'Your array is crap!'; 'No it's not… your array is crappier anyway!' This is all very amusing, but it would be more useful to everyone if it were 'Your array is crap because…'
Stop the name calling and start landing the punches!
Well now, we could of course publish the benchmark of concurrency.
By that I mean, with SVC we HAVE to test each array until it returns Q full (Queue Full), because Q full is one of the worst possible SCSI return codes you could ever get…
How long do you wait? How long do you let it drain?
So we set what we call a concurrency, or queue depth, for each controller type, and it's, I guess, a league table of what we find in real-life testing.
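The shape of that testing, as a rough sketch rather than our actual harness (the ramp sizes, timeout and the controller.submit_burst/outstanding calls below are all invented), is: push the outstanding command count up until the box reports Queue Full, see how long it takes to drain, and settle on a concurrency a notch below the choke point.

import time

# Illustrative sketch only: find a workable concurrency for one controller type.
# 'controller' and its submit_burst()/outstanding() methods are a made-up test harness.

def probe_concurrency(controller, start=8, step=8, ceiling=2048, drain_timeout_s=30):
    depth = start
    while depth <= ceiling:
        status = controller.submit_burst(outstanding=depth)  # fire 'depth' concurrent commands
        if status == "QUEUE_FULL":
            t0 = time.time()
            while controller.outstanding() > 0:              # how long does it take to drain?
                if time.time() - t0 > drain_timeout_s:
                    break
                time.sleep(0.1)
            return max(start, depth - step)                  # back off below the choke point
        depth += step
    return ceiling                                           # never hit Queue Full within the ceiling

# The returned number becomes the 'concurrency' setting for that controller type.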
Now, this isn't performance testing; I rarely get my hands on competitive kit because, for the most part, I'm testing our kit.
However, there are three (well, four) classes of controller (the numbers are for the sake of example; a rough sketch of how the table gets used follows the list):
enterprise : gets the highest concurrency (1000)
midrange : gets the middle ground (500)
ahem : gets some I/O through. (200)
wau! : don’t sniff or you’ll miss it (50)
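In effect that league table becomes a simple lookup for how many commands you are willing to keep in flight per controller class; a minimal sketch, reusing the example figures above (anything unrecognised gets the most conservative treatment):

# Illustrative only: controller class to allowed concurrency, using the example figures above.
CONCURRENCY_BY_CLASS = {
    "enterprise": 1000,
    "midrange": 500,
    "ahem": 200,
    "wau!": 50,
}

def max_outstanding(controller_class: str) -> int:
    # Unknown kit does not get the benefit of the doubt.
    return CONCURRENCY_BY_CLASS.get(controller_class, 50)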
You'd be amazed at how many products are sold into these categories but usually fit one lower.
Especially those companies that sell you an SLA, not a box. In fact, those boxes that report themselves with 'DG' in their SCSI INQUIRY vendor ID caused us to create the final category…
I'm not sure of the impact, but I will see if I can get IBM's settings for all the things we support de-classified and post them on my blog… lol, wouldn't that be fun.
Late to the party, but I've got to say "dear god, please *no*" to that suggestion.
I already have to wade through vendors constantly putting up their own carefully manufactured benchmarks that have no basis in real life; having to deal with them intentionally using benchmarks to show their competition in a bad light would probably break my will to deal with any vendor at all.
I already have to constantly slap vendors around for putting benchmarks in front of me that show IOPS running entirely within cache, or from a single app, or short-stroked, or whatever goofball way they have done it. Even worse, it gets into my executives' hands, who have no idea what happens under the covers of a storage array other than "ooohh, rack with blinky lights", so I then have to spend hours trying to explain why the benchmark has no real meaning in any real-life scenario that would apply to us… arggg!!!
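For what it's worth, the sniff test I end up walking people through amounts to a handful of checks; a rough sketch, with thresholds that are nothing more than my own invented rules of thumb:

# Rough sniff test for a vendor benchmark claim; thresholds are invented rules of thumb.
def benchmark_red_flags(working_set_gib, cache_gib, tested_capacity_pct, workload_count):
    flags = []
    if working_set_gib <= cache_gib:
        flags.append("working set fits in cache: you are measuring DRAM, not the array")
    if tested_capacity_pct < 50:
        flags.append("short-stroked: only a thin slice of each spindle is in play")
    if workload_count < 2:
        flags.append("single app: says nothing about a mixed, shared array")
    return flags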
I can only imagine the horror that would descend upon me if I additionally had to deal with execs dragging whitepapers down to me on why some vendor sucks, because another vendor found a specific case that would never happen in real life and only applies in an engineered-benchmark world.