We have a lot of LSI disk; media and broadcast companies tend to. It pops up in many forms, from traditional IT vendors such as IBM, SGI and Sun/Oracle, to broadcast/media vendors who offer LSI as their badged solution.
For the most part, we use almost none of the functionality beyond RAID; it serves as big lumps of relatively cheap, high-throughput disk. Many of our internal customers are interested in one thing only: can it stream uncompressed HD to their workstations, and can it do so reliably and consistently? We tend to have very low attach rates, with one smallish array split between a handful of hosts; sometimes even just one.
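To give a sense of the throughput involved, here is a back-of-envelope calculation for a single uncompressed HD stream. The figures assume 1080-line HD with 10-bit 4:2:2 sampling at 25 frames per second (a common broadcast format); they are illustrative, not a claim about any particular workflow.

```python
# Rough bandwidth estimate for one uncompressed HD stream.
# Assumed format: 1920x1080, 10-bit 4:2:2 sampling, 25 frames/s.
width, height = 1920, 1080
bits_per_pixel = 20  # 4:2:2 at 10 bits: 10 luma + 10 (shared) chroma per pixel
fps = 25

bits_per_second = width * height * bits_per_pixel * fps
print(f"{bits_per_second / 1e9:.2f} Gbit/s per stream")    # ~1.04 Gbit/s
print(f"{bits_per_second / 8 / 1e6:.0f} MB/s per stream")  # ~130 MB/s
```

Around 130 MB/s of sustained, glitch-free reads per workstation, before you add a second client, explains why raw, consistent throughput matters far more here than array features.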
If we need special functionality, it'll often be done at the application layer or the file-system layer; for example, we use the GPFS and StorNext clustered file-systems. Nor do we make huge use of NAS in these specific areas; we would like to use more, but at times we struggle to get application vendor support. For what is sometimes seen as a bleeding-edge industry, we are very conservative about deploying new infrastructure; when you are trying to put stories to screen before the other guy, you tend not to pee about. If it works, you don't mess with it.
Thin provisioning, snaps and the like don't have a huge amount of worth to us; dedupe is practically worthless. Once a project is finished, it gets archived away (to tape 🙂 ) and we're onto the next one. There's little time to dedupe or compress, even if we got reasonable ratios.
LSI is great for this small but growing niche; I'm hoping NetApp don't mess it up too much and that we don't end up with a horde of NetApp sales-guys trying to up-sell us to something we don't need and our users don't want. There are plenty of other smaller storage vendors who would fancy filling that niche, but I, like a lot of people in the media game, have loads of this stuff and would rather not start again with a new storage environment.
However, starting again is always an option, because our data doesn't tend to live on disk for very long and much of the storage logic is in the application anyway. And you see, this might actually be a model for the future for everyone: the disk won't really matter that much to us, so you'd better be able to control that cost and drive it as low as possible, and then focus on how you deliver value-add much, much further up the stack.
The much vaunted ‘Big Data’ might well be a case in point; it is entirely possible that many ‘Big Data’ applications will use very little of your advanced array functionality and will be looking for drag-racer type storage which gets from ‘A’ to ‘B’ as quickly as possible.
‘Cheap, Fast and Dumb’…it’s not sexy but it’s a good reason for NetApp to buy LSI; if you told a NetApp engineer to develop ‘Cheap, Fast and Dumb’, they’d probably walk out in disgust or deliver something five years later which was none of those things but was a really good general purpose, all things to all men ‘Unified Storage System’.
Arguably, NetApp has done the hard stuff already; they need someone to do the simple stuff for them.