Despite what we like to say in IT, it is rare for what we do to have a transformational impact on the day-to-day working of a business. Fortunately, I am currently in the position of knowing that what I do completely changes how the company I work for creates its product. Even better, I am in a position to recruit someone else to come and join our little band of storage specialists; it's not an army, more a crack(ed) squad.
Over the next twelve months, we will be ramping up the delivery of a file-based workflow for broadcast content production, and at the core of this is the storage; it is probably no exaggeration to say that all the company's eggs are being placed in our basket. We will be migrating into our new Broadcast Centre, which, amongst many other new features, will be completely free of video tapes; when I point out that we currently have over a million video tapes, this should give you some idea of the magnitude of our task.
And whilst we are delivering this, we will also be delivering 3D edit capability, render farms, core back-up solutions for the broadcast systems and anything else the business cares to throw at us. The skills I am looking for are listed in the job advert, but the core skills are:
- Tivoli Storage Manager – Especially from an archive point of view
- IBM's GPFS – Clustered file-systems are becoming the foundation of the storage we deliver to the business
- NAS – NFS especially, but you also need to be CIFS-aware, and an interest in pNFS would be helpful.
- General Storage – realistically, I don't care what arrays you've worked on as long as you are ready to take on new challenges and willing to let go of anything you have done in the past. We are not here to debate EMC vs HDS vs IBM vs NetApp etc; we are here to deliver a function, not to navel-gaze!
Although it is not obvious at first, pretty much everything we do can be applied to the cloud; no, we don't do VMware (yet), but what we do do is build out massively scalable storage solutions based on pretty much commodity hardware.
We're a small team; there are only three of us at present and I've no intention of growing it massively. The plan is to keep it small and focussed on doing the right things from day one. You will get the chance to work closely with the other infrastructure teams as well as the business teams.
If you are interested, please go here and search for position number 02979, or just searching for storage should find it.
If you have any questions, you should be able to find out how to contact me, and I'll be at #storagebeers on Thursday 2nd December.
That sounds like an interesting job… too bad I’m in the wrong country.
Interesting approach to storage. I feel that we are moving in a direction where we will see hard drives only as a commodity and will focus more on “Storage as an Application”. OK, bad one :-), but if you think about it, that is where the world needs to move to – commodity hardware with a software layer focused on the application. Why should I focus on my storage systems (and pay a lot) when I can focus on my application and not worry…
Suggestions on your architecture.
I’ve done three TV stations (each with a few PB of archive), but with Front Porch Digital DIVArchive, with Quantum StorNext or MetaSAN as the file systems.
I worked for StorageTek, and it was the de facto standard in the industry prior to Sun, with probably 20+ installations.
More in line with their needs than TSM’s IT-centric approach.
Today I would suggest the same software, along with Quantum/IBM robots and NetApp/IBM N series NAS. 10Gbit networking is required; they move very large volumes at the same time. Avoid a single volume pool for everything – big mistake if you do.
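To make the “avoid a single volume pool” advice concrete, here is a rough sketch of separating content classes into their own TSM sequential storage pools, with a copy pool for the vaulted second copy. All the pool, library and device-class names here are made up for illustration; they are not part of any deployment described above.

```
/* Hypothetical TSM admin-console macro: one device class, separate  */
/* primary pools per content class instead of one pool for everything. */
define devclass lto5class devtype=lto library=mainlib format=ultrium5c

/* Full-resolution masters and browse proxies kept apart, so big      */
/* restores don't compete with small ones for the same volumes.       */
define stgpool hires_pool lto5class maxscratch=500 collocate=group
define stgpool proxy_pool lto5class maxscratch=100

/* Copy pool for the offsite/vaulted second copy on tape.             */
define stgpool offsite_copy lto5class pooltype=copy maxscratch=500
```

Separate pools also let you tune collocation and reclamation per content class rather than compromising on one setting for everything.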
Beware: pNFS is not ready for prime time yet.
Jean with 27+ years in Storage…
Jean, we are already some way along with our deployment and architecture. I’d strongly suggest that anyone considering StorNext for their archive look very closely at GPFS; licensing costs alone make it an interesting comparison.
And to be honest, you just can’t get the required sustained throughput with NetApp for example.
Interesting project, would love to hear how it’s progressing.
We’ve had quite a bit of experience building custom, purpose-built infrastructures using GPFS + TSM (for integrated HSM and for LAN-free backup) and a COTS-based approach.
One of our first projects was an encode/transcode farm with a large distribution requirement and an even larger archival requirement (6PB total). Rather than throwing 6PB of disk at the problem and, for protection, replicating to another 6PB pile of disk (the usual disk vendor suspects), our approach allowed us to reduce the disk to 0.5PB and create a 5.5PB tape-based ‘active’ archive, with a second copy on tape vaulted locally (for LAN-free backups) and replicated offsite to another tape library for DR.
The customer then decided to add a 50-node render farm, and once that was added, they added a 50-node encryption farm. The cool thing about this design is that the data doesn’t move: upon ingest, it lives within the same FC-based storage. GPFS’s policy-based ILM engine migrates the data sets from tier to tier, TSM’s media manager handles the tape abstraction, and this eliminates the need to move data out of one island of storage (HPC, NAS, archive) into another.
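For readers unfamiliar with GPFS’s ILM engine, the tier-to-tier migration described above is driven by SQL-like policy rules. The following is only a generic sketch of what such a policy can look like – the pool names and thresholds are invented for illustration, not taken from this design:

```
/* Hypothetical GPFS ILM policy sketch. Pool names, script path and  */
/* thresholds are assumptions, not the deployment described above.   */

/* Tape (via TSM HSM) is exposed to GPFS as an external pool.        */
RULE EXTERNAL POOL 'hsm' EXEC '/usr/lpp/mmfs/bin/mmpolicyExec-hsm.sample'

/* Newly ingested files land on the fast FC tier.                    */
RULE 'placement' SET POOL 'fc_tier'

/* When the FC tier passes 80% full, migrate the coldest files to    */
/* tape until utilisation drops back to 60%.                         */
RULE 'to_tape' MIGRATE FROM POOL 'fc_tier'
  THRESHOLD(80,60)
  WEIGHT(CURRENT_TIMESTAMP - ACCESS_TIME)
  TO POOL 'hsm'
  WHERE FILE_SIZE > 1048576
```

A policy like this would be installed with `mmchpolicy` and evaluated by `mmapplypolicy`; because migration happens within the single namespace, applications keep the same paths regardless of which tier a file currently lives on.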
The ability to provide a single, global namespace plays to the overall data-management idea; but make no mistake, these are very sharp tools with many, many knobs and levers upon which to self-inflict damage. IBM likes to pre-package them into appliances (e.g. SONAS, Information Archive, etc.) for mass consumption, but that limits the ability for true customization.
The real idea is executing with a COTS-based approach – we’ve managed to get installed prices under $300/TB using IBM’s cost/performance-leading storage (DS3500) and LTO-5 tape.
Again, best of luck with your project – if you need an introduction to some of IBM’s best & brightest, we’d be happy to broker that for you.
John