IBM’s purchase of TMS was not the biggest surprise, especially for anyone who has been involved with IBM’s HPC team. It’s a good move for both companies and gives IBM a solid flash-based storage team. It does add yet another storage array product to IBM’s ever-growing portfolio, and positioning it outside of the HPC world is going to be fun; IBM have multiple competing arrays, although arguably the TMS range does not overlap with any of them.
And so the move to flash continues, or more likely a move to a two-tier storage future, with active data sitting in a flash tier and resting data sitting on a SATA/SAS bulk storage tier. But with all these competing products and different vendors, the storage management and implementation headaches could be massive.
Now, we could move to hybrid arrays where both flash and traditional rotational storage live in the same array. The array itself can auto-tune what sits where, moving data according to temperature; we’ve seen this in the various auto-tiering technologies from EMC, IBM and HDS for example.
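To make the idea of temperature-based tiering concrete, here is a minimal sketch of how a hybrid array might rank extents by access heat and keep the hottest ones on flash. All names here are hypothetical; this is an illustration of the general technique, not any vendor’s actual auto-tiering implementation.

```python
from collections import defaultdict

class AutoTieringArray:
    """Toy model of a hybrid array that promotes hot extents to flash."""

    def __init__(self, flash_capacity):
        self.flash_capacity = flash_capacity  # how many extents fit on flash
        self.heat = defaultdict(int)          # access count ("temperature") per extent
        self.flash = set()                    # extents currently on the flash tier
        self.sata = set()                     # extents on the bulk SATA/SAS tier

    def record_access(self, extent):
        self.heat[extent] += 1

    def rebalance(self):
        """Promote the hottest extents to flash; demote everything else."""
        ranked = sorted(self.heat, key=self.heat.get, reverse=True)
        hot = set(ranked[:self.flash_capacity])
        self.flash, self.sata = hot, set(ranked) - hot
```

Real implementations run this sort of rebalance periodically and decay the heat counters over time, so that yesterday’s hot data doesn’t squat on flash forever.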
We could move to a world where the flash in an array simply works as an extended cache tier, augmenting the DRAM cache and speeding up reads; think NetApp’s approach.
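The extended-cache idea can be sketched as a two-level read cache: DRAM in front, flash behind it catching DRAM evictions, with the rotational tier as the backing store. This is a simplified illustration under assumed names, not NetApp’s actual implementation.

```python
from collections import OrderedDict

class FlashReadCache:
    """Sketch of flash as a second-level read cache behind DRAM."""

    def __init__(self, dram_size, flash_size, backend):
        self.dram = OrderedDict()   # first-level cache, LRU order
        self.flash = OrderedDict()  # second-level cache, catches DRAM evictions
        self.dram_size, self.flash_size = dram_size, flash_size
        self.backend = backend      # dict standing in for the rotational tier

    def read(self, key):
        if key in self.dram:                 # DRAM hit
            self.dram.move_to_end(key)
            return self.dram[key]
        if key in self.flash:                # flash hit: promote back to DRAM
            value = self.flash.pop(key)
        else:                                # miss: read from the disk tier
            value = self.backend[key]
        self._insert_dram(key, value)
        return value

    def _insert_dram(self, key, value):
        self.dram[key] = value
        if len(self.dram) > self.dram_size:
            old_key, old_val = self.dram.popitem(last=False)
            self.flash[old_key] = old_val        # DRAM eviction lands on flash
            if len(self.flash) > self.flash_size:
                self.flash.popitem(last=False)   # flash eviction is simply dropped
```

Note that only reads are accelerated here; writes still go to the backing tier, which is why this approach slots so easily into existing architectures.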
Both of these implementations allow existing storage architectures to be enhanced with flash, and hence both are pretty popular with existing vendors and customers. Nothing much changes and things feel very much Business As Usual, but faster.
Then there are the new players on the block with their flash-only arrays, architected to make the most of flash. These tend to have really screaming performance, but can you afford to replace all of your storage with flash technology? If you can’t, you need to think a lot harder about how best to utilise these; for example, if the data you are storing has any form of longevity and needs to be kept, you will need to come up with ways of moving it between tiers of storage which almost certainly come from different vendors. Experience suggests that this sort of tiering is very hard to do, and applications need to be designed with it in mind.
And then there are vendors who believe that you should implement flash as close to the server as possible. This is the approach of Fusion-IO and the like; in many ways it could be very attractive, certainly if you can use it as a cache tier, but if you have very large server farms it could be expensive, and yet again you could run into design headaches where positioning workloads starts to get much harder. There are also potential issues with clustering, failure modes and the like. But it could allow you to leverage your existing storage estate and sweat the asset.
This introduction of a new tier of storage has re-opened the ILM/HSM box; the glue which moves data between different tiers of storage is going to be incredibly important, and this, more than the actual hardware, could well define the future of flash in the Enterprise and beyond.
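The “glue” in question is essentially policy-driven data movement. A minimal HSM-style sketch, under assumed names and an age-based policy, might demote anything untouched for too long from the flash tier to the bulk tier:

```python
import time

def demote_cold_files(flash_tier, bulk_tier, last_access, max_age_seconds, now=None):
    """Move entries from flash_tier to bulk_tier once idle longer than max_age_seconds.

    flash_tier and bulk_tier are dicts standing in for the two storage tiers;
    last_access maps each entry to its last-access timestamp. Illustrative only:
    a real mover would also handle recall, stubbing and cross-vendor APIs.
    """
    now = time.time() if now is None else now
    for name in list(flash_tier):
        if now - last_access.get(name, 0) > max_age_seconds:
            bulk_tier[name] = flash_tier.pop(name)
```

The hard part in practice is not the policy loop but everything around it: recalling demoted data transparently, and doing all of this across arrays from different vendors with no common API.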
We are seeing a rapidly evolving hardware market, but these technologies could manifestly increase the complexity of the storage environment. That might be good news for storage administrators eyeing the increasingly simplified administration tools that all of these arrays ship with, but to enable the dynamic environments that the Business requires, an integration layer is going to have to start appearing.
And as IBM’s acquisition of TMS shows, acquiring hardware platforms and expertise seems to be the current focus…even if every array you purchased was IBM-branded, data movement between the arrays would be hard; start adding other vendors into the mix and your problems are going to be interesting.