
In a Flash..

Nice interview with the Fusion-IO guys, whose cards are at the heart of the Project Quicksilver work done by the SVC team at IBM Hursley (which must be just about the nicest environment of any big IT shop in the world; I hope these guys appreciate their surroundings!!).

I wonder where these PCI-Express cards will turn up next and who else will have a go at using them. There are a lot of people beginning to build ‘arrays’ and storage appliances out of commodity hardware. Not EMC, obviously, but there are a lot more vendors who could make use of them.

Edit: I’d be interested to see what NetApp base their PAM modules on; they are PCI-Express cards, so it wouldn’t take the largest of leaps for them to be something like the Fusion-IO cards.


9 Comments

  1. Barry Whyte says:

    We do appreciate our surroundings, believe me. Especially when the US folks realise the House is older than their country… 😉

  2. Jesse says:

    FusionIO *STILL* has not come up with a solid answer to the question about MTBF. Since NAND memory has a hard limit on write-erase cycles (about 100,000 per cell),
    I don’t think there is a drive in the world that can outlast spinning platters, and that makes them useful for indexing and caching, but not for production data.
    The PCI-e bus is another thing I think they will find to be a market limitation. It means they are open to Windows and Linux, but not much in the way of “real” operating systems. The problem is that their price point is too high for companies that run primarily on x86 architecture. Companies with real budgets are spending real money on real servers.

  3. Martin G says:

    Sorry, I don’t get what you mean about real operating systems at all. I work for a real company, with real workloads, which does real stuff for real people, and we are moving a lot of our workloads to Intel-based servers running Windows and Linux on top of VMware.
    And don’t think that VMware is just about server consolidation; we see VMware (and its rivals) as potentially the technology which allows us to do the things in the Intel space that we cut our teeth on in the mainframe space.
    We still have a huge SAN infrastructure and we will maintain it. Once the price of SSDs comes down a bit, I can see myself deploying them for those workloads that need huge amounts of I/O, which at the moment we spread across large numbers of spindles that take power and space and, by the way, waste most of the capacity of those disks.
    SSDs are the first thing in a long time which actually improves my I/O; capacity growth has so far outstripped performance improvement that it actually causes me problems.
    This is not an either/or scenario: we will have flash/SSDs, spinning rust and of course tape. But it adds another potentially very important tool to my currently rather empty toolbox.

  4. Barry Whyte says:

    Jesse,
    I suggest you do some reading on what makes an Enterprise SSD – and how it differs from a “laptop” SSD – they are very different beasts. Most of the Enterprise SSD vendors have intelligence in the write algorithms that boosts the write endurance by several orders of magnitude (a rough sketch of the arithmetic follows the comments). It’s the IP in these algorithms that is currently adding the ‘premium’ to the price, since the raw components are much cheaper. Over the next few years this will come down dramatically.
    Also, I think most of our Systems group people would disagree strongly with your thinking about the form factor.

  5. Jesse says:

    I just came out of an environment where some moron thought that a major banking system could be put together on a Windows platform.
    The environment failed. Later so did the company.
    Now I’m back to consulting and I’m back in “real” environments. Windows is a great front-end system, mostly because people who write code for .NET are a dime-a-dozen.
    Most successful environments utilize Windows/Linux for perimeter systems and AIX, HP-UX, Solaris, mainframe, etc. for the core systems.
    Now granted, FusionIO’s system might enhance the throughput of the perimeter systems, which could in turn speed up the overall performance of an environment. However, back to my original point: I wouldn’t put any NAND-based memory (and that includes most current SSD implementations) in a system that represents a critical path for data, because when NAND flash fails, it will fail spectacularly.

  6. Jesse says:

    Barry – I understand the concepts behind wear-levelling. All that means is that when chips start failing, they are going to fail at around the same time, instead of sporadically.
    Truthfully it’s a six-of-one, half-a-dozen-of-the-other kind of scenario. You either have a few blocks fail sooner or most of the blocks fail later.

  7. Martin G says:

    Components fail; mechanical spinning disks fail; we protect against that in a variety of ways, and I assume that my flash disks will fail too. I would protect my flash disks via RAID of some sort. Okay, it is possible that all my flash disks will fail at the same time; it is also possible that all my mechanically spinning disks will fail at the same time.
    I think EFDs are going to be very important to us in the storage world; they are the first major performance increase in years. At the moment, we can only get more disk performance by adding spindles; we can fake it a bit by adding huge amounts of cache, but that does tend to be horrendously expensive.

  8. Nice blog, Martin!
    Wear leveling on SSD is important, and the enterprise-level stuff does it on-chip. But if you think about the way NetApp WAFL works, the wear leveling is built into the way we write to storage. Wear leveling for free.
    To address Jesse’s point, it still doesn’t avoid the need for some form of RAID, and for management of these “drives” to prevent catastrophic data loss. Any assumption otherwise and we’re back to the bad old days.
    The last point is to do with impedance mismatches. We’re still stuck with SSD being seen as a class of storage, and nobody on the traditional brown stuff gets the benefit if it’s treated just like regular disk. Perhaps a better use of SSD is as a secondary cache, between the fast and expensive DRAM on the controller and the far cheaper but slower spinning disk (a small sketch of this follows the comments). That way, everyone gets a boost.

  9. Martin G says:

    I think the use of EFDs will evolve over the next few years; it may be that they get deployed as a form of secondary cache, it may be that hot blocks get written there, and for some applications you may decide that you want to pin the whole application on EFDs.
    EFDs may also be the kick up IBM’s proverbial which encourages them to flesh out a DFSMS for open systems that works.
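To put some very rough numbers on the endurance argument above: the sketch below multiplies capacity by the 100,000 write-erase cycle figure quoted in the comments and by an assumed wear-levelling efficiency, to get a ballpark write budget and lifetime. Every figure here is an illustrative assumption, not something from a vendor datasheet.

```python
# Back-of-the-envelope flash endurance estimate.
# All figures are illustrative assumptions, not vendor specifications.

capacity_gb = 80          # assumed usable capacity of the card
pe_cycles = 100_000       # write-erase cycles per cell, as quoted in the comments
wear_level_eff = 0.8      # assumed fraction of ideal wear levelling actually achieved
daily_writes_gb = 500     # assumed sustained host writes per day

# Total amount of data that can be written before the cells wear out.
total_write_budget_gb = capacity_gb * pe_cycles * wear_level_eff
lifetime_years = (total_write_budget_gb / daily_writes_gb) / 365

print(f"Write budget: {total_write_budget_gb / 1e6:.1f} PB")
print(f"Estimated lifetime: {lifetime_years:.0f} years at {daily_writes_gb} GB/day")
```

The point being argued in the comments is that the raw cycle limit matters much less than how evenly the writes are spread and how much you actually write per day; change either assumption and the lifetime moves by orders of magnitude.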
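And a minimal sketch of the “SSD as secondary cache” idea from comment 8: reads are tried against a small DRAM tier first, then a larger flash tier, and only then go to spinning disk, with the flash tier populated on a miss. The tier names and promotion policy here are hypothetical, purely to illustrate the idea.

```python
# Minimal sketch of a two-level cache in front of spinning disk.
# Tier sizes, names and the promotion policy are hypothetical.

dram_cache = {}                     # small, fast, expensive tier
flash_cache = {}                    # bigger, slower, cheaper tier
disk = {"block42": b"cold data"}    # backing store of record

def read(block_id):
    # 1. Fast path: primary (DRAM) cache.
    if block_id in dram_cache:
        return dram_cache[block_id]
    # 2. Second chance: flash cache; promote the block into DRAM on a hit.
    if block_id in flash_cache:
        dram_cache[block_id] = flash_cache[block_id]
        return dram_cache[block_id]
    # 3. Miss both caches: read from disk and populate the flash tier,
    #    so repeat reads are served at flash rather than disk latency.
    data = disk[block_id]
    flash_cache[block_id] = data
    return data

print(read("block42"))   # first read comes off the disk
print(read("block42"))   # second read is served from the flash cache
```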
