There is currently a fair bit of noise about FCoE: whether it is ready for prime time, whether it'll ever be ready, and whether it matters anyway. I thought I'd add my tuppence worth.
When FCoE was first announced and it looked like we might get some fabric convergence, I was pretty positive about it all. I really thought this was a good thing; convergence is good. Now, there have been various ways of getting storage traffic to flow over traditional IP networks, but storage guys don't trust them and, quite frankly, most network guys don't trust the storage guys not to destroy their networks anyway.
FCoE and the accompanying standards seem to be a real chance to do something about this, and as we continue down the path of 10GbE in the data centre, we could work together to bring about a new world and operating model (exaggeration for effect, BTW).
However, we are a long way from this at present: 10GbE networks are going in without consideration of FCoE, and traditional SAN environments keep growing. The vendors cannot agree on how to ensure interoperability and standardisation, so FCoE is turning into the complete 'head-f**k' that FC is. You will not be able to easily build a heterogeneous FCoE fabric, mixing and matching switches from different vendors; you will have a 'vendor x' fabric, you will still be in the world of complex compatibility matrices, you will have political trouble as you try to get teams to co-operate and work together, you probably will not see any real performance gains at present from going down the FCoE route, and change/problem/incident management will be a complete mess.
So why bother with FCoE at all? You could run iSCSI/NAS over 10GbE, or just stick with FC as you know and love it today. The roadmap for FC is still healthy and there is deployed product. You probably have enough going on in your data centre already, and if IT is going through a transformational change…you really do have enough on your plate.
Now, the future is not completely bleak for FCoE, but adoption is going to take longer and be more painful than I thought. The vendors do need to get together, work out how best to adhere to the standards, and not simply 'Embrace and Extend', causing a replay of FC!
Martin, if you already have business processes in place that have evolved in the presence of FC, then moving to FCoE allows you to realize:
1. a slight performance boost (compared to 8G FC);
2. potential CAPEX/OPEX savings; and
3. some degree of investment protection (since we all know 10GbE is going to be around for a while).
All of this without needing to alter the way you handle common storage provisioning transactions.
As Mark Lippitt says, I think it basically comes down to the provisioning model you use. Either you prefer a Network-Centric provisioning model (FC) and make use of zoning, RSCNs, etc., or you prefer an End-Node-Centric provisioning model (NAS/iSCSI), where the bulk of the provisioning is handled at the devices themselves: you supply a target IP address/IQN, share, etc. directly to the host. (BTW, I appreciate that the preference can change from one environment/application to the next.)
If you prefer the Network-Centric model, you may find that moving to FCoE instead of iSCSI or NAS will be less painful. If you prefer an End-Node-Centric model, you may find iSCSI/NAS more to your liking. Whatever our customers want, we will do.
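To make the contrast concrete, here is a minimal sketch of the End-Node-Centric flow as seen from a Linux host running open-iscsi (the portal address and target IQN below are hypothetical). In the Network-Centric model, the equivalent step happens on the fabric instead: a zone containing the initiator and target WWPNs is created and activated on the switch, and the host simply sees the device appear.

```python
# A minimal sketch of End-Node-Centric provisioning: the host itself
# discovers and logs in to the target. Assumes Linux with open-iscsi
# installed; the portal address and target IQN are hypothetical.
import subprocess

TARGET_PORTAL = "192.168.10.50"                 # hypothetical array portal
TARGET_IQN = "iqn.2010-01.com.example:array1"   # hypothetical target IQN

def provision_iscsi_lun() -> None:
    # SendTargets discovery: ask the portal which targets it exposes.
    subprocess.run(
        ["iscsiadm", "-m", "discovery", "-t", "sendtargets", "-p", TARGET_PORTAL],
        check=True,
    )
    # Log in to the target; the LUN then appears as a local block device.
    subprocess.run(
        ["iscsiadm", "-m", "node", "-T", TARGET_IQN, "-p", TARGET_PORTAL, "--login"],
        check=True,
    )

if __name__ == "__main__":
    provision_iscsi_lun()
```

Note that nothing here touches the network itself; in the FC/FCoE model the bulk of that work (zoning) lives on the switches, which is exactly why the two models feel so different operationally.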
Regarding your concerns about interoperability and matrices: why would you expect that encapsulating FC in Ethernet would alter its basic behavior/interoperability characteristics? One of the basic goals of FCoE (and something it nailed) is that it would not break existing FC tools and applications. Unfortunately, you get the good (driver and multipathing software reuse) and the bad (FC-SW interop) with FCoE.
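To see why encapsulation leaves FC's behavior alone, here is a rough sketch of the framing per my reading of FC-BB-5 (the SOF/EOF code points and FPMA-style MAC addresses are illustrative): the FC frame, CRC and all, rides through the Ethernet wrapper untouched, so zoning, RSCNs and the rest of the FC machinery behave exactly as they do on native FC.

```python
# A rough sketch of FCoE encapsulation per FC-BB-5: an unmodified FC frame
# is wrapped in an Ethernet frame with Ethertype 0x8906. The SOF/EOF code
# points and MAC addresses below are illustrative, not production values.
import struct

FCOE_ETHERTYPE = 0x8906  # Ethertype assigned to FCoE
SOF_N3 = 0x36            # SOFn3 code point (illustrative)
EOF_N = 0x41             # EOFn code point (illustrative)

def encapsulate(fc_frame: bytes, src_mac: bytes, dst_mac: bytes) -> bytes:
    """Wrap an FC frame, unchanged, in an FCoE Ethernet frame."""
    eth_header = dst_mac + src_mac + struct.pack("!H", FCOE_ETHERTYPE)
    fcoe_header = bytes(13) + bytes([SOF_N3])  # 4-bit version + reserved, then SOF
    fcoe_trailer = bytes([EOF_N]) + bytes(3)   # EOF, then reserved bits
    # The FC frame (24-byte FC header, payload, CRC) is carried as-is,
    # which is why FC tools and behavior survive the move to Ethernet.
    return eth_header + fcoe_header + fc_frame + fcoe_trailer

if __name__ == "__main__":
    dummy_fc_frame = bytes(28)  # placeholder: 24-byte FC header + token payload
    # FPMA-style MACs built on the default FC-MAP prefix 0E-FC-00 (illustrative).
    frame = encapsulate(dummy_fc_frame,
                        src_mac=b"\x0e\xfc\x00\x00\x00\x01",
                        dst_mac=b"\x0e\xfc\x00\x00\x00\x02")
    print(f"{len(frame)}-byte FCoE frame (before FCS)")
```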
BTW, FC isn’t the only transport with interoperability issues. Show me an implementation of TRILL from Brocade or Cisco that will interoperate today!!