The end (to end FCoE) justifies the means

I was pleasantly surprised to see that Mike Fratto, site editor and lead analyst from Network Computing, called me out in this article, with the following statement:

two nodes using FCoE connected to a Nexus 2000 Fabric Extender, which is connected to a Nexus 5000, does not constitute end-to-end Ethernet FCoE because the Nexus 2000 Fabric Extender is just a bump in the wire and switching occurs on the Nexus 5000. If you want to call the Nexus 2000 a hop (and you know who you are, Brad), you might as well call the CAT6 cable between them a hop as well. So there. LOL

Mike and I had a pretty heated exchange on Twitter when he published a different but related article that preceded this one.  I passionately disagreed with his headline “Brocade First to market with End to End FCoE“.  We did keep it professional, but in hindsight I realize more of my focus should have been on Brocade, not Mike Fratto, who was just the messenger carrying a bill of goods given to him by Brocade. I’m relieved to see it ended in a lighthearted fashion like this.

I really enjoyed reading this article, Mike, and I got a much needed laugh out of it. So, Thank You! :-)

Now, back to business:

Whether the Nexus 2232 FCoE fabric extender is a “hop”, or not, is largely irrelevant. If you think about it, the resulting solution provides the same “end to end FCoE” (as described by Brocade) without all of the “hops” otherwise required in a Brocade design, and without all of the management, configuration, and design complexity that comes with each “hop”.

Ultimately, I tend to believe customers don’t care about what is and is not technically an FCoE “hop”.  The “hop”, in whatever shape or form it presents itself, is just the means to an end (solution).  More importantly, customers are interested in solutions addressing a given set of requirements (such as # of servers, racks, bandwidth, pod size and scalability, etc.). With the Cisco Nexus 5000/2232 FCoE fabric extender solution, the design complexity of a “hop” is effectively siphoned away while still addressing the multi-rack scalability requirements. Why is this unique approach to FCoE scalability so quickly dismissed in a discussion about “end to end FCoE” or “multi-hop FCoE”?

Interestingly enough, the confusion Brocade has recently created around the applicability of TRILL to FCoE will, IMHO, only make the Nexus FCoE fabric extender solution even more attractive to customers, in the sense that it’s easier to deploy and understand.  Just as Mike described it: “a bump in the wire”.

Does it get any easier than that?


  1. says

    Brad, I have been reading some of the discussions being had regarding FCoE and the delta between what is available today and what the endgame is. I read some of the dialogue between you and Scott Lowe that occurred about a year and a half ago.

    Besides the CNAs, DCB-capable Ethernet switches and the FCF, what other pieces of the puzzle are necessary for what may be termed an end-to-end solution?

    There also seems to be a philosophical discussion going on with regard to the definition of a “hop”. Why is that an issue? Why is there a distinction made between FCoE and multi-hop FCoE?

    I would love to hear some of your thoughts on this and perhaps recommend a good (recent) white paper that addresses where FCoE is today ( in realistic terms, not vendor-specific FUD).

    And, by the way, no worries on getting back to me quickly. I am sure you are very much in demand, so I don’t expect you to drop what you’re doing to engage me within 10 minutes of posting a message – 1 hour will do! 😉

    • says

      I can’t think of a better way to define “end-to-end FCoE” than having a storage network with all FCoE interfaces. That can be done today, and customers are doing this today. It’s just a matter of scale and oversubscription that defines how large these implementations can grow, one “end” to the next.
      In general, I think some people tend to view FCoE as a competing technology to traditional FC. If you come at it with that frame of mind, you tend to look at FC networks with all the hops and say “why can’t FCoE do that?” — And then, “Oh geez, until FCoE can do everything FC can do, I’ll just wait.” As if it’s one or the other.
      Naturally I disagree.
      In fact, I’ll say this: FCoE is not a storage networking protocol … “Brad, how could you say such a thing?!” OK, OK, that might be a little facetious, but hear me out.
      FCoE is really just a way to introduce a new cable type (Ethernet) to an existing storage protocol, FC. The job of FCoE is merely to deliver original FC data frames from one end of a link to the other.
      It’s a very simple concept, but the impact is profound when you begin to add up all of the adapters and cables and switches that can be removed from the server access layer. It’s the server access layer where the real economies of scale exist that make FCoE compelling.
      This optimization of the server access layer has been possible since 2008, when the first FCoE switches and CNAs were shipped.
      The next major step for FCoE will be to unify the Aggregation/Core platforms, with FCoE uplinks from the Server access layer to the Core. That will happen this year (2011) 😉 … and the scale of “end-to-end FCoE” mentioned earlier suddenly gets a lot bigger. We will certainly see ROI optimization with a unified Core, and Cisco will lead the way, however, IMHO, the ROI will be a little more nuanced than what has already been possible in large black & white numbers for several years at the Server access layer.
      The belief that FCoE is a competing technology to FC, and somehow incomplete until you have “multi-hop FCoE” or FCoE “uplinks” is focusing on all the wrong things and completely missing the real opportunity at hand, today.
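The “new cable type” idea above can be sketched in a few lines of Python. This is a simplified illustration of the encapsulation concept, not a working driver; the byte layout is a rough approximation of the FC-BB-5 frame format (same EtherType, SOF, and EOF fields, reserved bits elided for clarity):

```python
import struct

FCOE_ETHERTYPE = 0x8906        # EtherType assigned to FCoE
SOF_I3, EOF_T = 0x2E, 0x42     # example start-of-frame / end-of-frame codes

def encapsulate(dst_mac: bytes, src_mac: bytes, fc_frame: bytes) -> bytes:
    """Wrap an unmodified FC frame in a (simplified) FCoE Ethernet frame."""
    eth_hdr = dst_mac + src_mac + struct.pack("!H", FCOE_ETHERTYPE)
    fcoe_hdr = bytes(13) + bytes([SOF_I3])   # version + reserved bits, then SOF
    fcoe_trl = bytes([EOF_T]) + bytes(3)     # EOF + reserved (FCS added by the NIC)
    return eth_hdr + fcoe_hdr + fc_frame + fcoe_trl
```

Note that the FC frame itself passes through untouched: FCoE only changes the “cable” it rides on, which is the whole point being made here.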

  2. says

    Brad, I agree with you. I don’t see FCoE as a competing technology at all. I actually didn’t know that anyone did! I see FCoE for exactly what it is: the ability to transport FC over a different fabric. FCoE is still carrying FC’s layers 2, 3, and 4 constructs, but with the use of an adaptation layer which acts as an interface to the Ethernet layer. Ethernet is only replacing FC layers 1 and 2. FCoE uses the same FC SAN admin tools and workflows. It uses the same control plane protocol that FC uses (FSPF) and the same management infrastructure.

    It can even be argued that FCoE is FC’s savior, since Ethernet is scaling out to 100G, making iSCSI more feasible for positioning in enterprise-class data centers. But that’s another argument :-)

    I’m still not sure what is meant by “multi-hop” FCoE. Is it used to simply mean FCoE beyond the top-of-rack? If so, why the discussion about whether the FEX is a hop or not…? How does hop count fit into the discussion of FCoE?


    • says

      By the way, I just ran into this tidbit found on an EMC white paper.

      Connecting a CNA to a Nexus 50x0 via a stand-alone ToR or EoR – whether it’s DCB-capable or not — is not supported…

      Is this related to the hop-count discussion? Do CNAs have to be directly connected to an FCF, like the N5K, or can they be connected to a FEX or N4K and then to the FCF? My thought is that the CNA can be indirectly connected to an FCF via a DCB-capable bridge, which can in fact perform FIP snooping, ensuring that the appropriate ENode, with its FPMA, is allowed to begin a discovery and FLOGI session with the FCF.

      • says

        You are correct that CNAs do NOT need to be directly connected to the FCF (Nexus 5000, UCS 6100). The CNA could be indirectly connected to the FCF via a FEX, or via a DCB-capable switch, or both. This is where FIP comes in: the CNA asks for the VLAN and the FCF MAC address it should be connected to, and the FCF responds with this information. The result is a virtual cable from the CNA to the FCF, and FLOGI begins from there. FIP Snooping makes sure that after the FIP process is complete, no other stations connected to that DCB-capable switch can spoof the MAC address of the FCF and compromise the virtual link.
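A toy sketch of that FIP exchange, assuming hypothetical class and method names (this is not a real FIP implementation, just the shape of the conversation):

```python
class FCF:
    """Toy model of an FCoE Forwarder answering FIP discovery."""
    def __init__(self, mac: str, fcoe_vlan: int):
        self.mac, self.fcoe_vlan = mac, fcoe_vlan

    def fip_vlan_discovery(self) -> int:
        # CNA asks: which VLAN carries FCoE?
        return self.fcoe_vlan

    def fip_solicitation(self, cna_mac: str) -> str:
        # Discovery solicitation -> advertisement carrying the FCF MAC
        return self.mac

class CNA:
    """Toy model of a CNA discovering its FCF before FLOGI."""
    def __init__(self, mac: str):
        self.mac, self.fcf_mac, self.vlan = mac, None, None

    def fip_discover(self, fcf: FCF):
        self.vlan = fcf.fip_vlan_discovery()
        self.fcf_mac = fcf.fip_solicitation(self.mac)
        # A "virtual cable" now exists: all FCoE frames go to fcf_mac on
        # vlan, and FLOGI can begin over that virtual link.
        return self.fcf_mac, self.vlan
```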

    • says

      That’s the primary argument I try to make in this article: whether the FEX is a “hop” or not is largely irrelevant. It’s the solution that matters.
      Cisco has provided a fundamentally different approach to achieving scale with the FEX architecture, providing the scale of “hops” without the configuration and management complexity of “hops”.
      Hence the inspiration for this post :-)


      • says

        I wouldn’t worry too much about semantics. Brocade, like any other vendor, has the right to sell their stuff. Everyone is guilty of fudging the truth to a certain degree.

        As for the media letting Brocade get away with a misstatement — well, if our mainstream corporate media outlets can shamelessly abandon their sacrosanct journalistic duty to challenge our former president, as they did when he was obviously lying and conspiring to wage a war of choice in Iraq, then I think they will allow Brocade to serve us a little bit of marketing BS. :-)

  3. says


    My understanding with regard to the FCoE roadmap, as it pertains to the extension of the FCoE domain, is to extend the lossless Ethernet network past today’s FCoE demarcation point, which today is the ToR, to the EoR, and then to the core. Moreover, that path may be part of what we would call a fabric (self-healing, built-in intelligence, horizontal expansion, full bi-sectional bandwidth availability, extended control plane across physical switch boundaries, etc), or simply a patchwork of lossless Ethernet switches connected to each other via the classic 802.1D architecture (STP, blocked uplinks, 50% bi-sectional bandwidth, necessary reconvergence after every fault, etc). Either way, the FCoE traffic will leverage a lossless Ethernet path.

    This having been said, what happens to the ToR FCF switch that presently terminates the FCoE domain and does NOT have FSB (FIP-Snooping Bridge) capability? I don’t understand what is to be done with the Brocade B-8000 or Cisco Nexus 5000 when data center architects want to take the next step and make the EoR switch the new demarcation/termination point for FCoE traffic – meaning, the point at which FC and Ethernet go their separate ways. Will the Nexus have to be ripped out? Perhaps I am wrong and the B-8000 and Nexus 5000 ARE indeed FSBs, which would mean that the FC ports would no longer be used on them, and the 10G CEE uplinks to the EoR will simply carry the FIP/ FCoE traffic to the EoR, where the new FCF will terminate the FCoE path. This question isn’t vendor specific. Thoughts?

    Lastly, I have a question with regard to the Nexus 5000 supporting a connection to an FSB. I am asking this because there are discussions being had among data center architects, and there are proposed reference architectures being submitted, in which a blade switch acting as an FSB (and a normal port aggregator, like every other switch, not a 1:1 pass-through) is connected to a single Nexus 5000 10G CEE port. So, to be crystal clear, I am talking about a blade chassis with multiple blade servers, each with a CNA, all connected to an FSB Ethernet blade switch that aggregates all the CNA traffic and is connected/uplinked as a “trunk” to ONE port on the Nexus 5000.

    Is the N5K capable of supporting multiple FLOGI logins on a single port? If not, that means that reference architecture will not work. Yes or no?

    • says


      In general you are asking (paraphrasing here): “How will the Nexus 5000 handle FCoE uplinks for further unified fabric consolidation upstream? Or is that not possible?”

      Yes this is possible. There will be a couple of ways to do this.
      1) The Nexus 5000 remains the FCF at the ToR and has standard VE_Port FCoE uplinks to another FCF.
      2) The Nexus 5000 will have FCoE uplinks that are standard VN_Ports and will use these to proxy server FLOGIs upstream to the FCF. This is basically the same NPV type of implementation FC edge switches have been doing for a while, just now with FCoE. You could call it FCoE-NPV.

      For the Nexus 5000, FIP Snooping is not implemented because FCoE-NPV trumps it in functionality. Remember, FIP Snooping just looks at FIP packets, that’s it. FCoE-NPV takes that a step further and looks at the actual FC messages, including the FLOGI, to perform traffic engineering, security, and troubleshooting not possible with FIP Snooping alone.

      Your last question: “Is the N5K capable of supporting multiple FLOGI logins on a single port?”
      The answer is absolutely, Yes. This is made possible by a capability called NPIV, which the Nexus 5000 has supported for some time. Just configure ‘feature npiv’ in the CLI.
      However, in the case of a downstream FSB blade switch (such as the Nexus 4000), each blade server has its own virtual FC interface on the Nexus 5000. So each server is actually logging in to its own port (a virtual port), and NPIV is not really in use here. The virtual FC interface on the Nexus 5000 is manually associated with the server CNA’s MAC address (FCoE_LEP). So, yes, there is more administrative work to do, but that’s the trade-off of using a simple FSB downstream from the FCF. With FCoE-NPV downstream from the FCF, manually defining a virtual port for every server CNA MAC is not necessary, so the configuration is much easier and less error prone.
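As an illustration of that manual association, a hypothetical NX-OS snippet might look like the following (the interface number, VLAN/VSAN numbers, and MAC address are made-up examples; check the platform’s configuration guide for exact syntax):

```
feature npiv                           ! allow multiple fabric logins per port
vlan 100
  fcoe vsan 10                         ! map the FCoE VLAN to a VSAN
interface vfc 101
  bind mac-address 00:25:b5:00:00:0a   ! tie this virtual FC port to one server CNA
  no shutdown
vsan database
  vsan 10 interface vfc 101
```

Each server CNA behind the FSB would need its own `vfc` interface and `bind mac-address` line, which is exactly the administrative overhead described above.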

      Hope that helps,

  4. says

    Yes, it helps plenty, Brad. This is some very interesting stuff.

    Although the N5K uses its own mechanism to map a blade to a virtual FCoE LEP, the industry approach will be to use what you described first: an edge NPV-enabled switch uplinked to a director or ToR FCoE switch that supports NPIV. This way the NPV-enabled switch’s uplink port can first perform a FLOGI (as an N_Port would normally do) and log into the fabric, and then it will proxy all the blade servers’ FLOGIs and present them to the director or ToR as FDISCs.

    But for that to happen, we would need NPV over FCoE to be supported on the NPV-enabled switch, and we would need the director or ToR to be able to run NPIV over FCoE. So, do you think the N5Ks will support both NPV and NPIV over FCoE? This way, a 3rd-party blade switch (other than, say, the N4K) can be deployed in the chassis and uplinked to the N5K.
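The FLOGI-to-FDISC proxying described above can be shown with a tiny sketch (a hypothetical function, not vendor code):

```python
def npv_proxy(server_logins):
    """Frames an NPV edge switch sends on its uplink toward the NPIV core.

    The uplink N_Port performs one FLOGI of its own to join the fabric,
    then forwards each downstream server FLOGI upstream as an FDISC.
    """
    frames = ["FLOGI"]  # the NPV switch's own fabric login
    frames += ["FDISC" for login in server_logins if login == "FLOGI"]
    return frames
```

So three blade servers logging in behind the NPV edge produce one FLOGI plus three FDISCs on the uplink, which is why the core switch must support NPIV (multiple logins on one port).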


    • says

      For the Nexus 5000, NPIV on FCoE ports is here today. Just configure ‘feature npiv’ at the CLI. The feature is applicable to all storage-enabled ports, FC or FCoE. If your 3rd-party blade switch supported FCoE-NPV you would enable NPIV on the N5K and it should work. However, I’m not aware of any 3rd-party FCoE blade switch shipping FCoE-NPV functionality today.
      As for NPV, the Nexus 5000 does that today too, but for the FC ports only. Enabling the same NPV functionality with FCoE ports (FCoE-NPV as I described it) is something that is planned in a future software release.

  5. Alex says

    It could be a very stupid question, but here it goes:
    In an end-to-end FCoE setup where we have a CNA at the server and native FCoE storage arrays, do I need a switch performing as the FCF? I mean, could a DCB/FIP-snooping-capable switch alone be used to connect both server and storage?

    • says

      Both the CNA and the storage array are going to try to log in to the fabric with FLOGIs and such before they can pass traffic, and you need an FCF to process these logins.

