Cisco Nexus 7000 connectivity solutions for Cisco UCS

Last summer I was invited by the Nexus 7000 product management team at Cisco to help co-author a whitepaper covering general guidelines and best practices for network integration of Cisco UCS with Cisco Nexus 7000.  The idea was to take a lot of the content already presented in my video series Cisco UCS Networking Best Practices (in HD), extract the material most relevant to Cisco UCS + Nexus 7000, and publish a narrative with diagrams in a whitepaper format.

I am pleased to announce that as of today this whitepaper is an official Cisco publication:

Cisco Nexus 7000 Series Connectivity Solutions for the Cisco Unified Computing System

In summary, this whitepaper discusses the following topics:

  • Nexus 7000 bandwidth and density complementing Cisco UCS deployments
  • Cisco UCS network connectivity overview
  • Cisco UCS End Host Mode vs. Switch Mode
  • Why End Host Mode is the preferred (and default) mode of operation
  • Why vPC uplinks from Cisco UCS to Nexus 7000 are preferred
  • Traffic patterns and failure scenarios with vPC uplinks to Nexus 7000
  • Why attaching Cisco UCS without vPC to Nexus 7000s configured for vPC should be avoided
  • No vPC? No problem!  Best practices when connecting Cisco UCS to Nexus 7000 without vPC
  • Connecting Cisco UCS to separated Layer 2 networks
  • Connecting Cisco UCS to networks with Nexus 5000 and Nexus 7000 using vPC
  • Why connecting Cisco UCS to a Spanning Tree-influenced Layer 2 access topology should be avoided
  • Summary of Cisco UCS + Nexus 7000 networking best practice recommendations
  • Cisco Nexus 7000 architectural advantages for Cisco UCS connectivity
    • Hitless ISSU, Stateful process restarts, Stateful supervisor switchover
    • N+1 and Grid level power supply redundancy
    • End of row L2/L3 connectivity for high density compute pods
    • Scalability for large deployments, 128,000 MAC addresses – hardware learning
    • Infrastructure consolidation with virtual device contexts (VDC)
    • Support for next generation switching fabrics with FabricPath, and TRILL
    • SAN/LAN infrastructure consolidation with future support for FCoE & FCF

What’s NOT covered in this whitepaper:

  • Connecting Cisco UCS to Nexus 7000 FabricPath networks
  • Guidance on choosing Nexus 7000 F1 or M1 series linecards for Cisco UCS connectivity
  • FCoE uplinks from Cisco UCS to Nexus 7000

The items not covered in this whitepaper may be the subject of future blog posts here and/or additional Cisco whitepapers and CVDs (Cisco Validated Designs).  However, I will take this opportunity to write a few comments on each subject.

Connecting Cisco UCS to Nexus 7000 FabricPath networks

Nexus 7000 switches configured for FabricPath have a new switchport mode available called, you guessed it, a FabricPath port.  These are the ports that directly connect to other FabricPath-capable switches and must be explicitly configured as such.

interface Ethernet 1/1
  description Connection to FabricPath network
  switchport mode fabricpath

All other standard non-FabricPath ports are referred to as “Classic Ethernet” ports that normal switches and servers connect to without any knowledge or awareness of FabricPath.  This is the default port setting.

The Cisco UCS fabric interconnect is not a FabricPath-aware switch, and as such should be connected to the Nexus 7000 on a normal “Classic Ethernet” port, in either End Host mode or Switch mode (End Host mode is still preferred).  The Nexus 7000 may be participating in a larger FabricPath network upstream, but this fact is completely transparent to Cisco UCS or any other device attached to a normal “Classic Ethernet” port.

interface Ethernet 2/1
  description Connection to Cisco UCS
  switchport mode trunk
  spanning-tree port type edge trunk

The Nexus 7000 “Classic Ethernet” ports can still be configured for vPC, so the best practice recommendation of connecting Cisco UCS to Nexus 7000 with vPC uplinks in End Host mode still applies, with or without FabricPath.
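As a hedged illustration (the domain ID, port-channel number, ports, and keepalive addresses below are hypothetical), the vPC configuration toward one UCS fabric interconnect on each Nexus 7000 might look something like this:

```
feature vpc

vpc domain 10
  peer-keepalive destination 10.1.1.2 source 10.1.1.1

interface port-channel 20
  description vPC to UCS Fabric Interconnect A
  switchport mode trunk
  spanning-tree port type edge trunk
  vpc 20

interface Ethernet 2/1
  description Member link to UCS Fabric Interconnect A
  switchport mode trunk
  channel-group 20 mode active
```

The same port-channel and vpc number would be mirrored on the vPC peer switch, with its member link connecting to the same fabric interconnect.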

The Nexus 7000 configured for FabricPath offers an enhancement to normal vPC, called vPC+, which makes the Nexus 7000 vPC domain appear as a single Switch ID to the rest of the FabricPath network.  This is helpful in preventing the thrashing of Switch IDs in the FabricPath forwarding tables, but has nothing to do with how Cisco UCS connects to the network.
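For completeness, vPC+ is enabled simply by assigning an emulated FabricPath switch ID to the existing vPC domain (the IDs below are hypothetical); nothing changes on the ports facing Cisco UCS:

```
vpc domain 10
  fabricpath switch-id 100
```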

In a nutshell, connecting Cisco UCS to a Nexus 7000 FabricPath network has little impact on how you would normally connect Cisco UCS.  Just make sure you’re connecting Cisco UCS to a normal “Classic Ethernet” port on the Nexus 7000.

More on this later…

Guidance on choosing Nexus 7000 F1 or M1 series linecards for Cisco UCS connectivity

First, let’s understand some of the key differences in terms of price and capabilities…

The Nexus 7000 M1 series are the standard Layer 2 and Layer 3 capable linecards, available since the beginning, with an 80 Gbps connection to the switch fabric and 4:1 oversubscription across the 32 front-panel 10GE ports.  Additionally, the M1 series linecards support hardware learning for 128,000 MAC addresses and roughly 1 million IP routes.  The M1 linecard’s Layer 3 capabilities and MAC scalability provide flexibility that is both simple and scalable, but at twice the price of the F1 linecard for an equivalent 32 ports of 10GE.  If price is more important than density, an 8-port non-oversubscribed M1 linecard is available for almost half the price of the 32-port card.

The Nexus 7000 F1 series is a new 32-port 10GE linecard that supports Layer 2 forwarding only, with a 230 Gbps connection to the switch fabric and line-rate non-blocking forwarding (320 Gbps) for all Layer 2 flows local to the linecard.  Additionally, the F1 linecard supports FabricPath and is FCoE ready.  Every two front-panel ports are serviced by a switch on chip (SoC) that supports hardware learning for 16,000 MAC addresses.  If you simply spread all VLANs across all ports (all SoCs), the entire linecard supports 16,000 MAC addresses.  With careful planning, you can isolate VLANs to fewer ports, and therefore expose the MAC addresses in those VLANs to fewer SoCs.  The extreme case would be keeping any given VLAN unique to only one SoC, resulting in the F1 linecard supporting 256,000 unique MAC addresses (16 SoCs, each with 16K unique MACs).
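To make the bandwidth and MAC-table arithmetic above concrete, here is a quick sanity check in Python using only the figures from the text:

```python
# M1 32-port linecard: 32 x 10GE front panel vs. an 80 Gbps fabric connection
front_panel_gbps = 32 * 10             # 320 Gbps of front-panel capacity
fabric_gbps = 80
print(front_panel_gbps / fabric_gbps)  # 4.0 -> the 4:1 oversubscription ratio

# F1 linecard: 16 SoCs, each with a 16,000-entry MAC table
socs = 16
macs_per_soc = 16_000
worst_case = macs_per_soc        # every VLAN on every SoC: tables mirror each other
best_case = socs * macs_per_soc  # each VLAN confined to a single SoC
print(worst_case, best_case)     # 16000 256000
```

The spread between the worst and best case is exactly why VLAN-to-port planning matters so much on the F1 card.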

Side note: When the F1 linecard receives traffic that needs Layer 3 switching, it will forward that traffic across the internal fabric to an M1 linecard (if one exists) for the Layer 3 lookup and forwarding.

Which linecard is best for Cisco UCS connectivity?  Each is a good choice with pros & cons, so it really depends on what’s more important to you: cost, scalability, flexibility, bandwidth, over-subscription, etc.

You might choose the M1 linecard under these criteria:

  • Scalability with simplicity, e.g. 128,000 MACs with no special planning.
  • You are linking Cisco UCS to the Aggregation layer Nexus 7000 where Layer 3 switching is required.
  • Consistency and simplicity of local forwarding for Layer 2 and Layer 3 flows.
  • Line rate non-oversubscribed forwarding for all Layer 2 and Layer 3 flows (8-port M1)
  • Low cost & low over-subscription more important than port density (8-port M1)

You might choose the F1 linecard under these criteria:

  • You are linking Cisco UCS to an Access/Edge Nexus 7000 where only Layer 2 switching is required.
  • You are linking Cisco UCS to a Nexus 7000 at the Edge of a FabricPath network.
  • Low over-subscription, low latency, for all end-to-end Layer 2 flows is a concern.
  • Both port density and cost are key concerns
  • MAC scalability is not a concern

In my experience, most customers connect their Cisco UCS to the Aggregation layer (this makes sense if you view the fabric interconnect as the Access layer).  Of those customers, given the choice, most choose the M1 linecard, except for those where cost, low latency, and low over-subscription for Pod-to-Pod Layer 2 forwarding are key concerns.

Some customers are beginning to deploy Nexus 7000 in both the Access (end of row) and Aggregation layers for density requirements and to prepare themselves for FabricPath.  These customers are connecting their Cisco UCS fabric interconnects to the Nexus 7000 Access/Edge switch which is Layer 2 only by design, so the F1 linecard there is a no-brainer.

More on this later…

FCoE uplinks from Cisco UCS to Nexus 7000

There isn’t a lot of detail that can be discussed right now because two things still need to happen. But I think I can give you a hint of where this is heading.

  1. Nexus 7000 software (NX-OS) support for Fibre Channel forwarding (FCF)
  2. Cisco UCS Manager software support for FCoE uplinks

The key word in both items is software, meaning no new hardware beyond what is already available today will be required.

When these software capabilities arrive, we will begin to see topologies where Cisco UCS can link to a common pair of Nexus 7000s that provide both the LAN and SAN infrastructure. The holy grail of unified fabric consolidation at both the access and aggregation layers starts to become reality.

More on that later too…

Disclaimer:  The views and opinions expressed are those of the author, and not necessarily the views and opinions of the author’s employer.  The author is not an official media spokesperson for Cisco Systems, Inc.  For design guidance that best suits your needs, please consult your local Cisco representative.


  1. Patrick says

Any thoughts on a comparison of the N7K to the competition?

    Maybe even the whole Nexus line to the competition in a series?

    I have really enjoyed and made use of the info provided in your blog and will be making even more use in the next two months on a UCS / N7K / Nexus 5548 / N2K install.


  2. robert r says

    What about the management connectivity on the UCS 6100s in multiple data centers with 7Ks ?
    Shared addressing across sites ? Separate addressing per data center ?

  3. Matt says

    Brad, this is a great whitepaper.

    In the section talking about separate L2 domains, you mention using N5Ks to create a common layer2 domain. What if you don’t want a common L2 domain, instead you want to have overlapping VLAN #s (i.e. have a VLAN 10 in DMZ1 and have a VLAN 10 in DMZ2) and be able to present both DMZs to a single server and allow the server to send traffic to either DMZ? This may sound like a weird use case (and potentially a security architect’s nightmare!), but I can think of a few cases where a server needs to communicate north to two separate L2 domains. Thanks for your thoughts!

    • says

      In some cases, there are separate L2 domains using the same VLAN #’s perhaps coincidentally and you may try to connect UCS to each of these domains. Problem here is that the UCS fabric interconnect does not support overlapping VLANs and will not be able to distinguish VLAN 10 in network-A from VLAN 10 in network-B and will treat the two as the same network.

      In this situation you need to connect UCS to a switch that supports VLAN Translation, and then connect the separate L2 domains to that switch.

      I discuss this in Part 9 of my UCS video series here:



