Cisco UCS and VMware vSwitch design with Cisco 10GE Virtual Adapter
This diagram is a sample design of Cisco UCS running vSphere 4.0, utilizing the VMware vSwitch and Cisco’s virtualized mezzanine adapter. The Cisco adapter is a dual-port 10GE Converged Network Adapter supporting Fibre Channel over Ethernet (FCoE) and Network Interface Virtualization (NIV). The Cisco adapter is “virtual” in the sense that this single physical adapter can be carved up into as many as 128 virtual adapters. Each virtual adapter is then exposed to the operating system as a unique physical adapter. The virtual adapters can be either Ethernet (vNIC) or Fibre Channel (vHBA). This is just one of many possible VMware + Cisco UCS designs I will be depicting in a series of future posts.
Many VMware 3.5 installations today use as many as (4), (8), or even (12) 1GE adapters. Because the 10GE Cisco adapter can present itself to the ESX kernel as many adapters, the designer can preserve existing multi-NIC VMware designs using the traditional VMware vSwitch, allowing an easy migration to 10GE attached ESX hosts.
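As a quick sanity check once such a blade is up and running, the virtual adapters can be listed from the classic ESX service console. The commands below are standard ESX 4.0 tools; the vmnic and vmhba numbering shown on your system will of course vary:

```
# List the Ethernet adapters seen by the ESX kernel.
# Each Cisco vNIC appears as its own 10000 Mbps vmnic.
esxcfg-nics -l

# List the storage adapters seen by the ESX kernel.
# Each Cisco vHBA appears as its own Fibre Channel vmhba.
esxcfg-scsidevs -a
```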
Some key observations and information related to this design:
- Some familiarity with Cisco UCS and its automated provisioning of servers with Service Profiles is assumed.
- The dual-port 10GE Cisco virtual adapter on the UCS half-height blade exposes (4) Ethernet NICs and (2) Fibre Channel HBAs to the ESX kernel.
- No special drivers or intelligence is needed by the ESX kernel to see the virtual adapters. When the operating system (ESX) scans the PCI bus, it sees each virtual adapter as a unique physical adapter plugged into its own slot on the PCI bus.
- The virtual Ethernet adapters are called vNICs (not to be confused with a virtual machine’s virtual NIC, which is also called a vNIC). The virtual Fibre Channel HBAs are called vHBAs.
- The number of vNICs and vHBAs presented by the Cisco mezzanine adapter is defined in the Service Profile for this server within the UCS Manager located on the Fabric Interconnect (see the UCS Manager CLI sketch after this list).
- Some vNICs can be given a minimum guaranteed bandwidth or a higher QoS priority than other vNICs. vNIC QoS settings might be a good fit for the Service Console or VMkernel vNICs (not shown here). The vHBAs supporting FCoE have QoS (minimum bandwidth guarantees, lossless Ethernet) enabled by default. The Cisco virtual adapter QoS capabilities will be covered in detail in a future post.
- Each vNIC has a MAC address that was either manually defined or drawn from a pool of available MAC addresses when this Service Profile was created. Similarly, the pWWN for each vHBA was either manually defined or drawn from a pool of available WWNs.
- A vNIC or vHBA is like a real adapter in the sense that it needs to be connected to its own dedicated upstream switch port with a dedicated cable. However, in this design each virtual adapter gets its own dedicated “virtual” switch port on the Fabric Interconnect, and the “cable” that connects the virtual adapter to its virtual switch port is a VNTag.
- The Cisco virtual adapter applies a unique VNTag to all traffic egressing from a vNIC or vHBA. The VNTag is then received by the Fabric Interconnect and associated with a virtual switch port: vEth for a virtual Ethernet switch port, or vFC for a virtual Fibre Channel switch port.
- The Data Center IT team does not need to manually define, manage, or track VNTags. The VNTag numbering and associations per link are managed entirely by the UCS Manager behind the scenes during Service Profile provisioning.
- Similarly, the Data Center IT team does not need to define or track the virtual switch ports on the Fabric Interconnect. When a vNIC or vHBA is defined in a Service Profile and then applied to a blade, the UCS Manager automatically provisions the VNTags and virtual switch ports needed to complete the provisioning process.
- Cisco UCS and the Cisco virtual adapter combined provide a unique “Fault Tolerant” feature that can be defined for each vNIC. The fault tolerant feature is shown in this design with the use of dashed lines. vHBAs do not support this fault tolerance feature. (Important: please see the UPDATE below.)
- A vNIC uses one of the two Fabric Interconnects as its primary path. Should any link or device failure occur along the primary path, the vNIC will switch to the secondary path to the other Fabric Interconnect. This switchover occurs within microseconds and is undetected by the operating system. This fault tolerance eliminates the need for active/passive NIC teaming to be configured in the operating system.
- When fault tolerance is defined for a vNIC, the UCS Manager automatically provisions virtual switch ports (vEth) and VNTags on both Fabric Interconnects to assist in a speedy switchover process (one primary, the other standby). (Important: please see the UPDATE below.)
- The Service Console and VMkernel port groups shown in this design do not have a NIC teaming configuration because the UCS fault tolerance provides the high availability.
- The vHBAs do not support the UCS fault tolerant feature, and therefore a standard HBA multi-pathing configuration is still required in the operating system (ESX kernel) for Fibre Channel high availability.
- The VM port groups “vlan 10” and “vlan 20” do have a NIC teaming configuration for the purpose of vPort-ID based load sharing. This allows VMs to utilize both 10GE paths from the adapter (see the vSwitch sketch after this list).
- The Fabric Interconnect is a unified fabric switch and therefore plays the role of both an Ethernet access switch and a Fibre Channel access switch.
- The Fibre Channel configuration of the Fabric Interconnect operates in NPV mode (N_Port Virtualization) by default. Therefore, the Fabric Interconnect does not need to be managed like an individual Fibre Channel switch; rather, it connects to the SAN like an End Host.
- The FLOGI of each vHBA is forwarded upstream to the SAN switch port, which is configured as an F_Port with NPIV enabled (both Cisco and Brocade FC switches support NPIV). The FC ports on the Fabric Interconnect connect to the SAN as NP_Ports (N_Port Proxy). As a result, each vHBA is visible to, and zoned by, the SAN administrator as if it were connected directly to the SAN switch (see the SAN switch sketch after this list).
- The Ethernet side of the Fabric Interconnect can attach to the Data Center LAN either as an End Host or as an Ethernet switch. This design shows the Fabric Interconnect connecting to the LAN as an Ethernet switch.
- The Ethernet uplinks are standard 10GE and can connect to any standard 10GE LAN switch (Nexus 7000 or Catalyst 6500 are recommended).
- If the LAN switch is a Nexus 7000 or Catalyst 6500, the LAN administrator can use vPC (Nexus 7000) or VSS (Catalyst 6500) to allow the Fabric Interconnect to uplink to a redundant core with a single logical Port Channel. This provides full active/active uplink bandwidth from the Fabric Interconnect to the LAN with no links blocked by Spanning Tree (see the Nexus 7000 sketch after this list).
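For readers who prefer the CLI, here is a rough sketch of how a fault tolerant vNIC might be added to a Service Profile from the UCS Manager CLI. Treat this as illustrative only: the org, Service Profile, and MAC pool names are made up, and the exact syntax can vary between UCS Manager releases (the same settings are normally made in the UCS Manager GUI).

```
# Illustrative sketch only -- names are hypothetical, and syntax may differ by UCSM release.
UCS-A# scope org /
UCS-A /org # scope service-profile ESX-Host-1
# Create a vNIC using Fabric A as primary with failover to Fabric B ("a-b").
UCS-A /org/service-profile # create vnic eth0
UCS-A /org/service-profile/vnic # set fabric a-b
# Draw the MAC address from a pre-defined pool (pool name is hypothetical).
UCS-A /org/service-profile/vnic # set identity mac-pool ESX-MAC-Pool
UCS-A /org/service-profile/vnic # commit-buffer
```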
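On the ESX side, a minimal sketch of how the port groups in this design could be laid out from the classic ESX service console is shown below, assuming the four vNICs appear as vmnic0 through vmnic3 (the vmnic numbering, vSwitch layout, and VLAN IDs are illustrative, and in practice most of this would be done through vCenter). Note that the default vSwitch teaming policy, “route based on originating virtual port ID”, is the vPort-ID load sharing described above.

```
# Illustrative sketch only -- vmnic numbering, vSwitch layout, and VLAN IDs will vary.

# vSwitch0 (default): Service Console on its own fault tolerant vNIC.
# No ESX NIC teaming -- UCS Fabric Failover provides the redundancy.
esxcfg-vswitch -L vmnic0 vSwitch0

# vSwitch1: VMkernel on its own fault tolerant vNIC, again no teaming.
esxcfg-vswitch -a vSwitch1
esxcfg-vswitch -L vmnic1 vSwitch1
esxcfg-vswitch -A "VMkernel" vSwitch1

# vSwitch2: VM traffic, teamed across two vNICs for vPort-ID load sharing.
esxcfg-vswitch -a vSwitch2
esxcfg-vswitch -L vmnic2 vSwitch2
esxcfg-vswitch -L vmnic3 vSwitch2
esxcfg-vswitch -A "vlan 10" vSwitch2
esxcfg-vswitch -p "vlan 10" -v 10 vSwitch2
esxcfg-vswitch -A "vlan 20" vSwitch2
esxcfg-vswitch -p "vlan 20" -v 20 vSwitch2
```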
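On the upstream SAN switch, the only requirement is that NPIV be enabled so the Fabric Interconnect’s NP_Ports can log in multiple vHBAs, after which each vHBA is zoned like any directly attached HBA. A minimal sketch on a Cisco MDS running NX-OS might look like the following (the VSAN number, zone name, and pWWNs are made up):

```
! Illustrative sketch only -- VSAN, zone name, and pWWNs are hypothetical.
feature npiv
!
! Zone the vHBA pWWN to the storage array port, just as if the vHBA were
! a physical HBA plugged directly into this switch.
zone name ESX-Host-1_vHBA0 vsan 100
  member pwwn 20:00:00:25:b5:00:00:0a
  member pwwn 50:06:01:60:41:e0:aa:bb
zoneset name Fabric-A vsan 100
  member ESX-Host-1_vHBA0
zoneset activate name Fabric-A vsan 100
```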
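Finally, the single logical Port Channel from the Fabric Interconnect up to a redundant pair of Nexus 7000s is built with vPC. A rough sketch of the relevant configuration on one Nexus 7000 is below (the domain, port-channel, and interface numbers are made up, and a mirror-image configuration is required on the vPC peer switch):

```
! Illustrative sketch only -- numbering is hypothetical; mirror this on the vPC peer.
feature vpc
feature lacp

vpc domain 10
  peer-keepalive destination 10.1.1.2

! vPC peer-link between the two Nexus 7000s (member interfaces not shown)
interface port-channel 1
  switchport
  switchport mode trunk
  vpc peer-link

! Port channel facing UCS Fabric Interconnect A
interface port-channel 20
  switchport
  switchport mode trunk
  vpc 20

interface Ethernet1/1
  switchport
  switchport mode trunk
  channel-group 20 mode active
```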
IMPORTANT UPDATE: This design depicts the use of the Cisco UCS vNIC “Fabric Failover” feature (referred to in this post as “Fault Tolerant”) in conjunction with a hypervisor switch (vSwitch or N1K). This design combination will be supported in Cisco UCS Manager version 1.4 and above. If you are using Cisco UCS Manager version 1.3 or below, “Fabric Failover” used with a vNIC assigned to a hypervisor switch as depicted in this design is not supported. See this post for more information.
NOTE: The Cisco virtual adapter shown in this design is one of four possible adapter options in Cisco UCS. The other three adapter options are the Intel Oplin, the Emulex CNA, and the QLogic CNA.
CORRECTION: Contrary to what was indicated when this post was originally published, drivers and hardware qualification for the Cisco virtualized adapter highlighted in this design will be available in vSphere 4.0 (Update 1), but not for VI 3.5. To run VI 3.5 on Cisco UCS you can use the Intel Oplin adapter, or the Emulex/QLogic based adapters.