What does the virtual data center environment look like when you have a CNA (Converged Network Adapter) installed in an ESX 4.0 server running the Cisco Nexus 1000V virtual switch? I decided to draw it out in a diagram and came up with this:

Nexus 1000v with FCoE CNA with VMWare ESX 4.0

Some key observations of importance:

  • The version of ESX running here is ESX 4.0 (not yet released)
  • The Nexus 1000V software on the physical server acts like a line card of a modular switch, described as a VEM (virtual ethernet module)
  • The Nexus 1000V VEM is a direct replacement of the VMWare vSwitch function
  • The Nexus 1000V VSM (virtual supervisor module) acts like the supervisor engine of a modular switch
  • One Nexus 1000V VSM instance manages a single ESX cluster of up to 64 physical servers
  • The form factor of Nexus 1000V VSM can be a physical appliance or a virtual machine
  • The network administrator manages the Cisco Nexus 1000V (from the VSM) as a single distributed virtual switch for the entire ESX cluster
  • Each virtual machine connects to its own Virtual Ethernet (vEthernet) port on the Nexus 1000V, giving the network administrator traffic visibility and policy control on a per-virtual-machine basis. Virtual machines can now be managed like physical servers in terms of their network connectivity.
  • In this diagram VM1 connects to interface vEth1 on the Nexus 1000v and keeps that same vEth1 interface even when it is VMotioned to, or powered up on, a different physical server.
  • The VMKernel vmknic interface also connects to the Nexus 1000V on a Virtual Ethernet port, 'interface vEth 2' for example (not shown here). The same goes for the Service Console vswif interface; it too connects to the Nexus 1000V on a Virtual Ethernet port (not shown here).
  • The network administrator defines Port Profiles, which are a collection of network configuration settings such as the VLAN, any access lists, QoS policies, or traffic monitoring such as NetFlow or SPAN.
  • In this example, Port Profile BLUE might define access to VLAN 10 and enable NetFlow traffic monitoring (see the configuration sketch after this list).
  • Once enabled, Port Profiles are dynamically pushed to VMWare Virtual Center and show up as Port Groups that can be selected by the VMWare administrator.
  • The VMWare administrator creates virtual machines and assigns them to Port Groups as he/she has always done. By selecting VM1 to connect to Port Group BLUE in Virtual Center, the VMWare administrator has effectively connected VM1 to VLAN 10 and applied any other security, monitoring, or QoS policies defined in Port Profile BLUE by the network administrator.
  • The network administrator does not configure the virtual machine interfaces directly. Rather, all configuration settings for virtual machines are made with Port Profiles (configured globally), and it's the VMWare administrator who picks which virtual machines are attached to which Port Profile. Once this happens, the virtual machine is dynamically assigned a unique Virtual Ethernet interface (e.g. 'int vEth 10') and inherits the configuration settings from the chosen Port Profile (see the vEthernet sketch following this list).
  • The VMWare administrator no longer needs to manage multiple vSwitch configurations, and no longer needs to associate physical NICs to a vSwitch.
  • The VMWare administrator associates physical NICs to Nexus 1000v, allowing the network administrator to begin defining the network configuration and policies.
  • The Nexus 1000v VSM is for control plane functions only and does not participate in forwarding traffic.
  • If the Nexus 1000v VSM goes down it does not disrupt traffic between physical servers and virtual machines.
  • If an ESX host reboots or is added to the network, the Nexus 1000v VSM must be accessible.
  • The Nexus 1000v VSM can be deployed redundantly, with a standby VSM ready to take over in case of failure.
  • The ESX server has a 10GE connection to a physical lossless Ethernet switch that supports Data Center Ethernet (DCE) and Fibre Channel over Ethernet (FCoE), such as the Cisco Nexus 5000.
  • The Cisco Nexus 5000 provides lossless Ethernet services for the FCoE traffic received from the CNA. If the Nexus 5000 buffers reach a high threshold, an 802.3x pause signal for the CoS assigned to FCoE is sent to the CNA. This per-CoS pause tells the CNA to pause the FCoE traffic only, not the other TCP/IP traffic that is tolerant to loss. The default CoS setting for FCoE is CoS 3. When the Nexus 5000 buffers drain to a low threshold, a similar un-pause signal is sent to the CNA. The 802.3x per-CoS pause provides the same functionality as FC buffer credits, controlling throughput based on the network's ability to carry the traffic reliably.
  • The CNA and Cisco Nexus 5000 also support 802.1Qaz CoS-based bandwidth management, which allows the network administrator to provide bandwidth guarantees to different types of traffic. For example, the VMotion vmkernel traffic could be given a minimum guaranteed bandwidth of 10% (1GE of the 10GE link), and so on (a queuing-policy sketch follows this list).
  • Fibre Channel HBAs are not needed in the physical server, as the Fibre Channel connectivity is supplied by a Fibre Channel chip on the CNA from either Emulex or Qlogic (your choice).
  • Individual Ethernet NICs are not needed in the physical server, as the Ethernet connectivity is supplied by an Ethernet chip on the CNA from either Intel or Broadcom.
  • The single CNA appears to the ESX Hypervisor as two separate I/O cards: one Ethernet NIC and one Fibre Channel HBA.
  • The ESX Hypervisor uses a standard Emulex or Qlogic driver to operate what it sees as the Fibre Channel HBA.
  • The ESX Hypervisor uses a standard Intel ethernet driver to operate what it sees as the Ethernet NIC.
  • VMware's ESX 3.5 Update 2 Hardware Compatibility List contains support for the Emulex CNA and Qlogic CNA.
  • ESX 4.0 is not required to use CNAs and FCoE. FCoE can be deployed today with ESX 3.5 Update 2.
  • ESX 4.0 is required for Nexus 1000V.
  • CNAs are not required for Nexus 1000V.
  • The Nexus 1000v has no knowledge of FCoE and does not need to, because FCoE is of no concern to the Nexus 1000V deployment. To illustrate this point, consider that the Nexus 1000v operates no differently than it would in a traditional server with individual FC HBAs and individual Ethernet NICs. The Nexus 1000V uses the services of the Ethernet chip on the CNA and is unaware that the CNA is also providing FC services to the ESX host. Likewise, the virtual machine has no knowledge of FCoE.
  • The Menlo ASIC on the CNA guarantees 4Gbps of bandwidth to the FC chip. If the FC chip is not using the bandwidth, all 10GE bandwidth will be available to the Ethernet chip.
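
To make the Port Profile example more concrete, here is a minimal sketch of what a definition for Port Profile BLUE could look like on the VSM. This is an illustration only: the exact syntax varies by Nexus 1000V release, and the NetFlow monitor name (ESX-FLOWS) is a hypothetical placeholder that would be defined separately.

    port-profile BLUE
      ! access to VLAN 10, as described for Port Profile BLUE
      switchport mode access
      switchport access vlan 10
      ! hypothetical NetFlow monitor (defined elsewhere) applied to inbound traffic
      ip flow monitor ESX-FLOWS input
      ! publish the profile to Virtual Center as a Port Group
      vmware port-group
      no shutdown
      state enabled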
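
Once the VMWare administrator attaches VM1 to Port Group BLUE, the dynamically created Virtual Ethernet interface on the Nexus 1000V would look roughly like this (again a sketch; the interface number is assigned automatically):

    interface Vethernet10
      ! VM1 inherits the VLAN, ACL, QoS, and monitoring settings defined in BLUE
      inherit port-profile BLUE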
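
As a rough illustration of the 802.1Qaz CoS-based bandwidth management mentioned above, a Nexus 5000 queuing policy could look something like the sketch below. Treat this as an assumption-laden example: the policy name is a placeholder, the percentages are arbitrary, giving VMotion its own guarantee would additionally require classifying that traffic into its own class, and exact syntax differs between NX-OS releases.

    ! placeholder policy name; class-fcoe and class-default are system-defined classes
    policy-map type queuing dcb-bandwidth
      class type queuing class-fcoe
        ! minimum bandwidth guarantee for FCoE traffic
        bandwidth percent 40
      class type queuing class-default
        ! remaining guarantee for all other Ethernet traffic
        bandwidth percent 60
    system qos
      service-policy type queuing output dcb-bandwidth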

UPDATE: Cisco and VMWare have jointly developed a white paper comparing the VMWare vSwitch to the Cisco Nexus 1000V.

Cheers,
Brad