Simple use cases for Network Interface Virtualization

My most recent post, Simple Example of Network Interface Virtualization, generated enough interest and curiosity to warrant a follow-up post showing simple use cases for NIV.

NIV takes a single physical adapter and presents multiple virtual adapters to the server and the network as if they were physical adapters.  Once the server and network see multiple adapters, you can do with them pretty much as you wish.  Below, however, we will look at some of the more common use cases.
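To make that concrete, here is a minimal sketch (in Python, assuming a generic Linux host rather than ESX, and purely for illustration) of what "the server sees multiple adapters" looks like from the operating system's point of view: every virtual adapter shows up as its own PCI function with its own interface name and MAC address, indistinguishable from a discrete physical NIC.

```python
# Minimal sketch: list the network adapters the OS sees, via sysfs.
# Assumes a Linux host; a Palo vNIC would appear here exactly like a
# discrete physical NIC, with its own PCI address and MAC.
import os

SYSFS_NET = "/sys/class/net"

for name in sorted(os.listdir(SYSFS_NET)):
    dev_link = os.path.join(SYSFS_NET, name, "device")
    if not os.path.exists(dev_link):
        continue  # skip loopback and other software-only interfaces
    pci_addr = os.path.basename(os.path.realpath(dev_link))
    with open(os.path.join(SYSFS_NET, name, "address")) as f:
        mac = f.read().strip()
    print(f"{name}: PCI {pci_addr}  MAC {mac}")
```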

NIV use case #1: Presenting multiple adapters to a hypervisor switch to be used as uplinks

In this example the hypervisor scans the PCI bus and sees each virtual adapter as if it were a physical adapter.  The server administrator can then assign the virtual adapters to be used as uplinks for a vSwitch, vNetwork Distributed Switch, or Nexus 1000V.
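For concreteness, here is a rough sketch of that uplink assignment using the vSphere API via the pyVmomi Python bindings (which post-date this post and are used here only as an illustration).  The host address, credentials, and the "vmnic2" device name are placeholders, not anything specific to Palo.

```python
# Rough sketch: list the adapters ESX sees and add one as a vSwitch uplink.
# Hostname, credentials and "vmnic2" are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="esx-host.example.com", user="root", pwd="secret",
                  sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()
# Assumes a single datacenter and a single host, for brevity.
host = content.rootFolder.childEntity[0].hostFolder.childEntity[0].host[0]
netsys = host.configManager.networkSystem

# Each Palo vNIC shows up here just like a physical NIC (vmnic0, vmnic1, ...).
for pnic in netsys.networkInfo.pnic:
    print(pnic.device, pnic.mac, pnic.pci)

# Attach one of the virtual adapters as an uplink on vSwitch0.
vsw = next(v for v in netsys.networkInfo.vswitch if v.name == "vSwitch0")
spec = vsw.spec
existing = list(spec.bridge.nicDevice) if spec.bridge else []
spec.bridge = vim.host.VirtualSwitch.BondBridge(nicDevice=existing + ["vmnic2"])
netsys.UpdateVirtualSwitch(vswitchName="vSwitch0", spec=spec)
Disconnect(si)
```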

NIV use case #2: Hypervisor Bypass using VMDirectPath I/O

In this example the hypervisor has scanned the PCI bus and sees each virtual adapter as if it were a physical adapter.  The server administrator then enables VMDirectPath I/O and chooses virtual adapters on Palo to be used by a virtual machine directly.  This configuration gives the VM near bare-metal I/O throughput and low latency; however, there is currently a trade-off in that vMotion is not (yet) possible.  This is because the hypervisor no longer “owns” the virtual hardware presented to the virtual machine as a network adapter, and as a result the I/O state of the virtual machine cannot be transferred to another hypervisor.  There are ways to solve this problem, but they are not available yet.
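Continuing with the host object from the sketch in use case #1 above, the host-side half of this might look roughly like the following (again pyVmomi, again with a hypothetical PCI address).  Once the device is enabled for passthrough, it is assigned to the VM as a PCI device rather than as a regular vNIC on a virtual switch.

```python
# Rough sketch: mark one of the virtual adapters for PCI passthrough
# (VMDirectPath I/O).  The PCI address "0000:0b:00.0" is hypothetical.
passthru_sys = host.configManager.pciPassthruSystem

# Which PCI functions (including the Palo vNICs) can be passed through?
for info in host.config.pciPassthruInfo:
    print(info.id, "capable:", info.passthruCapable,
          "enabled:", info.passthruEnabled)

# Enable passthrough for one virtual adapter; this typically takes effect
# after a host reboot, and the device is then given directly to a VM.
passthru_sys.UpdatePassthruConfig(
    [vim.host.PciPassthruConfig(id="0000:0b:00.0", passthruEnabled=True)])
```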

NIV use case #3: Hypervisor pass through switching

In this case the hypervisor scans the PCI bus and sees each virtual adapter as if it were a physical adapter.  The server administrator then assigns the virtual adapters to a special hypervisor switch that doesn’t actually do any switching; rather, it simply passes I/O through from a VM to an uplink adapter dedicated explicitly to that VM.  This configuration reduces the server CPU cycles required for VM I/O, improves I/O throughput, and reduces latency, but not to the same degree that VMDirectPath I/O does.  By putting the hypervisor back in the I/O path, now in a more limited pass-through-only role, we can achieve vMotion because the hypervisor is presenting the VM its virtual hardware for networking and is able to move the VM’s I/O state to another host.

NIV use case #4: Hypervisor Bypass & Hypervisor switching combination

In this case I am showing that there is flexibility in how you use NIV in a virtual environment.  The server administrator has decided that one of the VMs is especially I/O intensive and needs full-blown VMDirectPath I/O, while the other, more typical VMs are just fine with standard hypervisor switching.

NIV use case #5: Non-virtualized OS / Apps running on Bare Metal

In this example I am showing that NIV is not just for VMware, because NIV operates at the hardware level in the server.  The server administrator is running, for example, Oracle or Microsoft Exchange on bare metal and is using the multiple virtual adapters to satisfy these applications’ requirements for multiple adapters.  One example would be adapters dedicated to the Oracle RAC heartbeat, public, and private networks.

Can you think of any other use cases for NIV? Submit your thoughts in the comments section.

Cheers, Brad

A simple example of Network Interface Virtualization

I’m seeing some confusion in the blogosphere about how Cisco’s implementation of Network Interface Virtualization (NIV) really works, so perhaps a very simple example is needed, and that is the intent of this post.  My previous posts about NIV with Cisco’s Palo adapter were focused on the big picture and the complete solution, such as this post about NIV with the VMware vSwitch, and this post about NIV with the Nexus 1000V.  Perhaps in all of that grand detail some of the fundamental concepts were glossed over, so I am revisiting the simple concept of how multiple virtual adapters can be treated as if they were multiple physical adapters to provide true Network Interface Virtualization (NIV), or as some others are calling it, “Virtual I/O”.

The main confusion I want to address is the belief that VLAN tagging must be implemented on the virtual adapters to uniquely differentiate each virtual adapter to the upstream network switch.  In this simple example I will show that this belief is simply not true, and that each virtual adapter does not need to be configured any differently than a physical adapter.

I will start off with a server that has (4) physical adapters: (2) Ethernet NICs and (2) Fibre Channel HBAs.  Each adapter has its own cable that connects to a unique physical port on a switch.  The network each adapter connects to (VLAN or VSAN) is determined by the configuration settings of the physical switch port.  The adapters themselves are not doing any VLAN or VSAN tagging.  The adapter presents itself to the server through the PCIe bus slot it is inserted into.  Furthermore, the adapter presents itself to the network via the cable that connects it.

Before NIV

With the Cisco implementation of NIV using the “Palo” adapter I can maintain the exact same configuration shown above while consolidating adapters, cables, and switches.  A single 10GE adapter (Palo) will present the same (4) adapters to the server using PCIe SR-IOV based functions.  Additionally, a single 10GE adapter (Palo) will present the same (4) adapters to the network switch using a unique NIV tag acting as the new virtual “cable”.
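Palo’s virtual adapters are defined from UCS Manager rather than from the host, but the SR-IOV idea itself can be seen from the OS side on any generic SR-IOV adapter under Linux: a physical function spawns virtual functions that then appear as ordinary PCI network devices.  A minimal sketch, with “eth0” as a placeholder for the physical function:

```python
# Minimal sketch: enable and list SR-IOV virtual functions on a generic
# Linux host.  "eth0" is a placeholder physical function; requires root.
import glob, os

pf = "/sys/class/net/eth0/device"

with open(os.path.join(pf, "sriov_totalvfs")) as f:
    print("VFs supported:", f.read().strip())

# Ask the driver to create 4 virtual functions.
with open(os.path.join(pf, "sriov_numvfs"), "w") as f:
    f.write("4")

# Each VF is now its own PCI function, seen by the OS as a separate adapter.
for vf in sorted(glob.glob(os.path.join(pf, "virtfn*"))):
    print(os.path.basename(vf), "->", os.path.basename(os.path.realpath(vf)))
```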

After NIV

In the “before” picture no VLAN tagging was used to connect Adapter #1 to VLAN 10.  The same holds true in the “after” graphic above, where each vNIC can be configured exactly like the physical NIC, with no VLAN tagging.  Each vNIC and vHBA is given a cable, or more specifically a virtual cable: its NIV tag.  That NIV tag is connected to a virtual switch port on the unified fabric switch.  The virtual switch port can be configured the same way as the physical switch port in the “before” picture, with VLAN and VSAN assignments that determine which network each virtual adapter belongs to.
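To restate that in a purely illustrative way (everything below is hypothetical, not any real API): the vNIC itself carries no VLAN or VSAN configuration at all; the NIV tag is the virtual cable, and the network assignment lives on the virtual switch port that cable plugs into, just as it does for a physical cable and port.

```python
# Hypothetical model of the relationship described above -- not a real API.
from dataclasses import dataclass
from typing import Optional

@dataclass
class VirtualAdapter:
    name: str
    niv_tag: int                 # the virtual "cable"; no VLAN/VSAN here

@dataclass
class VirtualSwitchPort:
    niv_tag: int                 # which virtual cable plugs into this port
    vlan: Optional[int] = None   # access VLAN configured on the port
    vsan: Optional[int] = None   # or a VSAN, for a vHBA

adapters = [VirtualAdapter("vNIC1", 101),
            VirtualAdapter("vNIC2", 102),
            VirtualAdapter("vHBA1", 201)]

ports = {101: VirtualSwitchPort(101, vlan=10),
         102: VirtualSwitchPort(102, vlan=20),
         201: VirtualSwitchPort(201, vsan=1)}

for a in adapters:
    p = ports[a.niv_tag]
    network = f"VLAN {p.vlan}" if p.vlan is not None else f"VSAN {p.vsan}"
    print(f"{a.name} --(NIV tag {a.niv_tag})--> virtual switch port in {network}")
```

The point is simply that the place where VLAN 10 gets configured does not move: it stays on a switch port, just a virtual one.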

In summary, I did not need to make radical changes to the server or adapter configurations in order to reap the benefits of infrastructure consolidation.  This is a result of providing true Network Interface Virtualization (aka “Virtual I/O”) from both the server perspective with SR-IOV, and the network perspective with NIV tagging.

I hope this simple example makes the fundamental concepts of NIV a little clearer and easier to understand.

Cheers, Brad.

UPDATE: See the follow-up post, Simple use cases for Network Interface Virtualization.