My most recent post, Simple Example of Network Interface Virtualization, generated enough interest and curiosity to warrant a follow-up post showing some simple use cases for NIV.

NIV takes a single physical adapter and presents multiple virtual adapters to the server and the network as if they were physical adapters. Once the server and network see multiple adapters, you can pretty much do with them as you wish. However, below we will look at what are likely to be some of the more common use cases.

NIV use case #1: Hypervisor switch uplinks

NIV hypervisor switch uplinks

In this example, the hypervisor scans the PCI bus and sees each virtual adapter as if it were a physical adapter. The server administrator can then assign the virtual adapters as uplinks for a vSwitch, vNetwork Distributed Switch, or Nexus 1000V.
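
If you manage the host programmatically, assigning one of these virtual adapters as an uplink looks no different than it would for a real physical NIC. Below is a minimal sketch using pyVmomi against the vSphere API; the host name, credentials, vSwitch name, and vmnic numbering are hypothetical and only for illustration.

```python
# Sketch: attach a Palo virtual adapter (which ESX sees as just another vmnic)
# as an uplink of an existing standard vSwitch via pyVmomi.
# Host name, credentials, vSwitch name, and vmnic numbers are hypothetical.
from pyVim.connect import SmartConnectNoSSL
from pyVmomi import vim

si = SmartConnectNoSSL(host="esx01.example.com", user="root", pwd="secret")
content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(content.rootFolder, [vim.HostSystem], True)
host = view.view[0]
net_sys = host.configManager.networkSystem

# Each NIV virtual adapter found in the PCI scan shows up as a physical NIC (vmnicX)
for pnic in net_sys.networkInfo.pnic:
    print(pnic.device, pnic.mac)

# Add vmnic2 (one of the virtual adapters) as an uplink of vSwitch0
vsw = next(v for v in net_sys.networkInfo.vswitch if v.name == "vSwitch0")
spec = vsw.spec
nics = list(getattr(spec.bridge, "nicDevice", []) or [])
spec.bridge = vim.host.VirtualSwitch.BondBridge(nicDevice=nics + ["vmnic2"])
net_sys.UpdateVirtualSwitch(vswitchName="vSwitch0", spec=spec)
```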


NIV use case #2: Hypervisor Bypass using VMDirectPath I/O

NIV hypervisor bypass

In this example the hypervisor has scanned the PCI bus and sees each virtual adapter as if it were a physical adapter. The server administrator then enables VMDirectPath I/O and chooses virtual adapters on Palo to be used directly by a virtual machine. This configuration results in bare-metal-like I/O and low latency for the VM's traffic; however, there is currently a trade-off in that vMotion will not be possible (yet). This is because the hypervisor no longer “owns” the virtual hardware presented to the virtual machine as a network adapter, and as a result the I/O state of the virtual machine cannot be transferred to another hypervisor. There are ways to solve this problem, but they are not available yet.
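
For those curious what the enablement step can look like outside of the vSphere Client, here is a rough sketch using pyVmomi to flag one of the virtual adapters for passthrough. The PCI address is hypothetical, and the host needs a reboot before the device can actually be handed to a VM.

```python
# Sketch: mark one of the Palo virtual adapters for VMDirectPath I/O
# (PCI passthrough) via pyVmomi. The PCI address is hypothetical, and the
# host must be rebooted before the device can be assigned to a VM.
from pyVim.connect import SmartConnectNoSSL
from pyVmomi import vim

si = SmartConnectNoSSL(host="esx01.example.com", user="root", pwd="secret")
content = si.RetrieveContent()
host = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.HostSystem], True).view[0]
pci_sys = host.configManager.pciPassthruSystem

# List devices capable of passthrough and whether it is currently enabled
for info in pci_sys.pciPassthruInfo:
    if info.passthruCapable:
        print(info.id, "enabled:", info.passthruEnabled)

# Enable passthrough for the chosen virtual adapter by its PCI address
cfg = vim.host.PciPassthruConfig(id="0000:0b:00.0", passthruEnabled=True)
pci_sys.UpdatePassthruConfig([cfg])
```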


NIV use case #3: Hypervisor pass-through switching

NIV hypervisor pass through switching

In this case the hypervisor scans the PCI bus and sees each virtual adapter as if it were a physical adapter. The server administrator then assigns the virtual adapters to a special hypervisor switch that doesn’t actually do any switching; rather, it just passes I/O through from a VM to an uplink adapter dedicated explicitly to that VM. This configuration reduces the server CPU cycles required for VM I/O, improves I/O throughput, and reduces latency, though not to the same degree as VMDirectPath I/O. Because the hypervisor is back in the I/O path, now in a more limited pass-through-only role, vMotion is still possible: the hypervisor presents the VM its virtual hardware for networking and can move the VM’s I/O state to another host.


NIV use case #4: Hypervisor Bypass & Hypervisor switching combination

NIV hypervisor bypass and switching

In this case I am showing that there is flexibility in how you use NIV in a virtual environment. The server administrator has decided that one of the VMs is very I/O intensive and needs full-blown VMDirectPath I/O, while the other, more typical VMs are just fine with standard hypervisor switching.


NIV use case #5: Non-virtualized OS / Apps running on Bare Metal

NIV bare metal

In this example I am showing that NIV is not just for VMware, because NIV operates at the hardware level in the server. The server administrator is running, for example, Oracle or Microsoft Exchange on bare metal and is using the multiple virtual adapters to satisfy these applications’ requirements for multiple dedicated adapters. One example would be adapters dedicated to the Oracle RAC heartbeat, public, and private networks.
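
To make the bare metal point concrete: a Linux host, for example, simply sees each virtual adapter as another NIC. A trivial sketch (interface names will vary with the OS and driver):

```python
# Sketch: on a bare-metal Linux host, each NIV virtual adapter appears as
# just another network interface. This lists them from /sys/class/net.
import os

for nic in sorted(os.listdir("/sys/class/net")):
    addr_file = os.path.join("/sys/class/net", nic, "address")
    if os.path.exists(addr_file):
        with open(addr_file) as f:
            print(nic, f.read().strip())
```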

Cheers,
Brad