A simple example of Network Interface Virtualization

Filed in FCoE, NIV by Brad Hedlund on October 23, 2009

I’m seeing some confusion in the blogosphere about how Cisco’s implementation of Network Interface Virtualization (NIV) really works, so perhaps a very simple example is needed; that is the intent of this post. My previous posts about NIV with Cisco’s Palo adapter focused on the big picture and the complete solution, such as this post about NIV with the VMware vSwitch, and this post about NIV with the Nexus 1000V. Perhaps in all of that detail some of the fundamental concepts were glossed over, so I am revisiting the simple concept of how multiple virtual adapters can be treated as if they were multiple physical adapters to provide true Network Interface Virtualization (NIV), or as some are calling it, “Virtual I/O”.

The main confusion I want to address is the belief that VLAN tagging must be implemented on the virtual adapters to uniquely differentiate each virtual adapter to the upstream network switch. In this simple example I will show that this belief is not true, and that each virtual adapter does not need to be configured any differently than a physical adapter.

I will start off with a server that has (4) physical adapters: (2) Ethernet NICs and (2) Fibre Channel HBAs. Each adapter has its own cable that connects to a unique physical port on a switch. The network each adapter connects to (VLAN or VSAN) is determined by the configuration settings of the physical switch port; the adapters themselves are not doing any VLAN or VSAN tagging. The adapter presents itself to the server through the PCIe bus slot it is inserted into. Furthermore, the adapter presents itself to the network via the cable that connects it.

Before NIV
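To make the “before” picture concrete, here is a minimal sketch of the switch-side configuration in NX-OS style. The interface names and the VLAN/VSAN numbers are hypothetical, chosen only for illustration:

    ! Ethernet switch: Adapter #1 plugs into an ordinary access port;
    ! the switch port, not the adapter, determines the VLAN
    interface Ethernet1/1
      switchport mode access
      switchport access vlan 10

    ! Fibre Channel switch: the port connecting HBA #1 is assigned to its VSAN
    vsan database
      vsan 20 interface fc1/1

Nothing here asks the adapter to tag its own traffic; the network membership decision lives entirely on the switch port.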

With the Cisco implementation of NIV using the “Palo” adapter I can maintain the exact same configuration shown above while consolidating adapters, cables, and switches. A single 10GE adapter (Palo) will present the same (4) adapters to the server using PCIe SR-IOV based functions. Additionally, that single 10GE adapter will present the same (4) adapters to the network switch, with a unique NIV tag for each acting as the new virtual “cable”.

After NIV
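On the server side of the picture above, the single physical device enumerates as multiple independent PCIe functions, so the operating system simply sees four separate adapters. Purely as a generic illustration of that idea (this is standard Linux SR-IOV, not the Palo-specific mechanism, and the PCIe address is hypothetical):

    # Generic SR-IOV sketch (run as root): ask an SR-IOV capable adapter
    # to expose 4 virtual functions; each one appears to the OS as its
    # own adapter, just like a separate physical card would
    echo 4 > /sys/bus/pci/devices/0000:05:00.0/sriov_numvfs
    lspci | grep Ethernet    # the new functions enumerate as separate PCIe devices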

In the “before” picture, no VLAN tagging was used to connect Adapter #1 to VLAN 10. The same holds true in the “after” graphic above, where each vNIC can be configured exactly like the physical NIC, with no VLAN tagging. Each vNIC and vHBA is given a cable; more specifically, a virtual cable that is its NIV tag. That NIV tag is connected to a virtual switch port on the unified fabric switch. The virtual switch port can be configured the same way as the physical switch port in the “before” picture, with VLAN and VSAN assignments that determine which network each virtual adapter belongs to, as sketched below.
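Here is what that might look like, again as a hypothetical NX-OS-style sketch carrying forward the numbers from the “before” example. Each virtual interface below corresponds to one NIV tag (one virtual cable) and is configured just like the physical ports were:

    ! Virtual Ethernet port for vNIC #1; the NIV tag identifies the vNIC,
    ! so no VLAN tagging is needed from the adapter itself
    interface vethernet1
      switchport mode access
      switchport access vlan 10

    ! Virtual Fibre Channel port for vHBA #1, assigned to its VSAN
    vsan database
      vsan 20 interface vfc1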

In summary, I did not need to make radical changes to the server or adapter configurations in order to reap the benefits of infrastructure consolidation. This is the result of providing true Network Interface Virtualization (aka “Virtual I/O”) from both the server perspective, with SR-IOV, and the network perspective, with NIV tagging.

I hope this simple example makes the fundamental concepts of NIV a little clearer and easier to understand.

Cheers, Brad.

UPDATE: See the follow-up post, Simple use cases for Network Interface Virtualization.

About the Author

Brad Hedlund is an Engineering Architect with the CTO office of VMware’s Networking and Security Business Unit (NSBU), focused on network & security virtualization (NSX) and the software-defined data center. Brad’s background in data center networking began in the mid-1990s and spans roles as an IT customer, a systems integrator, architecture and technical strategy roles at Cisco and Dell, and speaking at industry conferences. CCIE Emeritus #5530.

Comments (2)


  1. Eng Wee says:

    Hi Brad,

    If the server in your diagram:
    – is an ESX host
    – has a standard vSwitch in it
    – has 5 VMs in five different VLANs load sharing vNIC1 and vNIC2 in your diagram

    How will this be represented in the diagram?

    My guess is that vNIC1 and vNIC2 will need to be trunked to carry the multiple VLANs.
    But in this case, how many vEths will there be in the fabric? If there are still two vEths in the fabric, then we do not have a one-VM-to-one-vEth mapping. My understanding of VN-Link in hardware is that we want each VM to be “directly” connected to the fabric.

    Your articles are great and I learn a lot from them.

    Thanks,
    Eng Wee
