Simple use cases for Network Interface Virtualization

Filed in FCoE, Nexus, NIV on October 23, 2009

My most recent post, Simple Example of Network Interface Virtualization, generated enough interest and curiosity to warrant a follow-up post showing simple use cases for NIV.

NIV takes a single physical adapter and presents multiple virtual adapters to the server and the network as if they were physical adapters.  Once the server and the network see multiple adapters, you can do with them pretty much as you wish.  Below we will look at some of the more common use cases.
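To make the model concrete, here is a minimal Python sketch of the idea, assuming nothing about any vendor API: one physical adapter carves out several virtual adapters, each with its own MAC address, so the OS and the network both treat them as independent NICs. All class and attribute names here are illustrative only.

```python
# Toy model of NIV: one physical adapter presenting many virtual adapters.
# All names are illustrative; this is not any vendor's API.
from dataclasses import dataclass, field

@dataclass
class VirtualAdapter:
    name: str   # what the OS sees on the PCI bus, e.g. "vnic0"
    mac: str    # unique MAC, so the network sees a distinct NIC

@dataclass
class PhysicalAdapter:
    model: str
    vnics: list = field(default_factory=list)

    def create_vnic(self, name: str, mac: str) -> VirtualAdapter:
        """Carve another virtual adapter out of the single physical port."""
        vnic = VirtualAdapter(name, mac)
        self.vnics.append(vnic)
        return vnic

palo = PhysicalAdapter("Palo")
palo.create_vnic("vnic0", "00:25:b5:00:00:01")
palo.create_vnic("vnic1", "00:25:b5:00:00:02")
print([v.name for v in palo.vnics])  # the OS would see two "physical" NICs
```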

NIV use case #1: Presenting multiple adapters to a hypervisor switch to be used as uplinks

In this example the hypervisor scans the PCI bus and sees each virtual adapter as if it were a physical adapter.  The server administrator can then assign the virtual adapters as uplinks for a vSwitch, vNetwork Distributed Switch, or Nexus 1000V, as in the sketch below.
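A toy illustration of that assignment step (HypervisorSwitch and the vmnic names are made up; this is not the vSphere API):

```python
# Toy hypervisor switch: it cannot tell a virtual adapter from a physical one.
class HypervisorSwitch:
    def __init__(self, name: str):
        self.name = name
        self.uplinks: list[str] = []

    def add_uplink(self, adapter_name: str) -> None:
        self.uplinks.append(adapter_name)

# Adapters found on the PCI scan -- all virtual, but they all look physical.
discovered = ["vmnic2", "vmnic3"]

vswitch = HypervisorSwitch("vSwitch0")
for nic in discovered:
    vswitch.add_uplink(nic)
print(f"{vswitch.name} uplinks: {vswitch.uplinks}")
```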


NIV use case #2: Hypervisor Bypass using VMDirectPath I/O

In this example the hypervisor has scanned the PCI bus and sees each virtual adapter as if it were a physical adapter.  The server administrator then enables VMDirectPath I/O and chooses virtual adapters on Palo to be used directly by a virtual machine.  This configuration results in bare-metal-like throughput and low latency for the VM's I/O; however, there is currently a trade-off in that vMotion is not possible (yet).  Because the hypervisor no longer “owns” the virtual hardware presented to the virtual machine as a network adapter, it cannot transfer the I/O state of the virtual machine to another hypervisor.  There are ways to solve this problem, but they are not available yet.
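The ownership point is the crux, and a few lines of toy Python make the reasoning concrete (a sketch of the logic, not hypervisor code):

```python
# Toy model: vMotion requires the hypervisor to serialize all device state.
class Device:
    def __init__(self, name: str, hypervisor_owned: bool):
        self.name = name
        self.hypervisor_owned = hypervisor_owned

def vmotion_possible(devices: list[Device]) -> bool:
    # A passed-through device keeps its state in real hardware,
    # which the hypervisor cannot snapshot and replay on another host.
    return all(dev.hypervisor_owned for dev in devices)

emulated_vm = [Device("vmxnet", hypervisor_owned=True)]
passthru_vm = [Device("palo-vnic", hypervisor_owned=False)]
print(vmotion_possible(emulated_vm))  # True
print(vmotion_possible(passthru_vm))  # False -- the VMDirectPath case
```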


NIV use case #3: Hypervisor pass-through switching

In this case the hypervisor scans the PCI bus and sees each virtual adapter as if it were a physical adapter.  The server administrator then assigns the virtual adapters to a special hypervisor switch that doesn't actually do any switching; rather, it just passes I/O through from a VM to an uplink adapter dedicated to that VM.  This configuration reduces the server CPU cycles required for VM I/O, improves I/O throughput, and reduces latency, though not to the same degree as VMDirectPath I/O. By putting the hypervisor back in the I/O path, but now with a more limited pass-through-only role, we can achieve vMotion, because the hypervisor is presenting the VM its virtual hardware for networking and is able to move the VM's I/O state to another host.
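A minimal sketch of that pass-through role, assuming a strict 1:1 VM-to-uplink mapping (all names illustrative):

```python
# Toy pass-through "switch": no MAC learning, no lookup, just a 1:1 map.
class PassThroughSwitch:
    def __init__(self):
        self.vm_to_uplink: dict[str, str] = {}

    def attach(self, vm: str, uplink: str) -> None:
        self.vm_to_uplink[vm] = uplink  # each VM gets its own dedicated uplink

    def forward(self, vm: str, frame: bytes) -> str:
        # No switching decision is made on the frame itself: it always
        # exits the VM's own uplink, so the upstream switch decides.
        return self.vm_to_uplink[vm]

pts = PassThroughSwitch()
pts.attach("web-vm", "palo-vnic3")
print(pts.forward("web-vm", b"\x00" * 64))  # -> palo-vnic3
```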


NIV use case #4: Hypervisor Bypass & Hypervisor switching combination

In this case I am showing that there is flexibility in how you use NIV in a virtual environment.  The server administrator has decided that one of the VMs is very I/O intensive and needs full-blown VMDirectPath I/O, while the other, more typical VMs are just fine with standard hypervisor switching.
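In toy form this is just a per-VM policy choice (hypothetical VM names):

```python
# Toy per-VM I/O policy: mix VMDirectPath and hypervisor switching freely.
io_policy = {
    "database-vm": "vmdirectpath",    # I/O intensive: bypass the hypervisor
    "web-vm": "hypervisor-switch",    # typical workload: keep vMotion simple
    "dev-vm": "hypervisor-switch",
}
for vm, mode in io_policy.items():
    print(f"{vm}: {mode}")
```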


NIV use case #5: Non-virtualized OS / apps running on Bare Metal

In this example I am showing that NIV is not just for VMware, because NIV operates at the hardware level in the server.  The server administrator is running, for example, Oracle or Microsoft Exchange on bare metal and is using the multiple virtual adapters to satisfy these applications' requirements for multiple dedicated adapters.  One example would be adapters dedicated to the Oracle RAC heartbeat, public, and private networks.
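As a rough illustration of what the bare-metal OS would see (interface names and subnets are made up):

```python
# Toy view from a bare-metal OS: three Palo vNICs dedicated to Oracle RAC roles.
rac_networks = {
    "eth1": {"role": "public",    "subnet": "10.1.1.0/24"},
    "eth2": {"role": "private",   "subnet": "192.168.10.0/24"},
    "eth3": {"role": "heartbeat", "subnet": "192.168.20.0/24"},
}
for nic, cfg in rac_networks.items():
    print(f"{nic}: RAC {cfg['role']} network on {cfg['subnet']}")
```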

Can you think of any other use cases for NIV? Submit your thoughts in the comments section.

Cheers, Brad

About the Author

Brad Hedlund is an Engineering Architect with the CTO office of VMware's Networking and Security Business Unit (NSBU), focused on network & security virtualization (NSX) and the software-defined data center. Brad's background in data center networking dates to the mid-1990s and spans a variety of roles: IT customer, systems integrator, architecture and technical strategy positions at Cisco and Dell, and speaker at industry conferences. CCIE Emeritus #5530.

Comments (8)


  1. Cristiano says:

    Hello Brad,

    We just bought a bunch of UCS blades with the Palo adapter and are awaiting delivery in March.
    Can you recommend some technical reading about the Palo adapter? I'm specifically interested in how failover in the case of an FI/FEX failure is handled, and how the separation between Fabric A/B for storage is achieved.

    Best Regards

    Cristiano

  2. Eng Wee says:

    Hi Brad,

    I have a few questions.

    (1) Can I confirm that NIV use case #1 is known as VN-Link in software?

    (2) For NIV use case #1, from the Palo to the Fabric Interconnect, do you still see a VN-Tag? I read the doc and it seems to say that a VN-Tag is used only when you do VN-Link in hardware (NIV use case #3). I just want to make sure I get the concept right. Is there any show command in NX-OS on the Fabric Interconnect where I can see the VN-Tag?

    (3) NIV use case #3 is also known as VN-Link in hardware. In this case, the VM is in effect connected directly to the Fabric Interconnect. This also means that there is a one-to-one mapping between a VM and a vNIC on the Palo. If I have 30 VMs, then I will need 30 vNICs on the Palo. Is my understanding correct?

    I like your post; it helps me understand much more than just reading the UCS configuration document does. When you virtualise, you need to be able to visualise in order to understand.

    Thanks again!
    Eng Wee

    • Brad Hedlund says:

      Eng,

      First let's start with a simple baseline definition of “VN-Link”: providing a means to connect a Virtual Machine directly to the Cisco network, resulting in a 1:1 relationship between a VM's vnic and a virtual Ethernet port on the Cisco network, with persistent, policy-driven network & security properties.

      (1) Yes, in case #1 the Virtual Machine’s vnic is connecting directly to the Nexus 1000V Cisco software switch. So this could be described as VN-Link in Software.

      (2) Yes, as shown in the diagram in case #1, the virtual Ethernet adapters on the Palo will each use a unique VN-Tag as the virtual cable connecting it to its own virtual Ethernet port on the Fabric Interconnect. You can see the virtual Ethernet ports on the Fabric Interconnect with simple “show interface” commands. The VN-Tag numbers used are dynamically negotiated and managed by the Fabric Interconnect. You can dig with CLI commands to find the VN-Tag number, but it doesn't really gain you anything to worry about it or attempt to keep track.

      (3) Yes, that is correct. Please note that the (30) Palo vNICs, one for each VM, will be created for you dynamically, thanks to the Fabric Interconnect registering with vCenter as a Distributed Virtual Switch (DVS). When the server admin provisions a new VM and powers it on, the Fabric Interconnect will dynamically create the needed vNICs on the Palo. When the VM is moved to another server, the Fabric Interconnect will likewise remove the dynamic Palo vNIC from the source machine and recreate it on the destination machine, anchored to the same persistent virtual Ethernet port on the Fabric Interconnect. From the network's perspective, it's as if the VM never really moved. A toy sketch of that lifecycle follows.
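      A toy sketch of that dynamic vNIC lifecycle (purely illustrative; not the UCS Manager or vCenter API):

      ```python
      # Toy lifecycle: the Fabric Interconnect creates/moves dynamic vNICs per
      # VM, while the VM's virtual Ethernet port (vEth) on the FI stays put.
      class FabricInterconnect:
          def __init__(self):
              self.veth_of_vm: dict[str, str] = {}      # persistent vEth per VM
              self.host_vnics: dict[str, set[str]] = {}  # dynamic vNICs per host

          def power_on(self, vm: str, host: str) -> None:
              self.veth_of_vm.setdefault(vm, f"vEth{len(self.veth_of_vm) + 1}")
              self.host_vnics.setdefault(host, set()).add(vm)

          def vmotion(self, vm: str, src: str, dst: str) -> None:
              self.host_vnics[src].discard(vm)                 # remove dynamic vNIC
              self.host_vnics.setdefault(dst, set()).add(vm)   # recreate on dest
              # self.veth_of_vm[vm] is untouched: the policy follows the VM.

      fi = FabricInterconnect()
      fi.power_on("vm1", "esx-host-a")
      fi.vmotion("vm1", "esx-host-a", "esx-host-b")
      print(fi.veth_of_vm["vm1"])  # same vEth before and after the move
      ```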

      Excellent questions!!

      Cheers,
      Brad

  3. Arsalan says:

    Hello Brad, two queries on NIV use case #2:

    - If there is a 3rd VM in the picture, will I see a vEth 3 on the unified fabric?

    - If this scenario is used with a UCS B-Series:
    1- The Palo adapter will put a tag on the packet.
    2- But when the packet is received by the chassis FEX 2148, it will put its own VNTAG on it.

    So in this case will we have a (Palo tag) inside a (VNTAG) tag? Because all the material suggests that the Palo tag will make its own vEth interface on the 6100.

    Thanks

    • Brad Hedlund says:

      Arsalan,
      No, the chassis FEX will only apply a VNTAG if it receives a packet without one. There will always be exactly one VNTAG. If you were using the standard Intel Oplin adapter, which does not apply VNTAGs, then the chassis FEX would assign one to the packet.
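      In toy Python terms, the ingress rule is roughly this (a sketch of the logic, not actual FEX firmware):

      ```python
      # Toy FEX ingress rule: tag a frame only if it arrives untagged.
      from typing import Optional

      def fex_ingress(vntag: Optional[int], port_tag: int) -> int:
          """Return the single VN-Tag the frame carries toward the 6100."""
          if vntag is not None:
              return vntag   # Palo already tagged it: pass through unchanged
          return port_tag    # non-NIV adapter (e.g. Intel Oplin): FEX tags it

      print(fex_ingress(vntag=17, port_tag=5))    # 17 -- Palo's tag survives
      print(fex_ingress(vntag=None, port_tag=5))  # 5  -- FEX applies its own
      ```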

      Cheers,
      Brad

  4. Ajay says:

    Hello Brad!

    Great, great blog! This actually cleared up most of the doubts I had about the Palo card. I just hope to find the budget to get one in my mini-DC. :)

    Thanks again. As Eng said above, your blogs really help readers visualise the virtual stuff, which is paramount.

  5. Richard Chan says:

    Hello Brad,

    Just came across this post. What if you have Palo – UCS 2100 – UCS 6100?
    How do you manage VNTag-ing and prevent double tagging?
    Does the UCS 2100 have a sort of bypass mode where it knows that upstream is doing the VNTag?

    • Brad Hedlund says:

      Hi Richard,
      The UCS 2100 (FEX) will only apply a VN-Tag if it receives a frame from a server without one. This would only happen if a server had a non-NIV adapter such as Intel or Broadcom. The Palo adapter will always apply a VN-Tag.
      Prior to that, the Palo adapter and UCS 6100 have negotiated a VN-Tag value to use.

      Cheers,
      Brad
