Network Virtualization is like a big virtual chassis

Filed in Fabrics, OpenFlow, SDN on October 12, 2011

This is something I’ve been chewing on for a while now and here’s my first rough attempt at writing it down: Network Virtualization is the new chassis switch, only much bigger. (and a lot less proprietary)

  • The x86 server is the new Linecard
  • The network switch is the new ASIC
  • VXLAN (or NVGRE) is the new Chassis
  • The SDN Controller is the new SUP Engine

The result is a simplified data center network: one expansive virtual chassis switch that encompasses the entire data center, rooted in standards-based technologies and open protocols.

The physical chassis switch is a brilliant piece of network engineering.  It provides a tremendous amount of simplicity for the network operator in that the network inside the chassis has already been built for you by the network vendor.  There’s a lot of complexity inside that chassis, built from a vast network of ASICs, but you don’t care.  You just slide your Linecards into the chassis and define your network configuration and policies at a single point that abstracts the underlying chassis complexity: the Supervisor Engine.

As you define the logical topology for your apps at the SUP Engine with things such as VLANs or VRFs, do you usually think about or worry about the “spaghetti mess” of traffic flows that happen inside this chassis as your app follows your logical topology? No, you usually don’t.  Especially if your chassis switch was built with an internal network of low latency ASICs and non-blocking bandwidth.

The problem here is that the physical chassis switch can only be built so big.  So once I start connecting this chassis switch to other switches and begin building my network, I have to configure the logical topology for my app in multiple places and manually stitch it all together.  Because the configuration of each switch forms the logical topology for my application, optimizing the spaghetti mess of application flows on these inter-switch links might become a concern.  And the bigger my network gets, the more complexity I have to manage.  The network vendor did not build the inter-switch network; I did, and therefore I am responsible for it.  Some vendors have made attempts to construct this multi-switch network as one big vendor-provided distributed chassis, such as Juniper QFabric. However this comes with the unfortunate consequence of proprietary technologies and protocols that create the biggest vendor lock-in we have ever seen in the network industry.  Not cool. :-(

There has to be a better way.

By using the approach of software defined networking (SDN) and leveraging open protocols such as OpenFlow, VXLAN, NVGRE, etc., it will be possible to virtualize the data center network with an underlying infrastructure rooted in open standards.  The result is a network that is managed like, and has the simplicity of, one big virtual chassis switch built with low-cost, high-performance commodity hardware, devoid of any overreaching vendor lock-in.  This is what I mean by “Network Virtualization”.

There will be different ways to approach this as the Network Virtualization ecosystem matures: “edge virtualization” today, and “full virtualization” later.

In a network based on edge virtualization (shown above), you have an infrastructure where the Linecard of your virtual chassis is the x86 server (hundreds or thousands of servers) running an instance of VXLAN or NVGRE, the scope of which represents the size of your virtual chassis’ virtual sheet metal.  Much like a physical chassis, the linecards of your virtual chassis (x86 servers) are connected by a fabric of ASICs (network switches).  The network switches (ASICs) form a standard Layer 3 switched IP network providing the backplane for the x86 linecards.  The supervisor engine of your virtual chassis is the SDN Controller, which provides the single point of configuration and a single view for defining your application’s logical topology across the network.  At this point the ASICs of your virtual chassis (switches) are not managed by the SDN controller; rather, they are managed separately in this “edge virtualization” model.  Unlike before, though, the switch configuration has no bearing on forming the application’s logical topology, which simplifies the physical network configuration.
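
To make the encapsulation piece a bit more concrete, here is a minimal sketch (plain Python, not any vendor’s implementation) of what a VXLAN endpoint on the host does: it wraps the VM’s Ethernet frame in an 8-byte VXLAN header carrying a 24-bit segment ID, and hands the result to the host’s ordinary UDP/IP stack, so the physical switches only ever route plain IP between hosts. The VNI, the dummy inner frame, and the constant names below are purely illustrative.

    import struct

    VXLAN_UDP_PORT = 4789        # IANA-assigned outer UDP port (some early implementations used 8472)
    VXLAN_FLAG_VNI_VALID = 0x08  # "I" flag: the 24-bit VNI field is valid

    def vxlan_encapsulate(inner_frame: bytes, vni: int) -> bytes:
        """Prepend the 8-byte VXLAN header to a VM's Ethernet frame.

        The result travels as the UDP payload between two hosts (VTEPs); the
        outer UDP/IP/Ethernet headers come from the host's normal IP stack,
        so the physical network just sees routed host-to-host IP traffic.
        """
        if not 0 <= vni < 2 ** 24:
            raise ValueError("VNI is a 24-bit segment identifier")
        # Header layout: flags(1 byte) | reserved(3) | VNI(3) | reserved(1)
        header = struct.pack("!B3s3sB",
                             VXLAN_FLAG_VNI_VALID,
                             b"\x00\x00\x00",
                             vni.to_bytes(3, "big"),
                             0)
        return header + inner_frame

    # Example: a dummy 60-byte inner frame on segment 5000 of the virtual chassis
    payload = vxlan_encapsulate(inner_frame=b"\x00" * 60, vni=5000)
    assert len(payload) == 8 + 60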

At this point, much like a physical chassis, do you really need to worry about the “spaghetti mess” of flows inside your virtual chassis?  If you’ve built the physical network with low latency, non-blocking, east-west traffic optimized gear, the hop count and link path of each little flow shouldn’t matter with respect to the latency and bandwidth realized at the edge (where it matters).

Looking forward to “full virtualization”, we take it a step further by also including the physical network switches (ASICs) under the auspices of the SDN Controller (SUP Engine) using OpenFlow.  At this point the SDN controller provides a single view of the configuration and of the flows traversing the linecards and ASICs of your virtual chassis.  You might also have top-of-rack switches providing an edge for non-virtualized servers or other devices (service appliances, routers, etc.).  All of this with an underlying infrastructure rooted in open standards.
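
As a rough illustration of what the SUP Engine programming its linecards and ASICs might look like in this model, here is a small sketch of a controller compiling one logical segment into per-switch flow entries. The controller class, rule format, and switch names are hypothetical stand-ins rather than a real OpenFlow controller API; the point is simply that the logical topology is defined once, at the controller, and pushed down as forwarding state.

    from dataclasses import dataclass, field

    @dataclass
    class FlowEntry:
        """A simplified match/action rule, loosely modeled on an OpenFlow flow entry."""
        match: dict      # e.g. {"in_port": 3} or {"tunnel_vni": 5000}
        actions: list    # e.g. ["set_tunnel:5000", "output:uplink"]

    @dataclass
    class VirtualChassisController:
        """Hypothetical 'SUP Engine': the single point of configuration.

        Switch names stand in for the 'linecards' (vswitches on x86 hosts) and
        the 'ASICs' (physical switches); a real controller would speak OpenFlow
        or a similar protocol to each of them.
        """
        tables: dict = field(default_factory=dict)   # switch name -> list of FlowEntry

        def define_segment(self, vni: int, attachments: dict) -> None:
            """Compile one logical segment into per-switch flow entries.

            attachments maps a switch name to the local ports that belong to the
            segment, e.g. {"vswitch-host12": [1, 2], "tor-a": [48]}.
            """
            for switch, ports in attachments.items():
                rules = self.tables.setdefault(switch, [])
                for port in ports:
                    # Traffic entering a member port is tagged with the segment's
                    # VNI and sent toward the fabric uplink
                    rules.append(FlowEntry(match={"in_port": port},
                                           actions=[f"set_tunnel:{vni}", "output:uplink"]))
                # Traffic arriving from the fabric on this segment is delivered to
                # the local member ports (a real controller would also match MACs)
                rules.append(FlowEntry(match={"tunnel_vni": vni},
                                       actions=[f"output:{p}" for p in ports]))

    controller = VirtualChassisController()
    controller.define_segment(vni=5000,
                              attachments={"vswitch-host12": [1, 2], "tor-a": [48]})
    print(controller.tables["tor-a"])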

Now that’s Way Cool :-)

Cheers,

Brad

About the Author

Brad Hedlund (CCIE Emeritus #5530) is an Engineering Architect in the CTO office of VMware’s Networking and Security Business Unit (NSBU). Brad’s background in data center networking begins in the mid-1990s with a variety of experience in roles such as IT customer, value added reseller, and vendor, including Cisco and Dell. Brad also writes at the VMware corporate networking virtualization blog at blogs.vmware.com/networkvirtualization

Comments (21)

  1. Derick Winkworth says:

    There is still room to drive virtualization into the forwarding plane, as in multiple domains that separate forwarding rules. These domains should extend all the way into the forwarding hardware, such that if an SDN controller attempts to build flow rules for transitivity between domains, the forwarding node rejects this. I think of this as analogous to how virtualization is being driven into the CPU, making the hypervisor less relevant and potentially obsolete.

    If OpenFlow, for instance, could drive virtualization into the forwarding hardware, then consumers and auditors could know that the proper separation is happening, rather than assuming that the controller is managing all the flows appropriately. Who knows how the controller is building its forwarding logic (forwarding rules can contain wildcards)? Especially when the controller has the ability to build arbitrary flows between any ports.
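
    As a toy illustration of the separation described above (hypothetical Python, standing in for logic that would live in switch firmware), the forwarding element itself could refuse any rule whose ports sit in different domains, no matter what the controller asks for:

        # Toy sketch: domain separation enforced by the forwarding node itself,
        # independent of what the controller requests. All names are hypothetical.

        PORT_DOMAIN = {1: "tenant-a", 2: "tenant-a", 3: "tenant-b", 4: "tenant-b"}

        class DomainViolation(Exception):
            pass

        def install_flow(flow_table, in_port, out_port, match):
            """Reject any rule that would create transitivity between domains."""
            if PORT_DOMAIN[in_port] != PORT_DOMAIN[out_port]:
                raise DomainViolation(
                    f"rule crosses domains {PORT_DOMAIN[in_port]!r} -> {PORT_DOMAIN[out_port]!r}")
            flow_table.append({"in_port": in_port, "match": match, "output": out_port})

        table = []
        install_flow(table, in_port=1, out_port=2, match={"eth_dst": "aa:bb:cc:dd:ee:ff"})
        try:
            install_flow(table, in_port=1, out_port=3, match={})   # controller oversteps
        except DomainViolation as err:
            print("rejected by the forwarding hardware:", err)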

    • Brad Hedlund says:

      Hi Derick,
      I get what you’re saying. I just wonder if this will be important enough that customers will be willing to pay for the new hardware – just for the additional comfort factor. My guess is that some customers would definitely pay for this (government), but how many others?

      Thanks for the comment, and keep up the great contributions to Packet Pushers. It’s getting noticed.

      Cheers,
      Brad

      • Derick Winkworth says:

        I’ve come full circle on this. I really do think now that massively scaled data centers will not need smart network devices at all. The network will be a big fast bus extension between hosts.

        • Donny Parrott says:

          I like this guy. I have been yelling this from the mountain tops and getting reviled for it.

          The day is not far off when grid designs will become commonplace. As companies like Nutanix drive the physical platform, software will consume many other pieces.

          One idea is that portability of virtualized workloads will drive all supporting resources to become untrusted transports. Security will be less about the bus “network” than the data being managed.

          Two thumbs up.

    • Dmitri Kalintsev says:

      > the proper separation is happening

      Isn’t that what we call today “VLANs”? They are, after all, implemented “in hardware”. Yes, granted – there are too few of these available, and thus they are a part of The Problem. ;)

  2. John G. says:

    Greenfield designs may take well to this. I find many networks have old sections that have just “not been integrated yet” and are running various versions of code. If you can get past the potential for a big failure domain, this could really simplify things.

    At the rate these protocols change, will people have the patience to wait for the SDN to get support for a new feature, or for interoperability to become standard?

    • Brad Hedlund says:

      Hi John,
      The VXLAN and NVGRE standards came together pretty quickly. At which point it only took six weeks for Open vSwitch to get VXLAN.
      Given that edge virtualization can be accomplished entirely with software and standard x86 hardware, the feature velocity is quite impressive.

      Cheers,
      Brad

      • Jon Hudson says:

        Hey Brad,

        Crazy cool stuff. Imagine years from now: grab a server, add some PCI-“next” cards, install some netOS hypervisor-like thing, go to a “Route Store” and download an IS-IS app by Duke, a BGP app by Caltech, a Junos CLI app and, shaboom, you have a “Homebrew Router”.

        I did want to add one detail though. Yes, the time it took for VXLAN to show up in a product was awesome. However, neither VXLAN nor NVGRE is a standard. And actually, VXLAN’s intended status is experimental and NVGRE’s is informational. Not even standards track (yet). Not a big deal, just more detail.

        • Brad Hedlund says:

          Hey Jon,

          Those that do make a big deal of VXLAN and NVGRE having a status of “experimental” or “informational” are completely missing the point.
          What you have with VXLAN and NVGRE are multiple vendors and industry captains coming together, agreeing on how something should work, documenting it for the world to see, and publishing it in a standards body. Whether or not it carries an official “standards” status is really inconsequential. Much like pointing out that a tomato is a fruit, not a vegetable.

          At this point, a large ecosystem can develop products and solutions around VXLAN and NVGRE. You don’t have that with closed solutions such as QFabric.

          Cheers,
          Brad

  3. Scott DeShong says:

    Interesting post Brad. I’ve been trying to read up on OpenFlow and one thing I don’t quite understand is the control plane connectivity. Based on what I’ve read I’m assuming OpenFlow uses a distributed forwarding methodology, but how do we supply quick responses for control plane traffic, specifically new flow decision processes and security updates to existing flows? It seemed that Ivan somewhat addressed the issue in “What is OpenFlow Part 1” by mentioning the need for Terabit networks, but I haven’t seen anything else address the low-latency, fast response necessary for the control plane. Would there be a separate control plane network between SDN controllers and hypervisor- or ASIC-based switches? I get the overall idea but I haven’t seen anyone address control plane traffic forwarding and how it’s handled. It could also be that I just missed it!
    Keep up the great work!

    • Brad Hedlund says:

      Hi Scott,
      With edge virtualization the physical network is not under the control of the SDN controller, so no special out-of-band SDN control plane network is needed there. On the x86 server host it’s quite common to have a NIC dedicated to management in normal virtualization environments (VMware, Hyper-V, etc.), and I don’t see why that would be any different with SDN edge virtualization.
      When you move to full virtualization it seems to make the most sense to have a separate out-of-band GE network linking the OpenFlow Controller to the physical switches for control plane messages and punted packets.
      Keep in mind that the OpenFlow Controller does not need to inspect the first packet of every single flow. Rather, you can use proactive routing, where the OpenFlow controller configures the switches’ data plane in advance (a rough sketch of the difference is below). I raised this same concern in my post “On data center scale, OpenFlow, and SDN” — check out the responses in the comments section about “proactive routing”.
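
      A minimal sketch of that difference, using toy Python objects rather than a real OpenFlow controller or switch API (the names and topology are made up):

          class ToySwitch:
              def __init__(self, name):
                  self.name, self.flow_table, self.punted = name, {}, 0

              def handle_packet(self, dst, controller):
                  if dst in self.flow_table:         # fast path: rule already installed
                      return self.flow_table[dst]
                  self.punted += 1                   # slow path: punt to the controller
                  port = controller.reactive_decision(self.name, dst)
                  self.flow_table[dst] = port
                  return port

          class ToyController:
              def __init__(self, topology):
                  self.topology = topology           # {switch name: {dst: out_port}}

              def reactive_decision(self, switch, dst):
                  return self.topology[switch][dst]

              def proactive_provision(self, switches):
                  for sw in switches:                # pre-install everything already known
                      sw.flow_table.update(self.topology[sw.name])

          ctl = ToyController({"tor-a": {"10.0.0.5": 12, "10.0.0.9": 13}})
          sw = ToySwitch("tor-a")

          ctl.proactive_provision([sw])              # proactive: tables are filled up front
          sw.handle_packet("10.0.0.5", ctl)
          assert sw.punted == 0                      # no first-packet punt was needed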

      Cheers,
      Brad

      • Scott DeShong says:

        Brad,
        Thanks for the response. I see your point about this being an edge-only implementation, but why? If we already have control plane issues, why not look at it from a holistic standpoint instead of pinpointing the solution? I would suggest using the information gathered at the edge to determine what the physical switches should do, and reduce hardware costs and management complexity across the board. Punt packets from the virtual OpenFlow switch on each host and push the flow information collected down to the aggregates. Since the controller knows the topology, it could proactively create a forwarding table for the aggregates based on the edge information. This reduces each flow request to the originating point and removes duplicate requests from aggregates. Maybe use a GLBP-style NHRP to allow each aggregate to have a local gateway…

        Not real sure on L3 but it seems like aggregating tables from virtual switch flows to aggregate switches by way of the SDN controller might alleviate some of the control plane overhead.

        On another note, I personally am seeing most dedicated management go away with 10GbE. With multiple 10GbE links per host, traffic engineering can handle most prioritization between management and production networking. Having a custom-built dedicated control plane network adds a lot of overhead and seems to negate a lot of the inherent benefits of OpenFlow.

        Sorry for the more random thought comment. I’m still trying to compile everything in my head.

  4. Michal says:

    Brad, nice write-up, always interesting thoughts. Maybe not directly related to this article, but I feel like the often-missing piece in different debates around network virtualization is bandwidth availability, or its utilization, at different levels of the “infrastructure”.

  5. Chris Marino says:

    Nice post Brad,

    I don’t want to nitpick, but I think that to some degree you’re conflating ‘virtual networking’ with ‘Software Defined Networking’. I think someone could reasonably conclude from your analysis that a virtual network is simply a way to get more than 4K VLANs across a big, flat L2 domain.

    I know that’s not really what you’re saying, but I think what’s left unsaid here is what a virtual network actually is. To me, to build virtual networks with an SDN you need what the OpenFlow folks would call a ‘FlowVisor’ (basically an app on top of your SDN Controller). That’s certainly not the only way to implement a virtual network, but it provides a useful analog to the concept of a virtual machine for virtual compute.

    Here is a blog post and screencast of a presentation I gave at SDForum a while back that describes what I think a virtual network really is. http://blog.vcider.com/2011/05/sd-forums-virutal-network-presentation/

    To me, the most important characteristic of a virtual network is a delegated control mechanism. And my own personal litmus test for a virtual network is ‘a network that I control, that runs on one that I don’t control’.

  6. Joe Smith says:

    Brad, interesting post. Your thoughts can be quite constructive, although in this post you seem to be trying too hard to sound… innovative? You might also want to abandon the Cisco “SUP” references. Force10 switches don’t use “Supervisors.”

    • Brad Hedlund says:

      Hey Joe,
      Admittedly this isn’t the best post I’ve ever written. As stated in the first sentence it was a first “rough attempt” at jotting down some thoughts. I think I’ll take another stab at it in the coming weeks.

      As for the “SUP” references, nope, those will be staying for a while. Consider the audience ;)

      Cheers,
      Brad

  7. Alex says:

    Brad, I admire your blog, but you’ve got to laugh at the basis of your argument about QFabric: “However this comes with the unfortunate consequence of proprietary technologies and protocols that create the biggest vendor lock-in we have ever seen in the network industry.” Kettle, pot, and black jumps to mind :-)

    Isn’t it the proprietary lock-in of QFabric that’s driving SDN?

    • Brad Hedlund says:

      Alex,
      Thanks :-)
      Despite all of the objections to that statement (mostly from Juniper folks), to this day nobody has cited an example of a bigger networking lock-in than QFabric.
      Oh btw, If you’re referring to my time at Cisco as being the “Kettle”, point taken, but this was my first post as a non-Cisco employee, so I get a pass for that, don’t I? :-)

      Cheers,
      Brad

  8. Dave says:

    Brad, very well written! Thanks!

    I have seen that most physical switches do not support NVGRE traffic. When a VM needs to go outside the NVGRE network, the key field might contain a VLAN ID above 4096, and most switches do not support a packet tagged with a VLAN ID greater than 4096.

    Thank You!
    Dave
