This is something I’ve been chewing on for a while now, and here’s my first rough attempt at writing it down: Network Virtualization is the new chassis switch, only much bigger. And a lot less proprietary.

  • The x86 server is the new Linecard
  • The network switch is the new ASIC
  • VXLAN (or NVGRE) is the new Chassis
  • The SDN Controller is the new SUP Engine

The result is a simplified data center network: one expansive virtual chassis switch that can encompass the entire data center, rooted in standards-based technologies and open protocols.

[Figure: chassis switch]

The physical chassis switch is a brilliant piece of network engineering. It provides a tremendous amount of simplicity for the network operator because the network inside the chassis has already been built for you by the network vendor. There’s a lot of complexity inside that chassis, implemented by a vast network of ASICs, but you don’t care. You just slide your linecards into the chassis and define your network configuration and policies at a single point that abstracts the underlying chassis complexity: the Supervisor Engine.

As you define the logical topology for your apps at the SUP Engine with things such as VLANs or VRFs, do you usually worry about the spaghetti mess of traffic flows that happen inside this chassis as your app follows your logical topology? No, you usually don’t. Especially if your chassis switch was built with an internal network of low-latency ASICs and non-blocking bandwidth.

The problem is that a physical chassis switch can only be built so big. Once I start connecting this chassis switch to other switches and building out my network, I have to configure the logical topology for my app in multiple places and manually stitch it all together. Because the configuration of each switch forms the logical topology for my application, optimizing the spaghetti mess of application flows on these inter-switch links might become a concern. And the bigger my network gets, the more complexity I have to manage. The network vendor did not build the inter-switch network; I did, and therefore I am responsible for it. Some vendors have attempted to construct this multi-switch network as one big vendor-provided distributed chassis, such as Juniper QFabric. However, this comes with the unfortunate consequence of proprietary technologies and protocols that create the biggest vendor lock-in we have ever seen in the network industry.

There has to be a better way.

[Figure: network virtualization virtual chassis]

By using the approach of software-defined networking (SDN) and leveraging open protocols such as OpenFlow, VXLAN, and NVGRE, it will be possible to virtualize the data center network on an underlying infrastructure rooted in open standards. The result is a network that is managed like, and has the simplicity of, one big virtual chassis switch built with low-cost, high-performance commodity hardware, free of any overreaching vendor lock-in. This is what I mean by "Network Virtualization".
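To make the overlay idea a bit more concrete, here’s a minimal Python sketch of VXLAN encapsulation as defined in RFC 7348 (the UDP/IP plumbing and real socket handling are left out, and in practice the hypervisor vswitch or NIC does this for you). The point is simply that the logical segment travels in an 8-byte header carrying a 24-bit VNI, riding over a plain IP underlay:

```python
# Minimal sketch of VXLAN encapsulation (RFC 7348): wrap an inner Ethernet
# frame with an 8-byte VXLAN header carrying the 24-bit segment ID (VNI).
import struct

VXLAN_UDP_PORT = 4789  # IANA-assigned UDP destination port for VXLAN


def vxlan_encapsulate(inner_frame: bytes, vni: int) -> bytes:
    """Prepend an RFC 7348 VXLAN header to an inner Ethernet frame."""
    if not 0 <= vni < 2**24:
        raise ValueError("VNI must fit in 24 bits")
    flags = 0x08  # the 'I' bit: VNI field is valid
    # Header layout: flags (1 byte) + reserved (3) + VNI (3) + reserved (1) = 8 bytes
    header = struct.pack("!B3s3sB", flags, b"\x00" * 3, vni.to_bytes(3, "big"), 0)
    return header + inner_frame


# A tenant frame on logical segment (VNI) 5001, ready to hand to a UDP socket
packet = vxlan_encapsulate(b"\xff" * 60, vni=5001)
assert len(packet) == 8 + 60 and packet[0] == 0x08
```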

There will be different ways to approach this as the Network Virtualization ecosystem matures: "edge virtualization" today, and "full virtualization" later.

In a network based on edge virtualization (shown above), you have an infrastructure where the linecard of your virtual chassis is the x86 server (hundreds or thousands of servers) running an instance of VXLAN or NVGRE, the scope of which represents the size of your virtual chassis’ virtual sheet metal. Much like a physical chassis, the linecards of your virtual chassis (x86 servers) are connected by a fabric of ASICs (network switches). The network switches (ASICs) form a standard Layer 3 switched IP network providing the backplane for the x86 linecards. The supervisor engine of your virtual chassis is the SDN Controller, which provides a single point of configuration and a single view for defining your application’s logical topology across the network. In this "edge virtualization" model the ASICs of your virtual chassis (the switches) are not managed by the SDN controller; they are managed separately. Unlike before, though, the switch configuration has no bearing on forming the application’s logical topology, which simplifies the physical network configuration.
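Here’s one rough way to picture that in code: a toy model where the x86 servers are the linecards (each with a VXLAN tunnel endpoint on the L3 underlay) and the controller is the SUP Engine holding the single source of truth for which servers participate in which logical segment. The names (VirtualChassis, add_linecard, attach, flood_list) are purely illustrative, not any real controller’s API:

```python
# Toy model of the "virtual chassis" in the edge-virtualization picture:
# servers are linecards, the controller is the SUP Engine.
from dataclasses import dataclass, field


@dataclass
class Linecard:
    hostname: str   # the x86 server
    vtep_ip: str    # its VXLAN tunnel endpoint on the Layer 3 underlay


@dataclass
class VirtualChassis:
    linecards: dict = field(default_factory=dict)  # hostname -> Linecard
    segments: dict = field(default_factory=dict)   # vni -> set of hostnames

    def add_linecard(self, card: Linecard) -> None:
        self.linecards[card.hostname] = card

    def attach(self, hostname: str, vni: int) -> None:
        """Attach a server to a logical segment; one config point for many servers."""
        self.segments.setdefault(vni, set()).add(hostname)

    def flood_list(self, vni: int) -> list:
        """VTEPs that need broadcast/unknown-unicast traffic for this segment."""
        return [self.linecards[h].vtep_ip for h in sorted(self.segments.get(vni, ()))]


controller = VirtualChassis()
controller.add_linecard(Linecard("server-01", "10.0.0.11"))
controller.add_linecard(Linecard("server-02", "10.0.0.12"))
controller.attach("server-01", vni=5001)
controller.attach("server-02", vni=5001)
print(controller.flood_list(5001))  # ['10.0.0.11', '10.0.0.12']
```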

At this point, much like a physical chassis, do you really need to worry about the “spaghetti mess” of flows inside your virtual chassis? If you’ve built the physical network with low latency, non-blocking, east-west traffic optimized gear, the hop count and link path of each little flow shouldn’t matter with respect to the latency and bandwidth realized at the edge (where it matters).

Looking forward to "full virtualization", we take it a step further by bringing the physical network switches (ASICs) under the auspices of the SDN Controller (SUP Engine) using OpenFlow. At this point the SDN controller provides a single view of the configuration and the flows traversing both the linecards and the ASICs of your virtual chassis. You might also have top-of-rack switches providing an edge for non-virtualized servers or other devices (service appliances, routers, etc.). All of this on an underlying infrastructure rooted in open standards.
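As a sketch of what changes with full virtualization: the same controller that programs the vswitches at the edge now also programs the physical switches, so edge and core share one point of control. The FlowRule and push_flow names below are hypothetical placeholders standing in for real OpenFlow flow-mod messages, not an actual controller library:

```python
# Hypothetical sketch: one controller pushing forwarding state to both a
# server-resident vswitch (linecard) and a physical ToR switch (ASIC).
from dataclasses import dataclass


@dataclass
class FlowRule:
    match: dict     # e.g. {"vni": 5001, "dst_mac": "de:ad:be:ef:00:02"}
    actions: list   # e.g. ["set_tunnel_dst:10.0.0.12", "output:vxlan0"]


class FullVirtualizationController:
    def __init__(self) -> None:
        self.devices = {}  # device name -> list of installed FlowRules

    def register(self, device: str) -> None:
        """A vswitch on a server, or a physical ToR switch, speaking OpenFlow."""
        self.devices[device] = []

    def push_flow(self, device: str, rule: FlowRule) -> None:
        # A real controller would send an OpenFlow flow-mod here; we just record
        # the intended state to show the single point of control.
        self.devices[device].append(rule)


ctrl = FullVirtualizationController()
ctrl.register("vswitch@server-01")  # the edge (a linecard)
ctrl.register("tor-switch-a")       # the core (an ASIC), now also under the controller
ctrl.push_flow("tor-switch-a",
               FlowRule({"dst_ip": "10.0.0.12/32"}, ["output:uplink1"]))
ctrl.push_flow("vswitch@server-01",
               FlowRule({"vni": 5001, "dst_mac": "de:ad:be:ef:00:02"},
                        ["set_tunnel_dst:10.0.0.12", "output:vxlan0"]))
```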

Now that’s Way Cool.

Cheers,
Brad