Network Virtualization is like a big virtual chassis

This is something I’ve been chewing on for a while now, and here’s my first rough attempt at writing it down: Network Virtualization is the new chassis switch, only much bigger (and a lot less proprietary).

  • The x86 server is the new Linecard
  • The network switch is the new ASIC
  • VXLAN (or NVGRE) is the new Chassis
  • The SDN Controller is the new SUP Engine

The result is a simplified data center network: one expansive virtual chassis switch that can encompass the entire data center, rooted in standards-based technologies and open protocols.

The physical chassis switch is a brilliant piece of network engineering.  It provides a tremendous amount of simplicity for the network operator because the network inside the chassis has already been built for you by the network vendor.  There’s a lot of complexity inside that chassis, built from a vast network of ASICs, but you don’t care.  You just slide your linecards into the chassis and define your network configuration and policies at a single point that abstracts the underlying chassis complexity: the Supervisor Engine.

As you define the logical topology for your apps at the SUP Engine with constructs such as VLANs or VRFs, do you usually worry about the “spaghetti mess” of traffic flows happening inside the chassis as your app follows that logical topology? No, you usually don’t, especially if your chassis switch was built with an internal network of low-latency ASICs and non-blocking bandwidth.

The problem is that a physical chassis switch can only be built so big.  Once I start connecting this chassis switch to other switches and building my network, I have to configure the logical topology for my app in multiple places and manually stitch it all together.  Because the configuration of each switch forms the logical topology for my application, optimizing the spaghetti mess of application flows on these inter-switch links can become a concern.  And the bigger my network gets, the more complexity I have to manage.  The network vendor did not build the inter-switch network; I did, and therefore I am responsible for it.  Some vendors have attempted to construct this multi-switch network as one big vendor-provided distributed chassis, such as Juniper QFabric.  However, this comes with the unfortunate consequence of proprietary technologies and protocols that create the biggest vendor lock-in we have ever seen in the network industry.  Not cool. :-(

There has to be a better way.

By using the approach of software defined networking (SDN) and leveraging open protocols such as OpenFlow, VXLAN, and NVGRE, it becomes possible to virtualize the data center network on an underlying infrastructure rooted in open standards.  The result is a network that is managed like, and has the simplicity of, one big virtual chassis switch built with low-cost, high-performance commodity hardware, devoid of any overreaching vendor lock-in.  This is what I mean by “Network Virtualization”.
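To make the overlay idea a bit more concrete, here’s a minimal sketch (my own illustration of the VXLAN encapsulation format, not code from any product) of the 8-byte VXLAN header that a sending host prepends to a VM’s Ethernet frame before shipping it across the Layer 3 underlay inside a UDP packet:

```python
import struct

VXLAN_UDP_PORT = 4789  # IANA-assigned destination port for VXLAN


def vxlan_header(vni):
    """8-byte VXLAN header: flags byte with the 'I' bit set, 24-bit VNI,
    all reserved bits zero."""
    if not 0 <= vni < 2 ** 24:
        raise ValueError("VNI must fit in 24 bits")
    # Word 1: flags (0x08) in the top 8 bits, 24 reserved bits.
    # Word 2: VNI in the top 24 bits, 8 reserved bits.
    return struct.pack("!II", 0x08 << 24, vni << 8)


def encapsulate(inner_frame, vni):
    """VXLAN payload = header + the original L2 frame.  The sending host (the
    'linecard' of the virtual chassis) adds the outer UDP/IP headers, and the
    switch fabric just routes it like any other IP traffic."""
    return vxlan_header(vni) + inner_frame
```

The 24-bit VNI is part of what makes the virtual chassis “much bigger”: roughly 16 million logical segments versus the 4096 VLAN IDs available on a physical chassis.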

There will be different ways to approach this as the Network Virtualization ecosystem matures: today with “edge virtualization”, and later with “full virtualization”.

In a network based on edge virtualization (shown above), you have an infrastructure where the linecard of your virtual chassis is the x86 server (hundreds or thousands of them) running an instance of VXLAN or NVGRE, the scope of which represents the size of your virtual chassis’ virtual sheet metal.  Much like a physical chassis, the linecards of your virtual chassis (x86 servers) are connected by a fabric of ASICs (network switches).  The network switches (ASICs) form a standard Layer 3 switched IP network providing the backplane for the x86 linecards.  The supervisor engine of your virtual chassis is the SDN Controller, which gives you a single point of configuration and a single view for defining your application’s logical topology across the network.  At this point the ASICs of your virtual chassis (the switches) are not managed by the SDN controller; rather, they are managed separately in this “edge virtualization” model.  Unlike before, though, the switch configuration has no bearing on the application’s logical topology, which simplifies the physical network configuration.
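Here’s a purely hypothetical sketch (not any real controller’s API; the names are made up for illustration) of the kind of state the SDN Controller keeps in the edge model: which VM belongs to which logical segment (VNI) and which x86 VTEP it sits behind.  The physical switches never see this table; they only route the outer IP packets.

```python
from dataclasses import dataclass


@dataclass
class VmPort:
    vm_mac: str    # the VM's MAC address inside the overlay
    vtep_ip: str   # the x86 host ("linecard") terminating the tunnel
    vni: int       # logical segment, analogous to a VLAN defined at the SUP Engine


class EdgeController:
    """Single point where the application's logical topology is defined."""

    def __init__(self):
        self.ports = []

    def attach(self, vm_mac, vtep_ip, vni):
        self.ports.append(VmPort(vm_mac, vtep_ip, vni))

    def forwarding_table(self, vni):
        """What every VTEP in segment `vni` needs to know: inner MAC -> remote
        VTEP IP.  This gets pushed to the x86 hosts; the L3 underlay is untouched."""
        return {p.vm_mac: p.vtep_ip for p in self.ports if p.vni == vni}


ctrl = EdgeController()
ctrl.attach("00:00:00:00:00:01", "10.0.0.11", vni=5000)
ctrl.attach("00:00:00:00:00:02", "10.0.0.42", vni=5000)
print(ctrl.forwarding_table(5000))
```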

At this point, much like with a physical chassis, do you really need to worry about the “spaghetti mess” of flows inside your virtual chassis?  If you’ve built the physical network with low-latency, non-blocking gear optimized for east-west traffic, the hop count and link path of each little flow shouldn’t matter with respect to the latency and bandwidth realized at the edge (where it matters).

Looking forward to “full virtualization”, we take it a step further by bringing the physical network switches (ASICs) under the auspices of the SDN Controller (SUP Engine) using OpenFlow.  At this point the SDN controller provides a single view of the configuration and the flows traversing the linecards and ASICs of your virtual chassis.  You might also have top-of-rack switches providing an edge for non-virtualized servers or other devices (service appliances, routers, etc.).  All of this with an underlying infrastructure rooted in open standards.
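To give a flavor of what that could look like, here’s a minimal sketch using the open-source Ryu controller framework (one of several OpenFlow controllers; the Ryu class and handler names are real, but the app itself is just an illustration, not any vendor’s product).  It installs a default table-miss rule on every switch that connects, so unknown traffic is punted up to the controller, the SUP Engine of our virtual chassis, which can then decide how flows are forwarded:

```python
from ryu.base import app_manager
from ryu.controller import ofp_event
from ryu.controller.handler import CONFIG_DISPATCHER, set_ev_cls
from ryu.ofproto import ofproto_v1_3


class VirtualChassisSup(app_manager.RyuApp):
    """Every OpenFlow 1.3 switch (an 'ASIC' of the virtual chassis) that joins
    gets a table-miss rule sending unmatched packets to the controller."""
    OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

    @set_ev_cls(ofp_event.EventOFPSwitchFeatures, CONFIG_DISPATCHER)
    def switch_features_handler(self, ev):
        dp = ev.msg.datapath
        ofp, parser = dp.ofproto, dp.ofproto_parser

        match = parser.OFPMatch()  # wildcard: match everything
        actions = [parser.OFPActionOutput(ofp.OFPP_CONTROLLER,
                                          ofp.OFPCML_NO_BUFFER)]
        inst = [parser.OFPInstructionActions(ofp.OFPIT_APPLY_ACTIONS, actions)]
        dp.send_msg(parser.OFPFlowMod(datapath=dp, priority=0,
                                      match=match, instructions=inst))
```

Run it with ryu-manager against OpenFlow 1.3 switches; a real deployment would add MAC learning and per-application flow rules on top, but the point is that one control point now spans both the edge and the physical fabric.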

Now that’s Way Cool :-)

Starting a new journey with Dell Force10

With mixed emotions, this week I submitted my resignation to Cisco, a fantastic company with great products and great people.  This was the result of an exhausting, drawn-out thought process lasting several months.  The data center networking industry is changing fast, and this was purely a forward-looking move to best position myself and my family for those changes.  Again, it was strictly a business calculation; my thinking was not clouded by any spite or hard feelings.

I am extremely excited to be joining the team at Dell Force10.  As word began to leak about my departure to Dell, I received a lot of mixed reactions (which I totally expected).  Those who have a broad view of the industry and really understand how the data center is evolving totally “got it” and congratulated me right away on making a smart move.  Others who have been busy mastering their specific niche didn’t quite understand.  And that’s OK.  That tells me I’ve got a head start.

One particular piece of feedback I really enjoyed receiving today came from someone in the industry whom I greatly admire, a thought leader in Silicon Valley, who wrote this:

First, *excellent* move.  I’m very bullish on Dell these days.  Geng Lin has the right vision towards commoditizing the fabric and working with edge virtualization solutions …  Also, they are extremely well positioned … I think Dell can win, I really do..

… we would have loved to have spoken with you.  But honestly, Dell is probably a better fit.  It will allow you to continue to engage the online discourse on the datacenter, and to steer a behemoth in the right direction.  The next two years are going to see a huge shift in the market, and chances are you are riding the lead whale.

congrats again on a great move.  Well done.

Naturally, I couldn’t agree more!  Dell’s acquisition of Force10 was brilliant and presented the right opportunity at the right time.  I’m excited to go back to basics and join the front lines as an Enterprise Sales Engineer, working directly with customers.  After I learn what works, what doesn’t, where we win, and where we don’t, maybe then I’ll consider a corporate role.  But for now I have many whiteboards and customer handshakes in my future.  That’s where all the fun is anyway.  And of course the blogging will never stop!

This is going to be fun.