A better fabric with VMware ESXi for your network switch

Filed in Fabrics, SDN, VMware on November 8, 2012

I’m chewing on a few thoughts today I wanted to jot down here and marinate on for a while.  I’ll use VMware as the straw man for the sake of discussion, simply because — like it or not — they are the household name in virtualization.  Disclaimer: The illustrations here are purely of my own imagination and do not reflect anything more than that.

Does it make sense for the software that controls the host machine to also control the fabric that interconnects those hosts?

Note that the host machine software already has some control of the “fabric” — but not all.  What am I talking about?  The virtual edge.  The hypervisor vswitch — a network device (yes, it’s a network device) providing a network connection for the virtual machine (what we care about).

This brings us to the larger question of: What is the Fabric?  Most people think of “Fabric” as a specially constructed network of physical switches — with all of the emphasis placed on how we should connect these physical switches together, and how they should be configured, etc.

Meanwhile, there’s another fabric to contend with — the virtual fabric — constructed by the host software with virtual switches.  This is the fabric touching the virtual machines at the access edge.

We already know that VMware provides software to load on your favorite server hardware and cool stuff happens, right? Virtualization, multi-tenancy, intelligent resource allocations, QoS, push button automation, etc, etc.

VMware is a software company.  They don’t sell servers.  The model is and probably always will be: “You bring the hardware, we’ll bring the software.”  At least, that’s been the model for the *host* machines in our virtual data center.

The network is a different story though.  Here, the network switch vendor says: “I’ll bring both the hardware and the software — it’s a package deal.”

There has always been this proverbial line in the sand between host machines and network switches.  “You run your software there. I’ll run my software here — and we’ll all play nice together”.  Hence we never know what kind of thing we’ll need to play with on the other side of the line.  So we need to establish some dumbed down and very basic rules of the game that just about anybody can follow.

In our case, those rules would be things such as: “Here’s how the host machine instructs the switch where some data should be delivered to, and the SLA you want.”  Hint: Destination IP address, ToS bits.

What we end up with is a very basic and lowest common denominator interface between the host and physical network — and by consequence this same basic interface applies to the virtual and physical fabrics.  Something just good enough to say “Here’s where I want this data to go and can you please take good care of it for me?”
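To make that lowest common denominator concrete, here is roughly everything a host application can tell the network today, sketched in Python with the standard socket API. The destination address, port, and DSCP value below are placeholders.

```python
import socket

# The host side of today's basic contract: a plain UDP socket.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

# "...and the SLA you want": set DSCP EF (46), shifted into the ToS byte,
# and hope every switch along the path honors it.
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, 46 << 2)

# "Here's where I want this data to go": destination IP address and port.
sock.sendto(b"please take good care of this", ("192.0.2.10", 9000))
```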

Instead, what if the rules changed to:  “Here’s how you load and run software on this physical switch.”  Just like we do today with standard x86 servers.

Now you potentially have software in the physical fabric that intimately understands what the attached hosts are attempting to do.  And as a result we can play with a more sophisticated set of interfaces between the host and network, where the meaning of the information carried in those interfaces is defined by this fabric — not the IEEE or IETF.  This doesn’t necessarily mean switches with new special proprietary ASICs, although that’s possible.  You work with whatever your switch ASIC is capable of.

For example: Software vendors already work with the well-known capabilities of Intel x86 commodity silicon.  Similarly, software vendors could also work with the well-known capabilities of commodity switching ASICs (Intel and Broadcom).  Things like DCB and MPLS.
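As a sketch of what that could look like, here is a hypothetical capability check in Python. The SwitchAsic class, the capability names, and the feature list are all invented for illustration; the point is simply that fabric software adapts to whatever the merchant silicon advertises, just as server software adapts to x86 features.

```python
from dataclasses import dataclass, field

@dataclass
class SwitchAsic:
    """Invented stand-in for a commodity switching chip's feature set."""
    model: str
    capabilities: set = field(default_factory=set)

def enable_supported_features(asic, wanted=("dcb-pfc", "dcb-ets", "mpls", "vxlan")):
    """Turn on only the features this particular silicon can actually do."""
    return [feature for feature in wanted if feature in asic.capabilities]

# A hypothetical merchant-silicon profile: supports DCB and VXLAN, no MPLS.
asic = SwitchAsic("example-merchant-asic", {"dcb-pfc", "dcb-ets", "vxlan"})
print(enable_supported_features(asic))  # ['dcb-pfc', 'dcb-ets', 'vxlan']
```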

The end result perhaps being a more capable and contiguous Fabric.  A better blending of the physical and virtual. Something that delivers better capabilities around service assurance, traffic engineering, and better visibility into the ever changing correlation between the virtual and physical topology.

Further reading: Fabric: A Retrospective on Evolving SDN

Cheers,
Brad

About the Author

Brad Hedlund is an Engineering Architect in the CTO office of VMware’s Networking and Security Business Unit (NSBU), focused on network & security virtualization (NSX) and the software-defined data center. Brad’s background in data center networking began in the mid-1990s and spans roles as an IT customer, a systems integrator, architecture and technical strategy positions at Cisco and Dell, and speaking at industry conferences. CCIE Emeritus #5530.

Comments (11)


  1. Dmitri Kalintsev says:

    Hi Brad,

    Have you seen “my selfish take on what the network should be”?
    https://telecomoccasionally.wordpress.com/2012/09/09/my-selfish-view-on-what-i-want-the-network-to-be/

    There are a few parallels with what you’re talking about, however from a slightly different angle: as a network user, I care much less about the principle of “universal software” that would drive “commodity hardware”, and much more about the outcomes (which are richly described in my post).

    What do you think?

    — Dmitri

    • Brad Hedlund says:

      Dmitri,
      Yes, I remember reading your post the day you wrote it. I’m glad you posted the link here because it’s highly relevant. I remember thinking about your post at the time, “Sounds great, but only a monolithic proprietary end-to-end hw+sw system could get close to delivering that”.

      We’ve seen the market reject things like that. Example: QFabric. If anybody can pull it off, it will probably be Cisco. Perhaps that is what the Insieme team is working on right now. So that will be one way to give you your desired outcome.

      The other way to provide the outcome you want might be what I’m showing here — where we open up the interface to loading software on the network switch. We treat network switches like servers — a box you can load software on that meets your specific needs.

      And in theory, if an ecosystem evolves, you would have a *choice*. You could load VMware or Microsoft software on your switch — to build a VMware or Microsoft powered cloud. Or, just load the basic standard networking software provided by the network vendor. Your choice.

      Cheers,
      Brad

      • Dmitri Kalintsev says:

        Brad,

        I understand the concept of the “open networking hardware” and all the good that may come with it. What I’m not sure about is what chain of events could cause a multitude of vendors to start making and pouring into the market large quantities of open, compatible networking hardware. Not saying there isn’t a strong driver for that to happen out there – just that I can’t see one at this point.

        Re: monolithic proprietary – while I’m somewhat surprised with myself for saying this, I’m not particularly concerned if a solution that would do all my magic stuff for me turned out to be such. We are extremely early in the evolution of the next wave of networking – I’d say between “Genesis” and “Custom Built” on @swardley’s ubiquitous diagram (see Figure 2 here for one: http://blog.gardeviance.org/2012/08/on-predictions.html) – and it is inevitable that to deliver enough magic to be sufficiently interesting, a solution will have to be tightly integrated. Only after (and if) it has taken root and proven itself with early adopters for whom the new functionality is crucial for success will competing solutions start appearing, and the march toward standardisation, ubiquity, and eventually utility can begin.

        Maybe I’m totally wrong and missing something, but that’s what I think at this point in time.

        Cheers,

        — Dmitri

        • Brad Hedlund says:

          Dmitri,
          It probably starts with one switch vendor entering into a partnership with a software vendor, for the sake of being more competitive against the proprietary alternative.
          The Wintel vs iPad architecture war for data center virtualization begins.

          • Dmitri Kalintsev says:

            If that switch vendor is an established one, it would take some serious guts to disrupt their own cash cow. Yes, it has been proven time and again that it’s the right thing to do, but very few are *actually* capable of doing it. Also much longer refresh cycles in networking hardware (compared to PC/iPad) aren’t really helping.

            I want to believe; I really do. I guess we’ll have to wait and see how it plays out in the real world, though.

  2. Tim Rider says:

    > software in the physical fabric that intimately understands what the attached hosts are attempting to do

    Brad – perhaps you can elaborate on the “intimate” part here... From where I sit – the physical network (or pFabric if you wish) needs to do just one thing – deliver packets from ESX Host-A to ESX Host-B. Throw in DSCP bits for QoS, if you want to be fancy. What else does the ESX host need to convey to the physical network that’s not already possible with standard, broadly deployed technologies?

    • Donny says:

      Tim,

      I agree. The next phase of networking will be a flat bus architecture with the intelligence moved to the nodes. “Move my bits please.” Midokura, Nicira, etc. are looking to move network control to the compute nodes and treat the pFabric as a bus.

      Brad,

      I believe that day may come, but that will be the realization of the paradigm for software defined datacenters. The central hub will receive registration from all assets and coordinate the deployment of resources in accordance with service designs.

    • Brad Hedlund says:

      Tim,
      See my response to Atrey. As I think about this more, I’m thinking more about the Control & Mgmt planes — not so much the Data plane. The ESX host to Switch interface can continue to use the existing well understood data plane protocols we already have today, IP, MPLS, etc.

  3. Atrey Parikh says:

    Brad: If I understood your post correctly, you are suggesting that between the ESX host and the pFabric, the only intelligence we have today is source/dest IP/ports and QoS, which is the lowest level of intelligence software can work with. Let’s just say, as you suggested, we use commodity switching ASICs from Intel or Broadcom and software vendors are able to write something more intelligent (more intelligent from the pFabric’s perspective, of course). Maybe I am not fully grasping the idea, but where do you draw the line in terms of pFabric vs. vFabric capabilities? Does the level of intelligence have to be the same between both fabrics, or are you suggesting the pFabric should be more intelligent than what we have available today? Wouldn’t this always be dependent on the underlying ASIC’s capability? So again, unless vendors write software for each individual switch ASIC, don’t we hit another common denominator?

    Again, as always great idea/thought to chew on. Thank you.

    • Brad Hedlund says:

      Atrey,
      I think what I’m asking here is: Would things be better if the Control & Mgmt planes of the pFabric were driven by software that has a closer coupling to the software running the vFabric?

      Today, we have a situation where the vFabric throws packets over the fence to the pFabric. And that’s OK, for now. But how does it get better from here?

      Let’s take traffic engineering for example. How does one apply and measure a service assurance policy to a given application? Today, you configure the virtual side to throw packets over the fence with a ToS bit. Then you configure the physical side to say — “if I catch a packet with ToS X, do something special.”

      You end up with two configuration/policy domains that can, and probably will, drift away from each other. And how do you account for that?
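      A tiny sketch of that drift, with made-up application names and DSCP values: the vFabric’s marking table and the pFabric’s treatment table live in separate configuration domains, and nothing reconciles them.

      ```python
      # Hypothetical policy tables; all names and DSCP values are illustrative.
      vfabric_marking = {"voice": 46, "backup": 10}  # app -> DSCP set at the vswitch
      pfabric_treatment = {
          46: "priority-queue",                      # matches the "voice" marking
          18: "guaranteed-bandwidth",                # stale: no app marks 18 anymore
      }

      def find_drift(marking, treatment):
          """Report markings the fabric ignores and classes no app still uses."""
          ignored = {app: d for app, d in marking.items() if d not in treatment}
          orphaned = {d: t for d, t in treatment.items() if d not in marking.values()}
          return ignored, orphaned

      print(find_drift(vfabric_marking, pfabric_treatment))
      # ({'backup': 10}, {18: 'guaranteed-bandwidth'})
      ```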

      Cheers,
      Brad

    • Art Fewell says:

      I think one of the challenges here is that for all of us from the first day we started to learn about networking, our vision has been confined by what we perceived and what we were told were the hard limits of reality. But science has always had hard limits in any domain within a given paradigm, and science continually evolves to shatter the paradigms that imposed those hard limits.

      Here is the actual requirement IT is facing today: I would like a single private cloud, a single self service portal, and an automated workload orchestrator that can dynamically place or scale workloads on an infrastructure that is treated as a resource pool. I would like ALL of my enterprise applications to be run in this one common cloud environment with one common operational model … that includes everything from tier 1 apps that aren’t latency sensitive to tier 1 apps that are latency sensitive through tier 4 apps that are/aren’t latency sensitive (and other performance constraints). We need all of this to happen dynamically, and in real-time. Oh, and we would like to target ~80% asset efficiency across everything from RAM and CPU to the network fabric itself. Oh, and while we are doing all of this, we need to do it for less money than ever before with fewer resources than ever before.

      Now, tell me how we do this with a traditional approach. As Brad highlights, we have a very primitive interface today between vFabric and pFabric; good luck meeting these requirements the way things have traditionally been done. And the same is true of all the plans the traditional network players had before SDN came around: they were never going to meet what was needed for the cloud.
