I’m chewing on a few thoughts today that I wanted to jot down here and marinate on for a while. I’ll use VMware as the straw man for the sake of discussion, simply because – like it or not – they are the household name in virtualization. Disclaimer: The illustrations here are purely of my own imagination and do not reflect anything more than that.

Does it make sense for the software that controls the host machine to also control the fabric that interconnects those hosts?

Note that the host machine software already has some control of the “fabric” – but not all. What am I talking about? The virtual edge. The hypervisor vswitch – a network device (yes, it’s a network device) providing a network connection for the virtual machine (what we care about).
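To make the “yes, it’s a network device” point concrete: strip away the VLANs, offloads, and distributed state, and the core behavior of any vswitch is plain old MAC learning and forwarding. Here’s a deliberately minimal Python sketch of just that core – illustrative only, not how any real vswitch is implemented:

```python
class VSwitch:
    """A toy learning switch: the essence of what a hypervisor vswitch does."""

    def __init__(self, ports):
        self.ports = ports      # e.g. VM virtual NICs plus the physical uplink
        self.mac_table = {}     # learned mapping: source MAC -> port

    def receive(self, frame, in_port):
        # Learn which port the source MAC lives behind.
        self.mac_table[frame["src"]] = in_port
        # Known destination: forward to one port. Unknown: flood the rest.
        out = self.mac_table.get(frame["dst"])
        return [out] if out is not None else [p for p in self.ports if p != in_port]


sw = VSwitch(ports=["vm1", "vm2", "uplink"])
print(sw.receive({"src": "aa:aa", "dst": "bb:bb"}, "vm1"))  # unknown dst: ['vm2', 'uplink']
print(sw.receive({"src": "bb:bb", "dst": "aa:aa"}, "vm2"))  # learned dst: ['vm1']
```

Learning, forwarding, flooding – the same job a physical access switch does, just done in software at the virtual edge.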

This brings us to the larger question: What is the Fabric? Most people think of “Fabric” as a specially constructed network of physical switches – with all of the emphasis placed on how we should connect these physical switches together, how they should be configured, and so on.

Meanwhile, there’s another fabric to contend with – the virtual fabric – constructed by the host software with virtual switches. This is the fabric touching the virtual machines at the access edge.

Bifurcated Fabric

We already know that VMware provides software to load on your favorite server hardware, and cool stuff happens, right? Virtualization, multi-tenancy, intelligent resource allocation, QoS, push-button automation, etc, etc.

VMware is a software company. They don’t sell servers. The model is and probably always will be: “You bring the hardware, we’ll bring the software.” At least, that’s been the model for the host machines in our virtual data center.

The network is a different story though. Here, the network switch vendor says: “I’ll bring both the hardware and the software – it’s a package deal.”

There has always been this proverbial line in the sand between host machines and network switches. “You run your software there. I’ll run my software here – and we’ll all play nice together.” Hence we never know what kind of thing we’ll need to play with on the other side of the line. So we need to establish some dumbed-down and very basic rules of the game that just about anybody can follow.

In our case, those rules would be things such as: “Here’s how the host machine tells the switch where some data should be delivered, and what SLA it wants.” Hint: destination IP address and ToS bits.
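In socket terms, that contract is about as rich as this (a minimal Linux-flavored Python sketch; the address and port are placeholders):

```python
import socket

# The host's entire "interface" to the fabric: a destination address and a
# marked ToS byte. Everything else is the network's problem.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

# DSCP EF (46) lives in the top 6 bits of the ToS byte: 46 << 2 == 0xB8.
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, 0xB8)

# "Deliver this to 192.0.2.10, and please take good care of it."
sock.sendto(b"latency-sensitive payload", ("192.0.2.10", 5000))
```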

What we end up with is a very basic, lowest-common-denominator interface between the host and the physical network – and as a consequence, this same basic interface applies between the virtual and physical fabrics. Something just good enough to say: “Here’s where I want this data to go – can you please take good care of it for me?”

Instead, what if the rules changed to: “Here’s how you load and run software on this physical switch”? Just like we do today with standard x86 servers.

Contiguous Fabric

Now you potentially have software in the physical fabric that intimately understands what the attached hosts are attempting to do. As a result, we can play with a more sophisticated set of interfaces between the host and the network – where the meaning of the information carried in those interfaces is defined by this fabric, not by the IEEE or IETF. This doesn’t necessarily mean switches with new special proprietary ASICs, although that’s possible. You work with whatever your switch ASIC is capable of.
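What might a more sophisticated host-to-fabric interface actually carry? Here’s a purely imaginary sketch – every field below is my own invention, not any vendor’s API – of the kind of VM-level context a host could hand to fabric-resident software:

```python
from dataclasses import dataclass

@dataclass
class FlowIntent:
    """A hypothetical message from the host to software running in the fabric."""
    vm_id: str               # which VM this flow belongs to
    tenant: str              # multi-tenancy context the fabric can enforce
    dst_vm_id: str           # a logical destination, not just an IP address
    bandwidth_mbps: int      # an SLA the fabric can engineer paths around
    latency_budget_ms: float
    migration_hint: bool     # e.g. a vMotion the fabric should brace for


intent = FlowIntent("vm-42", "tenant-a", "vm-97", 500, 2.0, False)
# Fabric-resident software could map this onto whatever the underlying
# hardware actually supports: MPLS TE paths, DCB priority groups, and so on.
print(intent)
```

The specific fields don’t matter; the point is that software in the fabric can act on VM-level context instead of reverse-engineering intent from packet headers – constrained, of course, by whatever the silicon can do.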

For example: Software vendors already work with the well-known capabilities of Intel x86 commodity silicon. Similarly, software vendors could also work with the well-known capabilities of commodity switching ASICs (Intel and Broadcom). Things like DCB and MPLS.
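To ground “well-known capabilities” in something concrete: the MPLS label stack entry, for example, is a fixed 4-byte format (RFC 3032) that software can encode directly and hand to any ASIC that switches labels. A quick sketch:

```python
import struct

def mpls_shim(label: int, tc: int, bottom: bool, ttl: int) -> bytes:
    """Encode one MPLS label stack entry (RFC 3032):
    20-bit label | 3-bit traffic class | 1-bit bottom-of-stack | 8-bit TTL."""
    word = (label << 12) | (tc << 9) | (int(bottom) << 8) | ttl
    return struct.pack("!I", word)

# Label 16, traffic class 5, bottom of stack, TTL 64.
print(mpls_shim(16, 5, True, 64).hex())  # -> '00010b40'
```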

The end result perhaps being a more capable and contiguous Fabric. A better blending of the physical and the virtual. Something that delivers better service assurance, smarter traffic engineering, and clearer visibility into the ever-changing correlation between the virtual and physical topology.

Further reading: Fabric: A Retrospective on Evolving SDN