I have a hunch – pure speculation, in fact – that there may be an even more interesting story developing here with the unveiling of networking software startup Nicira Networks: the story of open source networking software minimizing the role of network “protocols” and diminishing the role of standards bodies in building next generation networks.
Nicira Networks’ network virtualization platform (NVP) leverages Open vSwitch (much of which was developed by Nicira engineers) as the data path for whatever edge networking device it’s installed on. In this case, it’s a server with a hypervisor and a bunch of virtual machines. Though, as Nicira points out in its literature, there’s nothing preventing Open vSwitch from being installed on other networking devices and form factors, such as firewalls, load balancers, and physical switches. With Open vSwitch – an open source software project – as the workhorse, one can certainly make the claim that the solution is “open” and not proprietary. Furthermore, the data path of Open vSwitch uses established and well understood open protocols: 802.1Q, GRE, etc.
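To make that concrete, here’s a rough sketch of wiring up an Open vSwitch data path on a hypervisor with the standard ovs-vsctl tool – the bridge name, VM interface names, VLAN tag, and the 192.0.2.x address are placeholders of my own, not anything from Nicira’s literature:

```shell
# Create a bridge to act as the edge data path on this hypervisor.
ovs-vsctl add-br br0

# Attach a VM-facing virtual interface (name is hypothetical).
ovs-vsctl add-port br0 vnet0

# Tag another VM port with an 802.1Q VLAN.
ovs-vsctl add-port br0 vnet1 tag=10

# Build a GRE tunnel to a peer hypervisor (placeholder remote IP).
ovs-vsctl add-port br0 gre0 -- set interface gre0 type=gre \
    options:remote_ip=192.0.2.20
```

Nothing exotic: the data path itself rides on those well understood 802.1Q and GRE encapsulations.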
Now that you have these Open vSwitches everywhere, you need something to centrally configure and control their data path with a forwarding policy. This is where Nicira’s clustered controller comes in – or perhaps a controller provided by some other vendor. The Nicira central controller will control all of the edge Open vSwitches in an elegant way (perhaps a gross understatement). That’s where they’ll make money: selling their controller software and all the professional services you might need to get things working right in your environment.
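Pointing an edge Open vSwitch at a central controller is, mechanically, a one-liner – a hedged sketch, with the bridge name and controller address being placeholders:

```shell
# Tell the bridge to take its forwarding policy from a remote
# controller at a placeholder address and port.
ovs-vsctl set-controller br0 tcp:192.0.2.10:6633

# If the controller cluster becomes unreachable, fall back to
# behaving like a normal learning switch instead of going dark.
ovs-vsctl set-fail-mode br0 standalone
```

The hard part (and the product) is what the controller cluster does with that connection, not the plumbing itself.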
This is where things get interesting. Most people think you would need an open “protocol” for the controller to interface with the edge Open vSwitch. And absolutely, you should have that – that way any vendor can supply the controller while using the same Open vSwitches. Right? And if you take a cursory look at the Open vSwitch documentation, as expected you’ll see OpenFlow as the protocol for this purpose.
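For flavor, here’s what programming the data path over that OpenFlow channel can look like from the command line with ovs-ofctl – the bridge name and port numbers are illustrative, and a real controller would push these flows itself rather than a human typing them:

```shell
# Install a flow: anything arriving on port 1 is forwarded out port 2.
ovs-ofctl add-flow br0 "in_port=1,actions=output:2"

# At the lowest priority, drop anything that matches no other flow.
ovs-ofctl add-flow br0 "priority=0,actions=drop"

# Inspect the flow table that was programmed.
ovs-ofctl dump-flows br0
```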
When you use a protocol, you obviously need to follow the rules of the protocol otherwise you’re not adhering to the standard, and people tend to get really upset about that kind of stuff. So, somebody has to set the rules for others to follow. Which usually involves getting a group of people with inflated egos together to agree on something, be it vendor-led standards bodies such as the IETF, or in the case of OpenFlow a customer-led “foundation” such as the ONF. All of this takes time to get right. Lots of time. Meanwhile, there’s a market out there willing to pay for a solution now.
Think about this for a second – Why do we need to use an open “protocol” for a controller to program a switch? “Well, Brad, that’s obvious, because otherwise the solution would be deemed proprietary, heaven forbid!” True, perhaps, if you’re thinking in terms of the usual paradigm where Vendor-A’s box is running Vendor-A software, connected to Vendor-B’s box running Vendor-B software. This is obviously where we need protocols. But what if Vendor-A’s box was running open source software, and Vendor-B’s box was running the same open source software? Or at least, the communication path between Vendor-A and Vendor-B is through an open source software module. Do you need a “protocol” then?
With that in mind, take a closer look at the Open vSwitch documentation, dig deep, and you’ll find that there are means other than the OpenFlow protocol for controlling the configuration of Open vSwitch.
Take, for example, the ovs-vsctl “component” of Open vSwitch. This component can be used to remotely configure the Open vSwitch at a granular level – editing tables and records in the vswitch configuration database. It’s one piece of open software talking to another piece of open software over a standard TCP connection – you can’t call that proprietary. And guess what: you don’t need a dinosaur standards body to decide what goes in the code. Is ovs-vsctl a “protocol”? No.
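A hedged sketch of that remote path (the switch address is a placeholder, and 6640 is the conventional OVSDB port): the switch exposes its configuration database over plain TCP, and ovs-vsctl on another machine edits its tables directly – no OpenFlow involved:

```shell
# On the switch: expose the configuration database over TCP.
ovs-vsctl set-manager ptcp:6640

# From a remote machine: point ovs-vsctl at that database
# (placeholder switch address) and read or edit records directly.
ovs-vsctl --db=tcp:192.0.2.20:6640 list Bridge
ovs-vsctl --db=tcp:192.0.2.20:6640 add-port br0 vnet2
```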
Nicira may or may not be using the OpenFlow “protocol” to control the Open vSwitch in their current deployments. There’s enough evidence to suggest they had that choice to make. Perhaps the OpenFlow 1.0 spec was just too limited for what their customers needed at that time. If so, what’s wrong with coding around the limitations in an open software platform?
The point here isn’t to blow a standards-dodger whistle, but rather to observe that, perhaps, a significant shift is underway in the relevance and role of “protocols” in building next generation virtual data center networks. Yes, we will always need protocols to define the underlying link-level and data-path properties of the physical network – and those haven’t changed much and are pretty well understood today.
However, with the possibility of open source software facilitating the data path not only in hypervisor virtual switches but in many other network devices, what then will be the role of the “protocol”? And what role will a standards body have when the pace of software development far exceeds that of protocol standardization?
Great example: Take a look at the OpenStack Quantum project.
Disclaimer: The author is an employee of Dell, Inc. However, the views and opinions expressed by the author do not necessarily represent those of Dell, Inc. The author is not an official media spokesperson for Dell, Inc.