Today I am excited to write that a page has turned, starting a new chapter in my career and life. I’ve concluded an excellent year of service with Dell as “Networking Enterprise Technologist,” where we grew Dell networking revenue by 40% year over year. We launched cool networking software products like Dell Fabric Manager (fabric automation) and Active System Manager (converged infrastructure), and we launched the industry’s first 40/10GE converged blade server switches — the MXL and IO Aggregator. I believe Dell is on a path to becoming a serious contender in data center fabrics — something you or I would never have imagined just a few years ago. Along that path Dell has some tough decisions ahead, but I think they can make it happen.
In my time at Dell, I learned to see the data center network from a different perspective. I observed this space from a bottom-up point of view, looking at the specific needs of big data and private cloud clusters of compute and storage — a contrast to the top-down, monolithic network view I’d held for most of my career, starting at the core switches and trickling down from there to access-layer protocols and port counts. Learning to see things from a different point of view expands your horizons and opens your mind.
Now, on to the next chapter. I couldn’t be more thrilled to be joining the Networking business unit at VMware (Nicira) as “Engineering Architect, Virtual Networks,” reporting to Martin Casado (need I say more?). Other members of the team include former Cisco Fellow and IP/MPLS guru Bruce Davie, and Teemu Koponen (the coding genius behind NVP), who recently won the 2012 SIGCOMM Rising Star award. Surround yourself with the right people and the rest will take care of itself.
Imagine an infrastructure where you can essentially draw and deploy your network topology, including the workloads, L2 segments, load balancers, firewalls, routers, gateways, etc — in any way, in any combination, all without touching any hardware configurations. And all on common hardware platforms in a cluster of fabric and compute. That’s a comprehensive L2-L4 network abstraction made possible by networking software built like a distributed system. Now make a template of that topology for rapid re-provisioning, disaster recovery, auditing and compliance, application portability, etc. That’s a virtual network.
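To make the template idea concrete, here’s a toy sketch in Python — entirely hypothetical names, not any real VMware/Nicira API — of declaring a virtual topology as data and then “deploying” it by stamping out a named instance, with no hardware configuration in sight:

```python
# Hypothetical sketch: a virtual network topology declared as plain data.
# Segments are strings; routers/firewalls/load balancers are (name, attachment) pairs.
TOPOLOGY_TEMPLATE = {
    "segments": ["web-tier", "app-tier", "db-tier"],
    "routers": [("edge-router", ["web-tier", "app-tier", "db-tier"])],
    "firewalls": [("fw1", "web-tier")],
    "load_balancers": [("lb1", "web-tier")],
}

def deploy(template, instance_name):
    """Stamp out a concrete topology from the template by prefixing every
    element with the instance name -- the same template serves production,
    disaster recovery, test copies, and so on."""
    return {
        kind: [
            f"{instance_name}/{item}" if isinstance(item, str)
            else (f"{instance_name}/{item[0]}", item[1])
            for item in items
        ]
        for kind, items in template.items()
    }

prod = deploy(TOPOLOGY_TEMPLATE, "prod")
dr = deploy(TOPOLOGY_TEMPLATE, "dr-site")  # rapid re-provisioning at a DR site
```

The point of the sketch is just that the topology lives in software as a reusable artifact; re-provisioning is a function call, not a hardware change.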
This is not your Dad’s VLANs. Not your Uncle’s VRF. And not your Grandpa’s router CLI.
When the time comes to make a serious career change, you have to follow your passion and let your intuition and core beliefs guide you. That can be hard to do sometimes in an environment thick with hype, money, and start-up allure as we have right now in the networking industry. It shouldn’t be about picking a winner. It should be about finding something you really believe in, and making it a winner.
I’m a believer in distributed systems. Look at how distributed systems radically changed the storage and data analytics industries (e.g., Hadoop). Petabytes of data can now be analyzed for business value in a matter of seconds — all on common hardware platforms in a cluster of fabric and compute. Can distributed systems bring the same kind of transformation to networking? I believe so.
I’m a believer in the intelligent edge and packet transport core (fabric). This is a proven architecture for service-oriented networks. Look at the MPLS architecture of any service provider and this is what you see: the customer connects to an ingress “Provider Edge” box where policies are applied, and traffic is then placed on a packet transport label-switched path through the “Core” to the egress edge. It doesn’t make sense to re-inspect the same bits of a packet at each hop in the network. The same pattern can be found in chassis switch architecture — intelligent edge linecards interconnected by packet transport fabric modules.
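A minimal sketch of that division of labor (invented tables, not real MPLS machinery): the ingress edge does the expensive policy lookup exactly once and attaches a label; every core hop forwards on the label alone, never re-inspecting the packet.

```python
# Hypothetical edge policy table: (tenant, destination) -> ingress label.
EDGE_POLICY = {
    ("tenant-a", "10.0.0.5"): 100,
    ("tenant-b", "10.0.0.5"): 200,  # same destination, different path per tenant
}

# Hypothetical per-hop label forwarding table: in_label -> (next_hop, out_label).
CORE_LFIB = {
    100: ("core-2", 101),
    101: ("egress-pe", 102),
    200: ("core-3", 201),
}

def ingress(tenant, dst):
    """The intelligent edge: classify the packet once, at the ingress PE."""
    return EDGE_POLICY[(tenant, dst)]

def core_hop(in_label):
    """The dumb, fast core: a single label lookup per hop, no policy."""
    next_hop, out_label = CORE_LFIB[in_label]
    return next_hop, out_label
```

All the smarts live in `ingress()`; `core_hop()` is deliberately trivial, which is exactly why the core can be fast, simple transport.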
I also believe that x86 machines and the hypervisor vswitch are the ideal intelligent edge devices in our data center virtual network. The hypervisor vswitch has access to far more context than a typical top-of-rack switch. For example, it knows which VMs belong to the same application or tenant, and can be programmed accordingly. I also consider the first interface between the “outside world” and our virtual network to be an intelligent edge — the North/South edge. Which, again, is ideally x86 machines running the same L2-L4 vswitch, programmed from the same context as the workload edge. And in the middle of it all is a packet transport fabric — the physical network.
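Here’s a toy illustration (hypothetical names, not the actual vswitch API) of why that context matters: the hypervisor already knows which tenant and application each VM belongs to, so the edge can express a policy like “only same-tenant VMs may talk” directly, something a context-free top-of-rack switch seeing only MACs and ports cannot do.

```python
# Hypothetical context the hypervisor already has for each attached VM.
VM_CONTEXT = {
    "vm-web-1": {"tenant": "acme",   "app": "web"},
    "vm-db-1":  {"tenant": "acme",   "app": "db"},
    "vm-web-9": {"tenant": "globex", "app": "web"},
}

def vswitch_allows(src_vm, dst_vm):
    """A per-tenant isolation rule expressed at the vswitch edge:
    allow traffic only between VMs of the same tenant. The rule is
    written in terms of VM identity, not MAC addresses or ports."""
    return VM_CONTEXT[src_vm]["tenant"] == VM_CONTEXT[dst_vm]["tenant"]
```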
With the hypervisor vswitch playing such an important role in our virtual network, the question becomes: where is the ideal place to program the networking services and topology for our virtual network? Perhaps the same software managing the deployment and provisioning of the workloads, the VMs? Or something closely coupled to it? I believe so. The rationale is that you want your application architecture defined in one tightly coupled policy engine, rather than duct-taping your VMs in one system to your virtual network in another (that’s a loosely coupled kludge). Besides, one workflow is better than two, right?
And finally, I believe in a solution that works on standard, commonly available hardware — one where the virtual and physical networks can and should be independently interchangeable and replaceable. This of course leaves all of the leverage and control with the customer, not the vendors, and cultivates an ecosystem along the way.
And that’s why I couldn’t be more jazzed to embark on this epic adventure with VMware Networking. I look forward to meeting you along the way!