Reporting from the front lines of network transformation

Filed in Career, Network Virtualization, SDN on May 14, 2013

It’s been a while :-)

So what gives? Well, I’ve been spending most of my time on the front lines: meeting with customers, breaking the ice, laying out the fundamental case for Network Virtualization, face to face, heart to heart. Just a whiteboard, rolled-up sleeves, and a room full of intelligent IT conversationalists.

This is, actually, my favorite thing to do.

I’m not a real big fan of the formal presentation, the pomp and pageantry of tech conferences, or the endless pontificating from atop some ivory-tower “Office of the CTO” … “customers want this, customers want that, blah, blah, blah”. Not to minimize that stuff. It’s important too, and there’s always a time and place for it.

But there’s nothing better than having a raw, unscripted conversation: laying out the core concepts of a transformative networking technology, seeing where the dialogue takes you, and learning a few new things with each discussion.

And there’s never a shortage of things to talk about when the topic is Network Virtualization.

When you look at what it takes to deploy an application — all the VMs and network services — you’ll find that network provisioning is a tremendous drag, up and down the stack: the VLANs, firewalls, load balancers, routing (VRF), ACLs, QoS, IP addressing, DNS, monitoring, NAT, VPN, the list goes on.  Now try to pick that application up (network services and all) and move it to another data center … <pound head here>

The virtual machines live in a 21st-century world of software automation, common hardware, APIs, mobility, and rapid provisioning. Provisioning the network, on the other hand, is still stuck in a 1990s era of humans, keyboards, CLIs, specialized hardware, and chokepoints.  Despite the best efforts of server virtualization, the application is still not fully decoupled from hardware.

When you think about it … the problem with networking is NOT packet forwarding.  That’s one thing the networking industry has done really, really well.  We have these wonderful line-rate 10/40/100G switches running extremely well-engineered and robust distributed routing protocols such as OSPF, BGP, and IS-IS. We don’t need to re-invent that.

The problem with networking is the manual deployment of networking services and policy.  All the stuff you need to configure in network hardware to get a new application online (or moved to another data center).

Contrary to the current SDN hype, we don’t need to decouple network hardware control planes from data planes.  Rather, we need to decouple network policy from packet forwarding. Network Virtualization.

Networking needs to evolve.  Everybody seems to agree.

How do you do that?  Decouple, Distribute, Automate.

Decouple the application from networking hardware (finally!) — the entire L2-L7 stack.  Move the workload’s network closer to the workload, into the software layer at the edge.

Distribute networking services at the software edge.  Distributed in-kernel L3 routing.  Distributed in-kernel stateful firewall.  No more chokepoints.  Move the services to the workload.  Stop moving workloads to the services. End the traffic steering madness.
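
To make that idea concrete, here is a minimal sketch, with names and structures that are purely illustrative (not any specific product’s API): one logical firewall policy is defined once, and it is rendered locally at whichever hypervisor edge happens to host each workload, instead of hairpinning traffic through a centralized appliance.

```python
# Hypothetical sketch: a single logical firewall policy is programmed at the
# kernel-level edge of every hypervisor hosting a workload, instead of
# steering traffic through a centralized firewall chokepoint.
from dataclasses import dataclass

@dataclass
class Rule:
    src: str      # source group or CIDR
    dst: str      # destination group or CIDR
    port: int     # 0 = any
    action: str   # "allow" or "deny"

# One logical policy for the application, defined once.
app_policy = [
    Rule(src="any",      dst="web-tier", port=443,  action="allow"),
    Rule(src="web-tier", dst="app-tier", port=8443, action="allow"),
    Rule(src="any",      dst="app-tier", port=0,    action="deny"),
]

def program_edge(hypervisor: str, vm: str, policy: list) -> None:
    """Render the policy at the hypervisor currently hosting the VM."""
    for r in policy:
        # In a real system this would be an agent or API call into the
        # hypervisor's in-kernel filter; here we just print what would happen.
        print(f"{hypervisor}: {r.action} {r.src} -> {r.dst}:{r.port} ({vm})")

# Wherever a workload lands (or moves to), its policy follows it to that edge.
placements = {"web-vm-01": "hypervisor-a", "app-vm-01": "hypervisor-b"}
for vm, host in placements.items():
    program_edge(host, vm, app_policy)
```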

Automate the complete L2-L7 virtual network deployment in lockstep with the compute.  The cloud provisioning system should be deploying the entire application stack — the VMs and their complete virtual network.  Throw some API messages at the server virtualization software. Throw some API messages at the network virtualization software.  Validate and snapshot the whole thing.
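
As a rough illustration of what those API messages could look like, here is a hedged sketch; the endpoints, payloads, and field names are hypothetical placeholders, not the API of any particular cloud or network virtualization product.

```python
# Hypothetical sketch: provision the virtual network and the VMs together,
# then validate/snapshot the whole application as one unit.
import requests

COMPUTE_API = "https://compute.example.local/api"   # server virtualization layer (assumed)
NETWORK_API = "https://network.example.local/api"   # network virtualization layer (assumed)

def deploy_application(app: dict) -> None:
    # 1. Create the virtual network first: logical switch, logical router,
    #    firewall rules, load balancer VIP -- the full L2-L7 stack in software.
    net = requests.post(f"{NETWORK_API}/virtual-networks", json=app["network"]).json()

    # 2. Provision the VMs and attach their vNICs to the logical network above.
    for vm in app["vms"]:
        requests.post(f"{COMPUTE_API}/vms", json={**vm, "network_id": net["id"]})

    # 3. Validate and snapshot the whole thing so it can be redeployed, or
    #    moved to another data center, as a single unit.
    requests.post(f"{NETWORK_API}/virtual-networks/{net['id']}/snapshot")

deploy_application({
    "network": {
        "name": "app1-net",
        "subnets": ["10.1.1.0/24"],
        "firewall": [{"allow": "tcp/443"}],
        "load_balancer": {"vip": "10.1.1.10"},
    },
    "vms": [
        {"name": "web-01", "cpus": 2, "memory_gb": 4},
        {"name": "web-02", "cpus": 2, "memory_gb": 4},
    ],
})
```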

Now we’re talk’n :)

Cheers,
Brad

About the Author

Brad Hedlund (CCIE Emeritus #5530) is an Engineering Architect in the CTO office of VMware’s Networking and Security Business Unit (NSBU). Brad’s background in data center networking begins in the mid-1990s with a variety of experience in roles such as IT customer, value added reseller, and vendor, including Cisco and Dell. Brad also writes at the VMware corporate networking virtualization blog at blogs.vmware.com/networkvirtualization

Comments (9)


  1. Marcus Dandrea says:

    Brad,

    As always, this is a great article. Many of the conversations I have had with customers have been more software-focused as it pertains to the network. Customers are not looking to reduce headcount, if you ask me; they are just looking at how to run things more efficiently.

    I think network virtualization is just the start. The application layers will gain more intelligence as well, and we will see a higher level of intelligence at the compute node, where the network ports really start.

    100GbE will be how we connect data centers in a box, and the total number of physical ports will be reduced significantly. A marriage will be formed between the network virtualization and application stacks, and the intelligence will shift from the physical network box as we see it today into the back of the compute node.

    Application intelligence may come in the form of a PCIe card, and the real optimization may start at that level and extend into the virtual network layer. The days of custom network ASICs built to gain advantages for 12-18 months at the expense of the customer will no longer be required, as I’m sure standard industry ASICs will suffice.

    Now, the network company that has the most density for 100GbE should win in the end, but only time will tell.

  2. Art Fewell says:

    Great article, Brad! Glad to see you back in action on the blog!

  3. Matt Martin says:

    Brad,

    Great post. I am excited to see where SDN and other automation technologies will take us in the months to come. There is a lot of buzz in the air!

    How do you feel the current hardware OEMs of network gear are adapting to this change? Dell, Cisco, HP, and the like.

  4. Donny Parrott says:

    Interesting premise, but I wonder what the long-term impacts will be. For orchestration and automation, SDN will provide a significant improvement in time-to-service.

    However, similar to the compute world, the node will become irrelevant through abstraction. The physical network will become a low-cost transport bus. High-value services (IPS, FW, NAT, routing, …) will be executed and consumed on the compute layers while networking manages transport.

    This paradigm will bring the datacenter to grid computing – any node, any location, any service.

    How is an interesting question. Xsigo made a very good run at this, and UCS is similar, in that logical networks and physical infrastructure are loosely coupled. This will enable a flat physical network with numerous logical layers, where workload placement will drive network utilization. If multiple VMs are collocated, the network traffic never needs to leave the PCI bus.

    Interestingly, I helped test a peer networking design in college where there were no switches and all nodes were directly cabled together. Each node had direct access to its peers. The performance and capabilities were outstanding, but it failed to scale (obviously). I believe a similar model is coming with a switched bus into which services can be injected. Plug a service (storage, DPI, …) into the bus, and anyone connected can access and consume those services.

    • Naveen says:

      Donny, that’s the key … build entire DC networks entirely with servers/software and ZERO switch ports. Only then will enterprises look at SDN. If you say even a single physical port is required, in comes Cisco and out goes SDN.

      Low-cost switches have been around for decades, but enterprises still went with Cisco.

  5. Hey, no fair, that’s a post of questions :) So I may disagree with a statement like “we don’t need to decouple.” I think it’s too early to put a stake in the ground and say that isn’t doable, assuming you are saying it is hardware constraints and not a lack of technical reasons not to decouple.

    I personally think the path Nicira/VMware is going down is a proper path. Decouple at the edge, get that working, and see what’s next. Eventual consistency makes a lot of sense inter-AS/Internet, etc. Where I think it makes less sense is intra-AS. Policy application without the proverbial control-plane bump in the wire leads to the hairpin/chokepoints we have today in order to distribute policy. Too much money to be made from horizontalization will be the ultimate driver, I reckon.

    If features similar to the DC software edge can’t be replicated in the hardware edge over the next few years, I will start sticking PCIe servers in wiring closets :)

    Respect,
    -Brent

  6. Antonio de sousa says:

    Just a simple question: what is your definition of traffic steering? I’m assuming you mean all the load-balancing and multipathing techniques in common practice?

  7. Ethan Banks says:

    Nice to hear someone make the point that distributed routing protocols aren’t broken. That side of the story isn’t getting much love these days, but it’s at the root of the question I hear engineers asking about SDN the most: “Why do I need SDN again?” There’s a bigger conversation to be had – decoupling of control/data planes is not a foregone conclusion.
