What is Network Virtualization?

Data centers exist for the sole purpose of deploying applications. Applications that automate business processes, serve customers better, enter new markets … you get the idea. It’s all about the Apps.

Server Virtualization

Applications are composed of both Compute and Network resources. It doesn’t make sense to have one without the other; theirs is a symbiotic relationship. And for the last decade, one half of that relationship (Compute) has been light years ahead of the other (Network). Compute and Network form a symbiotic relationship that lacks any symmetry.

For example, it’s possible to deploy the Compute of an application (virtual servers) within seconds, through powerful automation enabled by software on general purpose hardware — Server Virtualization. The virtual network, on the other hand, is still provisioned manually, on specialized hardware, with keyboards and CLIs. Meanwhile, the application deployment drags on for days, weeks, or longer, until the network is finally ready.

Server virtualization also enabled Compute with awesomeness like mobility, snapshots, and push-button disaster recovery — to name a few. The network, on the other hand, doesn’t have the same capabilities. There is no mobility – the network configuration is anchored to hardware. Snapshots of the application’s network architecture are next to impossible because the network configuration state is spread across a multitude of disparate network devices (physical and virtual). And recreating the application’s network architecture at a second data center (disaster recovery) is a house of cards (at best), if not impossible, without the same automation, untethered mobility, and snapshots. The Compute portion of the application, with all of its virtualization capabilities, is held back from reaching its full potential, anchored to the non-virtualized network.

Network Virtualization is a solution, with products, that brings symmetry to the symbiotic relationship of Compute & Network. With network virtualization, the application’s virtual Network is provisioned in lock step with its virtual Compute, with the same level of speed, automation, and mobility. With Server & Network Virtualization working in symmetry, compute and network are deployed together – rather than one waiting for the other. Applications are fully decoupled from hardware, fully automated in their provisioning, and truly mobile.

What is Virtualization?

Virtualization is the basic act of decoupling an infrastructure service from the physical assets on which that service operates. The service we want to consume (such as Compute, or Network) is not described on, identified by, or strictly associated with any physical asset. Instead, the service is described in a data structure and exists entirely in a software abstraction layer that can reproduce the service on any physical resource running the virtualization software. The lifecycle, identity, location, and configuration attributes of the service exist in software, with API interfaces, thereby unlocking the full potential of automated provisioning.

The canonical example is Server Virtualization, where the familiar attributes of a physical server are decoupled and reproduced in virtualization software (hypervisor) as vCPU, vRAM, vNIC, etc., and assembled in any arbitrary combination producing a unique virtual server in seconds.
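As a rough illustration of the idea that a virtualized service is just a data structure, here is a minimal Python sketch. The class and attribute names are invented for illustration; they are not any hypervisor’s actual API:

```python
from dataclasses import dataclass, field

@dataclass
class VirtualServer:
    """A virtual server exists entirely as a data structure --
    decoupled from any particular physical host."""
    name: str
    vcpus: int
    vram_gb: int
    vnics: list = field(default_factory=list)

# Assemble an arbitrary combination of virtual resources in seconds.
web01 = VirtualServer(name="web01", vcpus=2, vram_gb=8, vnics=["vnic0"])

# Because the definition is pure data, it can be created, cloned, or
# reproduced on any host running the virtualization software.
web02 = VirtualServer(name="web02", vcpus=web01.vcpus,
                      vram_gb=web01.vram_gb, vnics=["vnic0"])
```

Nothing in the definition identifies a physical machine — which is exactly what makes the service automatable and mobile.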

The same type of decoupling and automation enabled by server virtualization is made available to the virtual network with Network Virtualization.

What is the Network?

Virtual machines supporting the application often require network connectivity (switching and routing) to other virtual machines and to the outside world (WAN/Internet), with security and load balancing. The first network device a virtual machine attaches to is a software virtual switch on the hypervisor. The “network” we want to virtualize is the complete set of L2-L7 services viewed by the virtual machines, and all of the network configuration state necessary to deploy the application’s network architecture (n-tier, etc.). The network relevant to the virtual machines is sometimes more specifically referred to as the virtual network.

Virtual servers have been fully decoupled from physical servers by server virtualization. The virtual network, on the other hand, has not been fully decoupled from the physical network. Because of this, the configuration necessary to provision an application’s virtual network must be carefully engineered across many physical and virtual switches, and L4-L7 service appliances. Despite the best efforts of server virtualization, the *application* is still coupled to hardware.

With Network Virtualization, the goal is to take all of the network services, features, and configuration necessary to provision the application’s virtual network (VLANs, VRFs, Firewall rules, Load Balancer pools & VIPs, IPAM, Routing, isolation, multi-tenancy, etc.) – take all of those features, decouple them from the physical network, and move them into a virtualization software layer for the express purpose of automation.

With the virtual network fully decoupled, the physical network configuration is simplified to providing a packet forwarding service from one hypervisor to the next. The implementation details of physical packet forwarding are separated from, and not complicated by, the virtual network. Both the virtual and physical network can evolve independently. The virtual network features and capabilities evolve at software release cycle speeds (months). The physical network packet forwarding evolves at hardware release cycle speeds (years).

Packet forwarding is not the point of friction in provisioning applications. Current generation physical switches do this quite well with dense line-rate 10/40/100G silicon and standard IP protocols (OSPF, BGP). Packet forwarding is not the problem. The problem addressed by network virtualization is the manual deployment of the network policy, features, and services constructing the network architecture viewed by the application’s compute resources (virtual machines).

Network Virtualization

Network Virtualization reproduces the L2-L7 network services necessary to deploy the application’s virtual network at the same software virtualization layer hosting the application’s virtual machines – the hypervisor kernel and its programmable virtual switch. Similar to how server virtualization reproduces vCPU, vRAM, and vNIC – Network Virtualization software reproduces Logical switches, Logical routers (L2-L3), Logical Load Balancers, Logical Firewalls (L4-L7), and more, assembled in any arbitrary topology, thereby presenting the virtual compute with a complete L2-L7 virtual network topology.

All of the feature configuration necessary to provision the application’s virtual network can now be provisioned at the software virtual switch layer through APIs. No CLI configuration per application is necessary in the physical network. The physical network provides the common packet forwarding substrate. The programmable software virtual switch layer provides the complete virtual network feature set for each application, with isolation and multi-tenancy.
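To make the contrast with per-device CLI concrete, here is a hedged sketch of what API-driven provisioning of an application’s virtual network might look like. The class and method names are hypothetical, not any vendor’s actual API:

```python
# Hypothetical sketch: provisioning a virtual network through an API
# instead of per-device CLI. All names are illustrative only.

class NetworkVirtualizationAPI:
    def __init__(self):
        # The entire virtual network exists as a data structure.
        self.topology = {"switches": {}, "routers": {}, "firewall_rules": []}

    def create_logical_switch(self, name, subnet):
        self.topology["switches"][name] = {"subnet": subnet, "ports": []}

    def create_logical_router(self, name, attached_switches):
        self.topology["routers"][name] = {"interfaces": attached_switches}

    def add_firewall_rule(self, src, dst, port, action):
        self.topology["firewall_rules"].append(
            {"src": src, "dst": dst, "port": port, "action": action})

# Provision a 3-tier application's virtual network in one pass -- no
# CLI configuration in the physical network.
api = NetworkVirtualizationAPI()
for tier, subnet in [("web", "10.0.1.0/24"),
                     ("app", "10.0.2.0/24"),
                     ("db", "10.0.3.0/24")]:
    api.create_logical_switch(tier, subnet)
api.create_logical_router("app-router", ["web", "app", "db"])
api.add_firewall_rule("web", "app", 8080, "allow")
```

The point is the shape of the workflow: the entire n-tier network architecture is expressed as a handful of API calls that an automation tool can replay in seconds, on top of any physical fabric.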

Server & Network Virtualization

With Network Virtualization, the virtual network is provisioned entirely in software, by software, with APIs, with the same speed and agility as, and in lock step with, server virtualization. The same software tools already provisioning the application’s virtual machines can simultaneously provision both compute and network together (with templates), and subsequently validate the complete application architecture — compute and network together.

Next, rather than just taking snapshots of virtual machines, take a snapshot of the complete application architecture (compute and network) and ship a copy off to a disaster recovery site – on standby for push-button recovery. The application’s network is finally just as mobile, and just as fast, as the compute.

Network Virtualization makes sense because of Server Virtualization. Compute and Network, a symbiotic relationship deployed in synchronization, with symmetry.

It’s a no-brainer.



  1. says

Hey Brad, thanks for the clear, concise, and insightful article. I’d love to see you tie this ‘what’ and ‘why’ into some ‘how’, and I wonder whether things would still look so simple. I’m not too keen on the idea of overlays right now; it seems like twice the trouble.

    I know it wasn’t the point of the article and I hate to appear like a scare-monger but widespread use of virtualisation in this fashion surely points to some serious implications for the industry’s employment market too?

    • says

      I would argue that network virtualization will be good for network engineers. Why? Because it will get them back into the business of being *Engineers* again. Too much of their day is spent on remedial move/adds/changes required to provision applications. Removed from the mundane provisioning tasks, the network engineer can focus more on network analytics, capacity planning, performance optimization, and architecture — skills that are a lot more valuable, and endearing.


      • says

Brad, I’d agree, and obviously applaud this change; however, I’m still not 100% sure there’s enough room at the higher levels (in the engineering business). Your view seems to be shared by most, and, as I’ve said elsewhere, general market growth and the lessons of history around the server ‘revolution’ are a strong argument for being positive, so that’s what I’m going to try to be.

        • David Klebanov says

          Hi Steven,

Not trying to steal Brad’s thunder here :-) Hats off to VMware for creating a revolution in the form of server virtualization, but let’s also remember that VMware is the one who actually created the initial issue of requiring Layer 2 extension across the virtual machine mobility domain, which became a matter of major frustration for so many virtualization admins, since it was not easily solvable on the network side. It was much easier to throw the problem onto the network rather than resolve it on the hypervisor layer… In fact, this is probably one of the major reasons the network was labeled “inflexible” for so many years.

          Now VMware is applying a patch-like approach to fix this through network overlays. I will leave it up to you to decide if you want to trust VMware network solution or adopt solutions from your existing network vendor. If your existing network vendor does not have solutions, time to look for a new vendor :-)


          • says

Hey David, I appreciate you sharing your thoughts. I’m loath to play the blame game, but if I were to, I’m sure VMware would come off lightly, and the energy they have introduced to networking more than compensates for their faults, certainly prior to the EMC acquisition. Compare that with the larger vendors, who’ve been quite happy with the status quo and making easy money, with little need or desire to innovate.

            Moving away from that I certainly have a preference for taking things much further and going the SDN route; it’s more disruptive but worth it in the long run.

          • says


Decoupling (enabled by encapsulation) is not a “patch-like approach” — it’s a real solution for gaining agility with hardware independence.
Encapsulating virtual networks into “Overlays” provides the same outcome as encapsulating virtual machines into a file: you get automation and mobility on any hardware.
Encapsulating virtual machines into .VMDK and .VMX files was not a “patch-like approach”; it was a fundamental enabler of true virtualization. The same is true for virtual network encapsulation.

            For the virtual network, the decoupling also moves all of the virtual network’s features into software, where the time to market for new features is much faster than hardware.


      • says

Brad, I agree that network virtualization as currently implemented helps maintain existing job silos and titles like “network engineer.” However, I see this as a sign that the underlying problems created by management silos are not being solved by network virtualization – instead, network virtualization is being used as grease to reduce the organizational friction between existing enterprise network and systems teams.

        The radical improvements brought about by the DevOps movement are a result of breaking down silos and creating cohesive, integrated teams to tackle the difficult problems limiting growth and agility that developers and operations teams couldn’t address on their own.

        I see a more integrated approach in which network engineers become part of the DevOps team, along with orchestration systems that reflect that change, as the ultimate goal.


        • says


          Network Virtualization does not preclude a pure DevOps approach to network provisioning. At the end of the day you have an API that can provision full L2-L7 networking. Who uses that API, and what software tools are provisioning against the API is entirely up to the individual organization.


          • says


            Don’t get me wrong, I like the basic concept of network virtualization and agree that it doesn’t have to preclude a pure DevOps approach to provisioning. However, I think the current network virtualization architectures and APIs act as a barrier since they don’t provide feedback mechanisms.

Your two cartoons show the network as a drag on compute, or slavishly following compute. I see the relationship as more of a dance. Network loads and topology affect the performance of compute tasks, and the placement of compute tasks and storage affects network performance. The APIs should reflect this symmetry – providing bi-directional visibility and control.

            The title of the article I linked to is, “Network virtualization, management silos and missed opportunities.” The thesis of the article is that we risk building network virtualization architectures and APIs that entrench existing bad practices, rather than making use of the opportunity to develop richer APIs that will facilitate a cooperative “DevOps” style of orchestrating network, server and storage resources.


          • says


            I don’t see any major technical barrier to achieving that with Network Virtualization as it exists today. It’s all about creating and retrieving useful operational data. The more you have, the better decisions you can make. The actions you decide to invoke based on the analysis of that data might already be supported in current APIs, or easily added.

Given that every port on the virtual network is backed by software, the network virtualization layer sees all of the traffic — and there’s a tremendous amount of data that can be generated from that alone. For example, today we can bulk-export sFlow all day long, for every port on the virtual network. As another data source, each hypervisor’s programmable vswitch has a real-time operational state database (OVSDB) that allows for subscriptions to data feeds with triggers to upstream tools (no polling).

Furthermore, the physical fabric can also generate similar operational data. For example, the implementation of Arista EOS SysDB is very similar to OVSDB. The same tool could be collecting data from both the physical and virtual network, making it available for analysis. Whether or not you make a real-time decision from that data would be entirely up to the individual organization.

            Software and data will solve these problems. I don’t see this as an architectural shortcoming.
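As a small sketch of what can be done with that kind of bulk counter export, here is a minimal rate calculation over per-port counter snapshots. The snapshot format is invented for illustration:

```python
# Sketch: turn two per-port counter snapshots (the kind of data a bulk
# sFlow/OVSDB export provides for every virtual port) into byte rates.

def port_rates(prev, curr, interval_s):
    """Compute per-port transmit byte rates from two counter snapshots."""
    rates = {}
    for port, counters in curr.items():
        before = prev.get(port, {"tx_bytes": 0})
        rates[port] = (counters["tx_bytes"] - before["tx_bytes"]) / interval_s
    return rates

prev = {"vm-web01": {"tx_bytes": 1_000_000}, "vm-db01": {"tx_bytes": 500_000}}
curr = {"vm-web01": {"tx_bytes": 4_000_000}, "vm-db01": {"tx_bytes": 500_000}}

rates = port_rates(prev, curr, interval_s=30)
# vm-web01 is pushing traffic while vm-db01 is idle -- an upstream tool
# could trigger on either condition without polling every device.
```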


  2. David Klebanov says

    Hi Brad,

    Thank you very much for your insightful post! Always a good read. Let me share some thoughts here…

I totally agree with you that automation and service agility are indeed some of the key elements of modern virtualized Data Center environments; the question, however, is whether network virtualization in the form of overlays (tunnels) is the appropriate way to address them.

Before throwing out a solution, it is important, in my view, to understand how we got here. Back in the early days of server virtualization technology, the concept of the virtual switching edge was born. At first, it was no more than the most rudimentary set of network connectivity characteristics delivered through integrated hypervisor switching. Everything changed, however, when Cisco introduced the Nexus 1000v virtual switch for VMware ESX, which for the first time gave the option of deploying mainstream server virtualization technology leveraging feature-rich networking functions extended to the hypervisor layer. Unfortunately, the Nexus 1000v initially had its share of issues revolving around ease of deployment and upgrade. Taking control of hypervisor switching also meant more work for the network team, and who would volunteer for extra work, right? All those concerns resulted in the Nexus 1000v not being adopted across the board, which left server/virtualization admins, who controlled the Virtual Machine management GUI (vCenter in the case of VMware), responsible for provisioning Virtual Machine network connectivity properties by themselves.

Physical servers and virtual servers have a lot of common characteristics, so even though understanding server virtualization was definitely a learning curve, it was a gap relatively easy to bridge. On the other hand, many server/virtualization admins had (and still have) limited understanding of what it takes to build successful networks in general and Data Center networks in particular… Overlays promise server/virtualization admins an easy and carefree life, “unshackled” from the chains of understanding networking technology intricacies. After all, you only need an IP address and a dumb/fast Layer 3 network in between, so what can possibly be easier?!

The truth is that building, and more importantly operating, Data Center networks is not easy, and just as you can’t really lose weight by exercising 5 minutes a day as some commercials would want you to believe, you cannot build a successful and long-lasting Data Center network solution by applying a patch-like approach in the form of overlays. Not to say that all overlay deployments are doomed; after all, Data Center bandwidth is readily available, packet loss does not occur frequently, and MTU is more often than not forgotten about. At the same time, leveraging overlays in the Data Center exercises a “fire and forget” approach, which challenges the operational and troubleshooting methodology matured and developed by network folks over the past decades. Welcome to the wild-wild-west of limited visibility, questionable practices, and reinvented networking…

In my mind, a true, comprehensive, and long-lasting solution should come from the native Data Center network, where the connectivity model is not dependent on a blindfolded fabric traversed by over-the-top tunnels, but rather on intelligent infrastructure delivering automation, a transparent troubleshooting methodology, full visibility, and an open framework.

    Thank you for reading :-)

    • says

      Hi David,

      A few comments:

      “many server/virtualization admins had (and still have) limited understanding of what it takes to build successful networks”

      I just wanted to say that making a topic of a customer’s “limited understanding” is absolutely the right approach in building up your pitch. Keep doing just that 😉
Snark aside, I’m not suggesting that server/virt admins “build networks”. The issue at hand is how you make networking services easier and faster to *consume*.
      Capacity planning & building is different from consumption. With network virtualization, the consumption side is automated, and the capacity side is built on top of general purpose hardware — much like server virtualization.

      “building and more importantly operating Data Center networks is not easy”

There’s that “Not so Easy” button again. Funny, and a bit ironic. When it was time to take market share in servers with automation (UCS), it was an “Easy” button. Now the time has come to defend networking market share from automation, and suddenly it’s a: whoa, whoa, whoa, “Not so Easy” button.

“you can’t really lose weight by exercising 5 minutes a day as some commercials would want you to believe”

Actually, you can. NY POST: No time for the gym? New study claims less than 15 minutes of exercise a week needed to stay fit

      Also, I find it a bit funny to hear all of this FUD about “Overlays” … coming from the networking vendor that proposed and sells: VXLAN, OTV, LISP, MPLS, DMVPN — all of which are … Overlays.


      • David Klebanov says

        Hi Brad,

I do not agree with your point that overlays can be seen as a consumption model comparable to server virtualization. Even though virtualization does make a point of operating on top of general purpose servers (as an option), it fully leverages their characteristics. Hypervisors know and operate on the amount of memory, the number of NICs and HBAs, the available CPU resources, special hardware functionality, and so on. In contrast, network virtualization advocates decoupling from the underlying infrastructure, effectively creating a silo for the server/virtualization admin to build, manage, and operate.

Network virtualization could be positioned as a consumption model if overlays worked in synergy with the underlying infrastructure, much like hypervisors work in synergy with the resources and capabilities provided by the underlying physical servers, regardless of their manufacturer. Such a relationship could be driven by orchestration across both, which goes way beyond the rudimentary approach provided by VXLAN tunnel termination in hardware at the edges. This could be the true “Easy” button: one that builds a comprehensive solution for server/virtualization admins to consume, and one that leverages proper domain expertise across network and server virtualization.

Tunneling technologies certainly do have their use cases, specifically when they solve true problems. Data Center overlays, in my mind, solve an artificial problem and as such should be carefully evaluated for their applicability on a case-by-case basis.


        P.S. Remind me to cancel my gym membership 😉

        • says

          Yes, network virtualization decouples the virtual network from the physical network. Similar to how server virtualization decoupled virtual servers from physical servers. Decoupling is important because it provides both architecture independence and automation. Both the physical and virtual evolve independently, and can be sourced & procured independently.

I don’t know why you keep insisting that server admins will need to build/manage/operate the network — that is pure FUD. It sounds like legacy 1990s thinking, which ties roles and responsibilities to hardware. In 21st-century virtualized IT, roles and responsibilities are tied to services. Server virtualization provides a computing service. Network virtualization provides a networking service, and there’s nothing stopping the network team from owning it. It’s entirely up to the individual organization.

          Just as server virtualization enabled a career/skill evolution from server ops > virtual admin, the same will be true with network virtualization enabling a career/skill evolution from network ops > virtual network admin.

          The network is built once, and consumed many times over (virtual networks). Just like a server is built once, and consumed many times over (virtual machines).
          Consuming a virtual network is no different than consuming a virtual machine. In fact, the two are better together, like peanut butter and jelly.


          • David Klebanov says

            Hi Brad,

Don’t get me wrong, I do support the principle of indirection, and I believe that when necessary, and when done right, it can indeed be a great tool. Having said that, the relationship of virtual machines to physical servers cannot be compared to the relationship between overlays and the underlying network infrastructure. While server virtualization fully leverages the characteristics of the physical servers (regardless of the server manufacturer), the current network virtualization message totally dismisses the underlying network infrastructure, dumbing it down to fat/fast pipes. It is not an apples-to-apples comparison.

            The decoupling can happen, but not in the way VMware and other network virtualization vendors are trying to sell it, which in my view is a patch solution rather than a long lasting architecture.


          • says

            Good discourse — as expected from a disruptive tech. Server virtualization too had a similar discourse period in 2003-2005.
That aside, your argument sounds more like a gripe — the network being “dumbed down” (your words, not mine) — than a solid technical objection.
I think it’s more appropriate to describe the physical network configuration as becoming *simpler* — free to do what it does best: packet forwarding.

Switch vendors such as Cisco can still provide a robust packet transport fabric underneath network virtualization. When you fully decouple virtual networks from the physical network with network virtualization, the physical network architecture is neither disrupted nor complicated by the virtual network.


          • says

Hey Brad. I’m with David on this one; overlays look like a vegetarian sausage: a poor replacement for the real thing, which I’d consider to be SDN in some form. I’m happy to be persuaded otherwise, but right now I don’t feel the following points have been addressed:

1) A virtualised server runs on a real server and, as David points out, is assigned resources that exist in hardware (CPU, RAM, etc.). In the case of an overlay network there is no such correlation with the physical; by definition a network is distributed and contains multiple elements, and many components of each network element are ‘shared’ in some fashion. Little if anything is dedicated to a particular service or flow. So, in that respect, network virtualisation is very, very different to server virtualisation; the logical (connectivity) is being virtualised, not the physical network (which in some ways is already virtualised). Are the benefits and drawbacks the same regardless, or also completely different? If different, what are they?
2) That lack of correlation is surely going to make operations far more difficult? Is an issue caused by the physical, the logical, or the virtual? The hardware-tied nature of a server VM contains this issue nicely, but that’s not the case with a network, where things are far more dynamic and less deterministic.
3) How many ‘current’ network technologies can be applied to these overlays? By tunnelling through the existing logical network to gain the benefits of automation and orchestration (and independence, I guess), are you not losing the benefits of shaping, QoS, and other technologies? Is it worth that cost?

          • says


            Some comments on each point:

1) Network and Compute are two very different services. So, yes, Network Virtualization and Server Virtualization are going to differ in the services they are virtualizing. One is providing a connectivity service, the other x86 hardware emulation. Though they provide different services, the goals, implementation, and objectives are very similar (decouple, reproduce, automate). And because Network Virtualization is providing a connectivity service for the virtualized servers, it makes sense to implement the two at the same edge software layer — the hypervisor and its software virtual switch.

2) I would argue that “correlation” has always been difficult, and that network virtualization will make it better, not worse. The reason it gets better is data — lots of data. The more data you can collect, the better correlations you can make. Since every VM is connected to a software port on the hypervisor, the network virtualization layer sees every packet as it traverses the logical topology, and we can collect all kinds of data (counters, errors, flow stats, and more) as big data in the scale-out NV controller cluster. Furthermore, with network virtualization, the hypervisors are always testing connectivity in the physical network and collecting data on that. If something breaks in the physical network you’ll know right away, with data on the hypervisor pairs being affected — something that’s not done today with basic vswitches and non-virtualized networking.

3) Every virtual server is connected to a software virtual switch port. That software port is capable of just about any feature you would find on a physical switch port. Things like QoS classification, shaping, and packet marking are absolutely supported. Also, the hypervisor will copy the inner QoS header to the outer tunnel header so that the physical network has visibility into the applied QoS policy.
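That inner-to-outer copy can be sketched in a few lines. This is a simplified illustration for IPv4, where DSCP occupies the upper 6 bits of the second header byte; the function names are hypothetical:

```python
# Sketch: copy the inner packet's DSCP marking to the outer tunnel
# header during encapsulation, so the physical network can still apply
# the QoS policy. Headers are modeled as raw IPv4 header bytes.

def dscp_of(ip_header: bytes) -> int:
    # DSCP is the upper 6 bits of the IPv4 Type of Service byte (byte 1).
    return ip_header[1] >> 2

def encapsulate(inner_ip_header: bytes, outer_ip_header: bytearray) -> bytes:
    dscp = dscp_of(inner_ip_header)
    # Replace the outer DSCP bits while preserving the outer ECN bits
    # (the lower 2 bits of the ToS byte).
    outer_ip_header[1] = (dscp << 2) | (outer_ip_header[1] & 0x03)
    return bytes(outer_ip_header)

inner = bytes([0x45, 0xB8] + [0] * 18)      # DSCP 46 (EF), a voice marking
outer = bytearray([0x45, 0x00] + [0] * 18)  # outer header starts unmarked
outer = encapsulate(inner, outer)           # outer now carries DSCP 46
```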


  3. Amit says

    Great overview Brad.

However, I am wondering what issues are faced if the network overlays are initiated/terminated at the data center fabric level, rather than at the hypervisor level – apart from handling BUM traffic? There is so much effort going into getting network overlays right (read: dynamic) in NVO3.

    I understand there are provisioning tools available that can get the job done.

  4. says

    Hello all,

    Nice post! I like especially the pro / cons arguments in David and Brad comments.

    If I may, I would like to add my 2 cents here.

One of the problems I currently see, based on my experience, is the lack of proper communication between the server (data center) team and the network team. In past years, as both teams were completely separate, minimal communication was sufficient to keep the entire system working.
That is not going to work anymore, if we really want to step forward.
To understand my argument here, think of the version dependencies between ESXi and the 1000v switch. What can happen is that one of these two teams upgrades their part to a newer version without informing the other party. You can imagine the outcome.

Second, there is, in both teams, a desire to keep their things “private”. I should not ask about the hardware performance of a server on which I need another virtual switch, and they should not ask me why I need a new vswitch there. This also comes from past times, when there was actually no need for information sharing in this area.

Third, I think marketing and the desire for $$ made some companies go too far with their public communication and associate network virtualization with the end of the title/job “network engineer”. They went so far as to pretend that any person, no matter their experience or knowledge, would be able to deploy virtual networks just by clicking in a GUI. And all this without the need to understand the complexity of an enterprise network.
You can imagine what this generated. On one hand, the server guys ask: hey, if this is so simple, why doesn’t the network team implement it immediately? On the other barricade, the network engineers became frustrated with the idea that their years of experience and hard work would be worth zero, according to trending marketing topics about network virtualization. This led to a step back in new technology implementation.

Fourth, there are the barriers which sometimes you cannot cross, especially in WAN virtualization (business demands, ISP requirements, country restrictions, and so on…).
If I’m allowed to draw a parallel: the infrastructure is already there (Internet, satellites…), so why can’t I view any TV channel that broadcasts in the U.S. (I’m located in the EU)? For the same reasons: restrictions which apparently cannot be eliminated for now.

    Maybe I’m wrong, but this is my opinion based on my professional experience. Of course there is some FUD from the network engineers’ side, but looking around, plenty of marketing campaigns fuel that FUD.

  5. Lennie says

    The next logical step is AWS CloudFormation/OpenStack Heat, where the application developer (or whoever deploys an application) specifies all the properties of the application in a template: the number of VMs, the scripts to call to deploy them, the network connections, its database needs, etc. At deployment time the template is automatically combined with information from the cloud environment, such as the IP addresses the environment provides.

    So the application can be automatically deployed, or re-deployed for DR or for a dev, pre-production, QA, or test environment.

    And automatically scaled depending on the application load, like a busy website running on a public cloud.

    Or scaled based on the total available capacity of the cloud environment, like a private cloud in an enterprise where a business-critical application might need more resources and other applications might just have to suffer temporarily because of it. One cause might be a temporary loss of capacity.
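    The template idea described above can be sketched in a few lines of Python. This is a hypothetical descriptor, not actual CloudFormation or Heat syntax; every field name and the `bind_to_environment` helper are invented for illustration:

    ```python
    # Hypothetical application template, illustrating the kind of properties
    # a CloudFormation/Heat-style descriptor declares. Field names are invented.
    app_template = {
        "name": "busy-website",
        "vms": {"count": 3, "flavor": "m1.medium", "deploy_script": "deploy_web.sh"},
        "network": {"tier": "web", "connects_to": ["db-tier"]},
        "database": {"engine": "mysql", "size_gb": 50},
        "scaling": {"metric": "cpu", "min": 3, "max": 10},
    }

    def bind_to_environment(template, env_ips):
        """Combine the template with environment-provided information
        (here, IP addresses) at deployment time, as the cloud platform would."""
        deployment = dict(template)
        deployment["vm_ips"] = env_ips[: template["vms"]["count"]]
        return deployment

    deployment = bind_to_environment(
        app_template, ["10.0.0.5", "10.0.0.6", "10.0.0.7", "10.0.0.8"]
    )
    print(deployment["vm_ips"])  # the three addresses the environment assigned
    ```

    Because the template is the single source of truth, re-running the same bind step against a second data center’s address pool is what makes the re-deploy-for-DR scenario mechanical rather than manual.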

  6. Simon Thibaudeau says

    I agree that the end goal would be for the network to be considered a resource of the application, just like storage and memory are. Whether that is accomplished with overlays or with some other virtualization/management concept matters little to me (of course, considering your employer, you’d much rather it be Nicira-style overlays…). The main thing I have trouble getting my head around, which has been touched on in the comments here, is the following:

    It all works as long as you have plenty of capacity.

    W.K. Ross describes it nicely in a recent blog post (http://siwdt.com/2013/06/02/fluidity-of-network-capacity-commoditization-diy-openness-and-the-demise-of-the-network-engineer/ ; he also has a horse in that race):

    “Pointedly, network virtualization does not make capacity fluid — it makes workloads fluid. If workloads are fluid, it would be helpful to have fluid network capacities to allocate to the demands of the workloads.”

    And that’s not how I see the network right now. Can it be achieved? Absolutely, but it will require automation and monitoring of the network by the “application” that manages the network and the workloads. (The Nicira controller and vCenter in the VMware world could do that, but they would have to talk to something like Cisco’s ONE Controller, or via SNMP or OpenFlow directly to the physical switches, to complete the solution.)

    I’d love to see it, but right now it seems like there is a piece missing in the puzzle.

    • says

      I enjoyed the article by W.K. Ross that you referenced. However, I think there are two ways of responding to the fluidity of workloads that results from network virtualization. The article mentions “fluid capacities to allocate to the demands of the workloads”, but a second response is to use fluidity of workloads to make more efficient use of network resources. Both strategies require comprehensive visibility and coordinated control of network, storage and compute resources.


  7. Donny Parrott says

    The responses are interesting. Territory wars ensue.

    There are a couple of points for consideration in evaluating network virtualization. Primarily we need to look at the services provided and identify the impact of each as we virtualize.

    1) Hardware management – While relatively unchanged (the same is true for compute), the complexity of the installation and required support drops significantly. Server admins used to wage “holy wars” over HP vs. Dell vs. IBM… Now, with virtualization, it’s “just give me an x86 box with these capabilities.” The bleeding edge build their own. The same will come to the switch market: complex, expensive, high-overhead switches will succumb to cheap, fast interconnects.

    2) Data transport – Job number 1: move my packets, connect service A to service B. Network virtualization will allow the complexity of the architecture to flatten into a communication bus, with segmentation, routing, etc. taking place at the edge. QoS is available in software as well.

    3) Security (FW/IPS/ERSPAN) – Already available in software, and yet not “trusted”. Because we put it in an off-the-shelf blue box and triple the price, it is now more secure?

    4) GLB / AppFirewall – See 3 above.

    5) Monitoring/Reporting – See 3 above.

    So the question becomes: “What is the ‘so difficult’ service that cannot be orchestrated through these software/virtualized solutions?” The large service providers despise non-virtualized solutions and have removed most of them. For enterprises to maintain cost-effective solutions, they must cross this bridge.

    Let’s flip the coin and ask the question from 180 degrees. Removing non-datacenter services (desktops, wifi, etc.), what issues would arise if we disconnected everything behind the router and directly connected VMware clusters using vCNS for border network services? Route, switch, virtual world? Hmmm…

    • Lennie says

      1. The bleeding edge already does that with switches; look at Google and the Open Compute Project.

      On all the other points I agree, this is what I imagine to happen too.

      I have to admit, maybe you got it wrong by mentioning only VMware. Commoditization doesn’t stop at hardware; commoditization happens in software too.

      3. Security is actually where I see the biggest problem. We see new ways to break out of the guest and gain access to the host at least quarterly, probably more often, from Blue Pill to hypercall abuse.

      So vulnerabilities in the hypervisor running on the host machine are a potential problem.

      When we talk about networking and security, the question is: will we use SR-IOV and put the packets directly on the network, with the network providing the segmentation, or will the host switch, and maybe even route, the packets?

      However you slice or dice it, the host running the hypervisor needs to be locked down further, but we don’t want to compromise on performance or price.

      I think we want to not only switch but also route, NAT, and firewall/drop packets between VMs on the host running those VMs, which means the host has access to the packets.

      Do we need SELinux, maybe with encrypted traffic, to secure our environments? Or something more exotic, like Ethos-OS and MinimaLT?

      Or something completely different. I don’t know.

      In practice, I think most of the world will just deploy it first and not understand the consequences until after it has been deployed.

      • Donny Parrott says


        I do see many references to break-out theories, but actual exploitation in the wild (especially in a third-party scenario) is extremely rare. Most virtual shops work very hard to keep the keys secure.

        But your comments about SR-IOV pulled another thread. IT is finally maturing to the point that legacy systems are no longer the priority (and in some cases don’t matter). The focus is being dragged into service delivery. A service may be defined as applications, resources, data management, whatever. The key for IT is how to deliver these services through automated solutions, simplified to consumer-level demand.

        So, when architecting the next generation of IT solutions and platforms, the beginning of the design is the services. What is it that IT is to deliver? What method of consumption and deployment will our customers seek?

        From this dialogue, we then define the supporting structures, methods, and systems. To focus on networking: the consumer wants to say “put it with my other accounting systems” or “attach it to the extranet web farm”. This is interpreted as connecting, firewalling, and securing the requested service or resource by policy, through orchestration.

        I believe orchestrating a physical network is a losing game. Similar to compute and storage, once baselined and attached to the environment, the physical network simply provides transport (as compute provides processing and storage provides capacity). What will Cisco, Arista, Nortel, etc. do to compete against Intel, Broadcom, etc. when there is no extended functionality in switching? SDN controllers will make the physical switch a commodity (like x86 server platforms).

        Interestingly, I have seen numerous Cisco UCS deployments where the uplinks were carved into 2 to 4 virtual channels (2x management + data, or 2x management with separate data) and then never touched again. The virtual platform was used to segment the virtual channels further and provide layered services above data transport.

        Higher level functions (FW/IPS/AV/VLAN/etc.) are all established as part of the policy set applied at service instantiation and executed as close to the service as possible.

        In summary, we must develop the architectures and capabilities to best deliver customizable consumer services. What frustrates me is the fixation on the tooling rather than the services. Networking will have to move forward and join the rest of the datacenter, or be relegated to the WAN only.

        • Lennie says

          Didn’t you see my earlier comment about applications and templates?

          If delivering services is what you want, putting a PaaS on top of your IaaS/cloud platform might be a good idea as well.

          Some of the PaaS platforms already come with such a template.

          The more parts that get standardized, the easier it gets to deliver applications quickly.

          • Donny Parrott says

            Sorry to be unclear. My point about services is to architect/design/implement toward service definitions and delivery. Every component should be vetted against the requirements for service delivery and compliance. If a solution doesn’t support service delivery or enhance compliance, why inject it into the environment?

            Case in point: high-end switches [pick your brand]. Extremely complex, with enormous functionality and capability. If, at the end of service definition and architecture, we find we have simple requirements (transport/HA/6Gb/etc.), why would we purchase an asset with far more than is required?

            The compute and storage worlds have already gone through this transition and now acquire solutions with specific functionality and capacity according to requirements.

            As the virtualization of networks accelerates, the key to its adoption is the consumption of network capabilities in support of, and in direct correlation to, service delivery. The consumer doesn’t care if we can manipulate a packet 16 ways. They care about their application/data being rapidly available and simply managed.

  8. Lennie says

    So I was thinking about this capacity planning problem.

    What if you let the (network) scheduler set a limit at the virtual switch on how much bandwidth a VM/vNIC is allowed to consume?

    Or even notify the virtual switch that there is less capacity now than before.

    Then you can use flow control. That is the proper way to deal with this, right?

    I know PAUSE frames probably don’t have the best reputation in the business, but you can always try an old idea at a different level.

    Now, I don’t know whether a vNIC (driver) supports PAUSE frames.

    Or maybe set it at the hypervisor level; maybe the hypervisor can slow down forwarding packets.
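    The per-vNIC cap described above is typically implemented with a token bucket, the standard rate-limiting mechanism in virtual switches. A minimal sketch (the class and its interface are invented here for illustration, not any hypervisor’s actual API):

    ```python
    class TokenBucket:
        """Simplified token-bucket rate limiter, the usual mechanism behind
        per-vNIC bandwidth caps on a virtual switch."""

        def __init__(self, rate_bytes_per_sec, burst_bytes):
            self.rate = rate_bytes_per_sec   # sustained rate the scheduler allows
            self.capacity = burst_bytes      # short bursts above the rate are OK
            self.tokens = burst_bytes        # start with a full bucket
            self.last = 0.0                  # timestamp of the previous packet

        def allow(self, packet_bytes, now):
            # Refill tokens for the elapsed time, capped at the burst capacity.
            self.tokens = min(self.capacity,
                              self.tokens + (now - self.last) * self.rate)
            self.last = now
            if self.tokens >= packet_bytes:
                self.tokens -= packet_bytes
                return True   # forward the packet
            return False      # drop or delay it (back-pressure, like a PAUSE)

    # A scheduler noticing reduced capacity could simply lower `rate` at runtime.
    vnic_limit = TokenBucket(rate_bytes_per_sec=1_000_000, burst_bytes=10_000)
    print(vnic_limit.allow(1500, now=0.0))  # True: within the burst allowance
    ```

    The “notify the virtual switch there is less capacity” idea then reduces to adjusting `rate` on the fly, with excess packets delayed rather than dropped if you want PAUSE-like behavior instead of policing.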

  9. says

    Very Nice article.

    I do agree that SERVER virtualization is different from NETWORK virtualization, and that it is not an apples-to-apples comparison.

    By virtualizing features like firewalls and load balancers, we are trying to remove the dependency on proprietary hardware by relying purely on the raw power of CPU cycles, but does that sustain performance for high-load/high-traffic applications?

    I always hear people saying network virtualization is all about decouple, reproduce, and automate, but I have some doubts of my own (they might be silly ones):

    1. Network virtualization has more or less become network overlays, or some other tunneling option. They add one more wrapper to the existing packet before it reaches the physical switch. That fixes the current issue, but is it a long-term solution, especially in datacenters?
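    To put a number on that extra wrapping: VXLAN, one common overlay encapsulation, prepends about 50 bytes of outer headers to every frame (header sizes per RFC 7348, assuming an IPv4 underlay and no outer VLAN tag), which is why overlay deployments usually raise the underlay MTU:

    ```python
    # Per-packet encapsulation overhead of a VXLAN overlay (IPv4 underlay,
    # no outer VLAN tag). Header sizes are from the VXLAN spec, RFC 7348.
    outer_ethernet = 14  # outer MAC header
    outer_ipv4 = 20      # outer IP header
    outer_udp = 8        # outer UDP header
    vxlan_header = 8     # VXLAN header (carries the 24-bit VNI)

    overhead = outer_ethernet + outer_ipv4 + outer_udp + vxlan_header
    print(overhead)        # 50 bytes added to every encapsulated frame

    # A standard 1500-byte frame therefore needs this much on the underlay:
    print(1500 + overhead)  # 1550, hence the advice to raise underlay MTU
    ```

    The overhead is constant per packet, so it matters most for small-packet workloads; for large frames it is a few percent of the wire capacity.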

  10. Sam Sung says

    Could you please post your impressions of nCloudX from Anuta Networks? They have been touting it as a first-to-market product for some time. Thanks.

  11. Romel says

    Hi Brad,

    I’m your disciple and have been following your posts. :)

    Won’t this introduce considerable latency overhead? I have seen one demo in which routing between subnets via a logical router showed around 6 ms of latency, but I guess that is acceptable anyway.

    • says

      I can’t speak to other implementations or demo setups; however, the distributed logical router in VMware NSX operates in the hypervisor kernel, so there’s no considerable overhead in L3 forwarding between virtual subnets. You’ll usually see sub-millisecond ping latency between two VMs on the same hypervisor but on different subnets.

  12. says

    Hi Brad,
    Very nice and informative topic, and well written. I am currently working on a research topic related to network virtualization. In server virtualization, all the virtualized components (hardware) are within the same platform, while to perform network virtualization we first have to build a common platform of routers, switches, firewalls, etc., and then implement some software application holding the API to the common platform we just made. My question is: is it possible to perform network virtualization with all the network hardware separately distributed, rather than in a common platform?
    A reasonable answer will be highly appreciated.

  13. Mojtaba says

    Very well-written article.

    I have two arguments/questions:
    1. L4-L7 virtualization is usually called “network functions virtualization”, which is differentiated from “network virtualization”. Do you see them as different or similar?
    2. Why are you excluding L1 from network virtualization? We can virtualize the physical channel as well, right? Here is a reference, as an example:
    Wang et al., Network virtualization: Technologies, perspectives, and frontiers, 2013


