The vSwitch ILLUSION and DMZ virtualization

Server virtualization has gained tremendous popularity and acceptance, to the point that customers are now starting to host virtual machines from differing security zones on the same physical Host machine.  Physical servers that were self-contained in their own DMZ network environment are now being migrated to virtual machines resting on a single physical Host server that may also be hosting virtual machines for other security zones.

The next immediate challenge with this approach becomes: How do you keep the virtual machines from differing security zones isolated from a network communication perspective?  Before we go down that road, let's take a step back and revisit the commonly used network isolation methodologies…

Network Isolation Methodology & Policy

Before DMZ physical servers were migrated to virtual machines, communication from one DMZ server to another was steered through a security inspection appliance.  Traffic can be steered through a security appliance using physical network separation, or through logical network separation using network virtualization techniques such as VLANs, VRFs, or MPLS.

Figure 1 below shows traffic steering through physical network separation.

Figure 1 – Physical Separation

In Figure 1 above, traffic between the two groups of servers is steered through the security appliance simply because that is the only physical path by which the communication can take place.  Physical separation works under the principle that each server and security appliance interface is correctly cabled to the proper switch and switch port.  Isolation is provided by cabling.

The other method commonly used for traffic steering is through a means of logical network separation.  In this case, a single switch can be divided into many different logical forwarding partitions.  The switch hardware and forwarding logic prevents communications between these partitions.  An example of a partition could be a VLAN in a Layer 2 switch, a VRF in a Layer 3 switch, or MPLS VPNs in a broader network of Layer 3 switches and routers.
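To make this concrete, here is a hedged sketch of what such logical partitioning might look like on a Cisco IOS switch.  The interface and VLAN numbers are hypothetical; the point is simply that server ports belong to exactly one partition, while the security appliance straddles both:

```
! Two logical partitions (VLANs) on one physical switch
vlan 10
 name DMZ-Zone-A
vlan 20
 name DMZ-Zone-B
!
! Each server port belongs to exactly one partition
interface GigabitEthernet0/1
 switchport mode access
 switchport access vlan 10
!
interface GigabitEthernet0/2
 switchport mode access
 switchport access vlan 20
!
! The security appliance attaches to BOTH partitions, making it
! the only forwarding path between the two zones
interface GigabitEthernet0/10
 description Firewall leg in Zone A
 switchport mode access
 switchport access vlan 10
!
interface GigabitEthernet0/11
 description Firewall leg in Zone B
 switchport mode access
 switchport access vlan 20
```

With this configuration, a frame entering on VLAN 10 can only reach VLAN 20 by passing through the firewall; separation is enforced entirely by switch configuration.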

Figure 2 below shows traffic steering through logical network separation.

Figure 2 – Logical Separation

In Figure 2 above, traffic between the two groups of servers is steered through the security appliance simply because that is the only forwarding path by which communication can take place. Forwarding paths are created by the configuration of unique logical partitions in the network switch, such as a VLAN. Traffic entering the switch on one partition is confined to that partition by the switch hardware. The security appliance is attached to both partitions, whereas the servers are attached only to the partition in which they belong. The servers are attached to their partition as a result of the switch port configuration. Therefore, separation in this model is provided by switch configuration.

The use of logical or physical separation between DMZ zones might be defined in the IT Security policy. Your security policy may require physical network isolation between security zones. On the other hand, the IT security policy may simply specify that there must be isolation between zones, but without any strict requirement of physical isolation. In such a case logical separation can be used to comply with the general policy of isolation.

Attaching Physical Separation to Logical Separation

When the physical network separation method is attached to a switch using logical separation, an interesting thing happens — you lose all characteristics of physical separation. If you think of physical separation as the more secure approach (the highest denominator), and logical separation as the less secure of the two (the lowest denominator), then when the two are attached together the entire network separation policy adopts the lowest common denominator — logical separation.

Figure 3 below shows attaching differing separation policies together.  The result is inconsistent policy.

Figure 3 – Inconsistent Policy

In Figure 3 above, the physical switches that were once adhering to a physical separation policy have now simply become extensions of what is universally a logical separation method.  If your IT security policy specifically requires physical separation, this type of implementation would be considered “Out of Policy” and unacceptable.

Maintaining IT security policy with DMZ server virtualization

Now that we have covered the basics of network isolation and security policy, let's circle back to the original question we started with: How do you keep virtual machines from differing security zones isolated from a network communication perspective?  Furthermore, how do we keep DMZ virtualization consistent with IT security policy?

In this article I am going to primarily focus on the IT security policy of physical network separation, and how that maps to server virtualization.

As I have already discussed, once you have set a policy of physical network separation you need to keep that isolation method consistent throughout the entire DMZ, otherwise you will have completely compromised the policy.  Most people understand that.  So when the time comes to migrate physical servers to virtual, every attempt is made to maintain physical isolation between virtual machines in differing security zones.

Before we can do that, we must first acknowledge and respect the fact that with server virtualization a network is created inside the Host machine — a virtual network.  And when that Host machine is attached to the physical network through its network adapters, the virtual network on that Host machine becomes an extension of the physical network.  And vice versa: the physical network becomes an extension of the virtual network.  However you choose to think of it, the virtual and physical network together become one holistic data center network.

With VMware, the virtual network inside the ESX Host machine can be managed by an object called a vSwitch.  From the perspective of the VI Client, multiple vSwitches can be created on a single ESX Host machine.  This perception provided by VMware of having multiple vSwitches per ESX Host has led to the conventional thinking that physical network separation can be maintained inside the ESX Host machine.  To do this you simply create a unique vSwitch for each security zone, then attach virtual machines to their respective vSwitch along with one or more physical network adapters.  The physical adapter + vSwitch combination is then attached to a physically separate network switch for that DMZ only.  You now have a consistent policy of physical network separation, right?  More on that later…
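As a rough sketch of this conventional design, here is how it might be built from the classic ESX service console.  The vSwitch, vmnic, and port group names are examples only:

```
# Hypothetical sketch of the conventional multi-vSwitch design,
# using the classic ESX service console commands.

# A dedicated vSwitch per security zone...
esxcfg-vswitch -a vSwitch1
esxcfg-vswitch -a vSwitch2

# ...each linked to its own physical uplink, cabled to a
# physically separate network switch for that DMZ only
esxcfg-vswitch -L vmnic1 vSwitch1
esxcfg-vswitch -L vmnic2 vSwitch2

# ...and a port group per zone for the virtual machines
esxcfg-vswitch -A "DMZ-Web" vSwitch1
esxcfg-vswitch -A "DMZ-App" vSwitch2
```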

Figure 4 below shows the conventional thinking of vSwitch physical separation.

Figure 4 – vSwitch physical separation

With VMware vSphere, you also have the option of using the Cisco Nexus 1000V in place of the vSwitch to gain added visibility and security features.  However, one thing that some customers notice right away about the Nexus 1000V is that unlike the standard vSwitch, you cannot have multiple Nexus 1000V switches per ESX Host. <GASP!> How am I going to maintain physical separation between DMZ segments if I can't have multiple Nexus 1000Vs per Host?  Guess I can't use the Nexus 1000V, right?

Not necessarily … Paul Fazzone from Cisco Systems, a Product Manager for the Nexus 1000V, wrote an excellent article that refutes this thinking, titled Two vSwitches are better than 1, right? In this article Paul lays out the case for how the Nexus 1000V's Port Profiles and VLANs provide an equivalent (or even better) security mechanism than multiple vSwitches, and that customers can safely deploy the Nexus 1000V in an environment where physical separation is the policy.

The problem I have with Paul Fazzone's article is that it does not address the fact that two differing separation methods have been attached together, thus creating an inconsistent, lowest-common-denominator logical separation policy.  The Nexus 1000V and its VLANs are a means of logical separation: a single switch containing multiple logical partitions.  The minute you attach the Nexus 1000V to the physical network, the holistic data center network reverts to a logical separation policy… so … what's the point in having separate physical switches anymore!?

No offense to Paul, he’s a REALLY BRIGHT guy, and he’s just doing his job of breaking down the obstacles for customers to adopt the Nexus 1000V.

Given that the Nexus 1000V is a single switch per ESX Host using logical partitions in the form of VLANs, a customer with a strict physical network separation policy may very well view the Nexus 1000V as not matching their security model and choose not to implement it solely for that reason.  In doing so, the customer has sacrificed all of the additional security, troubleshooting, and visibility features of the Nexus 1000V — but to them that doesn't matter, because the ability to have multiple vSwitches is still viewed as a better match for maintaining a consistent physical separation policy.

The vSwitch ILLUSION:  What you see isn’t what you get

The conventional thinking up to this point has been that multiple vSwitches can be configured on an ESX Host to maintain a consistent architecture of physical network separation.  Why would anybody think any differently? After all, when you configure networking on an ESX Host you see multiple vSwitches right before your very eyes that are presented to you as being separate from one another.  This visual provides the sense that adding a new separate vSwitch is no different than adding a new separate physical switch, right?

Figure 5 below shows the vSwitch view from the VMware VI Client.

Figure 5 – VI Client shows multiple separate switches

First of all, let's ask ourselves this question: What is the unique security posture characteristic of two physically separate switches? Most people would tell you that each physical switch has its own software and unique forwarding control plane. A software bug or security vulnerability in one switch may not affect the other switch, because each could be driven by different code.  On the other hand, what is the unique security posture characteristic of a single switch with logical partitions? Most people would say that such a switch is running common code and a common control plane, implementing separation via unique logical partitions.

With that understanding in mind, if I create multiple vSwitches on an ESX Host, each vSwitch should have its own unique software that drives it, and a unique control plane that does not require any logical partitioning to separate it from other vSwitches, right?  Let's go ahead and put this theory to the test.  Let's see how much Host memory is used when there is only (1) vSwitch configured:

Figure 6 below shows Host memory usage with (1) vSwitch

Figure 6 – Host memory with (1) vSwitch

In Figure 6 above I have just (1) vSwitch configured on a Host with no virtual machines, and the memory used by the Host is 764MB.  Perfect, we now have a memory baseline to proceed.  If in fact every new vSwitch on an ESX Host provides the same physical separation characteristics as two separate physical switches, then each new vSwitch should result in a new copy of vSwitch code, consuming more Host memory, right?  Let's add (10) more vSwitches to this Host and see what happens…

Figure 7 below shows a Host memory usage with (11) vSwitches

Figure 7 – Host memory with (11) vSwitches

Figure 7 above shows the same ESX Host with (11) vSwitches configured and no virtual machines.  As you can see, the Host memory usage is still 764MB.  Adding (10) vSwitches did not add a single MB of Host memory overhead.  This is one simple example to show that (11), (20), or even (200) configured vSwitches on a Host are really just one switch, running one piece of common code and one control plane, and each new “vSwitch” is nothing more than a new unique logical forwarding partition — no different than a single physical switch with a bunch of VLANs.

Still don’t believe me?  Let me go back to Paul Fazzone’s article Two vSwitches are better than 1, right? in which Paul quotes Cisco’s principal software architect of the Nexus 1000V, Mark Bakke, from a video interview in which Mark says:

Each vSwitch is just a data structure saying what ports are connected to it (along with other information).

So while using vSwitches sounds more compartmentalized than VLANs, they provide equivalent separation

– Mark Bakke, Nexus 1000V Principal Software Architect, Cisco Systems

Mark would know better than anybody else, and my Host memory experiment above agrees with him.  The conventional thinking that multiple vSwitches are providing physical separation is nothing more than an ILLUSION.  The reality for the customer is that having an ESX Host with multiple vSwitches is providing the same security posture as a single switch with logical partitions, same as <GASP> … VLANs!  And when the customer attaches their multiple vSwitches to physically separate networks, the result is inconsistent policy and the holistic data center network is reduced to a security posture of logical separation.
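Mark's "just a data structure" point can be illustrated with a toy model.  To be clear, this is purely illustrative Python of my own, not VMware's actual code: one shared forwarding engine serves every "vSwitch", and each vSwitch is merely an entry in a table listing its ports — exactly the posture of one physical switch with many VLANs.

```python
# Toy model of a hypervisor's virtual networking layer (illustrative only,
# NOT VMware's actual implementation). One forwarding engine serves every
# "vSwitch"; each vSwitch is merely a data structure listing its ports.

class Hypervisor:
    def __init__(self):
        # All "vSwitches" live inside one shared table: name -> {port: handler}
        self.vswitches = {}

    def add_vswitch(self, name):
        # "Creating a vSwitch" allocates a dict entry, not a new switch
        self.vswitches[name] = {}

    def connect(self, vswitch, port, handler):
        self.vswitches[vswitch][port] = handler

    def forward(self, vswitch, src_port, frame):
        # One common code path forwards for every vSwitch; isolation is
        # just a lookup in the partition the source port belongs to.
        delivered = []
        for port, handler in self.vswitches[vswitch].items():
            if port != src_port:
                handler(frame)
                delivered.append(port)
        return delivered

hv = Hypervisor()
hv.add_vswitch("vSwitch0")   # "DMZ A"
hv.add_vswitch("vSwitch1")   # "DMZ B"

received = []
hv.connect("vSwitch0", "vm-a1", lambda f: received.append(("vm-a1", f)))
hv.connect("vSwitch0", "vm-a2", lambda f: received.append(("vm-a2", f)))
hv.connect("vSwitch1", "vm-b1", lambda f: received.append(("vm-b1", f)))

# A frame from vm-a1 reaches vm-a2 but never vm-b1. The separation is real,
# but it is enforced by one shared engine: logical, not physical.
out = hv.forward("vSwitch0", "vm-a1", "hello")
print(out)         # ['vm-a2']
print(received)    # [('vm-a2', 'hello')]
```

The separation in this toy model holds perfectly — and that's the point: it holds for the same reason VLAN separation holds on a physical switch, because one common code base enforces the partition table.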

Figure 8 below shows the actual REALITY of configuring multiple vSwitches

Figure 8 – The REALITY of multiple vSwitches

Figure 8 above shows that attaching a vSphere Host to physically separate networks is counterintuitive.

Consequences of the vSwitch ILLUSION

At this point you might be asking me … “OK, Brad. You made your point.  But why are you fighting this battle? If a customer wants to enforce a policy of physically separate networks, even with the understanding that the vSwitch is not providing equivalent separation, what's the harm in that?  A little physical separation is better than none, right?”

My answer to that is simple: I have seen on many occasions customers making significant sacrifices in their virtualization architecture, believing they are getting something they're really not (physical isolation).

What are the sacrifices and consequences of the vSwitch illusion?

  • Many adapters are required in the vSphere Host server to connect to each physically separate network.
  • The requirement for many adapters results in purchasing larger servers simply for adapter real estate.
  • Many adapters in the server force the customer to use 1GE, and prohibit the use of 10GE adapters.
  • The requirement for many adapters forces the customer to use rack mount servers, and prohibits the choice of blade servers.
  • The forced adoption of 1GE results in I/O bottlenecks that inhibit the scalability of the Host machine, resulting in fewer VMs per Host, resulting in more Host servers to service any given number of VMs, resulting in more power/cooling, more network switches, more vSphere licenses… you get the idea: more costly infrastructure for the customer.
  • The insistence upon using multiple vSwitches per Host for “separation” prohibits the use of vNetwork Distributed Switches, either the VMware vDS or the Cisco Nexus 1000V.  You can only have (1) VMware vDS per Host, and (1) Nexus 1000V per Host.
  • Sacrificing vNetwork Distributed Switches results in more management complexity in the virtual network.
  • Sacrificing the Nexus 1000V means giving up valuable security features that would otherwise make what is already a logically separated network more secure.  Not to mention the troubleshooting and per-VM visibility the Nexus 1000V provides.

Before I continue on I want to let VMware off the hook here.  By calling this a “vSwitch ILLUSION” I do not mean to insinuate that VMware has intentionally misled anybody.  That's not the case at all as I see it.  In fact, VMware's representation of multiple vSwitches was actually a genius approach to making the networking aspects of VMware easier for the Server Administrator (their key buyer) to understand.  Remember, VLANs are a concept that Network Administrators understand very well, but it generally isn't the Network Administrator who's purchasing the servers and VMware licenses.  So VMware wanted to make it easy for the Server Administrator, their customer, to understand the networking elements of ESX.  While not all Server Administrators understand VLANs and logical separation, almost all of them do understand what a switch does, so the representation of multiple switches in the VI Client is a genius way of helping the Server Admin understand network traffic flow on the ESX Host without needing a college degree in networking.

Consistency with Logical Separation using Server + Network Virtualization

If the IT security policy does not specifically require physical separation, and now with the understanding that multiple vSwitches are not equivalent to physical separation, then why not have a consistent architecture of logical separation?  By combining the power of server virtualization with network virtualization you can achieve a secure, highly scalable virtual infrastructure.

Figure 9 below shows Server + Network virtualization.

Figure 9 – Server + Network virtualization w/ consistent Logical Separation

In Figure 9 above the logical separation posture of the vSwitch is complemented by Network Virtualization in the physical network.

A DMZ virtualization architecture with consistent logical separation has the following advantages:

  • Fewer physical networks means fewer physical adapters required in the server.
  • Fewer adapters required in the server allows for 10GE.
  • Fewer adapters required allows for a choice of either rack mount server or blade server.
  • 10GE adapters reduce I/O bottlenecks and allow for high VM scalability per Host.
  • Rack servers are right-sized at 1U or 2U.
  • Using a single “vSwitch” design on the ESX Host allows for the option to use vNetwork Distributed Switch, or Cisco Nexus 1000V.
  • Being able to use the Cisco Nexus 1000V allows for better virtual network security policy and controls.

Consistent Physical Separation using separate physical Hosts

If your IT security policy absolutely states that you must have “physical network separation” between security zones, you can still achieve a consistent separation model that adheres to the policy by deploying physically separate ESX Hosts for each security zone.  This is the only way to truly remain in compliance with a strict physical separation policy.

Figure 10 below shows DMZ virtualization consistent with physical separation.

Figure 10 – Consistent Physical Separation

In Figure 10 above the IT security policy of physical separation is consistently applied in both the virtual and physical networks.

The DMZ virtualization architecture consistent with physical separation has similar advantages:

  • Adherence to a “Physical Isolation” IT security policy
  • Fewer physical networks required per server means fewer physical adapters required in each server.
  • Fewer adapters required in the server allows for 10GE.
  • Fewer adapters required allows for a choice of either rack mount server or blade server.
  • 10GE adapters reduce I/O bottlenecks and allow for high VM scalability per Host.  Better VM scalability per Host results in *fewer* servers required, not more.
  • Rack servers are right-sized at 1U or 2U.
  • Using a single “vSwitch” design on the ESX Host allows for the option to use the vNetwork Distributed Switch or Cisco Nexus 1000V, and all of the management benefits they provide.
  • Being able to use the Cisco Nexus 1000V allows for more secure network security features, policy, and controls.

Securing the Virtual Switch for DMZ Virtualization

Whether you choose the physical or logical separation architecture, you still have a virtual switch in each ESX Host that can and should be secured.  The standard VMware vSwitch provides some security, but below is an overview of the security features of the Cisco Nexus 1000V that are above and beyond what is available with the standard vSwitch, or standard vDS.

Cisco Nexus 1000V Unique Security Features

  • IP Source Guard
    • duplicate IP, Spoofed IP protection
  • Private VLANs (source enforced)
    • stop denied frames at source host
    • minimize IP subnet exhaustion
  • DHCP Snooping
    • Rogue DHCP server protection
  • Dynamic ARP Inspection
    • Man-in-the-middle protection
  • IP Access List
    • filter on TCP bits/flags
    • filter TCP/UDP ports
    • filter ICMP types/codes
    • filter source/dest IP
  • MAC Access Lists
    • filter on Ethernet frame types
    • filter MAC addresses
  • Port Security
    • spoofed MAC protection
    • protect physical network from MAC floods

Securing the Physical Network Switch against Attacks

Similar to the virtual switch, whether you adopt a consistent physical or logical DMZ virtualization architecture, there is still a physical switch that can and should be secured.  Below are some solutions to certain types of attacks:

Attack: MAC overflow (macof).  Attacker uses a tool like macof to flood the switch with many random source MACs.  The switch MAC table quickly fills to capacity and begins flooding all subsequent frames to all ports like a hub.  Attacker can now see traffic that was otherwise not visible.

Solution: Port Security.  The switch can limit the number of MAC addresses learned on a switch port, thereby preventing this attack.
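A hedged IOS-style sketch of port security on an access port (interface and limits are hypothetical):

```
! Hypothetical access-port hardening against MAC flooding (IOS syntax)
interface GigabitEthernet0/1
 switchport mode access
 switchport port-security
 switchport port-security maximum 3
 switchport port-security violation restrict
```

With a learned-MAC limit of 3, a macof flood trips the violation action long before the switch's MAC table can be exhausted.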

Attack: VLAN Hopping.  Attacker forms an ISL or 802.1Q trunk to the switch by spoofing DTP messages, gaining access to all VLANs.  Or the attacker can send double-tagged 802.1Q frames to hop from one VLAN to another, sending traffic to a station it would otherwise not be able to reach.

Solution: Best Practice Configuration.  Disable auto trunking (DTP) on all ports with ‘switchport nonegotiate‘.  VLAN tag all frames, including the native VLAN, on all trunk ports with ‘switchport trunk native vlan tag‘.  Use a dedicated VLAN ID for the native VLAN on switch-to-switch trunk ports.
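Pulled together into a hedged config sketch (interface and VLAN IDs are hypothetical; on many IOS platforms native-VLAN tagging is a global knob rather than per-port):

```
! Hypothetical VLAN-hopping hardening (IOS-style syntax)
interface GigabitEthernet0/24
 switchport mode trunk
 switchport nonegotiate               ! disable DTP auto-trunking
 switchport trunk native vlan 999     ! dedicated, otherwise-unused native VLAN
!
vlan dot1q tag native                 ! tag all frames, including the native VLAN
```

Tagging the native VLAN defeats the double-tag trick because the outer tag is no longer silently stripped at the first trunk hop.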

Attack: Rogue DHCP server.  Attacker spoofs a DHCP server, handing out its own IP address as the default gateway.  The attacker can now see and copy all traffic from the victim machine, then forward the traffic toward the real default gateway so the victim machine is unaware of a problem.

Solution: DHCP Snooping.  Switch only allows DHCP responses on ports defined as trusted.
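A hedged IOS-style sketch (VLAN IDs and the uplink interface are hypothetical):

```
! Hypothetical DHCP snooping configuration (IOS syntax)
ip dhcp snooping
ip dhcp snooping vlan 10,20
!
interface GigabitEthernet0/24
 description Uplink toward the legitimate DHCP server
 ip dhcp snooping trust
```

All ports are untrusted by default once snooping is enabled, so DHCP offers from a rogue server on an access port are simply dropped.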

Attack: Spanning Tree Spoofing.  Attacker spoofs spanning tree bridge protocol data units (BPDUs) and forces a network forwarding topology change to either cause disruption or direct traffic in a manner that makes it more visible for snooping.

Solution: BPDU Guard, Root Guard. The switch can immediately shut down host ports sending BPDUs.  The switch can also prevent the changing of the Root switch, which prevents any topology change.
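A hedged IOS-style sketch of both guards (interface numbers are hypothetical):

```
! Hypothetical spanning-tree hardening (IOS syntax)
interface GigabitEthernet0/1
 description Host port -- should never send BPDUs
 spanning-tree portfast
 spanning-tree bpduguard enable       ! err-disable the port on any BPDU
!
interface GigabitEthernet0/23
 description Downstream switch -- must never become root
 spanning-tree guard root             ! block superior BPDUs on this port
```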


Logical or physical separation can be used to isolate virtual machines from differing security zones.  It’s important to keep the separation policy consistent throughout the physical and virtual network.

The VMware vSwitch does not provide physical isolation.  The VMware VI Client provides the presentation of physically separate vSwitches, but this is nothing more than an illusion.

The VMware vSwitch provides logical separation no different than a physical switch with VLANs, or the Nexus 1000V with VLANs.

Customers wrongly accept sacrifices and suffer consequences to their virtualization architecture under the belief they are achieving physical isolation with the standard VMware vSwitch.

The virtual and physical switch can be consistently secured for DMZ virtualization with the Cisco Nexus 1000V and Cisco security features present in physical switches.

Customers benefit the most from DMZ virtualization when a consistent isolation policy is used in both the physical and virtual network.

Presentation Download

Wow! You made it this far? I have a reward for your time and attention!

You can download the 20 slide presentation I developed on this topic here:

Architecting DMZ Virtualization version 1.5, by Brad Hedlund




  1. says

    Whew, that was a long one but I’m glad I stuck it out! This article helped me affirm my own understanding of network virtualization and it was comforting to be able to nod my head as I read along. As we become more accepting of non-physical partitions it’s important to understand the intricacies of logical separation and I think you’ve laid that out nicely.

    My own home network makes use of both server and network virtualization as you can see in following diagram. It may not be immediately apparent, but I only have 2 physical servers and 1 physical switch.

    I tinkered with VMWare’s ESXi but eventually settled on Microsoft’s Hyper-V for a number of reasons but there’s one in particular on which I’d like your opinion. From the Hyper-V host you can create Virtual Networks (vSwitch equivalents) but you can disable access to them from the host (as seen in the screenshot below). How much more secure would you consider this? I don’t think we should ever equate virtual partitions with physical ones, but this is about as close as it gets right?

    • says

I do have an ignorance card I could play with Hyper-V, but I’ll hold on to it for now and take a stab at this. As I understand it, Hyper-V is different than ESX in the sense that with Hyper-V the hypervisor is also an operating system itself (Windows). I believe this is referred to as “Para-virtualization”. At any rate, by leaving that box unchecked, you are telling Hyper-V that only virtual machines can use this adapter. From my perspective this is no different than what can be done with VMware ESX, where you can specify certain adapters to only be used for virtual machines, and other adapters to only be used for Host management. This can be accomplished through creating “multiple vSwitches” (as I covered here ad nauseam), or multiple “Port Groups” on the same vSwitch. At any rate, we know now that both techniques are providing logical separation. The question I do not have the answer for is this: Is Hyper-V providing separation between Microsoft virtual switches any differently than VMware ESX? Is it providing separation between the “Microsoft virtual switches” and the Host operating system any differently than how VMware ESX does it? I’m sure there are small differences, but is it still ultimately the same switching code using logical forwarding partitions, as we have learned here with ESX? Your guess is as good as mine.

      Thanks for stopping by and chiming in!


  2. says

    Great article, Brad.

All of your points are 100% correct, but I would like to point out that both VMware’s vSwitches and Cisco’s Nexus 1000V VEMs run as kernel modules in the ESX/ESXi host software. Neither has an IP address, nor any other kind of public interface, so the only attack vectors are those that can reach the vKernel, either directly in the case of ESXi or via the Service Console in the case of classic ESX.

I would posit that this makes vSwitches and Nexus VEMs themselves at least as secure as Virtual Device Contexts (VDCs) on the Nexus 7000 platform, which many organizations accept as functionally equivalent to physical separation. The fact that the Nexus 1000V Virtual Supervisor Module (VSM) has an IP address gives it a similar attack vector space as VMware’s vKernel/Service Console, so I would consider the vSwitch & N1KV solutions to be roughly equivalent in their capacity to be breached (facilitating a hacker’s attack to allow inter-DMZ traffic), which is extremely low. Of course, the N1KV offers far more in terms of security features for intra-DMZ traffic, so it would always be preferable over the VMware vSwitch.

My overall point here is that yes, vSwitches and N1KV VEMs really are a logical separation, but it is a very strong one. It might not suffice for organizations that demand an “air gap” between DMZs, like the Department of Defense (nor would the N7K VDC), but it will be more than sufficient for many organizations, as long as these limitations are well explained.

    Comments welcome, and thanks again for a very useful and detailed examination of this frequently discussed issue!

    Best regards,

    Greg Walker

    • says


      This article is not an indictment on vSwitch security vulnerabilities. Nor am I saying that logical separation is a bad thing. Quite the contrary. I agree with you that the logical separation provided by the vSwitch and Nexus 1000V is strong, and secure. Just as logical separation on a physical switch with VLANs is strong and secure. This gets to the main point of the article, which is this… If you accept logical separation in the virtual network you should also accept logical separation in the physical network. To do otherwise would be counter intuitive.


    • Jonathan Kim says

      Thanks Greg for the comment. I had a similar thought. Simplicity and clear logical and physical separation are important, but the ultimate question is whether logical separation is good enough. And based on experience, most attacks do not happen by attacking the logical barrier; there are far easier targets.
      I agree with the author that a logical barrier is no physical barrier; it requires careful planning and execution. I do not believe the physical barrier is obsolete. A logical barrier is too easy to mis-configure, so having a physical barrier provides comfort against the idiocies of the local admin.

  3. says


    Great article – thanks for explaining so clearly what our network guys have struggled for some time to get me to understand. I am curious to hear your reply to Greg Walker.


  4. Bryan says

    It seems to me this is a roundabout way of saying that physical separation is obsolete, which I agree with. However, bureaucrat auditors will wave policy in your face whether it is relevant or not, and as a result the illusion of physical separation is important to maintain. Though I agree that if you can get them to accept the virtual network as secure, then they should certainly accept a logical one the same way.

    • says

      While I do have my own opinions on the necessity of physical separation, that is not the battle I’m fighting here. If you require physical separation because you think it’s more secure, or to simply appease auditors, great, go ahead and do it. But if you are going to implement physical separation … do it right and actually provide *physical separation* throughout the physical and virtual network. Otherwise, what have you gained? By perpetuating the illusion you still have an architecture that can be challenged and questioned by auditors, and you have subjected your server virtualization architecture to the consequences I outline in the article. Rather, if you do it the right way and provide consistent physical separation throughout the physical and virtual network, you end up with an architecture that is easily defensible under criticism and without any of the consequences from an inconsistent and “illusional” physical+logical separation design.


  5. says

    (Disclaimer: I work for VMware)
    Thanks for this great piece. It really helps to explain this issue more clearly and I think it will help a lot of people more rationally approach their datacenter security design.

    Although I agree with the conclusions, I think it’s worth mentioning that vSwitch segmentation is not entirely an illusion, when it comes to security. First off, I’ve heard a number of customers say that they don’t trust VLAN isolation because of the history of successful VLAN exploits (you have outlined ways to address these concerns above). Although these may largely be in the past, it still makes customers wary of relying upon them. By contrast, the track record of vSwitch isolation has been very good. In fact, there has not been any successful public exploit of the VMware vSwitch to date (not that it couldn’t happen in the future, of course). So, people have a greater comfort level with relying upon vSwitch segmentation for security, vs. only VLANs.

    The second point, perhaps more subtle, is that even if you “trust” VLAN isolation, some customers have said that the effort of configuring and administering VLANs throughout their environment introduces complexity that they feel weakens their security posture. They feel they don’t have sufficient maturity in terms of their management processes and policies to take on this risk. For them, vSwitch segmentation provides a neat, simple solution that doesn’t have as much management overhead.

    Again, I agree with your conclusion, and I believe this is the direction we should all be going in, but there are legitimate reasons to continue relying upon vSwitch segmentation today.

  6. says

    I’m glad I made it to the end! Excellent article. We’ve been looking at the 1000v vs. vSwitch for our new VMware environment. Seems like it may be worth the cost.

  7. says

    I have to agree with Charu on this one. I personally don't have a preference; both options work fine, but indeed the maturity of the customer heavily dictates which options you have. To them, the graphical representation in vCenter means physical separation, which in turn means secure, although we all know this is not necessarily true.

    I really like this article though, keep them coming!

  8. Antonio says

    First of all: great blog! And an excellent article. Only one point: while I think you are right in your assumption about VMware's "more than one vSwitch" situation, I don't think your experiment is definitive proof of this, for two reasons:
    1. Many OSs (I'm not sure about VMware, of course) can "instantiate" multiple copies of a process/program without actually duplicating it in memory. Today's processors can enforce (and allow) an OS to keep data memory segments separate from the code ones. Modern OSs tend to assign a "virtual memory space" (nothing to do with VMware :) ) to each process, made up of code "pages" and data "pages". Such pages are allocated to a process and are pointers to the physical memory (i.e. two processes can share the same code page and not be aware of it).
    If this is true for ESX, each new vSwitch will add only its data segments to the used-memory counter.

    2. Even if the vSwitch is just a memory structure somewhere, I would have expected to see the used memory increase when you add some. Maybe the counter used by ESX is too "raw" (or just rounded to MB). I don't expect such a data structure to be big (especially when "empty"), so maybe the counter just doesn't "notice" it.
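The shared-page argument above can be illustrated with a toy page-table model. This is a sketch of the general OS mechanism described (code pages shared by reference, data pages private per instance), not of ESX internals; all class and variable names here are illustrative:

```python
# Toy model: two instances of the same "program" point at the same
# read-only code frames, so instance #2 only costs its private data pages.

class Memory:
    """A pool of physical frames, identified by integer frame ids."""
    def __init__(self):
        self.frames = {}        # frame id -> contents
        self.next_frame = 0

    def alloc(self, contents):
        fid = self.next_frame
        self.frames[fid] = contents
        self.next_frame += 1
        return fid

class Process:
    """A page table: shared code frames plus freshly allocated data frames."""
    def __init__(self, mem, code_frames, data_size):
        self.page_table = list(code_frames)                       # shared, no copy
        self.page_table += [mem.alloc(b"\x00") for _ in range(data_size)]  # private

mem = Memory()
code = [mem.alloc(b"vswitch code page") for _ in range(10)]

before = len(mem.frames)
p1 = Process(mem, code, data_size=2)
p2 = Process(mem, code, data_size=2)
after = len(mem.frames)

# Two new instances consumed only 2 + 2 private data frames,
# even though each "sees" 12 pages in its page table.
print(after - before)  # 4
```

If ESX reuses vSwitch code this way, the per-vSwitch memory increment would indeed be only the (small) data portion, consistent with the counter barely moving.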


  9. says

    This was the approach I have just used in designing a DR network for a client who wanted to share a single cluster of ESX hosts among multiple groups with different security requirements.
    I got the decision upfront to allow virtual separation for the DR network (production requires physical air gaps), as rack space and copper switchport counts were at a premium.
    However, you also need to take physical site security into account.
    These hosts are locked in a rack with limited physical access. The management network switches for them are located in the same racks; the guest network switches are located in a shared comms rack. Cabling to that rack is provided by inter-rack ties.
    So for this config I still used 3 vSwitches: two interfaces were given to a management vSwitch, one interface to a VMotion vSwitch, and two interfaces to the guest vSwitch, then using VLAN tags for the network separation.
    The next vulnerability is then attacks against guests that cause leaks between running guests. For this reason we still separate guests facing public networks from guests on the private networks by limiting the VLANs on each host.
    This way capacity can still be easily managed remotely by simply changing the guest VLANs supported on certain hosts, depending on the disaster scenario we are configuring for.
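For reference, a three-vSwitch layout like the one described could be sketched with `esxcli` on a later ESXi host. The vSwitch names, vmnic numbers, port group names, and VLAN IDs below are illustrative assumptions, not taken from the comment:

```shell
# Management vSwitch with two uplinks (names and NICs are examples)
esxcli network vswitch standard add --vswitch-name=vSwitch-Mgmt
esxcli network vswitch standard uplink add --vswitch-name=vSwitch-Mgmt --uplink-name=vmnic0
esxcli network vswitch standard uplink add --vswitch-name=vSwitch-Mgmt --uplink-name=vmnic1

# Dedicated VMotion vSwitch with one uplink
esxcli network vswitch standard add --vswitch-name=vSwitch-VMotion
esxcli network vswitch standard uplink add --vswitch-name=vSwitch-VMotion --uplink-name=vmnic2

# Guest vSwitch with two uplinks and VLAN-tagged port groups per zone
esxcli network vswitch standard add --vswitch-name=vSwitch-Guests
esxcli network vswitch standard uplink add --vswitch-name=vSwitch-Guests --uplink-name=vmnic3
esxcli network vswitch standard uplink add --vswitch-name=vSwitch-Guests --uplink-name=vmnic4
esxcli network vswitch standard portgroup add --vswitch-name=vSwitch-Guests --portgroup-name=DMZ-Guests
esxcli network vswitch standard portgroup set --portgroup-name=DMZ-Guests --vlan-id=100
esxcli network vswitch standard portgroup add --vswitch-name=vSwitch-Guests --portgroup-name=Internal-Guests
esxcli network vswitch standard portgroup set --portgroup-name=Internal-Guests --vlan-id=200
```

Limiting which guest VLAN IDs are configured on each host, as described above, is then just a matter of which `portgroup` entries exist on that host.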

  10. Stephane says


    Great article ! Thanks !

    One thing confuses me, by the way. Let's imagine a corporate security policy states there should be physical isolation between security zones. On the ESX cluster running the untrusted VMs, you would still need some virtual networking to connect to the VMotion network / ESX host management network / possible IP storage network. How can you comply with a physical isolation policy in that case?


    • says

      Under a policy of strict physical network separation, you would definitely keep Vmotion isolated inside each zone. By providing physical network separation between zones, you have by definition already provided physical separation for access to storage over IP. Whether or not the data itself is resting on the same storage system is certainly up for debate. You could link all zones into a common ESX management framework, or not, it all depends on what your security policy tolerates and who’s enforcing it.


  11. says

    A well-written article; however, there is one glaring mistake. vSphere can have more than a single vDistributed Switch per host machine; it is only the Nexus 1000V that has the single-switch-per-host limitation.

  12. Marek says

    What about the approach of hypervisor bypass using the Palo adapter? If we create a separate vNIC for each DMZ server, can this be considered physical network separation? The upstream switch, for example a Nexus 5K, will have a separate uplink port for each DMZ server.

  13. says

    Wow, absolutely fantastic article! It confirmed my suspicion; I always learned that putting different security zones on one piece of hardware wasn't a good idea. But that was 10 years ago. Times are changing and it seems that hardware isn't king any longer. Time will tell…

  14. Mike Voss says

    Great article, and I agree with your points. I had this very discussion with a customer and pushed them towards logical separation, but they insisted on having their DMZ VMs on physically separate ESXi hosts from their intranet VMs. My argument for running everything on a single physical cluster was this: they had VLANs for the DMZ and intranet VMs all running on the same physical switches. So why have physical separation at the server level when you still have logical separation at the switch level? If they had been using physically separate switches, then I would have advised them to stay consistent. In the end, I suppose because server virtualization was new to them they weren't ready to trust it like they do traditional VLANs. Just give them time and they'll come around… 😉

  15. HenryG says

    I probably understood 1% of this … but a great and lengthy article … wow, I guess you're passionate about this! =)

  16. Michael says

    Hi Brad,

    thank you for your post. I am new to the topic of virtual switches, i.e. the vSwitch or Nexus 1000V.
    I am still asking myself why I would need to use one. Well, I guess my ESX setup is too small for that.

    Why shouldn't I just use tagged VLANs on my vNICs, pass them through the vSwitch/1000V layer, and put a VRF-capable switch behind it?


  17. says

    Great article!!!

    It really helped me to think about making some changes in our infrastructure.

    This is the article I was looking for.

    Thanks for sharing your knowledge.


  18. says

    Hi Brad,

    Great article and thanks for the content. I'd second the comment above that you can have multiple vDSs per ESXi host.

    I've seen the issue of customers having separate physical DMZ hosts but logical separation via VLANs on the network, and thinking that they've got a physically separate network design.



  19. Rob Krumm says

    I’m busy building a new Home Lab for running some of the new Ruckus Wireless virtual controllers (and more).
    I was conflicted about the benefits of multiple vSwitches vs. just a single one… This helped a huge amount; I cannot thank you enough for the level of detail you provided.

    Great Article!



  1. […] Also been working on DMZ design from the virtual layer all the way down to the physical hardware, and found out a physical separation security policy is impossible to implement in a virtualized environment. Killer logical separation is the way to go, preferably with the Nexus 1000V as the VDS. Very good article on this by Brad Hedlund. You can find it here. […]
