I thought it would be fun to pilot a series of posts where I pick out interesting search engine queries that were used to find my blog. Often these are good questions that deserve a good answer, or other interesting topics that can spark a good discussion or a fun debate.

This particular series will focus on Data Center Networking. I may also start another series focused on data center computing, such as “Cisco UCS Q&A”. Stay tuned.

Each question or search query will be headlined in bold and the text beneath will be my answer, commentary, or general response.

Furthermore, if you have a question you think I might be able to answer, please submit it in the comments section to be considered for an immediate answer or highlighted in a future Q&A post.

So here we go with the first Data Center Networking Q&A :) I hope you enjoy it!


hp qos marking configuration

I seem to be getting quite a number of these queries finding my site lately. This person might be trying to find out how to configure their HP blade switch to classify and mark traffic for a QoS policy in their data center. Well, there are two blade switches made by HP worth discussing: the HP Virtual Connect Flex-10 and the HP Procurve 6120XG.

Let’s start with HP Virtual Connect Flex-10. I’ve got some really bad news for you on this one. Flex-10 has no QoS capabilities whatsoever. If you follow my blog and tweets, you have heard me point this out several times, and I’ll do it here again. Flex-10 is not capable of classifying traffic, not capable of marking traffic, and not capable of giving any special treatment or guaranteed bandwidth to important traffic. If you search for “QoS” in the latest Virtual Connect User Guide, it produces zero hits. But don’t take my word for it; here is what HP says about Virtual Connect Flex-10 QoS:

VC does not currently support any user configurable Quality of Service features … these features are on the roadmap for future implementation.

Moving on to the HP Procurve 6120XG. This blade switch made by HP actually has what I will describe as “so-so” QoS capabilities, much better than Flex-10 anyway. The Procurve 6120XG can give special treatment to traffic via (4) traffic queues, each with a differing degree of priority: Low, Normal, Medium, and High. The High priority queue will always be serviced before the Medium queue, and so on. This is a simple implementation of a QoS technique called Priority Queuing. The downside to Priority Queuing is that the high priority queue (if busy) can starve all other queues of bandwidth, with no assurance of fairness that all traffic will get some portion of the bandwidth. The 6120XG QoS implementation does not provide guaranteed bandwidth; it simply allows some packets to be transmitted before others, based on the queue they are serviced from.

The Procurve 6120XG can assign traffic to each of the (4) queues based on the incoming 802.1p CoS value in the Ethernet header, or the DSCP or ToS value in the IP header. One important thing to note here is that the HP Procurve 6120XG expects the packet to already be marked when entering the switch. The Procurve 6120XG is not capable of marking traffic based on MAC, IP, or TCP information. If a packet enters the 6120XG with no marking, it cannot be classified and will not receive any special treatment: no QoS. The 6120XG can mark traffic based on the incoming port, meaning all traffic from a certain port can be given a defined marking. However, this rudimentary port-based classification has little use in a 10G server environment where different types of traffic are converged on a single 10G server interface.
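To make this concrete, here is a rough sketch of what honoring incoming markings and applying a port-based priority look like in ProCurve-style CLI. I have not verified these exact commands on the 6120XG specifically, and the port name and priority values are hypothetical, so check the Traffic Management Guide linked below for the exact syntax on your software release:

    ProCurve(config)# qos type-of-service diff-services
    ProCurve(config)# qos dscp-map 101110 priority 7
    ProCurve(config)# interface 1 qos priority 5

The first command tells the switch to trust the DSCP field of incoming IP packets. The second maps DSCP 46 (Expedited Forwarding, written here as the binary codepoint 101110) to 802.1p priority 7, which typically lands in the High queue on a 4-queue switch. The third assigns everything arriving on port 1 a priority of 5, which is the rudimentary port-based marking mentioned above.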

Given that the HP Procurve 6120XG’s QoS capabilities are only useful when the traffic has already been marked before entering the switch, it’s important to understand whether your blade servers are capable of traffic classification and packet marking. This becomes especially relevant with server virtualization hosts, such as VMware. The VMware vNetwork Standard Switch (vSS) and vNetwork Distributed Switch (vDS) have no traffic classification or marking capabilities; however, the optional Cisco Nexus 1000V has a comprehensive set of QoS classification and marking capabilities. Hence, the Cisco Nexus 1000V can classify and mark important traffic leaving the VMware host (such as vMotion or management traffic) before it enters the Procurve 6120XG, where it can then be placed into one of the (4) QoS queues based on the marking it received.
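As a rough sketch of how that marking could look on the Nexus 1000V (the ACL, class, policy, and port-profile names are hypothetical, as are the subnet and VLAN, and the exact syntax may vary by release), you match the traffic and set a CoS value before it leaves the host:

    ip access-list acl-vmotion
      permit ip any 10.10.10.0/24
    class-map type qos match-any cm-vmotion
      match access-group name acl-vmotion
    policy-map type qos pm-mark-vmotion
      class cm-vmotion
        set cos 4
    port-profile type vethernet vmotion
      switchport mode access
      switchport access vlan 100
      service-policy type qos input pm-mark-vmotion
      no shutdown
      state enabled

The policy is applied in the input direction because, from the switch’s point of view, vMotion traffic leaving the host enters the vethernet port. The 6120XG can then match CoS 4 on ingress and queue accordingly.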

The downside to QoS on the HP Procurve 6120XG is that its basic Priority Queuing implementation does not provide bandwidth guarantees to all traffic types, and its lack of traffic classification capabilities requires that your servers do the classification and marking, hence the need for the Cisco Nexus 1000V on the vSphere host. This is why I give it a “so-so” rating.

You can find the complete QoS configuration details here in the HP Procurve 6120XG Traffic Management Guide (PDF).

For HP blade servers, my recommendation is to skip the HP Procurve 6120XG and instead use the HP 10G Passthrough module. This allows you to connect your HP blade servers directly to a Cisco Nexus switching environment, bypassing all of the other mediocre 10G blade switch options from HP. The Cisco Nexus series has a rich set of QoS capabilities that go beyond basic Priority Queuing, offering an advanced Class-Based Weighted Fair Queuing mechanism that can assign minimum guaranteed bandwidth to every traffic type. This behaves much like VMware reservations for CPU and memory on a virtual machine: if more CPU and memory is available the virtual machine can use it, but the reservation always provides a minimum guarantee. Cisco Nexus switches allocate bandwidth the same way, with minimum guarantees but without limitations.
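To give a flavor of what that looks like in configuration, here is a rough NX-OS-style sketch, loosely modeled on the Nexus 5000 queuing model. The class names and percentages are hypothetical, and the exact commands vary by Nexus model and software release. Traffic is first assigned to an internal qos-group (shown in the next sketch), and the queuing policy then guarantees bandwidth per group:

    class-map type queuing cq-vmotion
      match qos-group 2
    policy-map type queuing pq-uplink
      class type queuing cq-vmotion
        bandwidth percent 20
      class type queuing class-default
        bandwidth percent 80
    system qos
      service-policy type queuing output pq-uplink

Here traffic in qos-group 2 is guaranteed at least 20 percent of the link during congestion, but, just like a VMware reservation, it can burst above that whenever bandwidth is free.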

The Cisco Nexus switches are also capable of classifying and marking traffic based on MAC, IP, or TCP information. So if a packet arrives unmarked, you can mark it at the switch and provide granular QoS, with or without the Nexus 1000V. You can later add the Nexus 1000V for its security, management, and network visibility benefits, but you can get started with rich QoS capabilities either way.
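Continuing the hypothetical sketch from above, the classification that feeds qos-group 2 can be done on the Nexus switch itself, for example with an ACL, so unmarked packets still receive the bandwidth guarantee:

    ip access-list acl-vmotion
      permit ip 10.10.10.0/24 any
    class-map type qos match-all cm-vmotion
      match access-group name acl-vmotion
    policy-map type qos pm-classify
      class cm-vmotion
        set qos-group 2
    system qos
      service-policy type qos input pm-classify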


hp blades cisco nexus 1000v

Great news! You can absolutely run the Cisco Nexus 1000V on an HP blade server. Furthermore, this can be done with any blade switch. You can use Virtual Connect Flex-10, the Procurve 6120XG, the 10G Passthrough module, or any other standard Ethernet switch.

There is one particular note about using HP Virtual Connect Flex-10 with the Nexus 1000V that I want to discuss. The Flex-10 module can be set up in two different switching modes: mapped mode or tunnel mode. The most common is mapped mode, because it is the default. In mapped mode the Flex-10 administrator defines vNets, which are basically VLANs inside the Virtual Connect domain. The Virtual Connect administrator can choose the VLAN IDs used for these vNets independently of the network administrator who might be configuring the physical network switches and the Nexus 1000V. The Flex-10 uplinks carry VLANs defined by the network administrator; however, each VLAN ID on the network uplink is mapped to the VLAN ID of a vNet, which might be a different VLAN ID, or the same.

If the Virtual Connect administrator is using mapped mode and maps a VLAN ID on the network to a different VLAN ID inside the Virtual Connect domain, this can cause a problem with the Nexus 1000V. The Nexus 1000V VSM is configured with the VLAN IDs known on the physical network, so the Nexus 1000V VEM running on the server expects to see those same VLAN IDs. If the Flex-10 mapped mode configuration is changing (mapping) the VLAN IDs to something different, you will have broken the linkage between the Nexus 1000V VSM and VEM.

Tunnel mode, on the other hand, simply takes all VLAN IDs defined on the network uplinks and sends them down to the server ports, without changing or mapping the VLAN IDs to anything different.

The moral of the story: you can use the Nexus 1000V with Virtual Connect Flex-10, just make sure you are not changing VLAN ID numbers along the way. If using mapped mode, do not map the network VLAN IDs to different VLAN IDs at the server; keep them the same. Alternatively, use tunnel mode, which provides no option to change VLAN ID numbers in the first place.
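For example, if the network team carries vMotion on VLAN 100 (a hypothetical ID), the VSM, the upstream switches, and the Virtual Connect vNet must all agree on that same ID. A minimal VSM-side sketch looks like this:

    vlan 100
      name vMotion
    port-profile type ethernet system-uplink
      switchport mode trunk
      switchport trunk allowed vlan 100
      no shutdown
      state enabled

In mapped mode, the vNet carrying this traffic must also be VLAN 100, not a remapped ID; otherwise the VEM behind Flex-10 will never see the VLAN the VSM told it to use.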

Once you have ensured VLAN ID consistency inside of Virtual Connect, you are all set for a very successful implementation of the Nexus 1000V on HP blades, even with Virtual Connect Flex-10. That’s all for now.

Hope you enjoyed the pilot episode of Data Center Networking Q&A! Please remember to submit questions or comments in the comment section below. Some questions may be answered there or featured in a future installment.