This is a follow-up question from the same reader, Geoff, whose original question about traffic steering was discussed here. Geoff responded to my original answer by bringing up my Folly in HP vs UCS Tolly article and doubting that Cisco UCS really has active/active fabrics.

Follow-up Question:

Hi Brad,

Thank you very much for your comprehensive answer. However, I still have a couple of questions (sorry). In Cisco documents I see only a heartbeat and synchronisation link between the two 6100s in a UCS. In your own article you show four 10Gb links between the two 6100s. Which is correct? By the way, this is where I got the interleaving idea, but I now see it was blades 1-4 going to Fabric A and blades 5-8 going to Fabric B, not odds and evens.

In the same article you also mention an active/active fabric configuration, but as far as each server blade is concerned it sees an active/failover configuration. There is no possibility for a dual channel server adaptor to drive 20Gb, which is what I would call active/active. (But did you say this in your UCS networking best practices talk? I will have to listen again.) I really wonder why UCS forces separate fabrics. It might make sense for Fibre Channel storage, where this is best practice, but for a pure IP environment would it not make sense to have a single fabric? But maybe it is not possible to set up a vPC cluster with a pair of 6100s.

My Follow-up Answer:

Cisco UCS provides tremendous flexibility in how you architect the system to deploy server bandwidth to meet your specific application needs. Remember that UCS has two fabrics, and the fabric each server vNIC uses is based on vNIC settings in the Service Profile (not the physical wiring). Each server can have multiple vNICs, each using one fabric or the other as its primary path.

Speaking of specific application needs, the Tolly Group test involved a single chassis with (6) blades, with pairs of blades sending a full 10G load between each other. The Tolly Group tried to show that a Cisco UCS chassis was not capable of 60 Gbps of throughput. However, they made the unfortunate and fatal mistake of believing that only one fabric was active, and that the other fabric was for failover only. Wrong! Consequently, they set up each server with one vNIC using just one fabric (in fact, the second fabric may not even have been present). Given that one fabric extender is 40 Gbps, of course (6) servers are not going to get 60 Gbps. Duh!
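The back-of-the-envelope math here is simple enough to sketch out. This uses only the figures above: (6) blades offering 10 Gbps each, and a 40 Gbps fabric extender per fabric.

```python
# Bandwidth math for the Tolly test scenario, using the figures above:
# 6 blades at 10 Gbps each, one 40 Gbps fabric extender per fabric.

BLADES = 6
BLADE_LINK_GBPS = 10           # one 10G vNIC per blade in Tolly's setup
FABRIC_EXTENDER_GBPS = 40      # capacity of one fabric extender

demand = BLADES * BLADE_LINK_GBPS  # 60 Gbps of offered load

# Tolly's setup: every blade pinned to a single fabric.
one_fabric = min(demand, FABRIC_EXTENDER_GBPS)
print(one_fabric)   # 40 -> the test tops out well below 60 Gbps

# Proper setup: vNICs alternating fabrics, load split across both.
two_fabrics = min(demand, 2 * FABRIC_EXTENDER_GBPS)
print(two_fabrics)  # 60 -> the full offered load gets through
```

With only one fabric carrying traffic, 60 Gbps of demand hits a 40 Gbps ceiling; spread across both fabrics, the same chassis has 80 Gbps available and the full 60 Gbps gets through.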

My response to Tolly’s flawed testing was simply to provide an education to the Tolly Group in how they should have set up Cisco UCS to meet the criteria of their test. This is not necessarily how every Cisco UCS configuration should be deployed. In fact, I have yet to see any customer set up their UCS in such a manner. A testament to how unrealistic Tolly’s botched “gotcha” testing really was.

Most customers set up their Cisco UCS servers with (2) or more vNICs, each using alternating fabrics. So, YES, you absolutely CAN have a server send 20 Gbps: one vNIC sending 10 Gbps on Fabric A, another vNIC sending 10 Gbps on Fabric B. Both fabrics handle traffic for all blades.
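For readers who want to see what this looks like in configuration terms, here is a rough sketch of the dual-vNIC setup in the UCS Manager CLI. The service profile and vNIC names (server1, eth0, eth1) are placeholders I picked for illustration, and you should check the UCS Manager CLI configuration guide for your release for the exact syntax — the GUI equivalent is simply setting Fabric ID A on one vNIC and Fabric ID B on the other in the Service Profile.

```
# Illustrative UCS Manager CLI sketch -- names are placeholders.
# Two vNICs on one service profile, one pinned to each fabric.
scope org /
scope service-profile server1
  create vnic eth0
    set fabric a        # primary path: Fabric A
  exit
  create vnic eth1
    set fabric b        # primary path: Fabric B
  exit
commit-buffer
```

With both vNICs active, the blade's OS or hypervisor can drive traffic on both fabrics simultaneously — the 20 Gbps scenario described above.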

Cisco UCS has separate fabrics for the robust high availability customers expect from a mission critical, enterprise class platform. What’s the downside of that? Especially when both fabrics are indeed ACTIVE/ACTIVE.