Cisco UCS Q&A #3 – flexible configuration

Filed in Cisco UCS, Q&A on October 22, 2010

This is a follow-up question from the same reader, Geoff, whose original question about traffic steering was discussed here.  Geoff responded to my original answer by bringing up my Folly in HP vs UCS Tolly article and doubting that Cisco UCS really has active/active fabrics.

Follow-up Question:

Hi Brad,

Thank you very much for your comprehensive answer. However, I still have a couple of questions (sorry).  In Cisco documents I see only a heartbeat and synchronization link between the two 6100s in a UCS. In your own article http://bradhedlund.com/2010/03/02/the-folly-in-hp-vs-ucs-tolly/ you show four 10Gb links between the two 6100s. Which is correct? By the way, this is where I got the interleaving idea, but now I see it was blades 1-4 going to Fabric A and 5-8 going to Fabric B, not odds and evens.

In the same article you also mention an active/active fabric configuration, but as far as each server blade is concerned, it sees an active/failover configuration. There is no way for a dual-channel server adapter to drive 20Gb, which is what I would call active/active. (But did you say this in your UCS networking best practices talk? I will have to listen again.)  I really wonder why UCS forces separate fabrics. It makes sense for Fibre Channel storage, where this is best practice, but for a pure IP environment would it not make sense to have a single fabric? But maybe it is not possible to set up a vPC cluster with a pair of 6100s.

My Follow-up Answer:

Cisco UCS provides tremendous flexibility in how you architect the system to deploy server bandwidth to meet your specific application needs.  Remember that UCS has two fabrics, and the fabric each server vNIC uses is determined by vNIC settings in the Service Profile (not the hardware wiring); each server can have multiple vNICs, each using one fabric or the other as its primary path.
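
To make that concrete, here is a minimal sketch (Python as pseudocode; the field names are illustrative stand-ins, not actual UCS Manager object names) of what a Service Profile expresses for its vNICs:

```python
# Illustrative model of Service Profile vNIC settings; the keys below are
# hypothetical, not real UCS Manager attribute names.
service_profile = {
    "name": "blade-server-01",
    "vnics": [
        {"name": "eth0", "primary_fabric": "A", "fabric_failover": True},
        {"name": "eth1", "primary_fabric": "B", "fabric_failover": True},
    ],
}

# Each vNIC forwards on its primary fabric in normal operation; with fabric
# failover enabled, it moves to the other fabric only if its path fails.
for vnic in service_profile["vnics"]:
    print(f'{vnic["name"]} -> Fabric {vnic["primary_fabric"]}')
```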

Speaking of specific application needs, the Tolly Group test involved a single chassis with (6) blades, with pairs of blades sending a full 10G load to each other.  The Tolly Group tried to show that a Cisco UCS chassis was not capable of 60 Gbps of throughput.  However, they made the unfortunate and fatal mistake of believing that only one fabric was active, and that the other fabric was for failover only.  Wrong! Consequently, they set up each server with one vNIC using just one fabric (in fact, the second fabric may not have been present).  Given that one fabric extender is 40 Gbps, of course (6) servers are not going to get 60 Gbps. Duh!
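
The arithmetic behind Tolly's result is worth spelling out (the 40 Gbps per fabric extender figure is from the discussion above):

```python
# Back-of-the-envelope math for the Tolly test setup.
blades = 6
offered_load_gbps = blades * 10            # 60 Gbps: the target of the test
one_fabric_gbps = 40                       # a single fabric extender
both_fabrics_gbps = 2 * one_fabric_gbps    # 80 Gbps with both fabrics active

print(f"Offered load:        {offered_load_gbps} Gbps")
print(f"One fabric only:     {one_fabric_gbps} Gbps (oversubscribed)")
print(f"Both fabrics active: {both_fabrics_gbps} Gbps (headroom)")
```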

My response to Tolly’s flawed testing was simply to provide the Tolly Group an education in how they should have set up Cisco UCS to meet the criteria of their test.  This is not necessarily how every Cisco UCS configuration should be deployed.  In fact, I have yet to see any customer set up their UCS in such a manner, a testament to how unrealistic Tolly’s botched “gotcha” testing really was.

Most customers set up their Cisco UCS servers with (2) or more vNICs, each using alternating fabrics.  So, YES, you absolutely CAN have a server send 20 Gbps: one vNIC sending 10 Gbps on Fabric A, another vNIC sending 10 Gbps on Fabric B.  Both fabrics handle traffic for all blades.

Cisco UCS has separate fabrics to provide the robust high availability customers expect from a mission-critical, enterprise-class platform.  What’s the downside? Especially when both fabrics are indeed ACTIVE/ACTIVE.

Disagree?

About the Author

Brad Hedlund (CCIE Emeritus #5530) is an Engineering Architect in the CTO office of VMware’s Networking and Security Business Unit (NSBU). Brad’s background in data center networking begins in the mid-1990s with a variety of experience in roles such as IT customer, value added reseller, and vendor, including Cisco and Dell. Brad also writes at the VMware corporate networking virtualization blog at blogs.vmware.com/networkvirtualization

Comments (6)

  1. RyanB says:

    I agree that for most use cases dual fabrics with fabric failover is the best design. The fabrics are “indeed ACTIVE/ACTIVE” insofar as the server can use multiple VLANs, with a static distribution of VLANs across vNICs (and thus fabrics). But that does not cover all use cases; we certainly have application servers that can only make use of a single VLAN, and thus one fabric.
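
    A quick sketch of the static VLAN-to-vNIC distribution described above (a minimal illustration; the round-robin policy and names are hypothetical, not a UCS feature):

    ```python
    # Hypothetical illustration: statically spreading VLANs across two vNICs,
    # and therefore across Fabric A and Fabric B.
    vlans = [10, 20, 30, 40]
    vnics = {"eth0": "Fabric A", "eth1": "Fabric B"}

    # Round-robin: each VLAN is carried by exactly one vNIC (and one fabric).
    assignment = {vlan: list(vnics)[i % 2] for i, vlan in enumerate(vlans)}

    for vlan, vnic in assignment.items():
        print(f"VLAN {vlan} -> {vnic} ({vnics[vnic]})")

    # A server with only one VLAN lands on only one vNIC: Ryan's point.
    ```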

    • Brad Hedlund says:

      Ryan,
      You make a valid point as it relates to a non-virtualized OS running on bare metal (Windows or Linux). By the way, I could be wrong, but I don’t believe Cisco UCS is at a real disadvantage here compared to the other major 10G blade vendors (HP, IBM, Dell). The exception would be a Nexus 2232 & Nexus 5000 pass-through 10GE design with HP, IBM, or Dell, which would allow a single MAC on a single VLAN to use both 10G paths ACTIVE/ACTIVE for transmit and receive (via an 802.3ad NIC team configuration).

      In terms of a Cisco UCS blade server running a hypervisor (e.g. VMware ESX), you certainly could have that server transmitting and receiving traffic on both fabrics on a single VLAN, because it may be hosting multiple VMs on the same VLAN, each forwarding on one fabric or the other.
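
      As a rough illustration (the round-robin pinning below is a hypothetical stand-in for whatever the hypervisor’s NIC teaming policy actually does):

      ```python
      # Hypothetical sketch: VMs on the same VLAN spread across two vNICs,
      # so one host transmits and receives on both fabrics for a single VLAN.
      vms = ["vm1", "vm2", "vm3", "vm4"]            # all on VLAN 100
      uplinks = [("eth0", "Fabric A"), ("eth1", "Fabric B")]

      for i, vm in enumerate(vms):
          vnic, fabric = uplinks[i % len(uplinks)]  # round-robin pinning
          print(f"{vm} (VLAN 100) -> {vnic} on {fabric}")
      ```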

      Cheers,
      Brad

  2. Joe Smith says:

    Brad, once again, thank you for all the in-depth technical information you continue to give us on this extremely informative blog. I use your training as a reference on a regular basis and point others to it.

    I don’t know anyone who knows the Cisco UCS system as well as you do, so I have a few questions that I would love for you to address.

    I regularly hear a few specific arguments critiquing the UCS that I would like you to respond to, please.

    1. The Cisco UCS system is a totally proprietary and closed system, meaning:

    a) The Cisco UCS chassis cannot support other vendors’ blades. For example, you can’t place an HP, IBM, or Dell blade in a Cisco UCS 5100 chassis.

    b) Cisco UCS can only be managed by Cisco UCS Manager; no third-party management tool can be leveraged.

    c) Two Cisco 6100 Fabric Interconnects can indeed support 320 server blades (as Cisco claims), but only with an unreasonable amount of oversubscription. The more accurate number is two 6100s for every four (4) 5100 UCS chassis (32 servers), which will yield a more reasonable oversubscription ratio of 4:1.

    d) A maximum of 14 UCS chassis can be managed by UCS Manager, which resides in the 6100 Fabric Interconnects. This creates islands of management domains, especially if you are planning on managing 40 UCS chassis (320 servers) with the same pair of Fabric Interconnects.

    e) The UCS blade servers can only use Cisco NIC cards (Palo).

    f) Cisco Palo cards use a proprietary version of interface virtualization and cannot support the open SR-IOV standard.

    I would really appreciate it if you can give us bulleted responses in the usual perspicacious Brad Hedlund fashion. :-)

    Thanks!

  3. Raghu says:

    Hi Brad,

    I need clarification on a configuration we want to achieve using a Cisco & VMware solution.

    Current Setup:
    1. B200M2 Blade with VIC/Palo (Firmware 1.4 3U)
    2. UCS 6120 FI – Firmware 1.4 3U
    3. VMware ESXi 4.1.0 build 348481
    4. VMware vCenter Server
    5. Virtual Machines running SUSE Linux

    Requirement:

    We have a requirement where the backup device supports only FC connectivity and can be connected to an MDS switch for access.

    We want to configure FC HBA pass-through to one of the Windows OS servers, where the backup can be configured and scheduled. But we are unable to configure FC HBA pass-through with the above configuration.

    Please advise if I am missing something here, and let me know if you need more information.
