Cisco UCS Networking videos (in HD), Updated & Improved!

Perhaps one of my most popular posts ever is Cisco UCS Networking Best Practices (in HD), posted last June (2010).  So what do you do with a good thing?  You figure out how to make it even better, right? Of course!

On that note I am thrilled to present a new and improved 12-part video series covering Cisco UCS Networking!  This series obsoletes the prior set with new, updated, and re-recorded content inspired by new developments in UCS Manager since the last series.  Much of this content I created for and presented at Cisco Live Europe 2011 (London) in the session BRKCOM-2003 (UCS Networking 201 – Deep Dive) on February 4, 2011.  Thanks to those who attended!  It was a fun session and a great audience.

This content and video series is not really a “Deep Dive” in the true technical sense.  Rather, it is intended to be at more of an intermediate technical level, geared toward the Data Center Architect, Network Designer, or IT Manager, to aid in understanding the overall architecture, best practices, and system-level capabilities Cisco UCS brings to the data center.

Enjoy!


Part 1 – The Physical Architecture of UCS

In this video we take a look at the physical network architecture of Cisco UCS and incorporate the new capability of connecting both blade and rack mount servers to UCS Manager.


Part 2 – Infrastructure Virtualization & Logical Architecture

Here we look at how Cisco UCS virtualizes every significant component of the physical architecture: switches, cables, adapters, and servers. Then we look at how that virtualization creates a simpler logical architecture derived from the physical architecture.


Part 3 – Switching Modes of the Fabric Interconnect

In this video the unique behavior and advantages of End Host mode are discussed, then compared and contrasted with Switch Mode and with a traditional Layer 2 switch.
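
To make the comparison a bit more concrete, here is a small, purely illustrative Python model of the End Host mode forwarding rules covered in the video (no MAC learning on uplinks, each server MAC pinned to one uplink, and the RPF and déjà vu checks). The class and function names are my own invention for illustration, not actual UCS code:

```python
# Purely illustrative model of Fabric Interconnect forwarding in End Host mode.
# The data structures and names are invented for clarity; this is not UCS code.

class EndHostModeFI:
    def __init__(self, uplinks):
        self.uplinks = set(uplinks)
        self.mac_table = {}   # server MAC -> server port (learned on server ports only)
        self.pinning = {}     # server MAC -> the single uplink it is pinned to

    def learn_server(self, mac, server_port, uplink):
        """MACs are learned only on server ports, never on uplinks."""
        self.mac_table[mac] = server_port
        self.pinning[mac] = uplink

    def frame_from_uplink(self, uplink, src_mac, dst_mac):
        """Unicast frame arriving on a border (uplink) port."""
        if src_mac in self.mac_table:
            return "drop (deja vu check: frame sourced from one of our own servers)"
        if dst_mac not in self.mac_table:
            return "drop (unknown unicast arriving on an uplink is never flooded)"
        if self.pinning[dst_mac] != uplink:
            return "drop (RPF check: that server is pinned to a different uplink)"
        return f"deliver to server port {self.mac_table[dst_mac]}"

    def frame_from_server(self, src_mac, dst_mac):
        """Unicast frame arriving on a server port."""
        if dst_mac in self.mac_table:
            return f"switch locally to {self.mac_table[dst_mac]}"
        return f"send out pinned uplink {self.pinning[src_mac]} (never uplink-to-uplink)"
```

Because an uplink never forwards to another uplink and unknown traffic arriving from the network is simply dropped, all uplinks can be active at the same time without creating a loop, which is why End Host mode does not need to run spanning tree toward the upstream switches.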


Part 4 – Upstream Connectivity for SAN

Here we take a look at the different ways to integrate Cisco UCS with the data center SAN using FC End Host mode, and at connecting storage directly to UCS with the new FC Switch Mode.


Part 5 – Appliance Ports and NAS direct attach

In this video we discuss the new Appliance Port and how it can be implemented for connecting NAS or iSCSI directly to the UCS Fabric Interconnect.


Part 6a – Fabric Failover

The unique Fabric Failover capability is explained and its “Slam Dunk” use cases are shown, such as with Hyper-V and bare metal OS installations.


Part 6b – Fabric Failover (cont)

We continue by discussing the potential of using Fabric Failover with VMware software switches and VM-FEX.  The best practice design for integrating Nexus 1000V with Cisco UCS is also briefly discussed.


Part 7a – End Host mode Pinning

Here we take a look at the dynamic and static pinning behavior of End Host mode, and how load balancing works.
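
As a companion to the video, here is a tiny Python sketch of the idea behind dynamic pinning. It is my own illustration rather than UCS internals, and the round-robin distribution is just a stand-in for however UCS Manager chooses to balance vEths across uplinks:

```python
# Illustrative sketch of dynamic pinning in End Host mode (not UCS code):
# each vEth is pinned to exactly one uplink, the vEths are spread across the
# available uplinks, and only the vEths on a failed uplink are re-pinned.

from itertools import cycle

def pin_veths(veths, uplinks):
    """Distribute vEth interfaces across the active uplinks (round-robin here,
    purely for illustration)."""
    rr = cycle(sorted(uplinks))
    return {veth: next(rr) for veth in veths}

def repin_after_failure(pinning, failed_uplink, uplinks):
    """Only the vEths pinned to the failed uplink move; everything else stays put."""
    survivors = cycle(sorted(set(uplinks) - {failed_uplink}))
    return {veth: (next(survivors) if up == failed_uplink else up)
            for veth, up in pinning.items()}

veths = [f"vEth{i}" for i in range(1, 9)]
uplinks = ["Eth1/1", "Eth1/2", "Eth1/3", "Eth1/4"]

pinning = pin_veths(veths, uplinks)
print(pinning)                                          # 8 vEths spread over 4 uplinks
print(repin_after_failure(pinning, "Eth1/1", uplinks))  # only Eth1/1's vEths move
```

Static pinning works the same way except the administrator chooses the uplink (or pin group) per vNIC, the trade-off being that if the chosen target fails the vNIC is brought down rather than quietly re-pinned (which is where Fabric Failover can step in).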


Part 7b – Upstream LAN connectivity

In this video we look at the different ways to uplink UCS to the upstream network, how failure scenarios are handled, and how individual uplinks compare to port channel uplinks and vPC uplinks.
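
To put a rough number on one of those differences, the sketch below (my own simplified model, not exact UCS behavior) counts how many vEths are disturbed when a single physical link fails: with individual uplinks every vEth pinned to the dead link must be re-pinned, while a port channel keeps forwarding on its surviving members and no vEth moves at all:

```python
# Rough, illustrative comparison of losing one physical link when uplinks are
# individual ports versus members of a single port channel (or vPC).

def veths_disturbed_individual(pinning, failed_link):
    """Individual uplinks: every vEth pinned to the failed link must re-pin."""
    return [veth for veth, link in pinning.items() if link == failed_link]

def veths_disturbed_port_channel(num_members, failed_members=1):
    """Port channel: the bundle stays up as long as one member survives, so no
    vEth is re-pinned; only the bundle's aggregate bandwidth shrinks."""
    if num_members - failed_members > 0:
        return []
    return "port channel down: all of its vEths must re-pin or go down"

pinning = {"vEth1": "Eth1/1", "vEth2": "Eth1/2", "vEth3": "Eth1/1", "vEth4": "Eth1/2"}
print(veths_disturbed_individual(pinning, "Eth1/1"))   # ['vEth1', 'vEth3']
print(veths_disturbed_port_channel(num_members=2))     # []
```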


Part 8 – Inter-Fabric Traffic and Recommended Topologies

This video examines different examples of inter-fabric traffic and the recommended topologies for linking UCS to the upstream LAN network that provide the best handling of all traffic flows.


Part 9 – Connecting UCS to Disjointed L2 Domains

Here we discuss the problems you can run into when connecting UCS to separate Layer 2 networks, and ways to make it work.


Part 10 – Gen2 Adapters

This is a brief video covering the new Gen2 adapters from Emulex, QLogic, Broadcom, and Intel. The Cisco VIC (Palo) adapter is also discussed, along with its unique VM-FEX integration with VMware vSphere.


Part 11 – Cisco VIC QoS

In this video we take a deeper look at the advanced QoS capabilities of the Cisco VIC, and how that can be leveraged in server virtualization deployments as one example.
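
As one way to picture what those QoS capabilities mean on the wire, here is a small Python sketch that turns per-class weights into minimum bandwidth guarantees on a 10Gb converged link, in the spirit of the ETS-style system classes UCS uses. The class names and weights are made-up examples for illustration, not a recommended policy:

```python
# Illustrative only: converting per-class weights into minimum bandwidth
# guarantees on a 10Gb converged link. The classes and weights below are
# made-up examples, not a recommended UCS QoS policy.

LINK_GBPS = 10

classes = {
    "fc":          5,   # FCoE storage traffic (lossless class)
    "vmotion":     3,
    "vm-data":     4,
    "best-effort": 2,
}

total_weight = sum(classes.values())
for name, weight in classes.items():
    share = weight / total_weight
    print(f"{name:12s} guaranteed >= {share * LINK_GBPS:.1f} Gb/s ({share:.0%})")
```

The guarantees only matter during congestion; when the other classes are quiet, any one class can burst toward the full 10Gb/s of the link.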


Part 12 – SPAN and IPv6

In closing, the comprehensive SPAN capabilities of UCS are briefly discussed. Also, I pay some lip service to IPv6 (grin).



Disclaimer:  The views and opinions expressed are those of the author, and not necessarily the views and opinions of the author’s employer.  The author is not an official media spokesperson for Cisco Systems, Inc.  For design guidance that best suits your needs, please consult your local Cisco representative.

Comments

  1. toni says

    Hi,

    Good work! Any ideas if VM-FEX will support more links with fewer FEX uplinks? Right now it’s quite unusable if you have e.g. 2 uplinks per FEX. You get so few interfaces per ESX host that the consolidation ratio is quite low (as far as I know at least). And one other problem is that if you run out of ports, ESX doesn’t know it and you can still vMotion a VM to an ESX host which doesn’t have a network available for that VM… it’s a bit problematic imho :)

    • Brad says

      Hi toni,
      With (2) FEX uplinks, you have 28 VM-FEX ports to use per VIC in that chassis. That’s not too bad. Consolidation ratios of 30:1 are generally viewed as the new normal, if not aggressive.
      If you attempt to vmotion a VM to a server without any VM-FEX ports available the operation should error and the VM will stay put. Are you not seeing that?

      Cheers,
      Brad

  2. says

    Great stuff, Brad! From beginner UCS, to advanced FEX networking. Love it! Thanks so much for making these! Should be a pre-req for every SE selling UCS-based solutions.

  3. Pedro García says

    Hi Brad,

    I have to congratulate you on such a great explanation. One thing I think is not clear is the reason why switch mode is not recommended on the FI, and this is something I can’t find in Cisco documentation. Is it a matter of performance?

    thanks

  4. Baron Schon says

    Brad,

    Thanks for all your good work on these videos; they are really well done and some of the best training on UCS networking that I have seen!

    Baron Schon, SE
    Midwave Corporation
    Minneapolis, MN

  5. Mohamed says

    Really good work. I have a couple of questions. The first is about UCS HA: why do I have to connect 2 ports (L1 & L2) between the 6100s, and if one of the 2 ports fails, will failover still happen? My second question is about VLAN creation: you have the option to create the same VLAN name with a different ID on each 6100. My question is why?

  6. Adriaan says

    Hi Brad.
    Great work!!!! These video clips are really helpful. Would it be possible to post a video clip on VN-Tagging and VIC card limitations? Your thoughts on best practice deployments on that would be great.
    thanks!
    Adriaan

  7. joe smith says

    Great videos, Brad. Love ’em….

    I’m wondering if you can clear up one thing for me…

    What exactly does a UCS FEX do besides aggregate the server traffic and push it up to the Fabric Interconnects? What I am wondering is what functionality/technology does it need to possess to do that?

    From my understanding, a conventional blade switch needs to be DCB-capable (support at least PFC and ETS) and do FIP snooping to be used as an FCoE pass-through. Is that the case with a UCS FEX? I don’t think so… it seems like nothing more than a “dumb” MUX that has no intelligence, no code to upgrade, and simply passes DCB traffic between the CNA and the FI…

    What am I missing, if anything?

  8. Jeppe says

    The only thing the Nexus 4000 blade switch is missing is stacking capability. Do you know if this is on the roadmap for the future? We use BNT today, only because the Nexus 4000 isn’t stackable.

  9. Raman Azizian says

    Hello Brad,
    Thanks for providing these videos. They have helped me tremendously to gain a faster understanding of the overall components of the UCS system.
    Could you kindly provide an answer to the following questions:
    If our network today has two separate Ethernet fabrics, Fabric-Front-End and Fabric-Back-End,
    can two network uplinks be configured on a pair of 6140s to each respective network?
    There is no interconnect between the two fabrics for security reasons. The 6140 would sit between the FE-Fabric and the BE-Fabric. The FE-Fabric would advertise 7 VLANs to the 6140, and the BE-Fabric would advertise 2 VLANs.
    I was told that this may be very challenging, and I wanted to see if that is the case.
    I can provide a drawing if you would like to see that.
    Thank you in advance for any help you can provide.

    Regards,
    Raman Azizian
    SAIC/NASA Data Center

  10. imran says

    Hi Brad,

    Your sessions are great. I have a small request: could you explain in detail the concept of VLANs, especially the native VLAN, in UCS? How does a VLAN in Cisco UCS map to the VLAN required by the VMware virtual machines?

  11. vivek says

    Hey Brad, you have made life easier for so many SEs like me. The videos really take your knowledge of the UCS solution from naive to expert level.
    It’s like a 5-day UCS bootcamp done in 2 hours (a bootcamp on steroids).
    Carry on the good work.

  12. Gops says

    Excellent videos Brad.
    I have a basic question: I can’t understand why the Emulex and QLogic adapters can’t support fabric failover. I believe those cards are connected to both FEXes within the UCS chassis, so why can’t they support fabric failover?

    • Brad says

      Gen1 Emulex and QLogic adapters DO support fabric failover because they have a Cisco ASIC called Menlo, in addition to the other Eth and FC chips.

      Gen2 adapters from E&Q do NOT support fabric failover because each is based on a single (non-Cisco) ASIC that does not implement fabric failover.

      Cheers,
      Brad

  13. Peter Slow says

    Brad,
    I’m being forced to fit a UCS Cluster into an environment that doesn’t support VSS or vPC upstream from the fabric interconnects. The L2 domain _does_ span the two upstream cats. On each of those switches there is also a VLAN interface participating in HSRP – one of the Cats is HSRP Active for all networks/VLANs. Since my customer’s current configuration has one switch acting as the “Master” for all traffic, and given the recommended practice of connecting each fabric IC to both upstream switches, does it make sense for me to configure pinning manually in this sort of configuration? The point would be to get all traffic flowing over the master/root/active/whatever switch, and avoid what would otherwise be a suboptimal L2 path with async. L3 routing.

    So: pinning or no pinning (manually) in my situation? And by doing so, what other things would I be affecting or potentially breaking? Would there be concerns with how failover would work should I do this?

    Also, I have two environments like this for two different customers. One uses the N1K, the other is still using the cheap-o vSwitch in the basic version of ESXi.

    Thanks for your time,
    Peter Slow

    • Peter Slow says

      Additionally, I have a detailed network diagram that I’d love to show you along with my question, if that might make the topology clearer.

    • Brad says

      Peter,
      It might be easier to just run the UCS FI in switch mode, and align your HSRP/STP priorities at the upstream switches. You’d get the same result with much less manual configuration.

      Cheers,
      Brad

  14. Rodrigo says

    I just finished viewing all your videos and I would like to truly thank you for making this available to the community.

    Good luck with your new journey at Dell Force10.

    Regards,

  15. Maung says

    Very informative.
    Could you also touch on the requirement (or non-requirement) of a native VLAN, and what role the native VLAN
    plays in the UCS environment?

    Thanks

  16. Tushar Gupta says

    Hi Brad!

    I would like to thank you for the awesome videos on UCS. They are really a good source of knowledge and have helped me a lot in enhancing my understanding of this technology.
    I am currently working on UCS technology in India. I have one query regarding storage assignment.
    I have successfully configured storage for my blade servers. I have a UCS 5108 chassis (with 8 half-width blades), a Fabric Interconnect 6120, and a Nexus 5020. I have a requirement to allocate SAN storage to my existing MCS infrastructure. I have an EMC VNX 5300 as SAN storage. Could you please tell me how to allocate storage in that case? Do I have to use a Nexus 2248 to connect to my Nexus 5K and then provide storage? I have a Cisco MDS 9124 connected to my Nexus 5K.
    Please suggest.

    Regards,

    Tushar Gupta
    M: +91-9873171839
    New Delhi, India

    Skype id: tushar.gupta1720

  17. George says

    Hi Brad,

    I have learnt so much from watching these UCS videos, including the advantages and ROI of implementing such a system. The FCoE and vPC explanations, and all the rest, were really good.

    Best Regards
    George

  18. HD says

    Hi Brad,

    These videos were very helpful. Thank you for uploading them. I had a quick question: when we use 6509s as uplink routers without VSS, can we use a full mesh topology without causing spanning tree loops? The UCS will be configured in End Host mode. The video “Inter-Fabric Traffic and Recommended Topologies” says that’s doable, but I am not sure how that can be done without causing spanning tree loops. Can you please elaborate a little more?

    Thanks!

  19. George says

    Hi Brad,

    I have never seen or heard such a good explanation of the UCS principles. I am really hooked on this website and I have learnt a lot about UCS in its entirety since I started watching these videos. A BIG thank you for these fantastic videos. God bless you.

  20. Gopsblogger says

    Hi Brad,
    Nice videos! I want to expand my skill set on compute. Could you suggest some good books for UCS, N1k, and storage?

  21. Nalendra Wibowo says

    I was kind of late finding this site… December 2012… hmmm.
    I will need to come back again and again… I really do thank you… yes, this has been very useful for me, and I believe for many, many others 😀 :) … awesome work and a great contribution to all of us who need this knowledge.

    Great!!!!

  22. Sara says

    Hi Brad,
    Excellent job!!!! These are great videos!!!
    I have a question about the vNIC identifiers in the videos. On most slides various servers have the same vNIC identifier; for example, in the video for Part 7a – End Host mode Pinning, all servers have vNIC0. Is this correct, or should different servers have different vNIC “names”? I mean, how does the FI know which vEth belongs to which vNIC?
    Thanks in advance,
    Sara

  23. Ryan Ticer says

    Hi Brad,

    Is it possible to have a port channel (vPC of sorts) between the Fabric Interconnects going to service profiles? The VMware best practice for NFS calls for LACP on the VMware host; otherwise only one path is used at a time for access, wasting half the available bandwidth. Just curious if there’s a way to make this work.

    Thanks!

    Ryan

      • Mike says

        Brad, we’ve already purchased our 5108 and two 6248s, but I found out that VSS doesn’t support my 4500-E classic line cards and we might not be able to replace them. I might be stuck with using two separate 4506-E chassis with a port channel between them running HSRP. I noticed that in your video 7B you had a picture of two Layer 3 switches connected by a port channel (no VSS or vPC), with both fabric interconnects connected to both L3 switches with two port channels each. Is it possible to run EHM on the FIs and actually have four active forwarding paths? I’m assuming dynamic pinning would be supported? Thanks for your reply!

        • Brad says

          Hi Mike,
          Short answer is, Yes. You don’t need vPC or VSS on the upstream switches to have all uplinks forwarding on your UCS FIs.
          The default End Host mode is all you need; just let dynamic pinning do its thing. vPC or VSS on the upstream switches is a nice-to-have, not a need.

  24. Nav says

    I am not able to play these videos continuously; I keep getting the error “The connection was reset by the server”. Kindly help. I am able to play the videos uploaded earlier in 2010.

  25. Drew says

    Hi Brad,

    Thank you for posting these videos; they are great sources of design and implementation knowledge and training.

    In Part 8, you discussed various inter-fabric designs with all uplinks forwarding for all VLANs.

    I am curious: if all uplinks are not forwarding for all VLANs, what is the potential for déjà vu check drops, if any, when traffic must travel through an upstream device?

    Regards

Trackbacks

  1. […] This will be a quick post of hopefully many centered around experiences I have deploying and working with the Cisco Unified Computing System in my current environment. I won’t delve into UCS in its generalities as high-level overviews are all over YouTube and very good in-depth articles can be found on various blogs, the likes of Brad Hedlund. […]

  2. […] A well-engineered physical network always has been and will continue to be a very important part of the infrastructure. The Cisco Unified Computing System (UCS) is an innovative architecture that simplifies and automates the deployment of stateless servers on a converged 10GE network. Cisco UCS Manager simultaneously deploys both the server and its connection to the network through service profiles and templates; changing what was once many manual touch points across disparate platforms into one automated provisioning system. That’s why it works so well. I’m not just saying this; I’m speaking from experience. […]

