Nexus 5000 & Nexus 2000: New technology requires new thinking

Filed in Nexus on February 9, 2010

I sometimes hear or read complaints about the Nexus 5000 + Nexus 2000 fabric extender architecture that I want to take a minute to address.  This should be short and sweet, a blogging concept that is foreign to me if you follow my work. ;-)

The typical complaints about this architecture from network engineers are as follows.  These are direct quotes from a recent conversation:

“Unfortunately, Nexus 2000 is just a fabric extender and can ONLY be attached to Nexus 5000…”

“I haven’t figured out yet what’s the advantage of having this design (nexus 2000 -> nexus 5000) other than the “old” one (catalyst 4948 -> nexus 7000/cisco 6500).”

“The Nexus 2000 does no local switching so if you have any east-west traffic between ports on the same switch you’ll be better served by a more traditional access switch”

The Nexus 2000->5000 design does require looking at things a bit differently than you have in the past. Data Center architecture is changing fast due to the rapid onset of Data Center virtualization. Server & Storage administrators have been struggling with this change as well; this isn’t something unique to the Network.

There is a tendency to view the Nexus 2000 as a switch. And understandably so because it’s packaged like a switch, looks like a switch, and installs in the rack like a switch. Because of this perception it’s easy to subject it to the typical switch design criteria. But in doing so you begin an exercise that leads to more frustration than clarity because you are applying old thinking to new technology.

It makes more sense to view the Nexus 2000 as a linecard that has been pulled out of a switch, packaged up in sheet metal, and had its backplane connections to the supervisor engine replaced with SFP+ ports. You now have a linecard that connects to its supervisor engine with cables.

Why is that significant? Because it reduces the complexity (and therefore total cost of ownership) of adopting a Data Center virtualization architecture.

For example, ten Nexus 2000s are managed no differently than ten linecards. I think we can all agree that a linecard requires a lot less management than a switch.
It also allows the Data Center to grow into the larger L2 domains required by virtualization by minimizing the number of L2 nodes, because the Nexus 2000 attaches to the data center at Layer 1 rather than Layer 2.
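
To make the “linecard with cables” idea concrete, here is a minimal Python sketch, with assumed FEX IDs and port counts, of what ten fabric extenders look like operationally: every host port shows up as an interface of the single parent Nexus 5000 (using the familiar Ethernet<fex-id>/1/<port> naming), so there is one configuration, one software image, and one management point.

```python
# A minimal sketch (not from any real deployment): ten fabric extenders
# behave like ten linecards of one parent Nexus 5000. The FEX IDs and port
# counts below are assumptions; the Ethernet<fex-id>/1/<port> naming follows
# the NX-OS convention for fabric-extender host interfaces.

FEX_IDS = range(100, 110)      # ten Nexus 2000s, one logical switch to manage
HOST_PORTS_PER_FEX = 48        # e.g. 48 x 1GE host-facing ports per FEX

def host_interfaces(fex_id, ports=HOST_PORTS_PER_FEX):
    """Interface names as they would appear on the parent Nexus 5000."""
    return [f"Ethernet{fex_id}/1/{p}" for p in range(1, ports + 1)]

all_ports = [intf for fex in FEX_IDS for intf in host_interfaces(fex)]

print(f"Management points: 1 parent switch, not {len(FEX_IDS)} switches")
print(f"Host ports configured from that one switch: {len(all_ports)}")
print(f"First port of FEX 100: {all_ports[0]}")   # -> Ethernet100/1/1
```

Adding another cabinet means associating another FEX ID on the same parent, not standing up and managing another switch.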

Business leaders are hearing a lot about cloud computing these days, and the cost advantages it promises to the business. Yet there is a valid concern with data privacy and security that comes with public cloud computing. If internal IT can transform their data centers into a private cloud, or at least drastically improve the operational efficiency and total cost of ownership of their own data centers, then the wholesale outsourcing of data center applications to the public cloud becomes less attractive to business leaders.

In other words, wake up and smell the cloud.  Your career may depend on it.

About the Author

Brad Hedlund (CCIE Emeritus #5530) is an Engineering Architect in the CTO office of VMware’s Networking and Security Business Unit (NSBU). Brad’s background in data center networking begins in the mid-1990s with a variety of experience in roles such as IT customer, value added reseller, and vendor, including Cisco and Dell. Brad also writes at the VMware corporate networking virtualization blog at blogs.vmware.com/networkvirtualization

Comments (9)


  1. Brian says:

    My only real problem with the Nexus 5000+2000 combo is that it doesn’t seem to map very well to smaller data centers. I’d *love* to do top-of-rack N2Ks with N5K aggregation, but I don’t *need* an N7K (and can’t justify one) when the data center only has 16 cabinets and a single DS3 WAN circuit. With the N5K unable to do inter-VLAN L3 switching, I’m in a bind.

    • Brad Hedlund says:

      Brian,
      I hear you loud and clear. The Catalyst 4900M’s small form factor and high performance fit nicely as the 10GE L3 switch aggregating the Nexus 5000s in these small footprint environments.

      Cheers,
      Brad

      • jose liloia says:

        Hi Brad,

        I’ve got a question for you… I’ve got a Nexus 5000 and I’m trying to connect it to a Cisco 4948E. The transceivers are:

        Nexus 5000: SFP-10G-SR=

        Cisco 4948: X2-10GB-SR

        Is that possible?

        Should I do another configuration?

        Thanks a lot.

  2. Livio says:

    Well,

    Can’t I make “the cloud” with traditional switches (the 4948, for example)? As I’ve said before, my only concern is that I’ll lose A LOT of access ports on the Nexus 5000 that could be used by servers with 10GE/FCoE. Again, the only reasons you are giving me to use this design are “ease of management” and vPC.

    So, weighing it in the balance, I see more losses than benefits. What’s the big problem with connecting to another device to manage it? Is this really a big loss? It’ll take five more minutes to provision a service. I don’t think that this is the best benefit of this design. I would really appreciate having all switches of the same series managed by the same program (Cisco DCNM); unfortunately, I think we are going the other way. Losing 20 access interfaces isn’t a good option for me…

    I’m not talking about a huge data center. I will only need ten 1G switches for the next few years, so a “big L2 domain” isn’t too much trouble for me. If you could explain this problem better, maybe I’ll change my mind…

    I’m expecting that 10G (with FCoE in some cases) will dominate server designs, so my loss will be huge. I’ll maintain 1Gbps only for backward compatibility (10 years? hehehe).

    If the Nexus 2000 could be attached directly to a Nexus 7000 (it shouldn’t be that difficult to make that work), the design would be a perfect fit for our needs…

    I’ll send this to the list too.

    By the way,
    Nice blog.
    []‘s

  3. Chris Stand says:

    Livio,

    If you need storage over Ethernet and don’t yet have support for FCoE in your storage or server products, take a look at a recent VMware document on Exchange performance using Fibre Channel, iSCSI, and NFS – http://blogs.vmware.com/performance/2009/07/exchange-performs-well-using-fibre-channel-iscsi-and-nfs-on-vsphere.html

    The testing used generic Intel NICs vs. dedicated Fibre Channel HBAs and showed iSCSI and NFS (our interest is iSCSI, which unlike FC can be port-channeled and run over distances greater than the 300m FCoE limit) to be very close in performance. If the testing had used hardware iSCSI adapters, it likely would have beaten native Fibre Channel.

    From your postings … the N5K & FEXes don’t seem to be candidates for your environment – a couple of 3750Gs would probably be ideal, and they could handle limited storage over Ethernet/IP in whatever fashion you need now.

    • Joe Smith says:

      Chris, I think you make a valid point.

      Before the advent of 10GE, iSCSI was viewed as an up-and-coming storage technology that was largely relegated to SMB environments. But in the last 5 years, iSCSI has developed tremendously and offers a robust set of storage solutions. And with 40GE coming within the next 18 months or so, I think the FC people are damn worried.

      iSCSI has advantages over FC in that it runs natively over Ethernet; is routable across WANs without any gateways or protocol encapsulation; can transfer data in 9 KB jumbo frames, not just FC’s ~2 KB frames; and it’s cheaper.

      My thoughts…
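
To put rough numbers on the jumbo-frame point above, here is a back-of-the-envelope Python sketch; the header sizes are standard Ethernet/IPv4/TCP values, and iSCSI PDU overhead is deliberately ignored.

```python
# Back-of-the-envelope: payload efficiency of TCP/iSCSI traffic at a standard
# vs. jumbo Ethernet MTU. Header sizes are the usual Ethernet (14) + FCS (4) +
# IPv4 (20) + TCP (20) bytes; iSCSI PDU headers, preamble, and inter-frame gap
# are ignored to keep the arithmetic simple.

for mtu in (1500, 9000):
    payload = mtu - 20 - 20              # IPv4 and TCP headers live inside the MTU
    wire_bytes = mtu + 14 + 4            # add the Ethernet header and FCS
    efficiency = payload / wire_bytes
    frames_per_gb = 1_000_000_000 / payload
    print(f"MTU {mtu}: {efficiency:.1%} payload efficiency, "
          f"~{frames_per_gb:,.0f} frames per GB moved")
```

The wire-efficiency gain is modest; the bigger win is roughly six times fewer frames for hosts and switches to process per gigabyte moved.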

  4. Dumlu says:

    Hi Brad,

    I am a Cisco-oriented networking consultant (but have also worked hands-on with, and designed, Nortel, Juniper, Enterasys, and HP infrastructures) who has been in this area for 7 years.

    It seems like you really get angry if people hide their identity, so I’ve tried my best :) Just kidding…

    We are on the verge of setting up a new Data Center with almost 100 cabinets.
    We recently had a meeting with Cisco. Cisco’s proposal included 2 N7Ks (one vPC pair), 14 N5Ks (7 vPC pairs), and 70 N2Ks.

    Today Brocade visited us. They emphasized the power dissipation of the N5Ks and N2Ks compared to the Brocade FCX and VDX series.

    There is also the same topic raised here in this blog entry: the N2K does not support local switching. This really pushes us to rethink the solution again and again… How can someone be really sure, when designing an infrastructure from scratch, that no local switching will be needed between two ESX or physical servers in the same cabinet? Doesn’t this design process involve all of the network, storage, and application people?

    Thanks in advance.

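On the local-switching question raised above, the trade-off can at least be sized with simple arithmetic. Below is a rough Python sketch; the port counts (48 x 1GE host ports and 4 x 10GE fabric uplinks per Nexus 2000) are assumptions, so adjust them for the actual FEX model being proposed.

```python
# Rough sketch of the local-switching trade-off. Because a Nexus 2000 does no
# local switching, all traffic goes to the parent Nexus 5000 over the fabric
# links. Port counts here are assumptions; adjust for the actual FEX model.

HOST_PORTS = 48          # e.g. 48 x 1GE server-facing ports per FEX
HOST_SPEED_GBPS = 1
UPLINKS = 4              # e.g. 4 x 10GE fabric links to the parent Nexus 5000
UPLINK_SPEED_GBPS = 10

host_bw = HOST_PORTS * HOST_SPEED_GBPS       # 48 Gbps of server-facing capacity
fabric_bw = UPLINKS * UPLINK_SPEED_GBPS      # 40 Gbps toward the parent switch

print(f"Worst case, every host port busy: {host_bw / fabric_bw:.1f}:1 oversubscription")

# Traffic between two servers on the same FEX is switched by the parent, so it
# occupies fabric-link bandwidth in both directions (up to the Nexus 5000 and
# back down) and picks up that extra hop's latency, instead of staying local
# the way it would on a traditional access switch.
eastwest_gbps = 5
print(f"{eastwest_gbps} Gbps of intra-cabinet traffic occupies "
      f"{eastwest_gbps} Gbps of the {fabric_bw} Gbps fabric capacity in each direction")
```

Whether that matters comes down to how much server-to-server traffic really stays inside one cabinet, which is exactly why the question belongs to the network, storage, and application teams together.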

    • Joe Smith says:

      Dumlu:

      I wouldn’t put the FCX in the same category as the Nexus by any stretch of the imagination. That’s an old Foundry switch that was mediocre at best before Brocade bought it. Moreover, in terms of features and architectural implications, it’s on another planet entirely.

      The VDX, however, is a totally different story. That is a true Brocade product, it sounds very promising, and it is built as a solution for the next-generation data center.

      I would seriously look into that. Brocade has an interesting story to tell.

      It has a drawback in that it does not have native FC ports for unified fabric applications. It will in about 6 months or so, they say. But to be honest, I don’t think FCoE is an architecture I would bet the family farm on right now anyway. It’s immature, and the enabling technologies are not fully developed either. I wouldn’t touch it with a 10-foot pole for at least another 3 to 5 years, if ever. And there is absolutely no reason to buy into it now anyway. What’s the rush? Traveling down the path of unified fabrics is a long journey; the evolution to FCoE is a slow process, and the endgame is a permanent commitment. So, if there is indeed value in a tried and proven FCoE architecture 5 years from now, then you may want to consider it at that time. Until then, I wouldn’t worry about it too much.

      Just my thoughts on that one….

      Thanks

  5. Joe,

    Actually, to be honest with you, one shouldn’t care how old the FCX series is as long as it is still in its lifecycle, still supported, and runs OK. All I would care about in a ToR switch are the latency, switching throughput, and power consumption values. In those terms, the FCX is very promising compared to the N2Ks; however, I have no idea about the list prices. BTW, it seems like one cannot find the latency values of the Nexus 2Ks anywhere in the datasheets. In fact, the first figure I have come across has been in this blog, from Brad (he stated 15-20 microseconds). So latency values play an important role when two end hosts are trying to use the full capacity of a 1G interface: with a 3-microsecond versus a 15-microsecond latency, the bandwidth that can actually be used in a second differs dramatically.
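
To illustrate that last point with assumed numbers: the sketch below uses 1500-byte frames on a 1 Gbps link with one frame outstanding at a time; only the 3 µs and 15 µs figures come from the comment above.

```python
# Rough sketch: how per-hop latency limits a strictly serialized transfer
# (one frame outstanding at a time, request/response style) on a 1 Gbps link.
# Only the 3 us and 15 us figures echo the comment above; the frame size,
# link speed, and "one outstanding frame" assumption are illustrative.

FRAME_BYTES = 1500
LINK_BPS = 1_000_000_000
serialization_s = FRAME_BYTES * 8 / LINK_BPS        # ~12 us to clock out one frame

for one_way_latency_us in (3, 15):
    rtt_s = 2 * one_way_latency_us / 1_000_000      # out and back through the switch
    per_frame_s = serialization_s + rtt_s           # sender waits for each frame to be acknowledged
    goodput_mbps = FRAME_BYTES * 8 / per_frame_s / 1_000_000
    print(f"{one_way_latency_us:>2} us per hop -> ~{goodput_mbps:.0f} Mbps of the 1000 Mbps link")
```

With pipelining (a normal TCP window keeping many frames in flight) the gap largely disappears for bulk transfers, which is why the latency difference matters most for chatty, synchronous workloads.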
