Cisco Nexus 5000 Announced Today

What is the new Cisco Nexus 5000? — The industry's first switch to deliver unified server I/O, carrying both Fibre Channel and IP traffic over a single 10G Ethernet port to the server.  The Nexus 5000 delivers very low-latency, wire-speed, lossless Ethernet service to the server.

Nexus 5020

As you can see from the photo, the Nexus 5000 does not have RJ45 ports; rather, it uses SFP+ ports, which can be populated with an SFP+ twinax copper cable to deliver 10GE over copper down to the server.  Why not RJ45 10GBASE-T?  Two major reasons: power and latency.

Power consumed (each end) by 10GBASE-T = ~8W

Power consumed (each end) by SFP+ Coax = ~0.1W

Latency of 10GBASE-T = ~2.5us

Latency of SFP+ Coax = ~0.25us
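To put the figures above in perspective, here is a back-of-the-envelope sketch using the quoted per-end numbers (the 40-port rack size is a hypothetical assumption, not from the announcement):

```python
# Rough per-rack comparison of 10GBASE-T vs SFP+ twinax, using the
# per-end figures quoted above. The port count is a hypothetical example.
PORTS_PER_RACK = 40

watts_10gbase_t = 8.0   # ~8 W per end
watts_sfp_coax = 0.1    # ~0.1 W per end

# Each link has two ends: the switch side and the server side.
power_per_link_t = 2 * watts_10gbase_t      # 16.0 W per link
power_per_link_coax = 2 * watts_sfp_coax    # 0.2 W per link
rack_savings = PORTS_PER_RACK * (power_per_link_t - power_per_link_coax)

latency_ratio = 2.5 / 0.25  # PHY latency, 10GBASE-T vs SFP+ coax

print(f"Power saved per rack: {rack_savings:.0f} W")          # 632 W
print(f"SFP+ coax is {latency_ratio:.0f}x lower latency")     # 10x
```

Even at this rough level, the twinax choice saves hundreds of watts per rack and an order of magnitude in PHY latency.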

You could also use fiber SFP+ optics to the server, but at the disadvantage of cost.  The SFP+ copper cable, on the other hand, is expected to be in the $100-or-less range.

This is the cable you can buy for downlink connectivity to the server.  As you can see, the SFP+ connector is soldered to the twinax copper cable at the factory:

What is the downside of this SFP+ twinax copper cable? — Max distance = 10m

This means your SFP+ twinax copper stays within the rack, as it does not have the reach to run throughout the data center.

The Nexus 5000 is therefore a Top of Rack switch.  You then run fiber from your Nexus 5000 (at the top of the rack) to the 10GE aggregation point (Nexus 7000) for traditional IP connectivity, and fiber from your Nexus 5000 to your SAN fabric (MDS 9500).

What is the impact of this? — Is Top of Rack clearly the way to go now?  Traditionally Cisco has never picked sides in the Top of Rack vs. End/Middle of Row debate in data center infrastructure cabling — we accommodate both implementations very nicely. However given that 10GE server connectivity appears to be going the SFP+ direction, does that mean we will start to encourage customers to give Top of Rack more consideration?

Or, will the large existing installed base of End/Middle-of-Row Cat6 push Cisco to deliver a Nexus 5000/7000 with 10GBASE-T, notwithstanding the power and latency issues that come with it?  Only time will tell.


Related post: Top of Rack vs End of Row Data Center Designs


  1. says


    We are still not advocating a particular access layer topology. With the launch in January, we may have 10GbE available for end-of-row, rack, and blade form factors. We also noted that the Nexus will be a family of unified fabric switches. So, while the N5K is the first form factor available that delivers FCoE, it's not the last, and in the long run Cisco customers will still have the freedom to design the access layer that best meets their needs without forgoing any functionality.


  2. Kai says

    That's too bad, as this switch would be perfect for converged networking and high-performance storage even for just one rack. I guess I'll have to cross my fingers for a pair of down-scaled 5010's. :)

  3. says

    FYI – in a future version of the software, the first 15 ports of the Nexus 5000 will be able to support 1G. The port ASICs on the Nexus 5000 are capable of 1G; it's just that the current version of NX-OS is not yet capable of recognizing a 1G SFP. The next rev of NX-OS, which will recognize 1G SFPs, should be available this year.

    • says

      Will a 3rd party SFP+ Cu cable work? YES! (Keep in mind Cisco has currently certified only 5-meter cables; if you buy a 3rd party cable longer than 5 meters, you are taking a risk.)

      Is a 3rd party SFP+ Cu supported by TAC? Only the Cisco cables are currently (as of 6/5/09) TAC supported.

      Will TAC hang up on you? NO!

      If all troubleshooting options have been exhausted, TAC does reserve the right to ask you to replace the 3rd party cable with a Cisco cable to continue the troubleshooting process.


    • says

      If I use the 5000 today and put it in my network, do I have to disable VNTags?

      No. VNTag capabilities are in hardware waiting to be provisioned. The Nexus 5000 forwards frames with or without VNTags by default.

  4. says

    @etherealmind I have his name, email, and IP address if anybody questions it. Didn’t think it was necessary to post all that info … but my mind could be changed if he keeps it up.

  5. Chris Lee says

    Please excuse my ignorance here, but I'm looking at possibly using VNTag support in my new VM infrastructure. How do you go about enabling VNTag on the Nexus 5000, and what commands do you have to run? Also, when I turn on VNTag support, does this effectively turn off the Nexus 1000V VSM and VEM functions? Thanks for the reply.

    • says

      How do you go about enabling vntag on the Nexus 5000, what commands do you have to run?

      Quoting Mr. Miyagi … "Patience, Daniel-san" … Using VNTags on the Nexus 5000 is not yet possible in NX-OS. The hardware is VNTag-ready; the software is not. When the software is ready (end of 2009), more information will be available on how to provision and configure VNTags.

      Also, when I turn on VNTag support, does this effectively turn off the Nexus 1000V VSM and VEM functions?

      Not at all. VNTags and Nexus 1000V are not mutually exclusive. You might be using VNTags to support a NIV capable virtual adapter. The single virtual adapter is then exposing multiple logical adapters to the ESX host, which could be using those logical adapters as uplinks for the Nexus 1000V. If, on the other hand, the virtual adapter is being used for VMDirectPath, where each VM connects directly to its own logical adapter on the physical adapter bypassing the hypervisor, Nexus 1000V is no longer needed on that host.
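The coexistence described above can be pictured with a small conceptual model. This is an illustration only, with assumed names (`LogicalAdapter`, the `mode` strings), not an actual Cisco or VMware API:

```python
# Conceptual model of a NIV-capable physical adapter exposing multiple
# logical adapters. Names and modes are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class LogicalAdapter:
    name: str
    mode: str  # "1000v-uplink" (feeds the hypervisor switch) or "vmdirectpath"

# One physical adapter, several logical adapters with different roles:
physical_adapter = [
    LogicalAdapter("vmnic0", "1000v-uplink"),    # uplink for the Nexus 1000V
    LogicalAdapter("vmnic1", "1000v-uplink"),
    LogicalAdapter("vnic-vm1", "vmdirectpath"),  # VM bypasses the hypervisor
]

# Both uses coexist on the same physical adapter; only VMs attached to
# VMDirectPath logical adapters bypass the Nexus 1000V.
uplinks = [a.name for a in physical_adapter if a.mode == "1000v-uplink"]
print(uplinks)  # ['vmnic0', 'vmnic1']
```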

      Here is a recent post covering NIV and VNTags:


  6. Simon Walker says

    Which way is the air flow through the device?

    From a cooling-management perspective, on most switches the ports are on the cold-intake side, while the ports on most servers are on the hot-outlet side, meaning either the switch is racked backwards relative to the desired air flow or you have to manage patching from the front of the rack to the back, which is also problematic.

    • says

      The air flow on the Nexus 5000 is front to back. The ports on the Nexus 5000 are actually located on the back of the switch. If you put your hand by the ports shown in the picture above, you would feel hot air blowing on your hand. The switch is designed to be racked with the ports in the back of the rack, blowing air into the hot aisle. Because the server ports are also located in the back of the rack, it is very easy to cable servers to the switch, with much cleaner cable management.


  7. Jason B says

    I’m in the process of procuring equipment that will tie in a Nexus and UCS solution with a NetApp back-end supporting an ESX farm for my ASP environment.

    One thing that I am somewhat confused about is the discussion of the 1000V, Palo cards, and how the UCS abstracts physical components via service profiles.

    My configuration is going to include QLogic mezzanine adapters, as the Palo wasn’t available when I placed the order. The UCS will already “virtualize” the NIC and FC adapter (CNA) via the vNIC/vHBA capabilities of the service profiles. Am I simply moving where this is performed with the Palo card, or even further up the stack with a 1000v?

    Thanks for the insight.

  8. Jason B says

    Apparently I needed more coffee or at least proof-reading of my comment above. My apologies for the grammatical and mechanical errors.

  9. says

    You are absolutely right that Cisco UCS as a system inherently “virtualizes” the server I/O with the current QLogic and Emulex CNAs. More than just virtual I/O, Cisco UCS creates a system where the complete bare-metal server is “virtual” – meaning ALL the configuration settings of the server are abstracted from the physical hardware, stored in an XML schema, and can be quickly moved to different hardware, duplicated, backed up, and re-provisioned.

    Having said that, the Palo mezzanine adapter simply allows for more than just (2) vNICs and (2) vHBAs to be defined in the server’s profile. You could define a server with 20, 30, 40, 50+ of any combination of vNICs and vHBAs.
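The hardware-independent profile idea described above can be sketched as plain data. This is a conceptual illustration only (the class and field names are assumptions, not the UCS XML schema):

```python
# Conceptual sketch of a UCS-style service profile: server identity as
# data that can be re-associated with different hardware. Illustrative only.
from dataclasses import dataclass, field

@dataclass
class ServiceProfile:
    name: str
    vnics: list = field(default_factory=list)  # virtual NIC definitions
    vhbas: list = field(default_factory=list)  # virtual HBA definitions
    blade: str = ""                            # currently associated hardware

    def associate(self, blade_id: str) -> None:
        # In the real system, identity (MACs, WWNs, BIOS/firmware settings)
        # travels with the profile, not with the blade.
        self.blade = blade_id

# With a Palo-style adapter the profile is not limited to 2 vNICs + 2 vHBAs:
profile = ServiceProfile("esx-host-01",
                         vnics=[f"vnic{i}" for i in range(20)],
                         vhbas=["vhba0", "vhba1"])
profile.associate("chassis-1/blade-3")
profile.associate("chassis-2/blade-5")  # "move" the server to new hardware
print(profile.blade)  # chassis-2/blade-5
```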

    Have you looked at this related post:


    • Jacob says

      Sorry if I am wrong. Do we really need an MDS 9000 series switch to connect to SAN storage? Why can't we connect directly to an N5K switch, which has FCoE capability? What exactly does FCoE do?

