“Jawbreaker”, merchant silicon, QFabric, and flat networks

Filed in FabricPath, merchant silicon, Nexus, QFabric, SDN, TRILL on June 10, 2011

Brad, can you elaborate on Cisco’s Jawbreaker project? What exactly is it? Is it a response to Juniper’s Q-Fabric? Is it an attempt to rectify the inconsistencies in the differing purpose-built approaches of the N7K and N5K?

Why create a new architecture?

It seems like Cisco is really in trouble – creating a new architecture, abandoning its own silicon for merchant silicon…they seem to have missed the boat with regard to flat networks.

Here is an article worth commenting on:

http://www.networkworld.com/news/2011/031011-cisco-jawbreaker.html

“Jawbreaker”

For reasons that should be fairly obvious, I can’t discuss rumored Cisco R&D projects such as “Jawbreaker” in any detail in a public forum.

I did read the Network World article, and I found it suspicious how many presumptions were made about a Cisco R&D project: ship dates, architecture, motivations, and so on. As far as I’m concerned, some of those assertions were just flat-out wrong.

Cisco, like any other tech company, is going to have R&D projects with cool-sounding names. Some will survive and turn into real products (e.g. Cisco UCS); others will never see the light of day. That’s how any good tech company separates the good ideas from the bad.

You can also bet that Cisco is looking at ways to evolve the Nexus platform with dense and highly scalable 40G, 100G, and beyond. I think that’s a fairly obvious assumption to make. No? Just because a project may have a name like “Jawbreaker” doesn’t mean it’s not Nexus.

Merchant Silicon

Depending on the application, sometimes merchant silicon does the job well. Take for example the Nexus 3000. If your application just needs the lowest possible latency for, say, high-frequency trading, the merchant silicon available today does that well, really well. Therefore it makes sense for both Cisco and its customers to have timely access to these products at competitive prices, running industry-proven and feature-rich switch software (NX-OS), and backed by Cisco’s global partner network and world-class 24×7 Cisco TAC support.

While merchant silicon does speeds and feeds well, it’s not the place to find innovation. Custom software, custom hardware (ASIC), or a clever combination of the two is where new and innovative technologies will be introduced for immediate customer benefit. I don’t see that changing one bit. And you can bet Cisco will continue to lead in this arena.

There is a significant trend underway in software-driven innovation, such as software defined networking (SDN). At some point you can only drive so many features into hardware so fast, while the innovation potential and velocity of software development are almost limitless, as far as I can tell. Some will say SDN means the end of custom hardware: just use merchant silicon everywhere and innovate only in software. Sorry, I don’t buy it. I tend to believe the best possible outcomes will result from an intentional mix of purpose-built software (SDN) and purpose-built hardware (ASIC).
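To make that software/hardware split concrete, here is a deliberately toy sketch. Every class and name in it is invented for illustration; this is not any real controller or ASIC API. The point is simply that policy in software can evolve as fast as you can write code, while the hardware table it programs is fast but finite.

```python
# Toy sketch of the software/hardware split: policy lives in software,
# forwarding lives in purpose-built hardware. All names are invented.

class AsicTable:
    """Stand-in for a hardware forwarding table: fast, but finite."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = {}

    def install(self, match, action):
        # Hardware imposes hard limits that software alone can't remove.
        if len(self.entries) >= self.capacity:
            raise RuntimeError("hardware flow table full")
        self.entries[match] = action


class Controller:
    """Stand-in for the software side: policy can change as fast as code."""
    def __init__(self, asic):
        self.asic = asic

    def apply_policy(self, flows):
        for match, action in flows:
            self.asic.install(match, action)


asic = AsicTable(capacity=2048)
Controller(asic).apply_policy([("dst 10.0.0.0/24", "forward port 1")])
print(asic.entries)  # {'dst 10.0.0.0/24': 'forward port 1'}
```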

Those that are only capable of innovating in one of the two areas (hardware or software) will do OK. However, those that can engineer both software and hardware innovations in a single system are best positioned for the next wave of innovation, IMHO.

Juniper QFabric

I have to give credit to Juniper for reaching into new territory with QFabric. It’s an interesting and bold concept. There, I said it.

Bold in the sense that Juniper is asking the customer to invest in one giant proprietary 128-“slot” switch. Most people are comfortable with an 18-slot switch: it’s not too much capacity to commit to one vendor, and not too big a failure domain. But a 128-slot switch? That’s unprecedented territory. I’ll be curious to see how well that message is received once QFabric becomes a reality (still slideware as of today).

Interesting in the sense that each edge QF-Node behaves much like a distributed forwarding linecard in a chassis switch. One of the challenges with this architecture is hardware consistency across all of the “linecards”. Those of you familiar with distributed forwarding chassis switch architectures know that with a chassis full of linecards, the entire switch has to dumb itself down to the least capable card in the system to maintain system-wide consistency. I’m curious to see how that will be managed in a 128-slot QFabric as customers try to simply add newer QF-Node edge technology, or migrate to it in flight.
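To illustrate that lowest-common-denominator effect, here is a minimal sketch. The card names and capability sets are invented, purely for illustration; the system-wide feature set is the intersection of what every installed card supports.

```python
# Hypothetical sketch of the lowest-common-denominator effect in a
# distributed forwarding chassis. Card names and capabilities are invented.

linecards = {
    "slot1-gen1": {"L2", "L3", "10G"},
    "slot2-gen1": {"L2", "L3", "10G"},
    "slot3-gen2": {"L2", "L3", "10G", "FabricPath", "FCoE"},
}

# System-wide, the switch can only enable what EVERY card supports:
system_features = set.intersection(*linecards.values())
print(sorted(system_features))  # ['10G', 'L2', 'L3']

# One newer card doesn't raise the floor, and one older card lowers it.
# The same logic stretched across 128 QF-Node "slots" is what makes
# in-flight migration to newer edge hardware an interesting problem.
```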

Flat networks

“[Cisco] seem to have missed the boat with regard to flat networks”

Really? I don’t get that. Help me understand, because Cisco is the only vendor shipping a 16-way “Spine” today with FabricPath, based on TRILL. Consider that each Spine can be an 18-slot modular switch with 512 10G ports. That’s a tremendous amount of capacity and bandwidth to build a very large “flat network”, available today. There are lots of vendors talking about “flat networks”, but which ones are actually shipping?
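The arithmetic behind that claim is worth spelling out. This is my back-of-the-envelope math: the port counts come from the text above, but the one-uplink-per-spine leaf design is my assumption, purely for illustration.

```python
# Back-of-the-envelope capacity math for the 16-way FabricPath spine
# described above. Port counts come from the text; the leaf uplink
# design is an assumption for illustration.

spines = 16
ports_per_spine = 512      # 10G ports in one 18-slot modular spine chassis
gbps_per_port = 10

spine_capacity_gbps = spines * ports_per_spine * gbps_per_port
print(f"Aggregate spine capacity: {spine_capacity_gbps / 1000:.2f} Tbps")
# -> Aggregate spine capacity: 81.92 Tbps

# If each leaf runs one 10G uplink to every spine (16 uplinks per leaf),
# each spine port serves one leaf, so the fabric scales to 512 leaves.
max_leaves = ports_per_spine
print(f"Maximum leaves at one uplink per spine: {max_leaves}")
```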

Of course today’s offering isn’t perfect, and there will be improvements. In the near future you will have the Nexus 5500 supporting FabricPath, perfect for the ToR “Leaf”. You will also have newer generations of Nexus 7000 linecards with higher 10G density and line-rate L2/L3 switching, supporting FabricPath in the same 16-way Spine topology. All of this allows for an even larger and higher-performance “flat network” than what is available today.

Cisco “missed the boat” with regard to “flat networks”? I beg to differ. The boat is actually carrying Cisco spine/leaf “flat network” gear to customer doorsteps today, while the others are still showing slides. :-)

Cheers,
Brad


Disclaimer: The author is an employee of Cisco Systems, Inc. However, the views and opinions expressed by the author do not necessarily represent those of Cisco Systems, Inc. The author is not an official media spokesperson for Cisco Systems, Inc.

About the Author

Brad Hedlund (CCIE Emeritus #5530) is an Engineering Architect in the CTO office of VMware’s Networking and Security Business Unit (NSBU). Brad’s background in data center networking begins in the mid-1990s with a variety of experience in roles such as IT customer, value added reseller, and vendor, including Cisco and Dell. Brad also writes at the VMware corporate networking virtualization blog at blogs.vmware.com/networkvirtualization

Comments (7)


  1. John G. says:

    I’ll add: even if Cisco responds to competitor pressure with Jawbreaker and merchant silicon, is it a bad idea to diversify within this space? I will obviously argue that it is not. If you compete on just one plane with one set of vendors, the others will beat you at a slightly different game while you’re not looking.

    Cisco is being vigilant and, as you mentioned, shipping solutions, not just showing slides. Anything that can be said about Jawbreaker can be said if and when it is turned into a product; it’s silly to assume people have the facts when all we have is speculation.

  2. joe smith says:

    Look, glib answers aside, it’s obvious that Cisco has lost focus and is now paying the price. Dumping 15% of your personnel in a 2-month period is no small thing. It shows Cisco is losing ground to Juniper and HP, and they need to regain their footing or lose it altogether, and permanently.

    After 3 generations of Nexus, they are now releasing another solution set known as “Jawbreaker.” What’s that all about? Where’s the steady direction? Today we found out that Cisco is going to revive their 6500 switch with a new SUP model that supports 2 Tbps.

    Maybe Cisco’s strategy is to convince the competition that they have no direction and are grasping at straws to catch them off guard…or maybe they really are confused and grasping at straws…

  3. Rob says:

    Great post. Thanks a lot

  4. Jim says:

    Brad,
    Great points, and I agree that QFabric is an interesting play… although it does sound very similar to the rumors I have heard about Jawbreaker… I was told to imagine a giant switch with a spine for a backplane (again, just rumors and hearsay, but for what it is worth…).

    Finally, can you comment on Arista Networks in this discussion? I have been doing testing with the Nexus 3K and an Arista blade, and I have seen better performance with Arista in some scenarios and better performance by Cisco in others. I have only scratched the surface with initial scenarios; however, I am interested in your thoughts, considering so much IP from Cisco’s DC strategy has gone on to work at Arista. Some of the FUD I have been hearing makes sense to me at a business level, but I do not know what to believe, as FUD is getting thrown all over the place. Also, I do not think Juniper is stable or truly a full play, at least for high-frequency trading environments. Just my 2 cents… I am just a customer of all the vendors out there, and I would love to hear your “mostly non-biased” opinion. Thanks for the wonderful blog… and sorry about the grammar, I am typing on an iPhone while watching football, lol.

  5. randy says:

    Well, Q-Fabric is not slideware now. I chose it in a bake-off between Nexus, Brocade, and Arista. I have it and I am doing interop testing now. Wish me luck. Your Dells weren’t ready at the time I did the vendor bake-off… I wish they were.

    Randy

    • Brad Hedlund says:

      Hi Randy,
      What were the factors that helped you choose QFabric over Nexus in your bake-off? How many 10G servers will you have in your fabric initially, and how large might it grow?

      Thanks,
      Brad

      • randy says:

        1) The underlying support organization needs a data center fabric that is easy to manage, and having a single management point was attractive. The power requirements were also attractive. The Fibre Channel interfaces were really easy to configure, as they just offload to our Brocade DCX FCF, and the QFX-Node results were best in RFC 2889 and RFC 3918 testing. Q-Fabric’s 3:1 oversubscription was important as well. The Nexus proposal had one fully redundant chassis, and its 10G oversubscription wasn’t where Q-Fabric’s was. The issues I am having with the 5596UP Nexus running vPC, and the David Yen hire, were red flags.
        We are starting with 150 DL580s (4 x 10G interfaces per server), then growing up to 450 servers.
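Those numbers pencil out as follows. This is simple arithmetic on the figures given in the comment above; the 3:1 ratio is the one cited there.

```python
# Rough sizing check on the deployment described in the comment above.
# Server counts, NIC counts, and the 3:1 oversubscription ratio all come
# from the comment; the rest is arithmetic.

nics_per_server = 4        # 4 x 10G interfaces per DL580
gbps_per_nic = 10
oversubscription = 3       # 3:1 edge oversubscription

for servers in (150, 450):
    edge_gbps = servers * nics_per_server * gbps_per_nic
    uplink_gbps = edge_gbps / oversubscription
    print(f"{servers} servers: {edge_gbps / 1000:.1f} Tbps of server-facing "
          f"capacity, ~{uplink_gbps / 1000:.1f} Tbps of fabric uplink at 3:1")
```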
