Designing with the Nexus 5000 / 2000

Author
Peter Welcher
Architect, Operations Technical Advisor

I’ve been reading up heavily on the various aspects of Cisco Nexus and Data Center technology. While I am constitutionally unable to cheerlead, I must say I’m pretty impressed with the breadth of the vision. There are some mild feature gaps and good things to come, but overall the products look like they’ll meet customers’ growing needs for robust Layer 2 designs. I do still intend to apply the “beer principle”: too much of a good thing may give you a headache!

I’ve got a couple of customers where the N5K/N2K has seemed appropriate. I thought I’d briefly mention a couple of things I noticed in trying to design with these boxes… maybe fairly obvious, maybe a gotcha. I’d like to think the first story is a nice illustration of how the N5K/N2K lets you do something you couldn’t do before!

Case Study 1

The first customer situation is a site where various servers sit in DMZs of various security levels. Instead of moving the servers to a physically separate data center server zone, as appears to have been originally intended (big Nortel switches from a few years back), they extended the various DMZ VLANs to the various physical server zones using small Cisco switches with optical uplinks. That gear (especially the Nortel switches) is getting rather old, and it’s time to replace it.

For that, the N5K/N2K looks perfect. We can put in one or a pair of N5Ks to replace the big Nortel “DMZ overlay core” switches, and put N2Ks out in the server zones (rows or multi-row areas of racks). For redundancy, we can double everything up. Right now one can make that work in a basic way, and it sounds like Cisco will fairly soon have some nice vPC (virtual PortChannel) features to minimize the amount of Spanning Tree in such a dual N5K/N2K design, using Multi-Chassis EtherChannel (aka vPC). Neat stuff!
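
To make that concrete, here’s a minimal sketch of the vPC piece of such a dual-N5K design, once the feature ships. The domain number, keepalive addresses, and interface numbers are all placeholders, and shipping syntax may differ:

    feature vpc
    vpc domain 1
      ! keepalive addresses below are placeholder management IPs
      peer-keepalive destination 10.1.1.2 source 10.1.1.1
    !
    ! Inter-N5K peer-link, trunking all the DMZ VLANs
    interface port-channel10
      switchport mode trunk
      vpc peer-link
    !
    ! A dual-homed downstream device sees the N5K pair as one switch
    interface port-channel20
      switchport mode trunk
      vpc 20

The payoff is that both uplinks from a dual-homed device forward at once, with no STP-blocked link in the middle.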

The way I’m thinking of this is as a distributed or “horizontally smeared” 6500 switch (or switch pair). The N2K Fabric Extender (FEX) devices act like virtual blades. There’s no Spanning Tree Protocol (STP) running up to the N5K (good), and no local switching (maybe not completely wonderful, but simple and unlikely to cause an STP loop). So the N5K/N2K design is like a 6500 with the Sup in one zone and the blades spread across others. 
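
As a rough sketch of how the “virtual blade” wiring looks in configuration (the FEX number, interface numbers, and VLAN are placeholder assumptions, not a verified config):

    ! Define FEX 100; its host ports get pinned across the fabric uplinks
    fex 100
      pinning max-links 1
    !
    ! N5K 10 G ports cabled to the N2K become fabric links, not switch ports
    interface ethernet1/1
      switchport mode fex-fabric
      fex associate 100
    !
    ! The N2K's host ports then appear on the N5K as ethernet100/1/x
    interface ethernet100/1/1
      switchport access vlan 10

All forwarding decisions stay on the N5K, which is exactly why there’s no STP on the FEX itself and no loop to create.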

From that perspective, the 40 Gbps of uplink per N2K FEX is roughly comparable to current 6500 per-slot backplane capacity. So the “smeared 6500” analogy holds up in that regard.

The sleeper in all this is that the 10 G optics aren’t cheap. Outfitting, say, 10-12 zones with 40 G of uplink each, counting optics and possibly special multi-mode fiber (MMF) patch cords, adds roughly 12 x $2,000, or $24,000 total. Certainly not a show-stopper, but something to factor into your budget. If you’re considering doing it with single-mode fiber (SMF), the cost is a bit higher. On the other hand, that sort of distributed Layer 2 switch is a large Spanning Tree domain if you build it with prior technology.

Case Study 2

The second customer situation is a smaller shop, not that many servers but looking for a good Top of Rack (ToR) solution going forward. The former Data Center space is getting re-used (it was too blatantly empty?). And blade servers may eventually allow them to fit all the servers into one or two blade server enclosures in one rack. Right now we’re looking at something like 12 back-to-back racks of stuff, including switches. 

For ToR, the 3560-E, 3750-E, 4900M, and N5K/N2K all come to mind. The alternative solution that comes to mind is a collapsed core pair of 6500s. The cabling would be messier, but the dual-chassis approach would offer more growth potential and a nice big backplane (fabric).

The 3560-E and 3750-E are limited to 20 G of uplink per chassis: not shabby, though not quite up to the 6500’s per-blade capacity. That’s workable and not too limiting.

The issue is, what do you aggregate them into? A smaller 6500 chassis? In that case, the alternatives are a 6500 pair by itself, or 6500s (maybe smaller) plus some 3560-Es or other small ToR switches, at some extra cost.

Or the N5K/N2K, one might think. The N5K/N2K is Layer 2 only right now, so you need some way to route between the various server VLANs (gotcha!). Until Layer 3 support is available, you would still need to connect the N5K/N2Ks to something like 4900Ms or 6500s to get good Layer 3 switching performance between VLANs. Right now, that external connection is either a pretty solid bottleneck, or you burn a lot of ports doing 8-way or (future) 16-way EtherChannel off the N5K/N2K. Bzzzt! That starts feeling rather kludgey.
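
For the flavor of it, here’s a minimal sketch of the N5K side of that uplink bundle. The interface range, channel number, and LACP mode are placeholder assumptions; the 6500 or 4900M end would need a matching trunk port-channel plus an SVI per server VLAN:

    feature lacp
    !
    ! Burn four 10 G ports on an EtherChannel toward the Layer 3 switch
    interface ethernet1/17-20
      switchport mode trunk
      channel-group 30 mode active
    !
    interface port-channel30
      switchport mode trunk

Every inter-VLAN packet has to cross that bundle and come back, which is why it reads as either a bottleneck or a pile of expensive ports.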

My conclusions:

  1. The N5K/N2K right now seems to fit in better with a Nexus 7000 behind it. And I’d much prefer local Layer 3 switching to maximize inter-VLAN switching performance.
  2. The initial set of Nexus line features was probably chosen with larger customers in mind; a standalone Layer 3 N5K/N2K would be more attractive to a smaller site. And smaller sites tend not to be early technology adopters.
  3. You can mitigate this to some extent by careful placement of servers in VLANs. On the other hand, my read on current Data Center design is that the explosive growth in numbers of servers and the need for flexibility have left “careful placement of servers” in the historical dust. Nobody’s got the time anymore.

One response to “Designing with the Nexus 5000 / 2000”

  1. There are two potential things I’m aware of that you might overlook with the Nexus 5000/2000 FEX.

    (1) The only FEX available right now does 1 Gig copper only; no 10/100/1000. So that makes the N5K/2K more suitable for new zone (rack row) build-outs than mixed retrofits, at least until something like a 48- or 96-port 10/100/1000 variant comes out.

    (2) As mentioned, Layer 3 support. It’s not clear whether Cisco intends L3 support for the N5K/2K.

    That leads to a related thought … what I hadn’t quite expected (realized?) is that some N7K bundles can come pretty darn close on price to a 6500 with DFC3Cs, etc. So for high-performance mixes, maybe we need to think “small Nexus 7000” (is that an oxymoron?) rather than N5K/2K. Saying that another way, the N5K/2K is a specialized L2 combination, and while it would be pretty neat if it could route between VLANs, maybe we shouldn’t automatically look at the N7K and think “big sites / high cost only”.

    At one site, I’ve been looking for a combined collapsed campus (building) and data center core alternative to the 6500. The N5K/2K looks attractive since the 2K could be used for ToR (Top of Rack), and the heck with patch panels. But the lack of routing dooms it for this sort of usage. If we’re talking big chassis and cabling home runs instead of ToR, 6500 versus N7K … well, how much speed is in YOUR future? 🙂
