I’ve been reading up heavily on the various aspects of Cisco Nexus and Data Center technology. While I am constitutionally unable to cheerlead, I must say I’m pretty impressed with the breadth of the vision. There are some mild feature gaps and good things to come, but overall the products look like they’ll meet the increasing Layer 2 design robustness needs of customers going forward. I do still intend to apply the “beer principle” — too much of a good thing may give you a headache!
I’ve got a couple of customers where the N5K/N2K have seemed appropriate. I thought I’d briefly mention a couple of things that I noticed in trying to design using the boxes… maybe fairly obvious, maybe a gotcha. I’d like to think the first story is a nice illustration of how the N5K/N2K lets you do something you couldn’t do before!
Case Study 1
The first customer situation is a site where various servers are in DMZ’s of various security levels. Instead of moving the servers to a physically separate data center server zone, as appears to have been originally intended (big Nortel switches from a few years back), they extended the various DMZ VLANs to the various physical server zones using small Cisco switches with optical uplinks. That gear (especially the Nortel switches) is getting rather old, and it’s time to replace it.
For that, the N5K/N2K looks perfect. We can put one or a pair of N5K’s in to replace the big Nortel “DMZ overlay core” switches, and put N2K’s out in the server zones (rows or multi-row areas of racks). For redundancy, we can double everything up. Right now one can make that work in a basic way, and it sounds like Cisco will fairly soon have some nice VPC (Virtual Port Channel) features to minimize the amount of Spanning Tree in such a dual N5K/N2K design, using Multi-Chassis EtherChannel (aka VPC). Neat stuff!
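For the curious, here’s a rough sketch of what the dual-N5K VPC configuration is expected to look like in NX-OS, based on what Cisco has described. Since the feature hadn’t shipped as of this writing, treat the syntax as provisional; the domain number, IP addresses, and interface numbers are made up for illustration.

```
! Sketch only: vPC pairing of two N5K's (feature forthcoming at time
! of writing). All numbers and addresses here are illustrative.
feature vpc

vpc domain 1
  ! Heartbeat between the two N5K's, over a separate path
  peer-keepalive destination 10.1.1.2 source 10.1.1.1

! Inter-N5K peer link carrying vPC control and data traffic
interface port-channel 10
  switchport mode trunk
  vpc peer-link

! A dual-homed downstream device sees this as one EtherChannel,
! so no Spanning Tree blocking on its uplinks
interface port-channel 20
  switchport mode trunk
  vpc 20
```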
The way I’m thinking of this is as a distributed or “horizontally smeared” 6500 switch (or switch pair). The N2K Fabric Extender (FEX) devices act like virtual blades. There’s no Spanning Tree Protocol (STP) running up to the N5K (good), and no local switching (maybe not completely wonderful, but simple and unlikely to cause an STP loop). So the N5K/N2K design is like a 6500 with the Sup in one zone and the blades spread across others.
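To make the “virtual blade” idea concrete, here’s roughly what attaching an N2K to its parent N5K looks like in NX-OS. The FEX number, description, and interface ranges are hypothetical, but the shape of the config shows the point: the N2K’s server ports show up on the N5K as if they were ports on a local blade.

```
! Sketch only: associating an N2K FEX with its parent N5K.
! FEX number and interface ranges are illustrative.
feature fex

fex 100
  description Zone-A-rack-1

! N5K fabric ports facing the N2K's uplinks -- no STP runs here
interface ethernet1/1-4
  switchport mode fex-fabric
  fex associate 100

! The N2K's server ports then appear as local ports on the N5K
interface ethernet100/1/1
  switchport access vlan 10
```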
From that perspective, the 40 Gbps of uplinks per N2K FEX is roughly comparable to current 6500 backplane speeds. So the “smeared 6500” analogy holds up in that regard.
The sleeper in all this is that the 10 G optics aren’t cheap. Doing, say, 10-12 zones with 40 G of uplink each, counting optics and possibly special multi-mode fiber (MMF) patch cords, adds roughly 12 x $2,000, or $24,000 total. Certainly not a show-stopper, but something to factor into your budget. If you’re considering doing it with single-mode fiber (SMF), the cost is a bit higher. On the other hand, that sort of distributed Layer 2 switch is a large Spanning Tree domain if you build it with prior technology.
Case Study 2
The second customer situation is a smaller shop, not that many servers but looking for a good Top of Rack (ToR) solution going forward. The former Data Center space is getting re-used (it was too blatantly empty?). And blade servers may eventually allow them to fit all the servers into one or two blade server enclosures in one rack. Right now we’re looking at something like 12 back-to-back racks of stuff, including switches.
For ToR, the 3560-E, 3750-E, 4900M, and N5K/N2K all come to mind. The alternative solution that comes to mind is a collapsed core pair of 6500’s. The cabling would be messier, but the dual chassis approach would offer more growth potential, and a nice big backplane (fabric).
The 3560-E and 3750-E are limited to 20 G of uplink per chassis: not shabby, but not quite up to the 6500’s per-blade capacity. That’s workable and not too limiting.
The issue is, what do you aggregate them into? A smaller 6500 chassis? In that case, the alternatives are a 6500 pair by itself, or 6500’s (maybe smaller) plus some 3560-E’s or other small ToR switches, at some extra cost.
Or the N5K/N2K, one might think. The N5K/N2K is Layer 2 only right now, so you need some way to route between the various server VLANs (gotcha!). So until Layer 3 support is available, you would still need to connect the N5K/N2K’s to something like 4900M’s or 6500’s to get decent Layer 3 switching performance between VLANs. Right now, that external connection is either a pretty solid bottleneck, or you burn a lot of ports doing 8-way or (future) 16-way EtherChannel off the N5K/N2K. Bzzzt! That starts feeling rather kludgey.
- The N5K/N2K right now seems to fit in better with a Nexus 7000 behind it. And I’d much prefer local Layer 3 switching to maximize inter-VLAN switching performance.
- The initial set of Nexus line features was probably chosen with larger customers in mind; standalone Layer 3 on the N5K/N2K would be more attractive to a smaller site. And smaller sites tend not to be early technology adopters.
- You can mitigate this to some extent by careful placement of servers in VLANs. On the other hand, my read on current Data Center design is that the explosive growth in numbers of servers and the need for flexibility have left “careful placement of servers” in the historical dust. Nobody’s got the time anymore.
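For completeness, the stopgap uplink described above (a big LACP EtherChannel from the N5K to a Layer 3 switch such as a 6500 or 4900M) would look roughly like this on the N5K side. Port and channel numbers, and the VLAN range, are made up for illustration.

```
! Sketch only: 8-way EtherChannel from the N5K up to the Layer 3
! switch handling inter-VLAN routing. Numbers are illustrative.
interface ethernet1/33-40
  switchport mode trunk
  channel-group 30 mode active

interface port-channel 30
  switchport mode trunk
  switchport trunk allowed vlan 10-50
```

It works, but as noted, it either bottlenecks inter-VLAN traffic or eats a lot of 10 G ports that could otherwise face servers.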