I’ve recently been doing most data center network designs with Nexus 9300 switches. As dense server virtualization has shrunk the data center, organizations with, say, 500 VMs no longer need as much data center space or as many switch ports. Organizations that used to need (or use) a Nexus 7000 can now shrink their data center footprint considerably and stretch their budget further.
For such purposes, the Nexus 9300 switches have lots of high-speed ports, right? Well, as with most things in networking, “it depends”.
For some / many sites, two switches with 10 Gbps ports might be all they need. For general back office and sales support business applications, bandwidth consumption is not growing rapidly. Any higher-speed marketing video server(s) may well be hosted in the cloud, and / or leverage CDN (Content Delivery Network) caching.
For those that need a few more connection points, a 2 spine / 2 leaf or 2 spine / 4 leaf topology may be enough. For many of those, 10 or 25 Gbps server ports with 40 / 100 Gbps uplinks work well. That may be more bandwidth than is needed, but it is fairly affordable and provides good headroom for growth.
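To make "good headroom" concrete, here is a quick back-of-the-envelope oversubscription calculation. The port counts below are illustrative assumptions, not a specific switch SKU: 48 x 25 Gbps server ports per leaf with 2 x 100 Gbps uplinks per leaf.

```python
# Hypothetical leaf: 48 x 25 Gbps server-facing ports,
# 2 x 100 Gbps uplinks toward the spines.
server_ports_per_leaf = 48
server_speed_gbps = 25
uplinks_per_leaf = 2
uplink_speed_gbps = 100

downstream_gbps = server_ports_per_leaf * server_speed_gbps  # 1200 Gbps
upstream_gbps = uplinks_per_leaf * uplink_speed_gbps         # 200 Gbps

ratio = downstream_gbps / upstream_gbps
print(f"Oversubscription: {ratio}:1")  # 6.0:1
```

Doubling the uplinks to 4 x 100 Gbps would halve that to 3:1, which is one reason spare high-speed ports on the leaves matter.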
Then there’s the others. Dell has apparently found a way to compete in switching with Cisco, by selling a GUI-driven switch fabric to server admins, as part of a server chassis sale. This standardized design accommodates up to 9 Dell chassis.
If you were planning a Cisco spine and leaf design, that can rather … change your day!
As in “thank you, but we’ll do our own switching — keep it closer to the servers for lower latency”? — And under the server team’s control … at least until it breaks?
After you recover from hearing that, you may then hear “and just connect us with 100 Gbps connections, we only need 16 of them”.
Meanwhile, new NetApp storage leads to requests for a similar number of 40 Gbps connections.
If you were planning on using leaf switches with 6 or 12 of the 40 / 100 Gbps ports, that’ll consume your high-speed port budget rather quickly!
Or bump you up from a Nexus 9336 to a 9364 for more 40 / 100 Gbps ports!
If that’s not enough ports, the next step, of course, might be a Nexus 9500 with high-speed line cards, or an even bigger Nexus 9500.
To complete the real-world story: between a pair of core Nexus 9364s and 4 leaf Nexus 93240s, you would have 2 x 64 + 4 x 12 = 176 x 40 / 100 Gbps ports, in a small footprint and at a fairly low cost! Note that with, say, 2 x 100 Gbps uplinks from each leaf to the spine switches, that’s 8 links, and each link consumes a port at both ends — 16 of those ports go to infrastructure, 32 if you double the uplinks for a lower oversubscription ratio.
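The port budget above can be sketched out in a few lines. The counting convention here assumes each leaf-to-spine link consumes one high-speed port on each end:

```python
# Port budget for the example fabric:
# 2 spine Nexus 9364s (64 high-speed ports each)
# 4 leaf Nexus 93240s (12 high-speed ports each)
spines, spine_ports = 2, 64
leaves, leaf_ports = 4, 12

total = spines * spine_ports + leaves * leaf_ports
print(total)  # 176

# With 2 x 100 Gbps uplinks per leaf, each link burns one port
# at the leaf end and one at the spine end.
uplinks_per_leaf = 2
links = leaves * uplinks_per_leaf   # 8 links
infra_ports = links * 2             # 16 ports
print(total - infra_ports)          # 160 ports left for servers / storage
```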
Stepping Up to 400 Gbps
If that isn’t enough for you, Cisco recently announced some 400 Gbps switches: two 9300 series switches and two 3400 series switches (as of the time this was written).
Where might you use such a beast?
Well, suppose you’ve got a lot of server capacity and / or storage connected at 40 and 100 Gbps speeds. You might want to do spine and leaf topology, with 400 Gbps for the spine to leaf links.
The other use case that comes to mind is aggregating connections for a 400 Gbps fiber-based connection to elsewhere.
By the way, Cisco’s 400 Gbps announcement mentioned that Cisco is working on 400 Gbps BiDi optics. I’ve come to really appreciate the convenience and low cost of the 40 and 100 Gbps BiDi optics, especially around using existing fiber plants (in some / many cases, anyway).
On a Related Note
At CiscoLive 2019, I heard the comment that Cisco is investing heavily in research here, since electrical-to-optical interfaces are likely to become the next major technical bottleneck impeding greater speeds.
- Nexus 9300 EX and FX Switches
- Nexus 9500 line cards
- Nexus 3000 ultra-low latency switches (including 3400 models)
Comments are welcome, both in agreement or constructive disagreement about the above. I enjoy hearing from readers and carrying on deeper discussion via comments. Thanks in advance!
Hashtags: #CiscoChampion #TechFieldDay #TheNetCraftsmenWay #Routing #Switching #DataCenter
Did you know that NetCraftsmen does network / data center / security / collaboration design / design review? Or that we have deep UC&C experts on staff, including @ucguerilla? For more information, contact us at firstname.lastname@example.org.