Need for Speed
I’ve recently been doing most data center network designs with Nexus 9300 switches. As dense server virtualization has shrunk the data center, organizations with say 500 VMs no longer need as much data center space or as many switch ports. Organizations that used to need (or use) a Nexus 7000 can now shrink their data center footprint considerably and stretch their budget further.
For such purposes, the Nexus 9300 switches have lots of high-speed ports, right? Well, as with most things in networking, “it depends”.
For some / many sites, two switches with 10 Gbps ports might be all they need. For general back office and sales support business applications, bandwidth consumption is not growing rapidly. Any higher-speed marketing video server(s) may well be hosted in the cloud, and / or leverage CDN (Content Delivery Network) caching.
For those that need a few more connection points, a 2 spine / 2 leaf or 2 spine / 4 leaf topology may be enough. For many of those, 10 or 25 Gbps server ports with 40 / 100 Gbps uplinks is great. That may be more bandwidth than is needed, but it is fairly affordable and provides good headroom for growth.
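To make the oversubscription idea concrete, here's a minimal sketch. The specific numbers (48 x 25 Gbps server ports, 6 x 100 Gbps uplinks) are assumptions for illustration, not from any particular switch model:

```python
# Leaf oversubscription = total downlink bandwidth / total uplink bandwidth.
# Example numbers are hypothetical: 48 x 25 Gbps server ports, 6 x 100 Gbps uplinks.

def oversubscription(down_ports, down_gbps, up_ports, up_gbps):
    """Return the leaf oversubscription ratio (downlink : uplink)."""
    return (down_ports * down_gbps) / (up_ports * up_gbps)

ratio = oversubscription(down_ports=48, down_gbps=25, up_ports=6, up_gbps=100)
print(f"{ratio:.1f}:1")  # prints 2.0:1
```

A 2:1 or 3:1 ratio is often plenty of headroom for general business workloads, which is why 10 / 25 Gbps down with 40 / 100 Gbps up works so well here.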
Then there are the others. Dell has apparently found a way to compete with Cisco in switching: selling a GUI-driven switch fabric to server admins as part of a server chassis sale. This standardized design accommodates up to 9 Dell chassis.
If you were planning a Cisco spine and leaf design, that can rather … change your day!
As in “thank you, but we’ll do our own switching — keep it closer to the servers for lower latency”? — And under the server team’s control … at least until it breaks?
After you recover from hearing that, you may then hear “and just connect us with 100 Gbps connections, we only need 16 of them”.
Meanwhile, new NetApp storage leads to requests for a similar number of 40 Gbps connections.
If you were planning on using leaf switches with 6 or 12 of the 40 / 100 Gbps ports, that’ll consume your high-speed port budget rather quickly!
Or bump you up from a Nexus 9336 to a 9364 for more 40 / 100 Gbps ports!
If that’s not enough ports, the next step of course might be a Nexus 9500 with high-speed line cards or an even bigger Nexus 9500.
To complete the real-world story: between a pair of core Nexus 9364s and 4 leaf Nexus 93240s, you would have 2 x 64 + 4 x 12 = 176 x 40 / 100 Gbps ports, in a small footprint and at a fairly low cost! Note that each leaf-to-spine uplink consumes a port at both ends, so with say 2 x 100 Gbps uplinks from each leaf to the spine switches, 16 of those ports will be consumed for infrastructure; 32 if you want a lower oversubscription ratio.
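The port math above can be checked with a few lines of arithmetic (the counts are the ones from the text; the only assumption is 2 uplinks per leaf):

```python
# Port budget for 2 spine Nexus 9364s (64 high-speed ports each)
# plus 4 leaf Nexus 93240s (12 high-speed ports each).
spines, spine_ports = 2, 64
leaves, leaf_ports = 4, 12

total = spines * spine_ports + leaves * leaf_ports
print(total)  # prints 176

# Each leaf-to-spine link consumes a port at BOTH ends (one leaf, one spine).
uplinks_per_leaf = 2                 # assumed; double it for lower oversubscription
links = leaves * uplinks_per_leaf    # 8 links
infra_ports = links * 2              # 16 ports consumed for infrastructure
print(total - infra_ports)           # prints 160 ports left for servers/storage
```

That still leaves plenty of 40 / 100 Gbps ports for the Dell fabric and NetApp requests above.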
If that isn’t enough for you, Cisco recently announced some 400 Gbps switches: two Nexus 9300 series models and two Nexus 3400 series models (as of the time this was written).
Where might you use such a beast?
Well, suppose you’ve got a lot of server capacity and / or storage connected at 40 and 100 Gbps speeds. You might want to do spine and leaf topology, with 400 Gbps for the spine to leaf links.
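As a rough sizing sketch for that use case (all numbers here are assumptions, not from any announced model): how much 100 Gbps leaf capacity could a pair of 400 Gbps spine links support at a tolerable oversubscription ratio?

```python
# Hypothetical leaf with two 400 Gbps uplinks to the spines.
uplink_gbps = 2 * 400        # 800 Gbps of uplink bandwidth
target_ratio = 3             # assume a 3:1 oversubscription target

max_downlink_gbps = uplink_gbps * target_ratio
print(max_downlink_gbps // 100)  # prints 24 -> 24 x 100 Gbps server/storage ports
```

In other words, 400 Gbps spine links let a leaf carry a healthy number of 100 Gbps edge ports without the uplinks becoming the choke point.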
The other use case that comes to mind is aggregating connections for a 400 Gbps fiber-based connection to elsewhere.
By the way, Cisco’s 400 Gbps announcement mentioned that Cisco is working on 400 Gbps BiDi optics. I’ve come to really appreciate the convenience and low cost of the 40 and 100 Gbps BiDi optics, especially for reusing existing fiber plants (in some / many cases, anyway).
At CiscoLive 2019, I heard the comment that Cisco is researching optics heavily, since electrical-to-optical interfaces are likely to become the next major technical bottleneck impeding greater speed.
Comments are welcome, both in agreement or constructive disagreement about the above. I enjoy hearing from readers and carrying on deeper discussion via comments. Thanks in advance!
Hashtags: #CiscoChampion #TechFieldDay #TheNetCraftsmenWay #Routing #Switching #DataCenter
Did you know that NetCraftsmen does network / data center / security / collaboration design / design review? Or that we have deep UC&C experts on staff, including @ucguerilla? For more information, contact us at email@example.com.