SD-Access Single Site Design

Author
Peter Welcher
Architect, Operations Technical Advisor

This blog is about SD-Access single site designs. My goal is to touch on some (but likely not all) important aspects of single-site design and provide links to good relevant Cisco documents.

Prior blogs in this series:

The chances are that you’re doing an SD-Access single site design because:

  • You’re doing a Proof of Concept in a lab
  • Or learning in a lab
  • Or doing your first site without a lab
  • Or you have only one site, probably a fairly large one.

If you’re eventually going to do multiple sites, especially interconnected ones rather than standalone sites, I’d suggest trying that with two sites, and maybe a “datacenter” with a fusion firewall and “core network” switches.

Pre-Requisites Checklist

Here are some things to check:

  • Before you buy anything, let alone start deploying it, do your homework: make sure your design and the switches are sized properly – not just the number and speed of ports, but other scalability factors, such as the sizing of control plane nodes (CN’s), border nodes (BN’s), and wireless LAN controllers (WLC’s)
  • If you’re buying new equipment, you are probably OK, but you should always be in the habit of checking feature support
  • If you’re planning on using older equipment, for example, re-using some gear for a lab, double-check feature support and any code upgrade limitations, using the latest Compatibility Matrix (see References, below).

Best Practice: Carefully do this homework before you buy! Better: get your Cisco partner and/or Cisco account team to review your draft BOM regarding ISE and DNAC, and review the feature/hardware mix for new and existing devices.

Reference Models

Cisco has defined some “reference models” for sites, based on size – number of endpoints, VN’s, and AP’s – along with diagrams. This provides a useful way to tell whether you need to distribute the workload, and to get a rough idea of how much hardware you’ll need.

Can the BN and CN functions be on shared devices, or do they need to be on different devices? And similarly for the WLC (wireless controller) functionality.

The section “SD-Access Site Reference Models” in the Design Guide contains some useful information:

  • In general, fewer sites and larger fabrics scale better.
  • Sizing model information (for single sites):

It provides the following table:

The Cisco design guide section goes on to discuss each of the above in depth, with diagrams. The diagrams tend to include routers to the rest of the network, and some of the services.
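As a rough sketch of how such a sizing table gets applied, here is a minimal Python illustration. The endpoint ceilings below are my assumptions for illustration only – check the current Design Guide table for the authoritative numbers (which also cover VN’s and AP’s, not just endpoints):

```python
# Illustrative only: approximate per-site endpoint ceilings for each
# reference model. These numbers are assumptions; the authoritative
# values are in the current Cisco SD-Access Design Guide table.
REFERENCE_MODELS = [
    ("Fabric in a Box", 200),
    ("Very Small", 2_000),
    ("Small", 10_000),
    ("Medium", 25_000),
    ("Large", 40_000),
]

def pick_reference_model(endpoints: int) -> str:
    """Return the smallest reference model that fits the endpoint count."""
    for model, max_endpoints in REFERENCE_MODELS:
        if endpoints <= max_endpoints:
            return model
    raise ValueError("Beyond a single-site reference model; "
                     "consider multiple sites/fabrics.")

print(pick_reference_model(150))     # small lab or branch
print(pick_reference_model(12_000))  # multi-building campus
```

The real exercise is multi-dimensional (endpoints, VN’s, AP’s, fabric devices), but the “smallest model that fits” logic is the same.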

The diagrams are pretty much standard 2- or 3-tier switching architectures. What changes is how the SDA roles are spread around as the scale increases. Here is my version/summary:

FIAB

FIAB (Fabric in a Box) consists of one switch or stack, connected to ROTN (Rest Of The Network), possibly via a router. The WLC runs in the FIAB box, along with the EN (edge node), BN, and CN roles.

Very Small

Very Small is 1 or 2 BN’s (also acting as CN’s, and probably in the EN role as well), with some EN’s in the closets. They can be single switches or stacks.

Generally, when possible, I prefer to go with 2 BN’s for resiliency, both acting as CN’s and probably EN’s as well. The two BN’s connect to each other, and the EN’s or EN stacks are dual-homed to the BN’s. The WLC can be embedded or physical.
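To make that wiring rule concrete, here is a minimal sketch (the device names are hypothetical) that checks the intent: the BN’s interconnect, and each EN or EN stack is dual-homed to both BN’s:

```python
# Hypothetical Very Small topology: each link is a (device, device) pair.
links = {
    ("bn1", "bn2"),                   # BN's connect to each other
    ("en1", "bn1"), ("en1", "bn2"),   # each EN/stack dual-homed to both BN's
    ("en2", "bn1"), ("en2", "bn2"),
}

def dual_homed(en: str, borders=("bn1", "bn2")) -> bool:
    """True if this edge node has an uplink to every border node."""
    return all((en, bn) in links or (bn, en) in links for bn in borders)

for en in ("en1", "en2"):
    print(en, "dual-homed:", dual_homed(en))
```

Nothing SDA-specific here – it is just the classic dual-homed access design, expressed as a check you could run against an inventory of links.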

Small

Small is similar, but with more EN’s / stacks. Larger BN’s might be needed, depending on the number of uplinks/closets. A physical, separate, dedicated WLC is recommended.

Medium

Medium, I’d personally call “pretty fair-sized.” Anyway, a Medium site might be a three-tier (core, distribution, access) hierarchy spread across multiple buildings or some sort of campus. The CN function is offloaded to a router or switch off the traffic path, connected via the two BN’s.

The distribution switches (not shown) could be pure underlay, or, if end-system devices connect to them, they might be assigned the EN role. (I don’t think I’ve seen this discussed in print anywhere – perhaps because it makes the diagrams uglier?)

A physical separate dedicated WLC pair is recommended for a Medium site.

The Medium site probably has a services block as well (DNS, DHCP, etc.).

Large

Large is similar, but with more BN’s, possibly dividing up the workload over an internal BN pair and an external BN pair. (Internal to connect to the rest of the organization; external to connect to the outside world / Internet via border devices, firewalls, etc.)

You can add ISE PSN’s (Policy Service Nodes) for local scalability if desired. (After some consideration, I’m pretty sure they do not provide survivability if the site is cut off from the WAN/core – I don’t think I’ve seen this discussed/documented anywhere.)

I’ll note that Cisco’s smaller site diagrams all show a fusion router at the site. If you do not need router features such as VPN and fancy QoS, you may well be able to use L3 switching on the BN’s to tie into the core or WAN – Cisco’s larger site diagrams do just that.

Sizing and Services

The previous section covers how the high-level design approach handles scaling. This refers primarily to scaling the network devices and to distributing the LISP and other workloads sufficiently.

For DNAC version 1.3.3.x, there is a DNAC Scaling Document; see the References below. Use it to determine which DNAC SKU / model is needed. (As new versions come out, no doubt there will be new versions of the scaling document.)

Presumably, you have already done something similar for Cisco ISE, using the ISE Scaling document. If you’ve been doing just TACACS, you may need to beef up your ISE cluster to handle RADIUS and 802.1x. See the References below.
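As a hedged back-of-envelope (the reauthentication interval here is an assumption for illustration, not a Cisco figure – size against the ISE scaling document), the steady-state 802.1x load grows linearly with endpoint count:

```python
def auth_rate_per_sec(endpoints: int, reauth_interval_sec: int = 3600) -> float:
    """Steady-state reauthentications per second, assuming every endpoint
    reauthenticates once per interval. Ignores the burst after an outage,
    which is usually the real sizing driver."""
    return endpoints / reauth_interval_sec

# 10,000 endpoints reauthenticating hourly:
print(auth_rate_per_sec(10_000))  # ~2.8 auths/sec steady state
```

The steady-state number is usually modest; the reason to consult the scaling document is the burst behavior (everyone reauthenticating at once after power or WAN restoration) and the per-PSN ceilings.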

Criticality and high availability might be another consideration. SD-Access doesn’t work very well when your ISE is down.

As more of the Cisco value-add moves to software, we are living more and more in the application space, where we have to (a) think about scaling the server side of things (ISE and DNAC), and (b) be careful about feature support in the software, and which hardware supports which features.

References

 

