SD-Access Design Revisited: Sites

Peter Welcher
Architect, Operations Technical Advisor

I recently posted a blog about prior blogs I’d written bearing on SD-Access/DNA Center design and some implementation details.

Cisco has documented implementation well. However, their material is focused mostly on single-site topics, and more on implementation and driving the GUI than on design. My prior blogs go into some of the other topics you probably need to consider when designing and planning for multi-site SD-Access.

And I believe there are some overall design questions that really ought to be part of your pre-purchase and pre-deployment planning.

As I noted in the recent blog, NetCraftsmen has recently had an upsurge in SD-Access design and deployment work. The design discussions have revisited many of the themes from my prior blogs and work.

I’m quite pleased that:

  1. Most of the design topics I identified have come up again, i.e., weren’t single-customer issues, especially the ones I haven’t seen Cisco really mentioning.
  2. No new topics have surfaced, although I may have a new approach to some of them.
  3. Yes, there are some somewhat-related topics, like ISE and survivability, that I didn’t write about previously.

As a result of the new work, I’ve found myself spelunking through my old blogs (and internal/customer-facing documents) in support of that. To my relief, my prior blogs and content seem to be holding up pretty well as things have evolved.

This blog is the start of a possible series revisiting some of the design topics and related discussions that have come up.

What Should be a Site?

Yeah, this didn’t really get covered before. What I wrote was more of a catalog of types of sites. Borders, edges, etc.

Where some challenges may come in is in taking your existing network and deciding which parts of it should be sites. Good hierarchical modular design can play a role in that. Staff, staff mobility, and security boundaries can also play a role.

Generally, I want a site to be physically contiguous or nearly so. Thus, a site might be:

  • A single building, small or large, possibly with multiple floors.
  • Part of a building, when there is a desire for clear security or functional separation, e.g., public safety, a call center, a data center, or certain staff. For example, a public library inside a city or county building might be a site separate from the rest due to separate funding and/or security requirements.
  • Probably NOT a whole multi-building campus.

When there are only one or two MAN or WAN links out of a building or a small group of buildings to the rest of the network, that suggests to me that the building ought to be a separate site.

Coming at this in a different way, I’ve been a strong believer in hierarchical design for years. So, my preference is for a spine-leaf or distribution-access switching structure to be a site. Three levels of switching are ok, too, as one site, within rational scaling bounds.

Any domain with VLANs spanning it is a candidate as a site. Exception: large L2 VLAN spans, which are a Really Bad (and ancient) Design approach.

From this perspective, L3 switches or routers typically form the edge of a site.

And having MAN/WAN routed links that are NOT part of a switched fabric can be A Good Thing in an SD-Access design – they can be underlay. See below.
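As a sketch of what that looks like: a routed underlay link is just a plain global-table interface running an IGP, with no trunking and no VRFs, and the fabric VXLAN tunnels ride over it. The interface names and addressing below are made-up examples, and OSPF stands in for whichever IGP you actually use.

```
! Hypothetical routed underlay link: global-table IP plus an IGP.
! No switchport, no 802.1Q trunk, no VRFs -- fabric VXLAN rides on top.
interface TenGigabitEthernet1/0/24
 no switchport
 ip address 10.254.0.1 255.255.255.252
 ip ospf network point-to-point
!
router ospf 1
 network 10.254.0.0 0.0.255.255 area 0
```

The point is that the underlay stays simple: one address per link, one routing process, regardless of how many VNs/VRFs the overlay carries.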

What’s the Goal of Carving Out Sites?

  • A site should have a well-contained geography with MAN/WAN interconnections.
  • Common macro and micro-segmentation needs (although multiple sites can share a common scheme for those).
  • Locations with major differences in function or security needs maybe should be different sites.
  • In general, holding down the number of sites simplifies building and maintaining things. But generally, in the absence of WAN L2 or other considerations, different geographic locations should probably be different sites for SD-Access purposes.

An Example for Discussion

Suppose you have three adjacent buildings in a distinct physical location, not too large, whose external connections go through a shared pair of L3 switches. Say each building has two or four uplinks from a building distribution switch pair to the L3 switches.

Should that be one site or three?

My answer: Yes. Either. It depends.

Questions that come to mind:

  • Do people move around between the buildings? Outdoor wireless or any network between the buildings (like enclosed corridors or whatever)?
  • Do you need to distinguish between the buildings as far as device addressing? (Somewhat easier with separate sites.)
  • Are there security or other distinctions, or are they just three buildings with similar job roles, etc., across all three?


The Underlay

The underlay must be contiguous. It provides forwarding between sites and also to external border sites, data centers, etc. You don’t really want your VXLAN tunnels traversing some site smack in the middle of the path.

SD-Access (SDA) Transit can handle routing between sites over such an underlay in a scalable way.

If you like VRF-Lite, you can use it between sites instead, as IP Transit. Be aware that it does not scale at all well if you’re going to have more than a couple of VRFs in a multi-site design. There’s also a new-technology vs. comfort-zone factor lurking here.
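To see why VRF-Lite scales poorly, consider what a single IP Transit handoff looks like: one 802.1Q subinterface and one routing adjacency per VRF, per link. The following is a minimal sketch in that style; the VRF names, VLANs, addresses, and AS numbers are all hypothetical.

```
! Hypothetical VRF-Lite (IP Transit) handoff: each VRF needs its own
! subinterface and its own BGP session on every inter-site link.
vrf definition CORP
 address-family ipv4
!
vrf definition IOT
 address-family ipv4
!
interface TenGigabitEthernet1/0/1.101
 encapsulation dot1Q 101
 vrf forwarding CORP
 ip address 10.255.1.1 255.255.255.252
!
interface TenGigabitEthernet1/0/1.102
 encapsulation dot1Q 102
 vrf forwarding IOT
 ip address 10.255.1.5 255.255.255.252
!
router bgp 65001
 address-family ipv4 vrf CORP
  neighbor 10.255.1.2 remote-as 65002
  neighbor 10.255.1.2 activate
 address-family ipv4 vrf IOT
  neighbor 10.255.1.6 remote-as 65002
  neighbor 10.255.1.6 activate
```

With N VRFs and M handoff links, that is roughly N x M subinterfaces and BGP sessions to configure, monitor, and troubleshoot, which is exactly where VRF-Lite stops being fun in a multi-site design.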

External Border Sites

If you have Internet connections, they will likely be at one or two “External Border Sites” with (technically speaking) IP Transit connections from some SDA border routers to the fusion firewall complexes, etc.
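For concreteness, one common fusion pattern, when the fusion device is a router rather than a firewall, is route-target leaking between the fabric VRFs and a shared-services VRF. (A fusion firewall instead applies policy between zones; this sketch only illustrates the router variant.) The names, route distinguishers, and route targets below are hypothetical.

```
! Hypothetical fusion-router sketch: leak shared-services routes into a
! fabric VRF (and vice versa) via route-target import/export. BGP must
! be processing the VRF address-families for the leaking to occur.
vrf definition SHARED
 rd 65001:100
 route-target export 65001:100
 route-target import 65001:101
 address-family ipv4
!
vrf definition CORP
 rd 65001:101
 route-target export 65001:101
 route-target import 65001:100
 address-family ipv4
!
router bgp 65001
 address-family ipv4 vrf SHARED
  redistribute connected
 address-family ipv4 vrf CORP
  redistribute connected
```

This keeps macro-segmentation intact while still giving every VN reachability to DNS, DHCP, ISE, and other shared services; a fusion firewall achieves the same reachability but with inspection in the path.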

If those sites are also data centers, as they often are, so much the better.

If the data centers are separate, then some discussion is needed. Do you need your VRFs to extend to the data centers? Are they also going to have fusion firewalls in them?

And are both data centers connected to both Internet-connected sites? If not, that mildly complicates routing.

I would hope that if you intend external border site redundancy, the underlay connects the other sites to the external border sites redundantly, with no common failure points. If not, then maybe you live with the SPOFs (single points of failure) while planning for better dual-homing, assuming that can be done in a cost-effective fashion.

If that’s not possible, I’d have to see the specific situation. Usually, cabling is the problem, and the cost of remediating the lack of redundancy in a campus or metro environment is the key issue.


You may not find choosing sites ex-site-ing (groan over bad pun here), but doing it well can pay off in ease of understanding, diagramming, building out, and troubleshooting an SD-Access network.

