How Is Networking Changing?

Author
Peter Welcher
Architect, Operations Technical Advisor

I’ve noticed recently that the blog and “news” topics I’ve been seeing have changed. So I started trying to figure out what’s different and why. This blog discusses some of what I think I’ve been seeing.

Let’s call it a discussion of large industry trends, bearing in mind the industry is in a state of considerable flux. Blog and other trends suggest what is considered new and interesting, which to some extent also captures shifts in what people spend their time doing (or anticipate spending future time doing).

Constructive comments and discussion are welcomed; LinkedIn is probably the easiest way to do that. (Find me, find the mention of this blog, and comment away!)

Driving Factors

Mulling this over, I see four driving factors. There may be more, but these are the ones that stand out right now:

  • Cloud (and Edge)
  • Skilled staff, and staffing in general
  • Automation
  • Security

There’s also Artificial Intelligence (AI), but I see that more as LLM-related hype right now, aka “shiny new thing.” People are trying to figure out where LLMs might be useful. And yes, there may be some uses for LLMs, but (as Bruce Davie recently noted) they model language, not the world. Other parts of AI get little attention but are quite useful. That’s a good subject for another blog.

Cloud is, of course, the elephant in the room. With Edge as a baby elephant? The world is still learning how to use the Cloud efficiently, when NOT to use it, and how to use it without breaking the bank. DevOps now sometimes includes security and networking personnel, which requires a broader skillset and familiarity with more tools. I think I’ve blogged about that previously.

Measurement/observability is a related factor (and yes, I may have drunk too much of the Kentik Kool-Aid!). Some of their recent blogs have been interesting, going into what an observability team might look like, etc. See also DataOps. Larger firms do this, but then again, that’s Kentik’s wheelhouse. It resonates with me as highly relevant to effective DevOps, to figuring out how best to structure your Cloud apps for cost and speed efficiency, and to understanding the trade-offs.

Finding staff with experience and skills (and affording them) has become a real challenge for organizations. One partial answer is outsourcing, if your internal pay scale won’t let you pay enough. BlueAlly/NetCraftsmen has benefited from that (managed services, partnering). But another aspect of the pay situation is staff efficiency and retention. Better retention can mean having staff who have accumulated a broader skillset.

Vendors have been pushing management platforms and AI to address staffing concerns, and also as revenue streams while chips were scarce and hardware was badly back-ordered. The supply chain problem seems to be clearing up, so I expect a mild “deployment catch-up” surge to deploy back-ordered gear and service new orders to replace old gear or expand networks.

One strength of such management platforms from the customer’s point of view is the potential for increased reliability and better management. This trend may accelerate as vendors add AI-based reporting and troubleshooting to the management platforms and get better at it.

Staffing considerations have also driven some organizations to move to a NaaS (and other aaS) model: have someone else build or operate the network, with economies of scale. Small to medium-sized organizations find it hard to gain such economies, and as technology gets more complex, in-house staffing for all the necessary skills becomes a real challenge. In particular, compensation: not being able to pay enough to retain adequately skilled staff. Smart organizations also recognize that it is best to have networks designed and deployed by people who solidly know what they’re doing.

Flexible outsourcing can help by covering spot needs for specific advanced technical skill sets. NetCraftsmen/BlueAlly has some current customers with open-ended staff supplementation, where the in-house staff handle switching, routing, and some firewall work, but firewall migration and broader security tooling (ISE, etc.) are done by our consultants. Others handle day-to-day operations but outsource change/deployment due to the sheer scale. Yet others use consulting services to design and install identity and security tools, given the plethora of integrated products that is becoming common.

Automation of operations is growing. It does require scale to amortize the costs, and likewise the skills. The obvious result of automation is usually increased reliability, along with better use of scarce staff time. The ability to thoroughly test automation scripting is also a challenge at smaller scale.

That’s where the vendor platforms are a win. While life seems to be moving away from the CLI, templating does offer a shortcut to a fair degree of automation, empowering smaller shops (see the sketch below). The CLI holds historical relevance but can also bring a degree of lock-in. I’ve been mulling over a Cisco product-neutral CLI that could instantiate the appropriate commands for each platform. I like the idea but don’t see any real incentive for Cisco to go there. Skipping ahead to GUI-based management, with complexities such as VXLAN “hidden under the hood,” probably makes more sense.
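To make the templating point concrete, here is a minimal sketch of config templating with Jinja2. The interface names, addresses, and template text are all hypothetical, purely for illustration; real tools (Ansible, vendor controllers, etc.) wrap the same idea in much more machinery. The assertion is a nod to the testing point above: even a trivial check catches template drift.

```python
# Minimal config-templating sketch using Jinja2 (pip install jinja2).
# All interface names and values below are hypothetical examples.
from jinja2 import Template

INTERFACE_TEMPLATE = Template(
    "interface {{ name }}\n"
    " description {{ description }}\n"
    " ip address {{ ip }} {{ mask }}\n"
    " no shutdown\n"
)

interfaces = [
    {"name": "GigabitEthernet0/1", "description": "uplink-to-core",
     "ip": "10.1.1.1", "mask": "255.255.255.0"},
    {"name": "GigabitEthernet0/2", "description": "user-vlan",
     "ip": "10.1.2.1", "mask": "255.255.255.0"},
]

for intf in interfaces:
    config = INTERFACE_TEMPLATE.render(**intf)
    assert intf["ip"] in config  # cheap smoke test of the rendered config
    print(config)
```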

Looking forward (and at ONUG), automation of key network constructs across multi-cloud seems highly attractive. And perhaps eventually on-prem as well? The risk is moving complexity from the various piecemeal approaches into the “meta-tools” that cross the CSP and on-prem boundaries. This area is hot right now, not boring routine! But also costly: a shift of costs from skills and labor to automation, perhaps?

Adding AI might be something better left to the pros, baked into network and security management products. Staff will have to know what the AI is doing and its strengths and weaknesses – but maybe not how to code it! There is some cool stuff happening in that space. For example, Cisco is using AI (or what I prefer to call “advanced statistics”) and its vast amount of data on equipment and part failure rates to predict when failures may occur.
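As a toy illustration of that “advanced statistics” angle (my framing, not Cisco’s published method), a constant-failure-rate model turns a part’s historical MTBF into a failure probability over a planning horizon. The MTBF figure below is made up.

```python
# Toy failure-prediction sketch: exponential (constant-hazard) model.
# Real vendor models are far richer; the MTBF value is hypothetical.
import math

def failure_probability(mtbf_hours: float, horizon_hours: float) -> float:
    """P(failure within horizon), assuming a constant failure rate."""
    return 1.0 - math.exp(-horizon_hours / mtbf_hours)

# Hypothetical part with a 200,000-hour MTBF, over one year of runtime.
p = failure_probability(mtbf_hours=200_000, horizon_hours=24 * 365)
print(f"Estimated probability of failure within a year: {p:.1%}")  # ~4.3%
```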

Security. The security space has ballooned in terms of products, capabilities, and now Zero Trust (ZT/ZTNA). Far too many products, as I see it. Not that I have an answer. I’ve seen an “eye chart” listing 47 varieties of security products, with something like 1000 vendor icons classified into those groupings. Ouch!

The complexity is compounded by vendor lock-in and differing cloud versions of things. My sense of this space is thinner; I tune out due to the overhype and the wealth of minute detail in some of it. I just have to draw the line somewhere as to where I’m going to put in the time.

SecOps outsourcing for all but the largest firms may be required due to economies of scale in software and staffing: small- to medium-sized organizations just can’t staff up enough, or maintain enough of the tools, in this space.

The other thing at play in Security is the toolset. There are a lot of niche tools and differing approaches, ditto in the ZT/ZTNA space. Ease of deployment and simple macro- and micro-segmentation are key goals. That alone might be a good topic for another blog or two. Identity management is another focus and a major source of complexity (and need for skills), and Security alerting/response/mitigation is yet another (where even fairly big shops may not be able to staff up sufficiently?).

Consequences

Automation, better tools, etc. also mean less time typing CLI at devices, deploying new or replacement devices, etc. Plus higher fidelity: deployments that match intent. All of which is good for those who can do it. So some of the legacy networking skillset and job tasks have become boring, routine, and automated. Good!

The same is true on the management/troubleshooting side of things. Recent growth in the tools space has been vendor-driven, with tools that selectively pull SNMP or telemetry data, collect logs, and analyze it all for you, proactively and in troubleshooting scenarios. I will call that the niche- or module-management approach, e.g., campus switch management versus data center management versus WAN/SD-WAN management (based on Cisco’s lineup). If one tool covers all three niches, so much the better!
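Under the hood, that data collection starts simple. Here is a minimal sketch of the kind of SNMP poll such tools build on, using the classic pysnmp high-level API; the device address and community string are placeholders, and real products obviously do this at scale with far more smarts.

```python
# Minimal SNMP GET sketch using pysnmp's classic high-level API
# (pip install pysnmp). Device address and community are placeholders.
from pysnmp.hlapi import (
    getCmd, SnmpEngine, CommunityData, UdpTransportTarget,
    ContextData, ObjectType, ObjectIdentity,
)

error_indication, error_status, error_index, var_binds = next(
    getCmd(
        SnmpEngine(),
        CommunityData("public"),                 # placeholder community
        UdpTransportTarget(("192.0.2.1", 161)),  # placeholder device IP
        ContextData(),
        ObjectType(ObjectIdentity("SNMPv2-MIB", "sysDescr", 0)),
    )
)

if error_indication:
    print(error_indication)  # e.g., timeout: device unreachable
else:
    for var_bind in var_binds:
        print(" = ".join([x.prettyPrint() for x in var_bind]))
```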

Products that (allegedly) work across vendors: why am I thinking of Juniper Apstra right now? Others?

If you’re in a single-vendor shop, that’s great (but: lock-in). In a mixed-vendor shop, that creates another pain point.

The flip side of good tools is that skills become additive: one needs some level of CLI and/or hardware skills, PLUS skills with the automation or GUI tools and their quirks. So if a medium level of skill was a “1,” it becomes maybe 0.6 (basic) + 0.8 (tool #1) = 1.4, i.e., more to know.

Modeling is also great for testing changes before deploying them, assuming someone has the time to do that well; it is not quite there yet in a cost-effective way. I’m impressed with the evolution of network modeling tools (limited by vendor image availability) and related automation tools (like Ivan Pepelnjak’s netlab). It would be great if vendors (ahem, Cisco in particular) could resolve concerns about providing a wide set of lab images or containers, e.g., with sufficient lock-down to prevent use for anything but lab work.

Automation Framework Tools

There also seems to be a growing number of tools for repeated deployments, or “cookie-cutter” automation. They look like project-management front ends to which you can attach back-end automation, scripts, and/or other tools. That makes sense given the likely market: such costly products will likely require senior management buy-in.

Some of the tools let you “design” standard sites (think small, medium, or large WAN sites?) and will take input, build a BOM (bill of materials), and (to some unclear degree) assist in deployment.
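To make that concrete (a purely hypothetical sketch, not any particular product’s data model), the core of such a tool is a mapping from standard site sizes to parts lists, aggregated into a BOM:

```python
# Hypothetical "standard site -> BOM" sketch. Part names and counts are
# invented for illustration, not taken from any vendor's catalog.
SITE_TEMPLATES = {
    "small":  {"edge-router": 1, "access-switch": 2, "firewall": 1},
    "medium": {"edge-router": 2, "access-switch": 6, "firewall": 2},
    "large":  {"edge-router": 2, "core-switch": 2,
               "access-switch": 20, "firewall": 2},
}

def build_bom(site_counts: dict[str, int]) -> dict[str, int]:
    """Aggregate a bill of materials from counts of each site size."""
    bom: dict[str, int] = {}
    for size, count in site_counts.items():
        for part, qty in SITE_TEMPLATES[size].items():
            bom[part] = bom.get(part, 0) + qty * count
    return bom

# Example rollout: 10 small sites, 3 medium, 1 large.
print(build_bom({"small": 10, "medium": 3, "large": 1}))
```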

From the marketing literature, everything looks seamless and easy, of course. It is also hard to tell how much of the automation back end is already present versus something you have to provide or purchase as services from the vendor for a suitably large fee! (Is my cynicism showing?)

My superficial impression is that some such tools are intended for in-house onsite use and may include building BOMs, etc. Others appear to be more multi-cloud oriented, as in creating common logical constructs across cloud vendors without having to sweat terminology and other differences. (Which raises the question of differing behaviors, especially regarding security?)
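As a sketch of what “common logical constructs” might mean (the mapping below is illustrative; the constructs are only roughly comparable, and real tools go much deeper), the abstraction often boils down to a translation layer per cloud provider:

```python
# Hypothetical multi-cloud abstraction sketch: one generic construct
# name mapped to per-provider terms. Roughly comparable, not identical.
GENERIC_TO_PROVIDER = {
    "network": {"aws": "VPC", "azure": "VNet", "gcp": "VPC network"},
    "subnet":  {"aws": "subnet", "azure": "subnet", "gcp": "subnetwork"},
    "router":  {"aws": "transit gateway", "azure": "Virtual WAN hub",
                "gcp": "Cloud Router"},
}

def translate(construct: str, provider: str) -> str:
    """Map a generic construct name to one provider's term."""
    return GENERIC_TO_PROVIDER[construct][provider]

for cloud in ("aws", "azure", "gcp"):
    print(cloud, "->", translate("network", cloud))
```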

Russ White is fond of saying “look for the trade-offs.” I can’t disagree; there are always trade-offs. My variant for the automation space is “look for the (hidden?) complexity.” Did you just move the complexity? Is the result more or less complex? Does the tool’s complexity help address a staffing shortfall or skills gap?

Some of these “PM downwards automation” tools (is there a better name?) seem likely to have a lot of complexity “under the hood.” Large shops can perhaps afford one or two specialists in such a tool; small organizations, maybe not. That perhaps gives managed services providers an edge in using them?

And that’s not even considering anything involving or touching ServiceNow … (Not going to continue that thought.)

The deployment planning/automation tools could also create efficiencies for vendors and VARs, lowering the costs of bidding and deploying anything that fits common design patterns. Which should be just about everything these days?

If a reseller or managed services firm has a standard design approach and set of products for networking and security, having that firm build out your network ought to be cheaper and faster than do-it-yourself (maybe with some consulting)? Perhaps while accepting some compromise in terms of product mix?

So, the big question is: will that happen, or will somewhat less comprehensive automation tools be preferred?

Less all-inclusive tools provide the opportunity to build internal skills, try various toolsets, and find the best fit for your organization’s needs.

Let’s start a conversation! Contact us to see how NetCraftsmen experts can help with your complex challenges.
