What’s in Store for the Future of the Network?

Author
Peter Welcher
Architect, Operations Technical Advisor

In a recent blog post, I discussed how the WAN is changing for some organizations. This post takes up the broader question of what else might be changing in networks, and how that affects staffing and skills. In fact, this should perhaps become an annual post covering what has shifted in the last year.

Let’s treat this as a catch-up blog on trends I’m seeing, which may or may not catch on.

Wired and WLAN

WLAN speeds have been increasing, and the quality of WLAN services and operational support has been improving. Cisco’s WLAN offerings are strong in this regard, particularly the mobile device QoS “Fast Lane” and other cooperative efforts with Apple, and integration with enterprise mobile device management (MDM) products.

WLAN done well has the potential to replace most wired networking, particularly since most users primarily use WLAN at home. Of course, enterprise APs currently require a wired POE infrastructure. WLAN done poorly — well, that’s for masochists. There are apparently a lot of people who qualify; enterprise WLAN is different from home WLAN.

One neat Cisco product can be added to APs so that they also operate as cellular small cells, in conjunction with a cell provider. It uses IPsec tunnels for secure backhaul of the cellular traffic. The visual description that comes to mind is “Mickey Mouse ears for your AP,” which probably manages to offend both Cisco and Disney in one sentence. So let’s pretend I didn’t write that and go with “outboard antenna ears with mounting behind your AP,” as I’m in no way intending to disparage or poke fun at the product. Other modules are coming that can be added to certain Cisco APs for different market niches, e.g., retail beacons.

If you hadn’t noticed, leaky coax and other DAS systems are generally poor for WLAN. Think MIMO, frequency limitations, etc. The Cisco product does traditional WLAN well and supplements it with small cell and backhaul, which avoids the DAS compromise of doing one of the two things well and the other sub-optimally.

Potential problem areas for most WLAN networks:

  • Legacy site surveys. Have YOU re-surveyed recently? With RF signal turned down? Maybe even planning for dense cells? What kind of bandwidth per user is your requirement? (See the rough capacity sketch after this list.) By the way, all this means coordinating with your site survey person before they do the survey, so they measure properly.
  • Cheap printers (wake up, HP!) and laptops (many vendors) that are still, inexcusably, 2.4 GHz-only. 802.11ac is where they should be now.
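
As a rough illustration of the per-user bandwidth question, here is a back-of-envelope capacity sketch in Python. Every number in it is a hypothetical placeholder, not a recommendation; substitute your own user counts, per-user requirements, and measured AP throughput.

```python
# Back-of-envelope WLAN capacity math (all numbers are hypothetical
# placeholders; substitute your own survey data and requirements).

users_per_area = 120            # concurrent users in the coverage area
per_user_mbps = 5               # required bandwidth per user (Mbps)
effective_ap_mbps = 200         # realistic usable throughput per AP cell,
                                # well below the 802.11ac data-sheet rate

total_demand_mbps = users_per_area * per_user_mbps
aps_needed = -(-total_demand_mbps // effective_ap_mbps)   # ceiling division

print(f"Aggregate demand: {total_demand_mbps} Mbps")
print(f"APs needed (capacity only, ignoring RF layout): {aps_needed}")
```

The capacity answer then has to be reconciled with the RF design, which is exactly why the survey person needs your requirements up front.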

Reasons to keep wired networks:

  • Wired phone handsets.
  • Other wired devices, especially POE devices.

I can’t really say wired will go away. It may shrink.

Bear in mind that with Cisco facilities switches for smart lighting and the like, there may be a shift. POE devices in various forms still justify wiring, or perhaps improve the ROI for POE, and low-power facilities cabling could be useful. Maybe we’ll be talking about wire-powered IoT versus wireless IoT devices?

Datacenter

Cloud is the obvious technical impact here. My claim is that applications really ought to be redone properly for the cloud; otherwise, what you’re really getting is CoLo, at cloud prices.

SaaS shrinks whatever datacenter you have: fewer apps to host, and lower support costs. Office 365 is a leading example. That frees up server admin cycles to deal with the specialized apps, VDI, etc.

UCS and converged infrastructure do the same. You can run a medium-sized business out of one UCS chassis. And if you’re doing that, why not stick it in a CoLo? This fits into the one-to-a-few racks (cabinets) size range.

In conjunction with this, I’ll note Ivan Pepelnjak’s blog about only needing two top-of-rack switches in a datacenter. I did similar math a while ago, and keep updating it. You can run a lot of VMs in a UCS chassis, and three to four times as many in a rack. You might need two switches to interconnect all that, or two racks of such chassis. Even years ago, the math came out to 1000-2000 VMs, depending on how much CPU and RAM they use. You’ve got to be a pretty big company to need more than that. Heavy virtualization = “Honey, I Shrank the Datacenter.”
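
For the curious, here is a minimal sketch of that kind of math. The host, chassis, and VM sizes below are hypothetical placeholders; the point is just how quickly the VM count grows once you multiply it out.

```python
# Rough VM-density math along the lines described above.
# All inputs are hypothetical; plug in your own host and VM sizing.

hosts_per_chassis = 8           # blades in a UCS chassis
chassis_per_rack = 4            # chassis that fit in one rack
cores_per_host = 36
ram_gb_per_host = 512

vm_vcpu = 2                     # average VM size
vm_ram_gb = 8
vcpu_oversub = 4                # vCPU:pCPU oversubscription ratio

vms_by_cpu = (cores_per_host * vcpu_oversub) // vm_vcpu
vms_by_ram = ram_gb_per_host // vm_ram_gb
vms_per_host = min(vms_by_cpu, vms_by_ram)

print(f"VMs per host: {vms_per_host}")
print(f"VMs per chassis: {vms_per_host * hosts_per_chassis}")
print(f"VMs per rack: {vms_per_host * hosts_per_chassis * chassis_per_rack}")
```

With these made-up inputs you land around 2,000 VMs per rack, the same ballpark as the figure above.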

So once you’ve done a good job of virtualizing, this could be another incentive to move things to a CoLo. It costs less to put a shrunken datacenter into a CoLo. Dare we refer to a non-virtualized datacenter as “bloated”?

Combining this with a CoLo-centric WAN works well. We are seeing more and more customers shifting to this. It provides better datacenter redundancy and security than a datacenter in costly office space.

In one case, it helped anticipate moving the organization to a new building: The customer moved the datacenter to CoLo space, then could focus solely on standing up the new building user space, WAN link to the datacenter, and moving users. (Pro tip, thanks to John and Mark!)

Other datacenter-related thoughts:

  • SSD (solid-state) storage greatly increases IOPS. That moves the bottleneck elsewhere, probably back to CPU and network. Do you know where your bottlenecks are?
  • VMware VSAN and similar “hyperconverged” server technologies definitely impose heavier burdens on the network, both bandwidth-wise and regarding stability and reliability. You do monitor up/down, error%, and discard% on all your infrastructure links, don’t you? (A minimal sketch of that check follows this list.)
  • I suspect VMware VSAN (or similar technology) scales well up to a point, then hits diminishing returns. I’d like to know where that point is. If you know of any good research on this, please share a link via Twitter and/or a blog comment!
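
Regarding the monitoring question in the second bullet, here is a minimal sketch of the error%/discard% check, assuming counters polled from IF-MIB (ifInErrors, ifInDiscards, ifHCInUcastPkts, and so on) at two points in time. The sample values and the threshold are hypothetical.

```python
# Minimal sketch of the error%/discard% check mentioned above.
# Counter values would come from SNMP (e.g., IF-MIB ifInErrors,
# ifInDiscards, ifHCInUcastPkts) polled at two points in time;
# the dictionaries below are hypothetical samples.

def delta(curr, prev, key):
    """Counter delta between two polls (ignoring counter wrap for brevity)."""
    return curr[key] - prev[key]

def link_health(prev, curr, threshold_pct=0.001):
    pkts = delta(curr, prev, "in_packets")
    if pkts == 0:
        return "no traffic"
    err_pct = 100.0 * delta(curr, prev, "in_errors") / pkts
    disc_pct = 100.0 * delta(curr, prev, "in_discards") / pkts
    status = "OK" if max(err_pct, disc_pct) < threshold_pct else "INVESTIGATE"
    return f"errors {err_pct:.4f}%  discards {disc_pct:.4f}%  -> {status}"

prev = {"in_packets": 10_000_000, "in_errors": 2, "in_discards": 40}
curr = {"in_packets": 10_900_000, "in_errors": 2, "in_discards": 4500}
print(link_health(prev, curr))
```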

Summing Up So Far

So you’ve moved your datacenter, WAN, and firewalling/network edge to the CoLo. What’s left is some user access networking at each site, some mix of wired ports and heavy WLAN.

For what it’s worth, NetCraftsmen runs Cisco Jabber out of a CoLo. I can’t quite call it flawless, since Jabber seems not to reliably auto-detect changes of location/address/web proxy or security device. I’ve learned to restart Jabber whenever I change sites. That’s an application issue, not a network issue!

File Shares and WAN

Some moderately sized businesses we have worked with are experiencing issues with file sharing to field offices scattered around a region, a few states centered on the HQ/datacenter. Some recent (painful) testing showed that site-based NetApp SMB2 did not work very well over 1 Gbps MetroEthernet (relatively inexpensive!) links with 30 msec latency. That customer and a couple of others have reported good experience with Panzura caching. This solves the problem of trying to manually or semi-manually pre-cache large AutoCAD files, etc., at project sites, especially when your remote workers’ skills mix doesn’t align well with a single office managing a project.
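
A rough illustration of why latency, not bandwidth, was the limiter: single-stream throughput is capped by the amount of data in flight divided by the round-trip time. The 64 KB window below is an illustrative assumption, not a measured NetApp or SMB2 value.

```python
# Why 1 Gbps + 30 ms latency still feels slow for file shares:
# single-stream throughput is bounded by window size / RTT, not by
# link speed. Numbers below are illustrative assumptions.

link_mbps = 1000
rtt_s = 0.030                    # 30 ms round trip
window_bytes = 64 * 1024         # e.g., a 64 KB effective window per request

max_bps = window_bytes * 8 / rtt_s
print(f"Per-stream ceiling: {max_bps / 1e6:.1f} Mbps "
      f"(vs. {link_mbps} Mbps link)")

# To fill the pipe you'd need roughly this much data in flight:
bdp_bytes = link_mbps * 1e6 / 8 * rtt_s
print(f"Bandwidth-delay product: {bdp_bytes / 1024:.0f} KB in flight")
```

That is exactly the gap a local cache closes: reads come from the site, not across 30 msec of WAN.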

I can’t help but think, if that works well out of HQ/datacenter, it could work as well out of a CoLo.

Datatility has some interesting things to say about use of such caching with CoLo or cloud-based storage, particularly for backup and recovery. When comparing Cloud to CoLo for backup, it is important to factor in actual storage costs, which may not favor a cloud-based approach.
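
As a sketch of that comparison, here is the sort of arithmetic involved. Every price and quantity below is a made-up placeholder; plug in real quotes, and don’t forget egress charges on the cloud side.

```python
# Simple monthly cost comparison for backup storage, cloud vs. CoLo.
# Every number here is a made-up placeholder; substitute real quotes.

tb_stored = 50
monthly_restore_tb = 5           # data pulled back out per month

cloud_per_tb_month = 23.0        # object storage, $/TB-month (placeholder)
cloud_egress_per_tb = 90.0       # data transfer out, $/TB (placeholder)

colo_storage_capex = 40000.0     # array purchase, amortized over 36 months
colo_space_power_month = 800.0   # rack space, power, cooling (placeholder)

cloud_monthly = tb_stored * cloud_per_tb_month + monthly_restore_tb * cloud_egress_per_tb
colo_monthly = colo_storage_capex / 36 + colo_space_power_month

print(f"Cloud:  ${cloud_monthly:,.0f}/month")
print(f"CoLo:   ${colo_monthly:,.0f}/month")
```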

Security

It appears there is a continuing plethora of vendors and point solutions. Security monitoring architectures and automation are key, unless you’re into building a large security team. I see indications that for small to medium organizations, managed services can be very attractive. As with UC&C, security is just becoming too much for such organizations to handle in-house.

Applications

I’ve become a big believer in simplicity, to the extent possible. I also believe that we have created major network and datacenter complexity and fragility catering to poorly written applications. (And thanks to ipspace.net for this perspective.) Yes, some of it is self-inflicted (e.g., by customizing each site’s design). Using standardized designs and sticking to them is key to scalable and frugal operations, support, and automation.

This is why shifting to SaaS-based applications is a winner, particularly for smaller businesses. It lightens the server/application admin burden, and uses applications designed for the internet, which (one hopes) avoids the complexities of the past. Or at least makes them someone else’s problem. It perhaps makes it clearer that customization = additional cost for the life of the application.

We’re not out of the woods yet. It’ll take years, even decades, to rewrite the remaining applications to be “cloud-ready,” or at least robust enough to support high availability. Some of my wish list: ease of re-IP and DNS name changes for applications, and the ability to easily clone app front ends to different locations and different IP addresses for use with load balancers.
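
To make the wish list concrete, here is a minimal sketch of the “easy to re-IP, easy to clone” idea: the front end learns its bind address and its backend’s DNS name from the environment rather than from hardcoded values. The variable names and defaults are hypothetical, not from any particular application.

```python
# Sketch of the "easy to re-IP / easy to clone" idea: the front end
# learns its own bind address and the backend's DNS name from the
# environment, so standing up another copy behind a load balancer is
# a deployment change, not a code change. Variable names are hypothetical.

import os
from http.server import BaseHTTPRequestHandler, HTTPServer

BIND_ADDR = os.environ.get("APP_BIND_ADDR", "0.0.0.0")
BIND_PORT = int(os.environ.get("APP_BIND_PORT", "8080"))
BACKEND_NAME = os.environ.get("APP_BACKEND", "db.example.internal")

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.end_headers()
        self.wfile.write(f"front end for {BACKEND_NAME}\n".encode())

if __name__ == "__main__":
    # Each clone gets its own APP_BIND_ADDR/APP_BACKEND at deploy time.
    HTTPServer((BIND_ADDR, BIND_PORT), Handler).serve_forever()
```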

Unified Communications and Collaboration

My crystal ball says “cloud-based” and “managed services.” Our UC&C folks may disagree, but I didn’t take the time to discuss this with them.

Network and Datacenter Management

Network and other management tools can now collect and process a lot more data. Vendors may be starting to learn how to detect and report potential issues in better ways, ones where we don’t have to go poking around all over the place to find a problem. Reporting changes in routing tables, flagging significant changes in link and router performance, and helping us answer “It’s broken, what changed?” is another area where tools could do a lot better.
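
As a small illustration of the “what changed?” idea, here is a sketch that diffs two routing-table snapshots, however they were collected, and reports additions, withdrawals, and next-hop changes. The prefixes and next hops are hypothetical.

```python
# Minimal sketch of the "what changed?" idea applied to routing tables:
# diff two snapshots (however they were collected) and report additions,
# withdrawals, and next-hop changes. The sample data is hypothetical.

def diff_routes(before, after):
    added = {p: nh for p, nh in after.items() if p not in before}
    removed = {p: nh for p, nh in before.items() if p not in after}
    changed = {p: (before[p], after[p])
               for p in before.keys() & after.keys()
               if before[p] != after[p]}
    return added, removed, changed

before = {"10.1.0.0/16": "192.0.2.1", "10.2.0.0/16": "192.0.2.1"}
after = {"10.1.0.0/16": "192.0.2.9", "10.3.0.0/16": "192.0.2.1"}

added, removed, changed = diff_routes(before, after)
print("added:", added)
print("withdrawn:", removed)
print("next-hop changed:", changed)
```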

Having said that, cross-platform integration is hard. It’s even harder when a vendor has several disparate products for various technical niches. Still, I’m mildly optimistic that the opportunity is there for network management tools to get a lot better.

Personally, I think Network, Datacenter, Server, SAN, Storage, and Application all need to be more tightly integrated in tools. Security has to tie them together as well, but maybe in a different way. Or maybe via a shared topology model under the hood. Whatever the answer is, the tools need to do a far better job of saving us work, especially in detecting and resolving problems.

My theory of sailing: Boat maintenance time must be noticeably less than fun time on the water. My current beat-up dinghy fails that test. A variant of that applies to tools: Tool maintenance time must be significantly less than time the tool is useful for troubleshooting, etc.

Staffing

Overall, we see small to medium-sized organizations grappling with technical complexity, compounded with finding skilled staff at pay rates they can afford. Even if you have a small number of in-house network, server/storage, and security staff, UC&C and ISE and other specialty areas may be too much to cover well in-house.

One answer is partnering for technical services, something NetCraftsmen has been increasingly providing to our customers. It may be more helpful to have someone with deep knowledge of an area a couple of days a week, rather than a less-skilled full-time employee. While that may sound like a marketing pitch, it is also an observation of what works for an increasing number of our customers.

There are also areas such as ISE or UC&C, where supplementing one employee in that area with expert advice and a sounding board for ideas can help. If you’re the one ISE person in a fair-sized organization, who do you talk to about ISE? Who can advise based on lots of field experience at multiple sites?

Skills

I think the above has skills implications, depending on where your employer, network, and interests are going. Note that the datacenter doesn’t go away; it shrinks and maybe moves to a CoLo. So the basic elements are the same, just used in novel and more compact ways.

One shift: You’ll probably be spending more time with your UCS chassis than on the switches in the CoLo. What else? WLAN, and WAN/internet routing will still be needed. Maybe some IWAN or SD-WAN management, which should be automated for you already or soon.

Interacting with cloud or CoLo provisioning and management platforms would be a useful skill, as would being able to advise on good cloud application approaches and networking, VMware NSX, and/or container networking design, with security, scalability, and operational manageability in mind.
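
Purely as an illustration of what “interacting with a provisioning platform” can look like, here is a tiny sketch using AWS and the boto3 SDK (my assumption; your platform and API will differ), with credentials and region already configured in the environment.

```python
# Illustration only: one flavor of "interacting with a cloud
# provisioning platform," here AWS via the boto3 SDK (assumes
# credentials and region are already configured in the environment).

import boto3

ec2 = boto3.client("ec2")

# Inventory VPCs and their CIDR blocks, the sort of data a network
# engineer needs before advising on cloud network design.
for vpc in ec2.describe_vpcs()["Vpcs"]:
    print(vpc["VpcId"], vpc.get("CidrBlock"))
```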

White Box Switches

Some people think “white box” switches will rule the world. I view them as shifting device purchase cost to human support costs. Since staff time is scarce already, spending for reliable hardware/software and support is a win. Of course, if you don’t receive said reliability, then that’s a problem.

What I like about some of the white box switch discussions is that in a lot of settings, all that may be needed is VLANs and MLAG. OK, I get that.

I view white box devices with a mix of software as a support nightmare. Who validates a combination of a particular white box switch hardware model and various open source code versions? Who can help me troubleshoot a bug with such a beast?

A recent blog post covered server complexity and being bound to fail; it is perhaps related, in that variety causes complexity and failures.

I see the potential cost savings as working for very big organizations, or maybe university/research environments. Maybe. The rest of us don’t have enough cycles as things are.

Sharing the Crystal Ball

I’m sure I haven’t spotted every change in network design that’s in progress! Please leave a comment to add trends that you’ve noticed, or your thoughts about how things are changing.

Comments

Comments are welcome, whether in agreement or constructive disagreement with the above. I enjoy hearing from readers and carrying on deeper discussion via comments. Thanks in advance!

