The Chase for Network Speed May Soon Run Short of Energy


Editor’s Note: Our friend Russ White, a network architect with LinkedIn, recently passed along his thoughts on why the end of the IT world’s obsession with speed may soon be at hand. We thought that you would appreciate his take. Enjoy.

When I first started working on networks, I was installing terminal emulation cards in Z-80 all-in-one computers with dual disk drives and green text on black background screens. To get enough speed for our new personnel network at McGuire Air Force Base in the mid-1990s, we had to install reverse multiplexers to bundle multiple T1-speed links just to support the terminals in a single building. It didn’t help that we had paper- and lead-wrapped wiring, at least until we ripped all the cabling out and replaced the entire phone system.

Since that time, I’ve never known anything other than faster. Faster wireline networks, moving from thicknet to thinnet to the first generation of fiber to 100 Gbps over fiber. Faster wireless networks. Faster storage and memory. And, finally, faster processors. The first 8088 I owned seemed like it would blow the doors off an old Z80, but now I’m working on a laptop with an i7 processor, 16 GB of memory, and a 512 GB SSD. My under-2.5-pound laptop outruns any machine I’ve ever owned in the last 20 years, no matter how large and bulky.

At some point, this continuous improvement in speed is going to end. Consider this: airline speeds increased until we reached the Concorde, but the Concorde never really caught on, did it? At some point, a slight improvement in speed is no longer worth the added risk and cost. When we can run hundreds (or perhaps thousands) of virtual servers on a single processor, and still get reasonable performance, maybe our hardware is actually starting to outrun our need for speed.

But there’s a second problem on the horizon that the network engineering community hasn’t internalized to any real degree. I mentioned it as a reason that software really isn’t going to eat hardware, but I’ve run across another paper that has increased my awareness of the scope of the problem. What is this problem?

Energy.

To quote from the paper, The Cloud Begins With Coal:

The information economy is a blue-whale economy, with its energy uses mostly out of sight. Based on a mid-range estimate, the world’s Information-Communications-Technologies (ICT) ecosystem uses about 1,500 TWh of electricity annually, equal to all the electric generation of Japan and Germany combined — as much electricity as was used for global illumination in 1985. The ICT ecosystem now approaches 10% of world electricity generation. Or in other energy terms — the zettabyte era already uses about 50% more energy than global aviation.

Reduced to personal terms, although charging up a single tablet or smart phone requires a negligible amount of electricity, using either to watch an hour of video weekly consumes annually more electricity in the remote networks than two new refrigerators’ use in a year.

When I first read this last line I couldn’t believe it, so I checked the reference (like any good Ph.D. student would). To my surprise, everything in that paragraph was absolutely true in 2013, when the paper was written. Things have changed some since then, but not much. So what does this have to do with processor speeds? It seems Intel is working on new generations of chips that won’t run faster, but will run cheaper, at least in terms of power consumption.
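For readers who want to see how that video-versus-refrigerators comparison pencils out, here is a rough back-of-the-envelope sketch in Python. The streaming bitrate, the refrigerator consumption, and the viewing time are my own illustrative assumptions, not figures from the paper; the point is simply to show what network energy intensity (kWh per GB) the 2013 claim implies.

```python
# Back-of-the-envelope check of the "hour of video weekly vs. two refrigerators" claim.
# Every input below is an illustrative assumption, not a number taken from the paper.

HOURS_PER_YEAR = 52            # one hour of video per week
GB_PER_HOUR = 3.0              # assumed HD streaming rate, roughly 3 GB per hour
FRIDGE_KWH_PER_YEAR = 350.0    # assumed annual draw of one new, efficient refrigerator
NUM_FRIDGES = 2

video_gb_per_year = HOURS_PER_YEAR * GB_PER_HOUR          # ~156 GB of traffic per year
fridge_kwh_per_year = NUM_FRIDGES * FRIDGE_KWH_PER_YEAR   # ~700 kWh per year

# The network energy intensity the comparison implies, in kWh per GB carried
implied_kwh_per_gb = fridge_kwh_per_year / video_gb_per_year

print(f"Annual video traffic:      {video_gb_per_year:.0f} GB")
print(f"Two refrigerators:         {fridge_kwh_per_year:.0f} kWh/year")
print(f"Implied network intensity: {implied_kwh_per_gb:.1f} kWh/GB")
```

Under these assumptions, the comparison implies an energy cost of roughly 4 to 5 kWh for every gigabyte carried across the remote networks. Whether that intensity was realistic for 2013-era networks (particularly the wireless access piece) is exactly what the paper’s references debate, but the arithmetic shows the claim is not as outlandish as it first sounds.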

To quote the MIT Technology Review:

However, the new technologies Holt cited would not offer speed benefits over silicon transistors, suggesting that chips may stop getting faster at the pace that the technology industry has been used to. The new technologies would, however, improve the energy efficiency of chips, something important for many leading uses of computing today, such as cloud computing, mobile devices, and robotics.

What is the potential impact of this line of thinking? First, the future may not be as “cloudy” as we all assume.

As cloud companies grow, they’re going to move from “building” to “maintaining,” which also means figuring out how to control costs while still offering innovative products and acquiring new customers. One of the pieces of “low-hanging fruit” in this world is going to be energy costs. We’re already seeing companies try interesting new ideas to reduce energy consumption, such as submerging data centers in the sea to take advantage of water’s far greater capacity to carry away heat, and potentially using tidal movement to generate power.

There is a delicate balance between the cost of building (and operating) a data center and the cost of hauling traffic over a wide area to take advantage of remote compute and storage resources. As this balance changes, the economics of building locally versus buying cloud will shift. Falling processor prices can reduce the cost of cloud, but they also reduce the cost of local compute at the same time.
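To make that balance a little more concrete, here is a minimal sketch of the kind of break-even comparison involved. The dollar and energy figures are hypothetical placeholders, not real pricing, and the model ignores plenty of real-world factors (staffing, redundancy, egress tiers, cooling overhead); it only illustrates how shifting energy or transport costs move the answer.

```python
# Toy build-vs-cloud comparison: local compute cost versus cloud compute cost
# plus the cost of hauling traffic over the WAN. All prices are hypothetical.

def local_cost(compute_hours, kwh_per_hour, power_price_kwh, hourly_amortized_hw):
    """Rough annual cost of running a workload on locally owned hardware."""
    energy = compute_hours * kwh_per_hour * power_price_kwh
    hardware = compute_hours * hourly_amortized_hw
    return energy + hardware

def cloud_cost(compute_hours, hourly_instance_price, traffic_gb, wan_price_per_gb):
    """Rough annual cost of the same workload running in a remote cloud."""
    compute = compute_hours * hourly_instance_price
    transport = traffic_gb * wan_price_per_gb
    return compute + transport

hours = 8760        # one server-equivalent running all year
traffic = 50_000    # GB per year hauled between users and the cloud

on_prem = local_cost(hours, kwh_per_hour=0.2, power_price_kwh=0.12,
                     hourly_amortized_hw=0.05)
hosted = cloud_cost(hours, hourly_instance_price=0.08,
                    traffic_gb=traffic, wan_price_per_gb=0.02)

print(f"Local build: ${on_prem:,.0f} per year")
print(f"Cloud:       ${hosted:,.0f} per year")
# Nudge the power price or the per-GB transport price and the cheaper side flips --
# that is the delicate balance described above.
```

Plug in different energy or transport prices and the answer changes, which is the point: as chips get more efficient rather than faster, the energy term shifts on both sides of the comparison.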

Network engineers need to be prepared for these shifts; it is entirely possible that we are headed to a world where connectivity becomes more important and centralized processing less so. The moral of this story is that the network is far from dead or uninteresting.

How will this emerging prioritization of energy efficiency over speed affect your organization’s network? Reach out for a conversation.

Russ White has more than 20 years of experience in designing, deploying, breaking, and troubleshooting large-scale networks. He has co-authored more than 40 software patents, spoken at venues throughout the world, participated in the development of various internet standards, helped develop the CCDE and the CCAr, and worked in Internet governance with the ISOC. Russ is currently a member of the Architecture Team at LinkedIn. His most recent books are “The Art of Network Architecture” and “Navigating Network Complexity.”
