Router Buffer Tuning

Author
Terry Slattery
Principal Architect

Buffer tuning has long been an interesting topic for me.  I recently found a blog post by Brough Turner about a potential misconfiguration of AT&T wireless routers that causes ping round-trip times to be either under 200ms or around 8000ms (yes, 8 seconds!).

http://blogs.broughturner.com/2009/10/is-att-wireless-data-congestion-selfinflicted.html

There was quite a bit of controversy in the comments, which make for interesting reading.  One thing that I didn’t see covered in the original post or in the comments was TCP retransmissions.  TCP measures the round-trip time and retransmits an unacknowledged segment if it does not receive an ACK within the retransmission timeout (see RFC 2988).  Retransmissions occur at an interval that doubles after each timeout (the “back-off timer”).  An increase from 200ms to 8000ms is possible after five or six retransmissions.  If a lot of TCP connections share the same link and a lot of buffering is used, the buffers hold more and more retransmitted data, increasing the congestion on the link as the retransmitted data is sent.  If there are just a few TCP connections, then something else is causing the long delays.
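A back-of-the-envelope sketch of that doubling shows how the delay reaches the 8-second range.  The 200ms initial RTO is an assumption chosen to match the fast pings observed in the post, not a value stated there:

```python
# Sketch of the RFC 2988 retransmission back-off: the RTO doubles after
# each timeout.  The 200 ms starting value is an illustrative assumption.

def backoff_schedule(initial_rto_ms, retransmissions):
    """RTO (in ms) used for each successive retransmission."""
    return [initial_rto_ms * (2 ** n) for n in range(retransmissions)]

schedule = backoff_schedule(200, 6)
print(schedule)       # [200, 400, 800, 1600, 3200, 6400]
print(sum(schedule))  # 12600 -- the cumulative wait passes 8000 ms during the sixth interval
```

After five retransmissions the cumulative wait is 6200ms, and it crosses 8000ms partway through the sixth back-off interval, which is consistent with the observed 8-second pings.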

I can envision other mechanisms causing the long delays.  The pings were data.  If QoS were used to prioritize voice and there was a lot of voice traffic at the time of the test, the data could have been buffered for a long time.  I’ve seen this in network testing.  It is easy to replicate in a relatively small network running old routing protocols like RIP and IGRP.  Create a lot of routes, so the updates are relatively large, and use a slow link.  Set up a workstation to ping at 1-second intervals over several minutes and capture the resulting data.  Import the data into Excel and plot the sequence number against the round-trip time (you’ll need a ping output that includes the sequence number so you can detect packet loss).  You’ll see the ping packets get delayed when the routing updates occur, and a saw-tooth pattern appears in the plotted data.  Ping packets can be delayed by several seconds when the updates are large and the links are slow.  I am not familiar with the Layer 1/2 protocols used in cellular networks, but I could also believe that there’s a low-level protocol, maybe similar to X.25, that’s buffering the data and eventually getting it pushed through a very lossy link.
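The same delay-and-loss analysis can be done in a few lines of script instead of Excel.  This is a minimal sketch that assumes Linux-style ping output (adjust the regex for your ping implementation); the sample text and threshold are made up for illustration:

```python
import re

# Parse ping output, flag probes delayed past a threshold, and detect
# missing sequence numbers (packet loss).
LINE = re.compile(r"icmp_seq=(\d+)\s+ttl=\d+\s+time=([\d.]+)")

def analyze(ping_output, delay_threshold_ms=1000.0):
    seen, delayed = {}, []
    for match in LINE.finditer(ping_output):
        seq, rtt = int(match.group(1)), float(match.group(2))
        seen[seq] = rtt
        if rtt >= delay_threshold_ms:
            delayed.append(seq)
    lost = [s for s in range(min(seen), max(seen) + 1) if s not in seen]
    return delayed, lost

sample = """64 bytes from 10.1.1.1: icmp_seq=1 ttl=64 time=12.3 ms
64 bytes from 10.1.1.1: icmp_seq=2 ttl=64 time=2840.0 ms
64 bytes from 10.1.1.1: icmp_seq=4 ttl=64 time=11.9 ms"""
print(analyze(sample))  # ([2], [3]) -- seq 2 was delayed, seq 3 was lost
```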

Back to buffer tuning.  It is not well covered in router classes – it is something you have to dig to find.  I can certainly believe that network staff (or their managers) who don’t understand how TCP works would focus on packet loss and insist on configuring enough buffering to avoid it.  I can hear them now:  “Our network is great!  We don’t have any packet loss!”  [I could see a manager thinking that packet loss is like dropped calls and wanting to minimize it.]

There are times when buffer tuning is valuable.  I look for interfaces that have a lot of buffer misses.  On Cisco gear, a miss occurs when a packet arrives, needs a buffer, and no buffer of the appropriate size is available.  The packet is dropped and a new buffer is created to handle future packets of the same size.  If an interface shows a high number of buffer misses for a particular packet size, I recommend increasing the number of fixed buffers by no more than 10% and then watching for further misses.
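As an illustrative sketch (the pool name and numbers here are placeholders, not a recommendation), the check-then-tune sequence on IOS looks roughly like this:

```
! Inspect pool statistics; watch the "misses" counter for each pool
show buffers

! Example: if the big-buffer pool shows sustained misses, raise the
! permanent allocation by roughly 10% and re-check over time
configure terminal
 buffers big permanent 110
 buffers big min-free 15
end
```

The key discipline is the small increment: grow the pool a little, then watch the miss counter again, rather than sizing buffers to absorb any conceivable burst.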

If there are more misses, I start to look at other mechanisms to handle the load.  Increasing the interface speed is the preferred mechanism, where it can be done.  QoS obviously allows important data to receive priority treatment.  (Refer to my post “Cisco Router Interface Wedged” for an example of what happens when QoS isn’t properly implemented.)
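For the QoS option, a minimal Cisco MQC policy giving voice priority treatment might look like the sketch below.  The class names, interface, DSCP match, and percentage are all illustrative placeholders:

```
! Voice gets a priority queue; everything else shares the remainder
class-map match-any VOICE
 match dscp ef
policy-map WAN-EDGE
 class VOICE
  priority percent 30
 class class-default
  fair-queue
interface Serial0/0
 service-policy output WAN-EDGE
```

Note that this is exactly the mechanism that could produce the long data-packet delays described above: under heavy voice load, best-effort traffic sits in the default queue.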

Since some pings were 8000ms, it is clear that something somewhere is hanging onto the packets.  I doubt that they circulated in the network that long without the TTL counting down to zero.  Unfortunately, there is no way to know for certain what is happening without additional data like a packet trace.  How many concurrent connections exist?  What is the direction of the data flows?  How many retransmissions are occurring?  Flow monitoring tools (NetFlow, sFlow, IPFIX) do not provide the level of detail that is needed.  Getting TCP retransmission data from one of the endpoints is valuable, but often difficult to obtain and dependent on the OS in use.  See the Windows Performance Monitor counters for an example.
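When you can get endpoint counters, the retransmission rate is a simple ratio.  This sketch assumes you have already pulled segments-sent and segments-retransmitted totals (on Linux, from `netstat -s` or /proc/net/snmp; on Windows, from the equivalent Performance Monitor counters); the numbers below are made up for illustration:

```python
# Estimate the TCP retransmission rate from endpoint counters.

def retransmission_rate(segments_out, segments_retransmitted):
    """Fraction of transmitted segments that were retransmissions."""
    if segments_out == 0:
        return 0.0
    return segments_retransmitted / segments_out

rate = retransmission_rate(segments_out=250_000, segments_retransmitted=5_000)
print(f"{rate:.1%}")  # 2.0%
```

Even a rough number helps: a rate in the low single digits suggests ordinary loss, while a rate climbing toward double digits points at the kind of buffer-driven retransmission spiral described earlier.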

I like to use Cisco’s IP SLA tool, combined with another tool to manage a lot of IP SLA tests, to let me know that a particular link or path to a remote site is experiencing high latency, jitter, or packet loss.  Once I’ve identified a poorly operating path, I can apply whatever tools are needed to determine the origin of the problem.
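A minimal IP SLA probe for this kind of path monitoring might look like the following sketch.  The probe number, target address, source interface, and threshold are placeholders:

```
! ICMP echo to a remote-site address every 60 seconds, flagging
! round-trip times over 500 ms
ip sla 10
 icmp-echo 10.20.30.1 source-interface GigabitEthernet0/0
 frequency 60
 threshold 500
ip sla schedule 10 life forever start-time now
!
! Check the results
show ip sla statistics 10
```

With many such probes, a management tool can watch the results and alert on the paths worth investigating, which is where the packet traces and retransmission counters come in.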

-Terry

_____________________________________________________________________________________________

Re-posted with Permission 

NetCraftsmen would like to acknowledge Infoblox for their permission to re-post this article which originally appeared in the Applied Infrastructure blog under http://www.infoblox.com/en/communities/blogs.html
