Interface Bandwidth Statements

Terry Slattery
Principal Architect

I’ll periodically run into someone who is attempting to implement policy routing by modifying interface bandwidth statements.  What they’re trying to do is make traffic prefer a certain path through the network by increasing or decreasing the configured bandwidth of a link, thereby forcing the routing protocol to adjust its metric for that link.  With EIGRP, the other ‘knob’ you have available is the delay metric of the interface, but with OSPF, bandwidth is the only interface value that is automatically used in calculating the route metric.

There are other commands for adjusting the metric on an interface, and they should be used in these cases.  Modifying the bandwidth statement (which doesn’t actually change the link’s clocking speed) is not accepted best practice (there are Cisco docs that cover this topic too).
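As a sketch of those alternatives (interface names and values here are illustrative, not from the original article):

```
! Influence OSPF path selection directly, without touching bandwidth
interface Serial0/0
 ip ospf cost 100
!
! For EIGRP, raise the interface delay (in tens of microseconds) instead
interface Serial0/1
 delay 2000
```

Either command changes only the routing metric, so everything else that reads the configured bandwidth (QoS percentages, SNMP utilization) keeps seeing accurate numbers.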

First, the routing protocols use the configured bandwidth (i.e., what the bandwidth statement says) to calculate how much of the link they may consume for routing updates.  EIGRP will use up to 50% of the link bandwidth by default (yes, it can be modified with the interface command ip bandwidth-percent eigrp).  When can this be a problem?  When the configured bandwidth is much higher than the actual bandwidth.  Let’s say you have a 128 kbps link to a remote retail office, you’re running VoIP over the link along with some data traffic, there is no special QoS configuration, and the interface has the default serial bandwidth of 1544 (T1 speed).  By default, routing protocol packets get the highest priority.  Let’s also assume that the site is not configured as a stub and is taking a full routing table from headquarters.  This is the scenario that Bruce Enders of NetCraftsmen outlined to me with regard to a VoIP problem he once had to resolve.  The problem occurred when a routing topology change caused a flood of routing exchanges with the remote office.
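A sketch of the two knobs that would have limited the damage in that scenario (the AS number, percentage, and interface are hypothetical):

```
! Cap EIGRP update traffic at 25% of the configured bandwidth (AS 100)
interface Serial0/0
 bandwidth 128                      ! match the real 128 kbps circuit
 ip bandwidth-percent eigrp 100 25
!
! Configure the remote office as a stub so topology changes
! elsewhere in the network are not flooded to it
router eigrp 100
 eigrp stub connected summary
```

Note that the percentage is applied to the configured bandwidth, which is exactly why an inflated bandwidth statement lets EIGRP saturate a slow link.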

The higher-priority routing updates saturated the link, causing the voice call to drop out for the duration of the update.  This happened infrequently, so it was very difficult to isolate.  According to Bruce, the clue was knowing that a call had a problem and finding that the only thing that occurred at the same time was a routing topology change elsewhere in the network.

The other factor that comes into play in today’s networks is that QoS is often implemented to prioritize mission-critical traffic over less important traffic.  Instead of allocating a fixed amount of bandwidth to a particular traffic class, it is often useful to allocate bandwidth as a percentage of the link speed.  If you give the router incorrect information about the interface speed, it will miscalculate the amount of bandwidth allocated to each traffic class that is defined as a percentage of the link speed.
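For example, a percentage-based policy like the following (class names and percentages are illustrative) sizes every queue from the bandwidth statement, so a wrong value skews every class at once:

```
policy-map WAN-EDGE
 class VOICE
  priority percent 33       ! 33% of the *configured* bandwidth
 class class-default
  bandwidth percent 40
!
interface Serial0/0
 bandwidth 128              ! if this says 1544 on a 128 kbps circuit,
 service-policy output WAN-EDGE   ! the voice queue is sized 12x too large
```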

The final factor is that your network management system will likely use the SNMP interface speed variable (ifSpeed), which reflects the configured bandwidth, in its calculations of link utilization.  So if you modify the link bandwidth, you’ll need to remember that when you view the utilization stats in your network management application.  The interface will either appear to run over 100% or top out at a much lower utilization, depending on whether the bandwidth statement was lower or higher than the interface’s true speed.
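To see why, here is the usual utilization calculation with hypothetical numbers:

```
utilization % = (delta ifInOctets x 8 x 100) / (delta seconds x ifSpeed)

Example: a true T1 (1,544,000 bps) configured with "bandwidth 512"
reports ifSpeed = 512,000.  At full line rate the NMS computes
1,544,000 / 512,000 = roughly 302% utilization.
```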

The summary of all this discussion is that you should set the bandwidth to the clocking speed of the interface so that the various functions that use it have accurate information.  If you need to do policy routing, take a look at route maps, which provide a much finer level of control over how different traffic is handled.
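A minimal policy-based routing sketch (addresses, ACL number, and interface are illustrative):

```
! Match traffic from one subnet and steer it to a specific next hop,
! regardless of what the routing table says
access-list 101 permit ip 10.1.1.0 0.0.0.255 any
!
route-map PREFER-BACKUP permit 10
 match ip address 101
 set ip next-hop 192.168.2.2
!
interface GigabitEthernet0/1
 ip policy route-map PREFER-BACKUP
```

This steers only the matched traffic, leaving metrics, QoS, and SNMP reporting untouched.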



Re-posted with Permission 

NetCraftsmen would like to acknowledge Infoblox for their permission to re-post this article, which originally appeared in the Applied Infrastructure blog.

