Analyzing Throughput Testing Data

Author
Terry Slattery
Principal Architect

Last week I helped a customer with firewall throughput testing that produced some interesting results. It is instructive to walk through those results to see how we determined, from the data we collected, what was happening.

The test configuration was a Smartbits packet tester connected to a pair of Gigabit interfaces on the firewall, as shown below.

[Figure: Test Setup]

We were using the Throughput test, configured to stop if it encountered any packet loss as it increased the load in steps. The traffic was UDP, simulating a voice and video stream, from a single source to a single destination.

Background: The Smartbits can be configured in two modes:

  • Step mode, which increases the load in equal steps that you define.
  • Binary mode, which increases the load in a binary progression: each step is halfway between the previous step and the maximum. Using binary mode from 10% to 100% would test at 10%, 55%, 77.5%, 88.75%, 94.38%, 97.19%, 98.59%, 99.30%, and 100%.
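The binary-mode progression above is just repeated halving of the remaining distance to the maximum. A short sketch makes the step sequence concrete (the stopping tolerance here is an assumption for illustration, not a documented Smartbits parameter):

```python
def binary_mode_steps(start: float, maximum: float, tolerance: float = 1.0) -> list[float]:
    """Sketch of a binary-mode load progression: each step is halfway
    between the previous load and the maximum. `tolerance` (the gap at
    which we jump straight to the maximum) is an assumed value."""
    steps = [start]
    load = start
    while maximum - load > tolerance:
        load = (load + maximum) / 2  # halve the remaining distance to max
        steps.append(load)
    steps.append(maximum)
    return steps

print([round(s, 2) for s in binary_mode_steps(10, 100)])
# [10, 55.0, 77.5, 88.75, 94.38, 97.19, 98.59, 99.3, 100]
```

This reproduces the percentages listed above, which is why binary mode converges quickly on the highest loss-free load but gives poor resolution at the low end of the range.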

In the first iteration, the test encountered some loss at 10% load and stopped. It had lost only 13 packets, so we set the loss threshold to 0.1%. The test then ran to completion, dropping packets only at the 10% load step.

[Figure: Binary Throughput Test]

At first, we thought that the ARP learning process was causing packets to be dropped as the test started. But the configurations of both the Smartbits and the firewall looked good.

After experimenting with some of the Smartbits configuration parameters, we discovered that there was no packet loss at any load of 4% or less. That was still not conclusive, but it hinted that the learning process was not the problem. We did consider that at 4% load the firewall could be buffering the first packet, so that learning completed before the second packet arrived.

At that point, the firewall vendor's engineer working in the lab mentioned that the firewall was stateful and that its state timer for UDP was 60 seconds. After thinking about that for a bit, we found that we could configure the Smartbits to ARP on each iteration of a test run and to wait 70 seconds on each ARP attempt. Sure enough, we then saw packet loss on every iteration of the test run. This time we switched to step mode and set the increment to 5%, so we had good resolution on the expected packet loss at each step (see the Step Throughput Test figure).
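The reasoning here can be boiled down to a single comparison: if the idle gap between iterations exceeds the firewall's UDP state timeout, the flow state has aged out, and the first packets of the next iteration must go through state creation again, which is where we believed the drops occurred. A minimal sketch of that model (illustrative only, not vendor code; the function name is my own):

```python
# UDP state lifetime reported by the vendor engineer, in seconds.
UDP_STATE_TIMEOUT = 60.0

def state_expired(idle_gap_seconds: float) -> bool:
    """True if the firewall's UDP flow state ages out before the
    next test iteration starts, forcing fresh state creation."""
    return idle_gap_seconds > UDP_STATE_TIMEOUT

print(state_expired(70.0))  # 70 s ARP wait -> state gone, expect loss
print(state_expired(5.0))   # short gap -> state intact, expect no loss
```

Configuring the 70-second ARP wait per iteration deliberately pushed every iteration into the "state expired" case, which matched the loss we observed.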

[Figure: Step Throughput Test]

We ran some additional tests in which we watched the firewall state timeout. Every iteration where the state still existed ran with no loss; whenever the state had expired, there was loss. This was good enough for us. (To be absolutely sure, we'd need to look at the packet capture data. It would be interesting to see which packets were dropped.) We did note an interesting spike in packet loss around 65-70% load, but didn't investigate it further.

The question now was whether the loss incurred when establishing a flow was significant enough to cause a problem in the production network. At 4%, the data rate is 40 Mbps on the 1 Gbps interface. We aren't aware of any single source of UDP data that uses that much bandwidth. While the aggregate of many flows may exceed that figure, we are confident that no single flow in this network would use that much bandwidth.
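The 40 Mbps figure is simple arithmetic on the loss-free load, worth spelling out since it drove the "good enough" conclusion:

```python
# Check of the bandwidth figure in the text: 4% of a 1 Gbps link.
LINK_RATE_BPS = 1_000_000_000  # 1 Gbps interface
loss_free_load = 0.04          # no flow-setup loss seen at or below 4% load

rate_mbps = LINK_RATE_BPS * loss_free_load / 1_000_000
print(f"{rate_mbps:.0f} Mbps")  # 40 Mbps
```

Since no single UDP flow in this network approaches 40 Mbps, the flow-setup loss only matters below loads where it was not observed.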

We did not run any tests with a mix of traffic, which would have been a better test, but it would have taken much more time to set up. It would be interesting to see whether creating multiple flows at once changes the packet loss characteristics.

The key point of this story is that we had to think about the data we were seeing, come up with a possible cause for the packet loss, and then test that theory. It gave us a good idea of some of the firewall's limitations and whether they would impact the network's operation.

-Terry

_____________________________________________________________________________________________

Re-posted with Permission 

NetCraftsmen would like to acknowledge Infoblox for their permission to re-post this article, which originally appeared in the Applied Infrastructure blog at http://www.infoblox.com/en/communities/blogs.html
