Solving Video Application Problems

Author
Peter Welcher
Architect, Operations Technical Advisor

Maybe it’s another sign of experience, aging, cynicism, and/or ego. Lately I seem to be doing a lot of troubleshooting of application performance problems. “What on earth was the developer thinking?” is becoming a recurrent refrain, at least in my head. This is (I hope) not ego, just a real question about what appears to be a sub-optimal approach.

Here is a case in point that I hope you’ll find entertaining and informative. I worked with a hospital using a medical application that provides steerable video for intensive care patients. Medical and network diagnoses should start with the symptoms. Hence:

Symptoms: The video client locking up, loss of the ability to steer the camera, erratic camera steering, and so on.

After inquiring about the application, I was told that the application used a TCP-based remote control channel and TCP-based video. This set off some alarm bells in my head.

Some Technical Review

There’s a reason most real-time media, VoIP, and video use UDP. With video and voice, there’s a trade-off between timeliness of the data and reliability. Lost UDP video frames result in those pixelated artifacts on screen. It’s real-time and ugly.

Reliability over TCP requires retransmission of lost packets, and retransmission takes time. That means the application needs a buffer to pre-fetch video frames, allowing seconds to minutes of lead time for retransmissions to catch up. You can’t just pause the video while waiting for a missing frame; that would play out in rather choppy fashion. The value of buffering is that play-out is much smoother, with artifacts or pauses only for longer-term hiccups in the traffic stream. So TCP-based video is mostly used for streaming media, e.g. YouTube, video commercials on web pages, etc. UDP-based video is usually used for real-time viewing.
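To make the trade-off concrete, here is a toy Python sketch (not any real player’s logic) of why TCP video needs play-out lead time. The frame interval, round-trip time, and lost-frame position are illustrative assumptions.

```python
# Toy timeline (seconds): frames are generated every 33 ms (~30 fps).
# Assume frame 10 is lost and only arrives after a TCP retransmission,
# one round trip (200 ms) late.

FRAME_INTERVAL = 0.033   # ~30 frames per second
RTT = 0.200              # assumed retransmission round-trip time

def arrival_time(i, lost_frame=10):
    """Nominal arrival time, plus one RTT for the retransmitted frame."""
    t = i * FRAME_INTERVAL
    return t + RTT if i == lost_frame else t

def stall(playout_delay, n_frames=30):
    """True if any frame arrives after its play deadline."""
    return any(arrival_time(i) > i * FRAME_INTERVAL + playout_delay
               for i in range(n_frames))

print(stall(playout_delay=0.0))   # no buffer: the retransmitted frame stalls play
print(stall(playout_delay=0.5))   # a half-second buffer absorbs the retransmission
```

With no play-out buffer, the retransmitted frame misses its deadline; with even a modest buffer, the viewer never notices. Real-time steerable video can’t afford that buffer, which is why it usually rides on UDP.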

There are apparently lots of proprietary ways of streaming video over TCP. If you Google “streaming video protocols,” you’ll find that people have come up with clever ways to leverage the congestion avoidance and flow control of TCP, provide adaptive video quality based on estimated bandwidth available, etc. I just did a quick refresh on the topic (not an area of deep expertise for me) and apparently some of the protocols treat video delivery as a stream of small file transfers. Interesting stuff, but my need to know more is low right now. Just be aware, TCP video may not be one big stream.

Diagnosis Continued

Given that the video was TCP-based, I suspected packet loss. There was a WAN involved for some sites, and for the remaining remote sites there was shared LAN media in a fiber-driven loop path through daisy-chained sites (a geography- and cost-driven design).

There was some QoS in place, somewhat inconsistent in how it was configured – and missing in places.

This was a concern, because any congestion due to micro-bursts of traffic would cause dropped TCP packets and retransmissions. Queuing delays might also trigger retransmissions. Retransmissions might then make the congestion worse, stepping on newer video in order to deliver packets whose video data might no longer even be relevant. Emphasis on “might” there.

Prescription: Remediate two sites daily until all sites and paths have the proper QoS.

Side note: The hospital in question had a lovely “network weather map,” which often showed rather low utilization. I like having such real-time data readily visible. The challenge with such data is that it is likely displaying averages over some period of time, and that period is rarely shown. Averaged data provides no information about “micro-bursts.” I think of IP video as operating in micro-burst fashion: it sends periodic I-frames (think full-screen image in all detail) followed by smaller frames indicating changes to that background. On the receiving end, a garbled picture will likely stay that way until the next I-frame arrives a second or three later.
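A quick back-of-the-envelope illustration of why averaged utilization graphs can mislead. The link speed, burst length, and background load below are made-up numbers, but the arithmetic is the point: a one-second average hides a burst that briefly saturates the link.

```python
# One second of traffic on a 100 Mbps link, tallied per millisecond.
# Mostly idle, with one 50 ms micro-burst at line rate.

LINK_MBPS = 100
per_ms_mbits = [0.01] * 1000           # light background traffic
for t in range(200, 250):              # 50 ms micro-burst at line rate
    per_ms_mbits[t] = LINK_MBPS / 1000

avg_mbps = sum(per_ms_mbits)           # Mbits sent in 1 s == average Mbps
peak_mbps = max(per_ms_mbits) * 1000   # busiest millisecond, scaled to Mbps

print(f"1-second average: {avg_mbps:.1f} Mbps")   # looks like a quiet link
print(f"peak millisecond: {peak_mbps:.0f} Mbps")  # briefly at line rate
```

The weather map would show a link running in the teens of Mbps, while for 50 milliseconds the queue was full and TCP video packets were being dropped.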

Digging Deeper

There was another item of concern, one that I’ve been encountering a lot lately. Some of the links were sub-rate links, where the physical media speed was higher than the contracted data rate, and where the carrier was likely policing excess traffic to enforce the contractual data rate – e.g., Fast Ethernet physical link with 20 or 40 Mbps contracted data rate. This is becoming a very common WAN approach, because the carrier can provision it once, and the customer can use a web portal to adjust the contracted rate upward or downward. More revenue for little effort; providers have to love that!

Sub-rate links are a red flag for me. My analogy is the famous I Love Lucy chocolate factory episode, where she cannot keep up with the conveyor belt. Think of the conveyor belt as a 1 Gbps link. Say the contracted rate is 200 Mbps. That means Lucy can keep up with every fifth chocolate position on the conveyor belt. Send more chocolate than that, and the extras end up on the floor (policing).
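The Lucy analogy maps directly onto a token-bucket policer, which is roughly how carriers enforce a contracted rate. Here is a simplified single-rate policer sketch; the 200 Mbps rate, burst size, and packet sizes are illustrative assumptions, and real carrier policers (e.g. RFC 2697-style) have more knobs.

```python
# Toy single-rate token-bucket policer: 200 Mbps contracted rate with a
# small committed burst, fed a line-rate burst from a 1 Gbps link.
# Packets that find insufficient tokens are dropped on the floor.

CIR_BPS = 200_000_000    # contracted (committed) information rate, bits/sec
BC_BYTES = 25_000        # committed burst size (token bucket depth), bytes

def police(packets, cir_bps=CIR_BPS, bc_bytes=BC_BYTES):
    """packets: list of (arrival_time_s, size_bytes). Returns drop count."""
    tokens, last, dropped = bc_bytes, 0.0, 0
    for t, size in packets:
        # replenish tokens for the time elapsed, capped at bucket depth
        tokens = min(bc_bytes, tokens + (t - last) * cir_bps / 8)
        last = t
        if size <= tokens:
            tokens -= size       # conforming: forwarded
        else:
            dropped += 1         # exceeding: dropped (chocolate on the floor)
    return dropped

# 100 x 1500-byte packets back-to-back at 1 Gbps (one every 12 microseconds):
burst = [(i * 12e-6, 1500) for i in range(100)]
print(police(burst), "of 100 packets dropped")
```

Once the initial burst allowance is spent, the policer admits roughly one packet in five, matching the 200 Mbps / 1 Gbps ratio, and everything else hits the floor. For TCP video, every one of those drops is a retransmission.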

The solution is for the sending router to shape the traffic, buffering it. In terms of Lucy, take micro-bursts and pace the chocolate transmission, only occupying every fifth time slot or conveyor belt position. Then none of it ends up on the floor (or in Lucy’s hair, mouth, etc.).
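For concreteness, here is a minimal sketch of the kind of Cisco IOS hierarchical policy that shapes to a contracted rate. The policy names, interface, 200 Mbps figure, and nested child policy are assumptions for illustration, not the hospital’s actual configuration.

```
! Sketch only: names and rates are illustrative assumptions.
! Shape outbound traffic to the contracted 200 Mbps so the carrier's
! policer never sees an excess burst.
policy-map SHAPE-TO-CONTRACT
 class class-default
  shape average 200000000
  service-policy WAN-QOS        ! nested per-class queuing policy, defined elsewhere
!
interface GigabitEthernet0/0/0
 description 1G physical / 200M contracted sub-rate WAN link
 service-policy output SHAPE-TO-CONTRACT
```

The parent shaper paces traffic to the contracted rate; the nested child policy then applies the per-class queuing (e.g. priority for video) within that shaped rate.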

The challenge lately is cost and technology. When a site gets a 1 Gbps Ethernet physical WAN/MAN link installed, the site staff often want to connect it to a LAN switch rather than also buying a router. Unfortunately, if you wish to shape traffic to the contracted rate, you need a router.

Prescription #2: Add traffic shaping to sub-rate links. (Both to and from the main hospital.)

In fairness to the un-named video application vendors, I should note that the hospital in question deployed the application over the WAN, which may not have been an intended use case by the vendor.

I hope to share some stories about other issues in future blogs. For example, stuck TCP connections, and a robust web-scale app.

Comments

Comments are welcome, both in agreement or constructive disagreement about the above. Or about your adventures in video over IP. I enjoy hearing from readers and carrying on deeper discussion via comments. Thanks in advance!

Hashtags: #TCPvideo #VoIP #Video #QoS #Shaping

Twitter: @pjwelcher

Disclosure Statement
Cisco Certified 15 Years | Cisco Champion 2014