The Problem with Latency Part 1: How Latency Harms Collaboration

By Cullen Jennings · 4 min read

Have you ever thought about the typical rhythm of a conversation? We take turns. You speak, then I speak.

A great user experience researcher that I get to work with, Vanessa Costa-Massimo, puts it this way: “Without thinking about it, we know when one person’s turn is up, and it’s the other person’s turn to talk. When that natural rhythm is off, the conversation quickly becomes awkward and unenjoyable.”[1]

This is why we complain so much about latency on video calls.

What exactly is latency?

If you and I are on a call, latency is the time it takes from when I speak to when you hear me. It’s also the time from when the camera on my computer captured my image to when that image is displayed on your screen.

We measure latency in seconds or milliseconds, so the lower the number, the better. Another way to assess latency is by measuring responsiveness. Mac users can test their network connections with a metric developed by Stuart Cheshire and the Internet Engineering Task Force called round trips per minute (RPM). Here, the higher the number, the better: it counts how many round trips data can make to and from a destination over the course of a minute.
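To make the relationship between latency and RPM concrete, here is a minimal Python sketch of my own. It is not Apple's networkQuality tool or the IETF responsiveness test (which measure round trips under load); it simply times a few round trips to a hypothetical host and converts the average into a round-trips-per-minute figure.

```python
# Illustrative only: convert measured round-trip times into an RPM-style number.
# Not the official networkQuality tool or IETF test; it just shows the math.
import socket
import time

def measure_rtts(host: str, port: int = 443, samples: int = 5) -> list[float]:
    """Time TCP connection setup as a rough proxy for one network round trip."""
    rtts = []
    for _ in range(samples):
        start = time.monotonic()
        with socket.create_connection((host, port), timeout=5):
            pass  # connection established; close immediately
        rtts.append(time.monotonic() - start)
    return rtts

def rpm(rtts: list[float]) -> float:
    """Round trips per minute: 60 seconds divided by the average round-trip time."""
    avg_rtt = sum(rtts) / len(rtts)
    return 60.0 / avg_rtt

if __name__ == "__main__":
    samples = measure_rtts("example.com")  # hypothetical test host
    print(f"average RTT: {sum(samples) / len(samples) * 1000:.1f} ms")
    print(f"approximate RPM: {rpm(samples):.0f}")
```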

Sometimes, latency hardly affects a hybrid work conversation. But if it gets long enough, it can cause unnatural pauses in dialogue, making conversations frustrating and impacting collaboration.

What causes latency?

One of the big contributors to latency is the distance data needs to travel.

On a Webex call, for example, data packets carrying my voice and my image leave my laptop and travel through my service provider's network to the nearest Webex cloud server. From there, the data travels to its final destination: through another service provider's network, onto my co-worker's local Wi-Fi, and, at last, to their computer.

Even though data is traveling at over two-thirds the speed of light, the distance it must travel can sometimes amount to many thousands of miles, which adds a significant amount of latency. The farther data has to travel, the more likely this latency will impact the user experience.
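To put rough numbers on that, here is a back-of-the-envelope Python sketch. It uses the well-known figure of about 200,000 km/s for light in optical fiber (roughly two-thirds of its speed in a vacuum); the distances are my own illustrative examples, not measurements from a real Webex path.

```python
# Back-of-the-envelope propagation delay: distance only, ignoring routing,
# queuing, encoding, and jitter buffers, which all add further delay.
SPEED_IN_FIBER_KM_PER_S = 200_000  # ~2/3 the speed of light in a vacuum

def one_way_delay_ms(distance_km: float) -> float:
    return distance_km / SPEED_IN_FIBER_KM_PER_S * 1000

# Illustrative distances (not actual Webex routes)
for label, km in [("same metro area", 50),
                  ("coast to coast (US)", 4_500),
                  ("intercontinental", 12_000)]:
    print(f"{label:>20}: {one_way_delay_ms(km):6.1f} ms one way, "
          f"{2 * one_way_delay_ms(km):6.1f} ms round trip")
```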

Latency also depends on the network connection. I might have an amazing connection, but if my co-worker is having some Wi-Fi trouble, the quality of our call will suffer, and we'll likely experience latency as a result. What's more, as packets are lost or delayed, the techniques used to compensate for this often add latency of their own.
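One common compensation technique is a jitter buffer: the receiver deliberately holds packets for a short time so that late or out-of-order arrivals can still be played smoothly. The Python sketch below is a simplified illustration of that trade-off, not Webex's actual implementation; all the packet timings and the buffer depths are invented for the example.

```python
# Simplified jitter-buffer trade-off (all numbers invented): each packet is
# played at send_time + playout_delay. A bigger delay absorbs more network
# jitter (fewer packets miss their deadline), but it adds directly to the
# latency the listener experiences.

def count_late(playout_delay_ms: float, packets: list[tuple[float, float]]) -> int:
    """packets: (send_time_ms, arrival_time_ms). Returns how many arrive too late."""
    late = 0
    for send_ms, arrive_ms in packets:
        if arrive_ms > send_ms + playout_delay_ms:
            late += 1  # missed its playout slot -> audible glitch
    return late

# Four packets sent 20 ms apart; the third is delayed by the network.
packets = [(0, 35), (20, 55), (40, 130), (60, 95)]

for delay in (40, 60, 100):
    print(f"playout delay {delay:3d} ms -> {count_late(delay, packets)} late packet(s)")
```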

Why latency causes problems

Vanessa, who has researched the psychological impacts of latency, shared something very insightful: “When we experience latency, our first instinct is to blame not a faulty network connection, but the person we’re talking to,” she says. “We assume that if it takes a couple of seconds for someone to respond, they’re not paying attention, or they’re lazy, rude, or some other negative quality, before we realize it could actually be a latency issue.”

She points to a 2021 University of Oxford study that analyzed 25 video consultations for medical services in the UK. When calls experienced only about 100 milliseconds of latency, turn-taking was barely affected. But at about 700 milliseconds of latency or more, participants routinely struggled.

Here’s the interesting part. “Even with delays over a second (1,200 milliseconds), users still deemed the quality of the call acceptable,” says Vanessa. “That means they attributed the delay to the other call participant, concluding that person was tired, thinking, or not listening. However, if the delay was accompanied by an audio or visual glitch, then users recognized the call itself was low quality. That tells us conversation cadence is what matters most to people when they judge quality.”

Can’t we do better?

We can, and a group of us from Cisco are working on this with the Internet Engineering Task Force. Together, we’re building a new publisher/subscriber protocol, Media Over QUIC. We define it as a “simple low-latency media delivery solution for ingest and distribution of media” that is applicable for use cases like real-time video conferencing, as well as gaming and live streaming, among others.
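To give a feel for the publisher/subscriber model, here is a toy relay in Python. This is purely a conceptual sketch, not the Media over QUIC wire protocol or any real API: publishers push small media objects onto named tracks, and the relay fans each object out to current subscribers as soon as it arrives rather than buffering whole segments.

```python
# Conceptual publish/subscribe relay, for intuition only. This is NOT the
# Media over QUIC protocol or its API; it just illustrates the model.
from collections import defaultdict
from typing import Callable

MediaObject = bytes
Subscriber = Callable[[str, MediaObject], None]

class Relay:
    def __init__(self) -> None:
        self._subs: dict[str, list[Subscriber]] = defaultdict(list)

    def subscribe(self, track: str, deliver: Subscriber) -> None:
        self._subs[track].append(deliver)

    def publish(self, track: str, obj: MediaObject) -> None:
        # Forward immediately; low latency comes from not waiting on full segments.
        for deliver in self._subs[track]:
            deliver(track, obj)

# Usage: one camera publishes, two viewers subscribe to the same track.
relay = Relay()
relay.subscribe("alice/camera", lambda t, o: print("viewer 1 got", len(o), "bytes of", t))
relay.subscribe("alice/camera", lambda t, o: print("viewer 2 got", len(o), "bytes of", t))
relay.publish("alice/camera", b"\x00" * 1200)  # one made-up media object
```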

That’s because latency is also hindering growth and interaction in several industries — not just real-time collaboration. I’ll share more about this in part two of this series.

Learn more about Media Over QUIC here.

[1] Source: Journal of Pragmatics


About the Author

Cullen Jennings is the Chief Technology Officer of the Collaboration Technology Group at Cisco and is responsible for the next generation of enterprise collaboration products.
