July 9th, 2024

Beyond bufferbloat: End-to-end congestion control cannot avoid latency spikes

End-to-end congestion control protocols like TCP and QUIC cannot fully prevent latency spikes, especially in networks with rapidly changing link capacity such as Wi-Fi and 5G. The paper suggests anticipating capacity changes and prioritizing latency-sensitive traffic as steps toward a reliably low-latency internet.


End-to-end congestion control methods like TCP and QUIC are crucial for managing internet congestion, but a recent paper by Domos highlights a key limitation: they cannot prevent latency spikes. Despite efforts to improve TCP's latency performance, problems persist, especially in networks with rapidly changing link capacities such as Wi-Fi and 5G. Bufferbloat, identified as a major source of latency, results from poor congestion signaling causing unnecessary queuing. While progress has been made in reducing bufferbloat, challenges remain in achieving consistently low latency. The paper argues that existing congestion control methods cannot fully eliminate latency spikes, because feedback about a capacity drop reaches the sender only after a round trip, so alternative approaches are needed. Suggestions include anticipating capacity changes, underutilizing links, prioritizing latency-sensitive traffic, and diversifying links to enhance reliability. The study underscores the need to acknowledge and address these limitations to realize a dependable low-latency internet, particularly in the context of 5G networks, and encourages researchers and engineers to explore these alternative strategies.
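The core argument can be illustrated with a toy fluid model: an end-to-end sender learns about a capacity drop only after roughly one RTT of stale feedback, so a queue (and hence a latency spike) builds at the bottleneck in the meantime, no matter how well-tuned the controller is. The sketch below uses illustrative numbers of my own choosing, not anything from the paper:

```python
# Toy fluid model of a single bottleneck. The sender's rate is perfectly
# matched to the link until the link capacity suddenly drops (e.g. a
# Wi-Fi fade); the sender cannot adapt until one RTT of feedback later.
RTT = 0.05                     # feedback delay, seconds
DT = 0.001                     # simulation time step, seconds
CAPACITY_BEFORE = 100e6 / 8    # 100 Mbit/s link, in bytes/s
CAPACITY_AFTER = 10e6 / 8      # capacity after the drop: 10 Mbit/s

def simulate(duration=0.5, drop_at=0.1):
    """Return the peak queueing delay (seconds) seen at the bottleneck."""
    queue = 0.0                       # bytes waiting at the bottleneck
    max_delay = 0.0
    t = 0.0
    while t < duration:
        capacity = CAPACITY_BEFORE if t < drop_at else CAPACITY_AFTER
        # The sender keeps transmitting at the old rate until it has had
        # one RTT to observe the congestion signal.
        send_rate = CAPACITY_BEFORE if t < drop_at + RTT else CAPACITY_AFTER
        queue = max(0.0, queue + (send_rate - capacity) * DT)
        max_delay = max(max_delay, queue / capacity)   # delay = backlog / rate
        t += DT
    return max_delay

print(f"peak queueing delay: {simulate() * 1000:.0f} ms")
```

With these numbers the sender overshoots by about 90 Mbit/s for one 50 ms RTT, leaving roughly 450 ms of queueing delay behind a 10 Mbit/s link. Shrinking the RTT or the size of the capacity drop shrinks the spike, but only anticipating the drop (or underutilizing the link) avoids it entirely, which is the paper's point.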

Related

Timeliness without datagrams using QUIC

The debate between TCP and UDP for internet applications emphasizes reliability and timeliness. UDP suits real-time scenarios like video streaming, while QUIC with congestion control mechanisms ensures efficient media delivery.

I Add 3-25 Seconds of Latency to Every Page I Visit (2020)

Reducing latency in web browsing can boost revenue in the consumer web industry. Intentionally adding latency to browsing activities can help curb addiction and enhance control over internet usage. Various methods like using specific browsers or tools are suggested.

The Rise of Packet Rate Attacks: When Core Routers Turn Evil

Packet rate attacks, a new trend in DDoS attacks, overload networking devices near the target. OVHcloud faced attacks exceeding 100 Mpps, some from MikroTik Routers, prompting enhanced protection measures.

P4TC Hits a Brick Wall

P4TC, an effort to bring the P4 network-device programming language into the Linux kernel's traffic-control subsystem, faces integration challenges. Questions about hardware support, code duplication, and performance have sparked debate over its efficiency and necessity, and the effort remains stalled amid technical objections and mixed community feedback.

C++ patterns for low-latency applications including high-frequency trading

A research paper explores C++ design patterns for low-latency applications, focusing on high-frequency trading. It introduces a Low-Latency Programming Repository, optimizes a trading strategy, and implements the Disruptor pattern for performance gains, aiming to improve latency-sensitive applications.

9 comments
By @flapjack - 3 months
One of the solutions they mention is underutilizing links. This is probably a good time to mention my thesis work, where we showed that streaming video traffic (which is the majority of the traffic on the internet) can pretty readily underutilize links on the internet today, without a downside to video QoE! https://sammy.brucespang.com
By @westurner - 3 months
- "MOOC: Reducing Internet Latency: Why and How" (2023) https://news.ycombinator.com/item?id=37285586#37285733 ; sqm-autorate, netperf, iperf, flent, dslreports_8dn

Bufferbloat > Solutions and mitigations: https://en.wikipedia.org/wiki/Bufferbloat#Solutions_and_miti...

By @comex - 3 months
Sounds like a good argument for using a CDN. Or to phrase it more generally, an intermediary that is as close as possible to the host experiencing the fluctuating bandwidth bottleneck, while still being on the other side of the bottleneck. That way it can detect bandwidth drops quickly and handle them more intelligently than by dropping packets - for instance, by switching to a lower quality video stream (or even re-encoding on the fly).
By @dilyevsky - 3 months
There's a paper by Google where they demonstrated that you can successfully push utilization far beyond what is suggested here[0]

[0] - https://www.cs.princeton.edu/courses/archive/fall17/cos561/p...

By @fmbb - 3 months
Can someone explain this to a layman? Because it seems to me the four solutions proposed are:

1. Seeing the future

2. Building a ten times higher capacity network

3. Breaking net neutrality by deprioritizing traffic that someone deems “not latency sensitive”

4. Flooding the network with more redundant packets

Or is the whole text a joke going over my head?

By @scottlamb - 3 months
> That is not true for networks where link capacity can change rapidly, such as Wi-Fi and 5G.

Is this problem almost exclusively to do with the "last-mile" leg of the connection to the user? (Or the two legs, in the case of peer-to-peer video chats.) I would expect any content provider or Internet backbone connection to be much more stable (and generally over-provisioned too). In particular, there may be occasional routing changes but a given link should be a fiber line with fixed total capacity. Or is having changes in what other users are contending for that capacity effectively the same problem?

By @jchanimal - 3 months
It's fun to think about the theoretical maximum in a multi-link scenario. One thing that pops out of the analysis -- there are diminishing returns to capacity from adding more links. Which means at some point an additional link can begin to reduce capacity as it starts to eat more into maintenance and uptime budget than it offers in capacity.
By @mrtesthah - 3 months
macOS Sonoma supports L4S for this purpose:

https://www.theverge.com/23655762/l4s-internet-apple-comcast...

By @bobmcnamara - 3 months
This result is explainable by Danish tandem queueing theory.