June 23rd, 2024

Timeliness without datagrams using QUIC

The debate between TCP and UDP for internet applications centers on the trade-off between reliability and timeliness. UDP suits real-time scenarios like live video streaming, while QUIC, with congestion control and stream prioritization, can deliver media efficiently without raw datagrams.


The article discusses the debate between using TCP and UDP for internet applications, focusing on the importance of reliability and timeliness in data delivery. It explains how UDP datagrams are preferred for scenarios requiring real-time delivery, such as live video streaming, due to their ability to prioritize timeliness over reliability. The text delves into the complexities of network congestion, buffer management, and the challenges of building a custom transport protocol on top of UDP. It highlights the advantages of using QUIC for media delivery, emphasizing the need for congestion control mechanisms like BBR and stream prioritization to ensure timely data transmission. The article also touches on the evolving support for datagrams in protocols like QUIC and WebTransport, while cautioning against the pitfalls of relying solely on UDP for developing new video protocols. Ultimately, it advocates for leveraging existing technologies like QUIC for efficient and reliable media delivery over the internet.

30 comments
By @tliltocatl - 4 months
Most of TCP's woes come from high-bandwidth, latency-sensitive stuff like HFT and video, but TCP isn't particularly good for low-bandwidth, high-latency networks either (e.g. NB-IoT with a 10-second worst-case RTT):

- TCP will waste roundtrips on handshakes. And then some extra on MTU discovery.

- TCP will keep trying to transmit data even if it's no longer useful (same issue as with real-time multimedia).

- If you move into a location with worse coverage, your latency increases, but TCP will assume packet loss due to congestion and reduce bandwidth. And in general, loss-based congestion control just doesn't work at this point.

- Load balancers and middleboxes (and HTTP servers, but that's another story) may disconnect you randomly because hey, you haven't responded for four seconds, you are probably no longer there anyway, right?

- You can't interpret the data you've got until you have all of it, because TCP will split packets with no regard for data structure. Which is twice as sad when all of your data would actually fit in 1200 bytes.
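The framing problem in that last bullet is usually solved with a length prefix. A minimal sketch in plain Python (the `frame`/`deframe` helpers are hypothetical names, not from any library) of reassembling whole messages from a byte stream that TCP has split arbitrarily:

```python
import struct

def frame(payload: bytes) -> bytes:
    """Prefix a message with its 4-byte big-endian length."""
    return struct.pack("!I", len(payload)) + payload

def deframe(buffer: bytearray):
    """Yield complete messages from a growing receive buffer."""
    while len(buffer) >= 4:
        (length,) = struct.unpack("!I", bytes(buffer[:4]))
        if len(buffer) < 4 + length:
            break  # message still incomplete: wait for more bytes
        yield bytes(buffer[4:4 + length])
        del buffer[:4 + length]

# Simulate TCP splitting two messages at an arbitrary byte boundary.
stream = frame(b"hello") + frame(b"world")
buf = bytearray()
messages = []
for chunk in (stream[:7], stream[7:]):  # arrives in two arbitrary pieces
    buf.extend(chunk)
    messages.extend(deframe(buf))
print(messages)  # [b'hello', b'world']
```

The receiver never assumes one `recv` equals one message; it only trusts the length prefix.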

By @promiseofbeans - 4 months
We use straight UDP datagrams for streaming high-frequency sensor data. One of our R&D people built a new system that uses QUIC and solves most of our problems with out-of-order delivery. We still use datagrams over UDP for everything because we have to support some 3rd party sensors out of the box without adapters, and UDP is all they can do.
By @adunk - 4 months
This may seem like a minor nit, but I think there is a problem with using the term "unreliable" to describe UDP. The more commonly used term, and IMHO the better one, is "best-effort" [1]. UDP makes its best effort to deliver the datagrams, but the datagrams may be dropped anyway. That does not make UDP inherently unreliable.

[1] https://en.wikipedia.org/wiki/Best-effort_delivery

By @PhilipRoman - 4 months
IMO stream abstractions make it too convenient to write fragile programs which are slow to recover from disconnections (if they do at all) and generally place too many restrictions on the transport layer. Congestion control is definitely needed but everything else seems questionable.

In a datagram-first world we would have no issue bonding any number of data links with very high efficiency or seamlessly roaming across network boundaries without dropping connections. Many types of applications can handle out-of-order frames with zero overhead and would work much faster if written for the UDP model.

By @nyc_pizzadev - 4 months
One thing not mentioned often is that a lot of networks will drop UDP packets first when encountering congestion. The thinking is that those packets will not re-transmit, so it’s an effective means to shed excess traffic. Given we now have protocols that aggressively re-transmit on UDP, I wonder how that has changed things. I do seem to remember QUIC having re-transmit issues (vs HTTP1/2) years ago because of this.
By @dang - 4 months
I've attempted to replace the clickbait title* using terms from the article itself, but if someone can suggest a more representative phrase from the article, we can change it again.

(* in keeping with the HN's title guideline: "Please use the original title, unless it is misleading or linkbait" - https://news.ycombinator.com/newsguidelines.html)

By @TuringNYC - 4 months
> The common wisdom is:
>
> use TCP if you want reliable delivery
>
> use UDP if you want unreliable delivery
>
> What the *(& does that mean? Who wants unreliability?

I don't agree with the premise of this article. UDP isn't for unreliability; it trades guarantees for speed and efficiency, providing best-effort delivery instead.

It makes sense depending on your application. For example, in a real-time multiplayer video game, if things fall behind, the items that fell behind no longer matter because the state of the game has changed. Same for a high-speed trading application: in some circumstances I only care about the most recent market data, not what happened 100ms ago.
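The "only the latest state matters" pattern described here is easy to sketch: tag each update with a sequence number and drop anything older than what you already hold. A minimal illustration (the `LatestWins` class and the quote values are invented for the example):

```python
class LatestWins:
    """Keep only the newest update per key, dropping stale/out-of-order ones."""
    def __init__(self):
        self.latest = {}  # key -> (seq, value)

    def offer(self, key, seq, value):
        """Accept the update only if it is newer than what we already hold."""
        current = self.latest.get(key)
        if current is None or seq > current[0]:
            self.latest[key] = (seq, value)
            return True
        return False  # stale: a newer update already arrived

book = LatestWins()
book.offer("AAPL", 1, 189.50)
book.offer("AAPL", 3, 190.10)             # newest quote wins
accepted = book.offer("AAPL", 2, 189.90)  # late packet: dropped
print(accepted, book.latest["AAPL"])      # False (3, 190.1)
```

Over UDP, a late-arriving datagram is simply discarded by this check; there is nothing to wait for and nothing to retransmit.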

By @blueflow - 4 months
What should be based on datagrams:

- Local discovery (DHCP, slaac, UPnP, mDNS, tinc, bittorrent)

- Broadcasts (Local network streaming)

- Package encapsulation (wireguard, IPSec, OpenVPN, vlan)

By @mjw_byrne - 4 months
Silly clickbait title, which the author even admits up front.

UDP and TCP have different behaviour and different tradeoffs; you have to understand them before choosing one for your use case. That's basically it. No need for "Never do X" gatekeeping.

By @cenriqueortiz - 4 months
Nahhh. While most applications/cases will use session-based connections, there are good reasons to use datagrams directly — don't be afraid. Yes, you will have to take care of many more details yourself. And as a side line, it is a great way of learning the low-level aspects of networking.
By @ggm - 4 months
Quic is implemented over UDP. It's literally running over datagrams.
By @thomashabets2 - 4 months
> The bytes within each stream are ordered, reliable, and can be any size; it’s nice and convenient. Each stream could be a video frame […] But you can tell the QUIC stack to focus on delivering important streams first. The low priority streams will be starved, and can be closed to avoid wasting bandwidth.

Is the author saying that with QUIC I can send a "score update" for my game (periodic update) on a short-lived stream, and prevent retransmissions? I'll send an updated "score update" in a few seconds, so if the first one got lost, I don't want it to waste bandwidth retransmitting. In particular, I don't want it retransmitted after I've sent a newer update.
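Whether a given QUIC stack exposes this directly varies, but the supersession pattern the commenter is asking about can be modeled independently of any QUIC library: each update type holds at most one pending payload, and queuing a newer one cancels (in QUIC terms, would reset the stream of) the older one. A toy sketch, with all names hypothetical:

```python
class SupersedingSender:
    """Model of one-short-lived-stream-per-update with supersession:
    queuing a new update for a key cancels the unsent/unacked old one."""
    def __init__(self):
        self.pending = {}   # key -> payload awaiting (re)transmission
        self.cancelled = 0  # count of superseded (never retransmitted) updates

    def queue(self, key, payload):
        if key in self.pending:
            # In QUIC terms: reset the old stream instead of retransmitting it.
            self.cancelled += 1
        self.pending[key] = payload

    def drain(self):
        """Hand everything still pending to the transport."""
        ready, self.pending = self.pending, {}
        return list(ready.values())

s = SupersedingSender()
s.queue("score", b"score=1")
s.queue("score", b"score=2")   # supersedes the first; score=1 is never resent
print(s.drain(), s.cancelled)  # [b'score=2'] 1
```

The key property is that a lost or slow old update is abandoned, not retried, once a newer one exists.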

By @forrestthewoods - 4 months
It mentions that the video game industry uses UDP but then fails to further address that use case. So, should competitive shooter video games switch to QUIC? Is that even supported across all the various gaming platforms?
By @dicroce - 4 months
I wonder if this guy thinks video GOPs should all be hundreds of frames long because P frames are so much smaller than I frames and since in his world we NEVER use an unreliable network you might as well.
By @gary_0 - 4 months
I wonder if the DiffServ[0] bits in IPv6 could be another way to prevent bufferbloat from affecting real-time datagrams? Or are they like IPv4's ToS[1] bits, which I think were never implemented widely (or properly) enough for any software to bother with?

[0] https://en.wikipedia.org/wiki/Differentiated_services

[1] https://en.wikipedia.org/wiki/Type_of_service

By @ozim - 4 months
Great in-depth article from what seems to be a person who really knows this stuff.
By @bitcharmer - 4 months
This is a narrow way of looking at UDP applications. The whole HFT, low-latency fintech world is built on top of datagrams. Using TCP would be the worst choice possible.
By @kierank - 4 months
Imagine Ethernet was designed like this and you had to implement mandatory congestion control and other cruft. The layer of the stack that has knowledge of the content should be implementing the congestion control.
By @asdefghyk - 4 months
Maybe ... never use on a congested network.

or Never use on a network where congestion is above a certain level

or Never use on a network where this parameter is above a certain level - like network latency

or only use on a LAN not a WAN ....?

By @dragonfax - 4 months
I've seen UDP used to great effect in video streaming, especially timely video streaming such as cloud gaming, where waiting for a late packet is no longer useful.
By @Anon_Admirer - 4 months
Hope this one gets captured by quackernews - can’t wait to see its description.
By @nabla9 - 4 months
The choice is not UDP vs TCP.

UDP adds a minimal layer over raw sockets so that you don't need root privileges. Other protocols are built on top of UDP.

It's better to use existing non-TCP protocols instead of raw UDP when the need arises, rather than making your own. Especially for streaming.

By @ta1243 - 4 months
Used high numbers of UDP packets over intercontinental internet links for mission-critical applications for 15 years: 20 Mbit of UDP carrying RTP. Loss on a given flow is quite rare, and the application helps (via duplication or retransmits).

As time has progressed we've moved from nothing, to FEC, to dual-streaming and offset-streaming, to RIST and SRT, depending on the criticality.
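Simple FEC of the kind mentioned here can be as little as one XOR parity packet per group, which lets the receiver reconstruct any single lost packet without a retransmit round trip. A minimal sketch (assumes equal-length packets; production schemes such as those in RIST are considerably more elaborate):

```python
def xor_parity(packets):
    """Compute an XOR parity packet over equal-length packets (1-loss FEC)."""
    parity = bytearray(len(packets[0]))
    for p in packets:
        for i, b in enumerate(p):
            parity[i] ^= b
    return bytes(parity)

def recover(received, parity):
    """Recover the single missing packet from the survivors plus parity."""
    missing = bytearray(parity)
    for p in received:
        for i, b in enumerate(p):
            missing[i] ^= b
    return bytes(missing)

group = [b"pkt1", b"pkt2", b"pkt3"]
parity = xor_parity(group)
# Suppose pkt2 is lost in transit; parity plus the survivors restores it:
restored = recover([group[0], group[2]], parity)
print(restored)  # b'pkt2'
```

Dual-streaming and offset-streaming trade bandwidth for loss tolerance in the same spirit: recovery data is already in flight before the loss is detected.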

On the other hand, I've seen people try to use TCP (with RTMP) and fail miserably. Never* use TCP.

Or you know, use the right tool for the right job.

By @JackSlateur - 4 months
Network is nothing but datagrams.
By @20k - 4 months
I feel like this article misses why people avoid TCP like the plague, and why people use UDP for many applications

1. Routers do all kinds of terrible things with TCP, causing high latency, and poor performance. Routers do not do this to nearly the same extent with UDP

2. Operating systems have a tendency to buffer for long periods, resulting in very poor performance due to high latency. TCP is often seriously unusable for deployment on a random client's default setup. Getting caught out by Nagle is a classic mistake; it's one of the first things to look for in a project suffering from TCP issues

3. TCP is stream-based, which I don't think has ever been what I want. You have to reimplement your own protocol on top of TCP anyway to introduce message frames

4. The model of network failures that TCP works well for is a bit naive, network failures tend to cluster together making the reliability guarantees not that useful a lot of the time. Failures don't tend to be statistically independent, and your connection will drop requiring you to start again anyway

5. TCP's backoff model on packet failures is both incredibly aggressive, and mismatched for a flaky physical layer. Even a tiny % of packet loss can make your performance unusable, to the point where the concept of using TCP is completely unworkable
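Regarding point 2, the classic first-aid for Nagle-induced latency is disabling it with `TCP_NODELAY`. A small self-contained sketch using only loopback sockets from the Python stdlib:

```python
import socket

# Nagle's algorithm batches small writes to reduce packet count, which adds
# latency for chatty request/response traffic. Disabling it per-socket is
# usually the first thing to try when small TCP messages arrive late.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))   # port 0: let the OS pick a free port
srv.listen(1)

cli = socket.create_connection(srv.getsockname())
cli.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)

# Confirm the option took effect (nonzero means Nagle is off).
nodelay = cli.getsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY)
print(bool(nodelay))  # True
cli.close()
srv.close()
```

Note this only addresses sender-side batching; it does nothing about retransmission stalls or middlebox buffering, which are the other complaints in the list.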

It's also worth noting that people say "unreliable" to mean UDP, chosen for its promptness guarantees, because reliable = TCP and unreliable = UDP

QUIC and TCP actively don't meet the needs of certain applications - it's worth examining a use case that's kind of glossed over in the article: videogames

I think this article misses the point strongly here by ignoring this kind of use case, because in many domains you have a performance and fault model that is simply not well matched by a protocol like TCP or QUIC. None of the features on the protocol list are things that you especially need, or even can implement, for videogames (you really want to encrypt player positions?). In a game, your update rate might be 1KB/s - absolutely tiny. If more than N packets get dropped - under TCP or UDP (or QUIC) - then because games are a hard realtime system you're screwed, and there's nothing you can do about it no matter what protocol you're using. If you use QUIC, the server will attempt to send the packet again, which is completely pointless, and now you're stuck waiting for potentially a whole queue of packets to send if your network hiccups for a second, with whatever congestion control QUIC implements, so your game lags even more once your network recovers. Ick! Should we have a separate queue for every packet?

Videogame networking protocols are built to tolerate the loss of a certain number of packets within a certain timeframe (eg 1 every 200ms), and this system has to be extremely tightly integrated into the game architecture to maintain your hard realtime guarantees. Adding QUIC is just overhead, because the reliability that QUIC provides, and the reliability that games need, are not the same kind of reliability

Congestion in a videogame with low bandwidths is extremely unlikely. The issue is that network protocols have no way to know if a dropped packet is because of congestion, or because of a flaky underlying connection. Videogames assume a priori that you do not have congestion (otherwise your game is unplayable), so all recoverable networking failures are one-off transient network failures of less than a handful of packets, by definition. When you drop a packet in a videogame, the server may increase its update rate to catch you up via time dilation, rather than in a protocol like TCP/QUIC which will reduce its update rate. A well designed game built on UDP tolerates a slightly flaky connection. If you use TCP or QUIC, you'll run into problems. QUIC isn't terrible, but it's not good for this kind of application, and we shouldn't pretend it's fine
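The "tolerate N drops per timeframe" budget described above can be sketched as a sliding-window counter; the thresholds below (1 drop per 200 ms) are just the commenter's example numbers, and the class name is invented:

```python
from collections import deque

class LossWindow:
    """Track packet drops in a sliding time window; flag when the link is
    too flaky for the game's real-time budget (e.g. >1 drop per 200 ms)."""
    def __init__(self, max_drops=1, window_ms=200):
        self.max_drops = max_drops
        self.window_ms = window_ms
        self.drops = deque()  # timestamps (ms) of recent drops

    def record_drop(self, now_ms):
        """Record a drop; return True while still within the loss budget."""
        self.drops.append(now_ms)
        while self.drops and now_ms - self.drops[0] > self.window_ms:
            self.drops.popleft()  # age out drops older than the window
        return len(self.drops) <= self.max_drops

w = LossWindow(max_drops=1, window_ms=200)
print(w.record_drop(0))    # True: one drop is tolerable
print(w.record_drop(100))  # False: two drops inside 200 ms
print(w.record_drop(400))  # True: the earlier drops aged out
```

When the budget is exceeded, the game treats the link as degraded and reacts at the application layer (time dilation, snapshot resend), rather than letting a transport-layer backoff throttle the update rate.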

For more information about a good game networking system, see this video: https://www.youtube.com/watch?v=odSBJ49rzDo. It goes over in some detail why you shouldn't use something like QUIC.

By @dale_glass - 4 months
IMO one of the worst mistakes made in IP development was not creating a standard protocol for the one thing that pretty much everyone seems to want:

A reliable, unlimited-length, message-based protocol.

With TCP, there are a million users who throw out the stream aspect and implement messages on top of it. And with UDP, people implement reliability and the ability to transmit more than 1 MTU.

So much time wasted reinventing the wheel.

By @karmakaze - 4 months
TL;DR - use QUIC (should have just looked at the domain name)
By @FpUser - 4 months
I use UDP for my desktop game-like software, with clients all over the world, to propagate anonymized state. It needs no encryption, as the transmitted data has zero value to any third party. It needs no reliability, since dropped packets are handled by a predictive filter, with practical reliability that is way more than needed.

So why the F.. would I bother with anything else?