Timeliness without datagrams using QUIC
The debate between TCP and UDP for internet applications centers on reliability versus timeliness. UDP suits real-time scenarios like live video, while QUIC, with congestion control and stream prioritization, can deliver media both efficiently and on time.
The article discusses the debate between using TCP and UDP for internet applications, focusing on the importance of reliability and timeliness in data delivery. It explains how UDP datagrams are preferred for scenarios requiring real-time delivery, such as live video streaming, due to their ability to prioritize timeliness over reliability. The text delves into the complexities of network congestion, buffer management, and the challenges of building a custom transport protocol on top of UDP. It highlights the advantages of using QUIC for media delivery, emphasizing the need for congestion control mechanisms like BBR and stream prioritization to ensure timely data transmission. The article also touches on the evolving support for datagrams in protocols like QUIC and WebTransport, while cautioning against the pitfalls of relying solely on UDP for developing new video protocols. Ultimately, it advocates for leveraging existing technologies like QUIC for efficient and reliable media delivery over the internet.
- TCP will waste roundtrips on handshakes. And then some extra on MTU discovery.
- TCP will keep trying to transmit data even if it's no longer useful (same issue as with real-time multimedia).
- If you move into a location with worse coverage, your latency increases, but TCP will assume packet loss due to congestion and reduce bandwidth. And in general, loss-based congestion control just doesn't work at this point.
- Load balancers and middleboxes (and HTTP servers, but that's another story) may disconnect you randomly because hey, you haven't responded for four seconds, you are probably no longer there anyway, right?
- You can't interpret the data you've got until you have all of it - because TCP will split packets with no regard to data structure. Which is twice as sad when all of your data would actually fit in 1200 bytes.
In a datagram-first world we would have no issue bonding any number of data links with very high efficiency or seamlessly roaming across network boundaries without dropping connections. Many types of applications can handle out-of-order frames with zero overhead and would work much faster if written for the UDP model.
(* in keeping with HN's title guideline: "Please use the original title, unless it is misleading or linkbait" - https://news.ycombinator.com/newsguidelines.html)
I don't agree with the premise of this article. UDP isn't about unreliability; it offers a tradeoff: speed and efficiency with best-effort delivery instead of guarantees.
It makes sense depending on your application. For example, if I have a real-time multiplayer video game and things fall behind, the updates that fell behind no longer matter because the state of the game has changed. Same thing for a high-speed trading application -- I only care about the most recent market data in some circumstances, not what happened 100ms ago.
- Local discovery (DHCP, SLAAC, UPnP, mDNS, tinc, BitTorrent)
- Broadcasts (local network streaming)
- Packet encapsulation (WireGuard, IPsec, OpenVPN, VLAN)
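The discovery case in particular has no TCP equivalent, because the whole point is that you don't yet know who you're talking to. A minimal Go sketch of broadcast-based discovery (port 9999 and the payload are placeholders, not any real protocol):

```go
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Listener: bind the well-known discovery port and print whatever
	// arrives, without knowing any peer addresses in advance.
	lc, err := net.ListenPacket("udp4", ":9999")
	if err != nil {
		panic(err)
	}
	defer lc.Close()

	done := make(chan struct{})
	go func() {
		defer close(done)
		buf := make([]byte, 1500)
		n, from, err := lc.ReadFrom(buf)
		if err != nil {
			return
		}
		fmt.Printf("discovered peer %s: %q\n", from, buf[:n])
	}()

	// Announcer: send one hello to every host on the local segment.
	conn, err := net.ListenPacket("udp4", ":0")
	if err != nil {
		panic(err)
	}
	defer conn.Close()
	if _, err := conn.WriteTo([]byte("HELLO"), &net.UDPAddr{IP: net.IPv4bcast, Port: 9999}); err != nil {
		fmt.Println("broadcast failed:", err)
	}

	select {
	case <-done:
	case <-time.After(2 * time.Second):
	}
}
```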
UDP and TCP have different behaviour and different tradeoffs, you have to understand them before choosing one for your use case. That's basically it. No need for "Never do X" gatekeeping.
Is the author saying that with QUIC I can send a "score update" for my game (periodic update) on a short-lived stream, and prevent retransmissions? I'll send an updated "score update" in a few seconds, so if the first one got lost, then I don't want it to waste bandwidth retransmitting. Especially I don't want it retransmitted after I've sent a newer update.
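That is roughly what short-lived streams buy you: each update gets its own unidirectional stream, and when a newer update supersedes it you reset the old stream so QUIC stops retransmitting the stale data (QUIC datagrams are the other option when an update fits in one packet). A rough sketch against the quic-go API - the wrapper type, field names, and error code are illustrative assumptions:

```go
package scoreupdates

import "github.com/quic-go/quic-go"

// errSuperseded is an application-chosen stream error code meaning
// "a newer update replaced this one"; the value is arbitrary.
const errSuperseded = quic.StreamErrorCode(0x10)

// Sender puts each periodic update on its own unidirectional stream and
// abandons the previous one when a newer update is ready.
type Sender struct {
	conn quic.Connection
	prev quic.SendStream
}

// Send publishes one encoded update; encoding and retries are left out.
func (s *Sender) Send(update []byte) error {
	// Reset the previous stream so QUIC stops retransmitting data the
	// receiver hasn't gotten yet. If that update was already delivered
	// in full, the reset changes nothing the receiver cares about.
	if s.prev != nil {
		s.prev.CancelWrite(errSuperseded)
	}

	stream, err := s.conn.OpenUniStream()
	if err != nil {
		return err
	}
	if _, err := stream.Write(update); err != nil {
		return err
	}
	// Close sends a FIN: if the packets arrive, the receiver reads a
	// complete update; if they don't, the next Send resets the stream
	// instead of letting retransmissions pile up.
	if err := stream.Close(); err != nil {
		return err
	}
	s.prev = stream
	return nil
}
```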
or Never use on a network where congestion is above a certain level
or Never use on a network where some parameter is above a certain level - like network latency
or only use on a LAN, not a WAN...?
UDP adds a minimal layer over raw sockets so that you don't need root privileges. Other protocols are built on top of UDP.
When the need arises, it's better to use existing non-TCP protocols than to roll your own on top of UDP. Especially for streaming.
As time has progressed, the approaches have escalated from nothing, to FEC, to dual-streaming and offset-streaming, to RIST and SRT, depending on the criticality.
On the other hand I've seen people try to use TCP (with RTMP) and fail miserably. Never* use TCP.
Or you know, use the right tool for the right job.
1. Routers do all kinds of terrible things with TCP, causing high latency and poor performance. Routers do not do this to nearly the same extent with UDP
2. Operating systems have a tendency to buffer for long stretches of time, resulting in very poor performance due to high latency. TCP is often seriously unusable for deployment on a random client's default setup. Getting caught out by Nagle is a classic mistake; it's one of the first things to look for in a project suffering from TCP issues (see the sketch after this list)
3. TCP is stream based, which I don't think has ever been what I want. You have to reimplement your own protocol on top of TCP anyway to introduce message frames
4. The model of network failures that TCP works well for is a bit naive, network failures tend to cluster together making the reliability guarantees not that useful a lot of the time. Failures don't tend to be statistically independent, and your connection will drop requiring you to start again anyway
5. TCP's backoff model on packet failures is both incredibly aggressive, and mismatched for a flaky physical layer. Even a tiny % of packet loss can make your performance unusable, to the point where the concept of using TCP is completely unworkable
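On the Nagle point in item 2, the fix itself is a one-line socket option; the trap is not knowing it exists. A minimal Go sketch (the address is a placeholder; Go happens to disable Nagle by default, but many other stacks do not):

```go
package main

import (
	"log"
	"net"
)

func main() {
	// Placeholder endpoint for illustration.
	conn, err := net.Dial("tcp", "example.com:4000")
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	// Nagle's algorithm coalesces small writes, trading latency for fewer
	// packets - exactly the wrong tradeoff for a latency-sensitive protocol.
	// Go's net package already sets TCP_NODELAY, but setting it explicitly
	// documents the intent and matters when porting to stacks that don't.
	if tcp, ok := conn.(*net.TCPConn); ok {
		if err := tcp.SetNoDelay(true); err != nil {
			log.Fatal(err)
		}
	}

	// Each small write now goes out immediately instead of waiting for the
	// previous segment to be acknowledged.
	if _, err := conn.Write([]byte("ping")); err != nil {
		log.Fatal(err)
	}
}
```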
It's also worth noting that people use "unreliable" to mean UDP for its promptness guarantees, because reliable = TCP, and unreliable = UDP
QUIC and TCP actively don't meet the needs of certain applications - it's worth examining a use case that's kind of glossed over in the article: videogames
I think this article strongly misses the point here by ignoring this kind of use case, because in many domains you have a performance and fault model that is simply not well matched by a protocol like TCP or QUIC. None of the features on the protocol list are things that you especially need or can even implement for videogames (do you really want to encrypt player positions?). In a game, your update rate might be 1KB/s - absolutely tiny. If more than N packets get dropped - under TCP or UDP (or QUIC) - you're screwed, because games are a hard realtime system, and there's nothing you can do about it no matter what protocol you're using. If you use QUIC, the server will attempt to send the packet again, which... is completely pointless, and now you're stuck waiting for potentially a whole queue of packets to send if your network hiccups for a second, with presumably whatever congestion control QUIC implements, so your game lags even more once your network recovers. Ick! Should we have a separate queue for every packet?
Videogame networking protocols are built to tolerate the loss of a certain number of packets within a certain timeframe (e.g. 1 every 200ms), and this system has to be extremely tightly integrated into the game architecture to maintain your hard realtime guarantees. Adding QUIC is just overhead, because the reliability that QUIC provides and the reliability that games need are not the same kind of reliability
Congestion in a videogame with such low bandwidth is extremely unlikely. The issue is that network protocols have no way to know whether a dropped packet is due to congestion or a flaky underlying connection. Videogames assume a priori that you do not have congestion (otherwise your game is unplayable), so all recoverable networking failures are one-off transient failures of less than a handful of packets by definition. When you drop a packet in a videogame, the server may increase its update rate to catch you up via time dilation, rather than reducing its update rate as a protocol like TCP/QUIC would. A well designed game built on UDP tolerates a slightly flaky connection; if you use TCP or QUIC, you'll run into problems. QUIC isn't terrible, but it's not good for this kind of application, and we shouldn't pretend it's fine
For more information about a good game networking system, see this video: https://www.youtube.com/watch?v=odSBJ49rzDo - it goes into detail on why you shouldn't use something like QUIC
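To make the latest-state-wins idea above concrete, here is a minimal sketch of the sequence-numbered snapshot handling games typically do over UDP (the 12-byte wire layout is an illustrative assumption): stale or out-of-order packets are simply dropped, which is exactly the behavior a reliable transport refuses to give you.

```go
package main

import (
	"encoding/binary"
	"fmt"
	"math"
)

// Snapshot is an illustrative game-state update; real games pack far more
// state, and far more tightly.
type Snapshot struct {
	Seq  uint32
	X, Y float32
}

// encode packs a snapshot into a fixed 12-byte wire format:
// 4-byte sequence number followed by two 4-byte coordinates.
func encode(s Snapshot) []byte {
	buf := make([]byte, 12)
	binary.BigEndian.PutUint32(buf[0:4], s.Seq)
	binary.BigEndian.PutUint32(buf[4:8], math.Float32bits(s.X))
	binary.BigEndian.PutUint32(buf[8:12], math.Float32bits(s.Y))
	return buf
}

// Receiver keeps only the newest snapshot: anything older than what has
// already been applied is dropped instead of being "recovered".
type Receiver struct {
	lastSeq uint32
	state   Snapshot
}

func (r *Receiver) Handle(pkt []byte) {
	if len(pkt) < 12 {
		return // malformed, ignore
	}
	seq := binary.BigEndian.Uint32(pkt[0:4])
	if seq <= r.lastSeq {
		return // stale or duplicate: the world has already moved on
	}
	r.lastSeq = seq
	r.state = Snapshot{
		Seq: seq,
		X:   math.Float32frombits(binary.BigEndian.Uint32(pkt[4:8])),
		Y:   math.Float32frombits(binary.BigEndian.Uint32(pkt[8:12])),
	}
}

func main() {
	r := &Receiver{}
	// UDP may reorder or drop packets; snapshot 3 wins, the late 2 is dropped.
	r.Handle(encode(Snapshot{Seq: 1, X: 0, Y: 0}))
	r.Handle(encode(Snapshot{Seq: 3, X: 2.5, Y: 1.0}))
	r.Handle(encode(Snapshot{Seq: 2, X: 1.0, Y: 0.5}))
	fmt.Printf("applied state: %+v\n", r.state)
}
```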
A reliable, unlimited-length, message-based protocol.
With TCP there are a million users that throw out the stream aspect and implement messages on top of it. And with UDP, people implement reliability and the ability to transmit >1 MTU.
So much time wasted reinventing the wheel.
So why the F.. would I bother with anything else?
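For what it's worth, that reinvented wheel on the TCP side usually looks something like length-prefixed framing; a sketch, not any particular library's API:

```go
package framing

import (
	"encoding/binary"
	"fmt"
	"io"
)

// WriteMessage prefixes a message with a 4-byte big-endian length - the
// usual way a message abstraction gets rebuilt on top of a TCP stream.
func WriteMessage(w io.Writer, msg []byte) error {
	var hdr [4]byte
	binary.BigEndian.PutUint32(hdr[:], uint32(len(msg)))
	if _, err := w.Write(hdr[:]); err != nil {
		return err
	}
	_, err := w.Write(msg)
	return err
}

// ReadMessage reads one length-prefixed message. maxSize guards against a
// hostile or corrupted length field.
func ReadMessage(r io.Reader, maxSize uint32) ([]byte, error) {
	var hdr [4]byte
	if _, err := io.ReadFull(r, hdr[:]); err != nil {
		return nil, err
	}
	n := binary.BigEndian.Uint32(hdr[:])
	if n > maxSize {
		return nil, fmt.Errorf("message of %d bytes exceeds limit of %d", n, maxSize)
	}
	msg := make([]byte, n)
	if _, err := io.ReadFull(r, msg); err != nil {
		return nil, err
	}
	return msg, nil
}
```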