July 1st, 2024

Four lines of code it was four lines of code

The programmer resolved a CPU utilization issue by removing unnecessary Unix domain socket code from a TCP and TLS service handler. The debugging process highlighted the value of meticulous code review and of understanding how code interacts with the underlying system.

The programmer encountered a CPU utilization problem in code handling TCP and TLS services. After thorough investigation, a small section of code written for Unix domain sockets was identified as the culprit: it was unnecessary for TCP sockets, yet it caused extra system calls and event triggers that led to inefficient processing. Removing it improved performance significantly. The debugging process involved testing, code adjustments, and ultimately pinpointing the root cause. The fix highlights the value of meticulous code review, of understanding the underlying system interactions, and of how much a few subtle lines of code can matter for performance.
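The post's actual code is not reproduced in this summary, so the following is only a rough C sketch of the general shape of such a fix; the function name and the SO_DOMAIN guard are illustrative assumptions, not the author's code.

    /* Illustrative only: the idea is to skip Unix-domain-socket-specific work
     * for connections that are actually TCP, instead of doing it (and paying
     * for the extra syscalls) on every connection. */
    #include <sys/socket.h>

    static void on_connection_ready(int fd) {
        int domain = 0;
        socklen_t len = sizeof(domain);

        /* SO_DOMAIN (Linux) reports the address family the socket was created with. */
        if (getsockopt(fd, SOL_SOCKET, SO_DOMAIN, &domain, &len) == 0 &&
            domain == AF_UNIX) {
            /* Unix-domain-only handling (e.g. peer credentials) would go here. */
        }

        /* Common TCP/TLS handling continues here. */
    }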

6 comments
By @vidarh - 4 months
"strace -c" is usually how I start looking for things like this. It so often immediately reveal some vast difference in syscalls that'll give a clue, and is usually faster/more convenient as a first step than starting with a full profiler trace or looking through code.
By @boffinAudio - 4 months
Whenever I need to do network things in Lua, I refer to the turbo.lua framework, which I've used in the past on other projects and I just find it so useful:

https://turbo.readthedocs.io/en/latest/

Whether it's a set of file descriptors, a network socket, or some higher-level session abstraction, I can usually find the 'best' way to deal with it by inspecting the turbo.lua sources - or, if feasible, I just use turbo.lua for the application on the table. It has been a seriously valuable tool in the 'get some networking/signals/fd_set thing working as quickly as possible' department.

Not that I'm suggesting the author switch to it, just that there is already a battle-tested and well-proven Lua-based framework for these things, and it's worth investigating how it manages these abstractions. The distinction between local and IP sockets is also well managed.

By @barsonme - 4 months
The site seems down, so here's the wayback link: https://web.archive.org/web/20240701032732/https://boston.co...
By @mbreese - 4 months
Best line of the post:

> And as it is with these types of bugs, finding the root cause is not trivial, but the fix almost always is.

By @scottlamb - 4 months
> There's a race condition where writing data then calling close() may cause the other side to receive no data. This does NOT appear to happen with TCP sockets, but this doesn't hurt the TCP side in any case.

This might come down to a different default for SO_LINGER for Unix-domain sockets than for INET{,6}-domain sockets, and possibly to Unix-domain sockets not having an equivalent of RST to signal the loss. IIUC, typical best practice anyway is to shut down the sending half, then wait for the peer to shut down its half, before closing the socket (a minimal sketch of that sequence follows after this comment). It's been almost 20 years(!) since I've played with this, but the behavior you get by just calling close() with unsent data at least used to vary by platform with TCP anyway. [1]

[1] https://web.archive.org/web/20060924184856/http://www.slamb....
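A minimal C sketch of that close sequence, assuming a plain blocking socket (this illustrates the practice described above, not code from the post):

    /* Shut down the sending half, then drain until the peer closes its side,
     * so that close() cannot discard data the peer has not yet received.
     * Error handling is reduced to the bare minimum. */
    #include <sys/socket.h>
    #include <unistd.h>

    static void graceful_close(int fd) {
        char buf[256];

        /* Tell the peer we are done sending; already-queued data is still delivered. */
        shutdown(fd, SHUT_WR);

        /* read() returns 0 once the peer has closed (or shut down) its sending side. */
        while (read(fd, buf, sizeof(buf)) > 0)
            ;

        close(fd);
    }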

By @kstrauser - 4 months
Why does it take 52 seconds of CPU time to serve 275 gopher requests? That’s a couple orders of magnitude more than I’d expect.
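(For scale: 52 s of CPU across 275 requests is 52 / 275 ≈ 0.19 s, i.e. roughly 190 ms of CPU per request.)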