July 3rd, 2024

Background of Linux's "file-max" and "nr_open" limits on file descriptors (2021)

Linux's 'file-max' and 'nr_open' kernel limits on file descriptors trace back to early Unix implementations such as V7. Originally fixed at kernel compile time, these limits evolved to control resource allocation efficiently.


The article discusses the Unix background of Linux's 'file-max' and 'nr_open' kernel limits on file descriptors. It explains that these limits trace back to the early implementations of Unix, such as V7, which used fixed-size arrays for open files and file descriptors due to the simple kernel designs at the time. The sizes of these arrays were set during kernel compilation and influenced memory usage. Over time, as kernels evolved to dynamically allocate resources, limits were introduced to control the number of file descriptors per process and globally. The separate limits serve different purposes, with the per-process limit preventing resource leaks and the global limit restricting kernel memory usage. The article speculates on the naming conventions of these limits in Linux, suggesting historical reasons for the use of underscores and dashes in their names. The discussion provides insights into the evolution of file descriptor limits in Unix-based systems, shedding light on the rationale behind their design and implementation.
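As a concrete illustration of the limits the article describes, the sketch below reads the two system-wide sysctls from /proc and the per-process limit via the standard `resource` module. The /proc paths and `RLIMIT_NOFILE` are standard on Linux; the printed values will of course differ per machine.

```python
# Minimal sketch: inspect Linux's global and per-process file
# descriptor limits. Linux-only (reads /proc/sys entries).
import resource

def read_sysctl(path):
    """Read a single integer value from a /proc/sys entry."""
    with open(path) as f:
        return int(f.read().strip())

# Global cap on open file descriptions across the whole system.
file_max = read_sysctl("/proc/sys/fs/file-max")
# Ceiling on what any single process may raise its own limit to.
nr_open = read_sysctl("/proc/sys/fs/nr_open")
# Per-process soft/hard limits (what `ulimit -n` reports).
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)

print(f"fs.file-max = {file_max}")
print(f"fs.nr_open  = {nr_open}")
print(f"RLIMIT_NOFILE: soft={soft}, hard={hard}")
```

This makes the division of labor visible: `fs.file-max` bounds kernel memory use globally, while `RLIMIT_NOFILE` (capped by `fs.nr_open`) guards each process against descriptor leaks.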

Related

The Dirty Pipe Vulnerability


The Dirty Pipe Vulnerability (CVE-2022-0847) in Linux kernel versions since 5.8 allowed unauthorized data overwriting in read-only files, fixed in versions 5.16.11, 5.15.25, and 5.10.102. Discovered through CRC errors in log files, it revealed systematic corruption linked to ZIP file headers due to a kernel bug in Linux 5.10. The bug's origin was pinpointed by replicating data transfer issues between processes using C programs, exposing the faulty commit. Changes in the pipe buffer code impacted data transfer efficiency, emphasizing the intricate nature of kernel development and software component interactions.

Htop explained – everything you can see in htop on Linux (2019)


This article explains htop, a Linux system monitoring tool. It covers uptime, load average, processes, memory usage, and more. It details htop's display, load averages, process IDs, procfs, and process tree structure. Practical examples are provided for system analysis.

CVE-2021-4440: A Linux CNA Case Study


The Linux CNA mishandled CVE-2021-4440 in the 5.10 LTS kernel, causing information leakage and KASLR defeats. The issue affected Debian Bullseye and SUSE's 5.3.18 kernel, resolved in version 5.10.218.

The weirdest QNX bug I've ever encountered


The author encountered a CPU usage bug in a QNX system's 'ps' utility due to a 15-year-old bug. Debugging revealed a race condition, leading to code modifications and a shift towards open-source solutions.

For the Love of God, Stop Using CPU Limits on Kubernetes


Using CPU limits on Kubernetes can lead to CPU throttling, causing more harm than good. Setting accurate CPU requests is crucial to avoid throttling. Memory management best practices are also discussed, along with a tool for resource recommendations.
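The recommendation can be sketched as a container resource spec that sets a CPU request but deliberately omits a CPU limit (a hypothetical fragment; the names and values are illustrative, and `resources.requests`/`resources.limits` are the standard Kubernetes fields):

```yaml
# Illustrative container spec following the article's advice:
# CPU request only (no CPU limit, so no throttling);
# memory request set equal to memory limit.
resources:
  requests:
    cpu: "500m"
    memory: "256Mi"
  limits:
    memory: "256Mi"   # memory limit kept; cpu limit intentionally absent
```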

2 comments
By @pixl97 - 3 months
>Specifically on Linux there are two system-wide sysctls: fs.nr_open and fs.file-max. (Don't ask me why one uses a dash and the other an underscore, or why there are two of them...)

Somewhere it should be lore that the person who named these sysctls went on to name functions in PHP.

By @cassepipe - 3 months
What does NR stand for?