September 2nd, 2024

Undefined behavior in C is a reading error (2021)

Misinterpretations of undefined behavior in C have caused significant semantic issues, risking program stability and making C unsuitable for critical applications. A clearer standard interpretation is needed for reliable programming.

Read original article

The article discusses the concept of undefined behavior in the C programming language, highlighting how misinterpretations of the C89 standard have led to significant issues in C semantics. Undefined behavior allows compilers to optimize code in ways that can render entire programs meaningless if any part of the execution involves undefined behavior. The prevailing interpretation suggests that the standard imposes no requirements on how compilers handle undefined behavior, leading to arbitrary optimizations that can destabilize codebases. This misreading has resulted in C becoming unsuitable for various applications, including operating systems and cryptography, as developers must use specific compiler flags to mitigate the risks associated with undefined behavior. The article argues for a more constructive interpretation of the standard that limits undefined behavior to specific constructs and data, thereby restoring semantic coherence to the language. It emphasizes the need for clear guidance from the standard to prevent further degradation of C semantics and to ensure reliable programming practices.

- Misinterpretation of undefined behavior in C has led to significant semantic issues.

- Compilers exploit undefined behavior for optimizations, risking program stability.

- The current state of C makes it unsuitable for critical applications like operating systems.

- A more constructive interpretation of the standard could restore coherence to C semantics.

- Clear guidance from the C standard is essential for reliable programming practices.
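
For concreteness, here is a minimal sketch (not taken from the article) of the kind of optimization described above: signed overflow is undefined, so an optimizing compiler is free to assume it never happens and may fold an overflow check away entirely. The function name increment_is_larger is illustrative.

    #include <limits.h>
    #include <stdio.h>

    /* Signed integer overflow is undefined behavior, so an optimizing
       compiler may assume it never happens and compile this check to
       "return 1" unconditionally. */
    static int increment_is_larger(int x) {
        return x + 1 > x;   /* undefined when x == INT_MAX */
    }

    int main(void) {
        /* Often prints 1 at higher optimization levels, even though
           x + 1 wraps around on the underlying hardware. */
        printf("%d\n", increment_is_larger(INT_MAX));
        return 0;
    }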

Related

How GCC and Clang handle statically known undefined behaviour

Discussion on compilers handling statically known undefined behavior (UB) in C code reveals insights into optimizations. Compilers like gcc and clang optimize based on undefined language semantics, potentially crashing programs or ignoring problematic code. UB avoidance is crucial for program predictability and security. Compilers differ in handling UB, with gcc and clang showing variations in crash behavior and warnings. LLVM's 'poison' values allow optimizations despite UB, reflecting diverse compiler approaches. Compiler responses to UB are subjective, influenced by developers and user requirements.
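
A minimal sketch of what "statically known" UB looks like (illustrative, not from the linked discussion; the function name read_through_null is made up): the compiler can see the null dereference at compile time, and GCC and Clang may turn the call into a trap, drop it, or warn, depending on version and flags.

    #include <stddef.h>
    #include <stdio.h>

    /* The null dereference is visible at compile time ("statically known"
       undefined behavior).  Depending on the compiler and its flags, the
       call may become a trap instruction, be removed, or draw a warning. */
    static int read_through_null(void) {
        int *p = NULL;
        return *p;          /* undefined behavior, known statically */
    }

    int main(void) {
        printf("%d\n", read_through_null());
        return 0;
    }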

The Byte Order Fiasco

Handling endianness in C/C++ programming poses challenges, emphasizing correct integer deserialization to prevent undefined behavior. Adherence to the C standard is crucial to avoid unexpected compiler optimizations. Code examples demonstrate proper deserialization techniques using masking and shifting for system compatibility. Mastery of these concepts is vital for robust C code, despite available APIs for byte swapping.
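
A small sketch of the masking-and-shifting style of deserialization the summary refers to (the helper name load_be32 is mine, not the article's): assembling the value byte by byte works on any host byte order and avoids the type-punning and alignment pitfalls that can invoke undefined behavior.

    #include <stdint.h>

    /* Portable big-endian 32-bit load: build the value from individual
       bytes with shifts instead of casting the buffer to uint32_t *,
       which risks unaligned access and strict-aliasing violations. */
    static uint32_t load_be32(const unsigned char *p) {
        return ((uint32_t)p[0] << 24) |
               ((uint32_t)p[1] << 16) |
               ((uint32_t)p[2] << 8)  |
               ((uint32_t)p[3]);
    }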

C Isn't a Programming Language Anymore (2022)

The article examines the shift in perception of C from a programming language to a protocol, highlighting challenges it poses for interoperability with modern languages like Rust and Swift.

Clang vs. Clang

The blog post critiques compiler optimizations in Clang, arguing they often introduce bugs and security vulnerabilities, diminish performance gains, and create timing channels, urging a reevaluation of current practices.

The difference between undefined behavior and ill-formed C++ programs

The article explains the difference between undefined behavior and ill-formed programs in C++. It highlights the risks of ill-formed, no-diagnostic-required programs and suggests tools for mitigation.

12 comments
By @layer8 - about 1 month
TFA is misunderstanding. As he cites from the C standard, “Undefined behavior gives the implementor license not to catch certain program errors that are difficult to diagnose.” Since it’s difficult (and even, in the general case of runtime conditions, impossible at compile time) to diagnose, the implementor (compiler writer) has two choices: (a) assume that the undefined behavior doesn’t occur, and implement optimizations under that assumption, or (b) nevertheless implement a defined behavior for it, which in many cases amounts to a pessimization. Given that competition between compilers is driven by benchmarks, guess which option compiler writers are choosing.

The discussions in comp.lang.c (a Usenet newsgroup, not a mailing list) were educating C programmers that they can’t rely on (b) in portable C, and moreover, can’t make any assumptions about undefined behavior in portable C, because the C specification (the standard) explicitly refrains from imposing any requirements whatsoever on the C implementation in that case.

The additional thing to understand is that compiler writers are not malevolently detecting undefined behavior and then inserting optimizations in that case, but instead that applying optimizations is a process of logical deduction within the compiler, and that it is the lack of assumptions related to undefined behavior being put into the compiler implementation, that is leading to surprising consequences if undefined behavior actually occurs. This is also the reason why undefined behavior can affect code executing prior to the occurrence of the undefined condition, because logical deduction as performed by the compiler is not restricted to the forward direction of control flow (and also because compilers reorder code as a consequence of their analysis).
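
A commonly cited illustration of this deduction-not-detection view (a sketch, not part of the comment above): an out-of-bounds read that could only happen in the final loop iteration can license the compiler to conclude that the loop always finds a match, changing code that runs before the undefined access would ever occur.

    int table[4];

    /* Returning 0 requires reading table[4], which is out of bounds and
       therefore undefined.  A compiler may deduce that the loop "must"
       find v first and emit code that returns 1 without consulting the
       table at all. */
    int table_contains(int v) {
        for (int i = 0; i <= 4; i++) {   /* off-by-one: valid indices are 0..3 */
            if (table[i] == v)
                return 1;
        }
        return 0;
    }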

By @Ericson2314 - about 1 month
I really don't like reading these dimwitted screeds. We did not get here because of lawyering over a standard or a document --- this is not the US supreme court or similar. We got here because

- There are legit issues trying to define everything without losing portability. This affects C and anything like it.

- Compiler writers do want to write optimizations regardless of whether this is C or anything else --- witness that GCC / LLVM will use the same optimizations regardless of the input language / compiler frontend

- Almost nobody in this space, neither the cranky programmers against nor the normie compiler writers for, has a good grasp of modern logic and proof theory, which is needed to make this stuff precise.

By @nickelpro - about 1 month
If you want slow code that behaves exactly the way you expect, turn off optimizations. Congrats, you have the loose assembly wrapper language you always wanted C to be.

For the rest of us, we're going to keep getting every last drop out of performance that we can wring out of the compiler. I do not want my compiler to produce an "obvious" or "reasonable" interpretation of my code, I want it to produce the fastest possible "as if" behavior for what I described within the bounds of the standard.

If I went outside the bounds of the standard, that's my problem, not the compiler's.

By @brudgers - about 1 month
A consensus standard happens by multiple stakeholders sitting down and agreeing on what everyone will do the same way. And agreeing on what they won't all do the same way. The things they agree to do differently don't become part of the standard.

With compilers, different companies usually do things differently. That was the case with C89. The things they talked about but could not or would not agree to do the same way are listed as undefined behaviors. The things everyone agreed to do the same way are the standard.

The consensus process reflects stakeholder interests. Stakeholders can afford to rewrite some parts of their compilers to comply with the standards and cannot afford to rewrite other parts to comply with the standards because their customers rely on the existing implementation and/or because of core design decisions.

By @saghm - about 1 month
The crux of this argument seems to be that the author interprets the "range of permissible behavior" they cite as specifications on undefined behavior as not allowing the sort of optimizations that potentially render anything else in the program moot. A large part of this argument depends on arguing that the earlier section defining the term undefined behavior has an "obvious" interpretation that's been ignored in favor of a differing one. I don't think their interpretation of the definition of undefined behavior is necessarily the strongest argument against the case they're making though; to me, the second section they quote is if anything even more nebulous.

To be overly pedantic (which seems to be the point of this exercise), the section cites a "range" of permissible behavior, not an exhaustive list; it doesn't sound to me like it requires that only those three behaviors are allowed. The potential first behavior it includes is "ignoring the situation completely with unpredictable results", followed by "behaving during translation or program execution in a documented manner characteristic of the environment". I'd argue that the behavior this article complains about is somewhere between "willfully ignoring the situation completely with unpredictable results" to "recognizing the situation with unpredictable results", and it's hard for me to read this as being obviously outside the range of permissible behavior. Otherwise, it essentially would mean that it's still totally allowed by the standard to have the exact behavior that the author complains about, but only if it's due to the compiler author being ignorant rather than willful. I think it would be a lot weirder if the intent of the standard was that deviant behavior due to bugs is somehow totally okay but purposely writing the same buggy code is a violation.

By @mst - about 1 month
The practical reality appears to be that compilers use the loose interpretation of UB and that every compiler that works hard to optimise things as much as possible takes advantage of that as much as it can.

I am very much sympathetic to the people who really wish that wasn't the case, and I appreciate the logic of arguments like this one that in theory it shouldn't be the case, but in practice, it is the case, and has been for some years now.

So it goes.

By @ajross - about 1 month
I think the problem is sort of a permutation of this argument: way way too much attention is being paid to warning about the dangers and inadequacies of the standard's UB corners, and basically none to a good faith effort to clean up the problem.

I mean, it wouldn't be that hard in a technical sense to bless a C dialect that did things like guarantee 8-bit bytes, signed char, NULL with a value of numerical zero, etc... The overwhelming majority of these areas are just spots where hardware historically varied (plus a few things that were simple mistakes), and modern hardware doesn't have that kind of diversity.

Instead, we're writing, running and trying to understand tools like UBSan, which is IMHO a much, much harder problem.
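
For reference, UBSan is driven by a compiler flag rather than run as a separate tool; a minimal sketch, assuming a reasonably recent GCC or Clang (the file name ubsan_demo.c is made up):

    /* ubsan_demo.c -- build and run with:
     *     cc -fsanitize=undefined ubsan_demo.c && ./a.out
     * At run time UBSan reports the signed overflow instead of letting
     * the optimizer silently assume it cannot happen. */
    #include <limits.h>

    int main(void) {
        int x = INT_MAX;
        return x + 1;   /* signed overflow: undefined behavior */
    }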

By @Animats - about 1 month
Well, where have we had trouble in C in the past? Usually, with de-referencing null pointers. The classic is

   char* p = 0;
   char c = *p;
   if (p) {
      ...
   }
Some compilers will observe that de-referencing p implies that p is non-null. Therefore, the test for (p) is unnecessary and can be optimized out. The if-clause is then executed unconditionally, leading to trouble.

The program is wrong. On some hardware, you can't de-reference address 0 and the program will abort at "*p". But many machines (e.g. x86) let you de-reference 0 without a trap. This one has caught the Linux kernel devs at least once.

From a compiler point of view, inferring that some pointers are valid is useful as an optimization. C lacks a notation for non-null pointers. In theory, C++ references should never be null, but there are some people who think they're cool and force a null into a reference.

Rust, of course, has

    Option<&Foo>
with unambiguous semantics. This is often implemented with a zero pointer indicating None, but the user doesn't see that.

So, what else? Use after free? In C++, the compiler knows that "delete" should make the memory go away. But that doesn't kill the variable in that scope. It's still possible to reference a gone object. This is common in some old C code, where something is accessed after "free". This is Common Weakness Enumeration #416, use after free.[1]
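
A minimal sketch of that pattern (illustrative, not from the comment): free() releases the allocation, but nothing invalidates the pointer variable itself, so the later read compiles without complaint.

    #include <stdlib.h>
    #include <string.h>

    /* CWE-416 in miniature: buf still holds the old address after free(),
       and dereferencing it is undefined behavior. */
    int main(void) {
        char *buf = malloc(16);
        if (!buf) return 1;
        strcpy(buf, "hello");
        free(buf);          /* the object is gone... */
        return buf[0];      /* ...but this use after free still compiles */
    }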

Not a problem in Rust, or any GC language.

Over-optimization in benchmarks can be amusing.

   for (i=0; i<100000000; i++) {}
will be removed by many compilers today. If the loop body is identical every time, it might only be done once. This is usually not a cause of bad program behavior. The program isn't wrong, just pointless.

What else is a legit problem?

[1] https://cwe.mitre.org/data/definitions/416.html

By @jcranmer - about 1 month
So a while back, I did some spelunking into the history of C99 to actually try to put the one-word-change-theory to bed, but I've never gotten around to writing anything that's public on the internet yet. I guess it's time for me to rectify it.

Tracking down the history of the changes at that time is a bit difficult, because there are clearly multiple drafts that didn't make it into the WG14 document log (this was back in the days when the document log was literal physical copies being mailed to people), and the drafts in question are also of a status that makes them not publicly available. Nevertheless, by reading N827 (the editor's report for one of the drafts), we do find this quote about the changes made:

> Definitions are only allowed to contain actual definitions in the normative text; anything else must be a note or an example. Things that were obviously requirements have been moved elsewhere (generally to Conformance, see above), the examples that used to be at the end of the clause have been distributed to the appropriate definitions, anything else has been made into a note. (Some of the notes appear to be requirements, but I haven't figured out a good place to put them yet.)

In other words, the change seems to have been made purely editorially. The original wording was not intended to be read as imposing requirements, and the change therefore made it a note instead of moving it to conformance. This is probably why "permissible" became "possible": the former is awkward word choice for non-normative text.

Second, the committee had, before this change, discussed the distinctions between implementation-defined, unspecified, and undefined behavior in a way that makes it clear that the anything-goes interpretation is intentional. Specifically, in N732, one committee member introduces unspecified behavior as consisting of four properties: 1) multiple possible behaviors; 2) the choice need not be consistent; 3) the choice need not be documented; and 4) the choice must not have long-range impacts. Drop the third option, and you get implementation-defined behavior; drop the fourth option, and you get undefined behavior[1]. This results in a change to the definition of unspecified and implementation-defined behavior, while the definition of undefined behavior stays the same. Notice how, given a chance to very explicitly repudiate the notion that undefined behavior has spooky-action-at-a-distance, the committee declined to, and it declined to before the supposed critical change in the standard.

Finally, the C committee even by C99 was explicitly endorsing optimizations permitted only by undefined behavior. In N802, a C rationale draft for C99 (that again predates the supposed critical change, which was part of the new draft in N828), there is this quote:

> The bitwise logical operators can be arbitrarily regrouped [converting `(a op b) op c` to `a op (b op c)`], since any regrouping gives the same result as if the expression had not been regrouped. This is also true of integer addition and multiplication in implementations with twos-complement arithmetic and silent wraparound on overflow. Indeed, in any implementation, regroupings which do not introduce overflows behave as if no regrouping had occurred. (Results may also differ in such an implementation if the expression as written results in overflows: in such a case the behavior is undefined, so any regrouping couldn’t be any worse.)

This is the C committee, in 1998, endorsing an optimization relying on the undefined nature of signed integer overflow. If the C committee was doing that way back then, then there are really no grounds one can stand on to claim that it was somehow an unintended interpretation of the standard.
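
A sketch of what that regrouping license means in practice (my example, not N802's): because signed overflow is undefined, the compiler may re-associate the additions; whenever neither grouping overflows the results agree, and if the written grouping would overflow, the program had no defined meaning to preserve anyway.

    /* The compiler may evaluate (a + b) + c as a + (b + c).  If neither
       grouping overflows, the results are identical; if the written
       grouping overflows, the behavior was already undefined, so any
       regrouping "couldn't be any worse". */
    int sum3(int a, int b, int c) {
        return (a + b) + c;
    }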

[1] What happens if you want to drop both the third and fourth option is the point of the paper, with the consensus seeming to be "you don't want to do both at the same time."

By @kazinator - about 1 month
This drivel is posted in a private blog precisely in order to evade expert arguments.

By @mianos - about 1 month
It's quite ironic that a blog named 'keeping simple' is summarised by GPT as:

"In summary, the essay employs a hyperbolic tone to argue that the prevailing interpretation of undefined behavior has severely compromised the utility and stability of C. While it raises valid points about the implications of undefined behavior, the dramatic language and sweeping claims might make the situation appear more catastrophic than is universally agreed upon."

I know it's bad form to quote GPT, but I could not say this better.

As someone who writes C and C++ every day of the week I feel I just wasted 30 minutes of my life reading it and the arguments.