August 1st, 2024

Is C99 actually Turing-complete? (2016)

The discussion examines whether C99 is Turing-complete, highlighting its memory limitations and practical constraints that prevent it from fully emulating a Turing machine in real-world scenarios.


The discussion revolves around whether the C programming language, particularly C99, is Turing-complete. The initial assumption is that C can emulate a Turing machine due to its ability to address memory. However, this is challenged by the fact that the C standard imposes limitations on memory size, specifically through the type size_t, which is finite in any given implementation. This raises the question of whether C can simulate an arbitrary Turing machine, as it cannot handle an infinite amount of memory.

Several contributors argue that while C can theoretically support unbounded recursion and complex data structures, practical implementations are limited by the hardware and the specific compiler used. The finite nature of memory in real-world systems means that C cannot fully emulate a Turing machine without external storage, which would shift the discussion to whether C combined with infinite storage is Turing-complete, a less interesting question.

The conversation also touches on the nature of programming languages and their implementations, noting that many mainstream languages are considered Turing-complete despite their inherent limitations. Ultimately, the consensus leans towards the idea that while C has features that could allow for Turing-completeness, its practical limitations in memory and implementation specifics prevent it from being classified as such in a strict theoretical sense.

15 comments
By @omoikane - 3 months
This question seems to be applying a theoretical definition of Turing-completeness (with infinite memory) to a practical implementation of C (with finite integer sizes and finite addressable memory). This framing doesn't seem consistent.

Either a theoretical definition of C should be used (where integers can be arbitrarily large), or a practical definition of Turing-completeness should be used (where some implementation of a finite computational device is possible), such that it's the same theoretical or practical consideration on both sides of the comparison.

By @AdamH12113 - 3 months
Note that the question here is about the C language itself -- obviously a physical computer is not equivalent to a Turing machine with infinite storage. But the issue seems to be that C fundamentally uses the idea of addressable storage -- objects have addresses, and those addresses must be a finite length (the length of a pointer), which is implementation-dependent but must be finite so that sizeof(pointer) returns a valid result. Even if you could have an infinite amount of RAM, no standard C implementation could address it.

The accepted answer tries to go further with register variables (which officially don't have addresses) and recursion (whose depth is not limited by the standard), but founders on the limitation that you can't make an array out of register variables. Functions can only have a finite number of arguments and return a finite-sized structure.

Another answer from a couple years later tries to make a non-addressable array using va_list and va_copy(). I don't know enough about the quirks of varargs to tell whether this would work, although nobody seems to have an unanswered objection so far.

By @skybrian - 3 months
A related question: why do we even talk about algorithms that assume infinite storage when all our devices are finite? How are they relevant?

I think in part it’s because storage limits are platform-specific and context-dependent. Sure, your laptop has a hard drive with limited storage capacity, but how much free space do you have? Running out of disk space isn’t an algorithmic limit, it’s a limit of the environment, which is constantly changing.

So this is a way of splitting concerns. When we’re analyzing algorithms, storage capacity is someone else’s problem. It’s not part of the function’s API. (Though, handling running out of storage gracefully might be part of the API, depending on the language.)

By @ozb - 3 months
You can implement the "nonstandard arithmetic" suggestion using bignum integers backed by an infinite tape (subject to availability of said infinite tape). Finite integers have a Halt symbol, non-finite ones simply don't. Arithmetic on non-finite integers is not computable, but individual digits generally are. Any finite integer is computably less than any non-finite integer. "Less than" between two non-finite integers is not generally computable, therefore not defined. "Not equals" is semidecidable, so generally all "not equals" statements between two non-finite integers are defined, but "equals" mostly isn't. printf on a non-finite integer will simply print out infinite digits one by one. You can also define and generate non-finite integers from any computable sequence of integers. sizeof(void*) can be defined as e.g. 1111111... (repeating forever, in an arbitrary base).

If you demand that you can always computably do arithmetic on size_t's, allow storing arbitrary arithmetic or even logical expressions in your bignum integers, and call those bignums as well. Define "less than" on infinite integers based on "alphabetical" order. Then the only thing that is non-computable is (in)equality between two expressions for which non-finiteness is unprovable under first-order logic. Given Gödel's completeness (note, not incompleteness) theorem, that should probably meet the definition of a C implementation, though I haven't read the standard.

By @KMnO4 - 3 months
By this logic, there aren’t any existing Turing-complete language implementations, since memory is always finite.

This doesn’t seem like a helpful argument.

By @ratorx - 3 months
I don’t think this proof is particularly useful without considering the complexity of changes required to overcome this limitation.

For example, pointers could theoretically use a variable-length scheme (kinda like UTF-8), as long as the underlying hardware supported it, or supported being able to pass only the next n bits of the pointer in something like a chained syscall.

Of course that isn’t in the specification, but the transformation seems theoretically implementable without needing e.g. infinite length pointers for every access.

In contrast, there is no way to coerce a DFA into becoming a Turing Machine (that is theoretically implementable on the DFA).

So the proof is not necessarily wrong, but it might not be the right kind of proof to be useful.

By @metacritic12 - 3 months
Isn't it true that if there are finite atoms in the world with finite states, Turing-complete machines can't ever exist physically?

It's just that Turing machines are a useful model for some actual computers.

By @sandywaffles - 3 months
Well, considering I just saw a post recently saying `find` + `mkdir` is Turing-complete, I sure as hell hope C99 is as well.
By @TJSomething - 3 months
I'm unsure how banked memory works in the C abstract machine. But it seems to me that if you had a banked architecture with an unbounded number of banks and a write-only bank increment/decrement register, then you could write Turing complete C.
By @zarzavat - 3 months
If we’re going to be this petty, then can’t you represent a pointer by a sequence of syscalls? So if you were on an OS with infinite memory you could still have universe-sized pointers into that memory.
By @dathinab - 3 months
Applying a concept from such a high abstraction level, one fully unbounded in time and space, to a practical, real-world thing is fun but a bit silly, and it often leads to funny questions like: can you define MAX_SIZE, size_t, etc. as infinite?

In the context of IRL things, the question of Turing-completeness is normally not whether "it can be used to simulate any Turing machine" (Wikipedia) but whether "it can be used to simulate any Turing machine bound to some realistic size limit".

By @fsckboy - 3 months
TL;DR: all C implementations are FSMs (finite state machines, bounded by addressable memory based on word size), which are not Turing-complete because of this finiteness, whereas Turing machines have an infinite tape.