July 23rd, 2024

Turing's topological proof that every written alphabet is finite (2010)

Alan Turing's 1936 article introduced the Turing machine and proved the undecidability of the halting problem, results foundational to computer science. Turing's topological proof demonstrates fundamental limits on the diversity of symbols in written languages and other visual representations.

Read original article

Alan Turing's 1936 article introduced the Turing machine and proved the undecidability of the halting problem, results foundational to computer science. In the paper, Turing uses topology to argue that every written alphabet must be finite, because symbols can only be distinguished from one another up to a limit. He treats symbols as compact subsets of a square and observes that the human eye cannot differentiate symbols that are too similar. Measuring the distance between symbols with the Hausdorff metric, the argument shows that a written alphabet can contain at most a finite number of visually distinct symbols. The reasoning extends beyond alphabets to other visual representations such as drawings and colors. The article also discusses the compactness of sets, the Hausdorff metric for measuring distances between compact sets, and the implications of these limits on symbol diversity in various contexts.
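
A rough sketch of the argument as summarized above (the notation here is mine, not necessarily the article's): treat two symbols as nonempty compact subsets A and B of the unit square and measure their distance by the Hausdorff metric,

    \[
      d_H(A, B) \;=\; \max\Bigl(\, \sup_{a \in A}\, \inf_{b \in B}\, \lVert a - b \rVert,\;
                                   \sup_{b \in B}\, \inf_{a \in A}\, \lVert a - b \rVert \Bigr).
    \]

The space of such symbols is totally bounded under d_H, so for any resolving limit \epsilon > 0 it is covered by finitely many \epsilon-balls. Two symbols in the same ball lie within 2\epsilon of each other and are therefore indistinguishable to the reader, so an alphabet of pairwise distinguishable symbols contains at most one symbol per ball and is finite.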

Related

Rosser's Theorem via Turing Machines (2011)

The post explores Rosser's Theorem, a strengthening of Gödel's First Incompleteness Theorem, through the lens of self-reference in formal systems. It explains Rosser's stronger notion of disproof and what it implies about a system's consistency, and connects the theorem to the unsolvability of the halting problem, underscoring Gödel's lasting influence on computability theory.

The Point of the Banach Tarski Theorem

The Banach-Tarski theorem defies common sense by showing that a solid ball in 3D space can be decomposed into finitely many pieces and reassembled into two balls identical to the original. It raises questions about which sets can be assigned a measure and about the Axiom of Choice's role in producing such paradoxes.

Ensemble Mathematical Universe Hypothesis

The Mathematical Universe Hypothesis (MUH) posits the universe as a mathematical entity, facing critiques on assigning probabilities to infinite structures and conflicts with Gödel's theorem. Tegmark defends MUH's computability and links it to multiverse theories, amid debates on testability and radical Platonism.

With fifth busy beaver, researchers approach computation's limits

Researchers led by graduate student Tristan Stérin determined that BB(5) equals 47,176,870, with the proof verified in the Coq proof assistant. Busy beaver numbers, introduced by Tibor Radó, probe the extremes of Turing machine behavior. Allen Brady's program for efficiently analyzing and classifying machines helped advance this line of work.

The Perpetual Quest for a Truth Machine

Historical pursuit of truth machines dates back to Ramon Llull in the 13th century, evolving through Leibniz, Boole, and Shannon. Modern language models like ChatGPT continue this quest for automated certainty.

AI: What people are saying
The comments on Turing's article reflect a range of perspectives on the implications of his work.
  • Several commenters discuss the limitations of symbol representation and the finite nature of symbols in written languages.
  • There is a call for clearer definitions and explanations of mathematical concepts for readers less familiar with topology.
  • Some express concerns about the redundancy of concepts in mathematics and whether they may hinder the pursuit of new ideas.
  • Discussions arise around the context-dependence of symbols and the potential for infinitely many meanings based on context.
  • Questions are raised about the assumptions made in Turing's arguments, particularly regarding compactness and the nature of symbols.
18 comments
By @montecarl - 5 months
Isn't it even easier to just say: given a fixed area of paper and a finite printing resolution, there is only a finite number of possible symbols?

Say you have a 1 in by 1 in area of paper and a printing resolution of 300 DPI; then there are 300*300 total dots. If the printer is monochrome and each dot is either black or white, then there are 2^(300^2) possible symbols.
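
For what it's worth, a tiny Python sketch of that count; the 1-inch square and 300 DPI are just the example figures from the comment above:

    # Count monochrome bitmaps on a 1 in x 1 in patch at 300 DPI
    # (illustrative numbers only).
    dots_per_side = 300                    # 300 DPI over a 1-inch side
    total_dots = dots_per_side ** 2        # 90,000 dots in the square
    possible_symbols = 2 ** total_dots     # each dot is black or white
    print(total_dots)                      # 90000
    print(len(str(possible_symbols)))      # 27093 decimal digits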

By @kazinator - 5 months
> The human eye cannot tell two symbols apart when they are too similar.

That leads to a simple information-theoretic argument: there is a limited spatial resolution and a noise floor, so the amount of information that can be coded is finite.
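
A hedged way to make that counting explicit (the general bound, not any particular numbers, is what the comment points at):

    \[
      \#\{\text{distinguishable symbols}\} \;\le\; L^{N},
    \]

where N is the number of spatial cells the reader or printer can resolve within the symbol's area and L is the number of intensity levels that stand out above the noise floor. Both N and L are finite, so the count is finite; equivalently, a single symbol can carry at most N \log_2 L bits.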

By @thih9 - 5 months
> We cannot tell at a glance whether 9999999999999999 and 999999999999999 are the same.

Off topic - some of us can: iOS underlines one of these for me, treating it as a phone number…

By @nxobject - 5 months
Just a reading note to myself: the key assumption is that there is a "resolving limit" \epsilon such that symbols that are "closer and smaller" than \epsilon are indistinguishable. The implicit assumption there is that you can resize symbols while keeping them the same.
By @tasteslikenoise - 5 months
A different but to my taste more mathematically conventional way to make this argument would be instead to say that a symbol is a function from the square to [0, 1], that is, a grayscale image. A symbol, because of limitations of either its writer/printer or its reader/viewer, should have some "regularity": as you move around the image, there should be some quantitative restrictions on how rapidly the darkness of the gray color should change. In this kind of setup, the space of symbols is compact by a version of the Arzela-Ascoli theorem, which I think was fairly well-known by the time Turing was working. This also has the advantage of being straightforward to generalize to things like colored images by just changing [0, 1] to, say, [0, 1]^3 to represent RGB space, or whatever you want, as some others are mentioning in the comments.
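
A hedged sketch of that setup, with one concrete (and arbitrary) choice of regularity condition: take the symbols to be the L-Lipschitz grayscale images

    \[
      \mathcal{S}_L \;=\; \bigl\{\, f : [0,1]^2 \to [0,1] \;\big|\;
        |f(x) - f(y)| \le L\,\lVert x - y \rVert \ \text{for all } x, y \,\bigr\}.
    \]

This family is uniformly bounded, equicontinuous, and closed under uniform limits, so by Arzela-Ascoli it is compact in the sup norm; compactness plus an indistinguishability threshold \epsilon again yields only finitely many visually distinct symbols. Replacing [0,1] by [0,1]^3 gives the RGB variant mentioned in the comment.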

As an aside, this is an interesting companion read:

"A pedagogical history of compactness" (https://arxiv.org/abs/1006.4131)

As a working mathematician, I can say that this kind of argument has become totally routine and, were I reading Turing's paper carefully, after seeing "epsilon" and "compact" I would think "makes sense" and move on. But, historically speaking, it's interesting to realize how recent the development of the abstract idea of compactness was when Turing was writing---the time between Frechet and Hausdorff's work on compactness (see the pedagogical history) and Turing is about the time between Google being founded and today.

By @colinwilyb - 5 months
"I am assuming that the reader is familiar with the terms metric, metric space, topological space, and compact set."

On behalf of the math-declined folks, I wish math articles had subtitles, references, and definitions built-in.

I wonder if a browser extension for Greek letter definitions would be possible?

By @tempfile - 5 months
Turing's argument says "conditionally compact" but the article talks only about "compact". They are not the same; "conditionally compact" means every sequence has a Cauchy subsequence, while (sequentially) compact means every sequence has a convergent subsequence. If the metric space Turing mentions is complete, then I guess they're the same. Is that obvious?
By @fiforpg - 5 months
From a quick reading of the footnote, it seems that Turing uses optimal transport distance rather than the Hausdorff distance. Convergence in the latter implies convergence in the former, but not the other way round.
By @meroes - 5 months
Isn’t this just taking a common sense concept and adapting it to the concepts and language of Turing machines and also topology?

While, yes, it shows these are good tools, capable of confirming/proving these intuitions and showing exactly how these specific systems achieve that, is there ever any concern that we are making too many redundant concepts? That may lead to confusion and obfuscate the search for novel ideas, which I think won't always happen through said tools. It's not as if these kinds of intuitions were on shaky ground or could be disproven.

I wonder if anyone shares these opinions?

By @thriftwy - 5 months
I don't agree. What if the next symbol's meaning conditionally depends on the previous ones?

For example, in 1937 the third glyph is the digit three, but in ВАЗ the third glyph is the Cyrillic letter З. They're not different in writing.

It would not be hard to devise a symbolic system where new symbols are context dependent, and there are infinitely many of these depending on the context. The context will be governed by a finite number of rules. The glyphs will not change meaning in a specific text.

Human language is an example of such system. If you treat words as a whole as symbols, you can have infinitely many different words as they become defined via the specific text, but obviously you can't have infinitely many different words in a single text because they will stop being mutually intelligible at some point.
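
A minimal Python sketch of the kind of system described here, with a made-up two-rule context (digits vs. Cyrillic); the glyphs and rules are purely illustrative:

    # Finitely many glyphs and finitely many context rules, but a glyph's
    # reading depends on its neighbours (toy example).
    def interpret(text):
        meanings = []
        for i, glyph in enumerate(text):
            neighbours = text[max(0, i - 1):i] + text[i + 1:i + 2]
            if glyph in "3З":
                # Same written shape: read as a digit next to digits,
                # as the Cyrillic letter otherwise.
                if any(c.isdigit() for c in neighbours):
                    meanings.append("digit three")
                else:
                    meanings.append("Cyrillic letter Ze")
            else:
                meanings.append(glyph)
        return meanings

    print(interpret("1937"))  # ['1', '9', 'digit three', '7']
    print(interpret("ВАЗ"))   # ['В', 'А', 'Cyrillic letter Ze']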

By @ainoobler - 5 months
Interesting argument. This assumes that cognition must also be happening on a compact manifold, which seems like a reasonable assumption, but the conclusion is somewhat counterintuitive: it means there are only finitely many personality types and ways of thinking.
By @ridiculous_fish - 5 months
Why do we need the property that X is compact? Say we define X (the square) to exclude the boundary: X = (0,1)x(0,1). This is no longer a compact set, but it seems silly for the proof to now fail.

Is boundedness sufficient?

By @k92mueller - 5 months
I have no idea about topology, so sorry if my question is stupid: assume we are limited to the unit square (0<=x<=1 and 0<=y<=1). Now I map every real number in [0,1] to a symbol by simply defining the symbol for the number r as the point x=r, y=r. This results in an uncountably infinite number of symbols. Now it's clearly not true that it's possible to describe each of those new symbols with a finite number of symbols from a finite alphabet (because otherwise this would also be possible for the real numbers). Where is my intuition wrong?
By @nyc111 - 5 months
"In this paper Turing gives a topological argument that every written alphabet must be finite."

I searched Turing's paper and the word "alphabet" does not appear. I guess the author interprets Turing's use of "symbol" as an "alphabet". Can anyone familiar with Turing's paper clarify?

By @motohagiography - 5 months
non-anything question: Is this text/sheet an analogy for a thing's relationship to its substrate, or does the proof define a relationship or category of "thing-and-substrate?"
By @nyc111 - 5 months
Neither finite nor infinite is defined. I don't know what those words mean. So the assumption of an infinite alphabet doesn't make sense to me.
By @andybak - 5 months
I made a joke:

> Prior to his seminal work Turing published a lesser known 1935 paper: "On Caffeine with an Application to the Anfangsverzögerungsproblem"

I think it might be a little too niche.