Turing's topological proof that every written alphabet is finite (2010)
Alan Turing's 1936 article introduced the Turing machine and proved the undecidability of the halting problem, foundational in computer science. Turing's topological proof demonstrates limitations on symbol diversity in written languages and visual representations.
In the paper, Turing uses topology to argue that any written alphabet must be finite because of limits on how well symbols can be distinguished. He treats symbols as compact subsets of a square and observes that the human eye cannot differentiate symbols that are too similar. By employing the Hausdorff metric, he shows that a written alphabet cannot contain more than a finite number of visually distinct symbols. The reasoning extends beyond alphabets to other visual representations such as drawings and colors. The article discusses the compactness of sets, the Hausdorff metric for measuring distances between compact sets, and the implications of these symbol limitations in various contexts.
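For readers who want the definition behind the argument, the Hausdorff distance between two compact subsets A and B of a metric space (X, d) is the standard one (this is textbook material, not a formula quoted from Turing's paper):

```latex
d_H(A, B) = \max\Bigl\{\, \sup_{a \in A} \inf_{b \in B} d(a, b),\;\; \sup_{b \in B} \inf_{a \in A} d(a, b) \,\Bigr\}
```

Informally, two symbols are close in this metric when every point of each one lies near some point of the other, which is exactly the sense in which two glyphs can be "too similar to tell apart."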
Related
Rosser's Theorem via Turing Machines (2011)
The post delves into Rosser's Theorem, a potent extension of Gödel's Incompleteness Theorems, showcasing self-reference in formal systems. It elucidates Rosser's stronger disproof concept and its impact on system consistency. Additionally, it links Rosser's Theorem to the halting problem's unsolvability, underlining computational insights and Gödel's lasting influence on computability theory.
The Point of the Banach Tarski Theorem
The Banach-Tarski theorem challenges common sense by showing a solid ball can be split into pieces to form two identical balls in 3D space. It questions measurement principles and the Axiom of Choice's role in resolving mathematical paradoxes.
Ensemble Mathematical Universe Hypothesis
The Mathematical Universe Hypothesis (MUH) posits the universe as a mathematical entity, facing critiques on assigning probabilities to infinite structures and conflicts with Gödel's theorem. Tegmark defends MUH's computability and links it to multiverse theories, amid debates on testability and radical Platonism.
With fifth busy beaver, researchers approach computation's limits
Researchers led by graduate student Tristan Stérin determined BB(5) as 47,176,870 using Coq software. Busy beavers, introduced by Tibor Radó, explore Turing machines' behavior. Allen Brady's program efficiently analyzes and classifies machines, advancing computational understanding.
The Perpetual Quest for a Truth Machine
Historical pursuit of truth machines dates back to Ramon Llull in the 13th century, evolving through Leibniz, Boole, and Shannon. Modern language models like ChatGPT continue this quest for automated certainty.
- Several commenters discuss the limitations of symbol representation and the finite nature of symbols in written languages.
- There is a call for clearer definitions and explanations of mathematical concepts for readers less familiar with topology.
- Some express concerns about the redundancy of concepts in mathematics and whether they may hinder the pursuit of new ideas.
- Discussions arise around the context-dependence of symbols and the potential for infinitely many meanings based on context.
- Questions are raised about the assumptions made in Turing's arguments, particularly regarding compactness and the nature of symbols.
Say you have a 1 in × 1 in area of paper and a printing resolution of 300 DPI; then there are 300 × 300 total dots. If the printer is monochrome and each dot is either black or white, then there are 2^(300^2) possible symbols.
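The commenter's arithmetic can be checked directly (a minimal sketch; the 300 DPI and two-ink-state figures are the comment's assumptions, not properties of any real printer):

```python
# A 1 in x 1 in cell at 300 DPI contains 300 * 300 dots.
dots = 300 * 300

# With two states (black/white) per dot, each distinct dot pattern
# is a candidate "symbol", giving 2**(300**2) patterns in total.
patterns = 2 ** dots

print(dots)                   # 90000 dots
print(patterns.bit_length())  # 90001 bits just to write the count down
```

The number is astronomically large but still finite, which is the point: a bounded area at a fixed resolution can only realize finitely many distinct marks.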
That leads to a simple information-theory-based argument: there is a limited spatial resolution and a noise floor, so the amount of information that can be encoded in a symbol is finite.
Off topic - some of us can, iOS underlines one of these for me, treating it as a phone number…
As an aside, this is an interesting companion read:
"A pedagogical history of compactness" (https://arxiv.org/abs/1006.4131)
As a working mathematician, I can say that this kind of argument has become totally routine and, were I reading Turing's paper carefully, after seeing "epsilon" and "compact" I would think "makes sense" and move on. But, historically speaking, it's interesting to realize how recent the development of the abstract idea of compactness was when Turing was writing: the time between Fréchet's and Hausdorff's work on compactness (see the pedagogical history) and Turing's paper is about the same as the time between Google being founded and today.
On behalf of the math-declined folks, I wish math articles had subtitles, references, and definitions built-in.
I wonder if a browser extension for Greek letter definitions would be possible?
While this does show these are good tools, capable of confirming and proving such intuitions and showing exactly how these specific systems achieve that, is there ever any concern that we are making too many redundant concepts? That may lead to confusion and obfuscate the search for novel ideas, which I think won't always happen through said tools. It's not as if these kinds of intuitions were on shaky ground or could be disproven.
I wonder if anyone shares these opinions?
For example, in 1937 the third glyph is the digit three, but in ВАЗ the third glyph is the Cyrillic letter З. They are not visually different in writing.
It would not be hard to devise a symbolic system where new symbols are context dependent, and there are infinitely many of these depending on the context. The context will be governed by a finite number of rules. The glyphs will not change meaning in a specific text.
Human language is an example of such system. If you treat words as a whole as symbols, you can have infinitely many different words as they become defined via the specific text, but obviously you can't have infinitely many different words in a single text because they will stop being mutually intelligible at some point.
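The comment's idea can be sketched concretely (a hedged toy model of my own, not from the thread: a finite glyph set plus a finite context rule yields unboundedly many distinct "symbols", while any single text uses only finitely many):

```python
# Finite alphabet of glyphs.
GLYPHS = "01"

def meaning(word: str, context: int) -> int:
    """Interpret a word of glyphs under a context governed by one
    finite rule: read the word as binary, then offset by the context."""
    assert all(g in GLYPHS for g in word)
    return int(word, 2) + context

# The same written word denotes different values in different contexts,
# so the space of possible word-meanings is infinite...
assert meaning("10", 0) == 2
assert meaning("10", 100) == 102

# ...but within one fixed text (one context), each word has one meaning.
text_context = 7
assert meaning("10", text_context) == meaning("10", text_context)
```

The glyph inventory and the rule are both finite; only the family of contexts is open-ended, which mirrors how words acquire new senses from new texts.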
Is boundedness sufficient?
I searched Turing's paper and the word "alphabet" does not appear. I guess the author interprets Turing's use of "symbol" as an "alphabet". Can anyone familiar with Turing's paper clarify?
> Prior to his seminal work Turing published a lesser known 1935 paper: "On Caffeine with an Application to the Anfangsverzögerungsproblem"
I think it might be a little too niche.