October 22nd, 2024

One Plus One Equals Two (2006)

The discussion highlights the historical and mathematical significance of "Principia Mathematica," noting its complex proof of 1+1=2 and how modern mathematics has streamlined such proofs through advancements in logic and notation.

The discussion revolves around the historical and mathematical significance of Whitehead and Russell's "Principia Mathematica," particularly its proof that 1+1=2. The work, published around 1910, is noted for its extensive length and complexity, taking a thousand pages to establish foundational mathematical concepts, including the simple arithmetic of addition. The author highlights how the notation and techniques used in "Principia Mathematica" differ from modern mathematical practices, emphasizing the evolution of mathematical logic and notation since its publication. The text critiques the redundancy in the work, where similar concepts are repeated due to the authors' lack of advanced techniques available today. The author also explains how the understanding of relations and sets has progressed, allowing for a more streamlined approach to mathematical proofs. The proof of 1+1=2 is presented as a culmination of earlier theorems, demonstrating the logical connections between sets and their properties. The author reflects on how contemporary mathematics would simplify the proof significantly, thanks to advancements in the understanding of ordered pairs and set theory. Overall, the piece serves as both a historical commentary and a critique of the mathematical methods employed in "Principia Mathematica."

- "Principia Mathematica" took a thousand pages to prove foundational concepts, including 1+1=2.

- The work is noted for its complex notation and redundancy due to the authors' early understanding of mathematical logic.

- Modern mathematics has streamlined the proof process by unifying concepts of sets and relations.

- The evolution of mathematical notation has made proofs simpler and more efficient compared to those in 1910.

- The discussion highlights the historical significance of "Principia Mathematica" in the development of mathematical logic.

AI: What people are saying
The comments reflect a diverse range of perspectives on the mathematical concepts discussed in the article, particularly regarding the significance of foundational principles in mathematics.
  • Several commenters discuss the complexity and historical context of mathematical proofs, particularly referencing "Principia Mathematica" and Gödel's work.
  • There is a debate about the nature of mathematical concepts, such as whether relations should be considered sets and the implications of different foundational approaches.
  • Some comments draw analogies between mathematics and computer science, emphasizing the importance of abstraction and efficient representation in both fields.
  • Philosophical reflections on the nature of mathematical truths and their relation to reality are also present, with discussions on determinism versus probability.
  • Humor and light-hearted commentary are interspersed, with some users making playful remarks about mathematical statements like "1+1=2."
18 comments
By @pvg - 4 months
The mentioned size and density of Whitehead & Russell's Principia make the few dozen pages of Gödel's On Formally Undecidable Propositions of Principia Mathematica and Related Systems one of the greatest "i ain't reading all that/i'm happy for u tho/or sorry that happened" mathematical shitposts of all time.
By @Tainnor - 4 months
> theorems like ∗22.92: α⊂β→α∪(β−α)

Either I misunderstand the notation or there seems to be something missing there - the right hand side of that implication arrow is not a formula.

I would assume that what is meant is α⊂β→α∪(β−α)=β

By @youoy - 4 months
Thanks for sharing! I like to look at this example in the context of the debate over whether mathematics is invented or discovered.

> That is how Whitehead and Russell did it in 1910. How would we do it today? A relation between S and T is defined as a subset of S × T and is therefore a set.

> A huge amount of other machinery goes away in 2006, because of the unification of relations and sets.

Relations are a very intuitive thing that I think most people would agree are not the invention of one person. But the language for describing and manipulating them mathematically is an invention, and it can have a dramatic effect on the way they are communicated.
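
For concreteness, a minimal sketch of the definition quoted above, in Python with made-up example sets: a relation between S and T is literally a set of ordered pairs, so all the usual set machinery applies to it unchanged.

    # A relation between S and T is a subset of S x T, i.e. a set of pairs.
    S = {1, 2, 3}
    T = {"a", "b"}

    product = {(s, t) for s in S for t in T}   # the full Cartesian product S x T
    R = {(1, "a"), (2, "b")}                   # any subset of it is a relation

    assert R <= product                        # R is a relation between S and T
    # Because R is just a set, set operations work on it directly:
    domain = {s for (s, t) in R}               # {1, 2}
    inverse = {(t, s) for (s, t) in R}         # the converse relation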

By @awanderingmind - 4 months
That was a lovely read, thank you. I particularly enjoyed the analogy between 'a poorly-written computer program' (i.e. one with a lot of duplication due to inadequate abstraction) and the importance of using the appropriate mathematical machinery to reduce the complexity/length of a proof. It brings the Curry–Howard isomorphism to mind: https://en.wikipedia.org/wiki/Curry%E2%80%93Howard_correspon...
By @Animats - 4 months
It's easier if you start from something closer to Peano arithmetic or Boyer-Moore theory. I used to do a lot with constructive Boyer-Moore theory and their theorem prover. It starts with

    (ZERO)
and numbers are

    (ADD1 (ZERO))
    (ADD1 (ADD1 (ZERO)))
etc. The prover really worked that way internally, as I found out when I input a theorem with numbers such as 65536 in it. I was working on proving some things about 16-bit machine arithmetic, and those big numbers pushed SRI International's DECSystem 2060 into thrashing.

Here's the prover building up basic number theory, one theorem at a time.[1] This took about 45 minutes in 1981 and takes under a second now.
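
For readers who want the flavor of that representation, here is a rough Python analogue (hypothetical names mirroring ZERO/ADD1, not the prover's actual internals). Addition recurses on the first argument, and 1+1=2 falls out by computation:

    ZERO = ()                      # zero is the empty term
    def ADD1(n): return (n,)       # successor wraps the term one level deeper

    def plus(m, n):
        # 0 + n = n;  (m+1) + n = (m + n) + 1
        return n if m == ZERO else ADD1(plus(m[0], n))

    one = ADD1(ZERO)
    two = ADD1(ADD1(ZERO))
    assert plus(one, one) == two   # 1+1=2, by unfolding the definitions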

Constructive set theory without the usual set axioms is messy, though. The problem is equality. Informally, two sets are equal if they contain the same elements. But in a strict constructive representation, the representations have to be equal, and representations have order. So sets have to be stored sorted, which means much fiddly detail around maintaining a valid representation.
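
A sketch of that sorted-representation idea, in Python for concreteness (an illustration of the approach, not any particular prover's code): if sets are kept as sorted, duplicate-free sequences, extensional equality collapses into plain equality of representations, at the price of the fiddly insertion logic.

    import bisect

    def set_insert(rep, x):
        # Keep the representation sorted and duplicate-free, so two sets
        # with the same elements have literally identical representations.
        i = bisect.bisect_left(rep, x)
        if i < len(rep) and rep[i] == x:
            return rep                      # already present
        return rep[:i] + [x] + rep[i:]

    a = set_insert(set_insert(set_insert([], 2), 1), 2)   # insert 2, 1, 2
    b = set_insert(set_insert([], 1), 2)                  # insert 1, 2
    assert a == b == [1, 2]   # same elements, same representation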

What we needed, but didn't have back then, was a concept of "objects". That is, two objects can be considered equal if they cannot be distinguished via their exported functions. I was groping around in that area back then, and had an ill-conceived idea of "forgetting", where, after you created an object and proved theorems about it, you "forgot" its private functions. Boyer and Moore didn't like that idea, and I didn't pursue it further.

Fun times.

[1] https://github.com/John-Nagle/pasv/blob/master/src/work/temp...

By @jk4930 - 4 months
By @cubefox - 4 months
Oh, so the λ in lambda calculus was just a poor man's circumflex.

Unrelated, but why doesn't Hacker News have support for LaTeX? And Markdown, for that matter?

By @ngcc_hk - 4 months
Finally I get why they needed a thousand pages to prove 1+1=2!

The issue is that there is no guarantee 1+1 will be two. Look carefully and you can see the first 1 is exactly the same as the second 1!!!!

Hence, take the set of all Russells who do that kind of maths and add it to another Russell who also does that maths. You still end up with one Russell.

That is why they go to all the trouble of saying there is no intersection, that the first oneness set does not overlap with the second oneness set, etc. etc.

QED

By @adrian_b - 4 months
The main point of the parent article is not 1+1=2 itself, but the importance of the concept of the ordered pair in mathematics, and how the introduction and use of this concept simplified proofs that were much too complicated before it.

While the article is nice, I believe that the tradition entrenched in mathematics of taking sets as a primitive concept and then defining ordered pairs using sets is wrong. In my opinion, the right presentation of mathematics must start with ordered pairs as the primitive concept and then derive sequences, sets and multisets from ordered pairs.

The reason I believe this is that there are many equivalent ways of organizing mathematics, which differ in which concepts are taken as primitive and which propositions are taken as axioms; the other concepts are then defined from the primitives, and the other propositions are demonstrated as theorems. But most of these possible organizations cannot correspond to an implementation in a physical device, like a computer.

The reason is that among the various concepts that can be chosen as primitive in a mathematical theory, some are in fact simpler and some are more complex. In a physical realization, the simple ones have a direct hardware counterpart and the complex ones can easily be built from the simple ones, while the complex ones cannot be implemented directly, only as structures built from simpler components. So the hardware of a physical device imposes much more severe constraints on the choice of primitives than a mathematical theory that only describes the abstract properties of operations like set union, without worrying how such an operation can actually be executed in real life.

The ordered pair has a direct hardware implementation: it corresponds to the CONS cell of LISP. In a mathematical theory where the ordered pair is taken as primitive and sets are among the things defined using ordered pairs, many proofs correspond to how various LISP functions would be implemented. Unlike ordered pairs, sets have no direct hardware implementation. In any physical device, including the human mind, sets are implemented as equivalence classes of sequences, while sequences are implemented from ordered pairs.
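
A minimal sketch of that ordering of primitives, in Python (with a tuple standing in for the CONS cell; the helper names are made up for illustration): sequences are nested pairs, as in LISP, and a finite set is represented by the canonical sorted, duplicate-free sequence of its elements, i.e. one representative of the equivalence class.

    def cons(a, d): return (a, d)   # the only primitive: the ordered pair
    NIL = None

    def from_list(xs):              # a sequence is nested pairs, as in LISP
        seq = NIL
        for x in reversed(xs):
            seq = cons(x, seq)
        return seq

    def to_list(seq):
        out = []
        while seq is not NIL:
            out.append(seq[0])
            seq = seq[1]
        return out

    def make_set(seq):
        # A set is the canonical representative of all sequences with the
        # same elements: sorted, with duplicates removed.
        xs = sorted(to_list(seq))
        xs = [x for i, x in enumerate(xs) if i == 0 or x != xs[i - 1]]
        return from_list(xs)

    assert make_set(from_list([2, 1, 2])) == make_set(from_list([1, 2]))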

Non-enumerable sets are not defined as equivalence classes of sequences, and they cannot be implemented as such in a physical device; at most they can be handled as something of the kind "I recognize it when I see it", e.g. by a membership predicate.

However, infinite sets need extra axioms in any kind of theory, and a theory of finite sets defined constructively from ordered pairs can be extended to infinite sets with appropriate additional axioms.

By @earthboundkid - 4 months
Wait, am I crazy for thinking relations are not sets? Two sets can be coextensive without the relations having the same intension, no? Like the set of all Kings of Mars and the set of all Queens of Jupiter are coextensive, but the relations are different because they have different truth conditions. Or am I misunderstanding?
By @singleshot_ - 4 months
> The ⊢ symbol has not changed; it means that the formula to which it applies is asserted to be true. ⊃ is logical implication, and ≡ is logical equivalence.

A strange thing happened to me in mathematics. When I got to the point where these symbols started showing up (ninth grade, more or less) I did not get a thorough explanation of the symbols; they just appeared and I tried to intuit what they meant. As more symbols crept into my math, I tried to ignore them where possible. Eventually this meant that I could not continue learning math, as it became mostly all such symbols.

I got as far as a minor in math. I'm not sure how any of this happened, but I wish I'd had a table of these symbols in ninth grade.
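
For anyone in the same spot, a small legend for the symbols quoted above, with their modern counterparts:

    ⊢ P        P is asserted to be true
    P ⊃ Q      P implies Q              (modern notation: P → Q)
    P ≡ Q      P is equivalent to Q     (modern notation: P ↔ Q)
    ∗22.92     Principia's theorem numbering: proposition 92 within number ∗22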

By @ngcc_hk - 4 months
Btw, there is a follow-up. Not much, but still an update: https://blog.plover.com/math/PM-translation.html
By @redbell - 4 months
I often use the analogy "1+1=?" in debates with both friends and strangers, especially when discussing subjective topics like politics, religion, and geopolitical conflicts. It's a simple way to highlight how different perspectives can lead to vastly different conclusions.

For instance, I frequently use the example "1+1=10" in binary to illustrate that, while our reasoning may seem fundamentally different, it's simply because we're starting from different premises, using distinct methods, and approaching the same problem from unique angles.
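
(The binary claim checks out mechanically; a tiny sketch:)

    # "10" read in base 2 denotes the number two, so 1+1 = 10 in binary.
    assert 1 + 1 == int("10", 2) == 0b10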

By @bazoom42 - 4 months
So has anyone made a modern revision of Principia using the simplifications made possible by more recent developments?
By @yohannparis - 4 months
Thank you, it's an interesting read; on my own, without the explanation, this would have been over my head.
By @anthk - 4 months
The Computational Beauty of Nature shows that with Lisp.
By @dvh - 4 months
1+1=3 (for very large values of 1)
By @wildermuthn - 4 months
“1 + 1 = 2” is only true in our imagination, according to logical deterministic rules we’ve created. But reality is, at its most fundamental level, probabilistic rather than deterministic.

Luckily, our imaginary reality of precision is close enough to the true reality of probability that it enables us to build things like computer chips (i.e., all of modern civilization). And yet, the nature of physics requires error correction for those chips. This problem becomes more obvious when working at the quantum scale, where quantum error correction remains basically unsolved.

I’m just reframing the problem of finding a grand unified theory of physics that encompasses a seemingly deterministic macro with a seemingly probabilistic micro. I say seemingly, because it seems that macro-mysteries like dark matter will have a more elegant and predictive solution once we understand how micro-probabilities create macro-effects. I suspect that the answer will be that one plus one is usually equal to two, but that under odd circumstances it is not. That’s the kind of math that will unlock new frontiers for hacking the nature of our reality.