What Is Theoretical Computer Science?
Moshe Y. Vardi argues that theoretical computer science (TCS) is a distinct field essential for understanding computing technology, not merely a subset of mathematics, and emphasizes its practical applications and historical evolution.
The article discusses the nature and scope of theoretical computer science (TCS), challenging the common perception that it is merely a branch of mathematics. Moshe Y. Vardi argues that TCS should be viewed as a distinct field focused on the formal foundations of computer science and technology, rather than being confined to mathematical abstraction. He highlights the historical evolution of TCS, noting that its scope has narrowed in the U.S. since the 1980s due to the rise of specialized conferences. Vardi emphasizes the importance of TCS in explaining and predicting real-world computing phenomena, drawing parallels with theoretical physics. He critiques the current focus on computational complexity theory, particularly NP-completeness, for not adequately addressing practical applications, such as the effectiveness of SAT solvers. Vardi concludes that TCS should not be seen as a subset of mathematics but as a vital discipline in its own right, with a broader mission to understand and advance computing technology.
- Theoretical computer science (TCS) is distinct from mathematics and should focus on the foundations of computer science and technology.
- The scope of TCS has narrowed in the U.S. since the 1980s, influenced by the rise of specialized conferences.
- TCS should aim to explain and predict real-world computing phenomena, similar to theoretical physics.
- Current emphasis on computational complexity theory may overlook practical applications and effectiveness in real-world scenarios.
- TCS is essential for advancing understanding and innovation in computing technology.
Related
We must seek a widely-applicable Science of Systems
The text discusses the importance of a Science of Systems, focusing on Complex Systems. Emphasizing computer science's role, it explores potential applications in various fields and advocates for scientific progress through unified theories.
The Zombie Misconception of Theoretical Computer Science
The blog post delves into misconceptions in theoretical computer science, focusing on computability and complexity theory. It clarifies the distinction between functions and questions, NP-hard problems, and the P versus NP dilemma. Emphasizing the importance of grasping fundamental principles, the author seeks reader input on combating these misunderstandings.
Primitive Recursive Functions for a Working Programmer
The article discusses Turing completeness in programming languages, arguing that non-Turing complete languages can effectively solve practical problems. It explores theoretical implications, Turing machines, and finite state automata, emphasizing computational equivalence.
Why Does Everyone Hate Haskell, Jazz, and Pure Math?
The article argues that Haskell, jazz, and pure mathematics, often seen as elitist, are crucial for innovation, leading to practical advancements in programming and music, benefiting society overall.
Machine learning and information theory concepts towards an AI Mathematician
The paper by Yoshua Bengio and Nikolay Malkin explores AI's limitations in mathematical reasoning, proposing an information-theoretical approach to generate new conjectures, set for publication in 2024.
>Theoretical physics is highly mathematical, but it aims to explain and predict the real world. Theories that fail at this “explain/predict” task would ultimately be discarded. Analogously, I’d argue that the role of TCS is to explain/predict real-life computing.
I appreciated this, as well as the emphasis on the difference between TCS in Europe and the US. I remember from the University of Crete that the professors all spent serious time in the labs coding and testing. Topics like Human-Computer Interaction, Operating Systems Research, and lots of hardware (VLSI etc.) were core parts of the theoretical Computer Science research areas. This is why no UoC student could graduate without knowledge of both Algorithms and PL theory, for instance, AND circuit design (my experience is from 2002-2007).
I strongly believe that this breadth of concepts is essential to Computer Science, and the narrower emphasis of many US departments (not all) harms both the intellectual foundations and practical employment prospects of the graduate. [I will not debate this point online; I'll be happy to engage in hours long discussion in person]
Maybe the issue is how he thinks about mathematics... He quotes von Neumann saying:
> We should remember the warning of John von Neumann, one of the greatest mathematicians and computer scientists of the 20th century, regarding the danger of mathematics driven solely by internal esthetics: “There is a grave danger that the subject will develop along the line of least resistance.”
But for example, there is an article in Quanta [0] about a recent proof in Number Theory (you cannot get more "mathematical" than that), and the guy who proved it said:
> “Mathematics is not just about proving theorems — it’s about a way to interact with reality, maybe.”
Which is in line with von Neumann's quote, and with the spirit of what the author is saying.
So maybe a more accurate subtitle would be:
"Thinking of theoretical computer science as a mathematical formal system is harmful to the discipline."
[0] https://www.quantamagazine.org/big-advance-on-simple-soundin...
Regarding the article, I feel like some context is missing; is this part of some ongoing debate about TCS? The piece ends abruptly.
One example that comes to mind is the recent breakthroughs in AI, which have caught theorists flat-footed. Sanjeev Arora and others openly admit we don't have a good theoretical understanding of the empirical successes of deep learning, or of how these systems do what they do, with emergent phenomena, scaling effects, etc.
Scott Aaronson has also suggested that theoretical CS should be seen as analogous to theoretical physics.
On the other hand, I don't know if the argument could be abused to dismiss things like P vs NP as having no realistic scientific value. That would be the flip side of Vardi's argument. I do vaguely recall Aaronson, in one of his blog posts or pdf essays*, giving a more subtle argument about why P vs NP is central to computer science anyway, and I think that would be a slightly different position than either Vardi's or his reading of Wigderson's here.
*See the first several subsections about common Objections in pp 6-7 in: https://www.scottaaronson.com/papers/pnp.pdf
The argument seems to be that 1. a theoretical computer scientist is not a general mathematician and wouldn't be hired into a tenured math position ("sociological argument") and 2. treating it as a branch of math is "harmful" to theoretical computer science.
There are hints about what OP believes TCS is, if not math, but I wonder why he doesn't make that an explicit argument; that would make it much easier to reason about and argue either for or against.
Without that, neither 1 nor 2 makes a convincing case for anything: 1. a Linear Algebra expert might not get a position in a Statistics department either, and 2. it'd be useful to show some "harm" before claiming anything is "harmful". CS is also lucrative enough that it pays to have a dedicated faculty just for CS (as in, it attracts students, contracts with external parties for a university, etc.).
What does von Neumann mean here? Why is it bad that the subject will develop along the line of least resistance? Does von Neumann advise that working on "harder" problems is more beneficial for TCS? Couldn't one argue that we should solve away the low-hanging fruit first?
I am not sure I am understanding von Neumann's quote or this article properly. I would love to hear a simpler explanation (I am a new BSc. CS graduate).
The definition of "theoretical physics" may have changed rapidly between 1900, 1920, 1940, and 1950, but the people who called themselves theoreticians remained mostly the same. Analyzing how everyone's definitions were changing gives a wealth of information about when and where breakthroughs were happening. 1919 and 1945 come to mind as examples of when a theoretical field changed as a result of experiments [1][2].
Back to computing: Dijkstra told the story of attempting to put "Programmer" as his profession on his marriage certificate in 1957, and being rejected [3]. Clearly there are both pros and cons to using the sociological definition of a field. We all know programming existed before 1957, but the perception of it as a profession was so foreign that it wasn't allowed on an official document. Apparently, where he lived, it would have been impossible to ponder "What is programming?" if no one could BE a programmer. For that reason, we should probably stay flexible and always be willing to discuss different definitions for every field, so that we gain the benefits of multiple lines of reasoning.
[1] https://en.wikipedia.org/wiki/Eddington_experiment
[2] https://en.wikipedia.org/wiki/Trinity_(nuclear_test)
[3] https://amturing.acm.org/award_winners/dijkstra_1053701.cfm
Dr. Vardi's Second Law of Thermodynamics for boolean SAT and SMT (Satisfiability Modulo Theory) solvers is truly a marvel of interdisciplinary ambition. In his framework, computational entropy is said to increase with each transition of the Turing machine, as if bits themselves somehow carry thermodynamic weight. He posits that any algorithm—no matter how deterministic—gradually loses "information purity" as it executes, much like how heat dissipates in a closed system. His real stroke of genius lies in the idea that halting problems are not just undecidable, but thermodynamically unstable. According to Dr. Vardi, attempts to force a Turing machine into solving such problems inevitably lead to an "entropy singularity," where the machine's configuration becomes so probabilistically diffuse that it approaches the heat death of computation. This, he claims, is why brute-force methods become inefficient: they aren’t just computationally expensive, they are thermodynamically costly as well. Of course, there are skeptics who suggest that his theory might just be an elaborate metaphor stretched to breaking point—after all, it’s unclear if bits decay in quite the same way as particles in a particle accelerator.
My gripe with theoretical computer science was that it felt like a Newtonian-physics-level model of digital processes, while an equivalent of biology-level models would be needed for, and suited to, most "real-life computing".
Incidentally, real-life computing depends quite a lot on the laws of physics of the universe we live in. This has been known quite clearly since the theoretical discovery of quantum computing, with the resulting split of complexity theory into classical and quantum complexity theory.
Moreover, physics also determines what sort of devices you can make and with what performance. The real-life speed of solving NP-complete problems depends on the absolute and relative performance of transistors, CPUs, memories, etc.
Hence, I submit that TCS has the same relationship to Physics as does Mechanical Engineering to Physics.
I don't think that's a fair characterisation. The SAT subset that SAT solvers solve efficiently is in P, so in that sense complexity theory "explains" and even "predicts" it: there clearly are subsets of SAT in P, and some simple ones, such as 2SAT, can be easily described (see the sketch below).
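To make that concrete, here is a minimal sketch of the classic Aspvall-Plass-Tarjan polynomial-time 2SAT algorithm: build the implication graph, compute strongly connected components, and reject iff some variable shares a component with its negation. The encoding and function names are illustrative, not taken from any particular solver:

```python
# Minimal 2SAT sketch: implication graph + Kosaraju SCC.
# Literal k > 0 means x_k; k < 0 means NOT x_k.

def solve_2sat(num_vars, clauses):
    """Return a satisfying assignment {var: bool} or None if unsatisfiable."""
    def node(lit):
        # Node 2*i encodes x_(i+1); node 2*i + 1 encodes its negation.
        i = abs(lit) - 1
        return 2 * i if lit > 0 else 2 * i + 1

    n = 2 * num_vars
    graph = [[] for _ in range(n)]   # implication edges
    rgraph = [[] for _ in range(n)]  # reversed edges for Kosaraju's 2nd pass
    for a, b in clauses:
        # (a OR b) is equivalent to (NOT a -> b) AND (NOT b -> a).
        graph[node(-a)].append(node(b))
        graph[node(-b)].append(node(a))
        rgraph[node(b)].append(node(-a))
        rgraph[node(a)].append(node(-b))

    # Pass 1: record vertices in order of DFS finish time (iteratively).
    visited, order = [False] * n, []
    for start in range(n):
        if visited[start]:
            continue
        visited[start] = True
        stack = [(start, iter(graph[start]))]
        while stack:
            v, it = stack[-1]
            w = next(it, None)
            if w is None:
                order.append(v)
                stack.pop()
            elif not visited[w]:
                visited[w] = True
                stack.append((w, iter(graph[w])))

    # Pass 2: label SCCs on the reversed graph in reverse finish order;
    # labels then follow the topological order of the condensation.
    comp, label = [-1] * n, 0
    for v in reversed(order):
        if comp[v] != -1:
            continue
        comp[v], stack = label, [v]
        while stack:
            u = stack.pop()
            for w in rgraph[u]:
                if comp[w] == -1:
                    comp[w] = label
                    stack.append(w)
        label += 1

    assignment = {}
    for i in range(num_vars):
        if comp[2 * i] == comp[2 * i + 1]:
            return None  # x and NOT x are equivalent: unsatisfiable
        # Set x true iff its SCC comes later in topological order.
        assignment[i + 1] = comp[2 * i] > comp[2 * i + 1]
    return assignment

# (x1 OR x2) AND (NOT x1 OR x2) AND (NOT x2 OR x3)
print(solve_2sat(3, [(1, 2), (-1, 2), (-2, 3)]))  # {1: True, 2: True, 3: True}
```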
What complexity theory is yet to do is:
1. Either prove that the subset of SAT that can be solved efficiently covers all of SAT (in which case either P=NP, or NP can be solved with such a low exponent that it is tractable in practice), or identify and succinctly describe the subset of SAT that modern solvers solve efficiently; it is in that sense that it doesn't yet "predict" it. But there are many open questions (including P vs NP) in complexity theory; that doesn't mean the discipline is flawed. Many problems studied in complexity theory originate from practical questions, such as cryptography, and complexity theory does successfully and usefully explain and predict such real-world "phenomena".
2. Explain why that subset of SAT is prevalent in practice (making solvers "effective"). I'm not sure this is the job of complexity theory, or of any theoretical discipline for that matter (as opposed to empirical ones); after all, theoretical computer science is meant to be theoretical. But there are mathematical disciplines (e.g. random graph theory) that do describe, and even explain and predict, mathematical structures that are more likely to arise in "nature".
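On point 2, one concrete handle the theory does offer is the random-SAT literature: random 3SAT instances are empirically almost always satisfiable well below a clause-to-variable ratio of roughly 4.27, almost never above it, and hardest to solve near it. A rough sketch of such an experiment, using a deliberately naive DPLL solver (all names and parameters here are illustrative):

```python
import random

def random_3sat(n_vars, n_clauses, rng):
    """Each clause: 3 distinct variables, each negated with probability 1/2."""
    return [[v if rng.random() < 0.5 else -v
             for v in rng.sample(range(1, n_vars + 1), 3)]
            for _ in range(n_clauses)]

def dpll(clauses, assignment=None):
    """Naive DPLL with unit propagation; returns True iff satisfiable."""
    if assignment is None:
        assignment = {}
    simplified = []
    for clause in clauses:
        if any(assignment.get(abs(l)) == (l > 0) for l in clause):
            continue                       # clause already satisfied
        rest = [l for l in clause if abs(l) not in assignment]
        if not rest:
            return False                   # clause falsified
        simplified.append(rest)
    if not simplified:
        return True
    for clause in simplified:              # unit propagation
        if len(clause) == 1:
            l = clause[0]
            return dpll(simplified, {**assignment, abs(l): l > 0})
    v = abs(simplified[0][0])              # branch on first unassigned variable
    return (dpll(simplified, {**assignment, v: True})
            or dpll(simplified, {**assignment, v: False}))

rng = random.Random(0)
n = 50
for ratio in (2.0, 3.0, 4.27, 6.0):
    sat = sum(dpll(random_3sat(n, int(ratio * n), rng)) for _ in range(20))
    print(f"clause/variable ratio {ratio}: {sat}/20 satisfiable")
```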
> U.S. TCS has a quite narrower scope than European TCS, which I find unfortunate.
TCS is generally considered to have two primary subdisciplines: Theory of Computation (ToC, sometimes also referred to as "Theory A"), which is concerned with complexity and algorithms, and Theory of Programming (ToP, sometimes also referred to as "Theory B"), which is concerned with programming languages and type systems (and may be considered a sub-discipline of formal logic). Indeed, ToC is more prominent in the US while ToP is more prominent in Europe, but I don't think some of the other CS sub-disciplines Moshe mentions (such as databases) are commonly considered theoretical computer science.
I don't have a position on the utility of describing theoretical computer science as a branch of mathematics or not doing so, but the very name "theoretical computer science" acknowledges that there are other sub-disciplines in computer science that are empirical rather than theoretical (such as studying the mistakes programmers empirically make, or the common causes for cascading failures, or studying side-channel attacks etc. etc.).
I don't think that those who consider TCS to be a branch of mathematics also consider all of CS to be a branch of mathematics or claim that theoretical computer science should aim to answer all interesting questions in computing, just as theoretical physics acknowledges the necessity of experimental physics and doesn't claim to cover everything physicists do.
As in theoretical physics and mathematics, results in theoretical computer science do sometimes explain and predict things we observe in the real world and sometimes not yet. But in all these cases, the theoretical subdisciplines aren't wholly subservient to the empirical ones or vice-versa.
As an example, the question of computability (decidability) isn't directly practical, as many decidable problems aren't practically tractable (deciding Presburger arithmetic, for instance, provably requires doubly exponential time), but the more practical question of tractability/complexity directly evolved from the study of computability.
Still waiting for theoretical physics to discard string theory, as it fails to predict anything.
I love his “sociological sense”, and agree that at some level all these words are just words in language games that are defined with great diversity across the world’s academics, much less engineers and laypeople. But I also agree that there’s something worth defending in “computer science in its own right”, as I said above; that is, in turn, the core task of philosophy of science.
Of course I would expect him to push back on this description of the field because philosophy has lost all its street cred, but that would only encourage me to redouble my efforts. If it’s not empirical study of actual results, and it’s not mathematical study of intuitively-based theorems, what else is left?
With that said, I regard TCS as the study of discretized computation at scale. Or, more colloquially, what is the fastest way to do X, N times? This could be sorting, searching, indexing, drawing, tracking, balancing, storing, accessing, saving, reading, writing, whatever. The distinguishing characteristic is complexity analysis in time and space.
As a counterexample, I think cryptographic methods such as AES and RSA are much more pure math / number theory than TCS.
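One way to make that "fastest way to do X, N times" framing tangible is a toy scaling measurement, e.g. O(n) linear search against O(log n) binary search over sorted data. A quick, illustrative sketch (the sizes, repetition count, and helper names are arbitrary choices, not a benchmark methodology):

```python
import bisect
import random
import time

def linear_search(xs, target):
    """O(n): scan until the target is found."""
    for i, x in enumerate(xs):
        if x == target:
            return i
    return -1

def avg_seconds(f, *args, reps=10):
    """Average wall-clock time of f(*args) over reps runs."""
    t0 = time.perf_counter()
    for _ in range(reps):
        f(*args)
    return (time.perf_counter() - t0) / reps

for n in (1_000, 10_000, 100_000):
    xs = sorted(random.randrange(10 * n) for _ in range(n))
    target = xs[-1]  # worst case for the linear scan
    t_lin = avg_seconds(linear_search, xs, target)
    t_bin = avg_seconds(bisect.bisect_left, xs, target)  # O(log n)
    print(f"n={n:>7}: linear {t_lin:.2e}s, binary {t_bin:.2e}s")
```

The linear column grows roughly tenfold per row while the binary column barely moves, which is exactly the kind of prediction complexity analysis makes independent of hardware.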
> The centrality of computing stems from the fact that it is a technology that has been changing the world for the past 80 years, ever since the British used early computing to change the tide of war in World War II.
I take issue with the idea hinted at here. Although Algebra and other branches of mathematics were invented to deal with daily, down-to-earth issues like accounting or farming, you'd be hard-pressed to find a consensus that mathematics "aims to explain the real world".
The historical origin is very clear, but the train left the "real world" station pretty fast, "unreasonable effectiveness" notwithstanding. Am I to understand that because Enigma was broken using a physical machine, the field is bound to study physical reality? To me this feels as uncomfortable as referring to astronomy as "telescope studies".
> I believe that thinking of TCS as a branch of mathematics is harmful to the discipline. [...] Theories that fail at this “explain/predict” task would ultimately be discarded. Analogously, I’d argue that the role of TCS is to explain/predict real-life computing.
Yeah, if you hired me to design harmful approaches, I couldn't have come up with something this harmful in a year.