>Theoretical physics is highly mathematical, but it aims to explain and predict the real world. Theories that fail at this “explain/predict” task would ultimately be discarded. Analogously, I’d argue that the role of TCS is to explain/predict real-life computing.
I also appreciated the emphasis on the difference between TCS in Europe and the US. I remember from the University of Crete that the professors all spent serious time in the labs coding and testing. Topics like Human-Computer Interaction, Operating Systems research, and lots of hardware (VLSI etc.) were core parts of the theoretical Computer Science research areas. This is why no UoC student could graduate without knowledge of both Algorithms and PL theory, for instance, AND circuit design (my experience is from 2002-2007).
I strongly believe that this breadth of concepts is essential to Computer Science, and the narrower emphasis of many US departments (not all) harms both the intellectual foundations and practical employment prospects of the graduate. [I will not debate this point online; I'll be happy to engage in hours long discussion in person]
Perhaps you're talking about the split between Electrical Engineering and Computer Science? That one isn't universal as some departments only offer EECS and not CS as a major, but when CS on its own is offered, "hardware" courses tend to be about microarchitecture, with practical work done using simulators. You're not required to know much of anything about electronics. But there is no program I'm aware of where a person can do nothing but math and get a CS degree. You have to write and test code.
Now, theoretical physics has been a bit of a troubled child in recent years, however.
If we map aspects of computer science onto the four physics disciplines, we get:
Software / hardware engineering
Applied computer science
Theoretical computer science
Mathematics dealing with problems inspired by computer science
This isn't really true, is it? There are mathematical models that predict but do not explain the real world. The most glaring of them is the transmission of EM waves without a medium, and the particle/wave duality of matter. In the former case, there was a concerted attempt to prove existence of the medium (luminiferous aether) that failed and ended up being discarded - we accept now that no medium is required, but we don't know the physical process of how that works.
Methods for explaining why include counterfactual inference and, more recently, quantum causal inference.
All or some of quantum statistical mechanics, fluid dynamics, and quantum chaos also aim to predict with lower error.
List of common misconceptions > Science, technology, and mathematics: https://en.wikipedia.org/wiki/List_of_common_misconceptions :
> See also: Scientific misconceptions, Superseded theories in science, and List of topics characterized as pseudoscience
Allegory of the cave > See also: https://en.wikipedia.org/wiki/Allegory_of_the_cave
The only true statement:
All models are wrong: https://en.wikipedia.org/wiki/All_models_are_wrong
Map–territory relation: https://en.wikipedia.org/wiki/Map%E2%80%93territory_relation :
> A frequent coda to "all models are wrong" is that "all models are wrong (but some are useful)," which emphasizes the proper framing of recognizing map–territory differences—that is, how and why they are important, what to do about them, and how to live with them properly. The point is not that all maps are useless; rather, the point is simply to maintain critical thinking about the discrepancies: whether or not they are either negligible or significant in each context, how to reduce them (thus iterating a map, or any other model, to become a better version of itself), and so on.
No, this guy doesn't get it. He doesn't understand what science is.
In science, nothing can be proven. If I take "all swans are white" as my hypothesis, the statement can never be proven, because I can never actually verify that I have observed all swans. There may be some swan hidden on Earth, or elsewhere in the universe, that I haven't seen. Since the universe is so vast, I can never confirm that I've observed all swans.
However if I observe one black swan it means I falsified the entire hypothesis. Thus in science and in reality as we know it nothing can be proven… things can only be falsified.
Math on the other hand is different. Math is all about a made up universe where axioms are known absolutely. It has nothing to do with observation or evidence in the same way science does. Math is an imaginary game we play and in this game it is possible to prove things.
This proof is the domain of mathematics… not science. Physics is a science because it involves gathering evidence and attempting to falsify the hypothesis.
Einstein said it best: “No amount of experimentation can ever prove me right; a single experiment can prove me wrong”
Basically, Newton's laws of motion are a perfect example of falsification via experimentation, with relativity later being confirmed as the more accurate theory that better matches observation.
So what’s the deal with computer science?
First of all, the term already hits the first nomenclature issue. Computer science is, ironically, not a science. It lives in the same axiom-based world as mathematics, and therefore things can be proven in computer science but not in science itself.
So this nomenclature issue is what's confusing everyone. The OP failed to identify that computer science isn't actually a freaking science. Physics is a science, but computer science isn't.
So what is computer science? Sorry to say but it’s a math. I mean it’s all axioms and theorems. It’s technically math.
CS is a math in the same way algebra and geometry are math. Physics is a science, and it is not a math. It's a totally orthogonal comparison.
Your job as programmers is more like applied math. It's completely orthogonal to the whole topic, but people often get this mixed up. They start thinking that because programming is applied computer science, computer science itself is not a math.
Applied math ironically isn’t really math in the same way writing isn’t a pencil. Yes you use a pencil to write but they are not the same. Same thing with computer science and programming.
But I think there is also an etymological issue or something as well, which is causing additional confusion. Scientific theories aren’t proven of course, but they are proofed, in the same manner as when you pick up a breastplate from the blacksmith and note that it has a dent in it from where he shot it once. That’s the proof mark, the armor was tested by giving it a reasonably challenging test that is appropriate to the task it is intended for. Does this prove the armor is indestructible? No, of course not, as a physical device the armor has some finite limitations which could be overcome with enough gunpowder. But it is still reassuring.
Therefore I propose we clearly delineate between the two, by always spelling the logic thing with a ve and the “well, I tested it pretty hard and it didn’t break” thing with an f.
Newtonian physics was mostly mechanical-engineering proof. It turns out electrical engineering, the study of fields and waves, was a higher caliber. The standard model seems to be EE-proof. But physicists are always constructing more energetic experiments.
That fails as a scientific hypothesis on purely structural grounds, before you have made any observations. Scientific hypotheses have to explain something. It's not enough to say that all swans are white, you have to say why. "All swans are white" is an observation, not a (scientific) hypothesis.
An example of a legitimate scientific hypothesis is that all swans are white because being white provides swans with some benefit in terms of reproductive fitness (and then you have to go on to say what that benefit is). You can then go on to predict that there might be non-white swans, but that these are expected to be rare because evolutionary pressure would drive non-whiteness out of the gene pool. Or something like that. But "all swans are white" by itself is a non-starter as a scientific hypothesis.
Asking here because it is mostly on-topic: this phrase is repeated often, but shouldn't it actually be, "In science, a hypothesis is either fundamentally verifiable or fundamentally falsifiable, but never both"? The two are simply the logical negation of each other.
"All swans are white" is fundamentally falsifiable (by seeing a black swan) but not verifiable, as you described.
"Black swans exist" is fundamentally verifiable (by seeing a black swan), but not falsifiable.
> In science nothing can be proven.
Who are you referring to? Nobody has said anything about proving. Your entire comment argues against science being about proofs, but nobody has claimed that science is about proofs.
OP said that scientific theories should "explain and predict," which they should. Those that don't should eventually be discarded (which they should). Why are you discussing mathematical proofs of science when nobody brought that up or proposed it?
Engineering - Physics - Math
Computer Engineering - Computer Science
Notice that the second only has two steps? Engineering isn't really science, but there is a sort of science to it. Now, one will be quick to point out that computer engineering does depend upon physics (and chemistry, and even a wee bit of biology and higher). But I think Computer Science is large enough in scope that it contains true Computer Science, which is as much math as math is, but also the more real-world test-and-verify parts of computing. Things that might fit under physics or chemistry, but which ended up being placed under computer science.
If we are talking about a Turing machine, that is math (or something equivalent to math). If we are talking about how a compiler can optimize code by rewriting some things to be computationally equivalent, but putting operations in an order that most CPUs can run faster, that feels like we've stepped into science. We've overloaded the terminology and now have some linguistic debt that needs paying off.
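A toy sketch of the kind of rewrite meant above (the function names and numbers are made up for illustration): two computationally equivalent functions, where a compiler would prefer the second because it hoists loop-invariant work out of the loop.

```python
def scaled_sum_naive(xs, a, b):
    # Recomputes a * b on every iteration.
    total = 0
    for x in xs:
        total += x * (a * b)
    return total

def scaled_sum_hoisted(xs, a, b):
    # Loop-invariant code motion: a * b is computed once.
    # Same result, fewer operations per iteration.
    scale = a * b
    total = 0
    for x in xs:
        total += x * scale
    return total

# Equivalent on every input, which is exactly what licenses the rewrite:
assert scaled_sum_naive([1, 2, 3], 4, 5) == scaled_sum_hoisted([1, 2, 3], 4, 5) == 120
```

Proving the rewrite sound is math; measuring which ordering actual CPUs run faster is the empirical part.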
I sometimes joke that my undergrad degree in Computer Science involved only a single class in Computer Science.
In theoretical computer science, design and analysis of algorithms is the study of algorithms as mathematical objects. The unfortunately named algorithm engineering is the science of algorithms. It deals with algorithms that are written in code and run on real computers. Among other things, it seeks to build models that can predict the performance of an algorithm without implementing and running it.
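As a deliberately simplified sketch of that modeling idea (not any specific algorithm-engineering methodology): fit a cost model T(n) ≈ c·n·log₂n to measured running times of Python's built-in sort, then use the fitted model to predict the time at a size that was never run.

```python
import math
import time

def measure_sort(n, trials=3):
    """Best-of-trials wall-clock time to sort n reverse-ordered ints."""
    data = list(range(n, 0, -1))
    best = float("inf")
    for _ in range(trials):
        xs = data[:]
        t0 = time.perf_counter()
        sorted(xs)
        best = min(best, time.perf_counter() - t0)
    return best

sizes = [10_000, 20_000, 40_000]
times = [measure_sort(n) for n in sizes]

# Least-squares estimate of c for the model T(n) = c * n * log2(n).
c = sum(t * n * math.log2(n) for t, n in zip(times, sizes)) / \
    sum((n * math.log2(n)) ** 2 for n in sizes)

# Predict a size we never ran.
predicted = c * 80_000 * math.log2(80_000)
print(f"predicted time for n=80_000: {predicted:.6f}s")
```

Real algorithm engineering also has to model caches, branch prediction, and input distributions, which is where the simple asymptotic model starts to break down.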
Then there is algorithmic economics (or whatever it's called today), which was a huge success for theoretical computer science. At least in some amoral sense. The microeconomic models it created enabled turning the internet into the ad-centric data-driven dystopia it currently is.
For example, distributed systems and networking are more like a physical science, because they seek to derive general lessons and theorems about real-world systems.
The author’s last point around complexity theory also resonates because it demonstrates the value of designing experiments with real-world conditions like computing hardware speed and input sizes.
An amusing thing about a hypothesis like "all swans are white" is that if you do want to go around making observations to support it you don't actually need to observe any swans.
"All swans are white" is logically equivalent to "All non-white things are not swans". Thus you can gather observational evidence for the hypothesis by finding non-white things and checking if they are swans.
My monitor is not white, and I see it is not a swan. I've just made an observation in support of "all swans are white". I can make hundreds of such observations without even leaving my house.
I feel sorry for all those other swan color researchers who had to trudge around slimy rivers and mucky wetlands checking the colors of swans.
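The bookkeeping in the comment above can even be written down (a toy illustration; the objects and colors are made up): the hypothesis and its contrapositive are satisfied by exactly the same sets of observations.

```python
# Observations are hypothetical (kind, color) pairs.
observations = [
    ("swan", "white"),
    ("monitor", "black"),
    ("raven", "black"),
    ("cup", "red"),
]

def all_swans_white(obs):
    # "All swans are white."
    return all(color == "white" for kind, color in obs if kind == "swan")

def non_white_means_non_swan(obs):
    # The contrapositive: "All non-white things are not swans."
    return all(kind != "swan" for kind, color in obs if color != "white")

# Logically equivalent: they agree on this data set...
assert all_swans_white(observations) == non_white_means_non_swan(observations) == True

# ...and both are falsified by the same counterexample.
bad = observations + [("swan", "black")]
assert not all_swans_white(bad) and not non_white_means_non_swan(bad)
```

The puzzle (Hempel's raven paradox) is not that the logic is wrong, but that the black monitor feels like it should carry no evidential weight.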
I think I first saw this in a Martin Gardner book. It is amusing, but actually there has been quite a bit of serious work among logicians and philosophers over this [1] since the logic seems correct but intuitively it seems you shouldn't be able to research swan color by looking around at the furniture in your house.
Maybe the issue is how he thinks about mathematics... He quotes von Neumann saying:
> We should remember the warning of John von Neumann, one of the greatest mathematicians and computer scientists of the 20th century, regarding the danger of mathematics driven solely by internal esthetics: “There is a grave danger that the subject will develop along the line of least resistance.”
But for example, there is an article in Quanta [0] about a recent proof in Number Theory (you cannot get more "mathematical" than that), and the guy who proved it said:
> “Mathematics is not just about proving theorems — it’s about a way to interact with reality, maybe.”
Which is in line with von Neumann's quote, and with the spirit of what the author is saying.
So maybe a more accurate subtitle would be:
"Thinking of theoretical computer science as a mathematical formal system is harmful to the discipline."
[0] https://www.quantamagazine.org/big-advance-on-simple-soundin...
As Kant teaches us in A Critique of Pure Reason, mathematics is the cultivation and extension of intuitive intellectual tools. This puts it in natural opposition to philosophy (which deals with conceptual intellectual tools) and natural science (which deals with the empirical nature of the Actual world). None of them would be very useful without the others, but that does not mean that we should abandon the distinctions, imo.
That article is great, but I think the immediately preceding sentence is telling:
Earlier that year, his father had died, and Pasten found himself turning to math for comfort. “I found mathematics really helpful for that,” he said. “Mathematics is not just about…
To put it bluntly: I think that's just cope. Again, I absolutely agree that mathematics can be useful for natural science, but we need to look no further than the multitude of mathematically consistent models of the universe to see that it is not itself natural science.

I guess this points back to the eternal debate about whether mathematical objects are invented or discovered. If they are discovered, then there is an underlying truth/semantics behind the mathematical formalisms, and in that case mathematics does study and interact with reality. Take, for example, calculus, which Newton and Leibniz [0] invented/discovered independently.
I will let you decide on what side of the debate you feel more comfortable :)
[0] https://en.m.wikipedia.org/wiki/Leibniz%E2%80%93Newton_calcu...
Regarding the article, I feel like there's some context missing, is this part of some ongoing debate about TCS? The piece abruptly ends.
One that comes to mind is the recent breakthroughs in AI which have caught theorists flat-footed. Sanjeev Arora and others openly admit we don't have a good theoretical understanding of the empirical successes of deep learning, how they are able to do the things they do with emergent phenomena, scaling effects, etc.
Scott Aaronson has also suggested that theoretical CS should be seen as analogous to theoretical physics.
On the other hand, I don't know if the argument could be abused to dismiss things like P vs NP as having no realistic scientific value. That would be the flip side of Vardi's argument. I do vaguely recall Aaronson in one of his blog posts or pdf essays* giving a more subtle argument about why P vs NP is central to computer science anyways, and I think that would be a slightly different position than either Vardi's or his reading of Wigderson's here.
*See the first several subsections about common Objections in pp 6-7 in: https://www.scottaaronson.com/papers/pnp.pdf
The argument seems to be that 1. a theoretical computer scientist is not a general mathematician and wouldn't be hired to a tenured math position ("sociological argument") and 2. treating it as a branch of math is "harmful" to theoretical computer science.
There are hints about what OP believes TCS is which isn't math, but I wonder why not make that an explicit argument — that would make it much easier to reason about and argue either for or against.
Without that, neither 1 nor 2 make a convincing case for anything: 1. a Linear Algebra expert might not get a position in a Statistics department either, and 2. it'd be useful to show some "harm" before you claim anything being "harmful". CS is also lucrative enough that it pays to have a dedicated faculty just for CS too (as in, it attracts students, contracts with external parties for a university, etc).
What does von Neumann mean here? Why is it bad that the subject will develop along the line of least resistance? Does von Neumann advise that working on "harder" problems is more beneficial for TCS? Could one not argue that we should solve away the low-hanging fruit first?
I am not sure if I am understanding von Neumann's quote nor this article properly. I would love to hear some simpler explanation (I am a new BSc. CS graduate).
If you look at new theories (say you are starting to study topology, or group theory), they start from definitions/axioms that seem to come from "nowhere," but they are in fact the product of working on and perfecting a language for the intuitions we have in mind. Once we settle on the correct definitions, a lot of consequences and new results come from interacting almost entirely with the formal system.
That interaction with the formal system is the path of least resistance.
The power of mathematics is that once you figure out a correct formalization of the intuitions, using just the formal system allows you to get a lot of information. That is why sometimes people identify mathematics with just the formal system.
Knuth thinking about algorithms led to all these research questions about combinatorics, that have turned out to be very interesting, but are much more messy and disjointed results.
> ... mathematical ideas originate in empirics, although the genealogy is sometimes long and obscure. But, once they are so conceived, the subject begins to live a peculiar life of its own and is better compared to a creative one, governed by almost entirely aesthetical motivations, than to anything else and, in particular, to an empirical science. There is, however, a further point which, I believe, needs stressing.

> As a mathematical discipline travels far from its empirical source, or still more, if it is a second and third generation only indirectly inspired by ideas coming from "reality," it is beset with very grave dangers. It becomes more and more purely aestheticizing, more and more purely l'art pour l'art. This need not be bad, if the field is surrounded by correlated subjects, which still have closer empirical connections, or if the discipline is under the influence of men with an exceptionally well-developed taste. But there is a grave danger that the subject will develop along the line of least resistance, that the stream, so far from its source, will separate into a multitude of insignificant branches, and that the discipline will become a disorganized mass of details and complexities. In other words, at a great distance from its empirical source, or after much "abstract" inbreeding, a mathematical subject is in danger of degeneration. At the inception the style is usually classical; when it shows signs of becoming baroque, then the danger signal is up. It would be easy to give examples, to trace specific evolutions into the baroque and the very high baroque, but this, again, would be too technical.

> In any event, whenever this stage is reached, the only remedy seems to me to be the rejuvenating return to the source: the reinjection of more or less directly empirical ideas. I am convinced that this was a necessary condition to conserve the freshness and the vitality of the subject and that this will remain equally true in the future.
The definition of "theoretical physics" might have rapidly changed between 1900, 1920, 1940, and 1950--but certainly people who called themselves theoreticians remained mostly unchanged. Analyzing how everyone's definitions were changing gives a wealth of information about when and where breakthroughs were happening. 1919 and 1945 come to mind as such examples of when a theoretical field changed as a result of experiments [1][2].
Back to computing: Dijkstra told the story of attempting to put "Programmer" as his profession on his marriage certificate in 1957, and was rejected [3]. Clearly there are both pros and cons of using the sociological definition of a field. We all know programming existed before 1957, but the perception of it as a profession was so foreign that it wasn't allowed on an official document. It would've been impossible, apparently, where he lived to ponder about "What is programming?" if no one could BE a programmer. For that reason, we should probably be flexible and always be willing to discuss different definitions for every field so that we gain the benefits from multiple lines of reasoning.
[1] https://en.wikipedia.org/wiki/Eddington_experiment
[2] https://en.wikipedia.org/wiki/Trinity_(nuclear_test)
[3] https://amturing.acm.org/award_winners/dijkstra_1053701.cfm
Dr. Vardi's Second Law of Thermodynamics for boolean SAT and SMT (Satisfiability Modulo Theory) solvers is truly a marvel of interdisciplinary ambition. In his framework, computational entropy is said to increase with each transition of the Turing machine, as if bits themselves somehow carry thermodynamic weight. He posits that any algorithm—no matter how deterministic—gradually loses "information purity" as it executes, much like how heat dissipates in a closed system. His real stroke of genius lies in the idea that halting problems are not just undecidable, but thermodynamically unstable. According to Dr. Vardi, attempts to force a Turing machine into solving such problems inevitably lead to an "entropy singularity," where the machine's configuration becomes so probabilistically diffuse that it approaches the heat death of computation. This, he claims, is why brute-force methods become inefficient: they aren’t just computationally expensive, they are thermodynamically costly as well. Of course, there are skeptics who suggest that his theory might just be an elaborate metaphor stretched to breaking point—after all, it’s unclear if bits decay in quite the same way as particles in a particle accelerator.
From https://arxiv.org/pdf/1010.2067 "Manin and Marcolli [20] derived similar results in a broader context and studied phase transitions in those systems. Manin [18, 19] also outlined an ambitious program to treat the infinite runtimes one finds in undecidable problems as singularities to be removed through the process of renormalization. In a manner reminiscent of hunting for the proper definition of the “one-element field” F_un, he collected ideas from many different places and considered how they all touch on this central theme. While he mentioned a runtime cutoff as being analogous to an energy cutoff, the renormalizations he presented are uncomputable. In this paper, we take the log of the runtime as being analogous to the energy; the randomness described by Chaitin and Tadaki then arises as the infinite-temperature limit."
There is no such thing as the Second Law of Thermodynamics of a Turing Machine.
Unless! You turn the machine off. Then energy input equals zero, it becomes a closed system, and entropy kicks in.
My gripe with theoretical computer science was that it felt like a Newtonian-physics-level model of digital processes, while the equivalent of biology-level models would be needed for, and suited to, most "real-life computing".
By the way, a lot of small CPU cores is what we use inside graphics cards. However, they are not actors; they are very deterministic. The Newtonian physics model.
Incidentally, real-life computing depends quite a lot on the laws of the physics of the universe we live in. This has been known quite clearly since the theoretical discovery of quantum computing, with the resultant split of classical complexity theory and quantum complexity theory.
Moreover, physics also determines what sort of devices you can make and with what performance. The real-life speed of solving NP-complete problems depends on the absolute and relative performance of transistors, CPUs, memories, etc.
Hence, I submit that TCS has the same relationship to Physics as does Mechanical Engineering to Physics.
The remarkable result is that the functions computable with such a machine are equivalent to the functions computable in the lambda calculus and to the Herbrand general recursive functions.
The fact that it can somehow be linked to actual computing machines outside of logic is merely a happy accident.
Having said that you could think I disagree with Vardi but the truth is: I think the point he brings is just void of substance. That’s only of interest to people who like university politics and how departments hire. It’s of no impact to the field whatsoever. Why does it matter what is or isn’t semantically TCS and if it is or not mathematics? The problems will still be there the same.
(See also how (much of) the math for General Relativity was developed without any application in mind.)
General Relativity is much more natural than quantum mechanics. It was mostly created from a theoretical motivation. People were dragged toward quantum mechanics kicking and screaming, and it took about 30 years to develop.
I don't think that's a fair characterisation. Clearly, the SAT subset that SAT solvers solve efficiently is in P, so in that sense complexity theory "explains" it and even "predicts" it: clearly, there are subsets of SAT in P; some simple ones, such as 2SAT, can be easily described.
What complexity theory is yet to do is:
1. Either prove that the subset of SAT that can be solved efficiently covers all of SAT (in which case either P=NP, or NP can be solved with such a low exponent that it is tractable in practice), or identify and succinctly describe the subset of SAT that modern solvers solve efficiently; it is in that sense that complexity theory doesn't yet "predict" it. But there are many open questions (including P vs NP) in complexity theory; that doesn't mean the discipline is flawed. Many problems studied in complexity theory originate from practical questions, such as cryptography, and complexity theory does successfully and usefully explain and predict such real-world "phenomena".
2. Explain why that subset of SAT is prevalent in practice (making solvers "effective"). I'm not sure this is the job for complexity theory or any theoretical discipline for that matter (as opposed to empirical ones); after all theoretical computer science is meant to be theoretical. But there are mathematical disciplines (e.g. random graph theory) that do describe and even explain and predict mathematical structures that are more likely to arise in "nature".
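To make the 2SAT point concrete, here is a minimal sketch of the classic polynomial-time construction: build the implication graph and compare strongly connected components (Kosaraju-style; the recursive DFS makes this a toy for small instances only).

```python
from collections import defaultdict

def two_sat(n, clauses):
    """Solve 2SAT over variables 1..n. Literals are nonzero ints:
    i means x_i, -i means NOT x_i. Returns a list of bools, or None if UNSAT."""
    graph, rgraph = defaultdict(list), defaultdict(list)
    for a, b in clauses:              # (a OR b) gives (-a -> b) and (-b -> a)
        graph[-a].append(b); rgraph[b].append(-a)
        graph[-b].append(a); rgraph[a].append(-b)

    nodes = [v for i in range(1, n + 1) for v in (i, -i)]
    order, seen = [], set()

    def dfs1(v):                      # first pass: record DFS finish order
        seen.add(v)
        for w in graph[v]:
            if w not in seen:
                dfs1(w)
        order.append(v)

    for v in nodes:
        if v not in seen:
            dfs1(v)

    comp = {}

    def dfs2(v, c):                   # second pass: label SCCs on the transpose
        comp[v] = c
        for w in rgraph[v]:
            if w not in comp:
                dfs2(w, c)

    c = 0
    for v in reversed(order):         # SCCs come out in topological order
        if v not in comp:
            dfs2(v, c)
            c += 1

    # Unsatisfiable iff some variable shares an SCC with its own negation.
    if any(comp[i] == comp[-i] for i in range(1, n + 1)):
        return None
    # Set x_i true iff its component comes later in topological order.
    return [comp[i] > comp[-i] for i in range(1, n + 1)]
```

The whole thing runs in linear time in the number of clauses, which is exactly the kind of succinct "this fragment is easy, and here is why" statement the comment asks complexity theory to produce for the fragments modern SAT solvers handle.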
> U.S. TCS has a quite narrower scope than European TCS, which I find unfortunate.
TCS is generally considered to have two primary subdisciplines: Theory of Computation (ToC, sometimes also referred to as "Theory A"), which is concerned with complexity and algorithms, and Theory of Programming (ToP, sometimes also referred to as "Theory B"), which is concerned with programming languages and type systems (and may be considered a sub-discipline of formal logic). Indeed, ToC is more prominent in the US while ToP is more prominent in Europe, but I don't think some of the other CS sub-disciplines Moshe mentions (such as databases) are commonly considered theoretical computer science.
I don't have a position on the utility of describing theoretical computer science as a branch of mathematics or not doing so, but the very name "theoretical computer science" acknowledges that there are other sub-disciplines in computer science that are empirical rather than theoretical (such as studying the mistakes programmers empirically make, or the common causes for cascading failures, or studying side-channel attacks etc. etc.).
I don't think that those who consider TCS to be a branch of mathematics also consider all of CS to be a branch of mathematics or claim that theoretical computer science should aim to answer all interesting questions in computing, just as theoretical physics acknowledges the necessity of experimental physics and doesn't claim to cover everything physicists do.
As in theoretical physics and mathematics, results in theoretical computer science do sometimes explain and predict things we observe in the real world and sometimes not yet. But in all these cases, the theoretical subdisciplines aren't wholly subservient to the empirical ones or vice-versa.
As an example, the question of computability (decidability) isn't directly practical (as many decidable problems aren't practically tractable), but the more practical question of tractability/complexity evolved directly from the study of computability.
Still waiting for theoretical physics to discard string theory, as it fails in predicting anything.
https://www.preposterousuniverse.com/podcast/2018/10/15/epis...
https://www.preposterousuniverse.com/podcast/2019/01/28/epis...
I love his “sociological sense”, and agree that at some level all these words are just words in language games that are defined with great diversity across the world’s academics, much less engineers and laypeople. But I also agree that there’s something worth defending in “computer science in its own right”, as I said above; that is, in turn, the core task of philosophy of science.
Of course I would expect him to push back on this description of the field because philosophy has lost all its street cred, but that would only encourage me to redouble my efforts. If it’s not empirical study of actual results, and it’s not mathematical study of intuitively-based theorems, what else is left?
With that said, I regard TCS as the study of discretized computation at scale. Or, more colloquially, what is the fastest way to do X, N times? This could be sorting, searching, indexing, drawing, tracking, balancing, storing, accessing, saving, reading, writing, whatever. The distinguishing characteristic is complexity analysis in time and space.
As a counterexample, I think cryptographic methods such as AES and RSA are much more pure math / number theory than TCS.
But asymptotics are heavily used, true. That is because often the theoretically interesting properties appear only when the inputs are huge (I'm sure this happens in other areas as well).
> The centrality of computing stems from the fact that it is a technology that has been changing the world for the past 80 years, ever since the British used early computing to change the tide of war in World War II.
I take issue with the idea hinted at here. Although algebra and other branches of mathematics were invented to deal with daily down-to-earth issues like accounting or farming, you'd be hard-pressed to find a consensus that mathematics "aims to explain the real world".
The historical origin is very clear, but the train left the "real world" station pretty fast, "unreasonable effectiveness" notwithstanding. Am I to understand that because Enigma was broken using a physical machine, the field is bound to study physical reality? To me this feels as uncomfortable as to refer to astronomy as "telescope studies".
> I believe that thinking of TCS as a branch of mathematics is harmful to the discipline. [...]

> Theories that fail at this “explain/predict” task would ultimately be discarded. Analogously, I’d argue that the role of TCS is to explain/predict real-life computing
Yeah, if you hired me to design harmful approaches, I couldn't have come up with something as harmful as this in a year.