My favorite quotes:
When provided with some of the responses from other physicists regarding his work, Wolfram is singularly unenthused. "I'm disappointed by the naivete of the questions that you're communicating," he grumbles. "I deserve better."
“There’s a tradition of scientists approaching senility to come up with grand, improbable theories,” the late physicist Freeman Dyson told Newsweek back in 2002. “Wolfram is unusual in that he’s doing this in his 40s.”
That is a brutal takedown. Did Dyson and Wolfram have a math beef going or something?
He's an interesting character, and rare in that he is both obviously very intelligent and yet not nearly as intelligent as he thinks he is.
I suspect he's the sort of person who can't stand the idea that he is not the smartest guy in the room - in perception or reality. He may well have constructed his career as an "outsider" to reduce the occurrences of this, perhaps not intentionally.
See, for example: A Rare Blend of Monster Raving Egomania and Utter Batshit Insanity (2002) http://bactra.org/reviews/wolfram/
I met Wolfram some 20 years before that review, when he was on a world tour promoting the earliest iterations of Mathematica, the first iteration after his symbolic differentiation work.
This was the period when cellular automata, Mandelbrot sets, and symbolic math were pretty hot topics around math departments - computer-assisted proofs on monster groups in symbolic algebra were recent, Cayley (the first iteration of Magma) was being written at Sydney University, etc.
Even then he had many of the traits that Cosma Shalizi described in the linked review above and was already dismissing various people for their 'poor ideas' and later claiming those ideas as his own.
He's a smart guy. He swam in waters filled with smart people, some smarter. He was never, IMHO, as smart as his own legend, as authored by himself.
We ended up running way overtime because he was having fun showing me things with Mathematica. He is a fascinating person; I successfully kept him off his math/physics theories and on the idea of a programming language leading to better thinking and more breakthroughs.
I left the discussion pretty impressed by him. He did voice some vague worries during the discussion that he had maybe gotten so focused on the idea of a notation for science in Mathematica that he neglected the actual work that sent him down this path. But he wasn't sure the notation wasn't the more valuable thing in itself.
Notebooks, like Jupyter, clearly came from his work. The other thing he seems to have invented, which hasn't reached the mainstream, is having data sort of embedded in the programming language, in standard libraries, where it's easy to get the number of calories in the moon if it were made of cheese or whatever.
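For comparison, here is what that lookup-plus-compute looks like when the data is not built in: a back-of-envelope Python sketch. All the constants (mean lunar radius, a roughly cheddar-like density, ~4 kcal per gram for cheese) are rough figures typed in by hand — exactly the step a curated-data standard library would do for you.

```python
import math

# Rough hand-entered figures (illustrative, not authoritative):
MOON_RADIUS_M = 1.7374e6        # mean lunar radius in meters
CHEESE_DENSITY_KG_M3 = 1100.0   # roughly cheddar-like density
KCAL_PER_KG_CHEESE = 4000.0     # ~4 kcal per gram of cheese

# Volume of a sphere, then mass, then calories.
volume_m3 = (4 / 3) * math.pi * MOON_RADIUS_M ** 3
mass_kg = volume_m3 * CHEESE_DENSITY_KG_M3
kcal = mass_kg * KCAL_PER_KG_CHEESE
print(f"A moon of cheese: {kcal:.2e} kcal")
```

The point isn't the arithmetic, which is trivial; it's that the three constants had to come from somewhere, and a language with curated data ships them with the standard library.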
While I often hear this claim from Wolfram and his supporters, I have never seen any evidence that it was his innovation. MathCAD was the first software released with a notebook interface, and there was research using those ideas prior to the release of the first Mathematica notebook. Maybe his particular take was an improvement on the others, but the claim that it was entirely his idea seems to me to be 100% incorrect.
> having data sort of embedded in the programming language, in standard libraries, where it's easy to get the number of calories in the moon if it were made of cheese or whatever.
If you like this sort of thing check out Frink. https://frinklang.org/

"I predict that within 100 years, computers will be twice as powerful, ten thousand times larger, and so expensive that only the five richest kings of Europe will own them." -- Professor John Frink
The units file itself is a worthy read just for the commentary.
// WARNING: Use of "Hz" will cause communication problems, errors, and make
// one party or another look insane in the eyes of the other.
//
// In other words, if you use the Hz in the way it's currently defined by the
// SI, as equivalent to 1 radian/s, you can point to the SI definitions and
// prove that you follow their definitions precisely. And your physics
// teacher will *still* fail you and your clients will think you're completely
// incompetent because 1 Hz = 2 pi radians/s. And it has for centuries.
// You are both simultaneously both right and both wrong.
// You cannot win.
// You are perfectly right. You are perfectly wrong. You look dumb and
// unreasonable. The person arguing the opposite looks dumb and unreasonable.
//
// Hz == YOU CANNOT WIN
//
// (Insert "IT'S A TRAP" image here.)

Theo Gray is who came up with the notebook interface that IPython -> Jupyter was a multi-language shout-out to, and they cite as much. Other UIUC professors wrote significant parts of Mathematica originally, and were paid for it.
Mathematica has no notation, and that's the worst thing about it.
Mathematica has M-Expressions, like S-Expressions, which are extremely powerful and human-like for reasoning in multiple logical (not geometric) dimensions (using Lisp-style macro expansion).
Anything that is Turing Complete is going to exhibit at least some degree of what I call "Turing Chaos": the chaos you face in trying to understand what a given Turing Complete system will do, given that such a system will include some equivalent of "if (something) { run this program } else { run that program }". That means there is inevitably uncertainty amplification in any attempt to understand a program. By "uncertainty amplification" I mean exactly what anyone who has ever tried to understand a code base has been through: your uncertainty about what the "something" value is gets amplified into the question of which entire program is being run, and that can iterate for quite a while. It's very chaotic.
However, for all that, and despite the famous way in which changing a single bit of a program may completely change how it operates, in practice with real human programs changing a single random bit is statistically most likely to have no user-visible impact. We spend a lot of time constraining our system's chaos. We have to. We can't work with systems in which literally every bit change completely changes the program.
However, CAs tend to work that way. A single bit flip will spread out at the relevant "speed of light" and change everything.
As a result, while they may be some of the simplest Turing Complete things, they are humanly useless. They are not useful for modeling processes; you have to be too precise with the initial states, and the thing you are modeling has to be too precise in its usage of the CA rules. They are not useful for engineering, which is precisely why we don't use them.
Or, to put it in a nutshell, while A New Kind Of Science is full of pretty pictures and legitimately interesting ideas... it's also in essence, comprehensively wrong. Not a "not even wrong"; it rises to the level of "real" wrongness. But it's comprehensively, from top to bottom, wrong about practical utility or any future practical utility.
(You can sit down and try to strip this characteristic from a sufficiently well-designed CA, but getting the precise balance of just the right amount of chaos is going to be difficult, and getting it to be also somehow useful afterwards raises the bar even higher. In the meantime, I've got von Neumann machines right here for people who want to do real work and the lambda calculus for people who want to work directly in mathematical abstractions without going insane, so... why?)
Yeah, after 20-some years, that has to be the answer. At its most basic level I think it just exploited the idea that people, including me, like to see interesting or complicated patterns, especially arising out of simple iterative rules like https://en.wikipedia.org/wiki/Rule_30 or https://en.wikipedia.org/wiki/Rule_110.
Of course, I can see how snail shell patterns or other natural patterns might be generated by a similar process, but it's nowhere near revolutionizing any science as the title claimed.
But Wolfram being Wolfram doesn't give up. There is the https://www.wolframinstitute.org and there is some activity there. I periodically drop by to see what's happening.
> They are not useful for engineering, which is precisely why we don't use them.
Exactly, you'd think by now there'd be some AI super-chip or something tangible based on the cellular automata thing discovered by Wolfram.
He's also quite a bit less self-aggrandizing.
Otoh, they do seem to like each other, which in my book does cast some doubt on Taleb's ability to judge people...
He's full of himself but has interesting things to say.
WolframAlpha is a gem in its own right. Yeah, we have Gemini, GPT, Mixtral, but when it comes to actual compositional compute, WolframAlpha gets you the right answer and shows you the math.
In the four-hour conversation on this topic, he says "if you want to know the weights of various dinosaurs, you can ask WolframAlpha and it will tell you". So I asked it "what's the weight of a stegosaurus", and it gave me some number. Then I asked it "what's the total weight of all the stegosauruses that ever lived" and it gave me some nonsense about the average [don't remember] for the population of the US. It didn't even understand the question. Calling it compute is overestimating it by as much as Wolfram overestimates himself.
Are you able/allowed to provide more details?
- Headquartered in Champaign, Illinois (not a bad town, but not sexy)
- As "cool" as their software is, not a lot of people use it. Python is eating their lunch, ESPECIALLY outside of academia. Although, they're losing ground in academia as well
- Stephen Wolfram isn't a charismatic leader who is fun to work for. There's no shortage of stories of him short circuiting in meetings and treating employees disrespectfully.
- They're not doing quite as much cutting edge stuff (that matters, at least) these days. Their AI/ML suite isn't that interesting, numpy/scipy does a lot of numerical stuff better, Matlab does a lot of stuff (like digital signal processing, for example) better. And Python, being free and open source, is a better prototyping language for most stuff. Symbolic computing is probably the one place it is actually a leader in... but for so many applications in the real world (engineering, r&d, real-time algorithms, etc) symbolic computing simply isn't needed.
As you hint at, they can attract some talent because there are opportunities to work on some niche stuff that's hard to work on elsewhere. But that's a minority of roles at the company.
Source: Used to work there.
How do they pay compared to other midwestern employers?
Their Glassdoor reviews used to be a bit dodgy too, nothing you wouldn't be able to guess after watching 10 minutes of any video of you-know-who though.
They are also just small, only a few hundred people if I remember correctly.
Every tech CEO is full of themselves too, but because Stephen is an awkward looking nerdy guy who is less business oriented people dislike him for it.
He's an interesting guy, even if he isn't as interesting as he thinks he is.
Stephen Wolfram has a PhD in particle physics (source: https://en.wikipedia.org/w/index.php?title=Stephen_Wolfram&o...).
or Penrose's https://en.wikipedia.org/wiki/Orchestrated_objective_reducti...
I know some people who are great in their area of research, but are, to put it very mildly, not the most humble persons (to give an example from one such person (a great researcher): "the people who could learn a whole lot from me are all tenured professors in my area of research"; just to be clear: his judgement is right :-) ).
In my experience, great researchers who are full of themselves nearly always had to work/fight very hard for where they are now, and are thus very bitter about worse researchers who have it easier.
I'll spare myself commenting on Wolfram, it's enough to do Ctrl+F on "arrogant" in this topic. Frankly, I don't even care. It's just that New Kind of Science didn't meaningfully advance anywhere beyond being "an interesting concept" for all of his natural life.
Only 2 instances so far?!
EDIT 1: Maybe I'll update this as I listen, maybe not; we'll see.
But so far:
- Doesn't remember anything he used to talk about with his parents. Doesn't remember any particularly interesting conversations. Doesn't know anything about his parents' political inclinations.
- He has a brother 10 years younger. Says he was an "only child for about 10 years". Yet for the rest of the conversation he says his parents' experience with children was a sample of size 1.
EDIT 2: getting super tedious. I'd like to hear what his kids or his brother think of him.
I bought NKS for fifty bucks twenty years ago because I thought I could form a group around it that would take turns summing up chapters for each other, but unfortunately, if humans who give a f** represent states and the number of f**s a human can give represent colors, we're talking about a (1,0) Turing machine here.
That said, I too have read a lot of his work (yes, even A New Kind of Science). I came away very impressed but lacking the right kind of framing to make the best of it (which is, I suspect, what most people who actually make the effort will feel).
It's his way of making everything about himself and his plagiarism that put people off. He's Feynman without the charm.
Copied from transcript and lightly reformatted, but otherwise not corrected (girdle for Gödel, etc all left intact):
1:40:2x
in the fall of 1981 and I spent that time kind of studying The Works of people like bonan and girdle and bunch of stuff about neural Nets and I was that was all in kind of the can I understand foundationally how complexity shows up in the world and that caused me to um uh to kind of um uh try and develop sort of the simplest model that might do something interesting and it led me to these things called cellular autometer which are very simple uh systems where like tiny programs where you just have a row of black and white cells and you just have a rule that says how to update those cells and um I started looking at those things in the fall of 1981 and
"It led me to" does not imply "led me to [discover/invent]" but rather "led me to [start studying]"
His own notes here directly state that they were considered long before he studied them:
I understand the masses' need for ideas that are obviously practical.
Stephen Wolfram is more of an explorer. He is documenting phenomena that I don't see anyone else documenting, because everyone else is so teleological.
I think we need to give a break to researchers doing this kind of original, non-teleological research.
I don't understand why people find him "insufferable"?
https://news.ycombinator.com/item?id=39456628
(He does come across as arrogant, because he probably is, but his arrogance doesn't extend so far as to include a claim that he invented something he's acknowledged was discussed before his birth.)
Source?
Why should it be hubris to talk about your life for 4 hours?
From what I gather of other people’s comments, they are often bothered by his apparently pervasive discussion of himself and his life.
I’ve never met the man, but the few interviews I’ve seen or read about him I thought were pretty interesting.
I find that everything I try to consume from him contains his autobiography interspersed in the giant wall of text. This video is exquisitely cringeworthy.
He has no collaborators. He gives no credit to others. He just relentlessly names things after himself, takes singular credit for everything, and name-drops other famous scientists he bumped into.
I genuinely find the wolfram physics project interesting, but the behavior of wolfram himself sets off all my bullshit alarms.
I suspect that it will be someone else that will take these ideas over the finish line. He seems completely oblivious to the fact that his behavior makes it harder to take the ideas seriously.
My gateway to the ideas was Jonathan Gorard. Check out his videos if you are curious, they are much more accessible than Wolfram's own content.
It takes a village to build something.
But it takes a leader to assemble a village around a cause.
I do appreciate your take. The village deserves credit for the work they've done.
But, at the same time, many folks I see get "well, actually"-ed on their achievements because of the village... I don't think the change would have manifested in the world if the village didn't have that person.
An example I see more frequently now is that a market for electric cars wasn't willed into existence by Elon. There are variations of this claim, from him not being the original founder to the huge number of employees involved with Tesla's accomplishments.
But, at the end of the day, I have zero reason to believe Mercedes Benz would be releasing an electric car if Elon had decided to take his market winnings and go sit on a beach.
I have no reason to characterize the wolfram language, and its ecosystem, as anything other than a magnum opus that was willed into existence by Wolfram.
I'm open to being wrong here. But I've not yet learned why I am.
A few years later a list of "books that have been added to his permanent collection" appeared in lieu of a bibliography. It's pretty good but perhaps too comprehensive. https://www.wolframscience.com/reference/books/
In 2012 he wrote about why he didn't have references or bibliography. It's a New Kind of Publishing, too. https://writings.stephenwolfram.com/2012/05/living-a-paradig...
1 - Why I Am So Wise
2 - Why I Am So Clever
3 - Why I Write Such Excellent Books, Part 1
4 - Why I Write Such Excellent Books, Part 2
5 - Why I Write Such Excellent Books, Part 3
6 - Why I Am a Fatality
Not listening to him because of this is a mistake, Wolfram is a true genius and, even if "his" ideas aren't fully his, you will probably not hear them with such clarity anywhere else. He is, at a minimum, an amazing explainer like few people I've ever seen.
I think all the ad hominem you see in here is from people who wasted too many precious hours of their life listening to him. I know I did: I spent weeks as a young teen poring over NKS when it came out. It wasn't as revelatory as he kept insisting it was, and it turned me off of cellular automata.
EDIT: Generally when people are "true geniuses" their _peers_ identify them as such. That's not the case here.
Counterexample: Kurt Heegner. OK, not a genius, but nevertheless a mathematician whose proof of a deep result (the class number 1 problem [1]) was not accepted by his peers.
Quote from [2]:
"In 1952, he published the Stark–Heegner theorem which he claimed was the solution to a classic number theory problem proposed by the great mathematician Gauss, the class number 1 problem. Heegner's work was not accepted for years, mainly due to his quoting of a portion of Heinrich Martin Weber's work that was known to be incorrect (though he never used this result in the proof)."
[1] https://en.wikipedia.org/w/index.php?title=Class_number_prob...
[2] https://en.wikipedia.org/w/index.php?title=Kurt_Heegner&oldi...
People have very different needs for harmony. Your statement likely implies that yours is rather high. My stance differs: great research is what advances mankind. Unpleasant great researchers will die some day, their research is there to stay.
(just to be clear: there is nothing good or bad with having a high or low need for harmony)