Whether that model provides "insight" (or a "cause"; I still don't know if that's supposed to mean something different) is a deeper question, and e.g. the topic of countless papers trying to make sense of LLM activations. I don't think the answer is obvious, but I found Norvig's discussion to be thoughtful. I'm surprised to see it viewed so negatively here, dismissed with no engagement with his specific arguments and examples.
Pearl defines a ladder of causation:
1. Seeing (association)
2. Doing (intervention)
3. Imagining (counterfactuals)
In his view, most ML algorithms are at level 1: they look at data and draw associations. "Agents" have started taking some steps into level 2, doing.
The smartest humans operate mostly at level 3: they see things, gain experience, and later build up a "strong causal model" of the world, becoming capable of answering "what if" questions.
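To make the rung-1 vs. rung-2 distinction concrete, here is a minimal sketch (the variable names and numbers are mine, not Pearl's): a tiny structural causal model with a confounder Z, where the observed association P(Y=1 | X=1) differs from the interventional P(Y=1 | do(X=1)).

```python
import random

random.seed(0)

def sample(do_x=None):
    # Tiny structural causal model: Z -> X, Z -> Y, X -> Y.
    z = random.random() < 0.5
    # Passing do_x severs the Z -> X edge (Pearl's intervention).
    x = (random.random() < (0.9 if z else 0.1)) if do_x is None else do_x
    y = random.random() < (0.3 + 0.4 * z + 0.2 * x)
    return z, x, y

# Level 1 (seeing): association P(Y=1 | X=1) from observational data.
obs = [sample() for _ in range(100_000)]
p_y_given_x1 = sum(y for _, x, y in obs if x) / sum(x for _, x, _ in obs)

# Level 2 (doing): intervention P(Y=1 | do(X=1)).
intv = [sample(do_x=True) for _ in range(100_000)]
p_y_do_x1 = sum(y for _, _, y in intv) / len(intv)

print(p_y_given_x1)  # inflated by the confounder (analytically 0.86)
print(p_y_do_x1)     # the causal effect alone (analytically 0.70)
```

A purely associative learner only ever sees the first number; answering the second requires knowing (or experimenting on) the causal structure.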
I'm not saying LLMs are a particularly good model, just that everything else is currently worse. This includes Chomsky's formal grammars, which fail to capture the ways humans actually use language per Norvig's many examples. Do you disagree? If so, what model is better and why?
Chomsky, of course, never attempted to model the generation of natural language and was interested in a different set of problems, so LLMs are not really a competitor in that sense anyway (even if you take the dubious step of accepting them as scientific models).
I certainly don’t agree with Norvig, but he doesn’t really understand the basics of what Chomsky is trying to do, so there is not much to respond to. To give three specific examples, he (i) is confused in thinking that Gold’s theorem has anything to do with Chomsky’s arguments, (ii) appears to think that Chomsky studied the “generation of language” (because he’s read so little of Chomsky’s work that he doesn’t know what a “generative grammar” is), and (iii) believes that Chomsky thinks that natural languages are formal languages in which every possible sentence is either in the language or not (again because he’s barely read anything that Chomsky wrote since the 1950s). Then, just to make absolutely sure not to be taken seriously, he compares Chomsky to Bill O’Reilly!
On point (iii), see http://www.linguistics.berkeley.edu/~syntax-circle/syntax-gr..., and the last complete paragraph of p. 145.
If you believe that some of human cognition is linguistic (even if e.g. inner monologue and spoken language are just the surface of deeper more unconscious processes), then, yes, we might say LLMs can predictively model some aspects of human cognition, but, again, they are certainly not causal models, and they are not predictive models of human cognition generally (as cognition is clearly far, far more than linguistic).
* I avoid calling LLMs "statistical" because they really aren't even that. They are not calibrated, and including a softmax and log-loss in things doesn't magically make your model statistical (especially since ad-hoc regularization methods, other loss functions and simplex mappings, e.g. sparsemax, often work better and then violate the assumptions that are needed to prove these things are behaving statistically). LLMs really are more accurately just doing (very, very fancy and impressive) curve/manifold-fitting.
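For what it's worth, the softmax-vs-sparsemax contrast in that footnote is easy to show concretely. A minimal sketch (my own toy implementation of the sparsemax projection from Martins & Astudillo 2016, not code from any actual LLM): softmax always spreads strictly positive mass over every entry, while sparsemax projects onto the simplex and assigns exact zeros, which breaks the usual exponential-family story behind "this is a statistical model."

```python
import math

def softmax(z):
    m = max(z)
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]

def sparsemax(z):
    # Euclidean projection onto the probability simplex.
    zs = sorted(z, reverse=True)
    cum, k, ksum = 0.0, 0, 0.0
    for i, v in enumerate(zs, start=1):
        cum += v
        if 1 + i * v > cum:  # support condition
            k, ksum = i, cum
    tau = (ksum - 1) / k
    return [max(v - tau, 0.0) for v in z]

logits = [1.0, 0.5, -1.0]
print(softmax(logits))    # every entry strictly positive
print(sparsemax(logits))  # [0.75, 0.25, 0.0]: the low-scoring entry gets exactly zero
```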
But now that I think a bit about it, the observation that an LLM frequently produces obviously and/or subtly incorrect output, is not robust to prompt rewording, etc., is perhaps a useful Norvig-style insight.
I struggle to motivate myself to engage with it because it is unfortunately quite out of touch with (or simply ignores) some core issues and the major advances in causal modeling and its theory, i.e. Judea Pearl and the do-calculus, structural equation modeling, counterfactuals, etc. [1]
It also, IMO, makes a (highly idiosyncratic) distinction between "statistical" (meaning, trained / fitted to data) and "probabilistic" models, that doesn't really hold up too well.
I.e. probabilistic models in quantum physics are "fit" too, in that the values of fundamental constants are determined by experimental data, yet these "statistical" models are clearly causal models regardless. Most quantum physical models can be argued to be causal as well; the causality is just probabilistic rather than absolute (i.e. A ==> B is a fuzzy implication rather than an absolute one). It's only when you ask deliberately broad ontological questions (e.g. "Does the wave function cause X?") that you actually run into the problem of whether quantum models are causal; for most quantum experiments and phenomena, the models are still definitely causal at the level of the particles / waves / fields involved.
I don't want to engage much with the arguments because the essay starts on the wrong foot, making what is in my opinion an incoherent / unsound distinction, while also ignoring (or simply being out of date with) the actual scientific and philosophical progress already made here.
I would also say there is a whole literature on tradeoffs between explanation (descriptive models in the worst case, causal models in the best case) and prediction (models that accurately reproduce some phenomenon, regardless of whether they are based on a true description or causal model). There are also loads of examples of things that are perfectly deterministic and modeled by perfect "causal" models but which of course still defy human comprehension / intuition, in that the equations need to be run on computers for us to make sense of them (differential equation models, chaotic systems, etc.). Or, more practically: we can learn all sorts of physical and mental skills while understanding barely anything about the brain and how it works and coordinates with the body. But obviously such an understanding is mostly irrelevant for learning how to operate effectively in the world.
I.e. in practice, if the phenomenon is sufficiently complex, a causal model that also accurately reproduces the system is likely to be too complex for us to "understand" anyway (or you have identifiability issues so you can't decide between multiple different models, or you lack the time / resources / measurement capacity to do all the experiments needed to resolve them), so there is almost always a tradeoff between accuracy and understanding. Understanding is a nice luxury, but in many cases not important, and in complex cases probably not achievable at all. If you are coming from this perspective, the whole "quandary" of the essay just seems odd.
So in the meantime, Norvig et al. have built statistical models that can do stuff like predicting whether a given sequence of words is a valid English sentence. I can invent hundreds of novel sentences and run their model, checking each time whether their prediction agrees with my human judgement. If it doesn't, then their prediction has been falsified; but these models turned out to be quite accurate. That seems to me like clear evidence of some kind of progress.
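To be concrete about what such a falsifiable prediction looks like, here is a deliberately tiny sketch (a toy add-one-smoothed bigram model over a made-up corpus; the real systems use web-scale data): the model predicts that the grammatical word order scores higher than a scrambled one, and a human judgment to the contrary would falsify that prediction.

```python
from collections import Counter
import math

# Toy stand-in for a Norvig-style statistical model of sentences.
corpus = "the dog ran . the cat ran . the dog sat . a cat sat .".split()
bigrams = Counter(zip(corpus, corpus[1:]))
unigrams = Counter(corpus)
vocab = len(unigrams)

def log_prob(sentence):
    # Average add-one smoothed bigram log-probability.
    words = sentence.split()
    lp = 0.0
    for a, b in zip(words, words[1:]):
        lp += math.log((bigrams[(a, b)] + 1) / (unigrams[a] + vocab))
    return lp / (len(words) - 1)

# Falsifiable prediction: the grammatical order scores higher.
good = log_prob("the cat ran")
bad = log_prob("cat the ran")
print(good > bad)  # True on this toy corpus
```

Each novel sentence pair you test is another chance for the model's ranking to disagree with human judgment, which is exactly the feedback loop described above.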
You seem unimpressed with that work. So what do you think is better, and what falsifiable predictions has it made? If it doesn't make falsifiable predictions, then what makes you think it has value?
I feel like there's a significant contingent of quasi-scientists that have somehow managed to excuse their work from any objective metric by which to evaluate it. I believe that both Chomsky and Judea Pearl are among them. I don't think every human endeavor needs to make falsifiable predictions; but without that feedback, it's much easier to become untethered from any useful concept of reality.
> You seem unimpressed with that work
I didn't say anything about Norvig's work; I was saying the linked essay is bad. It is correct that Chomsky is wrong, but it is a bad essay because it tries to argue against Chomsky with a poorly developed distinction while ignoring much stronger arguments and concepts that more clearly get at the issues. IMO the essay is also weirdly focused on language and language models, when this is a general issue about causal modeling and scientific and technological progress, so the narrow focus just weakens the whole argument.
Also, Judea Pearl is a philosopher, and do-calculus is just one way to think about and work with causality. Talking about falsifiability here is odd, and sounds almost to me like saying "logic is unfalsifiable" or "modeling the world mathematically is unfalsifiable". If you meant something like "the very concept of causality is incoherent", that would be the more appropriate criticism here, and more arguable.
I feel like Norvig is coming from that standpoint of solving problems well-known to be difficult. This has the benefit that it's relatively easy to reach consensus on what's difficult--you can't claim something's easy if you can't do it, and you can't claim it's hard if someone else can. This makes it harder to waste your life on an internally consistent but useless sidetrack, as you might even agree (?) Chomsky has.
You, Chomsky, and Pearl seem to reject that worldview, instead believing the path to an important truth lies entirely within your and your collaborators' own minds. I believe that's consistent with the ancient philosophers. Such beliefs seem to me halfway to religious faith, accepting external feedback on logical consistency, but rejecting external evidence on the utility of the path. That doesn't make them necessarily bad--lots of people have done things I consider good in service of religions I don't believe in--but it makes them pretty hard to argue with.