You could say that a trained neural net contains a model of how language works, and it reasons about sentences based on this model.
I think people are really hung up on the fact that it has trouble reasoning about what its sentences are about, and overlooking how amazing it is at reasoning about sentence structure itself.
Yes, but people don’t reason about language, they just do it. I know you think I’m confused about this but I’m not. I mean “reason” here quite deliberately, because what we’re talking about is understanding. No one thinks that they ... uh, well ... “understand” language ... okay, we need a new word, because “understand” carries two different meanings here. Let’s use “perform” for when you make correct choices from an inexplicit model (that’s what the NN does), and hold “understand” for what a linguist (maybe) does per language, and what a physicist does per orbital mechanics. What we are hoping a GAI will do is the latter. Any old animal can perform. Only humans, as far as we know, and perhaps a few others in relatively narrow cases, understand in the sense that a physicist understands OM. No NN trained on language is gonna have the present argument. Ever.