>> Most humans will be born, live and die inventing absolutely nothing, even those with the opportunity and resources to do so.
I don't think that's right at all. I like to visit museums. You really get hit in the face with the unending creativity of the human mind and the variety of all that human hands have crafted over thousands of years across hundreds of cultures. I would go as far as to say that the natural state of the human mind is to create new things all the time. And mathematics itself was not created (invented or discovered) by one person, but by many thousands.
In any case, it doesn't matter if one instance of the class of human minds hasn't invented anything, in the same way that it doesn't matter if one car can't do 80mph. It's indisputable that we have the capacity for some novelty, and generality, in our thinking. Maybe not every member of the species will achieve the same things, but the fact is that the species, as a species, has the ability to come up with never-before-seen things: art, maths, tech, bad poetry, you name it.
>> Lecun may disagree but some others like Hinton, Ilya and Norvig don't.
I'm with LeCun and Bengio. There's a fair amount of confusion about what a "model" is in that sense: a theory of the world. There's no reason why LLMs should have one. Maybe a transformer architecture could develop a model of the world, but it would have to be trained on, well, the world first. Sutskever's bet is that such a model can be learned from text generated by entities that already have a world model, i.e. us, but LeCun is right in pointing out that a lot of what we know about the world is never transmitted by text or language.
I can see that in my own work: I currently work on planning, where the standard approach is to write a model in some mathematical logic notation, at once as expressive as human language and much more precise, and then let a planning agent make decisions according to that model. It's obvious that, despite having rich and powerful notations available, there is information about the world that we simply don't know how to encode. That information will not be found in text, either.
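To make "hand-written model + planning agent" concrete, here's a minimal STRIPS-style sketch in Python (this is not the system I actually work with; the blocks-world domain, the atom names, and the brute-force search are all illustrative). The "model" is just the action schemas with their preconditions and effects; the "agent" is a dumb breadth-first search over states:

```python
from collections import deque

class Action:
    """A STRIPS-style action: applicable when its preconditions hold;
    applying it deletes some atoms from the state and adds others."""
    def __init__(self, name, pre, add, delete):
        self.name = name
        self.pre = frozenset(pre)
        self.add = frozenset(add)
        self.delete = frozenset(delete)

    def applicable(self, state):
        return self.pre <= state

    def apply(self, state):
        return (state - self.delete) | self.add

def plan(init, goal, actions):
    """Breadth-first search over reachable states; returns a list of
    action names achieving the goal, or None if no plan exists."""
    start = frozenset(init)
    frontier = deque([(start, [])])
    seen = {start}
    while frontier:
        state, path = frontier.popleft()
        if frozenset(goal) <= state:
            return path
        for a in actions:
            if a.applicable(state):
                nxt = a.apply(state)
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, path + [a.name]))
    return None

# Toy domain: pick up block A from the table and stack it on B.
actions = [
    Action("pickup-A",
           pre={"clear-A", "on-table-A", "hand-empty"},
           add={"holding-A"},
           delete={"on-table-A", "hand-empty", "clear-A"}),
    Action("stack-A-on-B",
           pre={"holding-A", "clear-B"},
           add={"on-A-B", "clear-A", "hand-empty"},
           delete={"holding-A", "clear-B"}),
]
init = {"clear-A", "clear-B", "on-table-A", "on-table-B", "hand-empty"}
goal = {"on-A-B"}
print(plan(init, goal, actions))  # -> ['pickup-A', 'stack-A-on-B']
```

The point of the example is where the knowledge lives: everything the agent "knows" about the world was written down by a human in those precondition/effect sets. Anything we don't know how to state as atoms and effects, the agent simply cannot reason about, and it won't show up in text describing the domain either.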
Sutskever again seems to think that that kind of information can somehow be inferred from the text, but that seems like a very tall order, and Transformers don't look like the right architecture: you'd need something that can learn hidden (latent) variables, and Transformers can't do that.