I've wondered about this. Do we really know enough about what the human brain is doing to make a statement like this? I feel like if we did, we would be able to model it faithfully and OpenAI, etc. would not be doing what they're doing with LLMs.
What if human cognition turns out to be the biological equivalent of a really well-tuned prediction machine, and LLMs are just a more rudimentary and less-efficient version of this?
Theoretically, a human could sit alone in a dark room, knowing nothing of mathematics, and come up with numbers, arithmetic, algebra, etc.
They don't need to read every math textbook, paper, and online discussion in existence.
In your example, would the human have ever had contact with other humans, or would it be placed in the room as a baby with no further input?
Someone on HN claimed "This is why it [LLMs] can't do things like know how many parenthesis are balanced here ((((()))))) (you can test this), it doesn't have any kind of genuine cognition". So, how many parentheses are balanced in that quoted text?
The string from the quote is ((((()))))) — 5 opening parens and 6 closing parens.
10 parentheses are balanced (5 matched pairs). There is 1 extra unmatched ).
Walking through it with a stack:
( ( ( ( ( ) ) ) ) ) )
1 2 3 4 5 4 3 2 1 0 -1 ← depth tracker
└──── balanced ───┘ ↑ unmatched
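The same scan as runnable code, a minimal Python sketch (the function name and the report format are my own, just for illustration):

```python
def balance_report(s: str) -> tuple[int, int, int]:
    """Scan left to right with a depth counter (equivalent to a stack of '(')."""
    depth = 0        # current nesting depth
    matched = 0      # completed () pairs
    extra_close = 0  # ')' seen at depth 0, i.e. where depth would go negative
    for ch in s:
        if ch == "(":
            depth += 1
        elif ch == ")":
            if depth > 0:
                depth -= 1
                matched += 1
            else:
                extra_close += 1
    # whatever depth is left over counts the unmatched '('
    return matched, depth, extra_close

pairs, extra_open, extra_close = balance_report("((((())))))")
print(f"{2 * pairs} balanced ({pairs} pairs), "
      f"{extra_open} unmatched '(', {extra_close} unmatched ')'")
# -> 10 balanced (5 pairs), 0 unmatched '(', 1 unmatched ')'
```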
The depth goes negative on the last ), meaning it has no matching (.

In my experience, this is exactly how language models solve hard new problems, and largely how I solve them too: propose a new idea, see if it works, iterate if not, keep going until it works.
Of course you can see how to solve a problem you've seen before, like a visual puzzle about balanced parentheses. We're hyper-specialized to visually identify asymmetries; LMs don't have eyes. Your mockery proves nothing.
A parrot that writes better code and English prose than I do?
I would like to buy your parrot.
This might sound callous, but I wonder if the people saying this themselves have very limited brains, more akin to stochastic parrots than the average Homo sapiens.
We are very different, and there are some high-profile people who don't even have an internal monologue or self-introspection abilities (one of the other symptoms is having an egg-shaped head).
I have a different theory.
Aside from a few exceptions like Blake Lemoine, few people seem to really act as if they believe A.I. is doing the same thing the human mind is doing.
My theory is that people are role-playing as believers that human thought is equivalent to A.I., for undisclosed reasons they themselves may or may not understand. They do not actually believe their own arguments.