They will never achieve “reason” or understand what it means to do so; they are not human.
Sure, with enough input an LLM can predict what a human's reasoning may look like, but philosophically, that's a different thing.
Reason is not universal the way math is.
"A computer will never beat a chess master", etc.
Here are the facts for you: our reasoning is done by our brain. Our brain is just a bunch of processes. Those processes can be replicated in a computer. The number of cells and the speed can be improved. And there you have it, a superior reasoning machine.
These "only humans can do X" mostly comes from religion or other superiority bullshit, but in the end humans are not that special, although we seem to like to think so.
Superior? No.
Religion? No.
Philosophy? Yes.
Biologically, we're clearly an increment over the next smartest thing - we have the same kind of hardware, doing the same things, built by the same process. But that increment carried us through the threshold where our brains became powerful enough to break our species free of biological evolution and subject us to the much faster process of technological evolution. This is why chimpanzees live in zoos built by humans, and not the other way around.
If anything, the biological history of humanity tells us LLMs may just as well be thinking and reasoning in the same sense we are. That's because evolution by natural selection is a dumb, greedy, local optimization process that cannot fixate anything that doesn't provide incremental benefits along the way. In other words, whatever makes our brains tick, it's something that must 1) start with simple structures, 2) be easy to stumble on randomly, 3) scale far, and 4) be scalable along a path that delivers capability improvements at every step. Transformer models fit all four of those points.
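To make "dumb, greedy, local optimization" concrete, here's a toy hill climber in Python. The fitness landscape and mutation step are made up purely for illustration (this is not a model of biology); the one thing that matters is the acceptance rule: a change survives only if it's an incremental improvement right now.

```python
import random

def fitness(x):
    # Hypothetical fitness landscape: higher is better, peak at x = 3.0.
    return -(x - 3.0) ** 2

def hill_climb(x=0.0, steps=10_000, step_size=0.1):
    for _ in range(steps):
        candidate = x + random.uniform(-step_size, step_size)
        # The only rule: accept a change iff it helps *immediately*.
        # Nothing gets kept "because it might pay off later".
        if fitness(candidate) > fitness(x):
            x = candidate
    return x

print(hill_climb())  # ends up near 3.0, one tiny beneficial step at a time
```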
> with enough input an LLM can predict what a human's reasoning may look like, but philosophically, that's a different thing
By what school of philosophy? The one I subscribe to (whatever its name) says it's absolutely the same thing. It's in agreement with science on this one.
Computers are not biological. Therefore (imho) they will never attain phenomenological human experience, only replicate it from trained examples.
So what? Biology isn't magic, it's nanotech. A lot of very tiny machines. It obeys the same rules as everything else in the universe.
More than that, the theoretical foundation on which our computers are built is universal - it doesn't depend on any physical material. We've made analog computers using water flowing between buckets. We've made digital computers from pebbles falling down and bouncing around what's effectively a vertical labyrinth. We've made digital computers out of gears. We've made digital computers out of pencil, paper, and a human with lots of patience.
Hell, we can make light compute by shining it at a block of plastic with funny etching. We can really make anything compute; it's not a difficult task.
We're using electricity, silicon wafers and nanoscale lithography because that's the process that has worked best for us in practice, not because it's somehow special. We can absolutely make a digital computer out of anything biological. Grow cells into structures implementing logic gates for chemical signals? Easy. Make a CPU by wiring literal neurons and nerve cells extracted from some poor animal? Sure, it can be done. At what point, would you say, does such a computing tapestry made of living things gain the capability of "phenomenological experiences"? Or, conversely, what makes human brains fundamentally different from a computer we'd build out of nerve cells?
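To make the substrate point concrete, here's a toy sketch in Python: a half adder built from nothing but a NAND primitive, where the NAND could just as well be pebbles, water, or cultured neurons. The "substrate" functions are obviously stand-ins, not real devices; the point is that the arithmetic above them never asks what the gate is made of.

```python
# All the medium has to supply is one reliable NAND-like behavior;
# everything above that level is just wiring.

def nand_silicon(a, b):
    return not (a and b)

def nand_marbles(a, b):
    # Imagine pebbles bouncing through a labyrinth; same truth table.
    return not (a and b)

def nand_neurons(a, b):
    # Imagine cultured nerve cells wired as an inhibitory junction; same table.
    return not (a and b)

def half_adder(a, b, nand):
    # XOR and AND built from nothing but the chosen NAND primitive.
    n1 = nand(a, b)
    total = nand(nand(a, n1), nand(b, n1))   # a XOR b
    carry = nand(n1, n1)                     # a AND b
    return total, carry

for nand in (nand_silicon, nand_marbles, nand_neurons):
    assert half_adder(True, True, nand) == (False, True)   # 1 + 1 = 0, carry 1
    assert half_adder(True, False, nand) == (True, False)  # 1 + 0 = 1, carry 0
# The result is identical regardless of what the gates are "made of".
```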
Maybe some red stuff comes out when you break it. Is it blood, or is it something pumped into the other side?