Biologically, we're clearly just an increment over the next smartest thing - we have the same kind of hardware, doing the same things, built by the same process. But that increment carried us past the threshold where our brains became powerful enough to break our species free of biological evolution and subject us to the much faster process of technological evolution. This is why chimpanzees live in zoos built by humans, and not the other way around.
If anything, the biological history of humanity tells us LLMs may just as well be thinking and reasoning in the same sense we are. That's because evolution by natural selection is a dumb, greedy, local optimization process that cannot fix any trait that doesn't provide incremental benefits along the way. In other words, whatever makes our brains tick, it must be something that 1) starts with simple structures, 2) is easy to randomly stumble on, 3) scales far, and 4) scales along a path that delivers capability improvements at every step. Transformer models fit all four points.
> with enough input (in the form of LLM) it can predict what a human’s reasoning may look like, but philosophically, that’s a different thing
By what school of philosophy? The one I subscribe to (whatever its name) says it's absolutely the same thing. It's in agreement with science on this one.