I am not necessarily saying humans do something different either, but I have yet to see a novel solution from an AI that is not simply an extrapolation of current knowledge.
My biggest hesitation with AI research at the moment is that they may not be as good at this last step as humans. They may make novel observations, but will they internalize these results as deeply as a human researcher would? But this is just a theoretical argument; in practice, I see no signs of progress slowing down.
I suppose the other side of it is that if you add what the model has figured out to the training set, it will always know it.
Sometimes just having the time/compute to explore the available space with known knowledge is enough to produce something unique.
We have at least 5 senses, our thoughts, feelings, hormonal fluctuations, sleep and continuous analog exposure to all of these things 24/7. It's vastly different from how inputs are fed into an LLM.
On top of that we have millions of years of evolution toward processing this vast array of analog inputs.
Jokes aside, imagine you give LLMs access to real-time, world-wide satellite imagery and just tell it to discover new patterns/phenomena and correlations in the world.
It means extending/expanding something, but the new information is derived entirely from the current data.
In computer games, extrapolation is finding the future position of an object based on its current position, its velocity, and the desired time offset. We do get some "new" position, but the system's entropy/information is the same.
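To make that concrete, here's a minimal sketch of game-style linear extrapolation (function name and values are mine for illustration): the "new" position is fully determined by the state you already had, so nothing is added.

```python
# Linear extrapolation: predict a future position from current state.
# The result contains no information beyond (position, velocity, dt).
def extrapolate(position, velocity, dt):
    return position + velocity * dt

# An object at x=10.0 moving at 2.5 units/s, 4 seconds ahead:
print(extrapolate(10.0, 2.5, 4.0))  # 20.0
```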
Or if we have a line, we can extend it infinitely and get new points, but that information was already there in the formula y = m * x + b.
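Same idea in code (slope and intercept are arbitrary example values): you can generate as many points as you like, but each one is implied by the two parameters (m, b).

```python
# Every "new" point on the line is already encoded in (m, b).
m, b = 2.0, 1.0
points = [(x, m * x + b) for x in range(5)]
print(points)  # [(0, 1.0), (1, 3.0), (2, 5.0), (3, 7.0), (4, 9.0)]
```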