I think it is easy to get distracted by the specific mechanisms by which these models work, but most of the technological detail exists because we want something to happen on systems at the scale we can actually build. We simply can't build a neural net at the scale of a human brain yet. We build what we can, and besides, with all this research we may figure out something significant about what intelligence actually is.
The notion "this can't be all thought is" is as old as the idea of AI. I think it informed Turing when he proposed the Imitation Game. The insight is that people would resist the idea of a bunch of simple things stuck together becoming thinking, until they were faced with something whose behavior was sufficiently indistinguishable from their own that doubting it was thinking would be akin to doubting everyone they meet.
In the end, some people won't accept even an AI that does everything a human does as actually thinking; but then again, some people are actual solipsists.