>But if you add a feedback loop where it can use tools, investigate external files or processes, and then autocomplete on the results, you get to see something that is (close to) thinking
It's still just information retrieval. You're just dividing it into internal information (the compressed representation of the training data) and external information (web search, API calls to other systems, etc.). There is a lot of hidden knowledge embedded in language, and LLMs do a good job of teasing it out in a way that resembles reasoning/thinking but really isn't.
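
For concreteness, here's a minimal sketch of the kind of feedback loop the parent describes. Everything in it is a hypothetical stand-in (model_complete, web_search, and the SEARCH:/RESULT: convention are invented for illustration, not any real API). The structure makes the point: each turn alternates internal retrieval (sampling from the model's compressed training data) with external retrieval (a tool call), and neither step is anything more than lookup-and-continue.

```python
def model_complete(prompt: str) -> str:
    """Hypothetical stand-in for an LLM completion call (internal retrieval)."""
    raise NotImplementedError

def web_search(query: str) -> str:
    """Hypothetical stand-in for an external tool (external retrieval)."""
    raise NotImplementedError

def agent_loop(prompt: str, max_turns: int = 5) -> str:
    transcript = prompt
    for _ in range(max_turns):
        # Internal information: autocomplete on the transcript so far.
        output = model_complete(transcript)
        if output.startswith("SEARCH:"):
            # External information: fetch a result and feed it back in.
            result = web_search(output[len("SEARCH:"):].strip())
            transcript += output + "\nRESULT: " + result + "\n"
        else:
            # No tool requested: the completion is the final answer.
            return output
    return transcript
```

Whether you call what this loop does "close to thinking" or "retrieval with extra steps" is exactly the disagreement here; the code itself is neutral on that.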