I don’t know what LLMs are doing, but even a little experimentation with getting one to describe its own process shows that it CAN’T describe its own process.
You can call what a TI calculator does “thinking” if you want. But what people are actually interested in is human-like thinking, and we have no reason to believe that the “thinking” of LLMs is human-like.