> LLMs don't approximate human thought, though. ...Please, I'm begging you, go read some papers and watch some videos about machine learning and how LLMs actually work.
I know how LLMs work; so let me beg you in return: listen to me for a second.
You're making a purely theoretical argument: LLMs do text prediction, and therefore it's not possible for them to actually think. And since it's not possible for them to think, you don't need to consider any other evidence.
I'm telling you, there's a flaw in that argument: in practice, the best way to predict text is to think. The text being predicted was produced by human thought, so an LLM that could actually think would predict it better than one that can't; and the better an LLM approximates human thought, the better its predictions will be. The fact that LLMs are predicting text in no way proves that there's no thinking going on.
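To make that concrete, here's a minimal sketch of the next-token objective everything above hinges on. The corpus and the bigram "model" are toy stand-ins I made up on the spot; a real LLM swaps the lookup table for a transformer, but it's trained against the same kind of loss:

```python
import math
from collections import Counter, defaultdict

# Toy corpus and a bigram "model" -- purely illustrative, not a real LLM.
corpus = "the cat sat on the mat the cat ran".split()
vocab = set(corpus)

counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def p_next(prev, nxt):
    """P(next token | previous token), with add-one smoothing."""
    return (counts[prev][nxt] + 1) / (sum(counts[prev].values()) + len(vocab))

# The objective: average negative log-likelihood of each token given its
# context. Note that it scores only the predictions themselves -- it is
# completely silent about the internal process that produced them.
nll = -sum(math.log(p_next(a, b)) for a, b in zip(corpus, corpus[1:]))
print(f"average NLL per token: {nll / (len(corpus) - 1):.3f}")
```

The point of the sketch: the loss measures the output distribution and nothing else. Whether the thing computing that distribution is a lookup table, a transformer, or something that deserves the word "thinking" is precisely the question the loss leaves open.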
Now, that doesn't prove that LLMs actually are thinking; but it does mean that they might be. So the question you should be asking is: how would you know whether they're thinking or not?