I think it's better to accept that people can install their thinking into a machine, and that the machine will continue that thought independently. This is true of a valve that lets off steam when the pressure is high; it is certainly true of an LLM. I really don't understand the authenticity babble; it seems very ideological, even religious.
But I'm not friends with a valve or an LLM. They're thinking tools, like calculators and thermostats. To me, arguing about whether they "think" is like arguing about whether an argument is really "tired" or a book is really "expressing" something. Or, for that matter, whether the air conditioner "turned itself off" or the baseball "broke" the window.
Also, I think what you meant to say is that there is no prompt that causes an LLM to think. When you write "think", it's hard to tell whether you're using scare quotes or quoting me, which makes the sentence ambiguous. I understand the ambiguity; call it what you want.
They seem to know everything and produce a large amount of text, but the illusion of logical consistency soon falls apart in a debate format.
One of my favorite philosophers is Mozi, who was writing long before formal logic. He's considered one of the earliest thinkers who was sure there was something like logic, and who also thought that everything should be interrogated by it, even gods and kings. His method was nothing like what we have now, more of a checklist to put each belief through ("Was this a practice of the heavenly kings, or would it have been?"), but he got plenty far with it.
LLMs are dumb; they've been undertrained compared to the things that are reacting to them. How many nerve-epochs have you been trained for?