To borrow the religion analogy, there's the opposite fallacy: asserting the negative case, "I know for a fact God isn't real", by claiming to know that whatever these kinds of information processing systems are doing fundamentally cannot yield intelligence or some kind of consciousness.
In any case, at some point one has to observe that the people leaning most heavily on agnosticism (the fact that we don't know what intelligence is) are the very ones who say we can't tell whether LLMs are intelligent, because we don't know what intelligence is. In other words, agnosticism itself is used as evidence: we can't define intelligence, the thinking goes, therefore LLMs may be intelligent. There is no other evidence of any sort that LLMs may be intelligent (or any of a number of synonyms).
Note that this is exactly Russell's teapot:
Russell's teapot is an analogy, formulated by the philosopher Bertrand Russell (1872–1970), to illustrate that the philosophic burden of proof lies upon a person making empirically unfalsifiable claims, rather than shifting the burden of disproof to others.
Other than all the downright uncanny output.
Personally I doubt current LLMs are sufficient for sentience, but that's purely a hunch on my part. Many people, such as yourself, seem quite overconfident about something that feels to me like a form of human exceptionalism - the idea that such a simple bit of math couldn't possibly be sufficient for sentience. As far as I can tell, such a belief is wholly unfounded.
Instead of trying to guess what I think, why not ask me directly, and say what you think as well? Just throwing around weird accusations of half-explained "overconfidence" or "human exceptionalism" (what is that, now?) doesn't really help anyone understand what you are disagreeing with, or what you are agreeing with.
>> Other than all the downright uncanny output.
I don't find the output of ChatGPT, or any of the other language models that have exploded into the hype zone lately, "uncanny". I've done plenty of language modelling, and while the output of those recent LLMs is grammatically smoother than that of earlier systems, and they can handle longer-term dependencies, they are not anything new. Their output is "uncanny" only if you've never seen anything like it before. Which is, of course, the case with most people who didn't know about language modelling before they heard about GPT-something, and who are now posting in droves on the web to say how surprised they are.
You made a claim ("a model based purely on such simple math surely can't develop sentience/consciousness") without any proof, and while doing so you ridiculed people who might believe otherwise.