Since they're modelling our own languages, people get spooked and start bringing up theory of mind.
In view of that, could you please clarify: are you saying that the OP's own brain is doing something that could be represented with a "matrix multiplication"?
I.e., they could only make that claim if they knew that their own brain (which is presumably the only verifiable instance of consciousness they know of) did something that couldn't be done with "matrix multiplications".
I think you should not interpret it that way and should stick to what the other person actually said; otherwise we will all lose the thread of the conversation.
In any case, nobody here has said anything about brains having, or not having, anything to do with matrix multiplication, except for your comment, so I still don't understand what you are saying.
Are you saying that the OP's brain is doing something that couldn't be done with "matrix multiplications", or are you saying it isn't doing something like that?
To borrow the religion analogy, there's the opposite fallacy of asserting the negative case ("I know for a fact God isn't real"): claiming to know that what these kinds of information-processing systems are doing fundamentally cannot yield intelligence or some kind of consciousness.
In any case, at some point one has to observe that the people leaning most heavily on agnosticism (the fact that we don't know what intelligence is) are the same ones arguing that we can't say whether LLMs are intelligent or not. In other words, agnosticism itself is used as evidence: we can't define intelligence, the thinking goes, therefore LLMs may be intelligent. There is no other evidence of any sort that LLMs may be intelligent (or any of a number of synonyms).
Note that this is exactly Russell's Teapot:
Russell's teapot is an analogy, formulated by the philosopher Bertrand Russell (1872–1970), to illustrate that the philosophic burden of proof lies upon a person making empirically unfalsifiable claims, rather than shifting the burden of disproof to others.
You made a claim ("a model based purely on such simple math surely can't develop sentience/consciousness") without any proof, and while doing so ridiculed people who might believe otherwise.