You could, I suppose, argue that the causal chains behind an LLM are simply not the right kind of causal chains to produce reasoning. But that argument is complicated by the fact that we don't understand exactly what those causal chains are, and we don't understand the causal chains that produce human reasoning either, so we can't confidently compare them except at the coarsest level (LLMs are in silico, etc.).
That, and it's not obvious why this distinction should matter. A cake that spontaneously assembles itself is still a cake, even if it lacks the usual causal history of a cake.