I was trying to show that determinism is not the crux by pointing out that there are ways to get a deterministic output from an LLM; the thought experiment shows that determinism isn't what's essential.
I also disagree that this merely narrows the set of outputs. If I download a local model, set the temperature to zero, and give it the same prompt twice, I get the same output, not one of several outputs from a narrow set. LLMs are functions.
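To make the point concrete, here's a minimal sketch (with a toy stand-in for the model's forward pass, not a real LLM): at temperature zero, sampling collapses to argmax, so generation is a pure function of the prompt.

```python
import hashlib

# Toy vocabulary and a deterministic stand-in for a forward pass.
VOCAB = ["the", "cat", "sat", "on", "mat", "."]

def toy_logits(tokens):
    """Hash the context to produce a fixed score per vocab entry.
    A real model's forward pass is likewise deterministic given
    fixed weights and inputs."""
    h = hashlib.sha256(" ".join(tokens).encode()).digest()
    return [h[i] for i in range(len(VOCAB))]

def generate(prompt, steps=5):
    tokens = prompt.split()
    for _ in range(steps):
        logits = toy_logits(tokens)
        # Temperature zero: no sampling, just take the argmax.
        best = max(range(len(VOCAB)), key=lambda i: logits[i])
        tokens.append(VOCAB[best])
    return " ".join(tokens)

a = generate("the cat")
b = generate("the cat")
print(a == b)  # same prompt in, same output out
```

The randomness people associate with LLMs lives in the sampler, not the model; remove it and you're left with a function from prompts to completions.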