My understanding of LLMs is sub-par at best. Could someone explain where the randomness comes from when the model temperature is 0?
I had imagined that if the temperature were 0, and the model was not being continuously trained, the weights wouldn't change and the output would be deterministic.
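For context, my mental model of temperature-0 decoding is roughly the toy sketch below (my own illustration, not OpenAI's actual implementation): temperature 0 degenerates to argmax over the logits, which should be fully deterministic.

```python
import numpy as np

def sample_next_token(logits, temperature, rng):
    """Toy single decoding step: temperature 0 means pure argmax (greedy)."""
    logits = np.asarray(logits, dtype=float)
    if temperature == 0:
        # Greedy: always pick the highest-logit token -- no randomness at all.
        return int(np.argmax(logits))
    # Otherwise scale logits by temperature and sample from the softmax.
    scaled = logits / temperature
    probs = np.exp(scaled - scaled.max())
    probs /= probs.sum()
    return int(rng.choice(len(logits), p=probs))

rng = np.random.default_rng(0)
logits = [2.0, 0.5, 1.3]
# At temperature 0, repeated calls return the same token index every time.
picks = [sample_next_token(logits, 0.0, rng) for _ in range(5)]
print(picks)  # [0, 0, 0, 0, 0]
```

Under that model, any run-to-run variation with the same prompt and temperature 0 seems like it would have to come from somewhere outside the sampling step.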
Is this non-determinism a property of LLMs in general, or has OpenAI specifically introduced some other source of randomness into their models?