"On a very fundamental level, an LLM is a function from context to the next token; but when you generate text there is effectively state, because the context gets updated with each token generated so far."
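The loop described above can be sketched in a few lines. This is a minimal illustration, not a real model: the bigram table and the `next_token` function are purely hypothetical stand-ins for a trained network, chosen to show that the only "state" in generation is the growing context itself.

```python
# Toy stand-in for a trained model: a fixed bigram table.
# In a real LLM this mapping is implemented by the network's weights.
BIGRAMS = {
    "the": "cat",
    "cat": "sat",
    "sat": "on",
    "on": "the",
}

def next_token(context):
    """Stateless 'model': a pure function from context to the next token."""
    return BIGRAMS.get(context[-1], "<eos>")

def generate(prompt, max_new_tokens=5):
    """Autoregressive loop: the output is appended to the context,
    which then becomes the input for the next step."""
    context = list(prompt)
    for _ in range(max_new_tokens):
        token = next_token(context)
        if token == "<eos>":
            break
        context.append(token)  # the only state is the context itself
    return context

print(generate(["the"]))
```

Each call to `next_token` is stateless; the apparent statefulness of generation comes entirely from feeding the model's output back into its input.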
Its output is conditioned on both: the training data determines the model's weights, which are fixed at inference time, while the user-defined prompt supplies the context that those weights map to the next token.