Hmm, yes and no: here the output "tends towards" being deterministic (which is not the same as actually being deterministic), because the internal process contains noise. That noise can't be nullified; if it could, LLMs wouldn't make mistakes and wouldn't produce unexpected outcomes.
And the thing is, even if the output tends towards determinism, there will always be ways to do prompt injections. So maybe I should clarify: the solution I shared/proposed is external to the AI's internal process; it's a completely separate thing from the LLM itself (see the sketch below).
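To make the "external to the LLM" idea concrete, here is a minimal sketch of what such a separate layer could look like. This is not the actual proposal, just an illustration under assumed names (`call_llm`, `external_guard`, `ALLOWED_ACTIONS` are all hypothetical): the model's output is validated by plain deterministic code before anything acts on it, so even if a prompt injection steers the model, it can't bypass this check.

```python
import re

ALLOWED_ACTIONS = {"summarize", "translate", "classify"}  # assumed policy, for illustration only

def call_llm(prompt: str) -> str:
    """Stand-in for a real model call; returns whatever the model produced."""
    return "action: summarize"  # placeholder output

def external_guard(raw_output: str) -> str:
    """Deterministic validation that lives entirely outside the LLM."""
    match = re.fullmatch(r"action:\s*(\w+)", raw_output.strip())
    if not match or match.group(1) not in ALLOWED_ACTIONS:
        raise ValueError(f"rejected model output: {raw_output!r}")
    return match.group(1)

if __name__ == "__main__":
    action = external_guard(call_llm("Summarize this document."))
    print(f"approved action: {action}")
```

The point of the sketch is only the separation of concerns: the guard doesn't care how noisy or non-deterministic the model is, because the enforcement happens outside it.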