It doesn't seem likely an LLM will ever do that. Maybe at a certain point of sophistication? But if the model is regularly changing - which they almost all will be, if they're expected to stay up-to-date - there's a strong chance they'll behave differently every time they're used.
(I've been getting different behaviour from even relatively narrow ML-based systems for years. Google Assistant is my prime example - I regularly use the phrase "add to my calendar on the 20th of September at 5pm, go to the park". Almost all the time, it works perfectly. But at least a couple of times a year, it won't process this as an action - it just runs a Google web search for the string.)
So yeah, prompt "engineering" is indeed a silly term, but software "engineering" kicked off the dilution of that word ages ago. And GPT models can be inspected and measured on inputs and outputs, prompts can be analyzed for their effects and usefulness, and temperature settings even directly control some degree of determinism. It's not like models change on a whim unless you're only using end-user products. Anthropic, Hugging Face, AWS, OpenAI - they all let you pin a specific model version in your API calls and stick with it for a long time. If you're self-hosting a fine-tuned Llama 70B, nobody will ever force you to update it once it's doing a task to your expectations. The determinism of AI systems is currently lower than that of Excel or C code, but they're also serving a wholly different purpose - people want them to be creative and produce novel, nondeterministic outputs - so comparing the two is a bit silly.
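To make the pinning point concrete, here's a rough sketch of what that looks like in an API request (the model name and field names follow the common chat-completion style used by OpenAI and similar providers, but treat the specifics as illustrative - check your provider's docs):

```python
# Sketch: pinning a dated model snapshot and reducing sampling randomness.
# The model name below is illustrative of the dated-snapshot convention.

def build_request(prompt: str) -> dict:
    """Build request parameters for a chat-completion style API call."""
    return {
        # A dated snapshot like "gpt-4-0613" stays fixed, while a floating
        # alias like "gpt-4" can change underneath you as the provider
        # rolls out updates.
        "model": "gpt-4-0613",
        # temperature=0 makes sampling (near-)greedy - far more repeatable,
        # though not a hard determinism guarantee.
        "temperature": 0,
        "messages": [{"role": "user", "content": prompt}],
    }

params = build_request("Summarize this ticket in one sentence.")
print(params["model"], params["temperature"])
```

The same two knobs - a version-pinned model identifier and a low temperature - are what get you from "it changes on a whim" to something you can actually measure and regression-test.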