and they may or may not "listen"; they are non-deterministic and have no formal mechanism or requirement to adhere to anything you write. You know this because, in your own experience, they violate your rules all the time.
Sure, but in my experience LLMs behave much more consistently than humans with regard to quality. LLMs don't skip tests to make a deadline, for example.
And now we have LLMs that review LLM generated code so it's easy to verify quality standards.
At the same time, the LLM will, with some regularity, ignore the patterns and best practices in your code, or implement the same thing twice in different ways.
There are certain things they do, or don't do, that a human typically wouldn't, absolutes and anecdotes aside.