This is why LLMs in their current iterations are not the danger to our jobs that some fear them to be: end users and other stakeholders lack the precision to properly spec what they want a tool/service to do (and often don't even accurately know what they want), and they don't have the time (or if they do, don't have the patience) to go back and forth iterating over the wording of the spec to get things adequately defined.
I'm currently an LLM refusenik⁰ so might be missing some context, but from my outside view I get the impression that they do a good job at simple boiler-plate-y stuff but don't save time/effort/thinking on anything much more complex. I'm sure most devs beyond the beginner stage are happy to have those boilerplate tasks taken off them so they can do the fun stuff, but equally end users aren't going to spend the time working with the tools to get anything more interesting than very simple programming/automation tasks done.
--
[0] I'm on the “is that really morally right?” side of the fence regarding how the training materials are sourced, particularly with regard to code covered by licences like the GPL family¹, and I'm anal enough not to use something I have that sort of question about even if it makes my life a little harder.
[1] If the assurances that chunks of code can't be regurgitated are true, and that makes it all fine both morally and legally, why are none of the publicly usable LLMs, such as MS's Copilot, trained on, say, Microsoft's Office/Windows/other code as well as public repositories? Surely they should be as assured that it isn't a problem as they want everyone else to be?