In fact, this is a huge chunk of the value a developer brings to the table.
A program is a socially constructed artifact: it helps communicate and express a model that otherwise lives only in people's heads, with variance across engineers (divergence that gets reconciled as the program develops). Determining what should or should not be done is a matter not just of domain knowledge but of practical reason, which is to say prudence, a virtue that can only be acquired through experience. It is the ability to apply universal principles to particular situations.
This is why young devs, even when clever in some local sense, are worse at understanding the right moves to make in context. Code does not stand alone. It exists entirely in the service of something and is bound by constraints that are external to it.
This means that they will very quickly help you discover all the little details that seemed so obvious to you that you didn't even think to mention them, but that were nonetheless critical to a successful implementation. The corollary is that the potential ROI of outsourcing is inversely proportional to how many of these little details your project has, and to how important they are.
So far I've found LLM coding to be much the same. For projects where those details are relatively unimportant, LLMs can save me a bunch of effort. But I would not want to let an LLM build and maintain something like an API or a database schema. Doing a good job on those requires too much knowledge of expected usage patterns and too much working through of design tradeoffs. And they tend to be incredibly expensive to change after deployment, so it pays to take your time and get your hands dirty.
I also kind of hate them for writing tests, for similar reasons. I know many people love them for it, because writing tests isn't super happy fun times, but for my part I'm tired of dealing with LLM-generated test suites so brittle that they actively hinder future development.
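To make "brittle" concrete, here's an illustrative sketch (the `sync_user` function and both tests are invented for this example, not taken from any real suite). The brittle version pins down the exact internal call sequence with mocks, so a harmless refactor fails it; the behaviour-focused version checks only observable outcomes:

```python
from unittest.mock import MagicMock

# Hypothetical code under test, invented for illustration.
def sync_user(db, cache, user_id):
    """Load a user from the database and refresh the cache."""
    user = db.get(user_id)
    cache.set(user_id, user)
    return user

# Brittle: asserts the exact internal call pattern, so a harmless
# refactor (e.g. batching cache writes) breaks the test even though
# the observable behaviour is unchanged.
def test_sync_user_brittle():
    db, cache = MagicMock(), MagicMock()
    sync_user(db, cache, 42)
    db.get.assert_called_once_with(42)
    cache.set.assert_called_once_with(42, db.get.return_value)

# Behaviour-focused: checks the outcome that actually matters,
# using simple fakes instead of asserting on call sequences.
def test_sync_user_behaviour():
    class FakeDB:
        def get(self, uid):
            return {"id": uid}

    stored = {}

    class FakeCache:
        def set(self, uid, user):
            stored[uid] = user

    user = sync_user(FakeDB(), FakeCache(), 42)
    assert user == {"id": 42}       # caller gets the loaded user
    assert stored[42] == {"id": 42} # cache was refreshed
```

LLM-generated suites tend, in my experience, toward the first style: lots of mock assertions that encode the current implementation rather than the contract.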
There is a huge amount of programming work that consists in reinventing the wheel, i.e. in redoing something very similar to programs that have been written thousands of times before.
For this kind of work LLMs can greatly improve productivity, even if they are not much better than what you could do if you were allowed to search, copy, and paste from the programs on which the LLM was trained. The advantage of an LLM is the automation of those search/copy/paste actions and, even more than that, the removal of copyright from the original programs. Copyright law is what has produced huge amounts of superfluous programming work: work that remains necessary even when open-source solutions exist, because the programmer's employer wants to "own the IP".
On the other hand, for really novel applications, or for old applications where you want to obtain better performance than anyone has gotten before, providing an ambiguous prompt to an LLM will get you nowhere.
This seems really strange to me. Can you explain how this is different from just stealing code from other sources, or copying it wholesale from open-source repos?
The best LLMs can manage is "what's statistically-plausible behaviour for descriptions of humans in the corpus", which is not the same thing at all. Sometimes, I imagine, that might be more useful; but for programming (where, assuming you're not reinventing wheels or scrimping on your research, you're often encountering situations that nobody has encountered before), an alien mind's extrapolation of statistically-plausible human behaviour observations is not useful. (I'm using "alien mind" metaphorically, since LLMs do not appear particularly mind-like to me.)