You'll be prompting, evaluating, and iterating on entirely finished pieces of software, and you'll be able to see multiple attempts at each solve at once; none of this deep-in-the-weeds bug-fixing stuff.
We're rapidly approaching a world where a lot of software will be made without hiring an engineer at all. Maybe not the hardest, most complex, or most novel software, but a lot of software that previously required a team of 3-15 won't have a single dev.
My current estimate is mid-2026.
Our current popular stack is quicksand.

Unless we're talking about .NET Core, Java, Django, and other stable platforms like those.
This may be a quick quip or a rant, but the things we say have a way of reinforcing how we think, so I suggest refining until what we say cuts to the core of the matter. The claim above is a false dichotomy. Setting aside advertisements and hype, mapping between AI capabilities and human ones is complicated. There is high-quality writing on this to be found; I recommend reading literature reviews on evals.
If you give a junior an overly broad prompt, they are going to have to do a ton of searching and reading to figure out what they need to do. If you give them specific instructions, including which files to work in, they are more likely to get it right.
I never said they were replacements. At best, they're tools that are incredibly effective when used on the correct type of problem with the right type of prompt.
> they're tools that are incredibly effective when used on the correct type of problem with the right type of prompt.
So, a junior developer who has to be told exactly what to do.
As for the "correct type of problem with the right type of prompt", what exactly are those?