Current AI failure modes (inconsistency over long context lengths, multi-modal inconsistency, hallucinations) make it untenable as a "full-replacement" software engineer, but effective as a short-term task agent overseen by an engineer who can review its code and quickly tell what's good from what's bad. This turns a 5x engineer into a 7x engineer, a 10x into a 13x, and so on, letting the same amount of work be done with fewer coders and, in aggregate, effectively replacing the least productive engineers.
However, as those failure modes become less and less frequent, we will gradually see "replacement". It will come in the form of senior engineers using AI tools noticing that PRs of a certain complexity are coded correctly 99% of the time by a given AI model, so they start assigning it longer, more complex tasks and stop overseeing the smaller ones. The length of tasks it can reliably complete grows longer and longer, until all a suite of agents needs is a spec, API endpoints, and the ability to serve test deployments to PMs. At first it can do only what a small, poorly run team could accomplish, but month after month it gets better, until companies start offloading entire teams to AI models and simply keep a more senior team to check and reconfigure them once in a while and manage the token budget.
This process will continue as long as AI models grow more capable and less hallucination-prone over long-context horizons, and as agentic/scaffolding systems become more robust and better designed to mitigate the weaknesses of the models that do exist. It won't be easy or straightforward, but the potential economic gains are so enormous that it makes sense that billions are being poured into any AI agent startup that can snatch a few IOI medalists and a coworking space in SF.