B) They are shit for frontend work. This is the big one, and maybe we'll bridge it someday, but we really haven't closed the gap between an LLM seeing code and matching it to something on screen. People are well aware that agents need tight feedback loops to really be effective, which is more doable for server-side code, where you can set up a good test apparatus (still a pain in the ass for other reasons). But the thing a developer does most often is instantly see what their code changes did to a project visually. That's a huge gap with LLMs. We're at the level where an LLM can check that the site loads at all, and that's it. We can't trust it to make functioning UI, and we sure as hell can't trust it to compare code against UI unless we ask it something very specific. Watch the LLM clocks sometime [1]. I was shocked to see even 5.2 codex producing complete nonsense every few minutes.
C) My coworkers have narrow, specialized experience in the field we work in, which means they routinely breeze past things that a general approximation of intelligence like Claude gets tripped up on. ChatGPT is trained on the entire corpus of humanity, yet doesn't understand that there is such a thing as a bad language server, or that a linting error is not inherently problematic and doesn't warrant $5 in tokens to investigate. This is something a junior developer understands. Why do I need to constantly watch my agent, stop it, and re-explain things we learned in the first week of programming?
D) Because LLMs are an approximation of correct solutions, god help you if you work in spaces where your syntax looks similar to another framework's. I work in a lot of legacy apps that are framework spin-offs of other framework spin-offs: stuff that's half-documented and poorly made, with no tests and no good IDE support. So if something is an offshoot of OctoberCMS, ChatGPT will just make up a bunch of methods from OctoberCMS even if they are completely invalid in this project. Agents are completely useless in projects like these.
E) Local models aren't there yet, and the fact that the entire industry is fine with letting its skills atrophy so badly that it is completely dependent for its livelihood on a few companies it hadn't even heard of five years ago is deeply concerning and unprofessional. I personally wouldn't hire someone that short-sighted. Believing that LLMs will take over all coding work is a fundamental disbelief in humanity, and I'm just not behind it. The most amazing thing about development is how much knowledge sharing we have and how cooperative we are. A future where no one knows how to do anything anymore is a future where tech truly goes off the rails.