I do think that LLMs will increase the volume of bad code though. I use Cursor a lot, and occasionally it will produce perfect code, but often I need to direct and refine, and sometimes throw the result away entirely. But I'm sure many devs will get lazy and just push once they've got the thing working...
I think the issue is that we are currently being sold that it is. I'm blown away by how useful AI is, and how stupid it can be at the same time. Take a look at the following example:
https://app.gitsense.com/?doc=f7419bfb27c896&highlight=&othe...
If you click on the sentence, you can see how dumb Sonnet-3.5 and GPT-4 can be. Each model was asked to spell-check and grammar-check the sentence 5 times, and you can see that GPT-4o-mini was the only one that got it right all 5 times. The other models mostly got it comically wrong.
I believe LLMs are going to change things for the better for developers, but we need to properly set expectations. I suspect this will be difficult, since a lot of VC money is being pumped into AI.
I also think a lot of mistakes can be prevented if your prompt asks the model to explain how and why it did what it did. For example, the prompt that was used in the blog post should include "After writing the test, summarize how each rule was applied."
The message that these systems are flawed appears to be pretty universal to me:
ChatGPT footer: "ChatGPT can make mistakes. Check important info."
Claude footer: "Claude can make mistakes. Please double-check responses."
https://www.meta.ai/ "Messages are generated by AI and may be inaccurate or inappropriate."
etc etc etc.
I still think the problem here is science fiction. We have decades of sci-fi telling us that AI systems never make mistakes, but instead will cause harm by following their rules too closely (paperclip maximizers, HAL in 2001: A Space Odyssey, etc).
Turns out the actual AI systems we have make mistakes all the time.
I'd say parent is absolutely correct - we ARE being sold (quite literally, through promotional material, i.e. ads) that these models are way more capable than they actually are.
I do see your science fiction angle, but I think the bigger issue is the media, VCs, etc. are not clearly spelling out that we are nowhere near science fiction AI.
What absolute nonsense. What an absurd false equivalence. It's not that we expect perfection or even human-level performance from "AI". It's that the crap that comes out of LLMs is not even at the level of a first-year student. I've never in my entire life reviewed the code of a junior engineer and seen them invent third-party APIs from whole cloth. I've never had a junior send me code that generates a payload that doesn't even validate at the first layer of the operation, with zero manual testing to check it. No junior has ever asked me to review a pull request containing references to an open source framework that doesn't exist anywhere in my application. Yet these scenarios are commonplace in "AI" generated code.
If an LLM hallucinates a method that doesn't exist I find out the moment I try and run the code.
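To illustrate (a minimal, hypothetical sketch: json.parse is a plausible hallucination because JavaScript has JSON.parse, while Python's json module only has json.loads):

```python
# A hallucinated API fails the moment the code runs: Python's json module
# has json.loads, not json.parse, so this raises AttributeError immediately.
import json

try:
    data = json.parse('{"ok": true}')  # raises AttributeError: no such attribute
except AttributeError:
    print("hallucinated method caught at runtime")
```

The error surfaces on the very first run, so no careful review is needed to spot this class of mistake.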
If I'm using ChatGPT Code Interpreter (for Python) or Claude analysis mode (for JavaScript) I don't even have to intervene: the LLM can run in a loop, generating code, testing that it executes without errors and correcting any mistakes it makes.
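That loop can be sketched roughly like this (a minimal illustration, assuming a hypothetical llm_generate(prompt) function that returns Python source code; the real Code Interpreter / analysis tools do this internally):

```python
# Sketch of a generate-run-correct loop: run the generated code in a
# subprocess, and feed any error output back into the next generation.
import subprocess
import sys
import tempfile

def run_with_retries(prompt, llm_generate, max_attempts=3):
    feedback = ""
    for attempt in range(max_attempts):
        code = llm_generate(prompt + feedback)
        with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
            f.write(code)
            path = f.name
        result = subprocess.run([sys.executable, path],
                                capture_output=True, text=True)
        if result.returncode == 0:
            return code, result.stdout  # code ran cleanly
        # Append the error so the next attempt can correct the mistake.
        feedback = ("\n\nThe previous code failed with:\n"
                    + result.stderr + "\nPlease fix it.")
    raise RuntimeError("could not produce working code")
```

This only catches mistakes that make the code fail to execute; logic errors that run cleanly still require human review, which is the point of the next paragraph.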
I still need to carefully review the code, but the mistakes that cause it not to run at all are by far the easiest to identify.