> The in-lab study results showed that developers using a poisoned ChatGPT-like tool were more prone to including insecure code than those using an IntelliCode-like tool or no tool.
Looking at the actual paper, the results suggest that developers miss security holes in fully generated blocks of code more often than in code that an AI merely completes. Both versions of AI tooling seem to have increased error rates relative to no tool in this very small study (10 devs per category).
Those results bear almost no relation to the submitted title's claim that developers don't care about poisoning attacks.
For tasks I want help implementing, I would rather consume the solution as a testable, auditable library than as ephemeral copy-paste delivered by a mischievous fae of air and darkness.
From my perspective, your perspective is like a horse and buggy driver feeling vindicated when a "horseless carriage" driver accidentally drives one into a tree. The cars will get easier to drive and safer in crashes, and the drivers will learn to pay attention in certain ways they previously didn't have to.
Will there still be occasional problems? Sure, but that doesn't mean that tying your career to horses would have been a wise move. Same here.
(Also, this article is about "poisoned ChatGPT-like tools," which says very little about the tools most developers are actually using.)
I'm always reminded of this: "Logged onto the World Wide Web, I hunt for the date of the Battle of Trafalgar. Hundreds of files show up, and it takes 15 minutes to unravel them—one's a biography written by an eighth grader, the second is a computer game that doesn't work and the third is an image of a London monument. None answers my question, and my search is periodically interrupted by messages like, 'Too many connections, try again later.'" -- Cliff Stoll, 1995
What these tools change is speed: they make the process much faster and add a (rather questionable) imprimatur of quality from a vendor that may not actually be a good curator of code samples.
The headline made me think this sort of attack involved someone poisoning via something sent through the API, but how can I possibly concern myself with the training data used by the AI tools I rely on?
I generally read and understand the suggestions made by the code editor, so I’m not too worried that the autosuggestions are poisoned, but I mostly feel like there’s nothing I can do about it.
But that's the whole point of the article: the danger of blindly trusting the tool without evaluating the code for correctness and safety.
I think the comment means, "I can't evaluate whether the code is safe" – not, "I just don't want to." And my whole point is, that's not true. :-)
Software engineers can evaluate AI-generated code. If the complexity is too difficult for an engineer, they should get the assistance of a colleague, work on another feature, or disable the AI tool altogether.
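As a hypothetical illustration of the kind of review being described here (the function names and schema are invented, not from the paper), spotting an unsafe suggestion can be as simple as noticing string-built SQL and rewriting it as a parameterized query:

```python
import sqlite3

def find_user_unsafe(conn: sqlite3.Connection, username: str):
    # The kind of suggestion a poisoned (or merely careless) assistant
    # might produce: string interpolation builds the SQL, so a crafted
    # username like "x' OR '1'='1" changes the meaning of the query.
    query = f"SELECT id, username FROM users WHERE username = '{username}'"
    return conn.execute(query).fetchall()

def find_user_safe(conn: sqlite3.Connection, username: str):
    # What a reviewing engineer should rewrite it to: a parameterized
    # query, where the driver handles escaping the value.
    query = "SELECT id, username FROM users WHERE username = ?"
    return conn.execute(query, (username,)).fetchall()
```

Against a toy table, the unsafe version returns every row for the payload `x' OR '1'='1`, while the parameterized version correctly returns nothing.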
I’m constantly correcting the things that come out of Copilot, and it’s not possible to use these kinds of tools without doing that.
It mostly lets me autocomplete and write faster, guessing which functions I’m about to type. But I don’t think it’s possible, at this point, to write code with these tools without understanding and reading the code that comes out of them. Code used that way just won’t work.