But, yes, I'd augment any manual review with a checklist and an AI review as a final step. If the AI then catches any problems, your reviewers will be primed to think about why they overlooked them.
Could not agree more strongly. These automagic tools are one thing in the hands of a dev who groks basics like these examples. It would be another if new devs were actually reviewing the generated code to understand it, but so much of it is just vibe coded and deployed as soon as it "works". I get flak for not immediately deploying generated code, because I want to take the time to understand how it works. It's really grating, and a lot of friction is coming from it.
https://chatgpt.com/share/69f10515-8808-83ea-abe3-a758d3144c...
If people aren't learning more with AI, that's a meta skill they need to develop.
As for training the review muscles, why would you do that if you have a linter that rejects the change whenever you make the mistake? I don't expect reviewers to check whether you eschew nulls or uninitialized variables; I expect the compiler to do that, and I expect that over time more and more things will become tooling concerns (especially given that rigid tools with appropriate feedback are clearly a massive force multiplier for LLMs).
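For example, here's the kind of null mistake I'd rather have the tooling catch than a reviewer (a minimal Python sketch, assuming mypy or a similar checker wired into CI):

```python
from typing import Optional

def find_user(user_id: int) -> Optional[str]:
    """Return the username, or None if the user doesn't exist."""
    return {1: "alice"}.get(user_id)

name = find_user(42)

# mypy rejects this line, so it never even reaches human review:
#     print(name.upper())
# error: Item "None" of "Optional[str]" has no attribute "upper"

# The only version the tooling lets through is the narrowed one:
if name is not None:
    print(name.upper())
```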
Second, to use your example, the ChatGPT response you provided does a crappy job of explaining the root cause of the problem: Namely, that every string is drawn from some underlying language that gives the string its meaning, and therefore when strings of different languages are combined, the result can cause a string drawn from one language to be interpreted as if it were drawn from another and, consequently, be given an unintended meaning.
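To make that concrete (my sketch of the point, not ChatGPT's):

```python
# A value from the "user input" language: to us, just inert data.
name = "foo' OR 1=1 --"

# Concatenation splices it into a string in the SQL language, where the
# same characters stop being data: the quote closes the literal, OR 1=1
# becomes a predicate, and -- comments out the rest of the statement.
query = "SELECT * FROM users WHERE name = '" + name + "'"
print(query)
# SELECT * FROM users WHERE name = 'foo' OR 1=1 --'
```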
So, if the idea is that smart teams can not only delegate the catching of problems but also the explanation of those problems to ChatGPT -- presumably because it is a better teacher than the senior engineers who actually understand the salient concepts -- I'd say AI ain't there yet.
Is that true? Is that also true of e.g. teams using type checkers to avoid nulls or exceptions? Or teams that use memory safe languages to avoid memory corruption? Or using a library that has an `unsafeStringToSql` API surface, and a linter to flag its use (where you're expected to use safe macros instead)? My experience is that better tools (or languages and library designs) scanning for issues lead to fewer defects and less playing fast and loose since the entire point of the tools is to ban these mistakes.
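Concretely, the shape I have in mind is something like this (the names `query` and `unsafe_string_to_sql` are made up for illustration, with sqlite3 standing in for a real driver):

```python
import sqlite3

_conn = sqlite3.connect(":memory:")
_conn.execute("CREATE TABLE users (name TEXT)")

def query(sql: str, params: tuple = ()) -> list:
    """Blessed entry point: placeholders only; values are bound by the driver."""
    return _conn.execute(sql, params).fetchall()

def unsafe_string_to_sql(sql: str) -> list:
    """Escape hatch for genuinely dynamic SQL. A lint rule flags every
    call site of this name, so each use needs an explicit suppression."""
    return _conn.execute(sql).fetchall()

# Passes the linter: the user value can only ever be data.
query("SELECT * FROM users WHERE name = ?", ("alice",))

# Flagged by the linter: any string-built SQL has to flow through the banned name.
unsafe_string_to_sql("SELECT name FROM users")
```

The point of the design is that reviewers no longer scan for concatenation by eye; the linter turns it into a mechanical check.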
On education, it literally tells you that the top concern is SQL injection made possible by concatenating strings, and gives an example of an auth bypass: `name = "foo' OR 1=1 --"`. It also notes that this isn't a minor nitpick: the actual fix is to do something fundamentally different (query objects with bound parameters). If you don't understand what that means, you can just ask:
> Elaborate on 1
> Walk through examples of what goes wrong and why, and how the solution avoids it
etc. The knowledge is all there; you just need to ask for it. It's an infinitely patient teacher with infinite attention to give you. You can keep asking follow-ups, ask it to check your understanding, and so on. Or there are tons of materials about it on the web or in textbooks, and if you still don't understand, you can still ask a more senior engineer to explain what's wrong.
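And to make the injection and the fix concrete, here's a runnable sketch (sqlite3 standing in for whatever database the session assumed):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
conn.execute("INSERT INTO users VALUES ('alice', 1)")

name = "foo' OR 1=1 --"  # the auth-bypass payload from the session

# String concatenation: the payload executes as SQL and matches every row.
print(conn.execute("SELECT * FROM users WHERE name = '" + name + "'").fetchall())
# [('alice', 1)]

# Bound parameter: the payload stays data and matches nothing.
print(conn.execute("SELECT * FROM users WHERE name = ?", (name,)).fetchall())
# []
```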
Yes. See: vibe coding. See also: the shockingly widespread hype for and acceptance of vibe coding across industries that ought to know better.
Do you deny that there is a correlation between AI use and not knowing what you are doing? Isn't one of the big selling points of AI that it lets "regular people" create "real world" projects they could only dream about previously?
I am not saying that serious engineers don’t use AI or that when they use it, they do so foolishly. I’m only pointing out that AI has let a lot of people who don’t know what they’re doing crank out code without understanding how it works (or doesn’t).
> Is that also true of e.g. teams using type checkers to avoid nulls or exceptions? Or teams that use memory safe languages to avoid memory corruption?
No, it is not true of those teams. When people choose to use languages with statically checked types or with memory safety or the other examples you offered, they are rarely doing it because they have no idea how to write sound code. But when people turn to AI to crank out code they couldn’t write themselves (see: vibe coding), that’s what they are doing.
> On education, [ChatGPT] literally tells you that the top concern is SQL injection made possible by concatenating strings, and gives an example of an auth bypass: `name = "foo' OR 1=1 --"`. If you don't understand what that means, you can just ask...
Again, that’s a crappy explanation of the real problem. It promotes no understanding of the underlying issue—that strings are drawn from languages that give them their meanings. And, unless you understand that it’s a crappy explanation that ignores the underlying issue—which a person being gaslit by the crappy explanation would not—what stimulus is going to provoke you to ask for a better explanation? How are you going to know that the crappy explanation is crappy and tell ChatGPT to take another direction?
> The knowledge is all there; you just need to ask for it. It's an infinitely patient teacher with infinite available attention to give to you.
Yeah, and if it steers you down a crappy path, such as in your sql-injection session with ChatGPT, it will be infinitely happy to keep leading you down that crappy path. Unless you know that it’s leading you down a crappy path, you won’t be able to tell it to stop and take another path. But if you are relying on the AI to tell you what’s good and what’s crappy, you won’t be able to tell which is which. You’ll be stuck on whatever path the AI first presents to you.
> Or there are tons of materials about it on the web or in textbooks, and if you still don't understand, you can still ask a more senior engineer to explain what's wrong.
And that’s equivalent to “don’t ask the AI, use a traditional resource,” right?