I'm not following the scenario here. The original discussion was around teams using these tools, not vibe coders chasing their dreams.
If you're a "regular person" vibe coder, you're not doing code reviews with a team anyway. You presumably had no teacher and no one to tell you your mistakes. So having a security bot is strictly an improvement.
If you're on a professional team, then you're presumably in the non-foolish group that already has standards and is using the bot as one more quality tool alongside the others. And if they don't have standards and don't know this stuff already, well, the bot is again an improvement. At least it raises the issue so someone can ask what it means.
If you're a professional, I also assume you've heard of SQL injection (does it never come up in a CS degree?), so you don't really need more than a "this method is exposed to SQL injection" explanation. It's like saying "tail recursion is preferred because it compiles to a loop, so it's not prone to stack overflow". It assumes it doesn't need to elaborate further, but if you don't understand a term, you can just ask. Or look it up.
And yeah, books and Wikipedia still exist even if you use an automated linter. You can go read about these things if you don't know them. I frequently tell my team members to go read about things. Just the other day I ended up in a digression about CSRF (we work on low-level networking, so it's generally not relevant to us), and I suggested the person I was talking to go read about it if they were interested, rather than making them listen to me ramble.
Also, I'm still unclear on why you think the explanation is crappy. It says the problem is that you're building a query via simple string substitution, shows how quotes can be abused if you do that (so it concretely illustrates the problem), and explains that the better solution is better because it builds a structured object: a query with placeholders, followed separately by the parameters (so the query's shape can't be misinterpreted). That seems better than "strings are drawn from languages that give them their meanings"?
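For what it's worth, the contrast the explanation draws fits in a few lines. Here's a minimal sketch in Python (the explanation under discussion was about Scala's SQL interpolators; Python's sqlite3 placeholders are just a stand-in for the same idea, and the table and data are made up for illustration):

```python
import sqlite3

# In-memory database with one illustrative table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
conn.execute("INSERT INTO users VALUES ('alice', 0), ('bob', 1)")

user_input = "x' OR '1'='1"  # attacker-controlled string

# Vulnerable: simple string substitution lets quotes in the input
# change the shape of the query itself.
vulnerable = f"SELECT name FROM users WHERE name = '{user_input}'"
leaked = conn.execute(vulnerable).fetchall()
print(leaked)  # returns every row instead of none

# Safe: the query's shape is fixed by the placeholder; the parameter
# travels separately and can only ever be treated as a value.
safe = conn.execute(
    "SELECT name FROM users WHERE name = ?", (user_input,)
).fetchall()
print(safe)  # []
```

The point the explanation makes is exactly the difference between those two calls: in the second one there is no string in which the query text and the user data are mixed, so there is nothing to misinterpret.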
The root of this subthread was this claim that I made and you questioned:
> Teams that decide to delegate security responsibilities to AI are more likely to do things fast and loose in general.
Note the word delegate. I claimed that teams that delegate security responsibilities to AI are more likely to play fast and loose in general. That’s because delegating security to AI—not supplementing existing security practices with AI—is likely to be a good way to launch insecure garbage into the world. AI simply isn’t good enough to get security right on its own. Maybe someday it will be good enough, but like I wrote earlier, it ain’t there yet. And any team that plays fast and loose with security is likely to play fast and loose in general.
See any problems with that logic?
I only used vibe coding as an obvious example that shows there are lots of teams that delegate security responsibilities to AI. (Vibe coders are delegating almost everything to AI.)
> If you're a "regular person" vibe coder, you're not doing code reviews with a team anyway. You presumably had no teacher and no one to tell you your mistakes. So having a security bot is strictly an improvement.
How is it strictly an improvement? Before vibe coding, “regular people” couldn't launch insecure garbage upon an unsuspecting world—they couldn't launch anything. Do you believe that it’s "strictly better" that now everyone can launch insecure garbage courtesy of their AI minions? Do you think it’s “strictly better” that lots of users are having their data sucked into insecure apps and web sites that are destined to be hacked?
> Also I'm still unclear on why you think the explanation is crappy.
It’s crappy because it tells you how to use a tool (a custom SQL interpolator) without helping you understand the cause of the problem that the tool is trying to solve. You could read this ChatGPT explanation about avoiding SQL injection in Scala and not be any wiser about how to avoid that problem in other programming languages.
Worse, you would never learn from this explanation that the underlying cause of SQL injection is the same as the cause of cross-site scripting (XSS) holes and a host of other logic and security problems in software. That’s because ChatGPT was trained on explanations of these problems scraped from the internet, and 99% of those explanations are superficial because the people who wrote them didn’t understand the underlying issues.
But if you deeply understand the following, you will never make this kind of mistake again in any programming language:
1. Every string is drawn from an underlying language and must conform to the syntax and semantics of that language.
2. To combine strings safely, you must ensure that they are all drawn from the same language and are combined according to that language’s syntax and semantics.
Therefore, as a programmer, you must (a) understand the language beneath each and every string, (b) combine strings only when you can prove that they have the same underlying language, and (c) combine strings only according to that underlying language’s syntax and semantics.
If you understand these things, you will know how to avoid all SQL injection and XSS holes and related problems in all programming languages. Escaping, for example, will make sense: it converts a string in one language into its equivalent string in another language. Further, you will know when you can safely delegate some of your responsibilities to tools such as parsers, type systems, custom SQL interpolators, and the like.
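To make the escaping point concrete, here's a small sketch (my illustration, not something from the ChatGPT explanation; it uses Python's stdlib `html` module, but any language's HTML-escaping routine shows the same thing). The input is a string in the plain-text language; escaping translates it into the string that *means the same text* in the HTML language:

```python
import html

# A string drawn from the plain-text language.
comment = '<script>alert("pwned")</script> & more'

# Escaping translates it into the HTML language: same meaning,
# now spelled according to HTML's syntax, so it can be embedded
# in an HTML document without changing the document's structure.
as_html = html.escape(comment)
print(as_html)
# &lt;script&gt;alert(&quot;pwned&quot;)&lt;/script&gt; &amp; more

# Unescaping translates back; the round trip preserves the meaning.
assert html.unescape(as_html) == comment
```

XSS is what happens when you paste the plain-text string into an HTML document without doing that translation, which is structurally the same mistake as pasting user input into a SQL string.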
But you wouldn’t learn any of this from the ChatGPT explanation you received. Worse, you wouldn’t even think to ask for this deeper explanation because you would have no reason to suspect from ChatGPT’s explanation that this deeper explanation even existed.
In any case, I appreciate your willingness to continue this conversation. It’s been fun and educational and has forced me to articulate some of my ideas more clearly. Thanks!