Hacker News

tmoertel, today at 1:36 AM

> Is that true [that teams that decide to delegate security responsibilities to AI are more likely to do things fast and loose in general]?

Yes. See: vibe coding. See also: The shockingly widespread hype for and acceptance of vibe coding across industries that ought to know better.

Do you deny that there is a correlation between AI use and not knowing what you are doing? Isn’t one of the big selling points of AI that it lets “regular people” create “real world” projects that they could only dream about previously?

I am not saying that serious engineers don’t use AI or that when they use it, they do so foolishly. I’m only pointing out that AI has let a lot of people who don’t know what they’re doing crank out code without understanding how it works (or doesn’t).

> Is that also true of e.g. teams using type checkers to avoid nulls or exceptions? Or teams that use memory safe languages to avoid memory corruption?

No, it is not true of those teams. When people choose to use languages with statically checked types or with memory safety or the other examples you offered, they are rarely doing it because they have no idea how to write sound code. But when people turn to AI to crank out code they couldn’t write themselves (see: vibe coding), that’s what they are doing.

> On education, [ChatGPT] literally tells you that the top concern is SQL injection from essentially concatenating strings, and gives an example of an auth bypass: `name = "foo' OR 1=1 --"`. If you don't understand what that means you can just ask...

Again, that’s a crappy explanation of the real problem. It promotes no understanding of the underlying issue—that strings are drawn from languages that give them their meanings. And, unless you understand that it’s a crappy explanation that ignores the underlying issue—which a person being gaslit by the crappy explanation would not—what stimulus is going to provoke you to ask for a better explanation? How are you going to know that the crappy explanation is crappy and tell ChatGPT to take another direction?
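To make the “strings drawn from languages” point concrete, here is a minimal sqlite3 sketch (the table, column names, and `find_user` helper are mine, not from the thread) showing why the `foo' OR 1=1 --` payload works: splicing user input into the query text hands the attacker control of the SQL grammar itself.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

def find_user(name):
    # Vulnerable pattern: the attacker-controlled string becomes part of
    # the SQL source code, so SQL's grammar, not the developer, decides
    # what it means.
    query = "SELECT * FROM users WHERE name = '" + name + "'"
    return conn.execute(query).fetchall()

print(find_user("bob"))             # [] -- no such user
print(find_user("foo' OR 1=1 --"))  # [('alice', 's3cret')] -- payload became SQL
```

The payload closes the quoted literal, appends an always-true `OR 1=1`, and comments out the trailing quote, so the query matches every row.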

> The knowledge is all there; you just need to ask for it. It's an infinitely patient teacher with infinite available attention to give to you.

Yeah, and if it steers you down a crappy path, such as in your sql-injection session with ChatGPT, it will be infinitely happy to keep leading you down that crappy path. Unless you know that it’s leading you down a crappy path, you won’t be able to tell it to stop and take another path. But if you are relying on the AI to tell you what’s good and what’s crappy, you won’t be able to tell which is which. You’ll be stuck on whatever path the AI first presents to you.

> Or there are tons of materials about it on the web or in textbooks, and if you still don't understand, you can still ask a more senior engineer to explain what's wrong.

And that’s equivalent to “don’t ask the AI, use a traditional resource,” right?


Replies

ndriscoll, today at 2:03 AM

I'm not following the scenario here. The original discussion was around teams using these tools, not vibe coders chasing their dreams.

If you're a "regular person" vibe coder, you're not doing code reviews with a team anyway. You presumably had no teacher and no one to tell you your mistakes. So having a security bot is strictly an improvement.

If you're on a professional team, then you're presumably in the non-foolish group that already had standards, and you're using it as a tool as with any of the other quality tools you use. And if a team doesn't have standards and doesn't know this stuff already, well, the bot is again an improvement. At least it raises the issue for someone to ask what it means.

If you're a professional, I also assume you've heard of SQL injection (does it never come up in a CS degree?), so you don't really need more than a "this method is exposed to SQL injection" explanation. It's like saying "tail recursion is preferred because it compiles to a loop, so it's not prone to stack overflow". It assumes it doesn't need to elaborate further, but if you don't understand a term, you can just ask. Or look it up.

And yeah books or Wikipedia still exist even if you use an automated linter. You can go read about these things if you don't know them. I frequently tell my team members to go read about things. Actually I ended up in a digression about CSRF the other day (we work on low level networking, so generally not relevant), and I suggested the person I was talking to could go read about it if they're interested so as not to make them listen to me ramble.

Also, I'm still unclear on why you think the explanation is crappy. It says the problem is that you're building the query via simple string substitution, shows how you can abuse quotes if you do that (so it concretely illustrates the problem), and explains why the better solution is better: it builds a structural object, a query with placeholders followed separately by the parameters, so the query's shape can't be misinterpreted. That seems better than "strings are drawn from languages that give them their meanings"?
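The "structural object with placeholders" fix can be sketched in a few lines of sqlite3 (again with a made-up table and helper, not anything from the thread): the query shape is fixed up front and the value travels separately as data, so the same payload is inert.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users VALUES ('alice')")

def find_user(name):
    # Parameterized query: '?' is a placeholder; the driver binds the
    # value as data, so it can never be parsed as SQL.
    return conn.execute(
        "SELECT * FROM users WHERE name = ?", (name,)
    ).fetchall()

print(find_user("alice"))           # [('alice',)]
print(find_user("foo' OR 1=1 --"))  # [] -- the payload is just a literal string
```

Here the injection attempt matches nothing, because the database looks for a user literally named `foo' OR 1=1 --`.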
