Hacker News

ndriscoll · yesterday at 6:59 PM

Suppose we still need humans writing code and caring about this stuff for the foreseeable future; then we need people to continue learning about the ways things can go wrong. For something like injection, you still ideally have a lint rule that says "don't concatenate things that look like SQL/HTML/etc.; use the correct macros for string interpolation". What does it actually teach for a reviewer to tell you that instead? You can ask the reviewer for more information, but you can just as well ask a teammate if you don't understand why the linter is mad. You can also ask the robot, who will patiently explain it to you long after all of the knowledgeable humans have retired or died. The lint message could even link to a prompt asking the robot to explain it:

https://chatgpt.com/share/69f10515-8808-83ea-abe3-a758d3144c...
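
To make the lint rule's advice concrete, here is a minimal sketch in Rust of the pattern it pushes you toward, assuming the rusqlite crate (the table, input, and query are all made up for illustration):

    use rusqlite::{Connection, params};

    fn main() -> rusqlite::Result<()> {
        let conn = Connection::open_in_memory()?;
        conn.execute("CREATE TABLE users (name TEXT)", [])?;

        let name = "x' OR '1'='1"; // hostile user input

        // What the lint rule forbids: splicing the input into the SQL text.
        // let query = format!("SELECT count(*) FROM users WHERE name = '{name}'");

        // What it asks for instead: a parameterized query, so `name` stays
        // data and is never parsed as SQL.
        let count: i64 = conn.query_row(
            "SELECT count(*) FROM users WHERE name = ?1",
            params![name],
            |row| row.get(0),
        )?;
        println!("matching rows: {count}");
        Ok(())
    }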

If people aren't learning more with AI, that's a meta-skill they need to develop.

As for training the review muscles, why would you do that when a linter rejects the code the moment you make the mistake? I don't expect reviewers to check whether you eschew nulls or uninitialized variables; I expect the compiler to do that, and I expect that over time more and more things will become tooling concerns (especially given that rigid tools with appropriate feedback are clearly a massive force multiplier for LLMs).
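
For example, here is a pure-Rust sketch of that division of labor; neither check below needs a human reviewer:

    fn main() {
        let _x: i32;
        // Using `_x` before assigning it doesn't get past the compiler:
        // println!("{_x}"); // error[E0381]: `_x` isn't initialized

        // And there is no null to forget to check: absence is a type the
        // compiler forces you to handle before you can touch the value.
        let maybe: Option<i32> = None;
        match maybe {
            Some(v) => println!("got {v}"),
            None => println!("nothing"),
        }
    }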


Replies

tmoertel · yesterday at 7:29 PM

Two issues here. First, teams that decide to delegate security responsibilities to AI are, in general, more likely to play fast and loose, and thus less likely to "ask the robot to patiently explain" problems until they understand the root causes and update their mental models to prevent those problems.

Second, to use your example, the ChatGPT response you provided does a crappy job of explaining the root cause of the problem: namely, that every string is drawn from some underlying language that gives the string its meaning, and therefore, when strings from different languages are combined, a string drawn from one language can be interpreted as if it were drawn from another and, consequently, be given an unintended meaning.
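
A quick sketch of that language-crossing in Rust (the attack string is a stock example, chosen only to make the shift in meaning visible):

    fn main() {
        // This string is drawn from the language of "user-supplied names"...
        let name = "x' OR '1'='1";

        // ...but concatenated into a string drawn from the SQL language,
        // it gets reinterpreted *as* SQL:
        let query = format!("SELECT * FROM users WHERE name = '{name}'");
        println!("{query}");
        // Prints: SELECT * FROM users WHERE name = 'x' OR '1'='1'
        // The "name" now contributes an OR clause that matches every row:
        // a string from one language, given an unintended meaning in another.
    }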

So, if the idea is that smart teams can delegate to ChatGPT not only the catching of problems but also the explaining of them (presumably because it is a better teacher than the senior engineers who actually understand the salient concepts), I'd say AI ain't there yet.
