
tmoertel · yesterday at 5:00 PM

Having AI tools do the review against the checklist would probably prevent the problems. However, it would probably be substantially inferior as a teaching tool for your team. The exercise of having reviewers hunt the checklisted vulnerabilities for themselves is what develops the mental muscles needed to understand the vulnerabilities in depth and avoid them when designing and writing future code.

But, yes, I'd augment any manual review with a checklist and an AI review as a final step. If the AI then catches any problems, your reviewers will be primed to think about why they overlooked them.


Replies

dylan604 · yesterday at 5:24 PM

> The exercise of having reviewers hunt the checklisted vulnerabilities for themselves is what develops the mental muscles needed to understand the vulnerabilities in depth and avoid them when designing and writing future code.

Could not agree more strongly. These automagic tools are one thing in the hands of a dev who groks basics like the ones in these examples. It would also be fine if new devs were actually reviewing the generated code to understand it, but so much is just vibe coded and deployed as soon as it "works". I get flak for not immediately deploying generated code, because I want to take the time to understand how it works. It's really grating, and a lot of friction is coming from it.

capitalhilbilly · yesterday at 5:48 PM

For vulnerabilities of this nature, is there really a point in training if an AI will catch them from now on? It seems like a variant of the allowing-calculators problem, and maybe of the problem codeless platforms would have had. If this style of bug doesn't change the design in any meaningful way, then the user can just write pseudo-variables and let the AI normalize them to safe code, and their ability to work without the AI and IDE is probably less relevant than freeing up their cognitive load for more complex constraint problems.

ndriscoll · yesterday at 6:59 PM

Suppose we still need humans writing code and caring about this stuff for the foreseeable future, so we need people to continue learning about the ways things can go wrong. For something like injection, you still ideally have a lint rule that says "don't concatenate things that look like SQL/HTML/etc.; use the correct macros for string interpolation". What does it actually teach for a reviewer to tell you that? You can ask the reviewer for more information, but you can ask your teammate anyway if you don't understand why the linter is mad. You can also ask the robot, who will patiently explain it to you even long after all of the knowledgeable humans have retired or died. The lint rule could even link to a prompt asking the robot to explain it:

https://chatgpt.com/share/69f10515-8808-83ea-abe3-a758d3144c...
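
To make the lint rule concrete, here is a rough sketch (Python with the stdlib sqlite3 module; the users table and the get_user_* functions are made up for illustration) of the pattern such a rule flags and the parameterized form it steers you toward:

    # Rough sketch; names are illustrative, not from any real codebase.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
    conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

    def get_user_unsafe(name):
        # What the lint rule flags: user input concatenated straight into SQL.
        return conn.execute(
            "SELECT * FROM users WHERE name = '" + name + "'").fetchall()

    def get_user_safe(name):
        # What it steers you toward: a placeholder plus a bound parameter,
        # so the driver, not the programmer, handles quoting.
        return conn.execute(
            "SELECT * FROM users WHERE name = ?", (name,)).fetchall()

    print(get_user_unsafe("x' OR '1'='1"))  # [('alice', 'admin')] -- injected
    print(get_user_safe("x' OR '1'='1"))    # [] -- treated as a literal name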

If people aren't learning more with AI, that's a meta skill they need to develop.

As for training the review muscles, why would you do that if you have a linter that rejects the code when you make the mistake? I don't expect reviewers to check whether you eschew nulls or uninitialized variables; I expect the compiler to do that, and I expect that, over time, more and more things will become tooling concerns (especially given that rigid tools with appropriate feedback are clearly a massive force multiplier for LLMs).
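
And a "tooling concern" does not have to be anything exotic. Here is a toy sketch (emphatically not any real linter's rule) of a check that rejects dynamically built SQL before a human reviewer ever has to care:

    # Toy illustration only: flag .execute() calls whose first argument
    # is not a constant string literal.
    import ast

    SOURCE = '''
    cur.execute("SELECT * FROM users WHERE name = '" + name + "'")
    cur.execute("SELECT * FROM users WHERE name = ?", (name,))
    '''

    class ExecuteChecker(ast.NodeVisitor):
        def visit_Call(self, node):
            calls_execute = (isinstance(node.func, ast.Attribute)
                             and node.func.attr == "execute")
            if calls_execute and node.args and not isinstance(node.args[0], ast.Constant):
                print(f"line {node.lineno}: dynamically built SQL passed to execute()")
            self.generic_visit(node)

    # Only the concatenated query trips the check; the parameterized one passes.
    ExecuteChecker().visit(ast.parse(SOURCE))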
