> Looks like every single one of the 38 vulnerabilities were either SQL injection, XSS, path traversal or "Insecure Direct Object Reference" aka failing to check the caller was allowed to access the record.
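The IDOR case in particular is just a handler that fetches a record by id without checking that the caller owns it. A minimal sketch of the pattern and the fix (hypothetical names, not code from the audited project):

```python
# Hypothetical illustration of the IDOR pattern described above.

RECORDS = {
    1: {"owner_id": 42, "body": "alice's note"},
    2: {"owner_id": 99, "body": "bob's note"},
}

def get_record_vulnerable(record_id, caller_id):
    # IDOR: returns whatever record the caller asks for by id,
    # without checking whether the caller is allowed to see it.
    return RECORDS[record_id]

def get_record_fixed(record_id, caller_id):
    # The fix is a one-line ownership check before returning the record.
    record = RECORDS.get(record_id)
    if record is None or record["owner_id"] != caller_id:
        raise PermissionError("caller may not access this record")
    return record
```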
Seems like code review against a checklist of the most common vulnerabilities would have prevented these problems. So I guess there are two takeaways here:
First, AI scanners are useful for catching security problems your team has overlooked.
Second, maintaining a checklist of the most common vulnerabilities and using it during code review would not only prevent most of the problems an AI scanner is likely to catch, but also expose many of your development team's security blind spots at review time and teach them to shine a light on those areas themselves. That is, the team learns how to avoid creating those security mistakes in the first place.
What about having the checklist and having an AI tool use it to catch things at review time (or even development time)?
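That could be as small as a CI step that hands the diff and the checklist to a model and posts the findings back on the review. A minimal sketch, assuming the OpenAI Python client and a checklist distilled from the vulnerability classes quoted above (the model name, checklist wording, and `git diff` plumbing are all placeholders, not anything from the article):

```python
# Sketch of wiring a vulnerability checklist into an AI review step.
# Assumes the OpenAI Python client (pip install openai) and an API key in the environment.
import subprocess
from openai import OpenAI

CHECKLIST = """\
- SQL injection: every user-supplied value reaching a query is parameterized
- XSS: every value rendered into HTML is escaped, or auto-escaped by the template engine
- Path traversal: file paths built from user input are confined to an allowed base directory
- IDOR: every handler that loads a record by id verifies the caller is authorized for that record
"""

def review_diff(base_branch: str = "main") -> str:
    # Review whatever this branch changes relative to the base branch.
    diff = subprocess.run(
        ["git", "diff", base_branch],
        capture_output=True, text=True, check=True,
    ).stdout

    client = OpenAI()
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[
            {"role": "system",
             "content": "You are a security reviewer. Flag only concrete findings, "
                        "citing file and line, against this checklist:\n" + CHECKLIST},
            {"role": "user", "content": diff},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(review_diff())
```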
Checking for OWASP Top 10 items during code review is usually a mid-level dev interview question IME. It's nothing new. Teams don't have to come up with these themselves; the lists already exist.
Yes, absolutely. A team with a strong code review culture that incorporates security review against common exploits ideally wouldn't end up with holes like this.
But by not having a checklist, you avoid having your blind spots exposed.
I think it shows exactly the opposite of that second takeaway. Even with checklists available, and instructions to use them, people don't actually use them consistently.
'With enough eyes, all bugs are shallow', and AI is an automatable eye that looks at things we can tell nobody has seriously looked at before. It's not a panacea, and there will be lots of false positives, but there's value there that we clearly aren't getting by just telling humans to use the tools available.
See also: modern practices, sanitizers, tooling, and test frameworks for avoiding memory errors in C, and the reality that we keep writing memory errors in C anyway.