> Ironically, AI tends to be better at securing code because, unlike the squishy human, it is much more capable of creating tons of tests and figuring out weaknesses.
Sentences like this make me think AI is honestly the best thing that happened for my imposter syndrome. AI is great for generating test cases, and that's it. If you leave it alone, it writes the most basic, useless tests (I mean, half of them might be useful when you refactor, but that's about it). It can't design reusable test components and has trouble with test doubles, which I would have thought would be the easiest kind of test for AI. Even average devs like me write test doubles faster than AI, and I'm shit at writing tests.
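For context, here's roughly the kind of test double I mean (a minimal Python sketch; the gateway/checkout names are made up for illustration):

```python
import unittest
from unittest.mock import Mock

# Code under test: we don't care how the gateway works,
# only how checkout() reacts to its answers.
def checkout(gateway, amount):
    if gateway.charge(amount):
        return "paid"
    return "declined"

class CheckoutTest(unittest.TestCase):
    def test_declined_charge(self):
        gateway = Mock()
        gateway.charge.return_value = False  # the double's canned answer
        self.assertEqual(checkout(gateway, 42), "declined")
        gateway.charge.assert_called_once_with(42)

if __name__ == "__main__":
    unittest.main()
```

Nothing fancy, but in my experience the AI would rather spin up a half-real fake of the whole gateway than stub the one call the test needs.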
AI is also extremely bad at understanding versioning, and will use a deprecated API for no reason, which does nothing except increase the attack surface.
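A concrete example of the kind of thing I mean (Python sketch; any deprecated API will do):

```python
import socket
import ssl

# ssl.wrap_socket() was deprecated in Python 3.7 and removed in 3.12,
# and it did no hostname/certificate verification by default:
# sock = ssl.wrap_socket(socket.socket())   # the deprecated call an AI may reach for

# The supported API verifies certificates and hostnames by default.
ctx = ssl.create_default_context()
with socket.create_connection(("example.com", 443)) as raw:
    with ctx.wrap_socket(raw, server_hostname="example.com") as tls:
        print(tls.version())
```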
AI is great for writing CLI scripts, boilerplate and autocomplete. I use it for frontend because I'm shit at it (even though I have to clean up its mess afterwards), and to rewrite small pieces of functionality from libraries I want to avoid loading (which allowed us to remove legacy dependencies). It's good at writing prototypes (my main use nowadays), and a very good way to use it is to ask it for a plan to improve/factorize your code: it's _very_ bad at actually factorizing, but since it recognizes patterns, it can suggest interesting refactors. Half the time it's wrong, so use the "plan" mode.
I'm on a network security and cybersecurity tooling team, and I guarantee you AI is shit at securing code (and at understanding networks).
Frankly, I feel like the people downvoting my comment are still using older LLMs. When Opus 4.5 entered the picture, there was a noticeable improvement (for me) in the way the LLM interacted with the code base and in the issues it was able to find.
I ran Opus on some public source code, and let's just say the picture was less rosy for the whole "humans as security" argument.
I understand people have an aversion to LLMs, but it rubbed me the wrong way to see the number of downvotes here just because people disagree with an opinion. It's starting to become like Reddit. As I stated before, it's still your task, as the person working with the LLM, to guide it on security practices. But as somebody now 30 years in the industry, the amount of absolute crap I have seen produced as code (and the security issues that come with it) frankly makes LLMs look like security wizards.
Stupid example: I have yet to see an LLM not use placeholders (parameterized queries) to prevent SQL injection (despite it being trained on a lot of bad code).
The amount of code I have seen where humans just injected variables directly into the SQL... Yeah, what a surprise that SQL database contents get stolen like it's nothing. When doing a security audit on some public code, one of the items the LLMs always found was, yep... SQL-injectable code everywhere.
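To make the stupid example concrete (minimal Python/sqlite3 sketch; the table and values are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'hunter2')")

name = "' OR '1'='1"  # attacker-controlled input

# The human classic: string interpolation. The WHERE clause becomes
# always-true and every row leaks.
rows = conn.execute(
    f"SELECT secret FROM users WHERE name = '{name}'"
).fetchall()
print(rows)  # [('hunter2',)]

# The placeholder version LLMs reliably produce: the input stays data.
rows = conn.execute(
    "SELECT secret FROM users WHERE name = ?", (name,)
).fetchall()
print(rows)  # []
```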
A lot of these practices are easy, but anybody can overlook something in their own code base. This is where LLMs are so great: audit with multiple LLMs and you will find weak points, or places where you forgot something, even if you code with security in mind.
So yeah, I have no issue having discussions, but the ridiculous downvotes from what seem to be people with no clue are amazing. Going to take a break from here.