> Nonetheless, the fact that LLMs got significantly better than humans at finding this only started about half a year ago?
*rolls eyes* Regular static analyzers have also been "better than humans" for decades; being better than a human at a specific mechanical task really doesn't mean much. The interesting new thing is the type of "fuzzy bug" described in the article that LLMs are able to identify: a comment not matching the code it describes, uncommon usage of a third-party library, a mismatch between code and the protocol it implements, or often just generally weird-looking code somebody should take a closer look at. This closes a gap in the traditional debugging toolbox, but shouldn't replace it.
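A minimal, hypothetical illustration of the comment-not-matching-code case (the function and values are invented for the example): a pattern- or AST-based rule has nothing concrete to match on here, but anything that reads the comment and the code together can flag the contradiction.

```python
def clamp_retry_delay(delay_s: float) -> float:
    # Cap the retry delay at 30 seconds to avoid hammering the server.
    # Bug: the code actually caps at 60, contradicting the comment --
    # invisible to a syntactic checker, obvious to a careful reader.
    return min(delay_s, 60.0)

# The "documented" cap of 30 s is never enforced:
print(clamp_retry_delay(45.0))   # 45.0, even though the comment promises <= 30
print(clamp_retry_delay(100.0))  # 60.0
```

Nothing here is syntactically wrong, which is exactly why it slips past traditional tooling.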
You don't have to pick apart a comment at the micro level.
It has been clear for ages that certain types of bugs or issues are better found by software.
But there were still plenty of things a proper SecOps person could find with the help of tooling that automated tooling alone wouldn't:
taking a limited amount of resources and focusing them on the critical things.
I do think that edge is gone now. Same with threat modeling etc.