From what I've heard, every LLM before Mythos (which you can't get, they'll call you if you're big enough) will have far too many false positives to be helpful, so I guess the best option would be to use an agent to help you (not lights-off vibe coding!*) take advantage of all the older tools like valgrind and closing all the compiler warnings?
* I presume I'm not the only one to find the agents tasked with adding unit tests will sometimes try to sneak through "open source code and apply regex to confirm presence or absence of specific string literal".
They can speed you up significantly, but you absolutely do need to pay attention to what they produce.
That is false. A year ago every LLM generated report was slop - more likely a false positive than correct. However in the past few months nearly every LLM generated report is real.
With all respect to the Anthropic folks, that's just marketing. (If they're reading this: let us into the program so I can be proven wrong here.)
I'm sure what they have is awesome, but it's clear that there are people out there with some decent prompts that are getting results out of widely available models as well.
The big thing we're sharing is: bulk scanning by random people in random geographies got a _lot_ better around January, it's widely distributed, and it's going to get a lot better regardless of whether that specific version of Mythos becomes widely available or not.