> With skill, and usually not consistently and systematically.
How do you know? Just because the people who like to crow about vulnerabilities aren't doing it doesn't mean that the people who are actually in a position to exploit them systematically and effectively aren't doing it.
Those embargoes have always been dangerous, because they create a false sense of security. But, as you point out...
> With AI, anyone can do this to any software.
Yep. Even if it wasn't true before, it clearly is now: you have to assume that everybody relevant will immediately recognize the security impact of any patch that gets published. That includes both the bugs it fixes and the bugs it introduces.
... and as the AI gets better, you're going to have to assume that you don't even have to publish a patch. Or source code. In far less time than it will take people to admit it and adjust, any vulnerability in any software available for inspection is going to be instant public knowledge. Or at least public among everybody who matters.
> How do you know?
We know because we can see the effects in the average rate of vulnerability discovery and exploitation, and it's definitely going up very fast. Until recently, vulnerabilities were relatively hard to find, and finding them was done by a very restricted group of people worldwide, which made them quite valuable. Not any more.
>any vulnerability in any software available for inspection is going to be instant public knowledge. Or at least public among anybody who matters.
Shouldn't this naturally lead to a state where all (new) code is vulnerability-free? If the friction of AI vulnerability detection becomes low enough, it'll become common (or even mandatory) practice to pre-scan code before it ships.
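If pre-scanning did become a merge gate, its shape might look something like this toy sketch. Everything here is illustrative: the regex rules are placeholder stand-ins for whatever analyzer or model actually does the detection, and the function names are invented for this example.

```python
import re

# Hypothetical rule set: in a real gate, this step would call out to a
# static analyzer or an AI-based scanner rather than match regexes.
RULES = [
    (re.compile(r"\bstrcpy\s*\("), "unbounded copy; prefer a bounded variant"),
    (re.compile(r"\bgets\s*\("), "gets() is inherently unsafe"),
    (re.compile(r"\beval\s*\("), "dynamic eval of input"),
]

def scan(diff_text):
    """Return (line_number, warning) pairs for lines added in a unified diff."""
    findings = []
    for lineno, line in enumerate(diff_text.splitlines(), start=1):
        # Only inspect added lines ("+..."), skipping the "+++" file header.
        if not line.startswith("+") or line.startswith("+++"):
            continue
        for pattern, message in RULES:
            if pattern.search(line):
                findings.append((lineno, message))
    return findings

def gate(diff_text):
    """CI merge gate: True means no findings, so the change may merge."""
    return not scan(diff_text)
```

The interesting economic question is what happens once both sides run the same scanner: if attackers and defenders have equal access to the detector, the gate only helps the code that actually goes through it.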