Hacker News

narginal · yesterday at 8:55 PM

Just like how fuzzers will find all the bugs, right? Right?? There's definitely infrastructure at these big companies that isn't sitting in a while loop 'fuzzing', right? Why is it news that vulnerability research will continue to get harder, exactly? It has always been this way: exploits will get more expensive, and the best researchers will continue with whatever tools they find useful.


Replies

tptacek · yesterday at 9:01 PM

It's a good question. Fuzzers generated a surge of new vulnerabilities, especially after institutional fuzzing clusters got stood up, and after we converged on coverage-guided fuzzers like AFL. We then got to a stable equilibrium, a new floor, such that vulnerability research & discovery doesn't look drastically different after fuzzing than it did before.
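For context on the "coverage-guided" distinction: the core idea behind AFL-style fuzzers is to keep only those mutated inputs that exercise new code paths, so the corpus incrementally walks deeper into the target. A minimal sketch, with a hypothetical toy target (`target` returning the set of "edges" an input exercised is an illustrative stand-in, not how real instrumentation reports coverage):

```python
import random

def target(data: bytes) -> set:
    """Hypothetical target: reports which 'edges' the input exercised."""
    cov = {0}
    if data[:1] == b"F":
        cov.add(1)
        if data[1:2] == b"U":
            cov.add(2)
            if data[2:3] == b"Z":
                cov.add(3)
    return cov

def coverage_guided_fuzz(rounds: int = 50_000, seed: int = 1):
    rng = random.Random(seed)
    corpus = [b"AAAA"]        # seed input
    seen = set()              # all edges observed so far
    for _ in range(rounds):
        parent = rng.choice(corpus)
        data = bytearray(parent)
        data[rng.randrange(len(data))] = rng.randrange(256)  # mutate one byte
        cov = target(bytes(data))
        if not cov <= seen:   # new coverage -> keep this input as a parent
            seen |= cov
            corpus.append(bytes(data))
    return corpus, seen
```

A blind random fuzzer would need to guess the full "FUZ" prefix at once; the coverage feedback lets this loop discover it one byte at a time, which is the qualitative jump AFL represented.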

Two things to notice:

* First, fuzzers also generated and continue to generate large stacks of unverified crashers, such that you can go to archives of syzkaller crashes and find crashers that actually work. My contention is that models are not just going to produce hypothetical vulnerabilities, but also working exploits.

* Second, the mechanism 4.6 and Codex are using to find these vulnerabilities is nothing like that of a fuzzer. A fuzzer doesn't "know" it's found a vulnerability; it's a simple stimulus/response test (sequence goes in, crash does/doesn't come out). Most crashers aren't exploitable.
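The stimulus/response point above can be made concrete. A minimal sketch of a dumb fuzz loop (the `toy_parser` "crash" condition is a made-up stand-in for a real memory-safety bug): all the fuzzer observes is whether an input crashed the target, and it ends the run with a pile of crashers and no judgment about exploitability.

```python
import random

def toy_parser(data: bytes) -> None:
    # Hypothetical stand-in target: "crashes" when the first byte is 0x00,
    # simulating a memory-safety bug triggered by a malformed header.
    if data[0] == 0x00:
        raise MemoryError("simulated crash")

def fuzz(iterations: int = 10_000, seed: int = 0):
    rng = random.Random(seed)
    crashers = []
    for _ in range(iterations):
        data = bytes(rng.randrange(256) for _ in range(rng.randrange(1, 8)))
        try:
            toy_parser(data)       # stimulus goes in...
        except Exception:
            crashers.append(data)  # ...a crash does or doesn't come out
    # The loop records crashing inputs but has no notion of which,
    # if any, are actually exploitable. That triage is human work.
    return crashers
```

That triage gap (crash found vs. exploit written) is exactly the stack of unverified crashers the first bullet describes.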

Models can use fuzzers to find stuff, and I'm surprised that's not how they're doing it yet (at least on Anthropic's Red Team). But at least as I understand it, that's generally not what they're doing. It's something much closer to static analysis.
