Hacker News

nz · yesterday at 1:45 PM · 6 replies

Not contradicting this (I am sure it's true), but why is using an LLM for this qualitatively better than using an actual fuzzer?


Replies

azakai · yesterday at 5:36 PM

1. This is a kind of fuzzer. In general it's just great to have many different fuzzers that work in different ways, to get more coverage.

2. I wouldn't say LLMs are "better" than other fuzzers. Someone would need to measure findings/cost for that. But many LLMs do work at a higher level than most fuzzers, as they can generate plausible-looking source code.
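The contrast in point 2 can be made concrete. A traditional mutational fuzzer perturbs bytes of an existing input with no notion of syntax, so its output is usually not a valid program. This is a minimal sketch of that byte-flipping step (the seed string and flip count are illustrative, not from any particular fuzzer):

```python
import random

def mutate_bytes(seed: bytes, n_flips: int = 4) -> bytes:
    """Classic mutational fuzzing: flip a few random bits in a seed input."""
    data = bytearray(seed)
    for _ in range(n_flips):
        i = random.randrange(len(data))      # pick a random byte
        data[i] ^= 1 << random.randrange(8)  # flip one of its bits
    return bytes(data)

random.seed(0)  # deterministic for demonstration
seed = b"function f(x) { return x + 1; }"
mutant = mutate_bytes(seed)
# The mutant is almost never valid JavaScript, whereas an LLM tends to
# emit a syntactically coherent (if unusual) program at the source level.
```

Coverage-guided fuzzers like AFL or libFuzzer add instrumentation feedback on top of this mutation loop, but the mutations themselves stay at the byte level.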

saagarjha · yesterday at 1:48 PM

Presumably because people have used actual fuzzers and not found these bugs.

bvisness · today at 1:56 AM

As someone on the SpiderMonkey team who had to evaluate some of Anthropic's bugs, I can say that Anthropic's test cases were far easier to assess than those generated by traditional fuzzers. Instead of extremely random and mostly superfluous gibberish, we received test cases that actually resembled a coherent program.

hrmtst93837 · yesterday at 9:51 PM

Fuzzers and LLMs attack different corners of the problem space, so asking which is 'qualitatively better' misses the point: fuzzers like AFL or libFuzzer with AddressSanitizer excel at coverage-driven, high-volume byte mutations and parsing-crash discovery, while an LLM can generate protocol-aware, stateful sequences, realistic JavaScript and HTTP payloads, and user-like misuse patterns that exercise logic and feature-interaction bugs a blind mutational fuzzer rarely reaches.

I think the practical move is to combine them: have an LLM produce multi-step flows or corpora and seed a fuzzer with them, or use the model to script Playwright or Puppeteer scenarios that reproduce deep state transitions, then let coverage-guided fuzzing mutate around those seeds. Expect tradeoffs, though: LLM outputs hallucinate plausible but untriggerable exploit chains and generate a lot of noisy candidates, so you still need sanitizers, deterministic replay, and manual validation, while fuzzers demand instrumentation and long runs to actually reach complex stateful behavior.
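The seeding step described above is mostly plumbing: write each LLM-generated test case into a separate file in a corpus directory, which is the input layout tools like AFL++ and libFuzzer expect. A minimal sketch, assuming hardcoded placeholder seeds (a real pipeline would call a model here, and the `js_shell` target in the final comment is hypothetical):

```python
import hashlib
import tempfile
from pathlib import Path

# Placeholder "LLM-generated" JavaScript snippets; illustrative only.
llm_seeds = [
    "let a = []; for (let i = 0; i < 3; i++) a.push(i); a.sort();",
    "const s = new Set([1, 2, 2]); [...s].map(x => x * 2);",
]

def write_corpus(seeds, corpus_dir):
    """Write each seed as its own file, named by content hash --
    one-input-per-file is the corpus layout coverage-guided fuzzers expect."""
    corpus_dir = Path(corpus_dir)
    corpus_dir.mkdir(parents=True, exist_ok=True)
    paths = []
    for src in seeds:
        name = hashlib.sha1(src.encode()).hexdigest()[:12] + ".js"
        path = corpus_dir / name
        path.write_text(src)
        paths.append(path)
    return paths

corpus = Path(tempfile.mkdtemp()) / "corpus"
files = write_corpus(llm_seeds, corpus)
# The fuzzer then mutates around these seeds, e.g.:
#   afl-fuzz -i corpus -o findings -- ./js_shell @@
```

Hashing the content for the filename deduplicates identical seeds for free, which matters since LLM output is often repetitive.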

utopiah · yesterday at 4:07 PM

I didn't even read the piece, but my bet is that fuzzers are typically limited to inputs, whereas relying on LLMs is also about finding text patterns in the code base itself, somewhat more loosely than before while still being statistically relevant.

mmis1000 · yesterday at 6:22 PM

It's not really a question of better or worse, though. It's more directed than other fuzzers: while it can craft a payload that triggers a flaw deep in a code path, it can also miss some obvious pattern that a normal person wouldn't expect to be a problem (which is what most fuzzers currently test).