Hacker News

ryandrake · yesterday at 6:09 PM · 7 replies

Receiving hundreds of AI generated bug reports would be so demoralizing and probably turn me off from maintaining an open source project forever. I think developers are going to eventually need tools to filter out slop. If you didn’t take the time to write it, why should I take the time to read it?


Replies

moyix · yesterday at 7:22 PM

All of these reports came with executable proof of the vulnerabilities – otherwise, as you say, you get flooded with hallucinated junk like the poor curl dev. This is one of the things that makes offensive security an actually good use case for AI – exploits serve as hard evidence that the LLM can't fake.
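The "executable proof" idea above can be sketched as a triage gate: a report only reaches a human if its attached proof-of-concept actually reproduces the claimed failure. This is a minimal illustration with hypothetical names — a real bounty pipeline would sandbox the PoC against the reported target version, not just run an arbitrary script.

```python
import os
import subprocess
import sys
import tempfile

def poc_demonstrates_crash(poc_source: str, timeout_s: int = 10) -> bool:
    """Run a submitted PoC script; treat a nonzero exit code (a crash or a
    deliberately triggered failed assertion) as evidence the bug is real."""
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(poc_source)
        path = f.name
    try:
        result = subprocess.run(
            [sys.executable, path], capture_output=True, timeout=timeout_s
        )
        return result.returncode != 0
    except subprocess.TimeoutExpired:
        return False  # a hang is not accepted as proof in this sketch
    finally:
        os.unlink(path)

# A report whose PoC does nothing is filtered out before a maintainer sees it.
real_poc = "raise SystemExit(1)  # stand-in for 'exploit reproduced'"
hallucinated_poc = "print('trust me, there is a bug')"

print(poc_demonstrates_crash(real_poc))          # True
print(poc_demonstrates_crash(hallucinated_poc))  # False
```

The gate is deliberately one-sided: it can confirm a working exploit but cannot prove a report is bogus, which is why hangs and clean exits just fall back to the slop pile rather than an automatic rejection.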

tptacek · yesterday at 7:07 PM

These aren't like GitHub Issues reports; they go to bug bounty programs, specifically stood up to soak up incoming reports from anonymous strangers looking to make money on their submissions. The premise is that enough of those reports will drive specific security goals (for smart vendors, the scope of each program is tailored to internal engineering goals) to make it worthwhile.

triknomeister · yesterday at 6:17 PM

Eventually, projects that can afford the smugness are going to charge people for the ability to talk to open source developers.

bawolff · yesterday at 8:52 PM

If you think the AI slop is demoralizing, you should see the human submissions bug bounties get.

There is a reason companies like HackerOne exist: it's because dealing with the submissions is terrible.

Nicook · yesterday at 6:41 PM

Open source maintainers have been complaining about this for a while: https://sethmlarson.dev/slop-security-reports. I'm assuming the proliferation of AI will have (or already has had) significant effects on open source projects.

jgalt212 · yesterday at 6:22 PM

One would think if AI can generate the slop it could also triage the slop.

teeray · yesterday at 6:22 PM

You see, the dream is another AI that reads the report and writes the issue in the bug tracker. Then another AI implements the fix. A third AI then reviews the code and approves and merges it. All without human interaction! Once CI releases the fix, the first AI can then find the same vulnerability plus a few new and exciting ones.
