It seems open source loses the most from AI. Open source code trained the models; the models are being used to spam open source projects wherever there's an incentive; they can be used to chip away at open source business models by implementing the paid features and providing the support; and eventually, perhaps, AI simply replaces most open source code.
> “Not much. The real incentive for finding a vulnerability in cURL is the fame ('brand is priceless'), not the hundred or few thousand dollars. $10,000 (maximum cURL bounty) is not a lot of money in the grand scheme of things, for somebody capable of finding a critical vulnerability in curl.”
That's the choice as seen from the perspective of a white-hat hacker. But for an exploitable vulnerability, the real choice is to sell it to malware producers (I'm including state-sponsored spyware companies like the makers of Pegasus in this category) for a lot of money, or do the more moral thing and earn at least a little bit of money via a bug bounty program.
A video showing some of the gems which most likely led to this frustration: https://youtu.be/8w6r4MKSe4I?si=7nfRd0VmX8tvXAnY
Beyond direct monetary gain like bounties, there are efforts to simply stand out: being able to show contributions to a large project, or getting, say, a CVE to your name.
Stenberg has actually written on his blog a few times about invalid or wildly overrated vulnerabilities that get assigned CVEs, and those were filed by humans. I often get the sense some of these aren't just misguided reporters but deliberate attempts to make mountains out of molehills for reputation's sake. Incentives like that seem harder to account for.
The company I work for has a pretty bad bounty system (basically a security@corp email). We have a demo system and a public API with docs. We now get around 100 or more emails a day. Most of it is slop or scams, or, my new favourite, AI security companies sending us an unprompted, AI-generated pentest filled with false positives, untrue claims, etc. It has become completely useless, so no one looks at it.
A sales rep even called me up, unprompted, basically trying to book a three-hour session to review the AI findings. When I looked at the nearly 250-page report and saw a critical IIS bug for a Windows server (which doesn't exist) at a scanned IP address of 5xx.x.x.x (yes, an impossible IP), publicly available in AWS (we exclusively use GCP), I said some very choice words.
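(For anyone wondering why 5xx.x.x.x can't exist: IPv4 octets are 8-bit values, so nothing above 255 even parses. A minimal check in Python, with 512.10.10.10 as a hypothetical stand-in for the report's redacted address:)

```python
import ipaddress

# IPv4 octets are 8-bit values; any octet above 255 is unparseable.
# "512.10.10.10" is a hypothetical stand-in for the report's 5xx.x.x.x.
for candidate in ["203.0.113.7", "512.10.10.10"]:
    try:
        ipaddress.ip_address(candidate)
        print(f"{candidate}: valid")
    except ValueError:
        print(f"{candidate}: impossible address")
```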
A list of the slop if anyone is interested:
https://gist.github.com/bagder/07f7581f6e3d78ef37dfbfc81fd1d...
Archive mirror: https://web.archive.org/web/20260121081507/https://etn.se/in...
I just read one of the slop submissions and it's baffling how anyone could submit these with a straight face.
https://hackerone.com/reports/3293884
The problem with generative AI is not even understanding the expected behaviour, then throwing as much slop as possible at the wall to see what sticks.
It makes sense. Searching for bugs used to be slow and time-consuming, so it needed to be incentivized. That's no longer the case; now the hard part is identifying which reports are real.
To paraphrase a famous quote: AI-equipped bug hunters find 100 out of every 3 serious vulnerabilities.
This is silly; people don't need AI to send you garbage. If your project is getting lots of junk reports, you should take it as a good sign: people are looking at it a lot now. You don't remove the incentive, you ask for help triaging the junk.
Curl is a popular and well-supported tool; if it needs help in this area, there will be a long line of competent people volunteering their time and/or money. If you need help, get more help. Don't use "AI slop" as an excuse to remove the one incentive people have not to sell exploits or simply hoard them.
What I wonder is whether this will actually reduce the amount of slop.
Bounties are one motivation, but there are also promotional purposes. Show that you've submitted thousands of security reports to major open source software and you're suddenly a security expert.
Remember the little IoT device that made it on here because of a security report complaining, among other things, that the Linux on it did not use systemd?
Smart; bug bounties are a huge PITA.
Related: cURL stopped its HackerOne bug bounty program due to excessive slop reports: https://news.ycombinator.com/item?id=46678710
So AI slop is being used to try to degrade the quality of cURL.
Alternate headline: AI discovering so many exploits that cybersecurity can't keep up
Am I doing this right?
Free work eventually turns out not to be free at all.
Funny how we're now sensitized to this AI slop: at first I fixated on the en dashes in the lead of the article, which made me doubt the article's author for a few seconds.
Just use an LLM to weed them out. What’s so hard about that?
The solution for this, IMO, is flags. Just like with CTFs, host an instance of your software with a flag that can only be retrieved after a successful exploit. If someone submits the flag to you, there is no arguing about whether or not they found a valid vulnerability.
Yes, this does not work for all vulnerability classes, but it is the best compromise in my mind.
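To make the idea concrete, here's a minimal sketch of the verification side, assuming a flag planted in the hosted instance (the flag value, function name, and setup are all hypothetical):

```python
import hmac

# Hypothetical setup: FLAG is the secret planted inside the hosted
# instance, retrievable only by actually exploiting the target.
FLAG = "flag{d3adb33f-example}"

def verify_submission(submitted: str) -> bool:
    """Accept a report only if it arrives with the planted flag.

    hmac.compare_digest runs in constant time, so the check itself
    doesn't leak how much of a guessed flag matched.
    """
    return hmac.compare_digest(submitted.encode(), FLAG.encode())

# Reports without the flag can be rejected unread; no arguing needed.
assert verify_submission("flag{d3adb33f-example}")
assert not verify_submission("250-page AI-generated pentest")
```

Rotate the flag per hosted instance and a valid submission even tells you which deployment was popped.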
An entry fee, reimbursed if the bug turns out to matter, would stop this real quick.
Then again, I once submitted a bug report to my bank because the login method could be switched from password+PIN to PIN only while not logged in, and they closed it as "works as intended": they had decided that an optional password was more convenient than a required one. (And that's not even getting into the difference between real two-factor authentication and the some-factor, one-and-a-half-times scheme they had implemented by adding a PIN to a password login.) I've since learned that anything heavily regulated, like hospitals and banks, will have security procedures catering to compliance, not actual security.
Assuming the host of the bug bounty program is operating in good faith, adding some kind of barrier to entry or punishment for untested entries will weed out submitters acting in bad faith.