Closing the program is totally reasonable. However, there is another option: Make submitters pay a nominal fee that is returned in the case that a real bug is found.
Unfortunately this isn't all black-and-white. There are some bug bounty programs where the company is very eager not to pay any bounty, aggressively marking vulnerabilities as out-of-scope or working-as-intended.
In those cases you already lose time, but with a fee you would also lose money.
Unfortunately you don't know how a company will react before submitting, especially if it's a small one.
That would add administrative overhead, and an even higher incentive for submitters to argue endlessly that they're right.
It sounds like the bug bounty requires the user to extend the simulator to cover the type of bug they found. Maybe they could require a full run of the simulator test suite before submission? That serves as a nice check (that they didn't break the simulator), and maybe it could also produce some proof-of-work artifact as a side effect… (is this possible? I don't know security).
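On the proof-of-work question: one way it could work (this is just a hashcash-style sketch, not anything Turso actually does; the function names and the difficulty parameter are made up here) is to make the submitter burn CPU finding a nonce tied to a digest of their report, which the reviewer can verify with a single hash:

```python
import hashlib
import itertools

def proof_of_work(report_digest: str, difficulty: int = 20) -> int:
    """Find a nonce such that sha256(report_digest + nonce) has
    `difficulty` leading zero bits. The submitter pays in CPU time
    instead of cash; higher difficulty = more work."""
    target = 1 << (256 - difficulty)
    for nonce in itertools.count():
        h = hashlib.sha256(f"{report_digest}:{nonce}".encode()).digest()
        if int.from_bytes(h, "big") < target:
            return nonce

def verify(report_digest: str, nonce: int, difficulty: int = 20) -> bool:
    """Checking the stamp costs the reviewer a single hash."""
    h = hashlib.sha256(f"{report_digest}:{nonce}".encode()).digest()
    return int.from_bytes(h, "big") < (1 << (256 - difficulty))
```

The asymmetry is the point: producing the stamp is expensive, checking it is nearly free, so low-effort mass submissions get pricier without the company handling any money.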
The problem with that approach is that it will also deter genuine submissions, probably more so than a "no bounty" system.
For those who encounter bugs as part of their employment, they'd now need to convince their employer to fork over money up front. For most employers, getting them to spend even insignificant money is like pulling teeth.
But even for the self-employed or hobbyists, it means gambling real money on "are they going to be a jerk about my exploit report?". No offense towards Turso, but the bulk of software firms are TERRIBLE about handling reports like that. Many already have unstated policies of screwing people out of deserved bug bounties at every step.
To submit such reports today already requires you to accept that your work is, statistically, just going to be a bunch of free labour that you gave away for the betterment of the product's users. Adding a cash fee just further deters submissions, especially once people have failed to get their money back a few times. (Consider how many "AI detection tools" are themselves incredibly unreliable machine learning or sometimes even LLM systems.)
Easily exploitable without much of a stretch of imagination.
I'd say closing a program which doesn't work anymore is a better idea.
Honestly I think this is a great idea. My only suggestion is that instead of being purely nominal, the fee should be "reasonable" (so $10, not $1).
It's even possible to directly link this to maintainers/employees: if you can review 10 such AI/real submissions per hour (likely more if it's AI slop that's easy to detect), you're generating another revenue stream. Now, I have no idea if these guys are based in the SF Bay Area or somewhere with a low cost of living, but as an "add-on", $100 an hour isn't too shabby (and that's the "low end" if one's good at spotting AI crap).
Side note, isn't it possible to have some way to verify if the "vulns" are actual vulns or not? ...Heck why not throw an LLM at it, powered by a single $10 submission fee?
cool idea
Moving money is not free, and managing payments/refunds can be a huge headache. Sometimes it's easy, but sometimes it's not.