I wonder if there is a way to prevent these types of problems across the board.
Possible solutions I can think of:
- Require an account with a paid service. Fix = requires money.
- Require an account verified with a real ID/passport etc. Fix = links to a real person.
- An automated reply system to "waste tokens" if it is an AI that is responding (see the sketch after this list). Fix = increases the cost to the spammer.
- Some kind of "vetting system" where you get onto an allowed list before you can report these types of things. Seems not good to me, but perhaps there is something in it.
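For the token-wasting idea, here's a minimal sketch of how such a gate might work. Everything in it (the `Report` type, the three-round threshold, the canned questions) is an assumption for illustration, not any real tracker's API:

```python
import random
from dataclasses import dataclass

# Hypothetical follow-up questions the auto-responder cycles through
# before any human triager looks at the report.
FOLLOW_UPS = [
    "What exact command did you run, and on which commit?",
    "Please paste the minimal input that reproduces this.",
    "Which platform and version are you on?",
]

@dataclass
class Report:
    reporter: str
    rounds: int = 0  # completed question/answer exchanges so far

def triage_step(report: Report, incoming_reply: str) -> str | None:
    """Return the next auto-reply, or None once the report earns human review."""
    if not incoming_reply.strip():
        return "This report looks empty; could you describe what went wrong?"
    report.rounds += 1
    if report.rounds >= 3:
        # Assumed threshold: after three substantive exchanges, hand off
        # to a human. A spam bot that auto-replies to everything has now
        # spent roughly 3x the tokens of a fire-and-forget report.
        return None
    return random.choice(FOLLOW_UPS)

r = Report(reporter="someone")
print(triage_step(r, "It crashes."))                 # asks a follow-up
print(triage_step(r, "Ran the test suite on HEAD.")) # asks another
print(triage_step(r, "Linux, v2.1."))                # None: route to a human
```

A human reporter answers a couple of short questions once; a bot pays the token cost on every round, across every project it spams.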
I wonder how much open source code is lost because maintainers must deal with this type of thing versus the "good" that AI can bring in productivity.
I've thought about this some, and each of these approaches has benefits and drawbacks.
First, a deposit system (sketched after the list below). This might work in the sense that someone dedicated can put down $5 and report a bug; even if it turns out not to be a bug, as long as their work is legitimate they get refunded at the end of the process.
- This doesn't scale well globally, as $5 is nothing to me but significant to someone who lives in a place almost no one in the US can pronounce correctly.
- Once you become trusted you no longer need a deposit.
- Most people who would submit a single, real bug won't do this, and you lose that information.
- How is management of this deposit system paid for? How is fraud dealt with?
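For what it's worth, the lifecycle above is simple enough to state precisely. Here's a minimal sketch, assuming a flat $5 deposit and a made-up "trusted after three refunded reports" rule; payment processing, regional pricing, and fraud review are exactly the open questions it doesn't answer:

```python
from dataclasses import dataclass

DEPOSIT_USD = 5.00
TRUST_THRESHOLD = 3  # assumed: refunded reports before deposits are waived

@dataclass
class Reporter:
    name: str
    refunded_reports: int = 0

    @property
    def trusted(self) -> bool:
        return self.refunded_reports >= TRUST_THRESHOLD

def open_report(reporter: Reporter) -> float:
    """Return the deposit required to file a report (zero for trusted users)."""
    return 0.0 if reporter.trusted else DEPOSIT_USD

def close_report(reporter: Reporter, legitimate: bool, deposit: float) -> float:
    """Return the refund owed when a report is closed.

    Legitimate work is refunded in full, even if it turns out not to be a
    bug; only bad-faith reports forfeit the deposit.
    """
    if legitimate:
        reporter.refunded_reports += 1
        return deposit
    return 0.0  # forfeited; who keeps this money is an open question
```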
A lot of solutions have been suggested over the past few months, including these. They all have drawbacks, and it's not how the maintainers want to run the project.
That would shut down all drive-by contributions, which may not be a good source of big improvements but are a good source of information about bugs and maintenance issues. E.g. if I find a rare corner case where the code breaks, for an open source project I'd usually take some time to report it properly. But there's no way I'd pay for the privilege or bother to register with my government ID.