Isn't there some alternative approach? E.g. when someone submits AI slop, they get a strike. Three strikes and you're suspended from submitting to the bug bounty for X months/years?
*Edit - I get it. It seems like the authentication is a challenge.
How about "It costs $1000 to submit a bug bounty report for approval", and raise the reward to $2000 (or $5000 if it's in the cards, since that would also deter low-effort non-AI submissions)?
Denominated in BTC to avoid chargebacks etc.
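For illustration (the numbers and the break-even framing are my own, not from the comment above), the incentive math of a pay-to-submit scheme can be sketched like this:

```python
# Back-of-the-envelope economics for a pay-to-submit bounty.
# Assumption: each submission costs the reporter FEE up front,
# and pays REWARD only if the report turns out to be valid.
FEE = 1000     # dollars staked per submission
REWARD = 2000  # dollars paid out for a valid report

def expected_profit(p: float) -> float:
    """Expected profit per submission for a reporter whose
    reports are valid with probability p."""
    return p * REWARD - FEE

# Break-even hit rate: p * REWARD = FEE  =>  p = FEE / REWARD
break_even = FEE / REWARD
print(break_even)             # 0.5 -- spam with <50% validity loses money
print(expected_profit(0.05))  # low-quality AI slop: -900 per submission
print(expected_profit(0.9))   # careful researcher: +800 per submission
```

So with these (hypothetical) numbers, anyone whose reports are valid less than half the time loses money on every submission, which is exactly the population you want to deter.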
You still need to spend effort reviewing the submission to figure out whether it deserves a strike, three times over before an actual ban. This would still waste precious maintainer time.
They mentioned they had identified alternatives, but that they would be costly to implement. One can imagine that evading a ban by generating a new user account would be easy for an LLM agent. It's going to be a long, long game of whack-a-mole.
Such a person can just make a new account and go right back at it.
This probably has to be solved above the level of an individual project. No small team can handle this without building a whole product just to run the bug bounty.
https://en.wikipedia.org/wiki/Sybil_attack
New identities are cheap.