AI can be the ultimate tactical tornado.
But it really doesn't have to be like this.
For their bug bounty program, the company could just charge $5-10 per submission to guarantee that everything you send gets thoroughly reviewed by a human, which would eliminate the bot slop DDoS submissions overnight. If your bug and PR were actually good, you'd get the $10 back plus the $1,000 bounty. If they weren't, you'd need to do better due diligence next time, and the skilled human feedback on why they fell short would be a valuable lesson for your engineering career, all for the price of a Starbucks latte. It would also cut out the scammers polluting the system. This way everyone wins.
I said it before and I'll say it again: for opportunities open to the entire world on the internet, adding monetary friction is THE ONLY (anonymous) WAY to filter serious people from bad actors doing spray-and-pray, hoping they'll make some money or land that job by weaponizing AI bots. You can't rely on honor systems and a high-trust society on the anonymous open internet. You need to financially gatekeep to save yourself and your sanity, and to make sure the honest, serious people you want to engage with don't end up drowning in the noise of scammers and unscrupulous opportunists.
But we can't shut ourselves down just because we refuse to apply solutions to the AI slop DDoS.
Yup, this was my first thought. Tell an LLM that there's a bug, and it will _happily_ add 200 lines to the project, usually wrapped in if statements so that it all interleaves with the existing code. Then it will write twice as many lines of tests, run it all, and be done. Your bug is fixed. All the tests pass, and test coverage went up. Now do that a couple dozen more times. :shudder: