That didn't actually address my comment or question, so I'll repeat it, I guess.
We already know AI is spamming unreliable crap and slop. The apparent solution is "more, better AI".
Why wouldn't the AI used to screen all of this also produce crap and slop?
Is the plan there "AI but it actually works right and doesn't produce crap and slop"?
I did address it: AI in the hands of the right people.
Random contributions to bug bounty programs or random PRs for new features come from all corners: expert engineers producing fantastic code; intermediate engineers trying their hardest but producing mediocre code; junior engineers wasting everyone's time with ill-conceived, poorly written code; and all of the above with varying amounts of AI assistance. And now also fully automated AI submissions, where the only human involvement is someone pointing their AI at GitHub with no guidance.
You can't stop it on the inbox side. Either you turn the inbox off, or you leverage AI to help you separate the wheat from the chaff.
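To make "separate the wheat from the chaff" concrete, here's a minimal sketch of inbox-side triage. Everything in it is hypothetical: `Submission`, `triage_score`, and the heuristics are stand-ins for whatever scoring a real maintainer would do (in practice, an LLM call with a rubric rather than these toy rules).

```python
from dataclasses import dataclass

@dataclass
class Submission:
    title: str
    body: str
    diff_lines: int

def triage_score(sub: Submission) -> float:
    """Stand-in for an LLM scoring call.

    A real setup would send the submission text to a model and parse
    back a numeric quality score; these heuristics are purely
    illustrative.
    """
    score = 0.5
    if not sub.body.strip():
        score -= 0.3  # empty description: likely low effort
    if sub.diff_lines > 2000:
        score -= 0.2  # enormous drive-by diffs are suspect
    if "steps to reproduce" in sub.body.lower():
        score += 0.3  # concrete repro info is a good sign
    return max(0.0, min(1.0, score))

def separate(inbox, threshold=0.5):
    """Split the inbox into (review-first, deprioritized) piles."""
    wheat, chaff = [], []
    for sub in inbox:
        (wheat if triage_score(sub) >= threshold else chaff).append(sub)
    return wheat, chaff
```

The point isn't the specific rules; it's that the triage model only has to rank submissions well enough to order a human's attention, which is a much easier job than producing correct code itself.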