AI in the hands of the right people is incredibly powerful. A good team of engineers using AI to hunt bugs in their own code already does far better than any outsider (human, AI, or human-assisted AI) ever could. A good internal AI-assisted team is also the only thing that can vet all other contributions. It doesn't matter whether those contributions are 100% human-written, 100% AI-written, or a mix; the problem is the same.
Unless you stop accepting outside contributions entirely, there's simply no way to determine whether a human was involved in the process. Any mandate that all contributions come from humans will fail because there's no detection or enforcement mechanism. You have to assume it's slop either way and improve your ability to vet it. Only another AI can do that, because we don't have enough qualified humans to keep up.
That didn't actually address my comment or question, so I'll repeat it, I guess.
We already know AI is spamming out unreliable crap and slop. The apparent solution is "more, better AI".
Why wouldn't this AI for screening all this also produce crap and slop?
Is the plan there "AI but it actually works right and doesn't produce crap and slop"?