If AI is already mass-producing garbage PRs and other unreliable crap, what makes AI (which we've just established produces unreliable crap) the solution for review? Why wouldn't the reviewing AI's reviews be just as unreliable?
A magical, hypothetical AI that always gets it right and will make all these problems go away is neither a solution nor a plan. It's wishful thinking.
AI in the hands of the right people is incredibly powerful. A good team of engineers using AI to hunt bugs in their own code will do that job far better than any outsider (human, AI, or human-assisted AI) ever could. A good internal AI-assisted team is also the only thing that can vet all other contributions. It doesn't matter whether those contributions are 100% human-written, 100% AI-written, or a combination. The problem is the same.
Unless you stop accepting outside contributions entirely, there's simply no way to determine whether a human was involved in producing them. Any mandate that all contributions come from humans will fail because there's no way to detect or enforce it. You have to assume every contribution is slop either way, and improve your ability to vet it. Only another AI can do that at scale, because we don't have enough qualified humans to keep up.