I feel like the pattern here is donate compute, not code. If agents are writing most of the software anyway, why deal with the overhead of reviewing other people's PRs? You're basically reviewing someone else's agent output when you could just run your own.
Maintainers could just accept feature requests, point their own agents at them using donated compute, and skip the whole review dance. You get code that actually matches the project's style and conventions, and nobody has to spend time cleaning up after a stranger's slightly-off take on how things should work.
This is an interesting framing, but it assumes maintainers want to use agents at all. Most OSS maintainers we've talked to build things because they enjoy the craft. Donating compute to replace that craft is like offering a chef a microwave.
Who reviews the correctness of the second agent's review?
Or even more efficient: the model we already have. Donate money and let the maintainer decide whether to convert it into tokens or mash the keys themself.
So your proposed solution to AI slop PRs is to "donate" compute so that maintainers can waste their time generating the AI slop themselves?
Well, it's not quite that easy, because someone still has to test the agent's output and make sure it works as expected, which it often doesn't. In many cases they still need to read the code and confirm it does what it's supposed to do. Or they may need to spend time crafting an effective prompt, which can be harder than it sounds for complicated projects, where models will fail if you ask them to implement a feature without detailed guidance on how to do so.