On the one hand, open source projects are going to be overrun with AI code that no one reviewed.
On the other hand, code produced with AI and reviewed by humans can be perfectly good, maintainable, and indistinguishable from regular old code.
Many existing processes are no longer sufficient to manage a world where thousands of lines of working code are easy to conjure out of thin air. Already-strained open source review processes are definitely among them.
I get wanting to blanket-reject AI-generated code, but the reality is that in many cases no one will be able to tell what's what. Something like a more thorough review process for onboarding trusted contributors, or some other way of cutting down the volume of review, is probably going to be needed.
A policy like this serves two purposes. One, to give good-faith potential contributors a guideline on what the project expects. Two, to give reviewers a clear policy they can point to when rejecting AI slop PRs, without feeling bad or getting into conflicts over the minutiae of the code.
> On the other hand, code produced with AI and reviewed by humans can be perfectly good, maintainable, and indistinguishable from regular old code.
I have yet to see a single example of this. The way you make AI generated code good and maintainable is by rewriting it yourself.
> On the other hand, code produced with AI and reviewed by humans can be perfectly good and indistinguishable from regular old code.
Obligatory xkcd:
>reviewed by humans can be perfectly good, maintainable, and indistinguishable from regular old code
That depends on the 'regular old code', but most stuff I have seen doesn't come close to 'maintainable'. The amount of cruft is substantial.