> My question on AI-generated contributions and content in general: on a long enough timeline, with ever-improving advancements in AI, how can people reliably tell the difference between human and AI-generated efforts?
Can you reliably tell that a contributor is truly the author of their patch, and that they aren't working for a company that asserts copyright on that code? No, but it's probably still a good idea to have a policy that says "you can't do that", and to be on the lookout for obvious violations.
It's the same story here. If you do nothing, you invite problems. If you do something, you won't stop every instance, but you're on stronger footing if it ever blows up.
Of course, the next question is whether AI-generated code that matches or surpasses human quality is even a problem. For now, though, the question is academic: most of the AI-generated submissions that open-source projects receive are low quality. Even if that quality improves, some projects may still object on legal (copyright) or ideological grounds, and that's their prerogative.