What happens when the PR is clear, reasonable, short, checked by a human, demonstrably fixes, implements, or otherwise improves the code base, and has no alternative implementation meaningfully different from the one initially presented?
The problem is that even if the code is clear and easy to understand AND it fixes a problem, it still might not be suitable as a pull request. Perhaps it changes the code in a way that complicates other work, in progress or planned, so it wouldn't be a simple merge. Perhaps it introduces a vulnerability elsewhere, or adds cognitive load for anyone trying to understand the change. Perhaps it adds a feature the project maintainer specifically doesn't want. Perhaps it simply takes up too much of their time to review.
There are plenty of good reasons why somebody might not want your PR, independent of how good your change is or how useful it is to you.
How would you tell that it's LLM-generated in that case?
If the submitter is prepared to explain the code and vouch for its quality then that might reasonably fall under "don't ask, don't tell".
However, if LLM output is either (a) uncopyrightable or (b) considered a derivative work of the material used to train the model, then you have a legal problem. And the legal system does care about invisible "bit colour".
This is where most reasonable people would say “OK, fine.”
CLEARLY, a lot of developers are not reasonable.
If you're going to set a firm "no AI" policy, then my inclination would be to treat that kind of PR the same way the US legal system treats evidence obtained illegally: you say "sorry, no, we told you the rules and so you've wasted effort -- we will not take this even if it is good and perhaps the only sensible implementation". Perhaps somebody else will eventually re-implement it without looking at the AI PR.