While I appreciate the morality and ethics of this choice, the current trend means projects going in this direction are making themselves irrelevant (don't bother quipping about how relevant Redox is today, thanks). E.g. top security researchers are now using LLMs to find new RCEs and local privilege escalations; there's no reason why the models couldn't fix these, too - and that's only the security surface.
IOW I think this stance is ethically good, but technically irresponsible.
People can still choose not to use AI, even if they think it is inevitable that they will eventually end up using LLMs.
Even if we assume that LLMs become good enough for this to be true (some might feel that is the case already - I disagree, but that is beside the point), there is no reason why OSS maintainers should accept such outside contributions, which they would need to carefully review since they come from an untrusted source, when they could just use the tools themselves directly. Low-effort drive-by PRs are a burden with no upside.