TL;DR: don't be an asshole and produce good stuff. But I have the feeling this is not the right direction for the future. Distrust the process; only trust the results.
Moreover, this policy is strictly unenforceable, because good AI use is indistinguishable from good manual coding, and sometimes even the reverse. I don't believe in coding policies where maintainers need to spot whether AI was used. I believe in experienced maintainers who can tell whether a change looks sensible.
As someone who has recently picked up some 'legacy' code, I've found AI really good at summing up what is going on. In many cases it finds things I had no idea were wrong (because I do not know the code very well yet). This is so-called 'battle-hardened code': I review it and say, 'yeah, it is wildly broken, and I see how the original developer ended up here.' Sometimes the previous dev was nice enough to leave a comment; for other devs, 'the code is the comments.' I have also had AI go wildly off the rails and do very dumb things. It is an interesting tool for sure, but one you have to keep an eye on, or it will confidently build a footgun for you. It is also nice for someone like me who has some sort of weird social anxiety about bugging my fellow devs, in that I can create tables of options and pick the good ideas out of them.
This doesn't work in the age of AI, where producing crappy results is much cheaper than verifying them. As long as that is the case, metadata will matter for deciding whether you should even bother verifying the results.
I'm not sure I agree it's completely unenforceable: a sloppy, overly verbose PR, maybe without an attached issue, is pretty easy to pick out.
There are some sensible, easily-judged-by-a-human rules in here. I like the spirit of it, and it's well written (I assume by Mitchell, not Claude, given the brevity).