Weird stance to take.
I can understand "untested AI-generated code is bad, and thus anything that reeks of AI is going to be scrutinized" - especially given that PostmarketOS deals a lot with kernel drivers for hardware, a domain with notoriously low error margins. But they had to go out of their way and make it ideological rather than pragmatic.
As a kernel developer, I use LLMs for some tasks, but I can say they're not there yet when it comes to writing real kernel-space code.
The licensing of code generated by LLMs is not a settled matter in all jurisdictions; that's a very valid pragmatic concern they address.
It's fine for a project to have moral/ideological leanings; it's only weird if you insist that project teams should be entirely amoral.