This is a bit of a straw man. The harms of AI in OSS are not from people needing accessibility tooling.
It's absolutely not a straw man, because OP and people like OP will be affected by any policy which limits or bans LLMs, whether or not the policy writer intended it. So they deserve a voice.
I disagree. I've done nothing to argue that the harm isn't real, to downplay it, or to misrepresent it.
I do agree that, by and large, the theoretical upsides of accessibility are almost certainly overshadowed by the obvious downsides of AI. At least for now, anyway. Accessibility is just one instance of the general argument that "of course there are major upsides to using AI", and there's a good chance the future only gets brighter.
My point, essentially, is that I think this is (yet another) area in life where you can't solve the problem by saying "don't do it", and enforcing such a rule is cost-prohibitive. Saying "no AI!" isn't going to stop PR spam. It's not going to stop slop code. What is it going to stop (see edit)? "Bad" people won't care, and "good" people (who use or depend on AI) will contribute less.
Thus I think we need to focus on developing robust systems around integrating AI. Certainly I'd love to see people adopt responsible disclosure policies as a starting point.
--
[edit] -- To answer some of my own question, there are obvious legal concerns that frequently come up. I have my opinions, but as in many legal matters, especially around IP, the waters are murky, opinions are strongly held at both extremes, and all too often having to fight a legal battle *at all* is immediately a loss regardless of outcome.