I don't believe that even the weakened version of the argument works -- it rests on an assumption, not on fact.
Why would a contributor that uses AI assistance have fewer chances to be trusted?
I'm not talking about AI slop, but about a contributor who takes the time to understand a problem, find a solution, and discuss the pros and cons of alternatives -- using LLM assistance, of course.
Because you are then at the whims of the bot that they are at least partially dependent on.
Why would a contributor that uses AI assistance have fewer chances to be trusted?
Please read my explanation here:
https://news.ycombinator.com/item?id=47964279