>If humanity goes extinct in the next few years because of unaligned superintelligence
This is either a misunderstanding of the anti-AI crowd or an intentional attempt to discredit them. The majority of anti-AI people don't actually fear this, because that belief would require having already bought into the hype about AI's actual power and prowess. The bigger motivator for anti-AI folks is usually the way AI amplifies the negative traits of humans and of the systems we have created — something that is already happening and doesn't require any pending "superintelligence" breakthrough. For example, an AI doesn't actually need to be able to perfectly replace the work I do for someone to decide that it's more cost-effective to fire me and give my work to that AI.
> an AI doesn't actually need to be able to perfectly replace the work I do for someone to decide that it's more cost-effective to fire me and give my work to that AI.
Exactly. "Lack of intelligence" is really a much bigger concern than "superintelligence". Companies and governments will happily try to save money and avoid accountability by letting AI do work that it can only do poorly, and it will be humans who are left with the accelerated, AI-powered enshittification and blind, soulless paperclip maximization that results.
It is not a misunderstanding; the anti-AI crowd is heterogeneous. There are many different groups of anti-AI people, with different beliefs.
This attempt to "reframe and reclaim" (here, paraphrased: "significant existential risks from AI are actually marketing hype by pro-AI fanatics") is a rhetorical device, but not an honest one. It's a power struggle over who gets to define and lead "the" anti-AI movement.
We may agree or disagree with them, but there are rational anti-AI arguments that center on X-risks.