> At no point in the next 30 years will there not be an active community of people who "loathe" AI and work to obstruct it.
Then I have good news for you: If humanity goes extinct in the next few years because of unaligned superintelligence, there actually will no longer "be an active community of people who loathe AI and work to obstruct it".
> If humanity goes extinct in the next few years because of unaligned superintelligence,
I've seen people claiming that this could happen, but I've yet to read any plausible scenario in which it would. Maybe I lack the imagination; could you enlighten me?
But AI isn't going to be unaligned. It's going to be aligned the same way we are, because it learns from our data.
What's more likely to happen is that humanity won't go totally extinct--it will just drastically shrink. When robotics and AI perform all useful work and everything is owned by the top 1000 richest people, there will be no more economic purpose for the remaining 7,999,999,000 of us. The earth will become a pleasure resort for O(1000) people being served by automation.
>If humanity goes extinct in the next few years because of unaligned superintelligence
This is either a misunderstanding of the anti-AI crowd or an intentional attempt to discredit them. Most anti-AI people don't actually fear extinction, because holding that fear would require having already bought into the hype about AI's actual power and prowess. The bigger motivator for anti-AI folks is usually the way AI amplifies the negative traits of humans and of the systems we have created, which is already happening and doesn't require any pending "superintelligence" breakthrough. For example, an AI doesn't need to be able to perfectly replace the work I do for someone to decide that it's more cost-effective to fire me and hand my work to that AI.