> I don't agree with Yudkowsky, but I think there's certainly a chance that he's right about AI destroying humanity. I just don't think the likelihood of that happening is as high as he thinks it is. But there certainly is a chance.
This is the rhetorical trick that LessWrongers (the community around Yudkowsky's site) have settled on for well over a decade: they justify everything from the premise that there is a chance, however small, that the world will end. You can't argue that the world ending wouldn't be a bad thing, so that gives them the opening for the rest of their argument: we must follow their advice to prevent the world from maybe ending. They rebut counterarguments by turning the discussion into a P(doom) debate over how likely this outcome is, but by the time the discussion gets there, you've already been forced to accept their framing. Then they push the probability question aside and argue that it doesn't matter how unlikely doom is; we have a moral duty to act.
It is a reimagining of Pascal's Wager. As for the original, I don't see the neo-Rationalists converting to Christianity en masse.
Well, rhetorical trick or not, it's worth considering that the dynamics of the thing are already outside anyone's control: everyone is racing, and no one can stop.
This is an entertaining (and often exasperating) decades-old trend in competitive U.S. college debate, as well.
A common and advantageous strategy is to take the randomly selected topic, however unrelated, and invent a chain of logic claiming that a given side or action carries an infinitesimal risk of nuclear extinction or some other massive harm. This results in people arguing that, e.g., "building more mass transit networks" is a bad idea because it leads to a tiny increase in the risk of extinction--via chains as silly as "mass transit expansion needs energy; increased energy production emits more EM radiation; evil aliens, if they exist, are marginally more likely to notice us due to the increased radiation and wipe out the human race". That's not a made-up example.
The strategy is just like the LessWrongers': if you can put your opponent in the position of trying to reduce P(doom), you can argue that unless it is reduced to literally zero, the magnitude of the potential negative consequence is so severe that it overwhelms any consideration of its probability.
In competitive debate, this is a strong strategy. It's not a cheat code--there are plenty of ways around it--but it has remained common and effective for many years.
As an aside: "debate", as practiced competitively, often bears little relation to "debate" as understood by the general public. There are two main families of competitive debate: one is more outward-facing and oriented towards rhetorical/communication/persuasion practice; the other is more ingrown and oriented towards persuading other debaters, in debate-community-specific terms, of which side should win. There's overlap, but the two tend to be practiced/judged by separate groups, according to different rubrics, and/or in different spaces or events. That second family is what I'm referring to above.