> "Thog, what if crazy god get summoned by other cavemen, and it punish all who no help!? Me no take chance!"
The frequently ignored point of Roko's original argument was not that an Unfriendly AI could arise, or even that one would be probable absent preparations against it (Yudkowsky had already made both claims), but that even a Yudkowskian Friendly AI might engage in this kind of acausal blackmail.
If any entity engages in unreasonable vengeance, I think "Friendly" is probably the wrong word for it.
https://www.lesswrong.com/w/rokos-basilisk
Saturday morning infohazard :)