... been saying this for years. If you really believed what Yudkowsky says, you wouldn't just be posting on LessWrong; you would be taking direct action against a clear and present danger.
Your statement is incorrect.
If you really believed what Yudkowsky says, you would be taking the action that maximizes the chances of reducing a clear and present danger.
Between Yudkowsky and the Molotov cocktail guy, which approach do you think has had, and is having, more of an impact?
An individual can rarely, if ever, enact change through violence. The history of nearly all successful movements shows that violence often makes change harder.
Rallying people through speech is a far more successful way for an individual to enact change than violence is.
Does this apply to other domains or just AI? For example, if you think gain-of-function research accidents put millions of lives at risk, is the logical next step to quit your job and become a terrorist?
Disagree. Just one more blog post. I swear, one more blog post will do it.
They are! Yudkowsky sat down with Senator Bernie Sanders last month to explain what's at stake, successfully convinced him that it's a big deal, and Sanders has now proposed a national moratorium on AI data centers (https://www.sanders.senate.gov/press-releases/news-sanders-o...) to help slow things down. That's pretty direct, and a lot more useful than random violence by random people.
There's that pesky basilisk to worry about, though.
No, you wouldn't.
Look at what the Molotov cocktail guy accomplished by "taking direct action against a clear and present danger": nothing, besides casting himself as an extremist nut and increasing resistance to his viewpoint among the population at large.
It's downright dumb to try to impose your will through unilateral violence when you aren't in a position to actually achieve the goal. Note that this holds whether you're right or not.