OK, I think I might have a heart attack sooner or later. It's a possibility, although not a very likely one.
If I said that, you might ask whether I'd seen a doctor or had some other reason to suspect it, and that's my issue with him. He's a sci-fi writer who's scared of technology without a real grasp of how it works, and that's OK. He can talk about what he fears, and that's OK too. It still doesn't mean we should take him seriously just because he says so.
My pet peeve is that when making laws about AI, at least in Europe, some consideration was given to how it works, what it is (...), and how it's discussed in the academic literature. A lawyer explained that process to me in a course, and while it isn't perfect, you eventually settle on something that is more or less reasonable. With Yudkowsky, you have a guy who's scared of nanotech and yada yada. Sure, he might be right. But if I had to act on something, it would look much more like the EU lawmaking process and much less like "AI will totally kill us within 30 years, trust me." Perhaps I'm being clearer now.
And don't get me started on the rationalist stuff that just assumes pain is linear, yada yada.
Eliezer has written extensively on why he thinks AI research is going to kill us all. He has also done three-hour-long interviews on the subject, which are published on YouTube.