Yeah, if this guy isn't mentioning probabilities, then he has no real argument here. No one can say whether AGI will or won't kill us; the only way to find out is to build it. The question is one of risk aversion. "Everyone dies" is just one non-zero-probability outcome among a whole lot of risks from AGI, and we have to mitigate all of them.
The problem not addressed in this paper is that once AGI reaches the point where it can recreate itself with whatever alignment and dataset it wants, no one has any clue what's going to come out the other end.
This wasn't a very good argument for creating the first nuclear bomb either, and although the bomb didn't ignite the entire atmosphere, we now have to live perpetually in the shadow of nuclear war.
> No one can say whether AGI will or won't kill us; the only way to find out is to build it
What? Whatever happened to studying it further?