This article has the opposite of its intended effect: it doesn't put me at ease at all. There's no real argument here that AGI couldn't be dangerous; it just assumes that of course we would build better versions than that. Right, because we always get it right the first time, like Microsoft with its racist chatbot, or AIs talking kids into suicide. We'll fix the bugs later, after the AGI sets off the nukes. So much for an iterative development process.