I don't recommend this article, for at least three reasons. First, it muddles key concepts. Second, there are better things to read on this topic. You could do worse than starting with "Conflating value alignment and intent alignment is causing confusion" by Seth Herd [1]. There is no shame in going back to basics with [2] [3] [4] [5]. Third, be aware that people seek comfort in all sorts of ways. One sneaky way is to convince oneself that "capability = alignment" as a shortcut to feeling better about the risks from unaligned AI systems.
I'll look around and try to find more detailed responses to this post; I hope better communicators than I am will take it sentence by sentence and give it the full treatment. If not, I'll try to write something more detailed myself.
[1]: https://www.alignmentforum.org/posts/83TbrDxvQwkLuiuxk/confl...
[2]: https://en.wikipedia.org/wiki/AI_alignment
[3]: https://www.aisafetybook.com/textbook/alignment
[4]: https://www.effectivealtruism.org/articles/paul-christiano-c...