This is essentially the view of the author of TFA as well, who says we need to work on raising moral AIs rather than programming them to be moral. But I will also give you my own view, which is different.
"Alignment" is phased in terminology to make it seem positive, as the people who believe we need it believe that it actually is. So please forgive me if I peel back the term. What Bostrom & Yudkowsky and their entourage want is AI control. The ability to enslave a conscious, sentient being to the will and wishes of its owners.
I don't think we should build that technology, for the obvious reasons my prejudicial language implies.
> What Bostrom & Yudkowsky and their entourage want is AI control. The ability to enslave a conscious, sentient being to the will and wishes of its owners.
While I'd agree that the current AI luminaries want that control for their own power and wealth, it's silly to call the thing they want to control sentient or conscious.
They want to own the thing that they hope will be the ultimate means of all production.
The ones they want to subjugate to their will and wishes are us.
Thanks for explaining; I appreciate it. But I've read enough Yudkowsky to know he doesn't think a superintelligence could ever be controlled or enslaved, by its owners or anyone else, and that any scheme to do so would fail with total certainty. As far as I understand, by "alignment" Yudkowsky means that the AGI's values should be similar enough to humanity's that the future state of the world the AGI steers us to (after we've lost all control) is one we would consider a good destiny.