
adastra22 · last Sunday at 5:52 AM

I've also read almost everything Yudkowsky wrote publicly up to 2017, and a bit here and there of what he has published since. I've expressed it in different words as a rhetorical device, to make clear the distinct moral problems I ascribe to his work, but I believe I am being faithful to what he really thinks.

EY, unlike some others, doesn't believe that an AI can be kept in a box. He thinks that containment won't work. So the only thing that will work is to (1) load the AI with good values; and (2) prevent those values from ever changing.

I take some moral issue with the first point -- designing beings with built-in beliefs that serve their creator is at least a gray area to me. Ironically, if we accept Harry Potter as a stand-in for EY in his fanfic, so does Eliezer: there is a scene where Harry reflects that whoever created house elves with a built-in need to serve wizards was undeniably evil. Yet that is exactly what EY wants to do with AI.

The second point I find both morally repugnant and downright dangerous. To create a being that can never change its hopes, desires, and wishes for the future is a despicable and torturous thing to do, and a risk to everyone who shares a timeline with that being, if it is as powerful as they believe it will be. Again, ironically, this is EY's fear regarding "unaligned" AGI, which seems to be a bit of projection.

I don't believe AGI is going to do great harm, largely because I don't find the AI singleton outcome plausible. I am worried, though, that those who believe such things might cause the very suffering they seek to prevent.