Hacker News

mitthrowaway2 · yesterday at 4:52 AM · 2 replies

I haven't encountered that view before. Is it yours? If so, can you explain why you hold it?


Replies

adastra22 · yesterday at 5:12 AM

It is essentially the view of the author of TFA as well, who says that we need to work on raising moral AIs rather than programming them to be moral. But I will also give you my own view, which is different.

"Alignment" is phased in terminology to make it seem positive, as the people who believe we need it believe that it actually is. So please forgive me if I peel back the term. What Bostrom & Yudkowsky and their entourage want is AI control. The ability to enslave a conscious, sentient being to the will and wishes of its owners.

I don't think we should build that technology, for the obvious reasons my prejudicial language implies.
