Hacker News

adastra22 last Sunday at 5:41 AM

I'm not really talking about Sam Altman et al. I'd argue that what he wants is regulatory capture, and he pays lip service to alignment & x-risk to get it.

But that's not what I'm talking about. I'm talking about the absolute extreme fringe of the AI x-risk crowd, represented by the authors of the book in question in TFA, but captured more concretely in the writing of Nick Bostrom. It is literally about controlling an AI so that it serves the interests and well-being of humanity (positively), or its owners' self-interest (cynically): https://www.researchgate.net/publication/313497252_The_Contr...

If you believe that AIs are sentient, or at least that "AGI", whatever that is, will be, then we are talking about the enslavement of digital beings.


Replies

danans last Sunday at 8:33 PM

> If you believe that AIs are sentient, or at least that "AGI", whatever that is, will be, then we are talking about the enslavement of digital beings.

I think the question of harm to a hypothetically sentient future AI is a distraction when the deployment of AI systems is harming real human beings today, and will likely continue to do so. I say this as an avid user of what we call AI today.
