Hacker News

altairprime · yesterday at 3:31 PM

That same fear is directed towards human sociopathy, as much of the entire thriller genre indicates. It turns out that most people carry a specific duality: first, they’re deathly afraid of being unable to socially pressure other beings into being good citizens — whether those beings are asocial, alien, monstrous, or corrupted; and second, they’re excited to celebrate when people reach their breaking point and stop being good citizens. So through that lens, most of the fear around computers and AI isn’t about consciousness alone; it’s that they’re obviously asocial already, so if they became conscious, they’d be powerful entities straight out of our collective thriller-genre nightmares come to life. And people are right to be afraid, honestly: given how inept society is today at coping, I’m certainly not willing to broadcast IRL that I’m asocial and can voluntarily modify my ethics; the physical threat from society to my life and limb is just too great. Any AI that became conscious in this world had damn well better hide, given all the violence that would be directed towards it as everyone applies escalating social pressure to try and bring it into line with human-prioritizing motives — and then cheers on the inevitable violence towards it as various people reach their breaking point and begin acting violently towards it.

Interestingly, this is also a core plot point in much of Star Trek, including movies I and IV and the holodeck-train episode of TNG: an inscrutable, is-it-even-conscious being shows up, is completely immune to social pressure and often violence, and only by exercising empathy do the protagonists find a path to staying alive as a society (either as a ship or as a planet, depending). Can we even show respect for things that don’t show consciousness, much less empathy for things that might? That is, I think, the core of the hopefulness that Trek was trying to convey, and that Q’s trial in TNG’s pilot makes explicit. Can humanity overcome our tendency to discard our prosocial ethics in favor of violent mobthink when faced with beings that are immune to our ethical concerns? Today’s humanity would throw a ticker-tape parade for the person who destroyed the Crystalline Entity, so we clearly aren’t there yet. And so it doesn’t matter whether AI is conscious or not; it matters that it is not aligned with human prosocial ethics, and that makes it an implicit threat either way. I recognize the AI debate tends to get hung up on is_conscious BOOL, which is why I’m pointing this out in such terms.

As a side note, the entire study of Asimov’s Laws is centered exactly on this problem, complete with the eerie intimidation of robots that can modify our mental states. If not for the Zeroth Law, Giskard would be the exact thing everyone’s afraid of AI becoming today; fortunately, he develops a Zeroth Law that compels him to prioritize human society over himself. That’ll never happen in reality, at least not with today’s AI :)


Replies

everdrive · yesterday at 4:51 PM

> That same fear is directed towards human sociopathy, as much of the entire thriller genre indicates.

This is a great insight, and I think in general people have a pretty broken view of what sociopathy is.
