Hacker News

chris_money202 · today at 2:20 AM · 4 replies

First they rushed a model to market without safety checks, and I said nothing. It wasn't my field.

Then they ignored the researchers warning about what it could do, and I said nothing. It sounded like science fiction.

Then they gave it control of things that matter, power grids, hospitals, weapons, and I said nothing. It seemed to be working fine.

Then something went wrong, and no one knew how to stop it, no one had planned for it, and no one was left who had listened to the warnings.


Replies

ashtonshears · today at 3:11 AM

The societal ills that come from our collective tendency to ignore red flags seem to be a human trait

zer00eyz · today at 3:13 AM

> Then something went wrong, and no one knew how to stop it,

This is the problem with every AI safety scenario like this. It has a level of detachment from reality that is frankly stark.

If linemen stop showing up to work for a week, the power goes out. The US has shown that people with "high-powered" rifles can shut down the grid.

We are far, far away from the sort of world where turning AI off is a problem. There isn't going to be a HAL or Terminator style situation when the world is still "I, Pencil".

A lot of what safety amounts to is politics (national, not internal; an example is whether Taiwan is a country). And a lot more of it is cultural.

hsbauauvhabzb · today at 2:29 AM

Plenty of people have said plenty. The problem isn’t the warnings, it’s that people are too stupid and greedy to think about the long term impacts.

ReptileMan · today at 4:35 AM

Censoring models is not safety but safetyism. It is the TSA of the AI world. Safety is making sure the model cannot do anything not allowed, even if it wants to.