
keeda · today at 5:55 PM

I don't think the risk is SkyNet. I think the real risk is some disaster through an unexpected chain of events, just like any large-scale outage.

I have not read “If Anybody Builds It, Everybody Dies” but I believe that's also its premise.

Current GenAI is extremely capable but also very weird. For instance, it is extremely smart in some areas but makes extremely elementary mistakes in others (cf. the Jagged Frontier.) Research from Anthropic and OpenAI gives us surprising glimpses into what might be happening internally, how that does not necessarily correspond to the outputs a model produces, and all kinds of non-obvious, striking things happening behind the scenes.

Like models producing different reasoning tokens from what they are really reasoning about internally!

Or models being able to subliminally influence derivative models through opaque number sequences in training data!

Or models "flipping the evil bit" when forced to produce insecure code and going full Hitler / SkyNet!

Or the converse, where models produce insecure code if the prompt includes concepts they associate with "evil" -- something that was actually caught in the wild!

We are still very far from being able to truly understand these things. They behave like us, but don't necessarily “think” like us.

And now we’ve given them direct access to tools that can affect the real world.

Maybe we am play god: https://dresdencodak.com/2009/09/22/caveman-science-fiction/