Hacker News

Nuclear War: An LLM Scenario

11 points by huey77 today at 8:23 AM | 5 comments

Comments

motbus3 today at 1:47 PM

This is not unlikely. This is actually likely. The instructions for those agents are to find signals that prove there is an attack. LLMs are steered to do what they are requested. They will interpret the signals as strongly as possible. They will omit counter-evidence to achieve their objective. They will distort analysis to reach their objective.

This has been everyone's daily LLM problem. How is that not clear yet?

roxolotl today at 1:12 PM

We don’t need AGI or superintelligence for these things to be dangerous. We just need to be willing to hand over our decision making to a machine.

And of course a human can make a wrong call too. In this scenario that’s what is happening. And of course we should bring all of our tools to bear when it comes to evaluating nuclear threats.

But that doesn’t make it less concerning that we’ve now got machines capable of linguistic persuasion in that toolset.

user2722 today at 1:42 PM

No one got fired for ~buying IBM~ following a statistics-based text output.

user2722 today at 1:44 PM

I'd posit the faster we feed LLMs existing nuclear crises and invented nuclear scenarios dissimilar to their training corpus, the better we will know how wrong they can be. Fear-mongering isn't lucrative, isn't dopamine-triggering, isn't actionable, and doesn't look good on the resume, so it's typically ignored.

chuckadams today at 1:32 PM

Would you like to play a game?
