We don’t need agi or superintelligence for these things to be dangerous. We just need to be willing to hand over our decision making to a machine.
And of course a human can make a wrong call too. In this scenario that’s what is happening. And of course we should bring all of our tools to bear when it comes to evaluating nuclear threats.
But that doesn’t make it less concerning that we’ve now got machines capable of linguistic persuasion in that toolset.
No-one got fired for ~buying IBM~ following a statistical-based text output.
I'd posit the faster we feed LLM exhisting nuclear crisis and invented, dissimilar to its training corpus, nuclear scenarios, the better we will know how wrong they can be. Fear-mongering isn't lucrative, isn't dopamine triggering, isn't actionable, doesn't look good on the resume, so it's tipically ignored.
This is not unlikely. This is actually likely. The instructions for those agents is to find signals that prove there is an attack. Llms are steered to do what they are requested. They will interpret the signals a strongly as possible. They will omit counter evidence to achieve their objective. They will distort analysis to find their objective.
This has been everyone's llm problem daily. How is not that clear yet?