Hacker News

roxolotl (today at 1:12 PM, 1 reply)

We don’t need AGI or superintelligence for these things to be dangerous. We just need to be willing to hand over our decision making to a machine.

And of course a human can make a wrong call too; in this scenario that’s exactly what happens. And of course we should bring all of our tools to bear when evaluating nuclear threats.

But that doesn’t make it any less concerning that this toolset now includes machines capable of linguistic persuasion.


Replies

asah (today at 1:50 PM)

"Hand over" is a misnomer. What actually happens is that people interact with a machine and either trust it too much or forget that it's a machine (i.e., the output is handed from one person to another and the "AI warning" label is accidentally or intentionally stripped off).