Hacker News

jay_kyburz · yesterday at 2:22 AM · 4 replies

In my mind, as a casual observer, AGI will be like nukes: very powerful technology with the power to kill us all, and a small group of people will have their fingers on the buttons.

Also like nukes, unfortunately, the cat is out of the bag, and because there are people like Putin in the world, we _need_ friendly AGI to defend against hostile AGI.

I understand why we can't just pretend it's not happening.

I think the idea that an AGI will "run amok" and destroy humans because we are in its way is really unlikely and underestimates us. Why would anybody give so much agency to an AI without keeping the power to just pull the plug? And even then, it would probably only have the resources of one nation.

I'm far more worried about Trump and Putin getting into a nuclear pissing match, and then about global warming resulting in crop failure and famine.


Replies

WhyOhWhyQ · yesterday at 3:07 AM

You might consider the possibility that decentralized AI will team up with itself to enact plans. There's no "pulling the plug" in that scenario.

chorsestudios · yesterday at 2:58 AM

In my mind, the idea of AGI running amok isn't literal; the danger is what it enables:

Optimizing and simulating war plans, predicting enemy movements and retaliation, suggesting which attacks are likely to produce the most collateral damage or political advantage. How large a bomb? Which city for the most damage? Should we drop two? Choices such as drone-striking an oil refinery vs. bombing a children's hospital vs. blowing up a small boat that might be smuggling narcotics.

dyauspitr · yesterday at 2:27 AM

I think a true AGI would “hack” so well that it would be able to control most of our systems if it “wanted”.

hollerith · yesterday at 2:41 AM

The gaping hole in that analogy is that the scientists at Los Alamos could (and did) calculate the explosive power of the first nuclear detonation before the detonation. In contrast, the AI labs have nowhere near the level of understanding of AI needed to do a similar calculation. Every time a lab does a large training run, the resulting AI might end up vastly more cognitively capable than anyone expected, provided (as will often be the case) that it incorporates substantial design changes not found in any previous AI.

Parenthetically, even if AI researchers knew how to determine (before unleashing an AI on the world) whether it would end up with a dangerous level of cognitive capability, most labs would persist in creating and deploying dangerous AI (basically because AI skeptics have been systematically removed from most of the labs, much as in 1917 the coalition in control of Russia started removing any member skeptical of Communism). So there would remain a need for a regulatory regime of global scope to prevent the labs from making reckless gambles that endanger everyone.