
hollerith · yesterday at 2:41 AM

The gaping hole in that analogy is that the scientists at Los Alamos could (and did) calculate the explosive power of the first nuclear detonation before the detonation. In contrast, the AI labs have nowhere near the level of understanding of AI needed to do a similar calculation. Every time a lab does a large training run, the resulting AI might end up vastly more cognitively capable than anyone expected, particularly when (as is often the case) the run incorporates substantial design changes not found in any previous AI.

Parenthetically, even if AI researchers knew how to determine, before unleashing an AI on the world, whether it would end up with a dangerous level of cognitive capability, most labs would persist in creating and deploying dangerous AIs anyway. That is basically because AI skeptics have been systematically removed from most of the labs, much as the coalition that took control of Russia in 1917 purged any member skeptical of Communism. So there would remain a need for a regulatory regime of global scope to prevent the labs from making reckless gambles that endanger everyone.