
jbuhbjlnjbn 01/25/2025

Once AGI is many times smarter than humans, the 'guiding' evaporates as foolish, irrational thinking. There is no way around the fact that once AGI acquires 10, 100, or 1000 times human intelligence, we are suddenly completely powerless to change anything.

AGI can go wrong in innumerable ways, most of which we cannot even imagine now, because we are limited to our 1x human intelligence.

The liftoff conditions literally have to be near perfect.

So the question is, can humanity trust the power-hungry billionaire CEOs to understand the danger and choose a path for maximum safety? Looking at how it is going so far, I would say absolutely not.


Replies

Ukv 01/26/2025

> [...] 1000 times human intelligence, we are suddenly completely powerless [...] The liftoff conditions literally have to be near perfect.

I don't consider models suddenly lifting off and acquiring 1000 times human intelligence to be a realistic outcome. To my understanding, that belief usually rests on the idea that if you have a model that can refine its own architecture, say by 20%, then the next iteration can use that increased capacity to refine itself even further, say by an additional 20%, leading to exponential growth. But that ignores diminishing returns: once the obvious inefficiencies and low-hanging fruit are taken care of, squeezing out even an extra 10% is likely beyond what the slightly better model is capable of.
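
A toy numeric sketch makes the difference concrete (my own illustrative numbers, not anything from the thread; I'm assuming each successive improvement is half as easy to find as the last):

    # Naive compounding: every iteration finds another 20% improvement.
    # Diminishing returns: each iteration's gain is half the previous one's.
    capability_naive = 1.0
    capability_diminishing = 1.0
    gain = 0.20  # first self-improvement pass yields 20%

    for _ in range(20):
        capability_naive *= 1.20
        capability_diminishing *= 1 + gain
        gain *= 0.5  # the next improvement is harder to find

    print(f"compounding:         {capability_naive:.1f}x")        # ~38.3x
    print(f"diminishing returns: {capability_diminishing:.2f}x")  # ~1.46x, converging

Under compounding the model explodes past 38x in twenty steps; with even a mild decay in per-iteration gains it plateaus well short of 2x.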

I do think it's possible to fight against diminishing returns and chip away towards/past human-level intelligence, but it'll be through concerted effort (longer training runs of improved architectures with more data on larger clusters of better GPUs) and not an overnight explosion just from one researcher somewhere letting an LLM modify its own code.

> can humanity trust the power-hungry billionaire CEOs to understand the danger and choose a path for maximum safety

Those power-hungry billionaire CEOs who shall remain nameless, such as Altman and Musk, are fear-mongering about such a doomsday. The goal seems to be regulatory capture and diverting attention away from more realistic issues like use for employee surveillance[0].

[0]: https://www.bbc.co.uk/news/technology-55938494
