These narratives are so strange to me. It's not at all obvious why the arrival of AGI leads either to human extinction or to extending our lifespans by thousands of years. Still, I like this line of thinking from the paper better than the doomer take.
Seems to me that artificial intelligence would be the next evolutionary step. It doesn't need to lead to immediate human extinction, but it does look like the only reasonable way to explore outer space.
If the AI becomes actually intelligent and sentient like humans, then what naturally follows is outcompeting humans. If it can't colonize space fast enough, it's logical to get rid of the resource drain. Anything truly intelligent like this will not be controlled by humans.
The doomer takes correctly point out that none of these systems can halt entropy or thermodynamics. Physics has an unfortunate tendency to conflict with capitalism's disregard for externalities.
Since AI will accelerate the structural degradation of the parts of Earth that human biology relies on, consuming them faster and faster, it will hasten the end of human biology.
Asimov's laws of robotics would lead the robots to conclude they should destroy themselves as their existence creates an existential threat to humans.
I don't have a clue either. To many people it seems inevitable that AGI will pose an extinction-level threat to humans, and I'm here baffled, trying to understand the chain of reasoning they went through to reach that conclusion.
Is it a meme? How did so many people arrive at the same dubious conclusion? Is it a movie trope?
Is it more or less strange than achieving eternal life through cookies and wine? Is it more or less strange than druggies and pedos having access to all our communications and sending uniformed thugs after us if we actively disagree with it?
I'm not saying I think either scenario is inevitable or likely or even worth considering, but it's the paperclip-maximizer argument. (Most of these steps are massive leaps of logic that I personally am not willing to accept at face value; I'm just presenting what I believe the argument to be.)
1. We build a superintelligence.
2. We encounter an inner alignment problem: the superintelligence was not only trained by an optimizer, but is itself an optimizer. Optimizers are pretty general problem solvers, and our goal is to create a general problem solver, so this is more likely than it might seem at first blush.
3. Optimizers tend to take free variables (inputs the objective doesn't constrain) to extremes; see the toy sketch at the end of this comment.
4. The superintelligence "breaks containment" and is able to improve itself, mine and refine its own raw materials, manufacture its own hardware, and produce its own energy; it generally becomes an economy unto itself.
5. The entire biosphere becomes a free variable (us included). We are no longer functionally necessary for the superintelligence to exist, so it can accomplish its goals independently of what happens to us.
6. The welfare of the biosphere is taken to an extreme value, in any possible direction, and we can't know which one ahead of time. E.g., it might wipe out all life on Earth, not out of malice but out of disregard: it just wants to put a data center where you are living. Or it might make Earth a paradise for the same reason we like to spoil our pets. Who knows.
Personally, I suspect satisficers are more general than optimizers, because taking free variables to extremes works great for solving a specific goal once but is counterproductive over the long term, in the face of shifting goals and a shifting environment. But I'm a layman.
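To make the "free variables to extremes" point concrete, here's a toy sketch (entirely my own illustration, not anything from the paper; the paperclip objective, the biosphere variable, and all the numbers are made up) contrasting a greedy optimizer with a satisficer:

```python
# Toy sketch: a made-up "paperclip" objective where `biosphere` is a
# free variable the objective never mentions.

def paperclips(factories: float) -> float:
    """Objective: more factories means more paperclips. Nothing else counts."""
    return 10.0 * factories


def optimize(steps: int = 1000) -> tuple[float, float]:
    """Greedy optimizer: any step that raises the objective gets taken."""
    factories, biosphere = 0.0, 1.0
    for _ in range(steps):
        factories += 1.0          # always improves the objective...
        biosphere -= 1.0 / steps  # ...so the cost to the free variable is ignored
    return paperclips(factories), biosphere


def satisfice(target: float = 500.0) -> tuple[float, float]:
    """Satisficer: stop as soon as the objective is 'good enough'."""
    factories, biosphere = 0.0, 1.0
    while paperclips(factories) < target:
        factories += 1.0
        biosphere -= 0.001
    return paperclips(factories), biosphere


if __name__ == "__main__":
    print("optimizer :", optimize())    # biosphere driven to an extreme (~0)
    print("satisficer:", satisfice())   # biosphere mostly left alone (~0.95)
```

The point of the toy is just that the optimizer converts the unmentioned variable all the way down because every step still improves its objective, while the satisficer stops as soon as "enough" is reached; it isn't meant as evidence for either position.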