Sounds like it was written by someone with a health condition. Hope Bostrom is alright.
It's also quite puzzling that he wouldn't even refer to his earlier work in order to rebut it, given that he wrote THE book on the risks of superintelligence.
These narratives are so strange to me. It's not at all obvious why the arrival of AGI leads either to human extinction or to lifespans extended by thousands of years. Still, I like this line of thinking from this paper better than the doomer take.
I don't really believe in the specific numbers he gives, but I appreciate moving the conversation away from “should” and into the consequences — including those that arise from delays.
If intelligence, whatever is meant by that, were the dominant factor in the emergence of power and social orders, then it ought to be quite trivial to show as much by enumerating powerful people from the last century or so and making the case that they were generally very intelligent.
I don't think this is the case. And if Bostrom and whoever else in his clique actually wanted to empower intelligence, how come they aren't viciously fighting for free school, free food, free shelter, free health care and so on, to make sure that intelligent people, especially kids, do not go to waste?
The paper again largely skips the issue that AGI cannot really be sold to people: either you're trying to swindle them out of money (all the AI startups), or transactions like that become meaningless because your AI runs the show anyway.
This paper argues that if superintelligence can give everyone the health of a 20-year-old, we should accept a 97% chance of superintelligence killing everyone in exchange for the 3% chance that the average human lifespan rises to 1,400 years.
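For what it's worth, that trade only pencils out as a straight expected-life-years comparison. The 3% and 1,400-year figures are from the claim above; the ~40-year baseline remaining life expectancy is my own rough assumption, not the paper's:

    # Rough sketch of the expected-value framing (my reconstruction, not the paper's model)
    p_good = 0.03            # assumed chance superintelligence delivers the upside
    lifespan_good = 1400     # years, the claimed average lifespan if it works out
    baseline_remaining = 40  # years, rough guess at status-quo remaining life expectancy
    ev_gamble = p_good * lifespan_good           # 0.03 * 1400 = 42 expected years
    print(ev_gamble > baseline_remaining)        # True, but only barely, and only if you buy the inputs

Which is exactly why the conclusion is so sensitive to the capability assumptions people here are objecting to.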
The usual (e.g., https://www.reddit.com/r/philosophy/comments/j4xo8e/the_univ...) bunch of logical fallacies and unexamined assumptions from Bostrom.
Good philosophers focus on asking piercing questions, not on proposing policy.
> Would it not be wildly irresponsible, [Yudkowsky and Soares] ask, to expose our entire species to even a 1-in-10 chance of annihilation?
Yes, if that number is anywhere near reality, about which there is considerable doubt.
> However, sound policy analysis must weigh potential benefits alongside the risks of any emerging technology.
Must it? Or is this a deflection from concern about immense risk?
> One could equally maintain that if nobody builds it, everyone dies.
Everyone is going to die in any case, so this is a red herring that misframes the issue.
> The rest of us are on course to follow within a few short decades. For many individuals—such as the elderly and the gravely ill—the end is much closer. Part of the promise of superintelligence is that it might fundamentally change this condition.
"might", if one accepts numerous dubious and poorly reasoned arguments. I don't.
> In particular, sufficiently advanced AI could remove or reduce many other risks to our survival, both as individuals and as a civilization.
"could" ... but it won't; certainly not for me as an individual of advanced age, and almost certainly not for "civilization", whatever that means.
> Superintelligence would be able to enormously accelerate advances in biology and medicine—devising cures for all diseases
There are numerous unstated assumptions here ... notably an assumption that all diseases are "curable", whatever exactly that means--the "cure" might require a brain transplant, for instance.
> and developing powerful anti-aging and rejuvenation therapies to restore the weak and sick to full youthful vigor.
Again, this just assumes that such things are feasible, as if an ASI is a genie or a magic wand. Not everything that can be conceived of is technologically possible. It's like saying that with an ASI we could find the largest prime or solve the halting problem.
> These scenarios become realistic and imminent with superintelligence guiding our science.
So he baselessly claims.
Sorry, but this is all apologetics, not an intellectually honest search for truth.
Bostrom is what I call a 'douchebag nerd' and as such seeks validation from other douchebag nerds. The problem is that Bostrom is not an engineer, and therefore cannot gain this recognition through engineering feats. The only thing Bostrom can do is sell an ideology to other douchebag nerds so that they can better rationalise their already douchebaggy behaviour.
> Yudkowsky and Soares maintain that if anyone builds AGI, everyone dies. One could equally maintain that if nobody builds it, everyone dies. In fact, most people are already dead. The rest of us are on course to follow within a few short decades. For many individuals—such as the elderly and the gravely ill—the end is much closer. Part of the promise of superintelligence is that it might fundamentally change this condition.
wtf? death is part of life. is he seriously arguing that if we don't build AGI people will "keep dying"? and suggesting that this is just as bad as extinction (or something worse, matrix-like)?
i don't think life would be as colorful and joyful without death. death is what makes life as precious as it is.
"For AGI and superintelligence (we refrain from imposing precise definitions of these terms, as the considerations in this paper don't depend on exactly how the distinction is drawn)" Hmm, is that true? His models actually depend quite heavily on what the AI can do, "can reduce mortality to 20yo levels (yielding ~1,400-year life expectancy), cure all diseases, develop rejuvenation therapies, dramatically raise quality of life, etc. Those assumptions do a huge amount of work in driving the results. If "AGI" meant something much less capable, like systems that are transformatively useful economically but can't solve aging within a relevant timeframe- the whole ides shifts substantially, surly the upside shrinks and the case for tolerating high catastrophe risk weakens?