Hacker News

ed · today at 6:05 AM · 3 replies

This paper argues that if superintelligence can give everyone the health of a 20-year-old, we should accept a 97% chance of superintelligence killing everyone in exchange for the 3% chance that the average human lifespan rises to 1,400 years.


Replies

paulmooreparks · today at 6:15 AM

There is no "should" in the relevant section. It's making a mathematical model of the risks and benefits.

> Now consider a choice between never launching superintelligence or launching it immediately, where the latter carries an x% risk of immediate universal death. Developing superintelligence increases our life expectancy if and only if:

> [equation I can't seem to copy]

> In other words, under these conservative assumptions, developing superintelligence increases our remaining life expectancy provided that the probability of AI-induced annihilation is below 97%.
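
The equation didn't survive the copy, but it can be reconstructed from the surrounding text (figures assumed from the paper's setup as summarized above): write L0 for remaining life expectancy without superintelligence (~40 years) and L1 for remaining life expectancy conditional on surviving the launch (~1,400 years). Launching increases expected remaining lifespan iff

(1 − x) · L1 > L0, i.e. x < 1 − L0/L1 ≈ 1 − 40/1400 ≈ 0.97,

which matches the quoted 97% threshold.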

wmf · today at 6:38 AM

That's what the paper says. Whether you would take that deal depends on your level of risk aversion (which the paper gets into later). As a wise man once said, death is so final. If we lose the game, we don't get to play again.
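
For a rough sense of how much risk aversion moves that 97% threshold, here is one crude illustration (my own, not the paper's model): take a concave utility over remaining lifespan, u(L) = log L, with u(death) = 0. Then the gamble is worth taking only if

(1 − x) · log(1400) > log(40), i.e. x < 1 − log(40)/log(1400) ≈ 0.49,

so even this mild form of risk aversion roughly halves the acceptable annihilation risk, and any utility that treats extinction as unboundedly bad rejects every x > 0.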

measurablefunc · today at 6:15 AM

Bostrom is very good at theorycrafting.