Hacker News

esperent · last Sunday at 3:26 AM · 1 reply

> which is you can't use the past to predict the future

Of course you can use the past to predict (well, estimate) the future. How fast does wheat grow? Collect a hundred years of statistics on wheat growth and weather patterns, and you can estimate how fast it will grow this year with high accuracy, unless a "black swan" event occurs that wasn't represented in the past data.

Note carefully what we're doing here: we're applying probability to statistical data about past wheat growth in order to estimate future wheat growth.
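The kind of estimate described above can be sketched in a few lines. This is a minimal illustration, and the yield figures are made up for the example, not real agricultural data:

```python
import statistics

# Hypothetical historical wheat yields in tonnes/hectare -- illustrative only.
past_yields = [2.8, 3.0, 2.9, 3.1, 3.2, 2.7, 3.0, 3.1, 2.9, 3.3]

# Point estimate for next year: the historical mean.
estimate = statistics.mean(past_yields)

# A rough ~95% uncertainty band from the sample standard deviation.
# This band is only meaningful if next year is drawn from the same
# distribution as the past -- exactly the assumption a black swan breaks.
spread = 2 * statistics.stdev(past_yields)

print(f"estimated yield: {estimate:.2f} +/- {spread:.2f} t/ha")
```

The whole construction rests on having past observations of the *same* process you're predicting, which is the crux of the argument that follows.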

There's no past data about the effects of AI on society, so there's no way to make statements about whether it will be safe in the future. Nevertheless, people use the fact that other, completely unrelated things in the past didn't cause "doom" (societal collapse) to predict that AI won't cause doom. But statistics and probability don't work this way: using historical data about one thing to predict the future of another thing is a fallacy. Even if the two seem related in our minds (doom/societal collapse caused by a new technology), mathematically they are not.

> we always get arguments about why things are really, really bad.

When we're dealing with a completely new, powerful thing that we have no past data on, we absolutely should consider the worst-case scenario, along with the median and best cases, and we should prepare for all of them. It's nonsensical to shout down the people preparing for the worst and working to make sure it doesn't happen, or to label them doomers, just because society has survived other, unrelated bad things in the past.


Replies

rpdillon · last Sunday at 11:55 AM

Ah, I see your point is not philosophical: it's that we have no historical data about the effects of AI. I understand your point now. I tend to be quite a bit more liberal and let things play out, because I think many systems are too complex to predict. But I don't think that's a point we'll settle here.