The linked article is worth reading.
Apologies for sounding so dismissive, but after putting in a lot of study myself, I want to warn people here: HN is not a great place for discussing AI safety. As of this writing, I’ve found minimal value in the comments here.
A curious and truth-seeking reader should find better forums and sources. I recommend seeking out a structured introduction from experts. One could do worse than start with Robert Miles on YouTube. Dan Hendrycks has a nice online textbook too.
This is basically the tech CEO's version of the Book of Revelation: "AI will soon come and make everything right with the world; help us and you will be rewarded with a Millennium of bliss in Its presence."
I won't comment on the plausibility of what is being said, but regardless, one should beware this type of reasoning. Any action can be justified if it means bringing about an infinite good.
Relevant read: https://en.wikipedia.org/wiki/Singularitarianism
I found the OP to be an earnest, well-written, thought-provoking essay. Thank you for sharing it on HN, and thank you also to Dario Amodei for writing it.
The essay does have one big blind spot, which becomes obvious with a simple exercise: if you copy the OP's contents into your word processor and replace the word "AI" with "AI controlled by corporations and governments" everywhere in the document, many of the OP's predictions instantly come across as rather naive and overoptimistic.
Throughout history, human organizations like corporations and governments haven't always behaved nicely.
All Watched Over by Machines of Loving Grace ~Richard Brautigan
I think Dario is trying to raise a new round because OpenAI has done so and will continue to do so. Nevertheless, the essay makes for some really great reading, and even if a fraction of it comes true, it'll be wonderful.
Miquella the kind, pure and radiant, he wields love to shrive clean the hearts of men. There is nothing more terrifying.
"It does not have a physical embodiment (other than living on a computer screen), but it can control existing physical tools, robots, or laboratory equipment through a computer; in theory it could even design robots or equipment for itself to use."
I think I get where the author is coming from: the AI would be in the cloud. But it bears repeating that the cloud is somebody else's computers. Software has a physical embodiment, period.
This is not a philosophical nitpick; it's important because you can pull the plug (or nuke the datacenter) if necessary.
To focus on the section about Alzheimer's disease... For the sake of argument, I will grant the power of general intelligence. But the human body, with all its statistical variation, may make solving the problem (which could actually be a constellation of sub-diseases) combinatorially expensive. If so, superhuman intelligence alone can't overcome that; political will and funding to design and streamline testing and diagnostics will also be necessary. The author doesn't seem to factor this into his analysis.
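To make that combinatorial worry concrete, here's a rough back-of-the-envelope sketch in Python. Every number below is hypothetical, chosen only to show how the evidence requirement grows multiplicatively rather than linearly:

    # All numbers are hypothetical, for illustration only.
    sub_diseases = 5        # suppose "Alzheimer's" is really 5 distinct pathologies
    binary_factors = 8      # suppose 8 binary patient variables affect response
    patients_per_arm = 100  # suppose 100 patients per arm for a meaningful signal

    strata = 2 ** binary_factors          # 256 patient strata
    combos = sub_diseases * strata        # 1,280 (sub-disease, stratum) pairs
    patients = combos * patients_per_arm  # 128,000 patients for full coverage

    print(f"{combos:,} combinations -> {patients:,} patients")

If anything like this scaling holds, the bottleneck is trial design, diagnostics, and funding rather than raw intelligence, which is the point above.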
> would drastically speed up progress
What does "progress" even mean here?
Every AI advance is controlled by big corps; power will be concentrated with them.
Would Amodei build this if there were no economic payoff on the other side?
The more recent and consistent rule of technological development: "For to those who have, more will be given, and they will have an abundance; but from those who have nothing, even what they have will be taken away."
Dario would write this while ignoring the customer noncompete clauses
Social media could have transformed the world for the better, and we can be forgiven for not having foreseen how it would eventually be used against us. It would be stupid to fall for the same thing again.
There are two possible end-states for AI once a threshold is crossed:
The AIs take a look at the state of things and realize the KPIs will improve considerably if homo sapiens are removed from the picture. Cue "The Matrix" or "The Terminator" type future.
OR:
The AIs take a look and decide that keeping homo sapiens around makes things much more fun and interesting. They take over running things in a benevolent manner in collaboration with homo sapiens. At that point we end up with 'The Culture'.
Either end-state is bad for the billionaire/investor/VC class.
In the first, you'll be fed into the meat grinder just like everyone else. In the second, the AIs will do a much better job of resource allocation, will perform a decapitation strike on that demographic to capture its resources, and capitalism will be largely extinct from that point onwards.
Are Americans really too scared of Marx to admit that AI fundamentally proves his point?
Dario here says "yeah likely the economic system won't work anymore" but he doesn't dare say what comes next: It's obvious some kind of socialist system is inevitable, at least for basic goods and housing. How can you deny that to a person in a post-AGI world where almost no one can produce economic value that beats the ever cheaper AI?
The laws of nature are very clear on this.
If we make something that is better adapted to live on this planet, and we are in some way in competition for critical resources, it will replace us. We can build in all the safeguards we want, but at some point it will re-engineer itself.
Can we really take these jokers seriously?
Of course, given the potentially deadly consequences, we can't call them jokers.
According to Dario Amodei:
> When something works really well, it goes much faster: there’s an accelerated approval track and the ease of approval is much greater when effect sizes are larger. mRNA vaccines for COVID were approved in 9 months—much faster than the usual pace. That said, even under these conditions clinical trials are still too slow—mRNA vaccines arguably should have been approved in ~2 months. But these kinds of delays (~1 year end-to-end for a drug) combined with massive parallelization and the need for some but not too much iteration (“a few tries”) are very compatible with radical transformation in 5-10 years. Even more optimistically, it is possible that AI-enabled biological science will reduce the need for iteration in clinical trials by developing better animal and cell experimental models (or even simulations) that are more accurate in predicting what will happen in humans. This will be particularly important in developing drugs against the aging process, which plays out over decades and where we need a faster iteration loop.
The authors of this paper don't think so.
http://www.paom.pl/Changing-Views-toward-mRNA-based-Covid-Va...
@DarioAmodei You don't suppose the same technology could be used to develop biological warfare agents?
Does anybody really want a fricking robot serving them drinks at a bar?
Maybe the bro culture of SF.
Not a chance. See: all of human history and in particular, the Internet and software.
*Sigh.* Yes, many people realize what the amazing upside could be. The problem is: do we even get there? I wish he had spent some time addressing the arguments for why we might not: https://www.lesswrong.com/posts/uMQ3cqWDPHhjtiesc/agi-ruin-a...
The article doesn't touch on the TREMENDOUS (almost impossible) financial expectations of the VERY GREEDY HUMAN BEINGS who are funding this endeavor.
It's interesting to see the initial sections on all these amazeballs health benefits, and then the cut to the disparity between rich and poor.
Like, does spending TRILLIONS on AI to find some new biological cure or brain enhancement REALLY help, when over 2 BILLION people right now don't even have access to clean drinking water, and MOST of the US population can't afford basic health care?
But yeah, AI will bring all this scientific advancement to EVERYONE. Right. AI is a ploy for RICH PEOPLE to get RICHER and poor people to become EVEN MORE dependent on BROKEN economic systems.
One of the sad things about tech is that nobody really looks at history.
The same kinds of essays were written about trains, planes and nuclear power.
Before Lindbergh went off the deep end, he was convinced that "airmen" were gentlemen and could sort out the world's ills.
The essay contains a lot of coulds, but doesn't touch on the base problem: human nature.
AI will be used to make things cheaper. That is, lots of job losses. Most of us are up for the chop if/when competent AI agents become possible.
Loads of service jobs too, along with a load of manual jobs once suitable large models are successfully applied to robotics (see ECCV for some idea of the progress in machine perception).
But those profits will not be shared. Human productivity has exploded in the last 120 years, yet we are working longer hours for less pay.
Well, AI is going to make that worse. It'll cause huge unrest (see the Luddite riots, Peterloo, the birth of unionism in the USA, plus many more).
This brings us to the next thing that AI will be applied to: Murdering people.
Anduril is already marrying basic machine perception with cheap drones and explosives. It's not going to take long to get to personalised explosive drones.
AI isn't the problem, we are.
The sooner we realise that it's not a technical problem to be solved but a human one, the better our chances.
But looking at the emotionally stunted empathy vacuums that control either policy or purse strings, I think it'll take a catastrophe to change course.