That's because it is.
AI is powerful and AI is perilous. Those two aren't mutually exclusive — both follow directly from the same premise.
If AI tech goes very well, it could be the greatest invention in all of human history. If AI tech goes very poorly, it could be the end of human history.
You wouldn't say that rolling dice is dangerous. You would say that the human who decides to take an action depending on the value of the dice is the danger. I don't think AI is dangerous. I think people are dangerous.
True of AGI, but what we have right now doesn't fit that bill. (I would encourage people who disagree with this to go talk to ChatGPT about how LLMs and reasoning models work. Seriously! I'm not being snarky. It's very good at explaining itself. If you understand how reasoning works and what an LLM is actually doing, it's hard to believe that our current models are going to do much more than become iteratively more precise at mimicking their training datasets.)
Tbh, I find this argument really stupid. A word-prediction machine isn't going to destroy humanity. Sure, humans can do some dumb stuff with it, but that's about it.
Stop mistaking science fiction for science.
It needs to go well every single day, and only needs to go very poorly once. Not to conflate LLMs with actual superintelligence, but for this reason (and many others related to basic human dignity), this is not a technology that a responsible society should be attempting to build. We need our very own Butlerian Jihad.
Same with everything, right? You could say the same about nukes, electricity, the internet, the computer, etc. But if you look at it without paying attention to the "ultimate tool for humanity" hype, it doesn't really look like much of a threat or a salvation.
Dropping the guardrails won't end civilization, but it will surely enable bad actors to do more damage than before (mass scams, blackmail, deepfake nudes, etc.)
There are companies that don't feel the pressure to make their models play fast and loose, so I don't buy Anthropic's excuse for doing so.
> If AI tech goes very poorly, it can be the end of human history.
"Just unplug the goddamn thing!"
Also consider: if something is so bad it makes you wince or cringe, then your adversaries are prepared to use it.
> If AI tech goes very well
The IF here is doing some very heavy lifting. Last I checked, for-profit companies don't have a good track record of doing what's best for humanity.
"If AI tech goes very well, it can be the greatest invention of all human history"
As has been said at many all hands:
Let's all work on the last invention needed by humans.
> Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an 'intelligence explosion,' and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make.

— Irving John Good, 1965
If you want a short, easy way to know what AGI means, it's this: Anything we can do, they can do better. They can do anything better than us.
If we screw it up, everyone dies. Yudkowsky et al. are silly to treat it as a certainty — it's not a certain thing, and there's no stopping it at this point — so we should push for and support people and groups who are planning, modeling, and preparing for the future in a legitimate way.