It's simple. It's because AI is the scariest technology ever made.
Human intelligence has proven itself capable of doing a lot of scary things. And AI research is keen on building ever more capable and more scalable intelligence.
By now, the list of advantages humans still have over machines is both finite and rapidly diminishing. If that doesn't make you uneasy, you're in denial.
Modern discourse happens on social media where fear and outrage drive engagement, which drives virality. We have become convinced in a short amount of time that AI is going to take all the jobs and eventually kill us all because that's what people click on.
Any voices or studies that present the case for "useful technology that will improve productivity and wages while not murdering us" don't get clicked on or read.
I don't know, I think nuclear weapons are scarier. They're also probably a useful parallel: they're so dangerous that we coined the term "mutually assured destruction", and everyone recognized the stakes well enough that they've only ever been used in one war.
I see the flood of PR from AI firms as an attempt to make sure we don't build the appropriate safeguards this time around, because there's too much money to be made.
Most humans can do more than plagiarize text. But let's hype up the clankers before the IPOs.
> AI is the scariest technology ever made
Well, it's a good thing that all we've managed so far is a large language model instead.
Machines still need planetary-scale production pipelines with human operators everywhere to achieve reproduction at scale. Even taking the paperclip-optimizer overlord as a serious scenario, it's still several orders of magnitude less likely than humans letting the most nefarious individuals create international conflicts and commit genocide, to say nothing of destroying vast swathes of the biosphere that makes human existence possible.
That is, alien invasion and a giant meteor are also plausible scenarios, but at some point one has to prioritize threats by likelihood, and generally speaking it makes more sense to put more weight on "advanced operation already underway" than on "not excluded by currently known, scientifically realistic what-ifs".
There was a lot of FUD in the mainframe era about computers being called "electronic brains" and fears of them taking people's jobs, because the ignorant public mistook their lightning-fast arithmetic skills for intelligence. Many did lose their jobs as digital record keeping, computerized accounting/ERP, and assembly-line robotics became cost effective, but at no time did the "electronic brain" become intelligent.
There's a lot of FUD today about LLMs being sapient because the ignorant public mistakes their complex token prediction skills for intelligence. But it's just embarrassing to see people making that mistake on a forum ostensibly filled with hackers.
The world we live in is a construct, not a natural outcome. Even if we take your premise at face value, that our success as a species is only because of advantages over others, what's to say that "intelligence" is that advantage? What's to say that we don't use our advantages to reconstruct a world that works in a way that doesn't advantage intelligence over all else?
And on intelligence specifically: even amongst the human race, we all know smart people who are abject failures, and idiots who are wildly successful. Intelligence is vastly overrated.
"Why be afraid of nukes it's not like they WANT to blow up"
This is untrue. What is being diminished is the value of humans doing repetitive or uncreative tasks.
Many have built their careers on that kind of work in the past, and yes, they are threatened, but that kind of work is inherently less collaborative and more vocational.
AI, the way you are describing it, has not been invented yet. It is a fiction.
What is called "AI" today is an extremely vague marketing term being applied to various software technologies which are only dangerous because humans are dangerous. Nuclear & chemical weapons are also "very scary" but only because the humans who might decide to use them in fits of insanity are scary.
I'm not in the slightest bit uneasy about "AI" itself right now, because as I said, the AI of sci-fi has not yet been invented…and seems unlikely to be in any of our lifetimes. (Not throwing shade on clever researchers. We also don't have working FTL travel, though plenty of scientists speculate on how such an engine might be built.)