People are basing their entire world view on not understanding the nature of exponential phenomena.
Exponential phenomena only begin in a medium that holds the potential for that phenomenon, and they necessarily consume that medium.
That is, exponential phenomena are inherently self-limiting. The bacteria reach the edge of the petri dish. When all the nitroglycerin is broken up, the dynamite is done exploding.
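The petri-dish dynamic is just logistic growth. A minimal sketch (all numbers here are made up for illustration): growth looks exponential while the medium is plentiful, then flattens as the process consumes its own carrying capacity.

```python
def logistic_step(population, growth_rate, capacity):
    """One discrete step of logistic growth: near-exponential while the
    medium is abundant, flattening as (1 - population/capacity) -> 0."""
    return population + growth_rate * population * (1 - population / capacity)

pop = 1.0          # starting population (arbitrary units)
K = 1000.0         # carrying capacity: the "petri dish"
history = [pop]
for _ in range(60):
    pop = logistic_step(pop, growth_rate=0.5, capacity=K)
    history.append(pop)

# Early steps grow by ~1.5x each (exponential-looking);
# by the end, growth has stalled just below the capacity K.
```

The early part of the curve is indistinguishable from a pure exponential, which is exactly why extrapolating from it is misleading.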
That doesn't mean exponential phenomena aren't dangerous -- of course they can be. I mentioned dynamite, after all. And there are nukes.
But it's really far from "AI is improving exponentially now" to "AI will destroy everyone".
I see AI companies consuming cash at unsustainable rates. Since their motive is profit, this is necessarily limiting. Cash, meanwhile, is a proxy for actual resources, which have their own, non-exponential limitations -- employees, data centers, electricity, venture capitalists with capital, etc.
AI isn't going to keep improving exponentially -- it can't. Like every other exponential phenomenon, it will consume the medium of potential that supports it (and rather quickly).
It's often constructive to consider the edges and corners of the space of possible positions, to understand the weaknesses of the various arguments.
For this case, imagine that you're an accelerationist, and you want the AI to take over, kill everyone, and usher in a new AI-only age for the planet, and later the universe.
How disappointed are you as this person? It's bottlenecks everywhere. Communities don't want to allow datacenters to be built. You literally want to bring nuclear power plants online just to run a few DCs, and those historically take 10+ years to permit and build. There's not enough AC switchgear and transformers to send power into the DCs, even if you have the power. Chip prices are skyrocketing, and you have to sign a 3-4 year contract to get RAM now.
And meanwhile, the AI doesn't have many robot bodies. Tesla might put some feeble robots into mass production in a few years, but humans can knock those over with a stick into a puddle of water and it's over for that robot. The nuclear arsenals are all still in bunkers and submarines requiring two guys to physically turn keys, and the computers down there are so old they use 8" floppies.
Sure, there's some good progress on autonomous weapons, but a few million self-destructing AI drones built by human hands isn't going to cut it.
So as a hypothetical person hoping that AI destroys everything, you'd be rather impatient, I think, unless you think the AI can trick humanity into destroying itself relatively soon.
> People are basing their entire world view [on things getting worse because their leadership is abandoning them or actively working against their interests]
We understand hard times and are willing to work together to solve problems, but not when leadership is actively harmful.
Fixed that for you.
Agreed. But many said the same thing about Moore's Law or its equivalents in 1985, 1995, 2005, and 2015, and yet the pace of core hardware development has been relentlessly exponential. I keep thinking we must be approaching some kind of limit (and surely we must be!) but I've learned not to bet on it.