> Some of our possible futures are grim, but manageable. Others are downright terrifying, in which large numbers of people lose their homes, health, or lives. I don’t have a strong sense of what will happen, but the space of possible futures feels much broader in 2026 than it did in 2022, and most of those futures feel bad.
Well, yes, the entire world order is currently being upended. The USA is abandoning its place in the global order and becoming isolationist (and soon an authoritarian single-party state). The Petrodollar is either dying or being converted to a Northwestern-Hemisphere-Petrodollar, with the Yuan in the ascendancy (so there goes the strong economy powering VC money). China, the EU, and Russia are the new global leaders. The Middle East and its oil are being taken over by Israel. Taiwan will fall to China, and thus the whole technological world follows. Countries that are friendly with China will have good renewable tech; countries that aren't will be doubling down on oil and coal. Fresh water will become as valuable as oil. A world war will decimate global productivity for decades. Most of the democracies in the world will be gone by the end of the century.
But none of that has to do with AI.
Bad things will always happen in the world. Good things will happen too. But you're only focusing on the bad. That's not good for your health, or others'.
> Refuse to insult your readers: think your own thoughts and write your own words. Call out people who send you slop. Flag ML hazards at work and with friends. Stop paying for ChatGPT at home, and convince your company not to sign a deal for Gemini. Form or join a labor union, and push back against management demands that you adopt Copilot [..] Call your members of Congress and demand aggressive regulation which holds ML companies responsible [..] Advocate against tax breaks for ML datacenters. If you work at Anthropic, xAI, etc., you should think seriously about your role in making the future. To be frank, I think you should quit your job.
He's freaking out, and rejecting AI completely, out of fear. And that's okay; we all get a little freaked out sometimes. But please try not to freak other people out as well. Just because you are scared of something doesn't mean the fear is justified or realistic.
What's going to happen now is the same thing that happened during the pandemic. A bunch of irrationally fearful people will decide that the only way they can cope with their fear is to reject the basis of it. COVID deniers and anti-maskers/anti-vaxxers were essentially so terrified of losing control that they refused to acknowledge it. They instead went full-bore in the opposite direction, defying government mandates and health warnings, in order to try to regain some semblance of control over their lives. And it did not go well.
That's what's now gonna happen with AI deniers. They're so freaked out about AI that they're going to reject it en masse, not because it is actually doing anything to them, but because they're afraid it might. And the result is going to be similar: extreme people do extreme things, and the outcome isn't good. So please try to rein in the doomerism a bit, for all our sakes.