Hacker News

AI Will Be Met with Violence, and Nothing Good Will Come of It

179 points | by gHeadphone | today at 9:16 AM | 288 comments

Comments

zkmon today at 2:18 PM

History has shown that an alien invasion can only happen because of the internal competition and in-fighting of the natives. Colonial empires proved it only a few centuries back. The invading alien powers are fuelled by the inviting natives.

AI (and computing technology in general) is an alien, as it defies all worldly norms. It can have exact identical copies, can replicate, can exist everywhere, can communicate across huge distances instantly, can do huge amounts of work instantly, has no physical mass of its own, no respect for time, distance, mass, or mental effort; it is not a living thing but can think... just the perfect alien-creature qualities.

Why are they allowed to invade Earth? Business goals, of course. To get a temporary edge over the competitors, until they acquire the same. But once everyone has the same AI, there is no going back. AI has established itself through the weak channels that are filled with greed, that can be bribed with toys (a business edge) in return for the keys to dominance over the human race.

Avicebron today at 1:11 PM

I feel like if people keep using AI as a blanket term for "inequality" and "inequality accelerants", then yeah, it's "AI"'s fault, when in reality the two need to be decoupled.

"Gleefully taking away people's livelihoods will be met with violence, and nothing good will come of it." - fixed.

softwaredoug today at 1:39 PM

Highly recommend people learn the history of the Industrial Revolution. I recently discovered the Industrial Revolutions Podcast[1] and have been enjoying it. What's happening today isn't unprecedented; the pace of change IS similar to some periods of the Industrial Revolution.

For example, the spinning jenny, overnight, basically put an entire craft industry of hand-spinning into question. Probably more dramatically than anything Claude Code ever did.

It took a lot, including two world wars, to reach the brief period of normalcy post-WW2 - probably the exception, not the rule.

1 - https://industrialrevolutionspod.com/

ben8bit today at 9:48 AM

A lot of the magic of LLMs, I think, has been tarnished by these CEOs and other FAANG companies. It might have been a far more interesting world if they didn't bring "AI" or "AGI" into the conversation in such a politicized way.

ahjustacommente today at 1:26 PM

I think a lot of HN readers, and a lot of the first-world, law-abiding dwellers in this and recent threads, forget to think.

Violence is not a panacea, but it is often the outlet.

Yes, we all (the majority of sane people) know that violence is not the answer, yada yada yada. Doesn't matter. It will happen anyway. Saying "it shouldn't happen, it doesn't solve X" will not stop it from becoming an outlet for frustrated people.

conartist6 today at 12:47 PM

I have said repeatedly that when AI eliminates the need for human creativity and work, the only thing left as the natural domain of humans will be bloodshed.

The fact that we're using AI killer robots to wipe each other out in droves doesn't bode well for that future, does it?

dwroberts today at 9:58 AM

> But this is not the way. This is how things devolve into chaos.

Meanwhile

https://www.reuters.com/world/middle-east/how-many-people-ha...

> U.S.-based rights group HRANA said 3,636 people have been killed since the war erupted. It said 1,701 of those were civilians, including at least 254 children.

(Mentioning this specifically because we know the DoD is using AI)

tokioyoyo today at 10:27 AM

A bit of a tangent, but is anyone working on something for the "what if AI pans out?" world? I'm not sure how to explain it, but if a lot of jobs get displaced by AI in the next 5 years, we'll obviously have big problems. Is anyone working on analyses, outcomes, strategies, etc.? I think about it a lot, and it would be cool to help and contribute.

markus_zhang today at 2:30 PM

There is nothing new about this. I just hope that when people scream "unions" they actually expect to do the things the early unions did, not just be armchair unionists.

But individuals can't fight the trend. Might as well reduce costs/debts and prepare to go into the mountains for a few weeks once SHTF.

AvAn12 today at 2:27 PM

I’m not sure anyone needs to break anything. I’m not sure this is a commercially viable business once all of the VC and foreign funding scaffolding goes away.

taffydavid today at 10:19 AM

> It hit Horsfall in the groin, who, nominative-deterministically, fell from his horse.

Lovely writing. I once knew someone whose surname was HorsFELL, and now I wonder if they were related.

nacozarina today at 10:01 AM

Humans have been successfully using violence for conflict resolution for tens of thousands of years. We'll be fine; it's not our first rodeo.

MrOrelliOReilly today at 9:55 AM

> People hate AI so much that they are prone to attribute to it everything that’s going wrong in their lives, regardless of the truth. That’s why they mix real arguments, like data theft, with fake ones, like the water stuff. Employers do it, too. Most layoffs are not caused by AI, but it’s the perfect excuse to do something that’s otherwise socially reprehensible.

Pertinent quote. A lot of AI discourse goes in circles trying to evaluate the truth of every individual complaint about AI. Obviously it's good to ensure claims are factual! But that misses the broader point: people are resistant to AI, often out of fear, and are grasping for strategies to exert control. Or at least that's my read of it.

Refuting individual claims won't make a difference if the underlying anxieties aren't addressed (e.g., if I lose my job will I be compensated, will we protect ourselves against x-risk, etc).

spaceman_2020 today at 9:45 AM

The worst part is that AI's first casualties are jobs that no one ever asked it to kill.

AI is killing writing, music, art, and coding. I've done all of these voluntarily, because I simply enjoyed them.

Meanwhile, the parts of my existence that I actually hate - dealing with customer support, handling government forms, doing taxes - are far from being automated by AI.

Look at Suno. Fantastic tool, but why was capital needed to make music generation so cheap that no musician could ever compete with it? Did the world really wake up one day and conclude, "wait, we're spending too much on musicians"?

Seems like a complete misallocation of capital, if I'm perfectly honest.

mft_ today at 1:37 PM

Inequality was growing hugely (and still is) before the recent advent of LLMs.

Given the slow-burning but growing resentment against the people who are profiting from this inequality (popularly "the billionaires", but in reality a broader group), I wonder to what extent they are supporting the anti-AI message as deflection.

Because in reality, many lower-paid jobs are totally safe from this generation of AI (nurses, care workers, builders, plumbers - essential skilled manual workers), whereas language-based mid-level jobs are hugely at risk.

So if there's an inequality-driven backlash, it should be directed not at AI but at the real causes. In practice, though, when swathes of largely irrelevant mid-level management, marketing, and HR drones lose their jobs to Claude 5.7, they are the ones likely to attack the datacenters. Not that it will help.

mrweasel today at 10:01 AM

I really should have gone into sewage work.

deyiao today at 10:09 AM

They say cars replaced carriages but created drivers, so there was no net job loss. They say AI will do the same: destroy some jobs, create others. But bro, the automobile wiped out 95% of the world's horses. And this time, what AI is replacing is humans.

ndsipa_pomu today at 12:10 PM

I eagerly await the Butlerian Jihad

phyzome today at 1:11 PM

"Nothing that Altman could say justifies violence against him."

Nothing, really?

I think people are aware that speech can be an act, and that some violent acts must be resisted with reciprocal violence. (That's why we have "incitement to violence" as a limitation on free speech, for instance.)

Are we at that point? Maybe not. But I think it's a poor imagination that says it can never happen.

amelius today at 9:55 AM

Yes, the moment they put 8-foot-tall robots in the streets, I am fetching my black spray paint can.

Hamuko today at 10:35 AM

One thing I'm kind of worried about is what happens to social trust once more and more LLM output floods the Internet. Division in society, particularly in the United States, already seemed to be increasing at a rapid pace as social media became more and more relevant, and I'm afraid that LLMs are just going to add fuel to an already burning fire.

I'm less concerned about AI becoming Skynet and killing humans, and more concerned about AI making the world so miserable that we'll be killing ourselves and each other.

tao_oattoday at 9:39 AM

The author seems to have some cognitive dissonance. For a piece saying that you cannot justify violence, there sure seems to be an awful lot of justifying violence in here.

thenthenthen today at 1:26 PM

So like sharing bikes?

tsunamifury today at 9:43 AM

We are in an inverse innovator's dilemma.

The automator's dilemma: the labor that is removed from production due to automation can no longer sustain the markets that the automator was trying to make more efficient.

By optimizing only the production half of the economy and not the consumption half, you end up breaking the market.

bluegatty today at 9:57 AM

'Rogue superintelligence' is the most ridiculous sci-fi nonsense of the AI hype, worse even than the pro-AI hype.

AI will be 'dangerous' because humans will use it irresponsibly, and that's all of the risk:

- giving it too much trust, being lazy, improper guardrails, and accidents
- leveraging it for negative things (black hats, military targeting)
- states and governments using it as an instrument of control, etc.

That's it.

Stop worrying about the ghost in the machine and start worrying about crappy and evil businesses and governing institutions.

Democracy, vigilance, laws, responsibility are what we need, in all things.

ares623 today at 10:10 AM

All this, so people like us can have an easier time doing a job that wasn’t that hard in the first place, and in reality was actually quite comfortable, for employers who are promising to lay us off, for productivity gains that aren’t even measurable.

deadbabe today at 2:09 PM

I won’t believe AI is truly being met with violence until I see one of these AI tech billionaires get shot multiple times by a person with nothing left to lose. Until we reach that point, it means people still have hope.

jstanley today at 9:36 AM

> Every time I hear from Amodei or Altman that I could lose my job, I don’t think “oh, ok, then allow me pay you $20/month so that I can adapt to these uncertain times that have fallen upon my destiny by chance.” I think: “you, for fuck’s sake, you are doing this.” And I consider myself a pretty levelheaded guy, so imagine what not-so-levelheaded people think.

Conversely, The Loudest Alarm Is Probably False[0]. If the idea that you are a pretty levelheaded guy pops up so frequently, consider that it might be wrong. Especially if you are motivated to write blog posts about violence in response to technology you don't like. Maybe you're just not as levelheaded as you think and that could explain the whole thing?

[0] https://www.lesswrong.com/posts/B2CfMNfay2P8f2yyc/the-loudes...

booleandilemma today at 2:13 PM

I feel like we should start organizing somehow. As programmers, but more importantly, as people. We should start now before the ruling class has no more need of us and it's too late.

If anyone knows of anything already happening please let me know.

I think it needs to be a grassroots thing because our government's strategy seems to be "let the shit hit the fan and do nothing about it".

balamatom today at 10:09 AM

>And then, and I’m sorry to be so blunt, then it’s die or kill.

The people ready to die or kill for the AI: can you already imagine what they are going to be like?

gaigalas today at 1:06 PM

One weirdo is enough to predict widespread violence?

I'm not convinced.

The idea that people will revolt, replaying the Luddites' history, has been floated a lot. It's used to diminish all kinds of AI skepticism by framing it as coming from backwards, violent people who don't understand progress. That's the preferred bucket of AI fanboys: frame any disagreement as unreasonable rage.

I think the AI companies want a dumb, generally violent popular movement to sprout against AI. On paper, it would be great for them. So far, they have failed to encourage one.

spwa4 today at 12:15 PM

This article is bullshit. It is very easy to break a data center, and it's quite obvious how to do it. Yes, attacking the central building with the actual equipment is not a good way to do it. Figure it out, or rather: please don't figure it out.

The rest of the article is equally short-sighted and plain wrong.

shevy-java today at 1:13 PM

And so it begins ...

Skynet 4.0.

But shit.

stavros today at 1:01 PM

OK sure, AI is terrible, but when has humanity ever said "yeah OK fine, we'll put this particular genie back in the bottle"?

The question is "what do we do now?".

philwelch today at 9:57 AM

What a load of pointless handwringing.

lapcat today at 10:41 AM

> Perhaps the most serious mistake that the AI industry made after creating a technology that will transversally disrupt the entire white-collar workforce before ensuring a safe transition

This was not an oversight. To the contrary, it was the goal. Technological feudalism, with people like Altman and Musk becoming the Lords of the world.

> Most layoffs are not caused by AI, but it’s the perfect excuse to do something that’s otherwise socially reprehensible.

This illustrates my previous point. What they're doing is not a mistake.

> For what it’s worth, the New Yorker piece I’m referring to, which Altman also referred to in his blog post, made me see him more as a flawed human rather than a sociopathic strategist. My sympathy for him will probably never be very high, but it grew after reading it.

It feels like we read two different articles.

roschdal today at 9:49 AM

Yes. AI is evil.

ArchieScrivener today at 9:43 AM

This is nonsense, promoted to the top of the front page without any comments. How about all the rock stars killed over the years, or grocery store clerks shot and stabbed to death? EVERYTHING is met with violence, because that's the nature of aggression: no matter the impetus, it doesn't require a justifiable reason, only belief in the outcome of its use.

Sam Altman having a Molotov cocktail thrown at his house right after Ronan wrote a very long and detailed report on his shady personality isn't just coincidence, and likely isn't organic. Sam needs to be viewed as sympathetic; thank goodness for such a moment, where no one was hurt and nothing was actually damaged.

jollymonATX today at 10:01 AM

Such a cowardly way to write, really. Just own your intentions and direction. No need for handwaving theater and CYA when the spooky superintelligent LLM is in the room with you.

trolleski today at 1:22 PM

The people who run AI (Altman, Thiel, etc.) welcome the violence. In fact, I strongly believe they are already planning for it, and yes, you are a target.