Hacker News

AI Resistance: some recent anti-AI stuff that’s worth discussing

315 points | by speckx | yesterday at 8:19 PM | 310 comments

Comments

tptacek | yesterday at 8:48 PM

I'm glad this person found community, but I think they've been a bit starstruck by concentrated interest. At no point in the next 30 years will there not be an active community of people who "loathe" AI and work to obstruct it. There are people like that about smartphones, the internet itself, even television.

Meanwhile: the ability to poison models, if it can be made to work reliably, is a genuinely interesting CS question. I'm the last person in the world to build community with anti-AI activists, but I'm as interested as anybody in attacks on them! They should keep that up, and I think you'll see threads about plausible and interesting attacks are well read, including by people who don't line up with the underlying cause.

show 10 replies
larodi | yesterday at 8:51 PM

This whole poisoning effort is so misguided that it makes me sad. First of all, there is already enough unpoisoned content to train on; second, much of the new content is produced in an automated manner from the real world, and by workers in large shops in Africa who are paid precisely not to produce garbage.

So yes, you can pollute the good old internet even more, but no, you cannot reverse the arrow of time. And there's already the growing new internet of APIs and public-announcement federations, where all of this matters very little.

show 8 replies
lolcatzlulz | yesterday at 9:03 PM

The easiest way to grow AI resistance is to get Dario Amodei and Sam Altman on TV and let them talk.

show 2 replies
haberman | yesterday at 9:15 PM

I'm old enough to remember a time when the primary hacker causes were fighting DRM, the DMCA, patent trolls, export controls on PGP, etc. All things that made it difficult to use information when you want to. "Information wants to be free."

It's wild to see the about-face. Now it's:

> If [companies] can’t source training data ethically, then I see absolutely no reason why any website operator should make it easy for them to steal it.

It would have been very difficult to predict this shift 25 years ago.

show 7 replies
MisterTea | yesterday at 8:52 PM

My take on AI is that it's a corporate tool used to extract more work from employees while tricking them into thinking they are turbo-charged devs.

These days the tech industry is more moneyed circus than serious effort to improve humanity.

show 1 reply
caesil | yesterday at 9:18 PM

The only thing more cringe than the seething anger in this blog is the technical illiteracy revealed by the earnest belief that any of these "poisoning" attempts will have any negative impact whatsoever on model training.

show 4 replies
Traster | yesterday at 9:45 PM

This is slacktivism. I can kind of understand someone coming to the conclusion that we're replacing working class jobs with compute (caveat, I use working class more broadly than you), and that compute is pure capital. So essentially the capital class are wringing the neck of the working class. I think that, at the very least, is what the capital class is hoping for. If that's what you believe though, slightly poisoning a model is not even close to grappling with what is going on.

jumploops | yesterday at 10:00 PM

I've noticed this trend most heavily on Reddit.

Some communities are very pro-AI, adding AI summary comments to each thread, encouraging AI-written posts, etc.[0]

Many subreddits are AI cautious[1][2], and a subset of those are fully anti-AI[3].

Apart from these "AI-focused" communities, it seems each "traditional" subreddit sits somewhere on the spectrum (photographers dealing with AI skepticism of their work[4], programmers mostly like it but still skeptical[5]).

[0]https://www.reddit.com/r/vibecoding/

[1]https://www.reddit.com/r/isthisAI/

[2]https://www.reddit.com/r/aiwars/

[3]https://www.reddit.com/r/antiai/

[4]https://www.reddit.com/r/photography/comments/1q4iv0k/what_d...

[5]https://www.reddit.com/r/webdev/comments/1s6mtt7/ai_has_suck...

show 1 reply
cortesoft | yesterday at 9:12 PM

I am hoping at some point we can move towards having more nuanced conversations about AI and the role it should play in our world. It seems like currently the only two camps are at either extreme.

Isn't there somewhere between removing AI from the world entirely and just sitting back and letting it take over everything? I want to talk about responsible AI use, and how to mitigate the effects on society, and to account for energy consumption, etc.

show 4 replies
p0w3n3d | yesterday at 9:29 PM

  Resistance is futile 
But to be honest, I totally agree that AI is indeed destroying communities. We can already see YouTube routing all copyright reporting through AI, which can let a malicious agent claim your original video and demonetize it (i.e., steal your money). It happened to great YouTubers like Davie504. There is no way to appeal, since the appeal is also handled by a robot.

show 1 reply
jmmcd | yesterday at 9:02 PM

> Since these companies can’t improve their AI models without fresh data created by human beings

Totally wrong. Self-play dates back to Arthur Samuel in the 1950s and RL with verifiable rewards is a key part of training the most advanced models today.
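To make the "verifiable rewards" point concrete, here is a toy sketch of the rejection-sampling flavor of the idea: sample answers, keep only those that pass a mechanical check, and you get fresh training data with no human author involved. Everything here (the `noisy_solver` stand-in, the sampling budget) is invented for illustration; it is not any lab's actual pipeline.

```python
import random

def noisy_solver(a, b):
    # Stand-in for a model's sampled answer: usually right, sometimes off by one.
    return a + b + random.choice([0, 0, 0, 1, -1])

def collect_verified_examples(n_problems, samples_per_problem=8):
    """Keep only answers that pass the mechanical check (the verifiable reward)."""
    dataset = []
    for _ in range(n_problems):
        a, b = random.randint(0, 99), random.randint(0, 99)
        for _ in range(samples_per_problem):
            guess = noisy_solver(a, b)
            if guess == a + b:  # exact verification, no human judgment needed
                dataset.append(((a, b), guess))
                break
    return dataset

random.seed(0)
data = collect_verified_examples(100)
# Every kept example is correct by construction, so it is safe to train on.
assert all(x + y == ans for (x, y), ans in data)
```

The same loop shape applies whenever correctness is mechanically checkable (unit tests for code, proof checkers for math), which is why "no fresh human data" does not stall this kind of training.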

show 2 replies
cesarvarela | yesterday at 9:04 PM

I wonder if this will have the opposite effect and produce something similar to antibiotic resistance, making AIs better at handling "poison."

jrflo | yesterday at 8:44 PM

Seems a bit counterproductive, if you're concerned about the environmental impact of AI, to trick hyperscalers into burning more compute.

show 2 replies
fuddle | yesterday at 8:45 PM

> The poison fountain itself is hosted on rnsaffn.com

Wouldn't the scrapers just add these sites to a do-not-crawl list?

show 4 replies
graphememes | yesterday at 9:53 PM

They do realize this stuff gets filtered out, right? You're just making someone else's job more lucrative.

show 1 reply
jadar | yesterday at 10:07 PM

Haven't griefing and trolling been a thing on the internet for a while? What makes this unique, just because it's AI instead of whatever else?

show 1 reply
atleastoptimal | yesterday at 11:49 PM

I have a perhaps unique viewpoint among people in tech, at least in the sample I see on HN.

I simultaneously think:

1. AI will be a massively impactful technology on the scale of the industrial revolution or greater

2. The potential upside of AI is enormous, but potential downside is just as big (utopia or certain ruin)

3. Most current AI companies are acting somewhat reasonably, in a game-theory sense, with respect to the deployment of their tech, and aren't especially evil or dastardly compared to Google in the 2000s or social media in the 2010s

4. AI safety is an under-appreciated concern, and many who spend time nitpicking the details are missing the bigger picture of what ASI and complete human obsolescence would look like.

5. No amount of whiny protest, data sabotaging, or small-scale angst or claiming that AI is "fake" or hoping for the bubble to pop is going to have even a marginal effect on the development of AI. It is too powerful and the rewards are too great. If anything it will have an overall negative effect because it will convince labs that their potential role as a utopian, public benefactor will not be appreciated, so will instead align themselves with the military industrial complex for goodwill.

SwellJoe | yesterday at 10:17 PM

Any human-scale "attack", e.g. the made-up Everybody Loves Raymond episode, isn't doing anything to hurt LLM training data. It might even help models detect exaggeration, satire, etc. when read in context and alongside knowledge they have from other sources (like scraping IMDB, and already knowing the cast and plot summary of every episode of Everybody Loves Raymond).

If there is an effective way to poison them, it'll be automated. And, it'll probably rely on an LLM to produce the poison, since it has to look legit enough to pass the quality filtering and classification stage of the data ingestion process, which is also probably driven by an LLM.

One reason small models are getting better is because the training data being used is not just getting bigger, it's getting cleaner and classified more correctly/precisely. "Model collapse" hasn't happened, yet, even though something like half the web is AI slop, because as the models get smarter for human use in a variety of contexts, they also get smarter for use in preparing data for training the next model. There may very well still be risks of a mad cow disease like problem for LLMs, but I doubt a Markov chain website is going to contribute. The models still can't always tell fact from fiction, but they're not being hoodwinked by a nonsense generator.
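As a toy illustration of why a Markov-chain fountain is easy prey: even a smoothed bigram language model trained on a small trusted corpus scores locally incoherent text lower than ordinary prose, and real ingestion pipelines use far stronger models for exactly this kind of perplexity-style filtering. The corpus, tokenization, and threshold idea below are all invented for the sketch.

```python
from collections import Counter
import math

def bigram_model(corpus_tokens):
    """Return a scorer: average add-one-smoothed bigram log-probability per token pair."""
    bigrams = Counter(zip(corpus_tokens, corpus_tokens[1:]))
    unigrams = Counter(corpus_tokens)
    vocab = len(set(corpus_tokens)) + 1
    def logprob(tokens):
        total = 0.0
        for a, b in zip(tokens, tokens[1:]):
            total += math.log((bigrams[(a, b)] + 1) / (unigrams[a] + vocab))
        return total / max(len(tokens) - 1, 1)
    return logprob

# A tiny "trusted" text stands in for a curated reference corpus (toy data).
reference = ("the cat sat on the mat . the dog sat on the rug . "
             "the cat chased the dog .").split()
score = bigram_model(reference)

plausible = "the dog sat on the mat .".split()
shuffled  = "mat the on dog sat the .".split()  # same words, Markov-ish word salad
assert score(plausible) > score(shuffled)  # the garbage scores lower
```

A pipeline would keep documents above some score threshold and drop the rest; scaling the same idea up from bigrams to an LLM is what makes nonsense generators cheap to filter.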

amelius | yesterday at 9:36 PM

This robot chasing wild boars is a preview of what is coming for us:

https://www.youtube.com/shorts/6E2AH43ad7w

lxgr | yesterday at 9:48 PM

> This isn’t exactly the modern equivalent of angry textile workers destroying power looms, but (if you’ll forgive the pun) it’s cut from the same cloth.

And how did that work out for the textile workers?

> The difference here (I hope) is that if enough of us pollute public spaces with misinformation intended for bots, it might be enough to compel AI companies to rethink the way they source training data.

This... seems like an absurd asymmetry in effort on the side of the attacker? At least destroying a power loom is much easier than building one.

Filtering out obvious garbage seems like a completely solved problem even with weak, cheap LLMs, and it's orders of magnitude more efficient than humans coming up with artisanal garbage.

zoogeny | yesterday at 8:54 PM

I often question my own bias on this, because in my interactions with local non-tech people, the adoption of AI has affected pretty much everyone I know, and by my estimation the reaction is majority positive. I live in a fairly rural part of the PNW.

So when I read "People hate what AI is doing to our world." it honestly feels like either I am completely deluded or the author is. It feels like a high school bully saying "No one here likes you" to try to gaslight his victim.

I mean, obviously there are many vocal opponents of AI; I see them on social media, including here on HN. And I hear some trepidation in person as well. But almost everyone I know, from tradespeople to teachers, is adopting AI in some capacity and reports positive uses and interactions.

show 5 replies
sn0n | yesterday at 11:25 PM

Let's just break trust more. Makes sense, amirite? LOL, in what reality does this even make sense??? It's literally just spreading misinformation to people who can't read between the lines, because the tism stick got 'em before they were born.

KronisLV | yesterday at 9:32 PM

I bet it's easy to be against AI, instead of those who use it in inhumane ways (and hold considerably more power). To them, AI is just a tool. If it wasn't AI, it would be buildings full of people and automated devices posting misinformation, outsourcing jobs and pushing for gig economy instead of respectable employment, having understaffed call centres and bad phone trees or knowledgebases that basically tell you to f off, lobbying against workers' rights and regulatory capture and any number of other misaligned motivations.

damnesian | yesterday at 8:47 PM

Thanks to this lovely site, and my distaste for AI, I've found a whole ecosystem of minimalist blogs and artists' personal sites. It's shifting my habits and foci. I don't do socials anymore except forums like this.

Maybe I have slop to thank for it.

alyxya | yesterday at 8:53 PM

This seems like wasted effort, since AI will primarily learn the majority consensus view, not one-off misinformation. AI learns pattern matching for generalization, so garbage data doesn't teach it the wrong patterns; at worst it slows down learning the actual patterns. And when most training compute is spent on curated data and RL rather than random web-scraped data, the impact is likely negligible.

show 3 replies
pj_mukh | yesterday at 8:58 PM

Fortunately, the slop you visibly see online is just the tip of the iceberg. I would guess 80% of AI's real usage hides beneath the surface in back-office documentation consumption, software development, process optimization and automation, investments in new endeavors companies would've never thought possible/financially feasible etc. All of that usage is hidden from this resistance, and possible now with current models (so all this new poisoning is irrelevant). The valuations could go away tomorrow, and it would've still fundamentally changed the nature of the economy.

It doesn't matter that you don't like the slop in a LinkedIn post; ban it. I think the visible slop on our various feeds that is driving people mad is a rounding error for the AI companies. Moreover, it's more a function of the attention economy than the AI economy, and it should have been regulated to all holy hell back in 2015 when the enshittification began.

Now is as good a time as any.

overgard | yesterday at 9:45 PM

Honestly, it's no wonder there's a lot of pushback. We have these irresponsible CEOs talking nonstop about taking people's jobs at a time when people are struggling to make ends meet, all while taking in insane cash infusions. Why wouldn't people loathe AI at this point, when the marketing is "we're going to fuck you over and there's nothing you can do about it"?

show 1 reply
miltonlost | yesterday at 9:09 PM

Anyone conflating "kicking over AI delivery bots" and "throwing a Molotov cocktail at Altman's house" as equally condemnable hasn't actually been forced off the sidewalk by one of these delivery bots. They are dangerous, anti-human ADA nightmares. They shouldn't be allowed on sidewalks, emphasis on walk.

Aboutplants | yesterday at 8:49 PM

Maybe when the entire marketing of AI is fear-mongering and doom (all your jobs are going away!), the end result is something you should have expected from the very beginning.

guywithahat | yesterday at 8:45 PM

I'm very skeptical of his premise. I feel like AI acceptance/resistance depends on which social media site you use. It's antagonistic on Reddit, but sites like X are generally pretty excited about AI. Certainly, in my life, people are accepting of and excited about AI releases and tools, maybe so long as their experience with AI isn't Microsoft enterprise Copilot.

show 3 replies
cdelsolar | yesterday at 9:20 PM

pretty lame, Milhouse

OutOfHere | yesterday at 9:37 PM

These people are dinosaurs, and you know what happens to dinosaurs. Until they meet their conclusion, they are for the moment at risk of becoming terrorists.

mjtk | yesterday at 8:51 PM

AI scares the crap out of me. I worry about what reality will look like in 2-5 years. The rate of change is pretty bonkers.

show 1 reply
simianwords | yesterday at 8:56 PM

Is this just Luddism in the 21st century? I kind of feel bad about the pathetic (mental) state one must be in to take this kind of activism seriously.

show 3 replies
morning-coffee | yesterday at 9:03 PM

This is (yet another reason) why we can't have nice things on the Internet anymore. Sigh.

jonathanstrange | yesterday at 8:48 PM

This is a normal reaction to groundbreaking technology, but such reactions have never had any noteworthy effect in history. There were Maschinenstürmer (machine wreckers) during the 19th-century industrial revolution. There were also violent enemies of cars at the beginning of the 20th century; some were even willing to kill drivers with lethal wire traps.

show 2 replies
appz3 | yesterday at 11:57 PM

[dead]

aizl34 | today at 12:33 AM

[dead]

inquirerGeneral | yesterday at 9:57 PM

[dead]

mine_boi | yesterday at 8:59 PM

[flagged]

show 1 reply
cmdk | yesterday at 9:01 PM

[flagged]

julienreszka | yesterday at 9:42 PM

I always hated Luddites; this is just one more reason to hate them.

roschdal | yesterday at 8:57 PM

I resist AI.

pmarreck | yesterday at 8:43 PM

So is vaccine resistance.

Doesn't mean it's correct, or empirically-based.

show 1 reply
slibhb | yesterday at 8:45 PM

The "Everybody Loves Raymond" bit isn't "misinformation," it's a Norm Macdonald joke.

I find it kind of sad that people are spending time and energy on this. It seems like something depressed people would do. But it's a free country and all that.

show 2 replies
lpcvoid | yesterday at 9:01 PM

Good, every little bit counts. Poison them data wells.

madamelic | yesterday at 8:49 PM

I do understand people's dislike / hatred for AI but I am equally baffled.

I feel like the people who shout "Capitalism sucks, free us from our labor" are the exact same types who hate AI. The machine that could free you from your labor, when harnessed correctly, is the very thing you hate.

The "cyber psychosis" thing is overblown, just like "Teslas ignite their passengers" is. The only reason it gets in the news is that it's trendy. The people getting "infected" would have infected themselves regardless.

Genuinely I think the hatred is overblown by people who have no clue what the actual truth of AI is, something they seem obsessed with.

The only genuine complaint about AI is the data sourcing, which is a problem being addressed by Cloudflare and other platforms that charge a high price for the privilege. That said, those platforms are still selling user data while the users producing the content gain nothing; that part needs to be fixed.

show 25 replies