Hacker News

tptacek | yesterday at 8:48 PM | 10 replies

I'm glad this person found community, but I think they've been a bit starstruck by concentrated interest. At no point in the next 30 years will there not be an active community of people who "loathe" AI and work to obstruct it. There are such communities for smartphones, the Internet itself, even television.

Meanwhile: the ability to poison models, if it can be made to work reliably, is a genuinely interesting CS question. I'm the last person in the world to build community with anti-AI activists, but I'm as interested as anybody in attacks on these models! They should keep that up, and I think you'll see that threads about plausible and interesting attacks are well read, including by people who don't line up with the underlying cause.


Replies

vidarh | yesterday at 9:39 PM

> the ability to poison models, if it can be made to work reliably

Ultimately, it comes down to something like the halting problem: if there's a mechanism that can be used to alter the measured behaviour, then the system can change its behaviour to take that mechanism into account.

In other words, unless you keep the poisoning attack strictly inaccessible to the public, the mechanism used to poison will also be possible to use to train models to be resistant to it, or train filters to filter out poisoned data.

At least unless the poisoning attack destroys information to a degree that it would render the poisoned system worthless to humans as well, in which case it'd be unusable.

So either such systems will be too insignificant to matter, or they will only work for long enough to be noticed, incorporated into training, and fail.

I agree it's an interesting CS challenge, though, as it will certainly expose rough edges where the models and training processes work sufficiently differently from humans to allow unobtrusive poisoning for a short while. Then it'll just help us refine and harden the training processes.
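The dynamic described above can be sketched with a toy example (all names here are hypothetical, not a real attack): if a poisoning transform is public, a defender can run the same transform to generate labeled data and train a trivial filter against it.

```python
import random

# Hypothetical "poison": swap some Latin letters for Cyrillic look-alikes,
# which render identically to humans but tokenize differently for a model.
HOMOGLYPHS = {"a": "\u0430", "e": "\u0435", "o": "\u043e"}

def poison(text, rate=1.0, seed=0):
    """Apply the (toy) poisoning transform to a piece of training text."""
    rng = random.Random(seed)
    return "".join(
        HOMOGLYPHS[c] if c in HOMOGLYPHS and rng.random() <= rate else c
        for c in text
    )

def looks_poisoned(text):
    """The defender's filter: trained/built from the same public transform,
    it simply flags text containing any of the known substitute glyphs."""
    return any(g in text for g in HOMOGLYPHS.values())

clean = "a clean sentence about geese"
dirty = poison(clean)
assert not looks_poisoned(clean)
assert looks_poisoned(dirty)
```

The point of the sketch is the asymmetry vidarh describes: once `poison` is public, building `looks_poisoned` is mechanical, so the attack only works until it is noticed and folded into the data pipeline.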

show 5 replies
suzzer99 | yesterday at 9:26 PM

A few years ago, we came up with the name of a fake game on here and made a bunch of comments about it, in an attempt to poison future AI models. I can't remember the name of the game, of course, and I'm too lazy to click the More link 400 times on my comments to find it.

show 2 replies
izend | yesterday at 8:56 PM

I would bet Chinese models will be much harder to poison, given that the Chinese populace is much more pro-AI than the West.

show 5 replies
orbital-decay | yesterday at 8:53 PM

SEO has happily mutated into LLM training and agentic search optimization, if that's what you're wondering.

drcode | yesterday at 9:19 PM

> At no point in the next 30 years will there not be an active community of people who "loathe" AI and work to obstruct it.

Then I have good news for you: if humanity goes extinct in the next few years because of unaligned superintelligence, there actually will no longer "be an active community of people who loathe AI and work to obstruct it."

show 4 replies
i_love_retros | yesterday at 9:33 PM

> I'm the last person in the world to build community with anti-AI activists

Are you making big money from the hype?

cyanydeez | yesterday at 10:02 PM

If you can get 70 million people to vote for Trump, you can poison models.

rockskon | yesterday at 9:33 PM

I am so very tired of people who compare AI to smartphones or the Internet at large.

There were never such wide-scale and, above all, centralized efforts to coerce and shame people into using the Internet or smartphones in spite of their best efforts to avoid them.

show 3 replies
GaryBluto | yesterday at 9:37 PM

> At no point in the next 30 years will there not be an active community of people who "loathe" AI and work to obstruct it.

I can guarantee there will be at least a few small ones, especially in the wake of the Sam Altman attacks and the "Zizian" cult. I doubt they'll be very organized, and they will ultimately fail, but unfortunately at least a few people will die (and have already died) because of these radicals.

https://www.theguardian.com/technology/2026/apr/18/sam-altma...

https://edition.cnn.com/2026/04/17/tech/anti-ai-attack-sam-a...

https://www.theguardian.com/global/ng-interactive/2025/mar/0...

show 1 reply
jimmaswell | yesterday at 11:28 PM

[flagged]

show 4 replies