logoalt Hacker News

nextosyesterday at 9:59 PM2 repliesview on HN

You have a point but current LLM architectures in particular are very fragile to data poisoning [1,2].

[1] https://www.anthropic.com/research/small-samples-poison

[2] https://arxiv.org/abs/2510.07192


Replies

causaltoday at 5:39 AM

No idea why you're being downvoted. We can't yet even demonstrate that LLMs will withstand training on their own output as they pollute the Internet.

ahazred8tayesterday at 10:18 PM

Yes, there are quite a few anti-AI projects. https://old.reddit.com/r/badphilosophy/wiki/index