You have a point, but current LLM architectures in particular are very fragile to data poisoning [1,2].
[1] https://www.anthropic.com/research/small-samples-poison
[2] https://arxiv.org/abs/2510.07192
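A toy way to see the "small number of poisoned samples" effect from [1] is a memorizing learner like 1-nearest-neighbour: a single planted point at an out-of-distribution "trigger" location controls predictions near that trigger no matter how much clean data exists. This is a minimal sketch, not the papers' setup; the blob positions, trigger location, and `predict_1nn` helper are all made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Clean 2-D training data: two well-separated Gaussian blobs.
X0 = rng.normal([-2.0, 0.0], 0.5, size=(1000, 2)); y0 = np.zeros(1000)
X1 = rng.normal([+2.0, 0.0], 0.5, size=(1000, 2)); y1 = np.ones(1000)

# One poisoned example: a "trigger" point in empty feature space,
# labeled class 0.
X = np.vstack([X0, X1, [[10.0, 10.0]]])
y = np.concatenate([y0, y1, [0.0]])

def predict_1nn(X_train, y_train, query):
    # 1-nearest-neighbour: return the label of the closest training point.
    d = np.linalg.norm(X_train - query, axis=1)
    return y_train[np.argmin(d)]

# Any query near the trigger is hijacked by the single poisoned point,
# regardless of the 2000 clean examples; clean behaviour is untouched.
print(predict_1nn(X, y, np.array([9.8, 10.1])))  # -> 0.0
print(predict_1nn(X, y, np.array([2.1, 0.0])))   # -> 1.0
```

The point of the sketch is only that the amount of poison needed here is a constant (one point), not a fraction of the training set, which echoes the papers' headline finding.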
No idea why you're being downvoted. We can't yet even demonstrate that LLMs will withstand being trained on their own output as that output pollutes the Internet.
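The train-on-your-own-output worry can be sketched with a toy stand-in for an LLM: repeatedly fit a Gaussian to samples drawn from the previous generation's fit. Sampling noise compounds across generations and the fitted distribution typically drifts and narrows ("model collapse"). This is a hedged illustration only; the `collapse_demo` name, sample sizes, and generation count are arbitrary choices, not anything from the linked work.

```python
import numpy as np

rng = np.random.default_rng(0)

def collapse_demo(n_samples=20, generations=1000):
    # Generation 0: "real" data from a standard normal.
    data = rng.normal(loc=0.0, scale=1.0, size=n_samples)
    stds = [data.std()]
    for _ in range(generations):
        mu, sigma = data.mean(), data.std()      # fit the "model"
        data = rng.normal(mu, sigma, n_samples)  # train next gen on its own output
        stds.append(data.std())
    return stds

stds = collapse_demo()
# With a small sample per generation, the fitted spread tends to shrink
# toward zero over many generations.
print(f"std at gen 0: {stds[0]:.3f}, std at gen 1000: {stds[-1]:.3f}")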
Yes, there are quite a few anti-AI projects. https://old.reddit.com/r/badphilosophy/wiki/index