
alyxya · yesterday at 9:50 PM

All the examples you gave are chatbots with web search integrated. Are you sure those chatbots didn't just reference false information they found in web searches? That's fundamentally different from poisoning the training of AI models.

> The problem was that the experiment worked too well. Within weeks of her uploading information about the condition, attributed to a fictional author, major artificial-intelligence systems began repeating the invented condition as if it were real.

This seems to imply the poisoning affected the web search results rather than the model itself, since it takes months for new data to make it into a trained base model.