Hacker News

kevinbojarski · yesterday at 10:00 PM

I wouldn't be so confident that poisoning won't work. https://www.reddit.com/r/BrandNewSentence/comments/1so9wf1/c...


Replies

phainopepla2 · yesterday at 10:31 PM

LLM poisoning is about getting bad data into the training set. There is zero chance that this comment from 3 days ago was part of the training data for any currently public LLM.

Assuming the LLM actually got its answer from that comment, it did so via a web search at query time.
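
To make the training-time vs. query-time distinction concrete, here is a minimal, hypothetical Python sketch (not from either comment; the Document type, web_search stub, and URLs are all invented for illustration). It shows why a comment posted after the training cutoff can still appear in an answer: the model's weights are fixed, and fresh text only reaches it through retrieval.

```python
# Hypothetical sketch: frozen training knowledge vs. live web retrieval.
from dataclasses import dataclass

@dataclass
class Document:
    url: str
    text: str
    fetched_live: bool  # True if pulled by a web search at query time

# Stand-in for knowledge baked in at training time (months old).
TRAINING_CUTOFF_DOCS = [
    Document("https://example.com/old-page", "fact from before the cutoff", False),
]

def web_search(query: str) -> list[Document]:
    # Stand-in for a live search tool; a real system calls an external API here.
    return [Document("https://www.reddit.com/r/BrandNewSentence/...",
                     "comment posted a few days ago", True)]

def gather_context(query: str, use_search: bool) -> list[Document]:
    """Return the documents the model can actually condition on."""
    context = list(TRAINING_CUTOFF_DOCS)   # frozen at training time
    if use_search:
        context += web_search(query)       # fresh text arrives only this way
    return context

if __name__ == "__main__":
    for doc in gather_context("what did that reddit comment say?", use_search=True):
        print(doc.fetched_live, doc.url)
```

Under this sketch, "poisoning" would mean tampering with TRAINING_CUTOFF_DOCS before the model is trained, whereas the behavior described in the thread only requires influencing what web_search returns.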

Legend2440 · yesterday at 10:46 PM

Whatever's happening here, it's not training data poisoning.

Models are retrained only every few months at best; it is not possible for a comment made a few hours earlier to be in the training data yet.