I wouldn't be so confident that poisoning won't work. https://www.reddit.com/r/BrandNewSentence/comments/1so9wf1/c...
Whatever's happening here, it's not training data poisoning.
Models are retrained only every few months at best; a comment made a few hours earlier cannot possibly be in the training data yet.
LLM poisoning is about getting bad data into the training set. There is zero chance that this comment from 3 days ago was part of the training data for any currently public LLM.
Assuming the LLM actually got its answer from that comment, it came from a web search performed at inference time, not from its training data.
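The distinction is worth making concrete: with web search, the fresh comment is fetched and pasted into the model's context window at answer time; nothing about the model's weights changes. A minimal sketch of that flow (function and variable names here are hypothetical, not any vendor's API):

```python
# Sketch of inference-time retrieval: the model never "learns" the fetched
# text; it is simply injected into the prompt alongside the question.

def build_prompt(question: str, retrieved_snippets: list[str]) -> str:
    """Assemble a prompt that includes freshly fetched web text as context."""
    context = "\n\n".join(
        f"[source {i + 1}] {s}" for i, s in enumerate(retrieved_snippets)
    )
    return (
        "Answer using only the sources below.\n\n"
        f"{context}\n\n"
        f"Question: {question}\n"
    )

# A comment posted hours ago can show up here even though no retraining
# has happened -- it arrived via search, not via the training set.
snippets = ["Reddit comment posted a few hours ago: ..."]
prompt = build_prompt("What does the comment claim?", snippets)
```

This is why a days-old comment influencing an answer points to retrieval rather than poisoning: poisoning requires the text to survive into a future training run, while retrieval only requires it to be indexed by a search engine.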