This already happens unintentionally, e.g. Wikipedia citation loops, where bad info on a Wikipedia article gets repeated elsewhere, and then the article gets updated to cite that downstream source as its reference.
When LLM-generated content is pervasive, and the training data for new LLMs comes largely from the output of prior LLMs, we're going to be in for some fun. Validation and curation of information are soon going to be more important than they've ever been.
But I don't think there'll be much intentional manipulation of LLMs, given how decentralized the LLM ecosystem already is. It's going to be difficult enough to get consistency with valid info -- manipulating the entire ecosystem with deliberately contrived info is going to be far harder still.