
randomdrake · today at 3:49 PM · 0 replies

I wonder if this extends to training models on new content as well. Are we creating a feedback loop of information consumption and training, in which models being trained are more likely to pick up on and reference content created by themselves or by other LLMs than content created by humans?
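
A toy sketch of that loop (my own illustration, not something from the thread): treat the "model" as a Gaussian fit to its training corpus, and assume the generator is slightly mode-seeking, like low-temperature decoding. If each generation trains on the previous generation's output instead of fresh human data, the distribution's tails erode, which is the basic mechanism behind what's been called "model collapse":

    import numpy as np

    rng = np.random.default_rng(0)
    TEMPERATURE = 0.9  # assumed mode-seeking bias of the generator

    # Generation 0: "human-written" data, wide and varied.
    corpus = rng.normal(loc=0.0, scale=1.0, size=10_000)

    for gen in range(10):
        # "Train": fit the model to whatever corpus it was given.
        mu, sigma = corpus.mean(), corpus.std()
        print(f"gen {gen:2d}: std = {sigma:.3f}")
        # Next corpus is model output, not fresh human data,
        # sampled with a slight preference for high-probability content.
        corpus = rng.normal(mu, TEMPERATURE * sigma, size=10_000)

Running this, the standard deviation shrinks by roughly a factor of 0.9 per generation, so diversity collapses even though each individual "model" faithfully fits its data. Real LLM training is obviously far more complicated, but it shows why a corpus increasingly made of model output could bias future models toward LLM-generated content.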