Hacker News

parentheses · today at 4:30 PM

Reading only the abstract: LLMs prefer output of their own generation over that of humans or even other models.

This is a very good reason to avoid using model-generated data to train future models. Continuing to do so would deepen this bias, essentially pushing society to reshape its own writing with LLMs just to increase engagement. This feels like a form of enshittification that touches not just one product but all of society.