
lblume · last Sunday at 10:42 AM

It has often been claimed, and even demonstrated, that training LLMs on their own outputs degrades quality over time. Still, I find it likely that in domains with well-measurable outcomes, capability gains from RLVR will outweigh the degradation from "slop" when training new models.