It's like those sequences of images where we ask an LLM to reproduce the same image exactly, over and over, and after a few dozen iterations the output degenerates into a grotesque collapse. The same thing happens with text and code. I call this "semantic collapse".
I conjecture that after some years of LLMs reading a SharePoint site, producing summaries, then summaries of those summaries, and so on, we will end up with a grotesque slurry.
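The feedback loop itself is trivial to write down, which is part of why it's worrying. Here is a minimal sketch of it in Python; `summarize` is a hypothetical placeholder for whatever LLM call condenses the text (it is not a real library function), and the interesting part is the dynamics, not the model:

```python
def summarize(text: str) -> str:
    """Placeholder: call your LLM provider with a 'summarize this' prompt."""
    raise NotImplementedError("wire up an actual LLM call here")

def iterate_summaries(document: str, generations: int) -> list[str]:
    """Feed each summary back in as the next generation's input,
    like photocopying a photocopy."""
    history = [document]
    for _ in range(generations):
        history.append(summarize(history[-1]))
    return history
```

Each pass is lossy, and nothing in the loop ever adds information back, so successive generations can only drift toward a generic, information-poor attractor: the textual analogue of the image-collapse demos.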
At some point, fresh human input is needed to inject something meaningful into the process.