Least shocking thing I've read about LLMs recently.
They are essentially like that JPEG meme, where each pass of saving as JPEG slightly degrades the quality until, by the end, it's unrecognizable.
Except with LLMs, the starting point is intent. Each pass through the LLM degrades that intent: in the case of a precise scientific paper, a little nuance and a little precision are lost with each re-wording here and there.
LLMs are mean-reversion machines: the further outside their training distribution the current context/workload is, the more they will tend to gradually pull it back toward some homogeneous, abstract equilibrium.
Where this result is actually interesting and relevant is when a coding agent splits a large source file into multiple smaller files. Opus + Claude Code will try to recite long sections of source code from memory into each of the new files, instead of using some sort of copy/paste operation like a human would.
Moving a file is a bit easier. LLMs may sometimes try to recite the file from memory. But if you tell them to use "git mv" and fix the compiler errors, they mostly will.
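A minimal sketch of that workflow, assuming a Rust project; the paths and build command are hypothetical:

    # Move the file with git so the content is preserved byte-for-byte,
    # instead of being recited from memory.
    mkdir -p src/frontend
    git mv src/parser.rs src/frontend/parser.rs

    # Surface the compiler errors for the agent (or you) to fix.
    cargo build 2>&1 | head -40

    # Afterwards, confirm the follow-up edits were small and targeted.
    git diff --stat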
Ordinary editing, on the other hand, generally works fine with any reasonable model and tool setup. Even Qwen3.6 27B is fine at this. And for in-place edits, you can review "git diff" for surprises.
There's a kid's game that illustrates this too: https://en.wikipedia.org/wiki/Telephone_game
A coworker talks about LLMs as "bullshit" layers. Not exactly dismissing them or being derogatory about them, but emphasising that each time you feed something through an LLM, what comes out the other side may not be what you expect/want. Like that guy at the pub sharing what he'd seen online somewhere, after a few pints. Might be accurate, but carries notable risk it's not.
So e.g., don't use an LLM to call an API to gather data and produce a report on it, as that's feeding deterministic data through a "bullshit" layer, meaning you can't trust what comes out the other side. Instead use the LLM to help you write the code that will produce a deterministic output from deterministic data.
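A minimal sketch of that division of labour, with a hypothetical endpoint and field names. The LLM helps write this once; from then on the script, not the LLM, touches the data:

    #!/bin/sh
    # Deterministic report from deterministic data: the summarising is
    # done by code, so the output is reproducible and auditable.
    API_URL="https://api.example.com/orders"   # hypothetical endpoint

    curl -s "$API_URL" |
      jq -r '"\(length) orders, total \([.[].amount] | add)"'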
I've seen co-workers use LLMs to summarise deterministic data coming from APIs and have reports be wildly off the mark as often as they are accurate. Depending on what they're looking at that can have catastrophic risk.
Further, could we think of intent as an ordered state, with each LLM pass introducing entropy, eventually resulting in something akin to free association?
LLMs are the most elaborate guessing machines mankind has made. That makes them both useless and useful, depending on what they’re used for.
That’s it. Once you look at everything through this lens, everything makes sense, especially the fact that there is no underlying understanding, reasoning, or creativity. I don’t care what boosters say.
I was talking about this in a thread yesterday. It’s why I don’t like blogs that are just LLM-generated. I don’t care how good you think it is, and I don’t care that you consider a facsimile of you good enough. If I want a rote, boring LLM response, I will prompt it myself. I do not appreciate reading blogs and other content assumed to be human-generated, only to find somebody trying to trick me into reading their prompt results like some annoying middleman.
I came to your blog to read what you had to say. Why are you writing a blog if you aren’t even going to write it?
A human doing the same task as the LLM did in the paper would degrade the document further than the LLM does. If the LLM degrades it by 25%, a human using the same technique would probably degrade it by 80%. I'm talking about a single pass.
The fact of the matter is, humans don't edit things the way it was done in the paper, and neither do coding agents like Claude. Think about it: you do not ingest an entire paper and then regurgitate the whole thing to make a single targeted edit... and neither do coding agents.
Also, think carefully: a 25% degradation rate is unacceptable in this industry. The AI shift that's taking over SWE development would not exist if every pass caused 25% degradation... that's way too much.
I've definitely experienced this while coding with LLMs. Often, after a flurry of feature work in which I thought I was being reasonably careful but moving very fast, I take a closer look at some small piece of code and go "holy shit". Then I have to spend a few hours going over everything and carefully reworking parts where things didn't quite go how I'd like, where I was unclear, or where the LLM's brainworms kicked in.
Quality is really important to me in its own right, but I also worry about this exact "repeated compression" problem: when my codebase is clean and I have an up-to-date mental model, an LLM can quickly help me churn out some feature work and still leave the codebase in a reasonable state. But as the LLM dirties up the codebase, its past mistakes or misunderstandings compound, and it's likely to flub more and more things. So I have to go back and "restore" things to a correct state before I feel comfortable using the LLM again.