People love to interpret the results in the most negative way possible because it's a threat to their occupation and identity. I refer to HN specifically.
The fact of the matter is, if you want to edit a document by reading it and then regurgitating the entire document with said edits, a human will do worse than 25% degradation. It's possible for a human to achieve 0% degradation, but only by ingesting the document hundreds of times until reaching a state called "memorization". The equivalent in an LLM is training: if you train a document into an LLM, you can get parity with the memorized human's edit in this case.
But the above is irrelevant. The point is that LLMs have certain similarities with humans: you need to design a harness so that an LLM edits a document the same way a human would, with search and surgical edits. All coding agents already edit this way (something like the sketch below), so this paper isn't relevant to them.
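For the unfamiliar, here's a minimal sketch in Python of the kind of exact-match edit tool coding agents typically expose (the function name and error messages are mine; the general shape matches str_replace-style tools, where the model supplies an exact snippet to find plus its replacement instead of rewriting the whole file):

    def surgical_edit(path, old_snippet, new_snippet):
        """Replace exactly one occurrence of old_snippet in the file at path."""
        with open(path, encoding="utf-8") as f:
            text = f.read()
        count = text.count(old_snippet)
        if count == 0:
            # Forces the model to re-read the file rather than guess at contents.
            raise ValueError("snippet not found")
        if count > 1:
            # Forces the model to include more surrounding context in the search.
            raise ValueError("snippet is ambiguous")
        with open(path, "w", encoding="utf-8") as f:
            f.write(text.replace(old_snippet, new_snippet, 1))

Because only the matched span gets rewritten, everything outside it never passes through the model's output, so the regurgitation degradation the paper measures doesn't enter the picture.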
> People love to interpret the results in the most negative way possible because it's a threat to their occupation and identity.
OR it could be because their concerns are genuine but get ignored in favour of a good-sounding story.