I'd like to see some concrete examples that illustrate this - as it stands this feels like an opinion piece that doesn't attempt to back up its claims.
(Not necessarily disagreeing with those claims, but I'd like to see a more robust exploration of them.)
Look through my comment history at all the posts where I complain that the author might have had something interesting to say, but it's been erased by the LLM and you can no longer tell what the author cared about, because the entire post is written in an oversold, monotone advertising voice.
https://news.ycombinator.com/item?id=46583410#46584336
https://news.ycombinator.com/item?id=46605716#46609480
https://news.ycombinator.com/item?id=46617456#46619136
https://news.ycombinator.com/item?id=46658345#46662218
https://news.ycombinator.com/item?id=46630869#46663276
https://news.ycombinator.com/item?id=46656759#46663322
I just sent TFA to a colleague of mine who was experimenting with LLMs for auto-correcting human-written text, since she noticed the same phenomenon: it would correct not only mistakes, but also slightly nudge words towards more common synonyms. It would often lose important nuances, so "shun" would be corrected to "avoid", "divulge" would become "disclose", etc.
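If you want to spot this kind of drift in your own text, here's a minimal sketch (plain Python standard library, with hypothetical example strings chosen to mirror the substitutions described above) that diffs an original sentence against the LLM's "corrected" version and flags word-level swaps rather than genuine fixes:

    # Flag word-level substitutions between an original sentence and an
    # LLM-"corrected" version, using only the standard library.
    # The example strings below are hypothetical, mirroring the synonym
    # drift described above ("shun" -> "avoid", "divulge" -> "disclose").
    import difflib

    original  = "She chose to shun the press and refused to divulge the results."
    corrected = "She chose to avoid the press and refused to disclose the results."

    orig_words = original.split()
    corr_words = corrected.split()

    matcher = difflib.SequenceMatcher(a=orig_words, b=corr_words)
    for tag, i1, i2, j1, j2 in matcher.get_opcodes():
        if tag == "replace":
            # A word (or run of words) was swapped, not merely spell-corrected.
            print(f"substituted: {orig_words[i1:i2]} -> {corr_words[j1:j2]}")

Running it prints the two substitutions ("shun" -> "avoid", "divulge" -> "disclose"), which is exactly the nuance loss she was noticing.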
Kaffee: Corporal, would you turn to the page in this book that says where the mess hall is, please?
Cpl. Barnes: Well, Lt. Kaffee, that's not in the book, sir.
Kaffee: You mean to say in all your time at Gitmo, you've never had a meal?
Cpl. Barnes: No, sir. Three squares a day, sir.
Kaffee: I don't understand. How did you know where the mess hall was if it's not in this book?
Cpl. Barnes: Well, I guess I just followed the crowd at chow time, sir.
Kaffee: No more questions.
It is an opinion piece, by a dude working as a "Professor of Pharmaceutical Technology and Biomaterials at the University of Ferrara".
It has all the hallmarks of not understanding the underlying mechanisms while repeating the common tropes. Quite ironic, considering what the author's intended "message" is. JPEG -> JPEG -> JPEG bad, so LLM -> LLM -> LLM must be bad, right?
It reminds me of the media reception of that paper on model collapse. "Training on LLM-generated data leads to collapse." That was in '23 or '24? Yet we're not seeing any collapse, despite models being trained mainly on synthetic data for the past two years. That's not how any of it works. Yet everyone has an opinion on how badly it works. Jesus.
It's insane how these kinds of opinion pieces get so upvoted here, while worthwhile research, cool positive examples, and so on languish in /new with one or two upvotes. This has ceased to be a technical subject and has moved to muh identity.
Have you not seen it any time you've put a substantial piece of your own writing through an LLM for advice?
I disagree pretty strongly with most of what an LLM suggests by way of rewriting. They're absolutely appalling writers. If you're looking for something beyond corporate safespeak or stylistic pastiche, they drain the blood out of everything.
The skin of their prose lacks the luminous translucency, the subsurface scattering, that separates the dead from the living.