(1) The pattern "It's not just a X---it's a Y" is super common in LLM-generated text for some reason, complete with the em dash. (I like em dashes, and I wish LLMs weren't ruining them for the rest of us.)
"Upgrading your CPU wasn’t a spec sheet exercise — it was transformative."
"You weren’t just a user. You were a systems engineer by necessity."
"The tinkerer spirit didn’t die of natural causes — it was bought out and put to work optimising ad clicks."
And in general a lot of "It's not <alternative>, it's <something else>", with or without an em dash:
"But it wasn’t just the craft that changed. The promise changed."
It's really verbose. One of these in a piece might be eye-catching and make someone think, but an entire blog post made up of them is _tiresome_.
(2) Phrasing like this seems to come out of LLMs a lot, particularly ChatGPT:
"I don’t want to be dishonest about this. "
(3) Lots of use of very short catch sentences / almost sentence fragments to try to "punch up" the writing. Look at all of the paragraphs after the first in the section "The era that made me":
"These weren’t just products. " (start of a paragraph)
"And the software side matched." (next P)
"Then it professionalised."
"But it wasn’t just the craft that changed."
"But I adapted." (a few paragraphs after the previous one)
And... more. It's like the LLM latched on to things that were locally "interesting" writing but applies them globally, turning the entire thing into a soup of "ah-ha! hey! here!", completely ignorant of the terrible harm it does to the narrative structure and global readability of the piece.
Out of curiosity, for those who were around to see it: was writing on LinkedIn commonly like this, pre-ChatGPT? I've been wondering what the main sources were for these idioms in the training data, and it comes across to me like the kind of marketing-speak that would make sense in those circles.
(An explanation for the emoji spam in GitHub READMEs is also welcome. Who did that before LLMs?)
Thanks a lot, I really appreciate that you took the time for this detailed explanation.
> And... more. It's like the LLM latched on to things that were locally "interesting" writing but applies them globally, turning the entire thing into a soup of "ah-ha! hey! here!", completely ignorant of the terrible harm it does to the narrative structure and global readability of the piece.
It's like YouTube-style engagement maximization. Make it more punchy, more rapid, more impactful, more dramatic, regardless of how the outcome looks as a whole.
I wonder if this writing style is specific to ChatGPT on default settings, since that's the model I've heard people accuse most often of doing this. Do other models have different repetitive patterns?