Reads at least partially like LLM writing, for example:
> When code production gets cheap, the cost doesn't disappear. It migrates.
> It was true then. It is unavoidably true now.
There is a reason why such a pattern is frequent in LLM-generated text.
Any good human-written text that conveys useful information is likely to highlight, in this way or an equivalent one, the contrast between what the reader is expected to wrongly believe and the reality.
When the reader already knows what the text has to say, that text is superfluous.
Therefore a text that provides new and unexpected information, i.e. a useful text, must use some means of showing readers the error of their ways.
It may use simple juxtaposition like "it is not ... it is ...", or it may be more verbose and add "but", "however", "nonetheless", etc.
I believe it is counterproductive to use this kind of pattern as a method of detecting AI-written text, because it normally exists in useful human-written texts as well.
What should be commented on is whether the claim is true, i.e. whether the second part with "it is ..." is indeed true, or whether the whole pattern is superfluous because every expected reader is already aware that the first part with "it is not ..." is true.
> When code production gets cheap, the cost doesn't disappear. It migrates.
I'm surprised people aren't taking the time to edit this very specific kind of phrasing out of their writing. It's such a common AI tell now that, even when writing by hand, I'd just avoid it entirely.
Then again, I hated that LLMs co-opted the em-dash, and I refuse to stop using it, so I suppose I get it.
Sometimes I feel like we are entering a new witch hunt era, but for LLM-generated text. Before clicking submit I am sometimes afraid that my text will be labeled "LLM Generated" even though it's not. Enough people classify you as a witch and you get burnt. Though in this case you only receive nasty comments, downvotes, and possible social media bans.
Edit: In my observation, opinions that disagree with yours get labeled "AI Generated" more often than opinions that agree with yours.
i disagree and even if assisted the points are still valid
Comment reads at least partially like human writing, for it is terse and does not try to make a point.
Really? Do we now suspect everybody who uses the most basic of stylistic elements of producing slop?
Pendulums always swing back and forth between extremes, but oh boy did this one swing fast into witch hunt territory.
Like clockwork, every single thread about something AI-related has someone expressing their disgust at passages of LLM-written text. In many cases by the same people who are enthusiastically embracing LLM-generated software. Why don't we show the same level of contempt for LLM-authored software as we do for even the slightest hint of LLM-authored text in a blog post?
Maybe it's just because I grew up spending way too much time on the internet, but I write like that and have since well before LLMs. As much as people like to attribute that style to AI, I don't think it's the dead giveaway that people act like it is.