> "it's not X (emdash) it's Y" pestilence.
I wonder how long this will keep working. Can't be too hard to prompt an AI to avoid "tells" like this one...
Anyone lazy enough not to check the output is also going to be lazy enough to leave the tells in, so they'll stay easy to spot.
People who do put effort into checking the output aren't necessarily checking more than style, but some of them will go further than that, so the tells will still help.
Luckily there are plenty of other obvious tells!
Biggest one in this case, in my opinion: it's an extremely long article with awkward section headers every few paragraphs. I find that any use of "The ___ Problem" or "The ___ Lesson" for a section header is especially glaring. Or more generally, many superfluous section headers of the form "The [oddly-constructed noun phrase]". I mean, googling "The Fire-Retardant Giants" literally only returns this specific article.
Or another one here: the historical stock price data is slightly wrong. For whatever reason, LLMs often seem to make mistakes with that, perhaps because they operate on downsampled data. The initial red flag here is that the first table claims Apple's split-adjusted peak close in 2000 was exactly $1.00.
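That figure is easy to sanity-check yourself. A minimal sketch using yfinance (my choice of data source, not anything from the article; any provider of split-adjusted closes would do) that pulls AAPL's daily closes for 2000 and reports the peak:

    import yfinance as yf

    # Daily AAPL prices for calendar year 2000. Ticker.history() returns
    # split-adjusted prices by default (auto_adjust=True), so Close here is
    # directly comparable to the article's "split-adjusted peak close".
    df = yf.Ticker("AAPL").history(start="2000-01-01", end="2001-01-01")

    peak = df["Close"].max()
    when = df["Close"].idxmax()
    print(f"AAPL split-adjusted peak close in 2000: ${peak:.2f} on {when:%Y-%m-%d}")

For reference, Apple has since split 2:1 (June 2000), 2:1 (2005), 7:1 (2014), and 4:1 (2020), so an early-2000 close gets divided by 112 in back-adjusted data. Real adjusted prices almost never land on a suspiciously round number like exactly $1.00.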
There are plenty of issues with the accuracy of the written content as well, but it's not worth getting into.
People are already prompting with "yeah, don't do these things":
https://en.wikipedia.org/wiki/Wikipedia:Signs_of_AI_writing