There's a certain irony in the fact that the article itself is quite clearly AI-assisted. Not a criticism per se, as I don't have a problem with AI assistance, but food for thought given the material being commented on.
Made me stop reading a few paragraphs in. I don't have a "problem" in the ethical sense either, but as the sibling comment notes, the way LLMs write is rather grating. To make matters worse, a) people seem to use them to add pointless volume / "filler" to their texts, so now I have to wade through pages and pages of this stuff, and b) I have no easy way to distinguish between an article grounded in at least some novel human insight and one generated entirely from a "write me something about X topic" prompt. I don't think it's a stretch to say that the latter just isn't worth reading given the state of the art.
I don't have a problem with AI assistance either, but it undermines the point the article is making. For me it's like a priest preaching that gay sex is wrong and then being caught in bed with a male prostitute (snorting cocaine optional). Leaves a bad taste in the mouth.
Out of curiosity, what are you basing this on?
The text has few of the obvious AI tells. The only thing that, to me, looks characteristic of LLM-generated text is the short and terse sentence structure, but this has been a "prestigious" way to write in English since Hemingway.
The tropes that AI introduces into articles are very noticeable, quite annoying, and very unnatural -- LLMs unfortunately don't write well. People seem to use them to "polish up" their writing, but in reality it would have read better if they hadn't.
My current pet peeve is using a period where a comma belongs, as in:
> My people lived the other side of this equation. Not the factory floor. The receiving end.
Ostensibly this is supposed to add gravitas, but it's very often done in places where that gravitas isn't needed, and it comes off as if I'm reading the script for an action-movie trailer.