I had the same reaction, but the article is not AI-generated according to Pangram, which I've generally found reliable. I wonder if LLM turns of phrase, and even thought patterns, are creeping into normal human writing.
Or, stay with me here, the LLMs were trained on how we, statistically, write.
I think it's bidirectional. We change our writing based on what we see (AI-generated content on the internet), and AI will learn based on what we write.
It's worth mentioning that Pangram is more confident in its positive detections than its negative ones, as stated by the founder in an interview on the most recent ThursdAI episode.