Have a look here [1] and here [2] - I think they are good resources, but fallible in the long run. I think yes, I do, often confirmed by communication with people I know (i.e. I suspect they have used AI to make something -> I ask). This falls victim to confirmation bias, though. I suspect a nontrivial amount of writing I read is AI generated without me realising, and I'm also wary of falsely flagging writing that is actually human as AI-generated.
[1] https://en.wikipedia.org/wiki/Wikipedia%3AAI_or_not_quiz [2] https://en.wikipedia.org/wiki/Wikipedia%3ASigns_of_AI_writin...
I think the second resource that you linked to is valuable. The first is useless unless you're a Wikipedia editor, the significance of verifying citations notwithstanding.
The gap between LLM-generated writing and the composite style of the average Wikipedia page is narrower than most people believe.
Okay, but the answers in [1] look something like:
AI generated. Some of the clues include:
- Most obviously, a failed ISBN checksum
- Other source-to-text integrity issues; for example, the WWF source says very little about Malaysia specifically, only mentions Sunda tigers (Panthera tigris sondaica), and does not mention tapirs at all
- Very short yet consistent paragraph length
- Generic "see also" links, one of which is redlinked
This is not the sort of thing that I pay attention to unless I'm doing detailed research. And even then I'd probably have a bot check these for me, ironically, since it's such a mechanical job. At the very least, detecting AI like this requires conscious effort.
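The checksum clue really is mechanical: an ISBN-13 is valid only if its digits, weighted alternately 1 and 3, sum to a multiple of 10. A bot-checkable sketch (the function name is my own, not anything from the quiz) might look like:

```python
def isbn13_valid(isbn: str) -> bool:
    """Return True if the string passes the ISBN-13 checksum.

    Per the ISBN-13 rule, digits weighted 1,3,1,3,... must sum
    to a multiple of 10. Hyphens and spaces are ignored.
    """
    digits = [int(c) for c in isbn if c.isdigit()]
    if len(digits) != 13:
        return False
    return sum(d * (3 if i % 2 else 1) for i, d in enumerate(digits)) % 10 == 0


# A fabricated citation usually fails this check:
print(isbn13_valid("978-0-306-40615-7"))  # valid check digit
print(isbn13_valid("978-0-306-40615-8"))  # off-by-one check digit
```

So a script can flag a hallucinated ISBN instantly, which is exactly why leaving this to a bot makes sense.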