You will start to recognize it over time. The major AI models each have their own voice and a set of patterns they overuse, and the more you see those patterns, the more readily you recognize them. By now I can quickly tell whether a blog post or README.md was generated by Claude or ChatGPT because the signs are so obvious.
Even AI-written Hacker News comments are easy to spot if they weren't edited. I know I'm not alone: when I recognize an AI comment and check the author's comment history, I find other people calling out their AI-generated submissions, too.
Learning to recognize the output of the popular AI models is becoming a critical business skill, too. You need to be able to separate content from someone doing real work, which you should take seriously, from the output of someone having ChatGPT produce volumes of text they don't review. People who do that will waste your time.
It's only obvious if you leave the default tone in place. If you specifically ask the model to hide its AI voice and sound human, it does a really good job, and even better if you give it an example of the writing style you want.
Ask it to write in the style of patio11 or someone else with a distinctive tone, and it will do a remarkable job.
It will pass pretty consistently. Not sure I love it.
This is a temporary problem. Look at how fast things are progressing. Models will keep improving until none of this matters because the output is indistinguishable from human writing.
I don't see how to interpret your claims. How do you know you're right when you "recognize" Claude or ChatGPT? And how do you know how much of the text you don't flag as LLM output is actually LLM-generated? My recollection is that whenever I've seen data on this (e.g. educators who believe they can spot students cheating), the conclusion is that people are really bad at identifying LLM-generated content.