A year ago I would have agreed, but lately, when it comes to stuff linked off of HN, it's actually more likely to be clear and readable if it's AI-written.
Is it actually more likely to be clear and readable if it's AI-written, or are features associated with clear writing (both directly and by correlation) increasingly misperceived as "AI tells" because they're also favored in LLM training?
I don't find the LLM-written stuff very readable, because after one too many "real"s or "The X Dilemma" my brain shuts off. It's not even voluntary; it just does that on its own.