Hacker News

lich_king · yesterday at 9:47 PM · 3 replies

Many HNers strongly argue that it's absolutely impossible to distinguish between AI text and non-AI text. Some of that seems to be a knee-jerk reaction to the occasional one-sided stories of people who were accused of using LLMs and fired from their jobs. And some of it seems to be hedging, so that we don't develop a culture that could penalize their LLM-generated posts or code.

We had people defending the fired Ars Technica guy, even though he admitted to using an LLM in a contrived non-apology along the lines of "I did it because I had a cold".

My main problem with that is that you can just generate an infinite supply of LLM op-eds about LLMs, and is this really what we want to read every day? If I want to know what ChatGPT thinks about the risks or benefits of vibecoding, I'll just ask it.


Replies

torawaytoday at 2:36 AM

Sure, it's obviously impossible to ID any single piece of writing as from an LLM without significant false positives.

But in practice, I frequently encounter a comment that either screams generic LLM slop or just gives off a vague, indefinable "vibe" from one or more telltale signs; that's red flag #1. Then I go to the comment history. If it really is a bot/claw/agent or a poster heavily using LLMs, I'll usually find page after page of cookie-cutter repeats of the exact same "LLM smell" (even if the account has been prompted to avoid em-dashes/lists/etc., it still trends toward repeating its own style).

At that point a human moderator would have more than enough evidence to ban the account. It's not like we're talking about a death sentence or something. If there's no clear pattern of abuse in the long-term commenting history, give them the benefit of the doubt and move on.
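The "same smell across the whole history" heuristic described above can be sketched mechanically. This is a minimal illustration rather than a real detector: the names (`style_repetition_score`, `ngrams`) and the word-trigram-overlap measure are my own assumptions, and an actual moderation tool would need far more signal than raw phrase repetition.

```python
from itertools import combinations

def ngrams(text, n=3):
    """Lowercased word n-grams of a single comment."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def jaccard(a, b):
    """Overlap between two n-gram sets (0.0 = disjoint, 1.0 = identical)."""
    return len(a & b) / len(a | b) if a | b else 0.0

def style_repetition_score(comments, n=3):
    """Mean pairwise n-gram overlap across an account's comment history.

    Varied human writing tends to score near zero; template-like
    output that repeats the same phrasing scores noticeably higher.
    """
    grams = [ngrams(c, n) for c in comments]
    pairs = list(combinations(grams, 2))
    if not pairs:
        return 0.0
    return sum(jaccard(a, b) for a, b in pairs) / len(pairs)
```

A history full of "cookie cutter repeats" would push the score up, while an account with genuinely varied comments stays low; in practice a moderator's judgment would still make the final call.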

wolvoleo · yesterday at 10:08 PM

Hmm, some LLM text is hard to detect, sure.

Some is also horribly easy. If the text is full of:

- Overly positive commentary and encouragement

- Constant use of bullet point lists, bolding and emoji

- This quaint forced 'funniness', like a misplaced attempt at being lighthearted

- A lot of blather that just misses the point

- Neither concise and to the point, nor especially long

Then that really screams ChatGPT to me.

I think it's because this seems to be ChatGPT's default style. When people tailor their prompt to be specific about style, it's a lot harder to detect; but if they just dump a few lines of instructions about the content into it, this is what you get. So the low-effort slop is still pretty easy to detect, IMO.

mschuster91 · today at 12:28 AM

> Many HNers strongly argue that it's absolutely impossible to distinguish between AI text and non-AI text.

And it's becoming more and more difficult, not just because AI is getting "better" (with training removing many of the telltale signs), but also because regular people "learn" to write like an AI does. We're seeing it with "algospeak": terminally online young people literally say things like "unalived" in meatspace nowadays.

We're living in a 1984 LARP.