Hacker News

eleventyseven · today at 4:07 AM

I routinely call out people for writing in an LLM-assisted fashion that makes it obvious they have just been "vibe commenting". You know: paste the thread in, copy the output, no thinking in between. These are the people who, for some insane reason, believe their copy-pasting skills and $20/mo subscription amount to genuine conversation, as if they were the archive.whatever of the AI era. Those comments are objectively terrible and contribute little: all the consultant sycophant-speak and distracting prose that falls out of the default prompt and RLHF.

But that's really what you're now enforcing: writing in an easily detectable LLM prose and voice. LLM detection is very difficult, especially for short, comment-scale texts. There is never proof, only telltale phrases. How will this be enforced? And what even counts as "AI"?

The thing that really frustrates me is that I can't put tokens through a transformer in any way while editing my post? I can't have an LLM turn a bare link after a sentence into a [1]? I can't have an LLM do literally nothing more than spell-check, even though a rule-based model would be fine? What about other LLMs, or SLMs, or classic NLP tools chained together? Or is it just the transformer that's unclean?

And it is now officially sanctioned that people keep "does this feel LLM-ish?" in the back of their mind instead of "is this a good comment that contributes to the discussion?" Maybe LLM prose is so annoying and insufferably sycophantic that even if all the content and logic were sound, it should still be moderated out entirely. But is the entire technological form profane and unclean?

I am 100% not interested in participating in a community that seeks to profile and police the technological infrastructure that its members use. I want my comments judged by the contributions they make and do not make to the discussion. If the LLM makes the comment better, it is good. If it makes it worse, it is bad.


Replies

coldtea · today at 8:29 AM

>But that's really what you're now enforcing: writing in an easily detectable LLM prose and voice.

That's a good start already. Don't let the impossibility of the perfect prevent implementing the good.

>I want my comments judged by the contributions they make and do not make to the discussion. If the LLM makes the comment better, it is good. If it makes it worse, it is bad.

Nope, it's all bad. If I wanted the comments of an LLM, I'd ask an LLM.

>I am 100% not interested in participating in a community that seeks to profile and police the technological infrastructure that its members use.

Well, don't let the door hit you on your way out.

lelanthran · today at 4:51 AM

> I am 100% not interested in participating in a community that seeks to profile and police the technological infrastructure that its members use.

I suppose, then... goodbye?

After all, there are a ton of different forums where you can have your chatbot talk to other chatbots.

bluedel · today at 8:56 AM

>I want my comments judged by the contributions they make and do not make to the discussion

There used to be a sort of gentleman's agreement that I could spare the time to read and judge your comment because you went through the effort of writing it.

calmoo · today at 4:11 AM

I think a more generous interpretation of dang's comment is that it's fine to use LLMs or other tools for spellchecking and fixing grammatical errors, but a heavier pass, where the prose, wording, and tone are altered (even mildly), can create a 'slop ambience' over time: death by a thousand paper cuts.
