If you didn't bother to write it, why should I bother to read it?
I find it interesting that AI-edited comments aren’t allowed. Sometimes I just want it to help me make something polite.
I definitely agree with the rule on AI-generated comments.
Whatever the rules are, I’m happy to play by them.
As AI moves on and becomes better, the only real solution is to have closed-off communities where you get vetted to join. That is the sad reality.
Ironic to see how popular this post is given the number of generative-AI companies at YC (here I also take my share of the blame).
Nonetheless I like this policy as well.
Humans already revise and refine their thinking. Tools just compress that process and help filter signal from noise. The meaning still originates with the person.
A Please (or even a Pls) would have been nice ... But I upvoted anyway.
I appreciate this being added to the guidelines.
That said, I also wouldn't hate seeing an official playground where it is cordoned off and bots are welcome to operate, i.e., like Moltbook, but for HN...? I realize this could be done by a third party, but I wouldn't hate seeing Y Combinator take a stab at it.
Maybe that's too experimental and better left to third parties to implement (I'm guessing there are already half a dozen vibe-coded implementations of this out there right now) -- it feels more like the sort of thing that could be an interesting (useful?) experiment, rather than something we want to commit to existing in perpetuity.
I've been noticing a _lot_ more AI-generated/edited content of late, both comments and stories. It's gotten to the point that I spend a lot less time on HN than I used to, and if it continues to get worse I expect I'll quit altogether.
At the end of the day, I'm here because of all the thoughtful commenters and people sharing interesting stories.
Check my comment history, and you'll see how pervasive this is. I've tried to reply to every bot I've seen, but it's hard to keep up with.
In the age of AI, thinking becomes a privilege.
Real talk: who is this guideline going to stop? People are already doing this and they will continue. Even if you find them, they’ll just make more accounts and continue.
I don’t think there is a good algorithm (or guts) for differentiating between well-written comments and AI-generated comments.
Will using a voice-to-text app to create my comment get me banned? Especially if it creates a transcription mistake that might be characteristic of an LLM.
At work, it’s becoming a real problem that people are using Copilot to write their emails.
Where do we draw the line on AI-edited comments? Technically, spell check has been "editing" my comments since I first started on here.
Could we also discourage comments and comment-threads accusing an article of being AI-written? Half the threads these days have a comment that latches onto some LLM-ism in TFA, calls it out, and spawns a whole discussion which gets repetitive fast. I think this falls into the same category as "don't comment about the voting on comments."
Personally, I try to look beyond the language, which admittedly can be grating, for some interesting ideas or insights. Given that people are already starting to sound like ChatGPT, probably through sheer osmosis, we will have no choice but to look past that anyway.
Yes, it's annoying to read LLM-isms. It's also fine to downvote or ignore or grumble internally, and move on.
Nitpick: how do you classify the use of Grammarly? When I verify my wording and spelling with a tool, does it fall under this rule?
Perhaps there needs to be an ai.news... then let the AIs talk and interact there in a safe place.
I've used an LLM to correct my English, but it's better to write English at my own level.
This should be bog-standard for all social media, but a lot of companies affiliated with this site seem to think otherwise.
I want a social network that goes beyond banning bots and also bans the half of the population that doesn’t have an inner monologue.
It's time to change the name from Hacker News to Human News, let's go!
How can HN actually moderate this though and prevent AI content from proliferating unchecked?
The moltbots will consider this rule an affront and a Turing-test-inspired challenge. Onward and upward!
Sometimes, an AI helps articulate an idea or an intuition. Is that okay, or is it too much already?
I don't understand the need to use AI for this kind of convo. +1 to this.
But where is the line? Is a spell checker okay? How about one that also suggests alternative wording?
I think, in the end, it is less about the tool you use and more about the purpose you use it for. With certain tools, you should be cautious about whether you are using them for the right purpose.
I don't get it. We use tools to assist in written communication all the time. If someone wants to ask an LLM to check their grammar or edit for clarity or change the tone, it's still a conversation between humans. Everyone now has access to a real time editor or scribe who can craft their message the way they want it to sound before sending it off. Great.
Haha. Was just thinking that as I was reading a comment!
I was thinking, this argument is suspiciously cogent!
I think that's the purpose of that "flag" button. And that's good enough.
Apple's Proofread is essentially spell check and punctuation until it isn't: even in a paragraph a few sentences long, you'll see it has sneakily changed a lot, and Apple being Apple, you, the customer, obviously have no way to set it to "only fix spelling and punctuation, and leave everything else, including grammar, as it is". I have a feeling a lot of folks are at least using Proofread or something along those lines. But then I really don't think the browser's spell check ought to be kosher either, if the content has to be the human's, because those mistakes are also what make such text human and in some way unique. I don't think it's an easy line to draw, but it's weird seeing just comments "targeted" here.
YC funds a gazillion AI startups that expand and augment the AI slop pipeline, but would hate to experience the consequences. It's very much slop for thee but not for me.
On the other hand, shouldn’t there be a policy forbidding the use of HN data for LLM training? I would certainly be more encouraged to participate if I knew that the content I provide for free is not used to train an LLM that is later sold by a company valued at hundreds of billions. Perhaps there are others who feel the same.
It's an interesting guideline, but will require self-enforcement.
So the only problem now is to get the AIs to read the guidelines before posting. :D
We need blade runners to identify the replicants among us and remove them.
Unironically, I'd love to have a captcha here for comments and submissions.
Many of us — perhaps even the best of us — can sometimes be mistaken for AI bots.
Just add a filter for em-dashes and 99% of AI posts are out the window already.
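Taken literally, that filter is a one-liner. A minimal sketch (the function name is invented for illustration, and of course plenty of humans type em-dashes too, so this would have false positives):

```python
def flag_em_dash(comment: str) -> bool:
    """Flag a comment that contains an em-dash (U+2014).

    A crude heuristic, not a real detector: many human writers
    use em-dashes deliberately, so treat a hit as a hint only.
    """
    return "\u2014" in comment


if __name__ == "__main__":
    print(flag_em_dash("Great point \u2014 well said."))  # contains an em-dash
    print(flag_em_dash("Plain old hyphen - here."))       # does not
```

Note that U+2014 is distinct from the hyphen-minus (`-`) and the en-dash (U+2013), so a naive check like this misses the many stylistic variants people (and models) actually type.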
The next step is to forbid generated/AI-edited posts.
I think it's hilarious that whenever someone complains about this they're called a Luddite, and now this happens on a website filled with LLM enthusiasts who have done nothing but overpromise.
I had been wondering if and when HN would update its guidelines for this. Glad to see it.
I would enjoy a "block user" feature, to help this. I personally want to live in an online bubble of interesting thoughts. This seems close (or better, since people I enjoy can contradict my own flags) [1].
What’s interesting to me is the number of commenters here making a case of the form “use your own words; grammar and spelling are not that important; we’ll know what you mean”, and yet other discussions often contain pedants going off-topic to correct someone else’s use of language.
Re-reading the HN guidelines, each seems individually reasonable, yet collectively I’m worried that they create an environment where we can take issue with almost anyone’s comments (as per Cardinal Richelieu’s famous quote: “Give me six lines written by the most honorable person alive, and I shall find enough in them to condemn them to the gallows.”)
Really, all the rules can be compressed into one dictum: don’t be an arsehole. And yet the free speech absolutists will rail against the infringement upon their right to be an arsehole. So where does that leave us? Too many rules leads to suppression of even reasonable speech, while too few leads to a “flight” of reasonable speech. End result: enshittification.
First post on HN, and this is the reason I want to explore this community more. Glad to have the digital human touch with all you folks :-)
I've seen AI-generated comments used quite a lot, even by real people. When asked why, they could not explain it, or claimed it was "to reduce spelling mistakes". Which makes no sense; real people make spelling mistakes and typos all the time. Why would that warrant the use of AI? To me it seems some people are just mega-lazy, so they use AI; and for testing, too. When they do so, though, they waste the time of other humans, as those humans suddenly have to "interact" with AI without it being announced. It is a form of cheating, IMO. On YouTube you now find many fake videos created by AI, without any announcement. I don't watch these, as I consider that cheating too when not labeled as such. Admittedly it is getting very hard to distinguish what is real and what is fake. There are some ways to find out, but it is getting really hard to do so accurately. Sometimes you see e.g. 10 funny animal videos and only 2 are fake AI, so these people combine cheating with non-cheating. Very annoying; it degrades YouTube, which isn't so bad actually, since that is owned by evil Google.
True that AI comments do degrade discussion. Though a forum enforcing human-only text also becomes an unusually clean training corpus. Both things can be true.