As various LLMs become more and more popular, so do comments along the lines of "I asked Gemini, and Gemini said ....".
While the guidelines were written (and iterated on) during a different time, it seems like it might be time to have a discussion about whether those sorts of comments should be welcomed on HN or not.
Some examples:
- https://news.ycombinator.com/item?id=46164360
- https://news.ycombinator.com/item?id=46200460
- https://news.ycombinator.com/item?id=46080064
Personally, I'm on HN for the human conversation, and large LLM-generated texts just get in the way of reading real text from real humans (assumed, at least).
What do you think? Should responses that basically boil down to "I asked $LLM about $X, and here is what $LLM said:" be allowed on HN, with the guidelines updated to state that people shouldn't critique them (similar to other current guidelines)? Should a new guideline ask people to refrain from copy-pasting large LLM responses into the comments? Or something else completely?
It’s fine as long as people took the effort to double-check the answers.
I don't think this needs to be banned, particularly because a ban wouldn't be very effective (people would just get rid of the "AI said" part), and also because anybody who actually writes a comment like that would probably get downvoted out of the conversation anyway.
Why introduce an unnecessary and ineffective regulation?
Yes, they are almost always low-value comments.
No, because this is self-correcting behavior. If the comments are annoying, people will downvote them. In the rare case where this is appropriate, they should be allowed. Guidelines are for things that will naturally be upvoted but shouldn’t be.
How are you going to enforce it? If someone does that and reformats the text a bit, it'll look like a unique response.
I think the AIs should post directly
I think AI can be useful to cite in comments as a source of information. For example, where you might otherwise say "According to Bloomberg, CPI is up 5% in the past 6 months[0]" with [0] linking to the page where you got that info, you could have "According to Claude/GPT/Gemini, CPI is up 5% in the past 6 months", ideally with [0] being the prompt used.
The current upvote/downvote mechanism seems more than adequate to address these concerns.
If someone thinks an "I asked $AI, and it said" comment is bad, then they can downvote it.
As an aside, at times it may be insightful or curious to see what an AI actually says...
We should prefer shaming and humiliation over forbiddance; norms beat laws in such situations.
Of course I prefer to read the thoughts of an actual human on here, but I don't think it makes sense to update the guidelines. Eventually the guidelines would get so long and tedious that no one would pay attention to them and they'd stop working altogether.
(did I include the non-word forbiddance to emphasize the point that a human, not a robot, wrote this comment? Yes, yes I did.)
1 and 3: straight to jail. 2 is fine.
AI writing should be rewritten or polished, at least as a form of respect for others.
No:
I actually find it kind of surprising that this post and the top comments saying "yes" even exist, because I think the answer should so firmly be "no". But I'll explain what I like to post elsewhere using AI (edit: and some reasons why I think LLM output is useful):
1. A unique, human-made prompt.
2. AI output, designated as "AI says:". This saves you the tokens and time of copying and pasting to get the output yourself, and it's really just to give you more info that you could argue for or against in the conversation (it adds a lot of "value" for the conversation to consider).
3. Usually I do some manual skimming and trimming of the AI output to make sure it's saying something I'd like to share, just as I don't purely "vibe code" but usually skim the output to make sure it's not doing something "extremely bad". The "AI says:" disclaimer makes clear that I may have missed something, but usually there's useful information in the output that is better, or less time-consuming, than doing lots of manual research. It's like citing Wikipedia or a web search and encouraging you to cross-check the info if it sounds questionable, but the info is good enough most of the time that it seems valuable to share.
Other points:
A. The AI-generated answers are just so good... Rejecting them feels akin to people here refusing to use AI to program (while I see a lot of posts saying otherwise, that they have had a lot of positive experiences using AI to program). It's really the same kind of idea. I think the key is in "unique prompts"; that's the human element in the discussion. Essentially I am sharing "tweets" (microblogs) and then AI-generated essays about the topic, so maybe I have a different perspective on why I think this is totally acceptable, as you can always just scroll past AI output if it's labeled as such. Maybe it makes more sense in context to me? Even for this post, you could have asked an AI "what are the pros and cons of allowing people to use LLM output to make comments" (a unique human prompt to add to the conversation) and then pasted the AI output for people to consider the pros and cons of allowing such comments, and I'd anticipate it would generate a pretty good essay to read.
B. This is kind of like schools: AI is probably going to force them to adapt somehow, because you could just add to a prompt "respond in such a way as to be less detectable to a human" or something like that. At some point it's impossible to tell whether someone is "cheating" in school or posting LLM output in the comments here. But you don't need to despair, because what ultimately matters in forum comments is that the information is useful, and if LLM output is useful then it will be upvoted. (In other concerning news related to this, I'm pretty sure they're working on how to generate forum posts and comments without humans being involved at all!)
So I guess for me the conversation is more about how to handle LLM output, and maybe about people learning how to comment or post with AI assistance (much like people are learning to code with AI assistance), rather than about totally banning it (which to me seems very counter-productive).
edit: (100% human post btw!)
Sometimes AI gives such a surprising or unusual answer to a question that it's worth a discussion. I think it should be discouraged but not "forbidden".
If it’s part of an otherwise coherent post making a larger point, I have no issue with it.
If it’s a low-effort copy-pasta post, I think downvotes are sufficient unless it starts to obliterate the signal-to-noise ratio on the site.
Absolutely, in cases where it's clear that the person asked a chatbot and copy-pasted the direct LLM response without editing. I usually downvote those (if it's within the window to do so) and flag them.
I'd love to say yes, but it's basically unenforceable if the comment doesn't disclose it itself.
While I agree that we should be genuinely engaging with each other on this platform, trying to disallow all AI-generated content reminds me of the naysayers when it comes to letting LLMs write code.
Yes; if you wanted to ask an LLM, you’d do so yourself. But someone else asks a specific question of the LLM and generates an answer that’s specific to his question, and that might add value to the discussion.
Yes, unequivocally yes.
Permaban on the first strike.
I remember times when this sentiment was expressed about “According to Wikipedia...”. As much as I am in favor of implementing this rule, I’m afraid we are losing this fight.
Yes. We should also ban citing Wikipedia. If you don't know, Wikipedia itself recommends you don't cite it: cite the sources Wikipedia or your AI uses; don't cite secondary sources.
Maybe I remember the Grok ones more clearly, but it felt like “I asked Grok” was more prevalent than the others.
I feel like the HN guidelines could take inspiration from how Oxide uses LLMs. (https://rfd.shared.oxide.computer/rfd/0576). Specifically the part where using LLMs to write comments violates the implicit social contract that the writer should put more care and effort and time into it than the reader. The reader reads it because they assume this is something a person has put more time into than they need to. LLMs break that social contract.
Of course, if it’s banned maybe people just stop admitting it.
Absolutely.
I am blown away by LLMs. I'm now using ChatGPT to help me write Python scripts in seconds or minutes that used to take me hours or weeks.
Yet when I ask a question or wish to discuss something on here, I do it because I want input from another meatbag in the Hacker News collective.
I don’t want some corporate BS.
I've always liked that HN typically has comments with small bits of research relevant to the post, research I could have done myself but don't have to because someone else did it for me. In a sense, the "I asked $AI, and it said" comments are just the evolved form of that. However, the presentation does matter a little, at least to me. Explicitly stating that you asked AI feels a little like an appeal to authority, and a bad one at that. And it makes the comment feel low-effort. Oftentimes comments that frame themselves this way will be missing the "last-mile" effort that tailors the LLM's response to the context of the post.
So I think maybe the guidelines should say something like:
HN readers appreciate research in comments that brings information relevant to the post. The best way to make such a comment is to find the information, summarize it in your own words, explain why it's relevant to the post, and then link to the source if necessary. Adding "$AI said" or "Google said" generally makes your post worse.
---------
Also I asked ChatGPT and it said:
Short Answer
HN shouldn’t outright ban those comments, but it should culturally discourage them, the same way it discourages low-effort regurgitation, sensationalism, or unearned certainty. HN works when people bring their own insight, not when they paste the output of a stochastic parrot.
A rule probably isn’t needed. A norm is.
Yea, mate, definitely. If I want to hear/read some replies by a machine, I'd ask it myself. The value of this site is the lived experience and thoughts of the users.
Well yes, of course, but that might interfere with the pump, so unfortunately, you will kindly be asked to report to your nearest re-education center for correction. Thank you for understanding.
Yes.
Thank you for your attention to this matter.
The guidelines are just fine as they are.
Low effort LLM crap is bad.
Flame-bait, uncurious mob pile-ons (like this thread) are also bad.
Use the downvote button.
Rather than ban them, I would prefer posts/comments be labeled as such,
with features:
- ability to hide AI-labeled replies (by default)
- assign lower weight when appropriate
- if a user is suspected to be AI-generated, retroactively label all their replies as "suspected AI"
- in addition to downvote/upvote, an "I think this is AI" counter
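Roughly, the labeling and weighting could look something like the sketch below (purely illustrative; every name, field, and threshold here is made up rather than any real HN API):

    // Purely illustrative sketch of the proposed features; nothing here
    // corresponds to a real HN API.
    interface Comment {
      id: string;
      labeledAI: boolean;          // author labeled the reply as AI
      authorSuspectedAI: boolean;  // retroactive per-account label
      upvotes: number;
      downvotes: number;
      aiFlags: number;             // "I think this is AI" counter
    }

    // Rank with a discount for suspected AI content instead of removing it.
    function rankingWeight(c: Comment): number {
      let weight = c.upvotes - c.downvotes;
      if (c.labeledAI || c.authorSuspectedAI) weight *= 0.25; // lower weight
      weight -= Math.min(c.aiFlags, 20) * 0.5; // soft, capped penalty
      return weight;
    }

    // Hidden by default when labeled, per the first feature above.
    function hiddenByDefault(c: Comment): boolean {
      return c.labeledAI || c.authorSuspectedAI;
    }

Downweighting and hiding by default, rather than deleting, would keep the existing voting mechanism intact.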
Yes, please. It’s extremely low effort. If you’re not adding anything of value (typing into another window and copying and pasting the output are not) then it serves no purpose.
It’s the same as “this” or “wut”, but much longer.
If you’re posting that and ANALYZING the output, that’s different. That could be useful. You added something there.
Yes. Unambiguously. I want this exact behavior to lead to social ostracism everywhere.
Edit: I'm happy to add two related categories to that too - telling someone to "ask ChatGPT" or "Google it" is a similar level offense.
It's karma fishing, so yes, please ban it. While we're at it, just automatically add the archive.is link to any news article or don't allow voting on those comments ¯\_(ツ)_/¯
AI generated content should be absolutely banned without question. This includes comments and submissions.
I think comments like this should link to their generation rather than copy-pasting it. Not sure if this should be a rule or we can just let downvoting do the work; I worry that a rule would be over-applied, and I think there are contexts that are okay.
Yes
Yes.
In my experience managing teams, you want to encourage rather than forbid this, because the alternative is that people will use LLMs without telling you, which is 100 times worse than disclosed LLM use.
I agree and think the solution is to get rid of the LLMs.
Were lmgtfy links ever forbidden?
While I do think such comments are pointless and almost never add anything to the discussion, I don't believe they're anywhere near as actively harmful as comments and (especially) submissions that are largely or entirely AI generated with no disclosure.
I've been seeing more and more of these on the front page lately.
Yes.
If it were my personal site, I would instantly ban all such accounts. They are basically virus-carrying individuals from outer space, here to destroy the discourse.
Since that isn't likely to happen, perhaps the community can develop a browser extension that calls attention to or suppresses such accounts.
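Something like this userscript-style sketch, say (hypothetical: the account list is whatever you maintain yourself, and the selectors assume HN's current markup with tr.comtr comment rows and a.hnuser author links):

    // Hypothetical userscript sketch: dim comments from accounts you have
    // personally marked as suspected LLM-pasters, rather than deleting them.
    const suspectedAccounts = new Set<string>(["example-user-1", "example-user-2"]);

    for (const row of document.querySelectorAll<HTMLElement>("tr.comtr")) {
      const author = row.querySelector<HTMLAnchorElement>("a.hnuser")?.textContent;
      if (author && suspectedAccounts.has(author)) {
        row.style.opacity = "0.3";             // dim rather than remove,
        row.title = "Suspected AI-generated";  // so it can still be read
      }
    }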
> large LLM-generated texts just get in the way of reading real text from real humans
In terms of reasons for platform-level censorship, "I have to scroll sometimes" seems like a bad one. AI slop is very exhausting to understand. If it's well written, maybe not. If it's obviously AI, then that should be flagged.
Only if they also do a Google search, provide the top one hundred hits, and paste in a relevant Wikipedia page.
I don’t see how it is much different from using Wikipedia. They usually give about the same answer, and at least in Gemini it is usually a correct answer now.
No, don't ban it. It's a useful signal for value judgements.
TL;DR: Until we are sure we have the moderation systems to assist in surfacing the good stuff, I would be in favour of temporary guidelines to maintain quality.
Longer ...
I am here for the interesting conversations and polite debate.
In principle I have no issues either with citing AI responses in much the same way we do any other source, or with individuals prompting AIs to generate interesting responses on their behalf. When done well, I believe it can improve discourse.
Practically, though, we know that the volume of content AIs can generate tends to overwhelm human-based moderation and review systems. I like the signal-to-noise ratio as it is, so from my POV I'd be in favour of a cautious approach, with temporary guidelines against its usage until we are sure we have the moderation tools to preserve that quality.