Hacker News

Ask HN: Should "I asked $AI, and it said" replies be forbidden in HN guidelines?

843 points by embedding-shape yesterday at 4:02 PM | 429 comments

As various LLMs become more and more popular, so do comments like "I asked Gemini, and Gemini said ....".

While the guidelines were written (and iterated on) in a different time, it seems like it might be time to have a discussion about whether that sort of comment should be welcome on HN or not.

Some examples:

- https://news.ycombinator.com/item?id=46164360

- https://news.ycombinator.com/item?id=46200460

- https://news.ycombinator.com/item?id=46080064

Personally, I'm on HN for the human conversation, and large LLM-generated texts just get in the way of reading real text from real humans (assumed, at least).

What do you think? Should responses that basically boil down to "I asked $LLM about $X, and here is what $LLM said:" be allowed on HN, with the guidelines updated to state that people shouldn't critique them (similar to other current guidelines)? Should a new guideline ask people to refrain from copy-pasting large LLM responses into the comments? Or something else completely?


Comments

tptacek yesterday at 5:50 PM

They already are against the rules here.

https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...

(This is a broader restriction than the one you're looking for).

It's important to understand that not all of the rules of HN are on the Guidelines page. We're a common law system; think of the Guidelines as something akin to a constitution. Dan and Tom's moderation comments form the "judicial precedent" of the site; you'll find things in there like "no Internet psychiatric diagnosis" and "not owing $publicfigure anything but owing this community more" and "no nationalist flamewar" and "no hijacking other people's Show HN threads to promote your own thing". None of those are on the Guidelines page either, but they're definitely in the guidelines here.

TomasBM yesterday at 6:02 PM

Yes.

The pre-LLM equivalent would be: "I googled this, and here's what the first result says," and copying the text without providing any additional commentary.

Everyone should be free to read, interpret and formulate their comments however they'd like.

But if a person outsources their entire thinking to an LLM/AI, they don't have anything to contribute to the conversation themselves.

And if the HN community wanted pure LLM/AI comments, they'd introduce such bots in the threads.

flkiwi yesterday at 6:08 PM

I read comments citing AI as essentially equivalent to "I ran a $searchengine search and here is the most relevant result." It's not equivalent, but it has one identical issue and one new-ish one:

1. If I wanted to run a web search, I would have done so.

2. People behave as if they believe AI results are authoritative, which they are not.

On the other hand, a ban could result in a technical violation in a conversation about AI responses where providing examples of those responses is entirely appropriate.

I feel like we're having a larger conversation here, one where we are watching etiquette evolve in real time. This is analogous to "Should we ban people from wearing Bluetooth headsets in the coffee shop?" in the 00s: people are demonstrating a new behavior that disrupts social norms, but the actual violation is really that the person looks like a dork. To that end, I'd probably be more for public shaming, potentially a clear "we aren't banning it, but please don't be an AI goober and don't just regurgitate AI output", than I would support a ban.

masfuerte yesterday at 4:21 PM

Does it need a rule? These comments already get heavily downvoted. People who can't take a hint aren't going to read the rules.

stack_framer yesterday at 6:35 PM

I'm here to learn what other people think, so I'm in favor of not seeing AI comments here.

That said, I've also grown exceedingly tired of everyone saying, "I see an em dash, therefore that comment must have come from AI!"

I happen to like em dashes. They're easy to type on macOS, and they're useful in helping me express what I'm thinking—even if I might be using them incorrectly.

josefresco yesterday at 4:22 PM

As a community I think we should encourage "disclaimers" aka "I asked <AIVENDOR>, and it said...." The information may still be valuable.

We can't stop AI comments, but we can encourage good behavior/disclosure. I also think brevity should still be rewarded, AI or not.

tpxl yesterday at 4:17 PM

I think they should be banned if there isn't a contribution beyond what the LLM answered. It's akin to 'I googled this', which is uninteresting.

Rebelgecko yesterday at 5:52 PM

This is not just about banning a source; it is about preserving the core principle of substantive, human-vetted content on HN. Allowing comments that are merely regurgitations of an LLM's generic output—often lacking context, specific experience, or genuine critical thought—treats the community as an outsourced validation layer for machine learning, rather than an ecosystem for expert discussion. It's like allowing a vending machine to contribute to a Michelin-starred chef's tasting menu: the ingredients might be technically edible, but they completely bypass the human skill, critical judgment, and passion that defines the experience. Such low-effort contributions fundamentally violate the "no shallow dismissals" guideline by prioritizing easily manufactured volume over unique human insight, inevitably degrading the platform's high signal-to-noise ratio and displacing valuable commentary from those who have actually put in the work.

gortok yesterday at 4:17 PM

While we will never be able to get folks to stop using AI to “help” them shape their replies, it’s super annoying to have folks think that by using AI they’re doing others a favor. If I want to know what an AI thinks, I’ll ask it. I’m here because I want to know what other people think.

At this point, I make value judgments when folks use AI for their writing, and will continue to do so.

AdamH12113 yesterday at 4:21 PM

To me, the valuable comments are the ones that share the writer's expertise and experiences (as opposed to opinions and hypothesizing) or the ones that ask interesting questions. LLMs have no experience and no real expertise, and nobody seems to be posting "I asked an LLM for questions and it said...". Thus, LLM-written comments (whether of the form "I asked ChatGPT..." or not) have no value to me.

I'm not sure a full ban is possible, but LLM-written comments should at least be strongly discouraged.

m-hodges yesterday at 6:00 PM

I feel like this won't eliminate AI-generated replies, it'll just eliminate disclosing that the replies are AI-generated.

neom yesterday at 6:29 PM

Not addressing your question directly, but when I got flagged last year I emailed Dan and this was the exchange: "John Edgar <[email protected]> Sat, Jul 15, 2023, 8:08 AM to Hacker

https://news.ycombinator.com/item?id=36735275

Just curious if chatGPT is actually formally banned on HN?

Hacker News <[email protected]> Sat, Jul 15, 2023, 4:12 PM to me

Yes, they're banned. I don't know about "formally" because that word can mean different things and a lot of the practice of HN is informal. But we've definitely never allowed bots or generated comments. Here are some old posts referring to that.

dang

https://news.ycombinator.com/item?id=35984470 (May 2023)
https://news.ycombinator.com/item?id=35869698 (May 2023)
https://news.ycombinator.com/item?id=35210503 (March 2023)
https://news.ycombinator.com/item?id=35206303 (March 2023)
https://news.ycombinator.com/item?id=33950747 (Dec 2022)
https://news.ycombinator.com/item?id=33911426 (Dec 2022)
https://news.ycombinator.com/item?id=32571890 (Aug 2022)
https://news.ycombinator.com/item?id=27558392 (June 2021)
https://news.ycombinator.com/item?id=26693590 (April 2021)
https://news.ycombinator.com/item?id=22744611 (April 2020)
https://news.ycombinator.com/item?id=22427782 (Feb 2020)
https://news.ycombinator.com/item?id=21774797 (Dec 2019)
https://news.ycombinator.com/item?id=19325914 (March 2019)"

(Edit: oh, it's not 2024 anymore. How time flies!)

michaelcampbell yesterday at 4:21 PM

Related: Comments saying "this feels like AI". It's this generation's "Looks shopped" and of zero value, IMO.

TRiG_Ireland yesterday at 7:06 PM

As Tom Scott has said, people telling you what AI told them is worse than people describing their dreams. It definitely does not usefully contribute to the conversation.

Small exception if the user is actually talking about AI, and quoting some AI output to illustrate their point, in which case the AI output should be a very small section of the post as a whole.

ManlyBread yesterday at 4:33 PM

I think the whole point of a discussion forum is to talk to other people, so I am in favor of banning AI replies. There's zero value in these posts, because anyone can type chatgpt.com into their browser and ask whatever question they want at any time, while getting input from another human being is not always guaranteed.

incanus77 yesterday at 4:49 PM

Yes. This is the modern equivalent of “I searched the web and this is what it said”. If I could do the same thing and have the same results, you’re not adding any value.

Though this scenario is unlikely to have actually happened, I’d equate this with someone asking me what I thought about something, and me walking them over to a book on the shelf to show them what that author thought. It’s just an aggregated and watered-down average of all the books.

I’d rather hear it filtered through a brain, be it a good answer or bad.

chemotaxis yesterday at 4:16 PM

This wouldn't ban the behavior, just the disclosure of it.

lproven yesterday at 4:22 PM

I endorse this. Please do take whatever measures are possible to discourage it, even if it won't stop people. It at least sends a message: this is not wanted, this is not helpful, this is not constructive.

kreck yesterday at 7:48 PM

Yes.

Saying “ChatGPT told me …” is a fast track to getting your input dismissed on our team. That phrasing shifts accountability from you to the AI. If we really wanted advice straight from the model, we wouldn’t need a human in the loop - we’d ask it ourselves.

TulliusCicero yesterday at 5:41 PM

I'd like it to be forbidden, yes.

Sure, I'll occasionally ask an LLM about something if the info is easy to verify after, but I wouldn't like comments here that were just copy-pastes of the Google search results page either.

Looveh today at 6:42 AM

I wrote a short piece on the topic a while back, if anybody's interested.

https://niclas-nilsson.com/blog/40f2-empty-sharing

snayan yesterday at 6:14 PM

I would say it depends. From your examples:

1) Borderline. Potentially provides some benefit to the thread for readers who don't have the time or expertise to read an 83-page paper, although it would require someone to acknowledge and agree that the summary is sound.

2) Acceptable. Dude got Grok to make some cool visuals that otherwise wouldn't exist. I don't see what the issue is with something like this.

3) Borderline. Same as 1, mostly.

The more I think about this, the less bothered I am by it. If the problem were someone jumping into a conversation they know nothing about, and giving an opinion that is actually just the output of an LLM, I'd agree. But all the examples you provided are transformative in some way. Either summarizing and simplifying a long article or paper, or creating art.

appreciatorBus today at 5:54 AM

Yes.

Copying and pasting from ChatGPT contributes no more to the discussion than pasting the question into Google and submitting the result would.

Everyone here knows how to look up an answer in Google. Everyone here knows how to look up an answer in ChatGPT.

If anyone wanted a Google result or a ChatGPT result, they would have just done that.

TheAceOfHearts yesterday at 10:54 PM

I think you shouldn't launder LLM output as your own, but in AI model discussion and new release threads it can be useful to highlight examples of outputs from LLMs. The framing and usage is a key element: I'm interested in what kinds of things people are trying. Using LLM output as a substitute for engagement isn't interesting, but combining a bunch of responses to highlight differences between models could be interesting.

I think sometimes it's fine to source additional information from an LLM if it helps advance the discussion. For example, if I'm confused about some topic, I might explore various AI responses and look at the source links they provide. If any of the links seem compelling I'll note how I found the link through an LLM and explain how it relates to the discussion.

kldg today at 3:25 AM

This website has guidelines? I have a friend who vomits out an AI's response to current events; it drives me mad, but I resist being a dick about it (to their face).

I don't recall any instances where I've run into the problem here, maybe because I tend to arrive at threads as a result of them being popular (listed on Google News), which means I'm only going to read the top 10-50 posts. I read human responses for a bit before deciding if I should continue reading, and that's the same system I use for LLMs, because sometimes I can't tell just by the formatting; if it's good, it's good - if it's bad, it's bad -- I don't care if a chicken with syphilis wrote it.

ThrowawayR2 yesterday at 9:20 PM

That's been discussed previously in https://news.ycombinator.com/item?id=33945628 and dang said in the topmost comment: "They're already banned—HN has never allowed bots or generated comments. If we have to, we'll add that explicitly to https://news.ycombinator.com/newsguidelines.html, but I'd say it already follows from the rules that are in there. We don't want canned responses from humans either! ...". There's more to his comment if you're interested.

The HN guidelines haven't yet been updated, but perhaps if enough people email the moderators, they will.

amatecha yesterday at 6:50 PM

Yes. If I wanted an LLM-generated response I'd submit my own query to such a service. I never want to see LLM-generated content on HN.

cwmoore yesterday at 8:59 PM

Yes. Embarrassing cringe, whether or not it is noted.

But this is a text-only forum, and text (to a degree, all digital content) has become compromised. Intent and message are not attributable to real-life experience or effort. For the moment I have accepted the additional overhead.

As with most, I have a habit of estimating the validity of expertise in comments, and experiential biases, but that is becoming untenable.

Perhaps there will soon be transformer features that produce prompts adequate to the task of reproducing the thought behind each thread, so their actual value, informational complexity, humor, and salience may be compared?

Though many obviously human commenters are actually inferior to answers from “let me chatgpt that for you.”

I have had healthy suspicions for a while now.

krick yesterday at 11:44 PM

Sure, everyone wants to "stop silly people replying to my comments by posting LLM-generated garbage", but rules are rules, so you should understand that by introducing a rule like the one you propose, you also automatically forbid discussions about "here's a weird trick to make LLM make stupid mistakes", or "biases of different LLMs" where people reply to each other which prompts they tried and what was the result. Obviously, that's not what you've meant (right?), and everyone understands that, so then it's a judgement call when this applies and when it doesn't, and, congratulations, you've made another stupid rule that no one follows "and that's ok".

"A guideline to refrain" seems better. Basically, this should be only slightly more tolerated than "let me google for you" replies: maybe not actively harmful, but rude. But, anyway, let's not be overly pretentious: who even reads all these guidelines (or rules for that matter)? Also, it is quite apparent, that the audience of HN is on average much less technical and "nerdy" than it was, say, 10 years ago, so, I guess, expect these answers to continue for quite some time and just deal with it.

gruez yesterday at 4:19 PM

What do you think about other low quality sources? For instance, "I checked on infowars.com, and this is what came up"? Should they be banned as well?

a_wild_dandan yesterday at 4:28 PM

No. I like being able to ignore them. I can’t do that if people chop off their disclaimers to avoid comment removal.

sans_souse yesterday at 4:26 PM

There be a thing called Thee Undocumented Rules of HN, aka etiquette, in which states - and I quote: "Thou shall not post AI generated replies"

I can't locate them, but I'm sure they exist...

tekacs yesterday at 6:05 PM

Based on other HN rules thus far, I tend to think that this would just result in more comments pointing out that someone is violating a rule.

In many threads, those comments can be just as annoying and distracting as the ones being replied to.

I say this as someone who, to my recollection, has never been on the receiving end of a rule correction -- but I've seen so many of them over the years, and I feel like a rule like this would fill up the screen even more.

clearleaf yesterday at 10:38 PM

I don't see the point of publishing any AI-generated content. If I want an AI's opinion on something, I can ask it. If I want an AI image, I can generate it. I've never found it helpful to have someone else's AI output lying around.

zoomablemind yesterday at 4:28 PM

There's hardly a standard for a 'quality' contribution to discussion. Many styles, many opinions, many ways to react and support one's statements.

If anything, it has long been customary to supply references for important facts, thus letting readers explore further and interpret the facts.

With AI in the mix, the references become even more important, in view of hallucinations and fact poisoning.

Otherwise, it's a forum. Voting, flagging, ignoring are the usual tools.

BiraIgnacio yesterday at 10:33 PM

HN (and the community here) has a great system for surfacing the most useful information and pushing the not-so-good content away.

So no, I don't think forbidding anything helps. Let things fall where they should, otherwise.

unsignedint yesterday at 9:12 PM

I think the real litmus test should be whether the comment adds anything substantive to the conversation. If someone is outsourcing their ideas to AI, that’s a different situation from simply using AI to rephrase or tidy up their own thoughts—so long as they fully understand what they’re posting and stand behind it.

Saying "I asked AI" usually falls into the former category, unless the discussion is specifically about analyzing AI-generated responses.

People already post plenty of non-substantive comments regardless of whether AI is involved, so the focus should be on whether the remark contributes any meaningful value to the discourse, not on the tools used to prepare it.

AlwaysRock yesterday at 4:23 PM

Yes, unless something useful is actually added by the commenter, or the post itself is about "I asked LLM X and it said Y (that was unexpected)".

I have a coworker who does this somewhat often and... I always just feel like saying well that is great but what do you think? What is your opinion?

At the very least, the copy-paster should read what the LLM says, interpret it, fact-check it, and then write their own response.

MetaWhirledPeas yesterday at 7:34 PM

> Ask HN: Should "I asked $AI, and it said" replies be forbidden in HN guidelines?

This should be restated: should people, out of shame, stop admitting to AI usage and start pretending to be actual experts or to have done the research on their own when they really haven't?

Be careful what you wish for.

yomismoaqui yesterday at 4:24 PM

I think disclosing the use of AI is better than hiding it. The alternative is people using it but not telling, for fear of a ban.

wkat4242 today at 4:59 AM

Forbidden, no. But discouraged unless it really adds something constructive to the discussion.

qustrolabe yesterday at 5:51 PM

HN has a very primitive comment layout that gives too much focus to large responses and to the first, most-upvoted post with all its replies. Just because of that, I think it's better to do something about large responses with little value. I'd rather people just share a conversation link.

pseudocomposer today at 1:15 AM

I think it has its place, say, summarizing a large legal document under discussion. That said, if part of what someone says involves citing AI, I’d rather they acknowledge AI as their source.

I think making it a “rule” just encourages people to use AI and not acknowledge its use.

shishy yesterday at 4:15 PM

People are probably copy pasting already without that disclosure :(

rsynnott yesterday at 5:04 PM

They should be forbidden _everywhere_. Absolutely obnoxious.

RiverCrochet yesterday at 5:49 PM

Yes. LLM copy/paste strongly indicates karma/vote farming, because if I wanted an LLM's output I could just go there myself.

Someone below mentions using it for translation and I think that's OK.

Idea: Prevent LLM copy/pasting by preempting it. Google and other things display LLM summaries of what you search for after you enter your search query, and that's frequently annoying.

So imagine the same on an HN post, in a clearly delineated and collapsible box underneath or beside the post. It would also be annoying, but it removes the incentive to run the question through an LLM and post the output, because that has already been done.

PeterStuer yesterday at 4:28 PM

For better or worse, that ship has sailed. LLMs are now as omnipresent as web search.

Some people will know how to use it in good taste, others will try to abuse it in bad taste.

It might not be universally agreed which is which in every case.

show 1 reply
suckler yesterday at 8:52 PM

Asking a chat bot a question and adding its answer to a public conversation defeats the purpose of the conversation. It's like telling someone to Google their question when your personal answer could have potentially been a lot more helpful than a Google search. If I wanted Grok I'd ask Grok, not the human I chose to speak to instead.

tveyben yesterday at 9:24 PM

If one wants an opinion from AI, one must ask AI - if, on the other hand, one wants an opinion from a human being (those with a real brain thinking real thoughts etc.) then - hopefully - that's what you get (and will keep getting) when visiting HN.

Please don’t pollute responses with made-up, machine-generated, time-wasting bits here…!!!

HPsquared yesterday at 7:08 PM

It's nice that they warn others, though. Better to let them label it as such rather than banning the label. I'd rather it be simply frowned upon.
