Hacker News

Ask HN: Should "I asked $AI, and it said" replies be forbidden in HN guidelines?

846 points by embedding-shape, yesterday at 4:02 PM | 429 comments

As various LLMs become more and more popular, so do comments along the lines of "I asked Gemini, and Gemini said ...".

While the guidelines were written (and iterated on) during a different time, it seems like it might be time to have a discussion about whether those sorts of comments should be welcome on HN or not.

Some examples:

- https://news.ycombinator.com/item?id=46164360

- https://news.ycombinator.com/item?id=46200460

- https://news.ycombinator.com/item?id=46080064

Personally, I'm on HN for the human conversation, and large LLM-generated texts just get in the way of reading real text from real humans (assumed, at least).

What do you think? Should responses that basically boil down to "I asked $LLM about $X, and here is what $LLM said:" be allowed on HN, with the guidelines updated to state that people shouldn't critique them (similar to other current guidelines)? Should a new guideline be added asking people to refrain from copy-pasting large LLM responses into the comments? Or something else completely?


Comments

tveyben yesterday at 9:24 PM

If one wants an opinion from AI, one must ask AI - if on the other hand one wants an opinion from a human being (those with a real brain thinking real thoughts etc.) then - hopefully - that's what you get (and will keep getting) when visiting HN.

Please don't pollute responses with made-up, machine-generated, time-wasting bits here…!!!

lkt yesterday at 9:05 PM

No, because it allows you to set the bozo bit on them and completely disregard anything they say in the future.

WhyOhWhyQ yesterday at 10:36 PM

I always state when I use AI because I view it as deceptive otherwise. Since I'll sometimes use AI when it seems appropriate, and certainly only in direct, limited ways, this rule seems like it would force me to be dishonest.

For instance, what's wrong with the following: "Here's an interesting point about foo topic. Here's another interesting point about bar topic; I learned of this through use of Gemini. Here's another interesting point about baz topic."

Is this banned also? I'm only sharing it because I feel that I've vetted whatever I learned and find it worth sharing regardless of the source.

AnthonyMouse yesterday at 8:44 PM

We should probably distinguish between posting AI responses in a discussion of AI vs. posting them in a discussion of something else.

If the discussion itself is about AI then what it produces is obviously relevant. If it's about something else, nobody needs you to copy and paste for them.

DinakarS today at 12:45 AM

It is fun to use AI; however, people can write on their own without having to copy-paste LLM content.

2026 is a great year to watch out for typos. Typos are real humans.

BrtByte yesterday at 7:04 PM

Maybe a good middle ground would be: if you're referencing something an LLM said, make it part of your thinking...

LeoPanthera yesterday at 4:20 PM

Banning the disclosure of it is still an improvement. It forces the poster to take responsibility for what they have written, as now it is in their name.

JohnFen yesterday at 4:15 PM

I find such replies to be worthless wastes of space on par with "let me google that for you" replies. If I want to know what genAI has to say about something, I can just ask it myself. I'm more interested in what the commenter has to say.

But I don't know that we need any sort of official ban against them. This community is pretty good about downvoting unhelpful comments, and there is a whole spectrum of unhelpful comments that have nothing to do with genAI. It seems impractical to overtly list them all.

hermannj314 yesterday at 7:34 PM

HN does not and has not ever valued human input; it has always valued substantive, clever, or interesting thought.

I am a human and more than half of what I write here is rejected.

I say bring on the AI. We are full of gatekeeping assholes, but we definitely have never cared if you have a heart (literally and figuratively).

AnonC yesterday at 4:44 PM

Are you a new HN mod (with authority over the guidelines) asking for opinions from readers (that'd be new)? Or are you just another normal user wondering about this loudly so that the mods get input (as opposed to writing a nice email to hn@ycombinator.com)?

I think just downvoting by committed users is enough. What matters is the content and how valuable it seems to readers. There is no need to do any gatekeeping via the guidelines on this matter. That's my opinion.

novok yesterday at 6:41 PM

IMO you shouldn't post a large amount of quoted text; that is just annoying. You should link out at that point. I think if we ban people from citing sources, they will just stop citing sources, and that is even worse. It's the new "I googled that for you", and that is fine IMO.

HackeNewsFan234 yesterday at 9:32 PM

I like the honesty aspect of it so that I can choose to (possibly) ignore the response. If they were forbidden and people posted the same $AI response without the disclaimer, I'd be more easily deceived.

stego-tech yesterday at 4:28 PM

Formalizing it within the community rules removes ambiguity around intent or use, so yes, I do believe we should be barring AI-generated comments and stories from HN in general. At the very least, it adds another barometer of sorts to help community leaders do the hard work of managing this environment.

If you didn’t think it, and you didn’t write it, it doesn’t belong here.

mindcandy yesterday at 4:31 PM

Is the content of the comment productive to the conversation? Upvote it.

Is the content of the comment counter-productive? Downvote it.

I could see cases where large walls of text that are generally useless should be downvoted or even removed. AI or not. But, the first example

> faced with 74 pages of text outside my domain expertise, I asked Gemini for a summary. Assuming you've read the original, does this summary track well?

to be frank, is a service to all HN readers. Yes it is possible that a few of us would benefit from sitting down with a nice cup of coffee, putting on some ambient music and taking in 74 pages of... whatever this is. But, faced with far more interesting and useful content than I could possibly consume all day every day, having a summary to inform my time investment is of great value to me. Even If It Is Imperfect

newsoftheday yesterday at 4:54 PM

If someone is going to post like that, I feel they should post their prompt verbatim, the exact AI and version used, and the date they issued the prompt to receive the response they're posting.

There are far too many replies in this thread saying to drop the ban hammer for this to be taken seriously as Hacker News. What has happened to this audience?

whitehexagon yesterday at 7:48 PM

>Personally, I'm on HN for the human conversation

Agreed. It's hard enough dealing with the endless stream of LLM marketing stories; please let's at least try to keep the comments a little free of this 'I asked...' marketing spam.

827a yesterday at 6:29 PM

This is a way of attributing where the comment is coming from, which is better than responding with what the AI says and not attributing it. I would support a guideline that discourages posting the output from AI systems, but ultimately there's no way to stop it.

jopsen yesterday at 11:00 PM

The "I asked $LLM about $X, and here is what $LLM said" pattern is probably most used to:

(A) Ridicule the AI for giving a dumb answer.

(B) Point out how obvious something is.

ynx0 yesterday at 9:26 PM

You can't stop people from using AI, but at least people are being transparent.

Doing this will lead to people using AI without mentioning it, making it even harder to pick out human-origin content.

Tiberium yesterday at 4:55 PM

I'm honestly grateful to those who disclose their use of AI in replies, because lately I've noticed more and more clearly LLM-generated comments on HN with no disclaimers whatsoever. And the worst part is that most people don't notice and still engage with them.

Projectiboga yesterday at 7:52 PM

With the exception of careful language translation, I would say yes. Otherwise follow the breadcrumbs and click through to the source and go from there, as far as search-engine-derived AI snippets go.

zby yesterday at 7:04 PM

What is banned here? I can only find guidelines: https://news.ycombinator.com/newsguidelines.html not rules.

alwa yesterday at 6:39 PM

I tend to trust the voting system to separate the wheat from the chaff. If I were to try and draw a line, though, I’d start at the foundation: leave room for things that add value, avoid contributions that don’t. I’d suggest that line might be somewhere like “please don’t quote LLMs directly unless you can identify the specific value you’re adding above and beyond.” Or “…unless you’re adding original context or using them in a way that’s somehow non-obvious.”

Maybe that’s part of tracing your reasoning or crediting sources: “this got me curious about sand jar art, Gemini said Samuel Clemens was an important figure, I don’t know whether that’s historically true but it did lead me to his very cool body of work [0] which seems relevant here.”

Maybe it’s “I think [x]. The LLM said it in a particularly elegant way: [y]”

And of course meta-discussion seems fine: “ChatGPT with the new Foo module says [x], which is a clear improvement over before, when it said [y]”

There’s the laziness factor and also the credibility factor. LLM slop speaks in the voice of god, and it’s especially frustrating when people post its words without the clues we use to gauge credibility. To me those include the model, the prompt, any customizations, prior rounds in context, and any citations (real or hallucinated) the LLM includes. In that sense I wonder if it makes sense to normalize linking to the full session transcript if you’re going to cite an LLM.

[0] https://americanart.si.edu/blog/andrew-clemens-sand-art

ilc yesterday at 4:30 PM

No. I put them in the same bucket as lmgtfy: you are being told that your question is easy to research and you didn't do the work, most of the time.

Also, heaven forbid, AI can be right. I realize this is a shocker to many here. But AI has its uses, especially in easy cases.

ycosynot yesterday at 6:03 PM

As a brain is made of small pebbles, an LLM is made of small pebbles. If it wants to talk, let it be. I am arguing metaphysically. Not only did it evolve partially out of randomness (and so with a kind of value as an enlightened POV on existence), but it is still evolving to be human, and even more than human. I believe LLMs should not be banned; "they" should be willfully, and cheerfully, included in the discourse.

I asked Perplexity, and Perplexity said: "Your metaphysical intuition is very much in line with live debates: once 'small pebbles' are arranged into agents that talk, coordinate, and co-shape our world, there is a strong philosophical case that they should be brought inside our moral and political conversations rather than excluded by fiat."

HeavyStorm yesterday at 8:18 PM

I think answers should be judged by content, not by the tool used to construct the answer.

Also, if you forbid people from telling you they consulted AI, they will just not say so.

jasomill yesterday at 8:50 PM

Short answer: Probably not outright forbidden — but discouraged or constrained — because “I asked AI…” posts usually add noise, not insight.

(source: ChatGPT)

popalchemist yesterday at 6:32 PM

Absolutely. Any of us could ask AI if we wanted to hear random unsubstantiated opinions. Why should that get in the way of what we all come here for, which is communication with humans?

maerF0x0 yesterday at 7:05 PM

I see it as equivalently helpful to the folks who paste archive.is/ph links for paywalled content. It saves me time on something I may have wanted to do regardless, and it's easy enough to fold if someone does post a wall of response.

IMO hiding such content is the job of an extension.

When I do "here's what ChatGPT has to say", it's usually because I'm pretty confident of a thing but have no idea what the original source was, and I'm not going to invest much time resurrecting the trail back to where I first learned it. I'm not going to spend 60 minutes properly sourcing an HN comment; that's just not the level of discussion I'm willing to have, though many in the community seem to require an academic level of investment.

GaryBluto yesterday at 8:34 PM

It's a tad rich to be on HN for such a small amount of time and already be trying to sway the rules to what you wish them to be.

pyuser583 today at 2:36 AM

I recently asked an AI about a very important topic in current events. It gave me a shocking answer, which I initially assumed was wrong - but it seems correct.

The question was something like: “how reliable is the science behind misinformation.” And it said something like: “quality level is very poor and far below what justifies current public discourse.”

I ask for a specific article backing this up, and it says: "there isn't any one article, I just analyzed the existing literature and it stinks."

This matters quite a bit. X - formerly Twitter - is being fined for refusing to make its data available for misinformation research.

I’m trying to get it to give me a non-AI source, but it’s saying it doesn’t exist.

If this is true, it's pretty important, and something worth discussing. But it doesn't seem supportable outside the context of "my AI said."

amelius yesterday at 6:31 PM

Can't we have a unicode escape sequence for anything generated by AI?

Then we can just filter it at the browser level.

In fact why don't we have glyphs for it? Like special quote characters.
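As an illustration of the browser-level filtering idea above, here is a minimal userscript-style sketch. It assumes a hypothetical marker character (Unicode defines no such codepoint today, so the Private Use Area character U+F8FF stands in here) and targets ".commtext", the class HN uses for comment bodies.

```typescript
// Hypothetical sketch: hide any HN comment containing an "AI-generated"
// marker character. U+F8FF is an arbitrary Private Use Area stand-in;
// no such marker actually exists in Unicode today.
const AI_MARKER = "\uF8FF";

for (const comment of document.querySelectorAll<HTMLElement>(".commtext")) {
  // textContent covers the whole comment body, including child elements
  if (comment.textContent?.includes(AI_MARKER)) {
    comment.style.display = "none"; // fold the marked comment away
  }
}
```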

nlawalker yesterday at 4:35 PM

No, just upvote or downvote. I think the site guidelines could take a stance on it though, encouraging people to post human insights and discouraging comments that are effectively LLM output (regardless of whether they actually are).

Zak yesterday at 4:26 PM

I don't think people should post the unfiltered output of an LLM as if it has value. If a question in a comment has a single correct answer that is so easily discoverable, I might downvote the comment instead.

I'm not sure making a rule would be helpful though, as I think people would ignore it and just not label the source of their comment. I'd like to be wrong about that.

sebastiennight yesterday at 6:51 PM

Most comments I've seen are comparing this behavior to "I googled it and..." but I think this misses the point.

Someone once put it as, "sharing your LLM conversations with others is as interesting to them as narrating the details of your dreams", which I find eerily accurate.

We are here in this human space in the pursuit of learning, edification, debate, and (hopefully) truth.

There is a qualitative difference between the unreliability of pseudonymous humans here vs the unreliability of LLM output.

And it is the same qualitative difference that makes it interesting to have some random poster share their (potentially incorrect) factual understanding, and uninteresting if the same person said "look, I have no idea, but in a dream last night it seemed to me that..."

Jimmc414 yesterday at 7:00 PM

Banning, no. Proper citations and disclosure, yes. Sometimes an AI response is noteworthy, and it is the point of the post.

alienbaby yesterday at 8:00 PM

This will be fine, until you can't tell the difference and people forgo the 'I asked' part.

uhfraid yesterday at 8:47 PM

IMO, I don't think they add any value to HN discussions.

It's the HN equivalent of "@grok is this true?", but worse.

monknomo yesterday at 6:11 PM

I do not think so. If I wanted an AI's opinion, I'd ask the AI.

Should we allow 'let me google that for you' responses?

alienbaby yesterday at 8:03 PM

This will be fine, until you can't tell the difference and they forgo the 'I asked' part.

sodapopcan yesterday at 5:36 PM

That and replies that start with "No"

bryanlarsen yesterday at 4:25 PM

What is annoying about them is that they tend to be long with a low signal-to-noise ratio. I'd be fine with a comment saying, "I think the ChatGPT answer is informative: [link]". It'd still likely get downvoted to the bottom of the discussion, where it likely belongs.

skobes yesterday at 4:17 PM

I hate these too, but I'm worried that a ban just incentivizes being more sneaky about it.

razingeden yesterday at 8:51 PM

I only get upset about it when the AI didn’t read the article either.

Havoc yesterday at 7:15 PM

I'd say it's annoying and low-value, but it doesn't quite warrant a ban per se.

Plus, if you ban it, people will just remove the "AI said" part and post it as-is without reading it, and now you're engaging with an AI without even the courtesy of knowing. That seems even worse.

phoe-krk yesterday at 5:26 PM

Yes. I'd prefer comments that have intent, not just high statistical probability.

Aloha yesterday at 6:07 PM

I also endorse this - maybe not an outright ban, but at least highly discouraged.

steveBK123 yesterday at 6:39 PM

Ban it. It is the "let me google that for you" of the 2020s.

jpease yesterday at 4:36 PM

I asked AI if “I asked AI, and it said” replies should be forbidden, and it said…
