Hacker News

Ask HN: Should "I asked $AI, and it said" replies be forbidden in HN guidelines?

869 points by embedding-shape, yesterday at 4:02 PM | 432 comments

As various LLMs become more and more popular, so do comments like "I asked Gemini, and Gemini said ...".

While the guidelines were written (and iterated on) in a different time, it seems like it might be time to have a discussion about whether those sorts of comments should be welcome on HN.

Some examples:

- https://news.ycombinator.com/item?id=46164360

- https://news.ycombinator.com/item?id=46200460

- https://news.ycombinator.com/item?id=46080064

Personally, I'm on HN for the human conversation, and large LLM-generated texts just get in the way of reading real text from real humans (assumed, at least).

What do you think? Should responses that basically boil down to "I asked $LLM about $X, and here is what $LLM said:" be allowed on HN, with the guidelines updated to state that people shouldn't critique them (similar to other current guidelines)? Or should a new guideline be added asking people to refrain from copy-pasting large LLM responses into the comments? Or something else entirely?


Comments

63stack, yesterday at 4:22 PM

Yes

0x00cl, yesterday at 4:23 PM

This is what DeepSeek said:

> 1. Existing guidelines already handle low-value content. If an AI reply is shallow or off-topic, it gets downvoted or flagged.
>
> 2. Transparency is good. Explicitly citing an AI is better than users passing off its output as their own, which a ban might encourage.
>
> 3. The community can self-regulate. We don't need a new rule for every type of low-effort content.
>
> The issue is low effort, not the tool used. Let downvotes handle it.

Kim_Bruning, yesterday at 7:54 PM

But what [if the llms generate] constructive and helpful comments? https://xkcd.com/810/

For obvious(?) reasons I won't point to some recent comments that I suspect, but they were kind and gentle in the way that Opus 4.5 can be at times; encouraging humans to be good with each other.

I think the rules should be similar to bot rules I saw on wikipedia. It ought to be ok to USE an AI in the process of making a comment, but the comment needs to be 'owned' by the human/the account posting it.

Eg. if it's a helpful comment, it should be upvoted. If it's not helpful, downvoted; and with a little luck people will be encouraged/discouraged from using AI in inappropriate ways.

"I asked gemini, and gemini said..." is probably the wrong format, but if it's otherwise useful (or not), just vote it accordingly?

cdelsolar, yesterday at 7:10 PM

nah

ok123456, yesterday at 6:32 PM

yes

WithinReason, yesterday at 6:42 PM

That's what the upvote/downvote system is for.

mistrial9, yesterday at 4:32 PM

the system of long-lived nicks on YNews is intended to build a mild and flexible reputation system. This is valuable for complex topics, and to notice zealots, among other things. The feeling while reading that it is a community of peers is important.

AI-LLM replies break all of these things. AI-LLM replies must be declared as such, for certain IMHO. It seems desirable to have off-page links for (inevitable) lengthy reply content.

This is an existential change for online communications. Many smart people here have predicted it and acted on it already. It is certainly trending hard for the foreseeable future.

WesolyKubeczek, yesterday at 4:35 PM

Yes. If I wanted an LLM’s opinion, I would have asked it myself.

pembrook, yesterday at 4:33 PM

No, this is not a good rule.

What AI regurgitates about a topic is often more interesting and fact/data-based than the emotionally-driven human pessimists spewing constant cynicism on HN, so in fact I much prefer having more rational AI responses added in as context within a conversation.

tehwebguy, yesterday at 4:22 PM

It should be allowed and downvoted

mrguyorama, yesterday at 6:33 PM

Umm, just to be clear;

HN is not actually a democracy. The rules are not voted on. They are set by the people who own and run HN.

Please tell me what you think those people think of this question.

syockit, yesterday at 4:18 PM

You can add the guideline, but then people would skip the "I asked" part and post the answer straight away. Apart from the obvious LLMesque structure of most of those bot answers, how could you tell if one has crafted the answer so much that it looks like a genuine human answer?

Obligatory xkcd https://xkcd.com/810/

legohead, yesterday at 5:56 PM

Lots of old man yelling at clouds energy in here.

This is new territory, you don't ban it, you adapt with it.

slowmovintarget, today at 12:46 AM

While I'd certainly prefer raw human authorship on a forum like this, I can't help but think this is the wrong question. The labeling up front appears to be merely a disclosure style. That is, commenters say that as a way of notifying the reader that they used an LLM to arrive at the answer (at least here on HN) rather than citing the LLM as an authority.

"Banning" the comment syntax would merely ban the form of notification. People are going to look stuff up with an LLM. It's 2025; that's what we do instead of search these days. Just like we used to comment "Well Google says..." or "According to Alta Vista..."

Proscribing quoting an LLM is a losing proposition. Commenters will just omit disclosure.

I'd lean toward officially ignoring it, or alternatively ask that disclosure take on less conversational form. For example, use quote syntax and cite the LLM. e.g.:

> Blah blah slop slop slop

-- ChatGippity

Razengan, yesterday at 10:55 PM

Fighting the zeitgeist never works out. The world's gonna move on whether you move on with it or not.

I for one would love to have summary executions for anyone who says that Hello-Fellow-Kids cringe pushed on us by middle-aged squares: "vibe"

dominotw, yesterday at 4:24 PM

i asked chatgpt and it said no its not a good idea to ban

stocksinsmocks, yesterday at 9:16 PM

I already assume everything, and I mean everything, that I read in any comment section, whether here or sewers like Reddit, X, or one of the many Twitter-but-Communist clones, is either:

1. Paid marketing (tech stacks, political hackery, Rust evangelism)

2. Some sociopath talking his own book

3. Someone who spouts off about things he doesn't know about (see: this post's author)

The internet of real people died decades ago and we can only wander in the polished megalithic ruins of that enlightened age.

bjourne, yesterday at 8:34 PM

Yes, please. LLMs are poisoning all online conversations everywhere. It's a global plague.

ben_w, yesterday at 4:19 PM

Depends on the context.

I find myself downvoting (flagging) them when I see them as submissions, and I can't think of any examples where they were good submission content; but for comments? There's enough discussion where the AI is the subject itself and therefore it's genuinely relevant what the AI says.

Then there's stuff like this, which I'd not seen myself before seeing your question, but I'd say asking people here whether an AI-generated TLDR of a 74 (75?) page PDF is correct is a perfectly valid and sensible use: https://news.ycombinator.com/item?id=46164360

hooverd, yesterday at 6:16 PM

Yes please. I don't care if somebody did their own research via one, but it's just so low effort.

renewiltord, yesterday at 5:09 PM

[dead]

ruined, yesterday at 4:29 PM

you got a downvote button

leephillips, yesterday at 4:51 PM

Posting this kind of slop should be a banning offense. Also: https://hn-ai.org/

moomoo11, yesterday at 4:25 PM

Honestly, I judge people pretty harshly. I ask people a question in honest good faith. If they're trying to help me out and genuinely care, and they use AI, fine.

But most of the time it’s like they were bothered that I asked and copy paste what an AI said.

Pretty easy. Just add their name to my “GFY” list and move on in my life.

createaccount99, yesterday at 5:29 PM

Forbidden? They should be mandatory.

ekjhgkejhgk, yesterday at 4:22 PM

I don't think they should be banned, I think they should be encouraged: I'm always appreciative when people who can't think for themselves openly identify themselves so that it costs me less effort to spot them.

cm2012, yesterday at 5:51 PM

These comments are in the top 10% of usefulness of all comments in those threads. Clear, legible information that is easy to read and relevant. Keep!