While we will never be able to get folks to stop using AI to "help" them shape their replies, it's super annoying when folks think that by using AI they're doing others a favor. If I wanted to know what an AI thinks, I'd ask it. I'm here because I want to know what other people think.
At this point, I make value judgments when folks use AI for their writing, and will continue to do so.
I agree with this sentiment.
When I hear "ChatGPT says..." on some topic at work, I interpret that as "Let me google that for you, only I neither care nor respect you enough to bother confirming that that answer is correct."
I am just sad that I can no longer use em dashes without people immediately assuming what I wrote was AI. :(
I think there's a difference between well done (and usually unnoticeable) and poorly done (and insulting). I don't agree that the two are always the same, but I think lots of people believe they're doing the former without being aware enough to realize they're doing the latter.
It's at least a factor in why I value HN commentary so much less than I used to.
Aside:
When someone says: "Source?", is that kinda the same thing?
Like, I'm just going to google the thing the person is asking for, same as they can.
Should asking for sources be banned too?
Personally, I think not. HN is better, I feel, when people can challenge the assertions of others and ask for the proof, even though that proof is easy enough to find for all parties.
I think what's important here is to reduce harm, even if the result is still a little annoying. If you ban any comment that admits it's LLM-written, you'll just have people posting the same thing without the disclaimer...
Yes, comments of this nature are bad, annoying, and should be downvoted as they have minimal original thought, take minimal effort, and are often directly inaccurate. I'd still rather they have a disclaimer to make it easier to identify them!
Further, entire articles submitted to HN are clearly written by an LLM yet get over a hundred upvotes before people notice, whether there's a disclaimer or not. These do not get caught quickly, and someone clicking the link will likely generate ad revenue that incentivizes people to keep doing it.
LLM comments without a disclaimer should be avoided, and submitted articles written by an LLM should be flagged ASAP to avoid abuse, since by the time someone clicks the link it's too late.
This is the only reasonable take.
It's not worth polluting human-only spaces, particularly top tier ones like HN, with generated content--even when it's accurate.
Luckily I've not found a lot of that here, and what I do find has usually been downvoted plenty.
Maybe we could have a new flag option, which became visible to everyone with enough "AI" votes so you could skip reading it.
Agree and I think it might also be useful to have that be grounds for a shadowban if we start seeing this getting out of control. I'm not interested, even slightly, in what an LLM has to say about a thread on HN. If I see an account posting an obvious LLM copy/paste, I'm not interested in seeing anything from that account either. Maybe a warning on the first offense is fair, but it should not be tolerated or this site will just drown in the slop.
Yeah, like if I wanted to know what a particular AI says, I'd have asked it.
There will be many cases you won't even notice. When people know how to use AI to help with their writing, it's not noticeable.
I actually disagree, in certain cases. Just today I saw:
https://news.ycombinator.com/item?id=46204895
when it had only two comments. One of them was the Gemini summary, which had already been massively downvoted. I couldn't make heads or tails of the paper posted, and probably neither could 99% of other HNers. I was extremely happy to see a short AI summary. I was on my phone and it's not easy to paste a PDF into an LLM.
When something highly technical is posted to HN that most people don't have the background to interpret, a summary can be extremely valuable, and almost nobody is posting human-written summaries together with their links.
If I ask someone a question in the comments, yes it seems rude for someone to paste back an LLM answer. But for something dense and technical, an LLM summary of the post can be extremely helpful. Often just as helpful as the https://archive.today... links that are frequently the top comment.
HN is a mix of personal experience, weird edge cases, and even the occasional hot take. That's what makes HN valuable.
It's kinda funny how internet culture once had "lmgtfy" links for people who asked questions instead of just searching Google.
But now people are vomiting ChatGPT responses instead of linking to ChatGPT.
And yet people ask for sources all the time. "I don't care what you think, show me what someone else thinks".
In a similar vein, I'm sick and tired of people telling others to go google stuff.
The point of asking on a public forum is to get socially relatable human answers.
While I don't disagree with the general sentiment, a black-and-white ban leaves no room for nuance.
I think it's a very valid question to ask the AI "which coding language is most suitable for you to use and why", or other similar questions.
I strongly disagree. When I post something that AI wrote, I do it because it explains my thoughts better than I can: it digs deeper and finds support for intuitions that I cannot articulate nicely. I quote the AI because I feel this is fair; if you ban this, you would just lose the information that it was generated.
I strongly agree with this sentiment and I feel the same way.
The one exception for me, though, is when non-native English speakers want to participate in an English-language discussion. LLMs produce by far the most natural-sounding translations nowadays, but they imbue that "AI style" onto their output. I'm not sure what the solution is here, because it's great for non-native speakers to be able to participate, but I find myself discarding any POV that was obviously expressed with AI.