Hacker News

flkiwi · yesterday at 6:08 PM

I read comments citing AI as essentially equivalent to "I ran a $searchengine search and here is the most relevant result." It's not equivalent, but it has one identical issue and one new-ish one:

1. If I wanted to run a web search, I would have done so

2. People behave as if they believe AI results are authoritative, which they are not

On the other hand, a ban could result in a technical violation in a conversation about AI responses where providing examples of those responses is entirely appropriate.

I feel like we're having a larger conversation here, one where we are watching etiquette evolve in realtime. This is analogous to "Should we ban people from wearing bluetooth headsets in the coffee shop?" in the 00s: people are demonstrating a new behavior that is disrupting social norms but the actual violation is really that the person looks like a dork. To that end, I'd probably be more for public shaming, potentially a clear "we aren't banning it but please don't be an AI goober and don't just regurgitate AI output", more than I would support a ban.


Replies

pyrale · yesterday at 8:11 PM

> "I ran a $searchengine search and here is the most relevant result."

Except it's "...and here is the first result it gave me, I didn't bother looking further".

giancarlostoro · yesterday at 7:08 PM

> 2. People behave as if they believe AI results are authoritative, which they are not

Web search has the same issue. If you don't validate it, you wind up with the same problem.

9rx · yesterday at 8:55 PM

> people are demonstrating a new behavior that is disrupting social norms

The social norm has always been that you write comments on the internet for yourself, not others. Nothing really changes if you now find enjoyment in adding AI output to your work. Whatever floats your boat, as they say.

WorldPeas · yesterday at 10:03 PM

I think it's closer to the "glasshole" trend, where social pressure actually worked to make people feel less comfortable about using it publicly. This is an entirely vibes-based judgement, but presenting unaltered AI speech within your own feels more imposing and authoritative (as wagging around a potentially-on camera did then). This being the norm on other platforms has degraded my willingness to engage with potentially infinite and meaningless streams of bloviation rather than the (usually) concise and engaging writing of humans.

icoder · yesterday at 9:05 PM

Totally agree if the AI or search results are a (relatively) direct answer to the question.

But what if the AI is used to build up a(n otherwise) genuine human response, like: 'Perhaps the reason behind this is such-and-such, (a quick google)|($AI) suggests that indeed it is common for blah to be blah, so...'

munchbunny · yesterday at 8:58 PM

> 2. People behave as if they believe AI results are authoritative, which they are not

I'm not so sure they actually believe the results are authoritative; I think they're being lazy and hoping you will believe it.

charcircuit · yesterday at 6:29 PM

> If I wanted to run a web search, I would have done so

While true, many times people don't want to do this because they are lazy. If they had just opened up ChatGPT instead, they could have gotten their answer instantly. It results in a waste of everyone's time.

Terr_ · yesterday at 6:47 PM

Agreed on the similar-but-worse comparison to the laziest possible web searches of yesteryear.

To introspect a bit, I think the rote regurgitation aspect is the lesser component. It's just rude in a conventional way that isn't as threatening. It's the implied truth/authority of the Great Oracular Machine which feels more dangerous and disgusting.

ozgung · yesterday at 9:31 PM

I think doing your research using a search engine/AI/books and paraphrasing your findings is always valuable. And you should cite your sources when you do so, e.g. "ChatGPT says that…"

> 1. If I wanted to run a web search, I would have done so

Not everyone has access to the latest Pro models. If AI has something to add to the discussion and a user does that for me, I think it has some value.

> 2. People behave as if they believe AI results are authoritative, which they are not

AI is not authoritative in 2025. We don’t know what will happen in 2026. We are at the initial transition stage for a new technology. Both the capabilities of AI and people’s opinions will change rapidly.

Any strict rule/ban would be very premature and shortsighted at this point.