
harshreality · yesterday at 8:34 PM

Where are you seeing people being told that AI is infallible? AI is being hyped to the moon, but "infallible" is not one of the claims.

To the extent people trust AI to be infallible, it's just laziness and rapport (AI is rarely, if ever, rude without prompting, nor does it criticize extensive question-asking as many humans would; it's the quintessential enabler[1]) that causes people to assume that because it's useful and helpful for so many things, it'll be right about everything.

The models all carry disclaimers stating the opposite. People just gradually lose sight of that.

[1] This might be the nature of LLMs, or it might be by design, similar to social media slop driving engagement. It's in AI companies' interest to have people buying subscriptions to talk with AIs more. If AI goes meta and critiques the user (except in more serious cases like harm to self or others, or specific kinds of cultural wrongthink), that's bad for business.


Replies

jmalicki · yesterday at 8:37 PM

> Where are you seeing people being told that AI is infallible? AI is being hyped to the moon, but "infallible" is not one of the claims.

I see all kinds of people being told that AI-based software for detecting AI-generated writing is infallible!

You want to make sure people aren't using fallible AI? Use our AI to detect AI. What could possibly go wrong?

latexr · yesterday at 10:03 PM

> To the extent people trust AI to be infallible, it's just laziness and rapport (…) that causes people to assume that because it's useful and helpful for so many things, it'll be right about everything.

Why it happens is secondary to the fact that it does.

> The models all have disclaimers that state the inverse. People just gradually lose sight of that.

Those disclaimers are barely effective, if at all, and everyone knows it, including the ones putting them there.

https://www.youtube.com/watch?v=Xj4aRhHJOWU