Hacker News

smiley1437 · 07/31/2025 · 8 replies

> people aren't aware of how wrong they can be, and the errors take effort and knowledge to notice.

I have friends who are highly educated professionals (PhDs, MDs) who just assume that AI/LLMs make no mistakes.

They were shocked that it's possible for hallucinations to occur. I wonder if there's a halo effect where the perfect grammar, structure, and confidence of LLM output causes some users to assume expertise?


Replies

bayindirh · 07/31/2025

Computers are always touted as deterministic machines. You can't argue with a compiler or Excel's formula editor.

AI, in all its glory, is seen as an extension of that. A deterministic thing which is meticulously crafted to provide an undisputed truth, and it can't make mistakes because computers are deterministic machines.

The idea of LLMs being networks of weights plus some randomness is an abstraction that is at once too vague and too complicated for most people. Also, companies tend to say this part very quietly, so when people finally read the fine print, they are shocked.

viccis · 07/31/2025

> I wonder if there's a halo effect where the perfect grammar, structure, and confidence of LLM output causes some users to assume expertise?

I think it's just that LLMs model the probability distribution of token sequences so well that the one thing they are nearly infallible at is producing convincing results. Oftentimes the correct result is also the most convincing one, but other times what the model scores as most convincing just happens to be most convincing to a human as well, regardless of correctness.
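A toy sketch of what "modeling a probability distribution over tokens" means (the tokens and scores here are made up for illustration, not real model output): the model assigns each candidate token a score, softmax turns scores into probabilities, and the reply is sampled from that distribution. Nothing in this loop checks correctness, only plausibility.

```python
import math
import random

# Hypothetical next-token scores (logits). A fluent but wrong answer
# can outrank the correct one -- the sampler cannot tell the difference.
logits = {
    "1962": 2.5,   # plausible-sounding continuation
    "1961": 2.3,   # suppose this were the factually correct answer
    "maybe": 0.1,
}

def softmax(scores):
    """Convert raw scores into a probability distribution summing to 1."""
    m = max(scores.values())                       # subtract max for stability
    exps = {tok: math.exp(s - m) for tok, s in scores.items()}
    total = sum(exps.values())
    return {tok: e / total for tok, e in exps.items()}

probs = softmax(logits)

# The model samples in proportion to probability, not truth:
choice = random.choices(list(probs), weights=list(probs.values()), k=1)[0]
print(probs)
print("sampled token:", choice)
```

Because the draw is weighted random, repeated runs with the same "prompt" can give different answers, and the wrong-but-fluent token wins most of the time.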

throwawayoldie · 07/31/2025

My experience, speaking over a scale of decades, is that most people, even very smart and well-educated ones, don't know a damn thing about how computers work and aren't interested in learning. What we're seeing now is just one unfortunate consequence of that.

(To be fair, in many cases, I'm not terribly interested in learning the details of their field.)

yifanl · 07/31/2025

If I weren't familiar with the latest in computer tech, I would also assume LLMs never make mistakes, given the excited praise they've received over the last three years.

emporas · 07/31/2025

It is only in the last century or so that statistical methods were invented and widely applied. It is possible for many people to be highly competent at what they do while being totally ignorant of statistics.

There are lies, statistics, and goddamn hallucinations.

rplnt · 07/31/2025

Have they never used it? The majority of the responses that I can verify are wrong. Sometimes they're outright nonsense, sometimes believably so. That holds whether it's general knowledge or something where deeper expertise is required.

jasonjayr · 07/31/2025

I worry that the way the models "speak" to users will cause them to drop their 'filters' about what to trust and not trust.

We are barely managing modern media literacy, and now we have machines that talk like 'trusted' face-to-face humans and can be "tuned" to suggest specific products or take on whatever tone the owner/operator of the system wants.

dsjoerg · 07/31/2025

> I have friends who are highly educated professionals (PhDs, MDs) who just assume that AI/LLMs make no mistakes.

Highly educated professionals in my experience are often very bad at applied epistemology -- they have no idea what they do and don't know.