Hacker News

Sharlin · today at 12:10 PM · 7 replies

The problem is that people seriously believe that whatever GPT tells them must be true, because… I don't even know why. Just because it sounds self-confident and authoritative? Because computers are supposed to not make mistakes? Because talking computers in science fiction don't make mistakes like that? The fact that LLMs ended up with this particular failure mode, out of all possible failure modes, is incredibly unfortunate and detrimental to society.


Replies

pera · today at 12:53 PM

Last year I had to deal with a contractor who sincerely believed that a very popular library had some issue because it was erroring when parsing ChatGPT-generated JSON... I'm still shocked; this is seriously scary.
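
(A plausible explanation for what happened: LLMs often emit almost-JSON, e.g. with a trailing comma, which any strict parser rightly rejects. A minimal Python sketch, with a made-up llm_output string standing in for the model's response:)

    import json

    # Hypothetical example of the kind of almost-JSON an LLM might emit:
    # a trailing comma after the last array element, which strict JSON forbids.
    llm_output = '{"name": "example", "values": [1, 2, 3,]}'

    try:
        data = json.loads(llm_output)
    except json.JSONDecodeError as e:
        # The parser raising here is correct behavior; the bug is in the
        # input, not the library.
        print(f"Parse error: {e}")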

Suzuran · today at 1:14 PM

My boss says it's because they are backed by trillion-dollar companies, and the companies would face dire legal threats if they did not ensure the correctness of AI output.

tveita · today at 2:45 PM

I think people's attitudes would be better calibrated to reality if LLM providers were legally required to call their service "a random drunk guy on the subway".

E.g.

"A random drunk guy on the subway suggested that this wouldn't be a problem if we were running the latest SOL server version" "Huh, I guess that's worth testing"

pjc50 · today at 2:49 PM

Billions of dollars of marketing have been spent to get them to believe that, in order to justify the trillions in investment. Why would you invest a trillion dollars in a machine that occasionally gives random wrong answers?

anon_anon12 · today at 12:55 PM

People's trust in LLMs imo stems from a lack of awareness of AI hallucination. Hallucination benchmarks are often hidden or glossed over hastily in marketing videos.

Cthulhu_ · today at 12:52 PM

I don't remember exactly who said it, but at one point I read a good take: people trust these chatbots because there are big companies and billions of dollars behind them, and surely big companies test and verify their stuff thoroughly?

But (as someone else described), GPTs and other current-day LLMs are probabilistic, and 99% of what they produce seems plausible enough.

pousada · today at 12:38 PM

I think in science fiction it’s one of the most common themes for the talking computer to be utterly, horribly wrong, often resulting in the complete annihilation of all life on Earth.

Unless I have been reading very different science fiction, I think it’s definitely not that.

I think it’s more the confidence and seeming plausibility of LLM answers.
