
simianwords last Saturday at 3:33 PM (5 replies)

No, they don’t give false information often.


Replies

ziml77 last Saturday at 3:49 PM

They do. To the point where I'm getting absolutely furious at work at the number of times shit's gotten fucked up, and when I ask how it went wrong the response starts with "ChatGPT said".

ipaddr last Saturday at 3:47 PM

Do you double-check every fact, or are you relying on being an expert yourself on the topics you ask an LLM about? If you are an expert on a topic, you probably aren't asking an LLM anyhow.

It reminds me of someone who reads a newspaper article about a topic they know and says it's mostly incorrect, but then reads the rest of the paper and accepts those articles as fact.

tempest_ last Saturday at 3:58 PM

I have them make stuff up constantly for smaller Rust libraries that are newish or don't get a lot of use.

mythrwy last Saturday at 3:49 PM

"Often" is relative but they do give false information. Perhaps of greater concern is their confirmation bias.

That being said, I do agree with your general point. These tools are useful for exploring topics and answers; we just need to stay realistic about their current accuracy and bias (eagerness to agree).

mythrwy last Saturday at 3:59 PM

I just asked ChatGPT:

"do llms give wrong information often?"

"Yes. Large language models produce incorrect information at a non-trivial rate, and the rate is highly task-dependent."

But wait, it could be lying, and they actually don't give false information often! But if that were the case, the false answer would itself verify that they give false information at a non-trivial rate, because I don't ask it that much stuff.