Hacker News

simianwords · last Saturday at 3:08 PM

You are exaggerating. LLMs simply don’t hallucinate all that often, especially ChatGPT.

I really hate comments such as yours because anyone who has used ChatGPT in these contexts would know that it is pretty accurate and safe. People can also generally be trusted to tell good advice from bad. They are smart like that.

We should be encouraging thoughtful ChatGPT use instead of showing fake concern at each opportunity.

Your comment, like many others, just tries to signal pessimism as a virtue and has very little bearing on reality.


Replies

avalys · last Saturday at 4:39 PM

All we can do is share anecdotes here, but I have found ChatGPT to be confidently incorrect about important details in nearly every question I ask about a complex topic.

Legal questions, questions about AWS services, products I want to buy, the history of a specific field, so many things.

It gives answers that do a really good job of simulating what a person who knows the topic would say. But details are wrong everywhere, often in ways that completely change the relevant conclusion.

ipaddr · last Saturday at 3:31 PM

LLMs give false information often. Your ability to catch incorrect facts is limited by your own knowledge and by your willingness to do independent research.

"LLMs are accurate on everything you don't know, but factually incorrect on things you're an expert in" is a common observation for a reason.
