Hacker News

threatofrain · today at 7:03 AM · 2 replies

> I think in this case the expectation is set differently up front, with a "ChatGPT can make mistakes" footer on every chat.

If I keep telling you I suck at math while getting smarter every few months, eventually you're just going to introduce me as the friend who is too unconfident but is super smart at math. For many people LLMs are smarter than any friend they know, especially at K-12 level.

You can make the warning more shrill but it'll only worsen this dynamic and be interpreted as routine corporate language. If you don't want people to listen to your math / medical / legal advice, then you've got to stop giving decent advice. You have to cut the incentive off at the roots.

This effect may force companies to simply ban chatbots from certain conversations.


Replies

EgregiousCube · today at 5:26 PM

The "at math" is the important part here - I've met more than a few people who are super smart about math but significantly less smart about drugs.

I don't think it's good policy to forcibly muzzle their drug opinions just because of their good arithmetic skills. Absent professional licensing standards, the burden is on the listener to decide where a resource is strong and where it is weak.

xethos · today at 8:29 AM

Alternately, Google claimed Gmail was in public beta for years. People did not treat it like a public beta that could die with no warning, despite being explicitly told to by a company that, in recent years, has developed a reputation for doing that exact thing.