Hacker News

Scarblac | last Saturday at 10:59 PM

LLM hallucinations aren't errors.

LLMs generate text based on the weights in a model, and some of that text happens to consist of correct statements about the world. That doesn't mean the rest was generated incorrectly.


Replies

jvanderbot | last Saturday at 11:27 PM

You know the difference between verification and validation?

You're describing a lack of verification errors (the system works as designed/built; the equations are correct).

GP is describing a validation error (the system isn't doing what we want / require / expect).