
delusional | today at 4:33 PM | 0 replies

> Some people point at LLMs confabulating, as if this wasn’t something humans are already widely known for doing.

Are you seriously arguing that AI "hallucinations" are comparable to, and interchangeable with, the mistakes, omissions, and lies made by humans?

You understand that calling AI errors "hallucinations" or "confabulations" is a metaphor, borrowed to relate them to human behavior? The technical term would be "mis-prediction", which is not something humans do when talking, because we don't predict words; we communicate with intent.
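To make the "mis-prediction" framing concrete, here is a toy sketch (not how any production LLM works; the corpus, names, and bigram scheme are invented for illustration): generation is just emitting the statistically most likely next token, so a fluent but false continuation is a mis-prediction, not a lie told with intent.

```python
# Toy illustration of generation as next-token prediction.
# A "mis-prediction" is the argmax of learned statistics landing
# on a continuation that happens to be false.
from collections import Counter, defaultdict

corpus = ("the capital of france is paris . "
          "the capital of france is paris . "
          "the capital of spain is madrid .")
tokens = corpus.split()

# Count bigrams: how often each token follows each other token.
follows = defaultdict(Counter)
for prev, nxt in zip(tokens, tokens[1:]):
    follows[prev][nxt] += 1

def predict_next(token):
    """Return the statistically most likely next token."""
    return follows[token].most_common(1)[0][0]

def generate(start, n):
    out = [start]
    for _ in range(n):
        out.append(predict_next(out[-1]))
    return " ".join(out)

# "paris" follows "is" more often than "madrid" does, so the model
# confidently produces a fluent, wrong sentence: "spain is paris".
print(generate("spain", 2))
```

The model has no notion of truth or intent; it only ranks continuations by frequency, which is the sense in which its errors differ in kind from human lies or mistakes.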