
celltalk 02/19/2025

I do hallucinate a better future as well.


Replies

coherentpony 02/19/2025

It bothers me that the word 'hallucinate' is used to describe when the output of a machine learning model is wrong.

In other fields, when models are wrong, the discussion is about 'errors': how large they are, their structural nature, possible bounds, and so forth. But when it's AI, it's a 'hallucination'. Almost as if the thing is feeling a bit poorly and just needs to rest and take some fever-reducer before being correct again.

It bothers me. Probably more than it should, but it does.
