
aydyn · yesterday at 5:46 PM

This seems unnecessarily pedantic. We know how the system works; we just use "hallucination" colloquially when the system produces wrong output.


Replies

leptons · yesterday at 6:21 PM

If the information it gives is wrong but grammatically correct, then the "AI" has fulfilled its purpose, so it isn't really "wrong output": producing fluent text is what the system was designed to do. The problem is when people use "AI" and expect it to produce truthful responses, which it was never designed to do.

cess11 · yesterday at 7:02 PM

Other people do not know how it works, hence the danger, and the responsibility of not giving them the wrong impression of what they're dealing with.
