Hacker News

arevno · last Wednesday at 10:48 PM · 1 reply

> putting a lot of blind faith in an algorithm that's proven time and time again to make things up

Don't be ridiculous. Our entire system of criminal justice relies HEAVILY on the eyewitness testimony of humans, which has been demonstrated time and again to be entirely unreliable. Innocents routinely rot in prison and criminals routinely go free because the human brain is much better at hallucinating than any SOTA LLM.

I can think of no institution with a greater need for fidelity of information than criminal justice, and yet we accept extreme levels of hallucination even there.

This argument is tired, played out, and laughable on its face. Human honesty and memory reliability are a disgrace, and if you wish to score points against LLMs, comparing their hallucination rates to those of humans will likely lead people to exactly the opposite of the conclusion you intend them to draw.


Replies

1659447091 · yesterday at 12:33 AM

> the human brain is much better at hallucinating than any SOTA LLM

Aren't the models trained on human content and human intervention? If humans hallucinated that content in the first place, and an LLM then adds even a small amount of its own hallucination on top of that fallible human material, wouldn't the LLM's hallucination rate end up at least slightly higher than the humans'? Or am I missing something here where LLMs somehow correct the original human hallucinations and thus produce less hallucinated content?
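A rough sketch of that compounding intuition (the numbers below are made-up assumptions, as is the independence of the two error sources): if the human-written training data is wrong at rate p and the model independently adds its own errors at rate q, the combined rate cannot fall below p unless the model actively corrects its sources.

    # Hypothetical illustration of the compounding argument above.
    # Assumes source errors and model errors are independent; the rates are made up.
    def combined_error_rate(p: float, q: float) -> float:
        """A claim comes out right only if both the human source and the model are right."""
        return 1 - (1 - p) * (1 - q)

    human_source_error = 0.10  # assumed rate of errors already in the human-written data
    model_added_error = 0.02   # assumed rate of new errors the model introduces on top

    print(combined_error_rate(human_source_error, model_added_error))  # 0.118, i.e. above 0.10

Under those assumptions the model can only match or exceed its sources' error rate; whether real models correct rather than compound their sources is exactly the open question here.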