Hacker News

1659447091 · yesterday at 12:33 AM

> the human brain is much better at hallucinating than any SOTA LLM

Aren't the models trained on human-generated content, with human intervention? If humans hallucinated that content in the first place, and LLMs then add even a slight amount of their own hallucination on top of that fallible human content, wouldn't the LLMs end up hallucinating at least slightly more than humans do? Or am I missing something, where LLMs somehow correct the original human hallucinations and thus produce less hallucinated content?