Hacker News

rectang · yesterday at 6:11 PM

How does "outperform" translate to the propensity of an LLM to hallucinate?


Replies

operatingthetan · yesterday at 6:13 PM

There seems to be a mass delusion about how capable SOTA models actually are. That's my only explanation for the gap between how poorly I find them performing on basic knowledge tasks and how others describe their prowess.
