
IshKebab · last Wednesday at 6:20 PM

Nah, they have definitely reduced massively. I suspect that's just because as models get more powerful, their answers are more likely to be true rather than hallucinations.

I don't think anyone has found any new techniques to prevent them. But maybe we don't need that anyway if models just get so good that they naturally don't hallucinate much.


Replies

shaky-carrousel · last Wednesday at 6:38 PM

That's because they're harder to spot, not because there are fewer. In my field I still see the same amount; they're just not as egregious.

bigstrat2003 · last Wednesday at 6:57 PM

They haven't reduced one bit in my experience.