Nah, they've definitely reduced massively. I suspect that's just because as models get more powerful, their answers are more likely to be true rather than hallucinated.
I don't think anyone has found any new techniques to prevent them. But maybe we don't need any if models just get so good that they naturally don't hallucinate much.