I'm not saying it needs to be perfect, but the guy in this article is putting a lot of blind faith in an algorithm that's been shown time and time again to make things up.
The reason I have become "bearish" on AI is that I see people repeatedly falling into the trap of believing LLMs are intelligent and actively thinking, rather than just very finely tuned random noise. We should pay more attention to the A in AI.
> putting a lot of blind faith in an algorithm that's proven time and time again to make things up
Don't be ridiculous. Our entire system of criminal justice relies HEAVILY on the eyewitness testimony of humans, which has been demonstrated time and again to be entirely unreliable. Innocents routinely rot in prison and criminals routinely go free because the human brain is much better at hallucinating than any SOTA LLM.
I can think of no institution where fidelity of information matters more than criminal justice, and yet we accept extreme levels of hallucination even there.
This argument is tired, played out, and laughable on its face. Human honesty and memory reliability are a disgrace, and if you wish to score points against LLMs, comparing their hallucination rates to those of humans is likely to lead to exactly the opposite of the conclusion you intend others to draw.