People's trust in LLMs imo stems from a lack of awareness that AI hallucinates. Hallucination benchmarks are often buried or mentioned only in passing in marketing videos.
I'd go further: it's more accurate to say that LLMs only hallucinate. All the text they produce is entirely unverified; humans are the ones reading it and constructing meaning.