> I don't follow the logic that "it hallucinates so it's useless".
I... don't even know how to respond to that.
Also, I didn't say they were useless. Please re-read the claim I responded to.
> Are they also really bad at some aspects of dealing with both? Absolutely. Dangerously, humorously bad sometimes.
Indeed.
Now combine "Finding patterns in large datasets is one of the things LLMs are really good at" with "they hallucinate even on small datasets" and "Are they also really bad at some aspects of dealing with both? Absolutely. Dangerously, humorously bad sometimes."
Translation, in case the logic somehow eludes you: if an LLM finds a pattern in a large dataset, and it often hallucinates, dangerously and humorously badly, what are the chances that the pattern it found isn't a hallucination (often a subtle one)?
Especially given the undeniable, verifiable fact that LLMs are shit at working with large datasets (unless they are explicitly trained on them, and even then that doesn't remove the problem of hallucinations).
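
To put rough numbers on that argument, here's a minimal back-of-envelope Bayes sketch. Every rate in it is a made-up assumption purely for illustration, not a measurement of any particular model, but it shows how even a moderate hallucination rate swamps a rare real pattern:

```python
# Back-of-envelope Bayes: how much should we trust a "pattern" an LLM reports?
# All rates below are hypothetical, illustrative numbers -- not measurements.

base_rate = 0.10        # assumed: fraction of queries where a real pattern exists
recall = 0.90           # assumed: P(LLM reports a pattern | a real pattern exists)
hallucination = 0.30    # assumed: P(LLM reports a pattern | no real pattern exists)

# P(pattern reported at all), by total probability
p_reported = recall * base_rate + hallucination * (1 - base_rate)

# P(real pattern | pattern reported), by Bayes' rule
p_real_given_reported = (recall * base_rate) / p_reported

print(f"P(reported pattern is real) = {p_real_given_reported:.2f}")  # ~0.25
```

Under those (again, invented) numbers, three out of four "patterns" it surfaces would be noise. Swap in whatever rates you believe; the point is that the hallucination rate, not the pattern-finding ability, dominates the answer.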