Hacker News

post_below · today at 7:37 AM

I don't follow the logic that "it hallucinates so it's useless". In the context of codebases I know for sure that they can be useful. Large datasets too. Are they also really bad at some aspects of dealing with both? Absolutely. Dangerously, humorously bad sometimes.

But the latter doesn't invalidate the former.


Replies

troupo · today at 9:23 AM

> I don't follow the logic that "it hallucinates so it's useless".

I... don't even know how to respond to that.

Also. I didn't say they were useless. Please re-read the claim I responded to.

> Are they also really bad at some aspects of dealing with both? Absolutely. Dangerously, humorously bad sometimes.

Indeed.

Now combine "Finding patterns in large datasets is one of the things LLMs are really good at." with "they hallucinate even on small datasets" and "Are they also really bad at some aspects of dealing with both? Absolutely. Dangerously, humorously bad sometimes."

Translation, in case the logic somehow eludes you: if an LLM finds a pattern in a large dataset, given that it often hallucinates, sometimes dangerously and humorously badly, what are the chances that the pattern it found isn't a hallucination (often a subtle one)?

Especially given the undeniable, verifiable fact that LLMs are shit at working with large datasets (unless they are explicitly trained on them, and even then that doesn't remove the problem of hallucinations).