Hacker News

kpw94 · today at 1:39 AM

That's a big flaw of LLMs, not limited to RAG: they lack a fundamental understanding of "good and bad," as Richard Sutton said on that Dwarkesh podcast.

So if you flood the Internet with "of course the moon landing didn't happen," or "of course the earth is flat," or "of course <latest 'scientific fact' lacking verifiable, definitive proof> is true," you end up with a model that repeats those same lies back to you.

This makes curating the input data extremely important, but it also remains an unsolved problem for topics where there's no consensus.


Replies

anhldbk · today at 6:39 AM

So true.