
apothegm · last Wednesday at 1:13 PM

The issue is how bad the LLM is at determining which sources are relevant, whereas a somewhat informed human can be excellent at it. And unfortunately, the way search engines work these days, a more specific search query often can't filter out the bad results. It's worst for terms that have multiple meanings within a single field.


Replies

kenjackson · last Wednesday at 5:00 PM

That word "somewhat" in "somewhat informed" is doing a lot of lifting here. That said, I do think a little curation in the training data would probably help: get rid of the worst content farms and misinformation sites. But it'll never be perfect, in the same way that no source of content in the world today is perfect (and never has been).
