Hacker News

idle_zealot · yesterday at 6:51 PM

> Especially if the query is complex (e.g. give me a summary of the USA's current lunar missions and progress towards a lunar base.)

This terrifies me. The number of ostensibly smart, curious people who now fill their knowledge gaps with pseudorandom information from LLMs, accurate just often enough to lower mental guards, is staggering. I'm not an idiot; I know most people never did the whole "check and corroborate multiple sources" thing. What actually happened in the average case was that a person delegated trust to a few parties who, in their view, aligned with their perspective. Still, that sounds infinitely preferable to "whatever OpenAI/Google/whoever's computer says is probably right". When people steelman using LLMs for knowledge gathering, they like to position it as a first step to break in on a topic and learn what there is to learn, to be followed by more specific research using actual sources. I posit that the portion of AI users actually following up that way is vanishingly small, smaller even than the portion of people who read multiple news sources and research the credibility of the publications.

I value easy access to information very highly, but when people vote with their feet, eyes, and wallets, that's not what you get. You get fast and easy, but totally unreliable, information. The information landscape has never been great, but it seems to only get worse with each paradigm shift. I struggle to even imagine a hypothetical world where reliable information is easy to access. How do you scale that? How do you make it robust to attack or decay? Maybe the closest thing we have now is Wikipedia; is there something there that could be applied more broadly?


Replies

VladVladikoff · today at 4:16 AM

For a brief overview of a topic, the accuracy is good enough. It might get some minor details wrong, but they are generally superfluous to the topic. It typically breaks down when you are really getting into the weeds, or into really niche subjects, at which point you have exceeded the utility of the LLM. I have read many blog posts linked from first-position Google results in the past and found their answers to have inaccuracies as well; how is that better?