It's an odd one, because I don't really see why this is LLM-specific at all. If someone came up to me and asked "who's the 6 Nimmt world champion?" I'd google it, probably find the same result, and have no reason not to believe it. I mean, for all I know the game is being made up too, though it at least has more sources.
The difference imo is that the information gets detached from its source. Previously you'd use the source of the information to gauge how much to trust it: if it's a reddit post or a no-name website, you'd likely be skeptical unless it seemed backed up by better sources. But now the info is coming from an LLM that you generally trust to be knowledgeable, and the confident language it uses reinforces that feeling.
The OP is highlighting how incredibly easy it is for a very small amount of information on the web to completely dictate the LLM's output, steering it into saying whatever you want.
A lot of people seem to think this is an LLM problem, but you're right: it's a general epistemological problem with relying on the internet (or really, any piece of literature) as a source of truth.
I closed it after "This house of cards only needs a $12 domain!", which sits right under "Sorry, Wikipedia.", itself right under their own Wikipedia edit.
Because outside of the tech community (and, in fact, for many inside it), almost everyone treats whatever these ChatGPT-like tools answer as the truth, without questioning it or cross-verifying it even once.