Sure, but think about the difference between something as low-stakes as "Does such-and-such a character from my favorite TV show have any siblings?" vs. "Is it safe to consume XYZ?"
Even with the great structured and semi-structured data that wikis can provide with things like infoboxes and other sorts of templates, there were definitely limitations to the tech nearly ten years ago. My experience back then is one of the reasons I'm super skeptical of the long-term value of the AI / LLM trend we're going through right now.
Aren't those types of prompts the MOST likely to generate hallucinations?