> highly inaccurate authority.
The presentation style of most LLMs is confident and authoritative, even when totally wrong. That's the problem.
Systems that ingest social media and then return it as authoritative information are doomed to do things like this. We're seeing this in other contexts. Systems believing all their prompt history equally, leading to security holes.