People commonly dismiss LLMs as unusable because they make mistakes. So do people. Books have errors. Papers have errors. People have flawed knowledge, often degraded through a conceptual game of telephone.
Exactly as you said: apply precisely this scrutiny to pre-LLM works, and with utter certainty you will find an enormous number of errors.
People keep imperfect notes. People are lazy. People sometimes even fabricate. None of this needed LLMs to happen.
LLMs are a force multiplier for this kind of error, though. It's not easy for a human to hallucinate papers out of whole cloth, but LLMs can easily and confidently do it, quote paragraphs that don't exist, and do it tirelessly and at a pace unmatched by humans.
Humans can do all of the above, but it costs them more and they do it more slowly. LLMs generate spam at a much faster rate.
Quoting myself from just last night because this comes up every time and doesn't always need a new write-up.
> You also don't need gunpowder to kill someone with projectiles, but gunpowder changed things in important ways. All I ever see are the most specious knee-jerk defenses of AI that immediately fall apart.
Under what circumstances would a human mistakenly cite a paper which does not exist? I’m having difficulty imagining how someone could mistakenly do that.
Fabricated citations are not errors.
A pre-LLM paper with fabricated citations would demonstrate the author's willingness to cheat.
A post-LLM paper with fabricated citations: same thing. And if the authors attempt to defend themselves with something like "we trusted the AI," they are sloppy, probably cheaters, and not very good at it.