> Maybe the standard should be "less false than the average human produced work."
I don't think so. Lots of people blindly trust LLMs more than they trust the average human, probably for bad reasons (including laziness and over-reliance on technology).
Given that reality, it's irresponsible to build LLMs that don't meet a much higher standard than average human work, since doing so encourages people to misinform themselves.