I understand the author's concerns, but I wonder if his standard of "true" is a bit unrealistic. Maybe the standard should be "less false than the average human-produced work."
Yes, a human expert who is devoted to uncovering and communicating truth will do much better than an LLM. But that is not what most of the content in the world consists of.
Most content is written by people with questionable expertise, whose goals are fame, viewership numbers, or advertising dollars. Most content on the internet, or in the world, does not seem to be written with the goal of uncovering truth.
To create a more truthful world, AI doesn't have to be as accurate as an expert. It just has to be more accurate than average. And in many areas, that's a very low bar.
To choose a less controversial area than religion: I think adding AI to nutrition advice on the internet will be an improvement. It's true you will inevitably get inaccurate advice communicated with confidence, but any AI trained on any kind of credible scholarship will still be miles ahead of internet nutritional advice, which is often filled with zany pseudoscience.
Yes, it will produce inaccurate results, but on average the information will be more accurate than what already exists and is currently being perpetuated.
> Maybe the standard should be "less false than the average human-produced work."
I don't think so. Lots of people blindly trust LLMs more than they trust the average human, probably for bad reasons (including laziness and over-reliance on technology).
Given that reality, it's irresponsible to make LLMs that don't reach a much better standard, since it encourages people to misinform themselves.
If you genuinely believe that failure to believe in Christ means an eternity of punishment, anything that might feasibly turn off a potential believer - like an AI hallucination posing as a religious explanation - is fundamentally worse than murder.
The article addresses this exact argument.
In some domains, "good enough" or "the ends justify the means" is acceptable. In this domain, the author clearly feels there is a significant moral requirement to satisfy.
> I understand the author's concerns, but I wonder if his standard of "true" is a bit unrealistic.
In the case of religions, it is definitely unrealistic.
If you literally believe that getting an answer wrong can send someone to hell, then I think the stakes are a little more dire than bad nutritional advice.
Imagine if this were a baby-care bot, dispensing advice on how to safely care for your baby. That would be pretty stupid, and it would likely end up giving advice so incorrect that a baby would die. Yet for someone who believes, that is a less tragic outcome than being led astray by an apologetics bot. It takes an incredible level of conceit to build one anyway.