> It is irresponsible for these companies
I would claim that ignoring the "ChatGPT is AI and can make mistakes. Check important info." text, which sits right under the box where they type their query in the client, is clearly more irresponsible.
I think that a disclaimer like that is the most useful and reasonable approach for AI.
"Here's a tool, and it's sometimes wrong." means the public can have access to LLMs and AI. The alternative, that you seem to be suggesting (correct me if I'm wrong), means the public can't have access to an LLM until they are near perfect, which means the public can't ever have access to an LLM, or any AI.
What do you see as a reasonable approach to letting the public access these imperfect models? Training? A popup/agreement after every question, "I understand this might be BS"? What's the threshold of information quality below which a model is considered "broken"? Is that threshold as good as or better than what we expect from humans/news orgs/doctors/etc.?
> A popup/agreement after every question, "I understand this might be BS"?
Considering the number of people who take LLM responses as authoritative Truth, that wouldn't be the worst thing in the world.
Why are you assuming that the general public ought to have access to imperfect tools?
I live in a place where getting a blood test requires a referral from a doctor, who is also required to discuss the results with you.