“ Humans must not blindly trust the output of AI systems. AI-generated content must not be treated as authoritative without independent verification appropriate to its context.”
I’m lost, how do individuals actually do this in our current world? Is each person expected to keep a “white list” of reliable sources of truth in their head? Please don’t confuse what I’m saying with a suggestion that there is no truth. It just seems like there are far more sources of mis- or half-truths, and it’s increasingly difficult for people to identify them.
Did AI change anything in that regard? I believe that, same as before, you couldn't trust everything you saw, and verifying things always took more effort than keeping a white list; the means vary, case by case.
And the same is true now. It's a change in quantity, not quality.
Humanity has spent millennia creating and evolving institutions to address exactly this problem, and has recently decided to essentially throw out the whole lot and replace it with nothing.
Checking the AI's citations and reading them.
Critical thinking and reading comprehension are the primary tools for determining truth, AFAIK. Knowing facts beforehand helps too, but a trustworthy source can provide false information just as an untrustworthy source can provide true information.
This has always been an issue, and in the past it was harder because your sources of knowledge were more limited. Nowadays it's mostly about choosing the right source(s) rather than having to go out of your way to find them (like traveling to a regional/university library).
I... am not sure. Computers are machines that create order (like db tables) from the chaos of reality. Now we have LLMs that make computers spit out chaos as well.
They don't have to, though; we can still leverage LLMs to organize chaos, which is what I hope they ultimately end up doing.
For example, an AI therapist is a nightmare: people putting the chaos of their mental state into a machine that spits dangerous chaos back out. An AI tool that parsed responses for hard data (e.g., rate 0-9 how happy the person was) and then returned that as ordered data (how happy was I each day for the last month?) that an actual therapist and patient could review is the correct use of AI, and could be highly trusted. The raw token output from LLMs should just be used for thinking steps that lead to a parseable hard-data answer that can be high trust; roughly the shape sketched below.
Of course that isn't going to happen, but I can see some extremely cool, high-trust products being built with LLMs once we stop treating them like miracle machines.
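A minimal sketch of that shape, assuming a hypothetical call_llm() stand-in for whatever model API you use (the prompt wording and the SCORE= output format are made up for illustration): the model can ramble all it wants in its "thinking" text, but only a validated integer survives into the record.

    import re

    def call_llm(prompt: str) -> str:
        """Hypothetical stand-in for a real model API call."""
        raise NotImplementedError

    def extract_happiness(journal_entry: str) -> int | None:
        """Map free text to one hard number via the LLM.

        The model may 'think' in prose first, but the only thing kept
        is a parseable integer 0-9; anything else is rejected.
        """
        prompt = (
            "Read the journal entry below. Think step by step, then on "
            "the final line output exactly SCORE=<n>, where <n> is an "
            "integer 0-9 rating how happy the writer seems.\n\n"
            + journal_entry
        )
        raw = call_llm(prompt)
        match = re.search(r"SCORE=(\d)\s*$", raw)
        if match is None:
            return None  # unparseable output is dropped, never recorded
        return int(match.group(1))

    def monthly_report(entries: dict[str, str]) -> dict[str, int | None]:
        """Ordered data a therapist and patient could actually review:
        date -> score, or None where the model failed to comply."""
        return {day: extract_happiness(text)
                for day, text in sorted(entries.items())}

The point being: the chaos stays inside the call, and what comes out is either ordered data or nothing.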