How many of these cases do we have to have before lawyers realise that they need to check that the things an LLM tells them are actually true?
I'm continually amazed at how much faith people have in them. I guess since they sound like people and output authoritative, confident text, it just overrides any skepticism subconsciously?
It doesn't matter anymore.
LLMs just revealed what a decadent society we have set up for ourselves worldwide.
It’s worse than that. We’re hearing about the lawyers on Ars Technica because the consequences are public and the errors are egregious.
It’s likely happening to everyone.
Just this week I tracked down the citations of a scientific paper (whose authors could very well be here) where 25% of the citations were made up and 50% of the remaining ones were wrong, e.g. taking arXiv papers and citing them as published in (say) IJCLR.
It's not just lawyers.
This whole thing is silly; reference validation can be automated.
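A minimal sketch of what I mean, querying the public Crossref API to check whether a cited title actually exists and whether it appeared in the claimed venue. The function name and the venue-mismatch heuristic are my own illustrative choices, not an established tool:

```python
import requests  # assumes the requests library is installed

CROSSREF = "https://api.crossref.org/works"

def check_citation(title: str, claimed_venue: str) -> dict:
    """Look up a cited title on Crossref and compare where it actually
    appeared against the venue the citation claims."""
    resp = requests.get(
        CROSSREF,
        params={"query.bibliographic": title, "rows": 1},
        timeout=10,
    )
    resp.raise_for_status()
    items = resp.json()["message"]["items"]
    if not items:
        return {"found": False}  # possibly a fabricated reference
    hit = items[0]
    real_title = (hit.get("title") or [""])[0]
    real_venue = (hit.get("container-title") or [""])[0]
    return {
        "found": True,
        "matched_title": real_title,
        "real_venue": real_venue,
        # crude flag: the claimed venue doesn't appear in the real one
        "venue_mismatch": claimed_venue.lower() not in real_venue.lower(),
    }

# Example: flag a citation that attributes a real paper to a bogus venue
print(check_citation("Attention Is All You Need", "IJCLR"))
```

Taking the top Crossref hit is a crude heuristic and would need fuzzy title matching in practice, but even something this simple would catch a fabricated title or a preprint mis-attributed to a conference.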
If someone is a lawyer, accountant, doctor, teacher, surgeon, engineer etc, and is regurgitating answers that were pumped out with GPT-5-extra-low or whatever mediocre throttled model they are using, they should just be fired and de-credentialed. Right now this is easy.
The real problem is ahead: 99.999% of future content will be made using generative AI. For many people using Facebook, Instagram, TikTok, or some other non-sequential, engagement-weighted feed, 50%+ of the content they consume today is fake. As that stuff spreads into modern culture it's going to be an endless battle to keep it out of stuff that should not be publishing fake content (e.g. the New York Times or Wall Street Journal; excluding scientific journals, which seem to have abandoned validation and basic statistics a long time ago).
Much of the future value and profit margins might just be in valid data?
Do we see this a lot in the US? This seems to be more common in India.
What kind of AI is this that you constantly need a human to check its job? Do you think Jean-Luc Picard had to constantly check the output of the Enterprise computer? No he didn't. If AI is not better than humans, then what the heck is the point? You might as well just use humans.
It doesn't matter, because any process that seems right most of the time but is occasionally wrong in subtle, hard-to-spot ways is basically a machine for lulling people into not checking, so stuff will always slip through.
It's just like cars that drive themselves but need you ready to jump in if there's a mistake: humans aren't going to react as fast as if they were driving, because they aren't going to be engaged, and no one can stay as engaged as they were when they were doing the driving themselves.
We need to stop pretending we can tell people they "just" need to check things from LLMs for accuracy; it's a process that inevitably leads to people not checking and things slipping through. Pretending it's the people's fault, when essentially everyone using it would eventually end up doing that, is stupid and won't solve the core problem.