> She had no intention to misquote or misrepresent the rulings and that "the mistake occurred solely due to the reliance on an automatic source", the high court wrote
I don't think the intention matters here. It's the same deal with every profession using LLMs to "automate" their work. The onus is on the professional, not the LLM. The Ars Technica case could have been justified the same way otherwise.
Not knowing the law isn't an excuse to break the law, so why is not knowing the tool an excuse to blame the tool?
Intentionality normally has to be taken into account in common law countries.
That doesn't mean she hasn't done something wrong, but obviously it's more serious to do something intentionally than it is to do it carelessly or recklessly.
> excuse to blame the tool
The issue is that ultimately blaming people doesn't really solve things, unless it's genuinely a one-of-a-kind case. But if this happened once, it's probably going to happen again, and this isn't the first such case of LLM hallucinations in law.
It's weird to think this way, because it's easy to just point at a person in a specific instance. But when you see something repeat over and over again, you have to consider that if your ultimate goal is to stop it from happening, you have to adjust the tools, even if the people using them were at fault in every case.
They cannot even claim they weren't aware of the danger. LLM hallucinations are a widely discussed topic, not some obscure failure mode; almost every article on problems with AI mentions them.
So the judge was lazy, incompetent, or both.
This is why LLMs won't replace humans wholesale in any profession: you can't hold a machine accountable. Most of my chatbot experiences with various support channels end up requiring human intervention anyway once money is involved.
Maybe true general intelligence would solve these issues, but LLMs aren't meeting that threshold anytime soon, imo. Stochastic parrots won't rule the world.
> Not knowing the law isn't an excuse to break the law,
Yeah, about that ...
https://metro.co.uk/2016/07/03/rapist-struck-again-after-dep...
> A Somalian rapist who had his deportation overturned went on to rape two more women after he was freed.
> But he had his deportation overturned after serving his time because he didn’t know it was unacceptable in the UK.
Using an LLM to automate is simply the newer cheaper outsourcing with much of the same entertainment, but less food poisoning and air travel.
Over the last 20 years a lot of engineering work in the West (proper engineering, not software) has been outsourced to cheaper places, with the certified engineers simply signing off on work done elsewhere. This creates a cycle of doing things ever faster and more cheaply, with safeguards disappearing under the pressure to go ever cheaper and faster.
As someone else pointed out, LLMs have just exposed the degraded state we were already heading into, rather than being a cause of it themselves. It's going to be very tough for people with no standards: they'll enjoy cheap stuff for a while and then it will all go away. Surprised Pikachu faces all round.
(I'm pro AI btw, just be responsible.)