There will be many more things like this, and it's an elephant in the room for the supposed mass replacement of people with AI.
Some human still has to be accountable. Someone has to get fired / go to jail when something screws up.
You can make humans more productive, but for the foreseeable future you can't take the human out of the loop; any AI implementation that tries is a disaster/lawsuit waiting to happen. That, probably more than anything else, is why companies just aren't seeing the much-promised step change in productivity from AI, and why so many companies are now saying they see zero ROI from AI efforts.
The lowest-hanging fruit will be low-value, rote, repetitive tasks, like the whole India offshoring industry, which will be the first to vaporize if AI does start replacing humans. But until companies see success replacing labor en masse on that lowest of low-hanging fruit, things higher up the value chain will remain relatively safe.
PS: Nearly every recent mass layoff citing “AI productivity” has failed to withstand scrutiny. They all seem to be poorly performing companies slashing staff after overhiring, with management looking for any excuse other than just admitting that.
>Some human still has to be accountable. Someone has to get fired / go to jail when something screws up.
I remember growing up and always hearing "The computer is down" as an excuse for why things were cancelled/offices closed/buses and trains not running/ad infinitum.
At some point I read an article that pointed out that the reason the computer was down was because a person made a [coding] error: the computer itself was fine.
I've yet to read about how a person who caused the computer to be down was disciplined.
We should have more hygiene when it comes to AI.
Text coming out of an LLM should be wrapped in a special Unicode-marked block, so we can see it was generated by AI.
Failing to do so (or tampering with it) should be considered bad hygiene, and should be treated like a doctor who doesn't wash their hands before surgery.
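For illustration, a toy sketch of what such marking could look like, assuming we just pick a pair of distinctive Unicode brackets as the provenance markers (the specific characters and function names here are invented for the example, not any standard):

    # Toy sketch: wrap LLM output in distinctive Unicode delimiters
    # so readers and downstream tools can spot generated text.
    # U+27E6/U+27E7 are an arbitrary choice for illustration.
    AI_OPEN = "\u27e6"   # the bracket character ⟦
    AI_CLOSE = "\u27e7"  # the bracket character ⟧

    def mark_ai_text(text: str) -> str:
        """Tag a span of generated text with visible provenance markers."""
        return f"{AI_OPEN}{text}{AI_CLOSE}"

    def is_ai_marked(span: str) -> bool:
        """Detect whether a span still carries its provenance markers."""
        return span.startswith(AI_OPEN) and span.endswith(AI_CLOSE)

    generated = mark_ai_text("This paragraph was produced by an LLM.")
    print(generated)                # ⟦This paragraph was produced by an LLM.⟧
    print(is_ai_marked(generated))  # True

Of course, stripping the brackets is trivial; that is exactly the tampering case that should count as bad hygiene.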
I don't believe most countries hold judges accountable for bad rulings at all, even before the AI era.
"Check and balance, except judiciary."
>why so many companies are now saying they see zero ROI from AI efforts.
I strongly suspect this is because workers are pocketing the gains for themselves. Report XYZ usually takes a week to write. It now takes a day. The other 4 days are spent looking busy.
The MIT report that found all these companies were getting nowhere with AI also found that almost every worker was using AI almost daily, but with their personal account rather than the corporate one.
> You can make humans more productive
If productivity goes up 10x, then unless the amount of work also increases 10x, jobs will be gone.
Counterpoint: No one ever gets fired or goes to jail when big tech firms break the law. Companies will put out an apology, pay whatever small fine is imposed, and continue with illegal AI usage at scale.
> Someone has to get fired / go to jail when something screws up.
In law, someone always hangs. I think a number of American lawyers have been sanctioned for using AI slop.
In other vocations ... not so much. I think one of the reasons insurance likes AI so much is that they can say it was "the computer" that made the decision that killed Little Timmy.
Or, AI is going to be like landlines becoming unnecessary when cellphones showed up in India. India may get to skip an entire intellectual generation thanks to the ability of a cheap model to educate (in any language).
The narrative that an entire population is “worth” less, paid less, knows less, lives less …
Fuck this “less” shit; embrace the paradigm shift. God is finally providing the remedial support through the miracle of AI.
> Some human still has to be accountable. Someone has to get fired / go to jail when something screws up.
The turning point will be when threatening an AI with being unplugged for screwing up actually motivates it to stop making things up.
Some people will rightly point out that this is kind of what the training process already is. If we go around this loop enough times, it will get there.
Isn't the issue simply one of not using the right tool? When the stakes are high and you should be checking details, the right tools are grounded AI solutions like nouswise and notebooklm, not the general-purpose chatbots that almost everyone knows might hallucinate. I also believe this use case is definitely low-hanging fruit for automating a lot of manual work, but it comes with new requirements, like transparency to help with verifying the responses.
I think this is an even clearer case than usual. With software engineers and office work you don't have legal limitations on who can perform the work, but they do exist for lawyers and doctors, for example.
So if this is a tool, the fault lies fully with the user, and if it is treated as “another person's work”, then the user knowingly passed the work on to someone not authorized to do it. Either way, the user ends up guilty.