> According to the court documents, the Fargo detective working the case then looked at Lipps' social media accounts and Tennessee driver's license photo. In his charging document, the detective wrote that Lipps appeared to be the suspect based on facial features, body type and hairstyle and color.
> Once they were in hand, Fargo police met with him and Lipps at the Cass County jail on Dec. 19. She had already been in jail for more than five months. It was the first time police interviewed her.
How is this the fault of AI? It flagged a possible match. A live human detective confirmed it. And the criminal justice system, for reasons that have nothing to do with AI, let this woman sit in jail for 5 months before even interviewing her or doing any due diligence.
There's a reason why we don't let AI autonomously jail people. Instead of scapegoating an AI bogeyman, maybe we should look instead at the professional human-in-the-loop who shirked all responsibility, and a criminal justice system that thinks it is okay to jail people for 5 months before even starting to assess their guilt.
Devil's advocate: what if a facial recognition system with a large enough database can always find an unrelated/innocent person who looks similar enough to convince the human?
This particular "AI bogeyman" isn't just AI; it's cops with AI and in particular cops with facial recognition tools, dragnet LPR surveillance tools, and all this other new technology that essentially picks somebody's name out of a hat to have their life temporarily (or [semi-]permanently) ruined by shithead cops who won't ever face any real accountability.
This keeps happening, and the reason it keeps happening is that shithead cops have these tools and are using them. Until we can find a reliable way to prevent this from happening, which may or may not be possible, cops who may or may not be shitheads should not have access to these tools.
Reminds me of a case that just popped up in my neck of the woods.
Man gets pulled over on an expired plate. They search based on this fact, find a pill bottle (for Irritable Bowel Syndrome) and magically find he’s trafficking cocaine and fentanyl.
Months later a lab test exonerates the poor guy.
https://www.wyff4.com/article/deputies-falsely-identify-ibs-...
> How is this the fault of AI?
AI is being used by bureaucrats and enforcers to justify lazy, harmful conclusions. You don't live in the real world if you think "just punish the bureaucrats, don't make it about AI" is going to remotely rectify this toxic feedback loop and ecosystem.
It's not. This is just an acceleration in the unraveling of society facilitated by AI. As someone whose childhood included so many "robots will kill humans" books and movies, I am flabbergasted that the AI apocalypse will be dumb humans overtrusting faulty AI in important matters until everything falls apart.
Most humans cannot distinguish AI from actual intelligence. When you combine that with bureaucrats' innate tendency to say, "Computer said so," you end up with bizarre situations like this. If a person had made this facial match, another human would have relentlessly jeered him. Since a computer running AI did it, no one even cared to think about it.
Computers are wildly dangerous, not because of anything innate but because of how humans act around them.
Automation has a strong tendency to degrade diligence.
I see this all the time in operational / production settings. Having a loop with automation reviewed and approved by a human degrades very fast. I only approve automation that has a quick path to unsupervised operation.
100% 100% 100% humanity is so obsessed with ai that we're losing...our humanity. "blame the mindless, soulless robots! how could we have possibly known that they need to be supervised?! aren't they basically just humans that don't need to rest or eat?"
> How is this the fault of AI?
The false positive rate combined with scanning millions of pictures might make the chance of arresting the wrong person really high.
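Back-of-the-envelope sketch of that base-rate problem (all numbers assumed for illustration, not taken from the article or any real vendor's specs):

```python
# Illustrative base-rate arithmetic. Both numbers below are assumptions
# chosen for illustration, not figures from the article.
fpr = 1e-4            # assumed per-comparison false-positive rate (0.01%)
database_size = 5_000_000  # assumed number of faces scanned per search

# Expected number of innocent people flagged in a single search:
expected_false_matches = fpr * database_size  # 500.0

# Probability that at least one innocent person is flagged:
p_at_least_one = 1 - (1 - fpr) ** database_size  # effectively 1.0

print(expected_false_matches, p_at_least_one)
```

Even with a false-positive rate that sounds excellent per comparison, scanning millions of faces makes a spurious "match" a near-certainty on every search, so a confident-looking hit tells the detective almost nothing by itself.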
> How is this the fault of AI
It isn't, the article doesn't claim (or even imply) that it is "the fault" of AI, only that AI was part of the chain of events, and nothing is the fault of AI until AI is sufficiently advanced to constitute a moral actor. “At the source of every error which is blamed on the computer, you will find at least two human errors, one of which is the error of blaming it on the computer” remains true.
OTOH, it is potentially the fault of the reliance human actors put on an AI determination.
It's the fault of the tool because our society treats the tool's judgement as superior to a human's and as something to be trusted completely, as a means of deflecting accountability - something any and every minority group has been warning about for fucking decades.
The reason everyone rushes to defend the tool's use is because holding humans accountable would mean throwing these tools out entirely in most cases, due to internal human biases and a decline in basic critical and cognitive thinking skills. The marketing has been the same since the 80s: the tool is superior (until it isn't), the tool shall be trusted completely (until it fails), the tool cannot make mistakes (until it does).
If folks actually listened to the victims of this shit, companies like Flock and Palantir would be gutted and their founders barred from any sort of office of responsibility, at minimum. The fact so many deflect blame from the tool like the marketing manual demands shows they don't actually give a shit about the humans wrapped up in the harms, or the misuse and misappropriation of these tools by persons wholly unaccountable under the law, but only about defending a shiny thing they personally like.
Study after study has shown a very strong and consistent bias of humans to trust "automated systems" in the face of any ambiguity.
I think the biggest problem is that the popular narratives about AI enable this kind of accountability sink.
> Instead of scapegoating an AI bogeyman
One big reason for AI adoption everywhere is that you can use it as a scapegoat
I think it's more nuanced; it is one error in a Tragedy of Errors.
Someone from the government should be in jail for this kind of oversight.
> How is this the fault of AI?
It could be the fault of the company that's selling this service. They often make wildly inaccurate claims about the utility and accuracy of their systems. [0]
> There's a reason why we don't let AI autonomously jail people.
Yes we do. [1]
> and a criminal justice system that thinks it is okay to jail people for 5 months before even starting to assess their guilt.
Her guilt was assessed. That's why she had no bail. It assessed it incorrectly, but the error is more complicated than your reaction implies.
[0]: https://thisisreno.com/2026/03/lawsuit-reno-police-ai-polici...
[1]: https://projects.tampabay.com/projects/2020/investigations/p...
Where does it say that AI is blamed?
It says she was misidentified using facial recognition.
That’s exactly what happened
computer said yes
> How is this the fault of AI?
Humans being human. Getting lazy, being incompetent, getting incompetent with AI use, or simply being biased. The wrongfully arrested person doesn't even resemble the perpetrator.
Maybe if they were held accountable for these actions, they would act responsibly?
> How is this the fault of AI?
It is not. It is the fault of the police.
AI models are tools. When mistakes are made, they are the mistake of the operator of said tool.
This AI model was badly misused, and this woman should get a metric shit tonne of compensation, but it was the fault of the police.
I hope you take this as a teaching/learning opportunity
> How is this the fault of AI? It flagged a possible match. A live human detective confirmed it.
Because we're seeing the first instances of what reality looks like with AI in the hands of the average bear. Just like the excuse was "but the computer said it was correct," now we're just shifting to "but the AI said it was correct."
Don't underestimate how much authority and thinking people will delegate to machines. Not to mention the lengths they'll go to weasel out of taking responsibility for a screw up like this (saw another comment in this thread about the Chief of Police stepping down but it being framed as "retirement").