The vendor they used, Clearview AI, does not allow you to request data deletion unless you live in one of the half-dozen states that legally mandate it.
https://www.clearview.ai/privacy-and-requests
I have suddenly become very interested in New York's S1422 Biometric Privacy Act.
For me the worst thing in this case is that a JUDGE signed off on an arrest warrant with only a Clearview match linking Ms. Lipps to the crime.
A judge and the warrant process are supposed to be the safeguard against police doing shady stuff (like relying on an AI hit to decide who committed a crime). But if the judges can't be bothered...
This isn't the first time this month I've read about someone suffering the consequences of mistaken identity after a facial recognition system said they look like someone who committed a crime. I'm sure this is starting to happen at an alarming rate.
The fundamental problem is that among the 350 million people living in the United States, there are a lot of pairs of people who look pretty darn similar. It used to be impractical to ask a question like "who in the US looks like the person in this security footage?", so as a practical matter, by the time you found someone who looked like the suspect, you probably also had other evidence, even if it was pretty weak, linking them to the crime.
But with AI, you can ask "who in the US looks like this person", and so we need to re-calibrate what it means if all you know is that someone looks like a suspect. I am of the opinion that "looks like someone," in the absence of any other evidence, is reasonable suspicion, but not probable cause, that you are the person you look like. Reasonable suspicion is enough for the police to stop you on the street and ask for your ID, but not enough to arrest you. There are other data points that alone might not even be reasonable suspicion, but could be combined with "looks like someone" to make probable cause, such as "was near the place at the time the crime happened".
AI isn't really the problem; even the question of whether the AI's determination that two people look alike is valid, or whether it was reviewed by a human, isn't the problem. The problem is assuming that because two people look alike they must be the same person, even when you have no other evidence that they are.
This is a weak or misleading story about AI.
First, the detective used the FaceSketchID system, which has been around since roughly 2014. It is not new, nor uniquely tied to modern AI.
Second, the system only suggests possible matches. It is still up to the detective to investigate further and decide whether to pursue charges, and then up to a court to issue the warrant.
The real question is why she was held in jail for four months. That is the part I do not understand. My understanding is that there is a 30-day limit (the requesting state must pick up the defendant within 30 days). Regarding the individual involved, Angela Lipps: she has reportedly been arrested before, so it is possible she was on parole. Maybe they were holding her because of that?
Can someone clarify how that process works?
Earlier discussion (405 comments):
Money quote from someone quoted in the article:
"[I]t’s not just a technology problem, it’s a technology and people problem."
I can't. I just can't.
Wow, I thought the bar for probable cause for an arrest warrant would be much higher. Especially to drag someone in from another state.
The actual scariest part isn't that the AI got it wrong... it's that nobody felt the need to verify the AI. A tip from an anonymous caller gets investigated to find out whether it's true or not; a match from a facial recognition system apparently does not. People haven't built better investigative tools; they've just built better ways to skip the investigation.
Previous discussion:
Insane. Not even an apology. And they ask why we should respect the police.
A lot of dumb shit happens in this arena, where if you had just one smart cop, it could have been prevented. Here’s one from 2023:
So cops used AI to attempt to investigate a crime. But there was no crime - the arrest was wrong. Why are cops excused for delegating their responsibilities (protecting society, allegedly) to software? AI may also be written by corporations to "tweak" this or that - say, making a foreign-looking guy more likely to be flagged for investigation. This is like the movie Minority Report, but stupid. IMO the courts should conclude that cops are not allowed to use AI without a prior, independently verified, objective reason for an investigation. The mass sniffing that is currently going on is very clearly illegal. The current orange guy does not care about the law; see Flock cameras, aka spy cameras deployed by the government on all car drivers at all times.
AI is a liability issue waiting to happen. And this is just another example.
This has been posted at least twice before on HN.
Without even looking at the AI part, I have a single question: did anybody investigate? That's it.
Whether it's AI that flagged her, a witness who saw her, or her IP address appearing in the logs - did anybody bother to ask her "where were you the morning of July 10th between 3 and 4pm?" But that's not what happened; they saw the data and said "we got her."
But this is the worst part of the story:
> And after her ordeal, she never plans to return to the state: “I’m just glad it’s over,” she told WDAY. “I’ll never go back to North Dakota.”
That's the lesson? Never go back to North Dakota? No: challenge the entire system. A few years back it was a kid accused of shoplifting [0]. Then a man dragged away while his family was crying [1]. Unless we fight back, we are all guilty until cleared.
[0]: https://www.theregister.com/2021/05/29/apple_sis_lawsuit/
[1]: https://news.ycombinator.com/item?id=23628394