The Wired article is higher quality, agreed, but "race-baiting", really? It seems quite relevant that a specific ethnic group is much more likely to suffer consequences from this flawed mass facial recognition, given how the enforcement is targeted.
Particularly given the example from the article:
In Oregon testimony last year, an agent said two photos of a woman in custody taken with his face-recognition app produced different identities. The woman was handcuffed and looking downward, the agent said, prompting him to physically reposition her to obtain the first image. The movement, he testified, caused her to yelp in pain. The app returned a name and photo of a woman named Maria; a match that the agent rated “a maybe.”
Agents called out the name, “Maria, Maria,” to gauge her reaction. When she failed to respond, they took another photo. The agent testified the second result was “possible,” but added, “I don’t know.” Asked what supported probable cause, the agent cited the woman speaking Spanish, her presence with others who appeared to be noncitizens, and a “possible match” via facial recognition. The agent testified that the app did not indicate how confident the system was in a match. “It’s just an image, your honor. You have to look at the eyes and the nose and the mouth and the lips.”