The Wired article is higher quality, agreed, but "race-baiting", really? It seems quite relevant that a specific ethnic group is much more likely to suffer consequences from this flawed mass facial recognition, given how the enforcement is targeted.
Particularly given the example from the article:
In Oregon testimony last year, an agent said two photos of a woman in custody taken with his face-recognition app produced different identities. The woman was handcuffed and looking downward, the agent said, prompting him to physically reposition her to obtain the first image. The movement, he testified, caused her to yelp in pain. The app returned a name and photo of a woman named Maria; a match that the agent rated “a maybe.”
Agents called out the name, “Maria, Maria,” to gauge her reaction. When she failed to respond, they took another photo. The agent testified the second result was “possible,” but added, “I don’t know.” Asked what supported probable cause, the agent cited the woman speaking Spanish, her presence with others who appeared to be noncitizens, and a “possible match” via facial recognition. The agent testified that the app did not indicate how confident the system was in a match. “It’s just an image, your honor. You have to look at the eyes and the nose and the mouth and the lips.”
I'm focused on the initial paragraph more than anything else.
OP's lead sentence is race-baiting, bubble-coded hyperbolic misinformation, and the entire first paragraph is completely unnecessary and uncharacteristic of appropriate HN content. We know how to have better discussions here, and starting with the primary source rather than an editorialized re-post is one way to do that.
Also, "non-white" is not really a "specific ethnic group" imo; and the article does not lead with "much more likely to suffer consequences" but rather "DHS want to find non-white people to deport by any means necessary" which is a gross mischaracterization of the stated intention of actual government officials. If you have direct evidence to the contrary lmk