This isn't the first time this month I've read about someone suffering the consequences of mistaken identity after a facial recognition system said they look like someone who committed a crime. I'm sure this is starting to happen at an alarming rate.
The fundamental problem is that among the 350 million people living in the United States, there are a lot of pairs of people who look pretty darn similar. It used to be impractical to ask a question like "who in the US looks like the person in this security footage?", so as a matter of practicality, if you found someone who looked like the suspect, you probably also had other evidence, however weak, linking them to the crime.
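A little back-of-envelope arithmetic makes the scale problem concrete. The false-match rates below are invented for illustration (real systems vary widely), but the point holds for any plausible number: even a matcher that is wrong only once per million comparisons, searched against roughly 350 million people, should be expected to turn up hundreds of innocent lookalikes.

```python
# Back-of-envelope: why a nationwide "who looks like this person?" search
# almost guarantees lookalike hits. All rates below are illustrative
# assumptions, not measurements of any real system.

population = 350_000_000  # rough US population, as in the text above

# Hypothetical per-comparison false-match rates for a face matcher.
for false_match_rate in (1e-5, 1e-6, 1e-7):
    expected_false_matches = population * false_match_rate
    # Probability that at least one innocent person matches the probe
    # image, treating comparisons as independent: 1 - (1 - p)^N.
    p_at_least_one = 1 - (1 - false_match_rate) ** population
    print(f"FMR {false_match_rate:.0e}: "
          f"~{expected_false_matches:,.0f} expected lookalikes, "
          f"P(at least one) = {p_at_least_one:.6f}")
```

This is the classic base-rate trap: per-comparison accuracy that sounds impressive still all but guarantees false matches once the search population is an entire country.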
But with AI, you can ask "who in the US looks like this person?", and so we need to recalibrate what it means when all you know is that someone looks like a suspect. I am of the opinion that "looks like someone," in the absence of any other evidence, is reasonable suspicion, but not probable cause, that you are the person you look like. Reasonable suspicion is enough for the police to stop you on the street and ask for your ID, but not enough to arrest you. There are other data points that on their own might not even rise to reasonable suspicion, but that could be combined with "looks like someone" to establish probable cause, such as "was near the place at the time the crime happened."
AI isn't really the problem; neither, for that matter, is whether the AI's determination that two people look alike is valid or reviewed by a human. The problem is assuming that because two people look alike, they must be the same person, even when you have no other evidence that they are.