What is often completely ignored in such articles is the false positive rate.
For example, where I live they tested a state-of-the-art facial recognition system at a busy train station and applauded themselves for how great it was, given that the test targets were recognized even when they wore masks, capes, hats, etc.
But what was not mentioned is that the false positive rate, while small as a percentage (I think <1%), still made the system hardly usable given the sheer number of expected non-match samples.
E.g. one of the train stations where I live has ~250,000 people passing through it every day; even a false positive rate of just 0.1% would mean 250 false alarms per day at that one station. If you scale the search to a wider area, the numbers get far higher (and it's not just population size: many people might be falsely recognized multiple times during a single trip).
AFAIK the claimed false positive rate is often in the range of 0.01%-0.1%, BUT when these systems are independently tested in real-world contexts, the measured false positive rate is often more like 1%-10%.
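To make that scaling concrete, here is a minimal back-of-the-envelope sketch in Python. The ~250,000 passengers/day figure is the one from above; the FPR values span the claimed vendor range and the independently measured range, so they are illustrative assumptions rather than measurements of any particular system:

```python
# Expected false alarms per day at a single station, as a function of
# the false positive rate (FPR). All inputs are rough assumptions
# taken from the comment above.

passengers_per_day = 250_000  # approx. daily throughput of one large station

for label, fpr in [
    ("vendor-claimed, low",          0.0001),  # 0.01%
    ("vendor-claimed, high",         0.001),   # 0.1%
    ("independently measured, low",  0.01),    # 1%
    ("independently measured, high", 0.10),    # 10%
]:
    false_alarms = passengers_per_day * fpr
    print(f"{label:>28}: FPR {fpr:.2%} -> ~{false_alarms:,.0f} false alarms/day")
```

Even at the most optimistic claimed rate that is 25 false alarms per day at one station; at the measured rates it is 2,500 to 25,000, every day, and each one needs a human to look at it.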
So what does that mean?
It means that if you have a fixed set of video to check (e.g. footage from near where an incident happened, within roughly ±2 hours of it), you can use such systems to pre-filter the video and then manually post-process the results over many hours.
But if you try to find a person in a nation of >300 million who doesn't want to be found, and you've missed the initial time frame where you could rely on them being near a known location, then you will be flooded with such an amount of false positives that the system becomes practically useless.
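This is the classic base-rate problem, and a short sketch shows why. Assume (generously) one genuine target among ~300 million face captures, perfect recall, and the FPRs from above; all of these inputs are illustrative assumptions:

```python
# Base-rate sketch: precision of a nationwide dragnet.
# Precision = TP / (TP + FP). We assume one capture per resident,
# exactly one genuine target, and 100% recall (generous).

n_captures     = 300_000_000  # order of magnitude: one capture per resident
true_positives = 1            # the one person being searched for

for fpr in (0.001, 0.01, 0.10):
    false_positives = n_captures * fpr
    precision = true_positives / (true_positives + false_positives)
    print(f"FPR {fpr:.1%}: {false_positives:,.0f} false hits, "
          f"precision ~{precision:.2e}")
```

Even at the claimed 0.1% FPR that is ~300,000 false hits for one real one, i.e. the chance that any given alarm is your target is about 1 in 300,000.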
I mean, you can still get a lucky hit.
What does 'false positive' mean? That it thinks it is someone else, or that it thinks it is a target of an investigation?