> That needs extraordinary evidence.
Waymo studied this back when it was a Google moonshot, and concluded that going straight to full automation is safer than relying on human supervision. A driving system that mostly works lulls the driver into complacency.
Beyond the automation failure itself, driver complacency was a major factor[1] in the fatal accident that led to the shuttering of Uber's self-driving efforts: the safety driver had been looking at her phone for minutes in the lead-up. It is also why driver attention is monitored in L2 systems.
If rider and pedestrian safety is the main concern, the automated assistance and safety systems that car manufacturers were already developing make the most sense. They warn or intervene in situations where the human may not realize they are in danger, or cannot respond in time. Developing these solves the harder problems first; full automation is easy by comparison.
The idea of mostly automating the system because it's statistically better than humans, while still requiring a human to monitor and respond in exactly the situations the automation can't handle, was flawed logic to begin with. Comparisons of statistics should be made like-for-like, given that these are variables we can easily control for.
For example, robotaxis should at least be compared to professional drivers on similar routes, roads, vehicles, and times of day, not against "all drivers in all vehicles in all scenarios over time" using private company data that cherry-picks "automated driving" miles on highways and the like (where existing assistance systems could already achieve near-perfect results).
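To make the like-for-like point concrete, here is a minimal sketch (in Python, with entirely made-up placeholder numbers, since none of the real data is public) of what a stratified comparison would look like: crash rates computed within each matched stratum of road type and time of day, rather than pooled into one fleet-wide average.

```python
# Toy sketch, NOT real data: compare crash rates within matched strata
# instead of pooling "all drivers, all roads, all times" into one number.
from dataclasses import dataclass

@dataclass
class Stratum:
    name: str            # e.g. "urban arterial, night"
    av_crashes: int      # crashes logged by the autonomous fleet
    av_miles: float      # miles driven by the autonomous fleet
    pro_crashes: int     # crashes by professional drivers, same conditions
    pro_miles: float     # miles by professional drivers, same conditions

def rate_per_million(crashes: int, miles: float) -> float:
    return crashes / miles * 1_000_000

# Placeholder strata, matched on road type and time of day.
strata = [
    Stratum("highway, day",          2, 5_000_000, 3, 6_000_000),
    Stratum("urban arterial, day",   9, 2_000_000, 4, 3_000_000),
    Stratum("urban arterial, night", 7,   500_000, 2,   800_000),
]

for s in strata:
    av  = rate_per_million(s.av_crashes, s.av_miles)
    pro = rate_per_million(s.pro_crashes, s.pro_miles)
    print(f"{s.name:24s}  AV {av:5.2f}  vs  pro {pro:5.2f}  (per 1M miles)")
```

In a pooled average, the millions of easy highway miles would dilute the hard urban strata; comparing within each stratum is exactly what blocks that kind of cherry-picking.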
Companies testing autonomy on the public should be required to hand over all crash data to investigators as a condition of their licensing. The vehicles already capture extremely detailed sensor and video data in order to operate. The fact that we have no verified data to compare against existing human statistics is damning. It's a farce.