I know this might be a hot take but:
I'd bet all my money, and all the money I could borrow, that a Waymo would stop/swerve for a child running out before the sensory nerves in a human's eye even reacted to that child. I'm just saying it's not as egregious a violation when committed by something with a 0.1ms response time. Still a violation, still shouldn't do it, but the worst-case outcome would be much, much harder to realize than with a human driver.
Also, just to add: the fact that there aren't cases of this from Phoenix or SF seems to signal it's a dumb bug in the "Atlanta" build.
If your safety argument is a bet, you already failed the ethics test.
Does SF have school buses?
You’re giving a technical answer to a question that’s actually about the economic and policy incentives.
Yes, electronic sensors can enable the car to react more quickly. But react how?
A buggy or unexpected reaction will just lead to equal or faster tragedy.
Individual drivers are incentivized to keep their behavior in check (or be taken off the road). What legal incentives are there when a faceless company is involved and creates one or two drivers "at scale"?