>We accept the risks with humans because those humans accept risk.
It seems very strange to defend a system that is drastically less safe on the grounds that when an accident happens, at least a human will be "liable". Does a human suffering consequences (paying a fine? losing their license? going to jail?) make an injury or death more acceptable, if it wouldn't have happened with a Waymo driver in the first place?
Even in terms of plain results, I'd say the consequences-based system isn't working so well if it's producing 40,000 US deaths annually.
Yes
I think a very good reason to want to know who's liable is that Google has not exactly shown itself to enthusiastically accept responsibility for the harm it causes, and there is no guarantee Waymo will continue to be safe in the future.
In fact, I could see Google working on a highly complex algorithm to figure out the cost savings from reducing safety, balancing them against the cost of additional marketing and lobbying. We will have zero leverage to do anything if Waymo gradually becomes more and more dangerous.
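To make the incentive concrete, here's a toy sketch of the kind of expected-cost comparison such an algorithm might boil down to. Every number and function name here is invented for illustration; nothing reflects anything Google or Waymo actually does.

```python
# Hypothetical sketch only: all figures and names are made up for
# illustration, not a description of any real company's practice.

def expected_liability_cost(extra_incidents_per_year: float,
                            avg_settlement: float) -> float:
    """Expected annual payout from the extra incidents a safety cut causes."""
    return extra_incidents_per_year * avg_settlement

def net_savings_from_safety_cut(engineering_savings: float,
                                extra_incidents_per_year: float,
                                avg_settlement: float,
                                pr_and_lobbying_offset: float) -> float:
    """Positive result means the cut 'pays for itself' on paper."""
    return (engineering_savings
            - expected_liability_cost(extra_incidents_per_year, avg_settlement)
            - pr_and_lobbying_offset)

# Toy numbers: save $50M on validation, expect 2 extra incidents at
# $10M each, spend $20M more on marketing/lobbying to absorb fallout.
print(net_savings_from_safety_cut(50e6, 2.0, 10e6, 20e6))  # 10000000.0
```

The point of the sketch is the incentive structure: if the expected settlements are the only term pushing back against a safety cut, and those settlements are small relative to the savings, the cut looks "profitable" on paper, which is exactly why knowing who's liable (and how much it costs them) matters.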