Personally I don't know if I care. Unless I can have some guarantee that the AI will prioritize my life and safety over literally any other concern, I'm not sure I would trust it
I don't ever want to be inside an AI driven vehicle that might decide to sacrifice me to minimize other damage
> not sure I would trust it
This is a fair concern, but I'm unconvinced it's even remotely a real market or political pressure.
On the market side, Waymo is constrained by some combination of production capacity and supporting operations. (Tesla, by technology.) On the political side, the salient debate is around jobs, in large part because Waymo's best-in-class record has put to bed many of the practical safety questions.
What would that guarantee look like and would it be legal to sell a product that made that guarantee?
"Prioritizing my life over every other concern" looks like plowing over pedestrians to get me to the hospital. I dont think you can legally sell a product that promises that.
I find it interesting that you don't give other drivers any consideration in your analysis.
Appreciate the honesty.
Sure, but then I don't want you to have a vehicle at all to minimize my own risk.
> to minimize other damage
You mean deaths to multiple other people, do you not? Let's just call a spade a spade here and point out the genuine ethical dilemma.
What's the ratio between "bodies of your own kids" and "other human bodies you have no connection with" that a "proper" AI controlling a car YOU purchased should be willing to trade, in terms of injury or death?
I think most people would argue that it's greater than 1* (unless you are a pure rationalist, in which case, I tip my hat to you), but what "SHOULD" it be?
*Meaning, with a ratio of 2 for example, you would require 2 non-family deaths to justify losing one of your own kids.