As long as there are still safety drivers, the data doesn't really tell you if the AI is any good. Unless you had reliable data about the number of interventions by the driver, which I assume Tesla doesn't provide.
Still damning that the data is so bad even then. Good data wouldn't tell us anything, but the bad data likely means the AI is bad, unless they were spectacularly unlucky. And since Tesla redacts all information, I'm not inclined to give them any benefit of the doubt here.
> As long as there are still safety drivers, the data doesn't really tell you if the AI is any good.
I think we're on to something. You imply that "good" here means the AI can do its thing without human interference. But that's not how we view, say, LLMs being good at coding.
In the first context we hope for AI to improve safety, whereas in the second we merely hope it improves productivity.
In both cases a human is in the loop, which adds second-order complexity: the human adjusts their behaviour to the AI's reality, which in turn redefines what "good AI" means, in an endless loop.
The "safety drivers" do nothing. They sit in the passenger seat and the only thing they have is a button that presumably stops the car and lets a remote operator take over.
> As long as there are still safety drivers, the data doesn't really tell you if the AI is any good. Unless you had reliable data about the number of interventions by the driver, which I assume Tesla doesn't provide.
Sorry, that does not compute.
It tells you exactly whether the AI is any good: despite the safety drivers on board, 9 crashes still happened, which implies that even more would have happened without them. Over 500,000 miles, that works out to roughly one crash every 55,000 miles, which is pretty bad.
Unless you are willing to argue, in bad faith, that the crashes happened because of the safety drivers' interventions.
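For anyone who wants to sanity-check the arithmetic, here's a back-of-the-envelope comparison. The 9 crashes and 500,000 miles are the figures from this thread; the human baseline of roughly one police-reported crash per 500,000 miles is an assumed round number for illustration, not a sourced statistic.

    # Back-of-the-envelope crash-rate comparison (illustrative only).
    # Crash and mileage figures are taken from the thread above; the human
    # baseline is an assumed round number, not a sourced statistic.
    crashes = 9
    miles = 500_000
    robotaxi_rate = crashes / miles * 1_000_000   # crashes per million miles -> 18.0
    assumed_human_rate = 2.0                      # ~1 crash per 500k miles, assumed baseline
    print(f"Robotaxi: {robotaxi_rate:.1f} crashes per million miles")
    print(f"Assumed human baseline: {assumed_human_rate:.1f} crashes per million miles")
    print(f"Ratio: {robotaxi_rate / assumed_human_rate:.0f}x the assumed baseline")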