
jsight · today at 3:29 PM

TBH, the comments here amaze me. The claim is that a human being paid to monitor a driver-assistance feature is 3x more likely to crash than a human driving alone.

That needs extraordinary evidence. Instead, the evidence offered amounts to misleading guesses.


Replies

overfeed · today at 4:37 PM

> That needs extraordinary evidence.

Waymo studied this back when it was a Google moonshot and concluded that going straight to full automation is safer than human supervision. A driving system that mostly works lulls the driver into complacency.

Beyond the automation failure itself, driver complacency was a big component[1] of the fatal accident that led to the shuttering of Uber's self-driving efforts: the safety driver was looking at her phone for minutes in the lead-up. It is also the reason driver attention is monitored in L2 systems.

estearum · today at 3:33 PM

That... is not really an extraordinary claim. It has been many people's null hypothesis since before this technology was even deployed, and the rationale behind it is well enough borne out that vigilance systems exist in nearly every other industry that relies on automation.

A safety system with blurry performance boundaries is called "a massive risk." That's why responsible system designers first define their operational design domain (ODD), then design the system to perform to a pre-specified standard within that ODD.

Tesla's technology is "works pretty well in many but not all scenarios, and we can't tell you which is which."

It is not an extraordinary claim at all that such a system could yield worse outcomes than a human with no assistance.