> What do the statistics say? How many miles have self-driving cars driven, and how many deaths were they responsible for? How does that compare to a human driver?
As of December 2017 Uber had driven 2 million autonomous miles[1]. Let's be generous and double that, so 4 million.
The NHTSA reports a fatality rate (including pedestrians, cyclists, and drivers) of 1.25 deaths per 100 million miles[2] - twenty-five times the distance Uber has driven.
You probably shouldn't extrapolate or infer anything between those two statistics; they're pretty meaningless because we don't have nearly enough data on self-driving cars. But since you asked the question, that's the benchmark: 1.25 deaths per 100 million miles.
Scaling those numbers paints a poor picture for Uber. Assuming 3 million total miles autonomously driven thus far in Uber's program (a rough sketch of the scaling follows the list):
- Uber autonomous: 33.3 deaths per 100 million miles
- Waymo: 0 deaths per 100 million miles
- National average: 1.25 deaths per 100 million miles
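To make the arithmetic explicit, here's a minimal sketch of the normalization being done above; the 3 million mile figure is the rough estimate quoted in this thread, not an official total.

```python
# Minimal sketch of scaling a raw death count to a per-100-million-mile rate.
# The mileage figure is the rough estimate used in this thread, not an official total.

def deaths_per_100m_miles(deaths: float, miles_driven: float) -> float:
    """Scale a raw death count to a rate per 100 million miles."""
    return deaths / miles_driven * 100_000_000

uber_rate = deaths_per_100m_miles(deaths=1, miles_driven=3_000_000)
print(round(uber_rate, 1))  # ~33.3, versus the NHTSA benchmark of 1.25
```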
Of course, the Uber and Waymo numbers are from a small sample size.
But there's also the Bayesian prior that Uber has been grossly negligent and reckless in other aspects of its business, in addition to reports that its self-driving cars have had plenty of other blatant issues, like running red lights.
It seems reasonably possible that an Uber self-driving car is about as safe as a drunk driver. DUIs send people to jail - what's the punishment for Uber?
Scaling those numbers is not useful; in fact, it makes the data less useful.
Comically, that's why OP said not to do it.
Comparing dissimilar things is actually worse than not comparing at all, since it increases the likelihood that some decision will be made based on the false comparison.
The goal is to use the best set of information available to us. I merely cited the normalized numbers because the question has been asked several times in this thread - questions along the lines of "how does this rate compare with human drivers?"
The purpose of the extrapolation was to get a (flawed) approximation to that answer. By itself, it doesn't say much, but all we can do is parse the data points available to us:
- Uber's death rate after approximately 3 million self-driven miles is significantly higher than the national average, and probably comparable to drunk drivers.
- Public reporting on Uber's self-driving program suggests a myriad of egregious issues - such as running red lights.
- The company has not obeyed self-driving regulations in the past, in part because they were unwilling to report "disengagements" to the public record.
- The company has a history of an outlier level of negligence and recklessness in other areas - for example, sexual harassment.
But this is precisely why you should not simply extrapolate. Of course people ask, and of course an answer would be useful. But extrapolating one figure of 3M miles to a typical measure (per 100M) is not useful, because it provides no actionable information.
Providing this likely wrong number anchors a value in people’s minds.
It’s actually worse than saying “we don’t know the rate compared to human drivers because there’s not enough miles driven.”
Your other points are valid, but they don't excuse poor data hygiene.
Even now you are making a claim that is baseless on its face, because you don't know the human fatality rate per 3M miles well enough to say Uber's is "significantly higher." I do think there's probably enough human-driver data out there to construct samples comparable to Uber's. But simply dividing by 33 is not sufficient to support your statement.
I haven't seen the public reporting you mention. That seems interesting, and I'd appreciate it if you could link to it.
> the self-driving car was, in fact, driving itself when it barreled through the red light, according to two Uber employees, who spoke on the condition of anonymity because they signed nondisclosure agreements with the company, and internal Uber documents viewed by The New York Times. All told, the mapping programs used by Uber’s cars failed to recognize six traffic lights in the San Francisco area. “In this case, the car went through a red light,” the documents said.
It depends on what question you're trying to answer with the data (however incomplete one might view it).
Is the data sufficient to say whether Uber might eventually arrive at a usable self-driving vehicle? Plainly no - it's not sufficient to answer that question one way or the other.
Is the data sufficient to indicate whether Uber is responsible enough to operate an automated test-vehicle program on public roads? Maybe.
There still needs to be an investigation of cause, but if the cause lies in an autopilot failure, or in testing protocols that failed to keep a failing autopilot from harming the public, then the question is what the remedy should be.
I agree that you have to use data available to make the best decision possible.
There may be methods that account for all of the problems of comparing two different measures, but they require a lot of explanation.
Extrapolating one measure into another without those caveats is wrong, and that's what the comment I replied to did. So in no situation would that method be useful for answering any reasonable question being asked here.
I think it's very relevant. If the testing protocols are insufficient to keep avoidable accidents within some outer bound of accident rates, and this is a clear data point outside those bounds (even accounting for uncertainty), one could make a case to severely limit or ban Uber's testing on public roads and require that they demonstrate sufficient maturity of testing procedures and data before being allowed back onto the roads. This as opposed to waiting for another "data point" (a death).
We absolutely should extrapolate something from those statistics.
Let's assume that the chance of a fatality is the same in any two intervals of the same number of miles traveled. Let's say that the threshold for self-driving cars being "good enough" is the same death rate as human drivers.
If we assume Uber is good enough, then they should kill people at a rate of at most 1.25/100,000,000 per mile. The waiting time until they first kill someone should then follow an exponential distribution: the probability that a death occurs in the first t miles is 1 - e^(-lambda t), where lambda, the rate of killing people, is 1.25/100,000,000. I.e. 1 - e^(-(1.25 / 100,000,000) x 4), which is 5 x 10^-8.
If Uber has only a 5 x 10^-8 probability of driving "safely enough" they should lose their license at the very least.
Edit: Oops, 4 != 4,000,000. It should be 1 - e^( -(1.25 / 100,000,000) x 4,000,000) which is about 0.049...
Still, I think we can ask for better than a 5% chance of being the same as a human driver.
(also replaced stars with 'x' because HN was making things italic)
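For concreteness, here's a minimal sketch of that corrected calculation, assuming the exponential/Poisson model described above and the generous 4 million mile estimate used in this thread:

```python
import math

# Probability of at least one fatality in the first t miles, assuming fatalities
# arrive as a Poisson process at the human-driver rate (exponential waiting time).
rate_per_mile = 1.25 / 100_000_000   # NHTSA benchmark: 1.25 deaths per 100M miles
miles_driven = 4_000_000             # generous estimate of Uber's autonomous miles

p_at_least_one_death = 1 - math.exp(-rate_per_mile * miles_driven)
print(round(p_at_least_one_death, 3))  # ~0.049, i.e. roughly a 5% chance
```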
Also, probably 95 percent of the autonomous miles are driven under the easiest conditions - a sunny day between 9 and 5 - because most are logged around the Arizona/California test centers.
1.25 per 100 million miles is almost certainly a bad benchmark since the majority of those miles are interstate miles. Fatality rate per mile of urban driving would be much better, although I'm not really sure whether I would expect that number to be higher or lower.
Edit: Actually, maybe I'm wrong in assuming (a) the majority of miles driven are interstate miles, or (b) that the majority of existing miles logged by self-driving cars have not been on the interstate. Would love to see some data if anyone has it, although I suspect Google, Uber, et al. are reluctant to share data at this point.
If the accident had happened with the autonomous vehicle of ANY company, we would still be talking about this and estimating the number of deaths per 100 million miles.
Therefore, I think it would be more fair to consider all miles run by all autonomous vehicles all over the world in the denominator.
It is for the same reason that we want to consider all miles driven everywhere, not just those in Arizona.
[1]: https://www.forbes.com/sites/bizcarson/2017/12/22/ubers-self...
[2]: https://en.wikipedia.org/wiki/Transportation_safety_in_the_U...