
About 33,000 people die on America’s roads every year.

There will eventually be a fatality caused by a self-driving car. I wonder how many of these we will be willing to accept. Say self-driving cars reduce fatalities by 10x. That is, by switching 100% to self-driving cars there would be 3,300 deaths a year, but all of them would be caused by self-driving cars.

Would America accept that?

I personally doubt we would. I think the number will have to be in the hundreds at most, closer to what we accept from plane crashes.
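
To make that arithmetic concrete, here is a minimal back-of-the-envelope sketch in Python. The 33,000 baseline is the figure from the top of this comment; the ~500 "acceptable" threshold is only a guess at a plane-crash-like tolerance ("in the hundreds"), not a real statistic.

  # Back-of-the-envelope version of the numbers above.
  # 33,000 is the rough annual US road-death figure cited in this thread;
  # the 500 "acceptable" threshold is a hypothetical plane-crash-like
  # tolerance, not a real statistic.
  BASELINE_DEATHS = 33_000
  ACCEPTABLE_DEATHS = 500  # hypothetical tolerance, deaths per year

  for reduction_factor in (2, 10, 100, 1000):
      deaths = BASELINE_DEATHS / reduction_factor
      verdict = "acceptable" if deaths <= ACCEPTABLE_DEATHS else "probably not"
      print(f"{reduction_factor}x safer: {deaths:,.0f} deaths/year -> {verdict}")

Even at a 10x reduction you are still at 3,300 deaths a year, an order of magnitude above "the hundreds"; by this rough yardstick the cars would need to be something like 100x safer before the toll looks plane-crash-like.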

How safe do self-driving cars have to be before we start using them?



(Disclaimer: I work at Google but not on self-driving cars. My opinion, not my employer's, follows)

What about the reduced individual and societal costs (insurance, medical care, lost productivity, psychological trauma) that come with a 10x reduction in driving fatalities?

People might see the value of a 10x (or even smaller) reduction if it is expressed in those terms.


Common misconception amongst technophiles like us: humans aren't rational. We accept flaws in humans, but not so much in machines, because we can sympathise with human error, while a machine error is always a defect.


I think the problem is, people really don't care about autonomous objects.

There are so many unknowns with this that it's really hard to know. Like, if the self-driving car gets in an accident (one that is the fault of the self-driving car), who is responsible? Google? Me? The other driver? The car maker? The car seller? The insurance company?

Humans have kind of accepted that humans make mistakes; from day one until the day you die, you're trained to expect that humans make mistakes. It's a bit different when it's a program with a giant state machine.


> Like, if the self-driving car gets in an accident (one that is the fault of the self-driving car), who is responsible? Google? Me? The other driver? The car maker? The car seller? The insurance company?

The insurance company is not going to be responsible in the first instance, though it may be insuring someone who is responsible.

For the rest, it's hardly as if responsibility for damages that aren't attributable to improper operation, but instead result from the design or manufacture of the vehicle, or from the owner's failure to keep the vehicle in a safely operable condition, has not been extensively litigated already. So, while "self-driving cars" might be a new thing, the legal system isn't devoid of guidance on how liability could apply to them.

> Humans have kind of accepted that humans make mistakes

Self-driving car errors are all human mistakes, even if the mistake only becomes visible at some distance (potentially in both time and space) from the error itself.


Two self-driving cars, from different owners, crash: who is going to pay for that?


The one at fault in the accident.


That's the point!

Currently there are only two sides, but with self-driving cars both sides could try to blame the other OR any other related entity (even one from their own side). What if both sides agree to go after a third one?


I think one of the problems is that when a self-driving car does cause an accident, it will probably do so in an obviously incompetent way, like running into a tree that was not mapped correctly, in a way a human never would.

The cars may not be wrong often, but I think we would have a hard time accepting accidents when we cannot understand why they happened.


I don't think it's so different from people being killed by a mechanical failure in a plane. Sure, you can put the failure back on the mechanic or the engineer who designed the plane, but you can do that with software too.


Why not?


Because it's easier to accept if it was a human's fault, not some piece of code.



