
For the "thinking a panda is a vulture" problem, don't humans fail in similar ways? The analogous examples for us are camouflage, optical illusions, logical fallacies, etc.

It doesn't really have to be perfect as long as it doesn't fail in common scenarios.



Humans don't appear to fail in the same ways - camouflage and optical illusions are very different from the specific, imperceptible-to-humans perturbations that trick neural networks. Then again, there's no way to test the method on humans, because you need to know the neural network's weights, and that is tricky for people!

In practice it probably doesn't matter anyway - the chance of the exact required perturbation of the input occurring by accident is infinitesimal, due to the high dimensionality of the input. And even if it were a problem, there are ways around it.
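
For anyone curious what "knowing the weights" buys the attacker, here is a minimal sketch of the fast gradient sign method (Goodfellow et al.), assuming PyTorch and a pretrained torchvision ResNet; the function name and epsilon value are just illustrative:

    # Minimal FGSM sketch: the attacker reads the gradient of the loss
    # w.r.t. the input pixels (white-box access to the weights), and the
    # perturbation is one specific tiny step per pixel, not random noise.
    import torch
    import torch.nn.functional as F
    from torchvision import models

    model = models.resnet18(weights="IMAGENET1K_V1").eval()

    def fgsm(image, label, epsilon=0.007):
        # image: (1, 3, H, W) float tensor, normalised as the model expects
        # label: (1,) long tensor holding the true class index
        image = image.clone().requires_grad_(True)
        loss = F.cross_entropy(model(image), label)
        loss.backward()
        # nudge every pixel by +/- epsilon in the direction that raises the loss
        return (image + epsilon * image.grad.sign()).detach()

Applied to a correctly classified image x and its label y, model(fgsm(x, y)).argmax() will often be a completely different class even though the change is invisible to a human. And because the gradient sign picks out one specific direction in a roughly 224*224*3 ≈ 150,000-dimensional pixel space, ordinary noise essentially never reproduces it, which is the point above about the required perturbation not occurring by accident.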


For the "thinking a panda is a vulture" problem, don't humans fail in similar ways?

This is a good question. My impression is that humans fail and artificial neural networks fail, but we don't know enough about the brain to say whether artificial neural networks fail in the same way humans do.

As another poster notes, humans accept human error more readily than computer error, and I think that's because humans have an internal model of what other humans will do. If I see a car weaving in a lane and going slowly, I have some idea of what's happening. I don't think that model would extend to a situation where a neural-network-driven car was acting "wonky".


Is this a good time to ask whether the dress is blue/black or white/gold? ;)


It should never fail, since any failure could create a fatal scenario. People usually accept fatalities caused by human error, but they won't accept death caused by algorithmic failure.


I suspect that it won't take long for people to come to terms with it in the same way we now "accept" industrial accidents. "Accept" in this case simply means that the industry in question is allowed to continue doing business.


That's an unattainably high acceptance bar. A more reasonable one would be mass adoption of self-driving cars as soon as they cause fewer accidents than human drivers.


Not every car crash ends in death. But the AI will learn a lot from each crash. I think mistakes and 'bugs' in the system will get ironed out in low-speed crashes and in high-speed crashes on test circuits...

Have you seen the AI Formula 1 series called Roborace? Once those cars get good enough to beat Lewis Hamilton or Seb Vettel, I'll trust one with me and my family.


Do people accept death due to autopilot error in aeroplanes? It's the same thing. There have been no demands for autopilot to be removed from planes, nor mass refusal to fly. The reason is that most people can see that autopilot is an overall safety gain compared with making a human concentrate on the same thing for long periods of time.


> It doesn't really have to be perfect as long as it doesn't fail in common scenarios.

I agree that it doesn't have to be perfect, but the standard should be higher than "doesn't fail in common scenarios." We should also expect graceful handling of many uncommon but plausible scenarios. We expect human drivers to handle more than just common scenarios, and human drivers are pretty bad.



