
People at Tesla and other autonomous driving companies, of course are aware and worry about such situations. If you have a few hours and want to see many of the technologies and methods that Tesla is using to solve them, check out Tesla's recent "AI day" presentation. Tesla is quite cool about openly discussing the problems they have solved, problems they still have, and how they are trying to solve them.

An incomplete list includes:

1) Integrating all the camera views into one 3-D vector space before training the neural network(s).

2) A large in-house group (~1,000 people) manually labeling objects in that vector space, rather than on each camera image.

3) Training neural networks for labeling objects.

4) Finding edge cases where the car failed (for example, losing track of the vehicle ahead when its view is obscured by a flurry of snow blown off that vehicle's roof), then querying the large fleet of cars on the road for thousands of similar situations to use in training.

5) Overlaying multiple views of the world from many cars to get a better vector-space mapping of intersections, parking lots, etc.

6) New custom-built hardware for high-speed training of neural nets.

7) Simulations to train on rarely encountered situations, like the one you describe, or situations that are very difficult to label (like a plaza with 100 people in it or a road in an Indian city).

8) Matching 3-D simulations to what the cars' cameras would see, using many software techniques.
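The core idea behind item 1 can be illustrated with a toy example. This is only a minimal sketch of projecting per-camera observations into one shared 3-D frame; Tesla's actual system learns this fusion end-to-end with neural networks, and the camera poses and observations below are invented for illustration.

```python
import numpy as np

def camera_to_world(point_cam, rotation, translation):
    """Transform a point from a camera's local frame into the shared world frame."""
    return rotation @ point_cam + translation

# Two hypothetical cameras observing the same obstacle.
# Camera A: identity orientation, mounted at the origin.
rot_a = np.eye(3)
t_a = np.zeros(3)
obs_a = np.array([2.0, 0.0, 10.0])   # obstacle in camera A's frame

# Camera B: same orientation, mounted 1 m to the right.
rot_b = np.eye(3)
t_b = np.array([1.0, 0.0, 0.0])
obs_b = np.array([1.0, 0.0, 10.0])   # the same obstacle, seen from camera B

world_a = camera_to_world(obs_a, rot_a, t_a)
world_b = camera_to_world(obs_b, rot_b, t_b)

# Both observations land on the same world-frame point, so labeling and
# training can happen once, in one shared space, instead of per camera.
fused = (world_a + world_b) / 2
print(fused)  # -> [ 2.  0. 10.]
```

Once everything lives in one vector space, items 2 and 5 follow naturally: labelers annotate that space directly, and observations from many cars can be overlaid in the same frame.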



They're cool about openly discussing it because this is all industry standard stuff. It's a lot of work and impressive, but table stakes for being a serious player in the AV space, which is why the cost of entry is in the billions of dollars.


> People at Tesla and other autonomous driving companies, of course are aware and worry about such situations.

Yeah, a Tesla couldn't possibly drive into a stationary, clearly visible fire engine or concrete barrier, on a dry day, in direct sunlight.


As awful a failure as that is, and as fun as it is to mock Tesla for it, the claim was that they're aware of edge cases and working on fixing them, not that they're already fixed. So your criticism doesn't really make sense.


A system dealing with 'edge cases' by special-casing them is not going to work for driving: driving is a continuous string of edge cases, and if you approach the problem that way you fix one problem but create the next.


I don't think anybody said anything about special casing them.

I dislike saying anything in defense of tesla's self-driving research, but let's be accurate.


Neither could a human, I'm sure.

At least, I never would...


If you never fail, you aren't moving fast enough.

A million people are killed globally each year by motor vehicles. Staggering amounts of pain and injuries. Massive amounts of property damage. Tesla's cars are not supposed to be left to drive themselves. The chance to prevent so much carnage seems worth letting some people driving Teslas, who fail to pay attention to the road, suffer the consequences of poor decisions.

Plus these problems are likely to be mostly fixed because they happened.


> If you never fail, you aren't moving fast enough.

Start-up religion doesn't really work when there are lives on the line. That's fine for your social media platform du jour but please don't bring that attitude to anything that has 'mission critical' in the description. That includes medicine, finance, machine control, traffic automation, utilities and so on.


But what about that million people who die every year now? Are the few thousand people who will die because of AI mishaps worth more than the million who die due to human mishaps?

Not to say that we shouldn't be cautious here, but over-caution kills people too.



You described a lot of effort, but no results.



