Or an ambulance going in the opposite direction (because that’s the only available choice) on a boulevard in a busy capital city like Bucharest. Saw that a couple of hours ago: the ambulance met a taxi which was going the right way, but of course the taxi had to stop and find a way for the ambulance to pass (by partly driving onto the sidewalk). I said to myself that unless we get to AGI there’s no way for an “autonomous” car to handle that situation correctly.
You don't even need to go that far. The other day I saw an ambulance going down Burrard Street in Vancouver, BC without lights or sirens; then I guess a call came in, because it put on both and turned around. It's a six-lane street where normal cars aren't allowed to just turn around. It was handled really well by everyone involved, mind you; it wasn't unsafe, but I doubt a computer could've handled it as well as the drivers did.
I don't believe people are using their full AGI when driving (and the full "AGI" may well turn out to be a set of basic pattern-matching capabilities which we haven't discovered yet). After decades of driving, the behavior is pretty automatic, and when presented with a complex situation, following a simple rule, like just braking, is frequently the best response, or close to it.
To me the solution to that is obvious and far better than the current status quo: the cars are all connected to a network, and when an emergency service vehicle needs to get somewhere in a hurry there is a coordinated effort to move vehicles off the required route.
As things stand, emergency vehicles have to cope with a sizable minority of people who completely panic and actually impede their progress.
This has to work even if network reception is weak or absent. You can't be certain that 100% of cars will receive the signal and get themselves out of the way in time.
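The coordination idea above, and its failure mode when reception is absent, can be sketched roughly as follows. Everything here is invented for illustration (the `Car`/`clear_route` names, the message shape); real V2X systems such as the ETSI ITS or SAE J2735 message sets work quite differently. The key point is that unreachable cars must still be handled the old way, so the scheme can only assist, not replace, lights and sirens:

```python
# Toy sketch of coordinated route clearing with a network-failure fallback.
# All names are hypothetical; no real V2X protocol is implied.
from dataclasses import dataclass

@dataclass
class Car:
    car_id: str
    reachable: bool  # models weak or absent network reception

    def receive_clear_request(self, route_id: str) -> bool:
        # A real car would plan a pull-over manoeuvre here; we just ack
        # if the message was actually delivered.
        return self.reachable

def clear_route(route_id: str, cars: list[Car]) -> tuple[list[str], list[str]]:
    """Broadcast a clear-route request and split cars into acked vs unreachable.

    Cars in the fallback list never got the message, so the emergency
    vehicle still has to rely on sirens for them.
    """
    acked: list[str] = []
    fallback: list[str] = []
    for car in cars:
        (acked if car.receive_clear_request(route_id) else fallback).append(car.car_id)
    return acked, fallback

cars = [Car("a", True), Car("b", False), Car("c", True)]
acked, fallback = clear_route("route-17", cars)
# acked == ["a", "c"]; car "b" never received the signal and needs sirens
```

Even this toy version makes the objection concrete: the dispatcher can never assume an empty fallback list, so the system degrades to today's behavior rather than replacing it.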
Oh you can have that in Bucharest even with regular cars. Lanes are pretty fluid there, as is the preferred direction of travel, I've lived there for only two years and I've seen more vehicles go in the opposite direction ('ghost riders' we call them here) than anywhere else over the rest of my life. Romanian traffic is super dangerous, especially if you are a pedestrian and you can just about forget cycling in traffic. It is also the only place where a car behind me honked to get me to move over when I was walking on the sidewalk.
People at Tesla and other autonomous-driving companies are, of course, aware of and worried about such situations. If you have a few hours and want to see many of the technologies and methods that Tesla is using to solve them, check out Tesla's recent "AI Day" presentation. Tesla is quite cool about openly discussing the problems they have solved, the problems they still have, and how they are trying to solve them.
An incomplete list includes:
1) Integrating all the camera views into one 3-D vector space before training the neural network(s).
2) A large in-house group (~1,000 people) doing manual labeling of objects in that vector space, rather than on each camera view.
3) Training neural networks for labeling objects.
4) Finding edge cases where the autocar failed (for example, when it loses track of the vehicle in front of it because its view is obscured by a flurry of snow knocked off that vehicle's roof), and then querying the large fleet of cars on the road to get back thousands of similar situations to help with training.
5) Overlaying multiple views of the world from many cars to get a better vector-space mapping of intersections, parking lots, etc.
6) New custom-built hardware for high-speed training of neural nets.
7) Simulations to train on rarely encountered situations, like the one you describe, or on situations that are very difficult to label (like a plaza with 100 people in it or a road in an Indian city).
8) Matching 3-D simulations to what the cars' cameras would see, using many software techniques.
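The core of points 1 and 2, fusing per-camera detections into one shared vehicle-frame space so objects are labeled once in 3-D rather than per camera, can be illustrated with a toy geometric sketch. The camera mounts and coordinates below are invented, and Tesla's actual pipeline uses learned neural fusion rather than fixed-transform arithmetic; this only shows the idea of a common frame:

```python
# Toy sketch: project detections from several cameras into one shared
# vehicle-frame "vector space". Mount positions/angles are hypothetical.
import math
from dataclasses import dataclass

@dataclass
class Camera:
    name: str
    mount_xy: tuple[float, float]  # position on the car, metres
    yaw: float                     # mounting angle, radians

    def to_vehicle_frame(self, det_xy: tuple[float, float]) -> tuple[float, float]:
        """Rotate a camera-frame detection by the mount yaw, then translate."""
        x, y = det_xy
        c, s = math.cos(self.yaw), math.sin(self.yaw)
        mx, my = self.mount_xy
        return (mx + c * x - s * y, my + s * x + c * y)

def fuse(detections: dict[str, list[tuple[float, float]]],
         cameras: dict[str, Camera]) -> list[tuple[float, float]]:
    """Merge per-camera detections into one vehicle-frame list,
    so a single object can be labeled once in the shared space."""
    fused: list[tuple[float, float]] = []
    for cam_name, dets in detections.items():
        cam = cameras[cam_name]
        fused.extend(cam.to_vehicle_frame(d) for d in dets)
    return fused

cams = {
    "front": Camera("front", (2.0, 0.0), 0.0),
    "left": Camera("left", (0.0, 1.0), math.pi / 2),
}
# front sees an object 10 m ahead of its mount; left sees one 5 m out its side
dets = {"front": [(10.0, 0.0)], "left": [(5.0, 0.0)]}
fused = fuse(dets, cams)
# roughly [(12.0, 0.0), (0.0, 6.0)] in the car's frame, up to float noise
```

Once everything lives in the same frame, a single human label (or a single network output) covers an object regardless of how many cameras saw it, which is what makes labeling in the vector space cheaper than labeling each camera view.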
They're cool about openly discussing it because this is all industry standard stuff. It's a lot of work and impressive, but table stakes for being a serious player in the AV space, which is why the cost of entry is in the billions of dollars.
As awful a failure as that is, and as fun as it is to mock Tesla for it, the claim was that they're aware of edge cases and working on fixing them, not that they're already fixed. So your criticism doesn't really make sense.
A system dealing with 'edge cases' by special-casing them is not going to work for driving; driving is a continuous string of edge cases, and if you approach the problem that way you fix one problem but create the next.
A million people are killed globally each year by motor vehicles. Staggering amounts of pain and injuries. Massive amounts of property damage. Tesla's cars are not supposed to be left to drive themselves. The chance to prevent so much carnage seems worth letting the few people driving Teslas who fail to pay attention to the road suffer the consequences of their poor decisions.
Plus these problems are likely to be mostly fixed precisely because they happened.
> If you never fail, you aren't moving fast enough.
Start-up religion doesn't really work when there are lives on the line. That's fine for your social media platform du jour but please don't bring that attitude to anything that has 'mission critical' in the description. That includes medicine, finance, machine control, traffic automation, utilities and so on.
But what about that million people who die every year now? Are the few thousand people who will die because of AI mishaps worth more than the million who die due to human mishaps?
Not to say that we shouldn't be cautious here, but over-caution kills people too.