I wonder if Tesla FSD is properly considering a windowed sample of images at a high enough sample rate to see modern high intensity strobe lights used on emergency vehicles.
Clearly the dash cam used is sampling at a high enough rate to see them, but perhaps the Tesla isn't considering a view of the road ahead over a wide enough window of time to see the hazard lights and understand that the vehicle is potentially stationary and alerting drivers to a hazard.
A very naive FSD algorithm working only from the current frame (without looking at past frames) could certainly mess this up, but at what point do we accept this simply isn't fit for purpose? If it can't avoid hitting a stationary vehicle, what hope does it have with slowly moving vehicles and other edge cases?
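For what it's worth, detecting a strobe from video isn't exotic once you keep a short history of frames instead of classifying each frame in isolation. A minimal sketch, assuming a hypothetical per-region brightness history and an assumed ~36 Hz camera (this is not Tesla's actual pipeline, just an illustration of the windowing idea):

    import numpy as np

    FRAME_RATE_HZ = 36   # assumed camera sample rate

    def looks_like_strobe(brightness_history, min_hz=1.0, max_hz=8.0):
        """brightness_history: mean brightness of a candidate light region,
        one value per frame over roughly the last second."""
        signal = np.asarray(brightness_history, dtype=float)
        signal = signal - signal.mean()               # remove any steady glow
        spectrum = np.abs(np.fft.rfft(signal))
        freqs = np.fft.rfftfreq(len(signal), d=1.0 / FRAME_RATE_HZ)
        band = (freqs >= min_hz) & (freqs <= max_hz)  # typical strobe flash rates
        if not band.any():
            return False
        # Strong periodic energy in the strobe band relative to the rest
        return spectrum[band].max() > 3.0 * (spectrum.mean() + 1e-9)

The point isn't the specific maths, just that a flashing light is only detectable at all if the system reasons over a window of frames rather than a single snapshot.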
I'm curious how Tesla FSD handles obstacles that aren't cars or people on roads. Where I drive, I often have to slam on my brakes to avoid hitting deer, turkeys, and small rodents, or running over squashed skunks, etc. I assume that Tesla FSD would notice some of these obstacles, but it wouldn't be able to notice a deer on the edge of the road and identify it as slowdown-worthy... would it?
Curious if any rural Tesla owners can chime in on this -- it's not exactly a documented feature.
Similarly curious - my understanding is that Tesla w/ FSD effectively drives as a relatively selfish and "reactive" driver, theoretically dealing with anything entering its driving path.
I'd also be interested to know if the Tesla FSD can react to "foreseeable second order" issues that an experienced driver is expected to respond to. For example, a pedestrian crossing (giving priority to pedestrians) where visibility of the waiting pedestrian is obscured by a parked vehicle. A human driver can recognise they lack visibility of the footpath around the crossing (and recognise the crossing by road markings and street furniture), and therefore slow on approach, since there is a risk they may need to stop rapidly.
That principle of "slow on approach, since there is a risk" is something I don't feel I've seen in any of Tesla's videos.
Even in the ones where they proudly show a Tesla avoiding a collision with a person, the car seems to leave it to the very last second and then slam on the brakes, by which point my mind is already screaming "BRAKE BRAKE BRAKE", a couple of seconds after I would have started precautionary slowing.
I have no idea how we’re ever going to achieve trustworthy self-driving cars before we achieve full artificial intelligence.
I know that how I react to different situations is fully contingent on recognizing what I’m actually looking at and being able to generate a risk probability model in real time.
If I don’t know what I’m looking at, I have to be extremely conservative, and if I had to do that every time, my driving would probably be very uncomfortable.
I'm out in the country and often see quails and jackrabbits running across the road (both appear extremely suicidal in darting in front of cars). I can't imagine any auto-pilot being able to deal with that. Now I know the areas where they tend to dart, so precautionary slow driving helps, but would an auto-pilot know that as well?
Forget flashing lights. Why would the car not stop or even slow down when a large immovable object is within impact distance? Auto-brake has been around for a decade now.
You make a very good point. I've seen videos from Tesla's AI days where they show debug bounding boxes around identified objects, and the boxes always seem to disappear when objects are occluded. Perhaps that's just a visualisation choice, but I'm taking the precautionary approach of assuming it means that the car has no object permanence because it's not considering a wide enough time window.
They just released an Emergency Vehicle Detection feature on basic autopilot, and so far it works.
This is a big part of the rewrite of autopilot, the ML they use can now persist data through time, so it can reasonably predict if occluded objects will remain in place.
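To make "persist data through time" concrete, here's a toy sketch of track persistence through occlusion. The Track class, the matching threshold and the coast period are all invented for illustration; nothing here is taken from Tesla's actual stack:

    from dataclasses import dataclass

    COAST_FRAMES = 30   # assumed grace period (~1 s at 30 fps)

    @dataclass
    class Track:
        position: float   # distance ahead along our lane, metres
        velocity: float   # metres/second along the lane
        missed: int = 0   # consecutive frames with no matching detection

    def update_tracks(tracks, detections, dt):
        """detections: list of (position, velocity) measurements this frame."""
        survivors = []
        for track in tracks:
            match = next((d for d in detections
                          if abs(d[0] - track.position) < 2.0), None)
            if match is not None:
                track.position, track.velocity = match
                track.missed = 0
            else:
                # Occluded: coast the track on its last known velocity
                # instead of forgetting the object exists
                track.position += track.velocity * dt
                track.missed += 1
            if track.missed <= COAST_FRAMES:
                survivors.append(track)
        return survivors

(New detections would also have to spawn new tracks; that part is omitted to keep the sketch short.)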
The crash was in August, well before this feature came out. The FSD rewrite has been fixing the issues with static object detection. Recommend watching Andrej Karpathy's AI Day presentation if you're interested in how and why static object detection is such a challenge.
Slowly moving is much easier for a lot of systems to handle than stationary. An easy way to distinguish a car from the guard rails or other fixed objects is to look at movement; most radar systems do this.
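To illustrate, here's roughly what that movement-based filter looks like; the function and the tolerance are made up, but discarding returns that are stationary in the world frame (so the system doesn't brake for bridges, signs and guard rails) is the commonly cited reason radar-era driver assistance struggles with stopped cars:

    def is_stationary(range_rate_mps, ego_speed_mps, tolerance_mps=1.0):
        # range_rate < 0 means the object is closing on us; adding our own
        # speed back recovers the object's speed in the world frame
        world_speed = range_rate_mps + ego_speed_mps
        return abs(world_speed) < tolerance_mps

    # A parked car dead ahead while we travel at 30 m/s closes at about -30 m/s:
    print(is_stationary(-30.0, 30.0))   # True -> historically treated as clutter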
Very true, but I guess from a probabilities perspective, you're highly likely to encounter a stationary car in the driving lane (i.e. a badly parked car). I can't think of a time I've driven and not encountered this, and it's easy to react in advance before it even becomes urgent (see parked cars ahead, move over a lane, no inconvenience experienced...)
I wonder how it would cope with other items that don't "belong" in the roadway sitting stationary - skips, bins, shipping containers, large piles of bricks or rubble etc.
I mean sure, autopilot bad. No argument from me there. But how in the world are you in the driver's seat of a car and not see a car stopped in the middle of the road with flashing lights?
The whole "you need to pay full attention when using autopilot" argument is nonsense, but surely people aren't turning on self driving and then just mentally checking out entirely? Do people really trust this tech enough to not even keep an eye on the road?
I didn't realize from the link title that it also collided with the stopped vehicle. Wow.
It made me wonder, are the driving models trained at all to recognize accidents or other road anomalies? A plane making an emergency landing in front of them? Just curious.
Edit: Would also be interested to know if there's a way police could work with Tesla or other autopilot carmakers to broadcast special codes like don't-hit-us-here-is-our-location, or just send a signal blocking autopilot, or whatever. They already send texts and radio alerts, and there are already local AM alert stations in the US anyway.
If my kids were going to be driving, I'd rather buy from that kind of manufacturer.
I think that by even asking whether the system has been trained to recognize accidents, you're giving it too much credit.
It simply does not recognize objects that are not moving. This is not a smart system but something very dumb. As for accidents, it will never be able to understand visually what constitutes an accident site.
> Would also be interested to know if there's a way police could work with Tesla or other autopilot carmakers to broadcast special codes like don't-hit-us-here-is-our-location, or just send a signal blocking autopilot, or whatever. They already send texts and radio alerts, and there are already local AM alert stations in the US anyway.
I don't think this would be a good solution, since inherently these incidents happen when there's an "exception". If every stationary emergency vehicle had to send a notification each time its lights were activated, this might work, until you're in an area without coverage.
To me, we need to raise the bar for what's acceptable - this vehicle would presumably also have ploughed into an injured motorist's car before the police had arrived... The root problem isn't the lack of notice of the stopped emergency vehicle; rather it's the apparent inability to avoid a stationary vehicle on the roadway, which should be the very first (and simplest) thing to detect, since the self-driving vehicle is closing in on it and can see its relative position isn't changing. If it can't do that, IMO it shouldn't be on a public road, as it isn't capable of basic speed management in traffic without being able to gauge closing speed against the vehicle ahead.
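For a sense of how simple the basic check is, here's a back-of-the-envelope time-to-collision calculation; it assumes nothing beyond being able to estimate range to the car ahead at two points in time:

    def time_to_collision(range_now_m, range_1s_ago_m):
        closing_speed = range_1s_ago_m - range_now_m   # metres per second
        if closing_speed <= 0:
            return float("inf")                        # not closing, no threat
        return range_now_m / closing_speed

    # 80 m ahead now, 110 m a second ago -> closing at 30 m/s, ~2.7 s to impact
    print(round(time_to_collision(80.0, 110.0), 1))

If that number keeps shrinking and the system does nothing, it isn't gauging closing speed at all.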
> Edit: Would also be interested to know if there's a way police could work with Tesla or other autopilot carmakers to broadcast special codes like don't-hit-us-here-is-our-location, or just send a signal blocking autopilot, or whatever. They already send texts and radio alerts, and there are already local AM alert stations in the US anyway.
I think that's solving the wrong problem; as the rest of your comment implies, it's just as important not to hit other stationary objects like, say, another Tesla that has decided it's going to stop in the middle of the road[1].
That's a much more complicated feature than you'd think.
What does the car do when someone cuts you off on the highway? You're going 60 mph and there's now an object 3 feet in front of you. The car doesn't know whether it's moving or not; it takes time to measure velocity.
Or if you're on one of those curved highway offramps with concrete barriers. There is legitimately an object directly in front of you.
Or if it's just a plastic bag that you can totally just plow into.
We can solve those problems, but it requires stuff like planning a route so you know if there are things in the path, and tracking objects around the car so you already know how fast they're going.
A naive "if object in front of car and impact will happen in less than X seconds" approach is going to misfire often, and people are going to rear-end the car when it slams on its brakes at 75 mph on the highway for no apparent reason.
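To make that failure mode concrete, here's a sketch of the naive rule and one way it misfires; the numbers are purely illustrative:

    def naive_should_brake(gap_m, ego_speed_mps, ttc_threshold_s=2.0):
        if ego_speed_mps <= 0:
            return False
        # Treats the gap as if it were closing at our full speed
        return gap_m / ego_speed_mps < ttc_threshold_s

    # Someone cuts in 1 m ahead at our same 27 m/s (~60 mph): the gap isn't
    # actually closing, but this rule slams on the brakes anyway.
    print(naive_should_brake(gap_m=1.0, ego_speed_mps=27.0))   # True

The same blind spot fires on a barrier at the outside of a curved offramp that the planned path never actually touches.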
I have a Honda that seems to have solved all those problems. It doesn't really "take time" to measure velocity, at least in the context of moving objects with inertia and human reaction time.
There is a way to deal with emergency braking that could be considered obvious and yet ingenious, if you have radar or something that can identify solid objects.
Instinctively, you'd think these systems are complicated, and that it's impossible to guarantee correct behavior with them.
But you might set the threshold so that braking only triggers slightly after the point where avoiding the accident becomes physically impossible, given the required acceleration vector. That basically guarantees that you can't make things worse.
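Here's a rough sketch of that trigger with some assumed numbers (friction coefficient, required lane offset); it fires only once steering around the obstacle would demand more lateral acceleration than the tyres could plausibly deliver, which is the point at which braking can't make the outcome worse:

    MU = 0.8            # assumed tyre-road friction coefficient
    G = 9.81            # m/s^2
    LANE_OFFSET = 1.8   # metres of sideways displacement needed to clear the car

    def evasion_impossible(speed_mps, range_m):
        time_to_obstacle = range_m / speed_mps
        # Lateral offset under constant lateral acceleration:
        # offset = 0.5 * a * t^2  =>  a = 2 * offset / t^2
        required_lat_accel = 2.0 * LANE_OFFSET / time_to_obstacle ** 2
        return required_lat_accel > MU * G

    # At 33 m/s (~75 mph) with 40 m of range, steering around is still feasible,
    # so the system holds off; at 15 m it no longer is, so braking is triggered.
    print(evasion_impossible(33.0, 40.0))   # False
    print(evasion_impossible(33.0, 15.0))   # True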
Just got my Plaid 2 days ago. I already feel uncomfortable about how it changes lanes and merges into lanes. It's definitely making other drivers notice something is up with the way it steers, especially in traffic. Not even talking about city streets.
Also noticing that other drivers leave a lot more space instantly and let me pass as soon as the turn signal is on; not sure if they are worried about it being on Autopilot.
Emergency vehicles should be hooked into a crowd-sourced traffic system by now. The first thing they should do is hit a big button on their dashboard that says “Road Closed”, “Lane Blocked”, etc., which should then update Google Maps etc. and also disable things like autopilot for vehicles approaching the incident.
This goes for things like roadworks too.
A bit like Waze but the information comes from “official” sources and is required by law to be logged.
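Purely to make the idea concrete, here's a sketch of what such a report might look like; the HazardReport type and every field name are invented for illustration, not part of any existing Waze, Tesla or emergency-services API:

    from dataclasses import dataclass
    from enum import Enum

    class HazardType(Enum):
        ROAD_CLOSED = "road_closed"
        LANE_BLOCKED = "lane_blocked"
        ROADWORKS = "roadworks"

    @dataclass
    class HazardReport:
        hazard: HazardType
        latitude: float
        longitude: float
        reported_by: str     # agency/unit identifier
        expires_at: float    # unix timestamp; stale reports are dropped
        logged: bool = True  # the legally required audit-log flag

    # Example: an officer taps "Lane Blocked" at the scene
    report = HazardReport(HazardType.LANE_BLOCKED, 37.77, -122.41,
                          reported_by="unit-4211",
                          expires_at=1_700_000_000.0)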
I feel like 90% of modern cars would have stopped for the object blocking the lane. Even if the police weren't there, it would have just smashed into the car stopped on the highway. The fact that it didn't recognize the police car isn't the issue; it's that Teslas are basically blind to anything not moving.
We can't even get police departments to agree to keep their body cameras on. Any such requirement you'd impose on officers would get wrecked by the police unions / F.O.P.
Our police aren't organized at a national level in the USA: it's a collection of 10,000+ departments, some of which are incredibly shallow (say, 1 sheriff elected by the locals and 5 deputies).
------
To practically start something like that, you find police departments that are likely to agree with each other (ex: NYPD + Boston PD), and then maybe get an agreement between their Commissioners. You grow the alliance one step at a time. You're gonna have to figure out "why would the police want a system like this in their procedures?", such as arguing for a mitigation of accidents or whatever. Or maybe giving the police some kind of power (ex: official announcement to close a road and redirect traffic)
That's not a bad sounding idea, but it makes a lot of assumptions about the infrastructure available in the region the self-driving car is being deployed.
If the goal is really to make cars that can drive as well as humans, they need to be able to do that based only on on-board sensors.
Teslas crash less often than other cars, even less when Autopilot is enabled. You just hear about it more often because it's actually news when it happens, as opposed to some idiot texting while driving in a Camry.
> Teslas crash less often than other cars, even less when Autopilot is enabled.
[Citation needed]
Substantiate your claim with sufficient, unbiased and relevant evidence first.
I also think you meant ‘supposed to crash less often’. But it is clear that this system (FSD or autopilot) is even more unsafe than advertised. Otherwise it would not currently be under investigation by regulators.
False advertising and deception seem to be working on Tesla’s customers. Maybe it is for Tesla’s own good that their autopilot beta software is under close investigation.
Of course a source from Tesla would say that. I explicitly asked for ‘unbiased’ sources and instead you have given me this. The evidence needs to be from unbiased sources.
Might as well ask a Tesla salesman, which car brand should I buy?
Given that Tesla uses cameras, how hard would it be to just disable Autopilot when it senses flashing lights? Or at least give the driver a few seconds to react before disabling Autopilot.
New in #Tesla 2021.24.12 Owners Manual for #Model3 #ModelY "If Model 3/Model Y detects lights from an emergency vehicle when using Autosteer at night on a high speed road, the driving speed is automatically reduced and the touchscreen displays a message informing you of the slowdown. You will also hear a chime and see a reminder to keep your hands on the steering wheel. When the light detections pass by or cease to appear, Autopilot resumes your cruising speed. Alternatively, you may tap the accelerator to resume your cruising speed. Never depend on Autopilot features to determine the presence of emergency vehicles. Model 3/Model Y may not detect lights from emergency vehicles in all situations. Keep your eyes on your driving path and always be prepared to take immediate action."