Hacker News

That boy chasing that ball out onto a crowded street has much better chances with an automated car vs one driven by a human. Heck, human drivers already perform very poorly in any of those situations; the bar is low.


I have first-hand experience with such a case, and I disagree. I avoided hitting a kid that I think any AI would have killed.

I'm driving southbound on Fair Oaks in Sunnyvale, approaching Arques. On the corner is a mini strip mall with a small parking lot. Out of the corner of my eye, I see a young child, maybe 4 years old, running around that parking lot being chased by his mother or caretaker.

Now, being well off the road and outside the universe of conditions an AI would consider, he would be ignored. But I just knew, somehow, that this was trouble.

The kid, thinking this is a fun, impromptu chase game, then sprints directly into traffic right in front of me. I stop with a foot to spare.

Edge conditions: it's what's for dinner. (tm)


I'm not sure about that. Looking at this system https://www.youtube.com/watch?v=hCWL0XF_f8Y it is disconcerting how people pop in and out of view.

A human could see a group of kids looking toward the street at someone occluded from view and realize the possibility of a child dashing out well in advance. The self-driving car would have to detect, classify, and react only after the child comes into view. It is this kind of common-sense AI that we cannot simply engineer our way to.

A couple days ago there was a viral post about how self-driving cars react to a 35 mph sign vandalized to look like 85 mph. A human would know that the sign is wrong - the self-driving car would need this 'use case' handled explicitly, maybe by cross-checking against location-based speed limits.
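The cross-check suggested above could be sketched roughly like this. This is a toy illustration, not how any real vehicle does it; all function names and the tolerance value are hypothetical:

```python
# Hypothetical sketch: sanity-check a vision-detected speed limit against
# a map-based limit before accepting it. Values are illustrative only.

def plausible_speed_limit(detected_mph, map_limit_mph, tolerance_mph=15):
    """Accept a detected sign only if it is close to the mapped limit."""
    return abs(detected_mph - map_limit_mph) <= tolerance_mph

def choose_speed_limit(detected_mph, map_limit_mph):
    # Fall back to the mapped limit when the sign reading looks implausible,
    # e.g. a 35 mph sign vandalized to read 85 mph.
    if plausible_speed_limit(detected_mph, map_limit_mph):
        return detected_mph
    return map_limit_mph

print(choose_speed_limit(85, 35))  # vandalized sign rejected -> 35
print(choose_speed_limit(30, 35))  # plausible reading accepted -> 30
```

Of course, this just moves the problem: now the map data has to be current, which is its own hard engineering issue.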


I think you have way too much faith in human image tracking, and in humans' ability to interpolate situations while listening to the radio. As for https://www.dailymail.co.uk/sciencetech/article-8021567/Tesl...

Reading the article:

> ‘It is worth noting that this is seemingly only possible on the first implementation of TACC when the driver double taps the lever, engaging TACC.’

It isn't strictly a self-driving situation, but it makes for great clickbait. I can't imagine that this isn't an easy fix (Teslas are also easy to update collectively via software update; try doing that to human beings).


There are two bars though, because there's a double standard. If the robot car hits the boy, it's national news. The lower bar of surpassing human safety performance is, in almost all regards, not even relevant to the discussion. Maybe it should be but it's definitely not.



