Uber vehicle reportedly saw but ignored woman it struck (techcrunch.com)
337 points by runesoerensen on May 7, 2018 | 426 comments


I'm curious what degree of "cultural context" is built into automated cars - I feel that there has to be a lot of locale-specific techniques for driving.

For example, in NYC, there's nothing at all uncommon about a pedestrian beginning to cross the street in front of you and approaching the area of your lane where you will shortly be. Pedestrians typically then stop about a foot away from your lane, let you drive past them, and then continue walking. If a car slammed on the brakes in this situation, it would likely cause more accidents than not braking would cause.

By contrast, in Tempe, a pedestrian starting to do the exact same thing as above is much more likely a case of them not realizing your car is coming at all, in which case slamming on the brakes is appropriate.

This isn't to defend Uber; clearly their car did the wrong thing in the Tempe situation (though I make no judgment on who was at fault). And clearly cars need to be able to handle the variety of unwritten, localized driving quirks different regions have. But it seems like a very non-trivial problem to do this correctly.
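To make the locale dependence concrete, here's a minimal sketch, purely illustrative, with made-up thresholds and profile names rather than anything any actual system is known to ship, of how the same pedestrian trajectory could map to different actions depending on a locale profile:

```python
# Purely illustrative: hypothetical locale profiles and thresholds.
from dataclasses import dataclass

@dataclass
class LocaleProfile:
    name: str
    # Distance (m) from the lane edge at which an approaching pedestrian
    # is assumed to be yielding rather than about to step in front of you.
    expected_stop_margin_m: float
    # How strongly to react once a closing pedestrian is inside that margin.
    reaction: str  # "slow" or "brake_hard"

NYC = LocaleProfile("nyc", expected_stop_margin_m=0.3, reaction="slow")
TEMPE = LocaleProfile("tempe", expected_stop_margin_m=1.5, reaction="brake_hard")

def respond_to_pedestrian(dist_to_lane_m: float, closing: bool, profile: LocaleProfile) -> str:
    """Pick an action for a pedestrian approaching the vehicle's lane."""
    if not closing:
        return "continue"            # walking away or standing still well clear
    if dist_to_lane_m > profile.expected_stop_margin_m:
        return "cover_brake"         # watch and be ready, but don't disrupt traffic
    return profile.reaction          # inside the margin: the locale decides severity

# Same geometry, different locale, different behavior:
print(respond_to_pedestrian(0.5, True, NYC))    # "cover_brake": they'll stop a foot away
print(respond_to_pedestrian(0.5, True, TEMPE))  # "brake_hard": assume they haven't seen you
```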


NPR had a story about how self-driving cars in Pittsburgh wouldn't be adhering to the "Pittsburgh-left":

"Even harder for autonomous cars to master are local quirks; things like "the Pittsburgh left." That's a custom unique to the city that allows the first driver trying to make a left turn to do so before oncoming cars pass through an intersection. It's one thing that Emily Duff Bartel says Uber's fleet of self-driving cars won't be programmed to practice. "We're following the rules on this one," she says."

https://www.npr.org/sections/alltechconsidered/2017/04/03/52...


Not unique to Pittsburgh (also prevalent in Rhode Island, among other places), but more importantly, letting the first car turn left first is useful to keeping traffic flowing on narrow roads—which, in the areas that use this tactic, rely on it. So, not following local customs is a good way to snarl traffic (since the oncoming traffic will be waiting for the car to take its turn).


Obviously if the law is wrong it should be changed. Then the question becomes, can a car be programmed to follow the more complex law.


I think it's less about complexity and more about precision. A simple law can be extremely difficult to encode if it lacks specification.


Not that I'd recommend doing it in an area not used to it... but that sounds like a riskier alternative to a hook turn. With hook turns, if you want to turn right, you start in the leftmost lane. When your light goes green, you 'hook' around to put yourself in front of the cars to your [initial] left. From there, the road you want to enter is straight ahead.

This way, no-one cuts across lanes where traffic has a green light.


This makes no sense to me even after assuming you are from a country that drives on the left and reading the Wikipedia page.


First off, note that hook turns are a bike thing, not a car thing — trying to imagine it with cars isn’t going to make any sense. They rely on the fact that bikes are small enough that they can fit in the space in front of the stopped opposite-direction cars (which might be a crosswalk, bike box, etc). You just go stop there, turn your bike so it’s pointing the right direction, and wait for the green.

They work even better with dedicated bike infrastructure as depicted in the diagram about half way through this article: https://www.vox.com/2015/8/12/9139771/protected-intersection...


Not a car thing? Welcome to Melbourne, Australia!

https://www.youtube.com/watch?v=Yh92LirlCf8

My wife used to drive in the Melbourne CBD daily for work, but would always go around the block instead of attempting one.


OK, let's use the normal right-side driving convention. Imagine you're vulnerable (on a bike, or worse: on an electric scooter), and you want to turn left. You were going northbound and want to go westbound.

In "normal" fashion, you'd have to stay on the left of your lane, and wait for the traffic coming in front of you to die down before you can cross. That puts you straight in the middle of the intersection for some time, which is dangerous.

With a hook turn, you go and wait along with the traffic coming from the east to the west. They are stopped, so they are not a danger to you. When the east-west traffic light finally turns green, you won't be crossing anyone's traffic, and you'll be already standing on the rightmost part of your new road.


Fairly sure the hook turn isn’t about safety, it’s about traffic flow. If it were about safety it would see more usage across the world.


It's definitely safer, and it does see quite a bit of usage in areas of the world with high quality cycling infrastructure. The fact that it doesn't see more usage is just a testament to the fact that most urban planners around the world don't care much about keeping cyclists safe.


It's used in Denmark, Australia, etc., anywhere with sufficient bike lanes or where the intersection is really large.


No, in Australia it’s for the benefit of the trams: https://en.m.wikipedia.org/wiki/Hook_turn The article seems to imply that elsewhere it’s for traffic flow, and in one country for the benefit of cyclists.


You're confusing people because you're talking from a left-hand drive perspective.

This is known in the right-hand drive cycling world as a Copenhagen left, and many consider it to be bad infrastructure design.

http://www.aviewfromthecyclepath.com/2010/07/not-really-so-g...


I'm from near Pittsburgh and I'd hardly call this a custom. It's just what you do when you're in a hurry and have pole-position in the turning lane...


Not sure what you mean. "Just what you do" when you're from a specific area is pretty much the textbook definition of a "custom".


I think the parent's argument is that it's not a custom in the sense of genuinely accepted behavior. That is, an aggressive driver turning left in a hurry may gun it, and the driver going straight may let him through to avoid an accident, but it's not "custom" to the point that that word implies it's accepted.


It's a dick move. But people do it a lot anyway.


This custom is prevalent in places other than Pittsburgh. It can also get you a citation if a cop sees it.


It depends. Most cities whose road network predates cars have "local customs" which are essential to keep traffic flowing. These "local customs" may not be strictly legal, but you rarely will wind up with a ticket for it.


As it should, what a ridiculous thing to do.


As a long-time but not native Pittsburgher, I have to agree. The "Pittsburgh Left" is dangerous. There are arguments here that it eases traffic flow by "crowd-sourcing" a left-turn light, or that without it there'd be a backup of traffic. What rubbish. A left-turn light allows several drivers (dozens at some intersections) to make a left. A "Pittsburgh Left" only allows one reckless, impatient driver to make this left. And I see it more often on four-lane roads than on two-lane roads, so I discount the eases-traffic-flow argument. It's a moving violation with a heavy fine and points on your license - as it should be.


Says the person who has never seen the ridiculous thing that is single lane road deadlock.


Pull into the intersection and go when it’s clear (on yellow or red). Gunning it so you can cut off oncoming traffic is ridiculous.


That doesn't solve it at all, you're thinking of a different problem. I got clued in from another comment that only mentioned it offhand, but this is the problem the "Pittsburgh left" tries to solve:

The traffic is a single lane, and you want to turn left. Light turns green, and you wait for the oncoming traffic to clear. All traffic behind you is stopped on a green because they can't get around you.


Thanks, that’s a great point. That really is a terrible situation.

And I’m sure the risks associated with the “Pittsburgh left” are significantly mitigated when everyone expects it.


Even with multiple lanes this can be an issue, as people behind you try to merge right to pass you, which in busy traffic increases the chance of accidents and causes slowdowns.


Indeed. When I first came to the US I lived in Boston, and this happens quite frequently inside I-95. It used to drive me crazy; I hated it, I thought it was dangerous and uncivilized :-) It didn’t take long for me to understand why it was necessary and to start doing it myself.


For a responsible driver, it is a test to see if you are paying attention. A careful driver will see this about to happen, anticipate it, and allow the turning driver to make his impatient left safely.


If single lane road deadlock is a problem, the city's urban planners should (and probably will) install a left turn signal to solve it. A better solution than reckless driving is to report the deadlock to the city.


Many of the roads in the city of Pittsburgh are carved into the mountain, sometimes with shops or houses on either side between the road and cliff face. The terrain there is not a trivial thing to engineer around.


You clearly have never been to Pittsburgh. There is simply no room at some intersections for a left turn lane, and a left turn signal would needlessly block traffic going the other way even when nobody wants to turn left.


Pretty sure that's just an asshole thing, not a "local quirk." It's like saying not using your turn signal is a funny quirk of BMW drivers, or going 55 in a 70 in the left lane in a Prius.


This type of left turn is less an "asshole thing" and more a "drivers compensating for poor design" thing.

Typically, the "Pittsburgh left" is used in places where there should be a protected left turn as part of the signal cycle, but there isn't, and at times of high traffic flow it's the only way for any traffic to complete the turn (since waiting for oncoming traffic to clear on a green essentially means waiting until the multi-hour-long period of high traffic is over).


Won't the oncoming traffic clear once the light turns red? That's usually enough for the 2-3 cars waiting in the intersection to complete their turn.


The usual case for this is a light where left-turning traffic must wait for oncoming traffic to clear, but at certain times of day oncoming traffic never clears. The only gap in oncoming traffic occurs when the oncoming traffic has a red light. But the left-turning traffic also has a red at that point.


I thought we were talking about the Pittsburgh-left. That's not what you're describing. The Pittsburgh-left occurs immediately after the light turns green, when both the oncoming traffic and the person making the turn have a green light.

My point was, why do you need to do that when you can just enter the intersection during the green light and complete your turn once the light turns red?


Having driven in the single lane thoroughfares of Pittsburgh, I’m grateful for the people in front of me that take a “Pittsburgh-left.” Otherwise the entire lane would be stopped for every left turner each green light. This way it’s only stopped for every other left turner in the cases where a left-turner ends up in front of the line. It’s an improvement to everyone’s traffic flow.


Driving in California, the best roads literally make it illegal to turn left during rush hour when the intersection doesn’t have a left turn lane. People still get where they’re going.


Whether you do it after the light goes red (illegal) or immediately on green (also illegal), it's the same underlying problem: the intersection needs a protected left turn in the signal sequence.


Which law is violated if you enter the intersection when the light is green and exit it when the light is red?


In many places it's illegal to enter the intersection (even if the light is green) if you're unable to immediately leave it, mainly to ensure that intersections don't get blocked if there's a traffic jam in the direction you want to go; and in some places this rule also prohibits entering the intersection for a left turn if you can't complete the turn right away because there's still oncoming traffic.


Ahh now it is finally clear.

Then the laws in Pittsburgh are just bad. Not a design question at all.

Here you pull into the intersection on green, turn slightly toward a big white cross on the road there for exactly this purpose, and go when it is clear. This happens when the oncoming side has red, but before the crossing cars have green. The time between red and green is long enough for all two or three cars to clear the intersection.

Get better laws, I guess?


The only gap in oncoming traffic occurs when the oncoming traffic has a red light. But the left-turning traffic also has a red at that point.

Which is why you need to move onto the intersection as the light turns green -- you've now passed the traffic light, so once the oncoming traffic gets the red light, you are clear to turn.


it's the only way for any traffic to complete the turn

As someone else pointed out, you can do three right turns instead.


Or turn right and then turn around.


Is this the same as pulling straight into the middle of the intersection with live traffic swirling around you on both sides then making the left after the lights have reddened and traffic has cleared?


No, it's zipping left immediately when the light turns green and before oncoming traffic has a chance to accelerate.


It sounds similar. I stopped doing it years ago after seeing too many close calls. But I live in an area where other people don't exactly accommodate the behavior.


It sounds like the opposite of that -- letting the left turn car go before both sides move, rather than after both sides finish.

The behaviour you just described is what everyone does in Michigan. The Pittsburgh left sounds friendlier, but more risk prone to people who don't expect it or respect it.


Correct, although the creep and go after the light is also practiced. Main reason for the Pittsburgh left's popularity is that there are a number of roads in and around the city where during rush hour it's otherwise impossible to ever make a left and have no turn lane.


> and have no turn lane.

Urgh, that's something missing from the other comments. Now it makes sense to me.


This happens to me sometimes, my solution is to overshoot by a block, then hook 3 right turns, rather than slow up everyone behind me or pull a daredevil left. I would hope self driving cars would be programmed to follow a strategy like mine while still accommodating daredevils by using cautious acceleration and effective collision avoidance.


In other words, despite the traffic lights not having an official "left turn only" phase, the traffic has naturally created one. I've seen this in China too.


No, what you're describing sounds a hell of a lot safer. (It's also how people make left turns in Vancouver. The fact that it's illegal in Seattle infuriates me to no end.)

The Pittsburgh left sounds like a recipe to get T-boned by an inattentive driver. Pulling into the intersection, and making a left turn when it is safe is a much better alternative.


It is legal to pull into the intersection on green or yellow, and wait until it is clear (often when the light turns red) to turn left in Seattle. It is the correct/legal way to make a left turn. Otherwise you are just going to sit there blocking traffic for multiple cycles during busy times.


That's why you go during the yellow. Cars are stopping at the new red. You then go left before the cars getting the new green start moving.


The oncoming traffic doesn't stop until the light turns red. When the light is yellow, they're actually more likely to _speed up_ in order to "make the light" rather than stop. So when the light inevitably turns red and you're in the middle of the intersection, do you:

a) Just sit there, in the middle of the intersection, blocking traffic.

b) Attempt to back up, hoping there are no other cars behind you in the left turn lane blocking your path.

c) Continue through the intersection.

Unless the law says "never stop in the intersection in the first place", I can't imagine it enforcing any option other than c.


I've found that Seattle drivers are unusually respectful of amber lights - probably because getting pulled over for abusing it is more common than never.


Interestingly, yellow light laws are different in different states — so in some places people are very respectful of yellow lights.

Most states say it's OK if you enter the intersection on yellow, but a handful (not including WA, but including neighboring OR) require that you clear the intersection before the red appears.

I've sometimes wondered if "entering the intersection" includes entering the crosswalk adjacent to the intersection.

https://www.reddit.com/r/todayilearned/comments/57ajjq/til_w...


A family friend crashed into an oncoming car because of this ambiguity. They were waiting to make the turn, thought the other car should stop (because it was yellow going on red). Apparently it even looked like they were slowing down but then sped up at the last minute. Police said both at fault.


> The oncoming traffic doesn't stop until the light turns red.

Wait, why? No new car should enter the intersection when the lights turn to yellow. The yellow light should last long enough for everyone to be able to safely drive off the intersection, in orderly manner.


In the states, there's often a 1-second all red cycle to let the intersection clear, because drivers treat a yellow light as a green that's about to expire. Even with this, I often see 2 or 3 cars enter the intersection after I already have a green signal.


> No new car should enter the intersection when the lights turn to yellow.

Cars will obviously enter the intersection under a yellow. It's why we have a yellow.

Even if everyone wanted to stop for every yellow, at 35 mph with average reaction time, cars will travel a minimum of 100 feet after the yellow phase starts. Some of those cars will enter the intersection.

With a typical 4 second yellow, some drivers don't even choose to stop for the first half of the yellow, of course.
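Rough numbers behind the 100-foot figure above, my own back-of-the-envelope assuming a roughly 2-second perception-reaction time:

```python
# Back-of-the-envelope check of the "minimum of 100 feet" figure above.
speed_mph = 35
speed_fps = speed_mph * 5280 / 3600       # ~51.3 ft/s
reaction_s = 2.0                          # commonly assumed perception-reaction time

distance_before_braking_ft = speed_fps * reaction_s
print(f"{distance_before_braking_ft:.0f} ft covered before the brakes even engage")
# ~103 ft: cars that are already within that distance of the stop line when
# the yellow starts will reach the intersection before braking even begins.
```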


What's illegal? Stopping in an intersection? Or exiting the intersection on red? The light would be red on exiting the intersection unless cars are illegally entering the intersection on yellow when they should be stopping because they weren't speeding and have time to stop.


Entering an intersection on yellow isn’t illegal in Washington, California, Texas, or any state that I am aware of.


Ah, OK, that explains it... It is illegal to enter an intersection on yellow in most of Europe, I think. The yellow light is for driving off of the intersection, is what I've been taught. It's only allowed to enter the intersection if you're going really fast and the light turns when you're really close to the intersection, but even then the cops are likely to question you if they see it.


That’s not the case at all in Germany (most of all because you cannot even see the lights anymore once you’ve entered the intersection, since in Germany they are at the beginning of the intersection, not the opposite end). Which European country are you talking about?

In driving school I was taught to generally be ready to brake when approaching a light, but also to pick a reasonable point of no return (depending on speed, maybe a couple dozen meters before the light) beyond which I would not brake, no matter what the light does. That way braking should never feel anything close to the emergency braking you do when there’s an actual unforeseen obstacle. It should always be smooth.

Also, when making a left turn at an intersection in Germany you drive into the intersection and then wait for the oncoming traffic to clear. At this point you cannot even see the lights. The only thing that is relevant for you is to wait for oncoming traffic to clear (or, if they have obviously stopped because their light has since turned red, to verify, ideally by eye contact, that you are free to go).

Other traffic from left and right obviously have to wait until you have cleared the intersection even if they have a green light.


It's illegal to enter an intersection if you're unable to immediately leave it. So, if you don't have a clear left turn, you're supposed to sit behind the white line.

Likewise, it's illegal to enter an intersection, going straight, when there isn't enough space in your lane, for the back of your car to clear the intersection. Doesn't stop people, though.


Your first point is not true, at least not where I live. It is legal to enter an intersection to prepare to make a left turn before oncoming traffic has cleared. This very scenario is spelled out in the NYS drivers manual— see the second example under “right of way”. [0]

[0] https://dmv.ny.gov/about-dmv/chapter-5-intersections-and-tur...


At least in Los Angeles, this is not only legal but, in many cases, the only legal way to make a left turn.


It may be legal in LA or NY, but in Washington, it is not legal.

> RCW 46.61.202 says "no driver shall enter an intersection or a marked crosswalk or drive onto any railroad grade crossing unless there is sufficient space on the other side of the intersection, crosswalk, or railroad grade crossing to accommodate the vehicle he or she is operating without obstructing the passage of other vehicles..."

Some people bring out the following passage to be in conflict with it, but I strongly disagree.

> RCW 46.61.190, which says drivers must stop at a marked stop line or "at the point nearest the intersecting roadway where the driver has a view of approaching traffic on the intersecting roadway before entering the roadway, and after having stopped shall yield the right-of-way to any vehicle in the intersection or approaching on another roadway so closely as to constitute an immediate hazard..."

The 'intersecting roadway' obviously refers to the crossing street, not the lane you want to turn into. Therefore, if you can see oncoming traffic without pulling into the intersection, then crossing past the white line, and waiting there is illegal.


IANAL, but by a plain language reading, entering the intersection to prepare to turn left is not in conflict with the first point, so long as the destination roadway is clear.

The second point is irrelevant, as it pertains to stop/yield intersections, which are not being discussed here.


Well, in Latvia we of course have a law, but in real life I was taught that I and the driver in front of me should decide how we turn left: look at what the other driver does and react accordingly, or take the initiative myself.

Edit: oh sorry, I misunderstood what the Pittsburgh left meant. I was describing the situation when both cars turn left. There are 2 options: the "cube" path, that is, the paths cross and the cars pass one by one, or 2 arc paths that don't intersect.


So I am curious whether companies have popped up that try to provide localization data for traffic laws, similar to how we have companies that provide data on zip codes for regulated materials. An example would be knowing when a right turn on red is permitted even without signage, which, while not a quirk, is one of those small-town special rules that tend to catch people off guard.


I've seen this often in southern California and Chicago, the only two cities I've driven heavily; and despite having no interaction with Pittsburgh, I do it myself all the time...

I can't imagine how this is a Pittsburgh quirk; it's the obvious thing to do when the option is available.


Obvious to not respect the right of way? Or fall prey to insurance fraud if you misinterpret the "benevolent" driver who you thought was letting you jump the turn.


Pittsburgh needs to do what we do in (some intersections in) California — ban left turns during rush hour on intersections with no left turn lane. Everyone still gets where they need to go and traffic moves much more smoothly.


And welcome to why self-driving cars will always be 3 years away.

I think we should be targeting self driving for long-haul freeway scenarios where the rules are relatively simple and predictable. Trying to control for the infinite number of scenarios in cities is a nightmare which will ultimately bog down the development of any company which makes that its goal.


> And welcome to why self-driving cars will always be 3 years away.

It's also possible that human/pedestrian behavior will adapt to the consistent behavior of driverless cars.


Or jaywalking will become more strictly enforced with policing or physical barriers.

https://www.strongtowns.org/journal/2018/4/2/automated-vehic...


That's a very good argument but I don't think it will come to pass, for this reason: http://theoatmeal.com/pl/minor_differences/cutting_off

There's something about driving that turns ordinary people into assholes. I've observed it in myself. Bus passengers are much more considerate, and I think that passengers in self-driving vehicles will behave like passengers, not like drivers. So if a pedestrian cuts them off they'll just shrug and keep playing their game or whatever they're doing.

I hope.


My theory is that people act like jerks behind the wheel, and come across as jerks even when they're not, because drivers have no way to apologize or ask please or do any other polite thing. All they can do is honk the horn, which is like yelling "HEY!!" If there were a communication mechanism for "excuse me, the light is green now" and "oops, so sorry, didn't see you there" driving would be easier and more pleasant.


I feel like in large cities the majority of drivers actually practice "mean" driving techniques - tailgating, lane protection (preventing people from merging in when they have to), zoom-ups near contested areas, loud music in communities, lane weaving... I left the Bay Area a long time ago now, but it's amazing, once you've lived outside of it for a couple of years, how awful people are behind the wheel there. I would call it passive-aggressive behavior, and it is really, really bad for one's mental health. Bay Area, Denver, Portland, Seattle, and a few others out west; but make no mistake, the actions people take while driving are 100% actual jerk, not mistaken jerk.


In Poland there's an unwritten custom of saying thanks by flashing emergency stop lights for two ticks. The most common use case is when someone lets you through when they don't have to.

On freeways the same is used by cars coming from the opposite direction if there's a radar ahead.

Still, in peak hours pretty much everyone is a dick, even usually chill Uber drivers.

Edit: it's also used as a "sorry" signal too.

Also, this is not local. I've seen it used all around the country.


Definitely not local. Pretty much a given in Lithuania too. If someone lets you in etc., it's just rude not to flash the emergency lights, even if somebody squeezed in like a total dickhead during rush hour.

We use headlights to warn of friends waiting on the other side of the bush though.


Also seen in London


And the rest of the UK :)


Japan as well


Exactly! Every communication you try to make in a car can be interpreted as aggressiveness, which escalates anger on both sides. You need a way to de-escalate.


Yes, in Japan there is a custom of briefly tooting the horn to say thank you (for example when someone lets you in in front of them) and they still have problems when people have different ideas of what constitutes "too long".


Hah! I come from Ireland where the custom is also a brief flash of the emergency indicators to say "thanks". I then moved to New Zealand where they toot the horn to say thanks. I was most confused when I got tooted (rude in Ireland) for letting someone pass easily. I was like * buddy ;) then I realized what was going on, well, eventually anyway :)


Ha. In Boston I used to get frustrated at oncoming drivers flashing their lights at me. What was I doing wrong? Are my headlights misaligned? Is something falling off? Do I know you? Eventually I worked it out: they were warning me of a police cruiser ahead.

In a state known for Massholes it’s surprisingly unjerkish.


Dangerous driving can result in damage, injury, or death. That is why people are so volatile behind the wheel. I don’t think there is any system of communication that can prevent or stop road rage. Dampen, maybe, but that’s about it.


If the internet has taught us anything, it's that anything that facilitates conversation facilitates all types and styles of conversation. I'd posit that a more sophisticated inter-car communication system would add at least one new road rage incident for every one it prevents.


Paradoxically, in some places horns are used much more liberally and if you spend much time there that's the feeling you start getting from a honk. Like a "hey just wanted to let you know I'm behind you".


When I travelled from India to Nepal I noticed that both places had liberal horn usage, but in Nepal almost all of the horns were musical. They felt much more pleasant than the same custom in New Delhi.


They often find other ways to express their jerkiness, like stepping on the gas, swerving around you and cutting you off, or aggressively tailgating, or you know... hand gestures.


> There's something about driving that turns ordinary people into assholes.

Game theory with no further interaction so no consequence to defecting other than the immediate.

The problem is that there is always somebody willing to be a jerk to save 2 minutes since nobody wanted to be stuck in traffic. The only defense against that is "Do unto before done unto" and that turns everybody into jerks.


The world is already overly car-centered. If cars can't see and stop for people, then they've become an expansion of that problem. I live in a place where, once you're off the curb on corners where there's no light, cars are required to slow/stop for you. Many people will voluntarily do that even if you're NOT off the curb yet. It feels like a courtesy.

The scenario you're describing, OTOH, feels even more alienating than what we're already suffering. What in the hell gives you, sitting on a seat in a car, more right to travel through a space than a walking human being?


Many people will voluntarily do that even if you're NOT off the curb yet. It feels like a courtesy.

Eh, as a pedestrian, I hate that. I have to conscientiously turn my back to avoid impeding the traffic when I don't want to cross!

Still, I agree with you; self-driving cars that can't be trusted to slow down and stop for pedestrians should not pass the licensing requirements. And not just for humans, either; we're not going to teach other animals to follow traffic laws any time soon. I've already seen too many cats killed by assholes speeding over the limit on residential streets.


If stricter enforcement of jaywalking laws is required for self-driving cars to work, you can already forget about those cars in almost all other countries where jaywalking laws don't even exist and certainly won't be accepted in the future.


If this happens I will surely protest self-driving cars. Cities should be built for pedestrians, not cars.


Don't protest self-driving cars, protest the laws you feel shouldn't apply to you. After all, there are plenty of other reasons jaywalking laws might become more strictly enforced.


The term "jaywalking" was a slur invented by car companies to steal rights from pedestrians. If cars can't avoid people, the cars need to be separated by physical barriers or removed.


This is in no way contradicting my point.

A pedestrian may illegally have entered a road, but still have right-of-way, and self driving cars will need to account for that.

However, protesting self driving cars will not change whether walking in the street is legal, since it already is not.

Therefore, whether you do so and get fined by police is entirely up to their discretion, which could change at any time for any reason.

If you want it to be legal, protest the illegality, not the car company.


If I'm not mistaken, car makers first lobbied to outlaw jaywalking. So if it's anything like the past, protesting self-driving cars and protesting the laws will amount to the same thing.


The laws are already on the books. Police could choose to apply heavier enforcement regardless of whether or not self driving cars become a thing.

If the law itself is unjust or inappropriate, protest the law.


Lol or perhaps jaywalking laws should be loosened if they're not being applied anyways in most cities.


It depends on the city. My nearest metro does enforce them... at locations where a particularly large number of people are wont to illegally cross, such as around the University.

Otherwise, enforcement is spotty at best, because the police have better things to do, such as ticket parked cars with expired tabs or meters.

If there were a sudden uptick in pedestrian-car collisions, you can bet they'll crack down.

None of the above addresses whether the law SHOULD be on the books, which is what people should put their efforts towards changing, if they feel it is unjust. Protesting car companies that make self-driving cars because they don't like an already existing set of laws is just misguided at best, and intellectually dishonest at worst.


They already aren't built for pedestrians in most of the world. I'd be happy just removing the chance that I get killed by a car because the driver wasn't feeling good that day.

That said... I think passengers in self-driving cars will learn to be more patient and we can allow self-driving cars to yield to pedestrians more often.


There's something really repulsive about this solution that I can't really eloquently articulate. Every advance in technology seems to further restrict and control people.


We do it for many pedestrian crossings across railroad tracks, so it's not unprecedented, but obviously on a different scale.


or less for increased safety:

https://en.wikipedia.org/wiki/Shared_space


Adaptations are still localized.

Pedestrians have "adapted" to just expect everyone to brake always on most college campuses.

If you treat a factory floor or intermodal facility like a college campus, nobody will be surprised when someone has to use a garden hose to wash you off of whatever multi-thousand-pound metal thing you got in the way of.


On a similar note, the innovation of Siri and other "AI" programs was not to get computers to understand how humans talk. The innovation was getting humans to speak in a way that computers understand.


Nope. You can never count on humans to change behavior to fit a technology.

As soon as self-driving cars learn local customs, the customs will have been changed. Self-driving cars will always be 3 years away.


It may be worse. If cars are very good at avoiding colliding with pedestrians, why would pedestrians look out for cars at all before stepping onto a street?

We may have to tweak the car’s software to be assertive, bordering on aggressive, in busy neighborhoods, if the car is to make progress at all.


No. Let the passenger deal with that if they're annoyed. Machines should be always "humble". A human in front may be an inconsiderate jerk, or they may be trying to prevent the car from hitting something they're unlikely to detect.


The Pittsburgh left custom is 50 years old or more.


This is already happening in the trucking industry. Self-driving trucks are being developed to be able to drive 99% of the trip that's on highways and then human drivers take over for the local 1%. To make it easier, transfer hubs are being built near cities off major highways to make everything easier and non-disruptive. The same thing could be done with cars without the need for the transfer hubs and it basically is already done.


I can't believe they would do a trial run with an autonomous truck weighing 80,000 lbs barreling down the freeway.

This revolution should have started with mundane tasks, such as city street sweepers and perhaps garbage trucks. Vehicles that move slow on a predefined path in the night. Then advance to faster moving objects like taxis.

Just like the DARPA Grand Challenge that started this mess. Many of those vehicles were industrial usage, not consumer.


>This revolution should have started with mundane tasks, such as city street sweepers and perhaps garbage trucks. Vehicles that move slow on a predefined path in the night. Then advance to faster moving objects like taxis.

No, you've got it backwards. The point of all of the above comments is that driving in cities and densely populated areas is a much more difficult challenge than driving down a straight open highway in the middle of nowhere. That's the easy part, the hard part comes when there are humans and all sorts of obstructions and road/land features to deal with.


Or when there's a random tire or mattress lying in your lane. I see that kind of stuff on the interstates all the time.


There's no way 'object in road' can even get close to the number of varied situations you find yourself in during urban driving.

Freeway driving is pretty straightforward. If you can't keep from running into a stationary object, you probably can't drive in the city either.


But as this accident shows, that's still an easier problem: if there an object in your lane on the freeway, you should pretty much always avoid it. On a city street, it's more difficult to decide which objects should or shouldn't be ignored.


Autonomous cars have an advantage here compared to even the best human drivers - a persistent 360 degree awareness of the surroundings.

If an object appears on the road, the autonomous vehicle can easily calculate whether it's safe to steer left, or right, or hit the brakes. A human will make a roughly random choice.
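A toy sketch of what that choice might look like, with hypothetical occupancy checks rather than anything from a real vehicle stack:

```python
# Illustrative only: pick the first evasive maneuver whose required space is
# free, given a made-up 360-degree occupancy summary around the vehicle.
def choose_maneuver(occupancy: dict, stopping_distance_m: float, gap_to_obstacle_m: float) -> str:
    if gap_to_obstacle_m > stopping_distance_m and not occupancy["close_behind"]:
        return "brake"               # enough room to stop without being rear-ended
    if not occupancy["left_lane"]:
        return "swerve_left"
    if not occupancy["right_lane"]:
        return "swerve_right"
    return "brake"                   # least-bad fallback when boxed in

surroundings = {"left_lane": True, "right_lane": False, "close_behind": False}
print(choose_maneuver(surroundings, stopping_distance_m=35.0, gap_to_obstacle_m=20.0))
# "swerve_right": the gap is shorter than the stopping distance and the left lane is taken
```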


> Autonomous cars have an advantage here compared to even the best human drivers - a persistent 360 degree awareness of the surroundings.

Autonomous cars don't have an advantage because they don't exist. They may have an advantage one day when they are finally ready (but that may always be 5 years away)

> If an object appears on the road, the autonomous vehicle can easily calculate whether it's safe to steer left, or right, or hit the brakes. A human will make a roughly random choice.

Even with a 360 degree view, it still has to recognize the object, classify it correctly and make a decision. All tasks which are difficult for a computer to perform reliably enough to be allowed on a public road.

The Uber car had a 360 degree view but it still ran over Elaine Herzberg, and as far as we know didn't even stop after it hit her. The Tesla Autopilot has a 360 degree view but still tends to crash into barriers, tractor trailers, street sweeping trucks and fire trucks.


> I can't believe they would do a trial run with an autonomous truck weighing 80,000 lbs barreling down the freeway.

Fast driving necessitates predictability. Unrestricted stretches of the autobahn are not killzones because the speed breeds discipline: lanes are changed with indicators on more often than without (barely, but still..), passing-lane rules are followed almost religiously compared to what you see in America (even though drivers constantly complain about imperfections), and so on. The fastest roads of any jurisdiction would be its easiest for a robot car or truck. Slow environments, on the other hand, are full of ad-hoc improvisation and weird traffic participants like the occasional flock of sheep, horse team, or a biathlon squad on rollers.


> This revolution should have started with mundane tasks, such as city street sweepers and perhaps garbage trucks.

Honestly I would think that automating a slow moving vehicle in the city would be a much harder engineering challenge. City driving can be a lot messier than driving on the freeway.


The stakes are much lower though. A standard car drifting into the oncoming lane is basically death. If it can't handle city driving, the methods are probably too immature to put people's lives against.


Pretty much all interstates have barriers between the directions to prevent head on collisions.


Hitting a barrier at highway speeds is quite dangerous, even without hitting a car in the oncoming lane.

As the recent Tesla crash showed, even on a divided highway there are plenty of opportunities to cause death and mayhem to the autonomous car and the surrounding vehicles.


> This revolution should have started with mundane tasks, such as city street sweepers and perhaps garbage trucks.

Garbage trucks are big and heavy, so if things go wrong they can go badly wrong, such as the incident in Glasgow that killed six people[1].

https://en.wikipedia.org/wiki/2014_Glasgow_bin_lorry_crash


This revolution should have started with mundane tasks, such as city street sweepers and perhaps garbage trucks.

It has. The EasyMile EZ10 (a passenger shuttle) has already been deployed in a lot of places, for example.


Essentially make the freeways "car trains" as the autobahn implies. Eventually over time you add automation to other kinds of roads.


I think "bahn" translates to "path". Autobahn doesn't connote "car train". (Disclaimer: 1 year of college German)


Pretty much. See in English railway (Eisenbahn in German) or runway (Startbahn/Landebahn) for the equivalent patterns. Especially since in the beginning railway and Eisenbahn meant primarily the way with rails only, not the entire enterprise.

Autobahn might actually be derived from Eisenbahn, in that Wikipedia says some group chose it because of the similarity, but it only works because Bahn is the more general concept.


We should anyway just remove cars from cities altogether


You're underestimating the difficulty. Self-driving cars will always be 5 years away.


This really speaks to other cues that we use while driving that are more complex than just the location and velocity of objects/people. In this case: attention and gaze.

I sometimes do the "walk up to the lane and wait" thing, but I always look up and make eye contact with the driver to communicate that I know they exist and I'm not walking out in front of them. And when I'm driving and someone is walking up to my lane, the presence of their gaze is what I use to decide whether to continue (cautiously) or slow down and be ready to stop.

I know that CV as a field has pretty advanced gaze/attention detection (e.g. see faceid), but I wonder if it's actually a part of self-driving car tech yet.
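If it were wired in, one simple, entirely hypothetical way to use it would be as an input that can only ever increase caution, never reduce it below what the trajectory alone implies:

```python
# Hypothetical: fold a gaze estimate into the pedestrian response.
# gaze_score in [0, 1] is an estimated probability the pedestrian has seen the car.
def caution_level(base_level: float, gaze_score: float, gaze_confidence: float) -> float:
    """Raise caution when we're confident the pedestrian has NOT seen us;
    never drop below the level derived from trajectory alone."""
    if gaze_confidence < 0.5:
        return base_level                     # unreliable signal: ignore it
    unaware = (1.0 - gaze_score) * gaze_confidence
    return min(1.0, base_level + 0.5 * unaware)

print(caution_level(0.4, gaze_score=0.1, gaze_confidence=0.9))  # ~0.8: likely unaware, back off
print(caution_level(0.4, gaze_score=0.9, gaze_confidence=0.9))  # ~0.45: they see us, roughly base caution
```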


I think body language might be more important, and it seems to me it should be harder to detect. Eyes are pretty easy to detect because of the contrast between the whites and the iris, and, of course, they reflect light.

Detecting the difference between a hustle, a saunter, and a day dream might be a tad more difficult.


Gaze is actually more important when you're trying to figure out whether someone sees you. If I don't see your eyes, I would assume that you don't see me at all.


> If I don't see your eyes

I think you really mean if you don't make eye contact.

Eye contact is a two-person activity. How does an autonomous car make eye contact with a person?


Something like this could be developed. For instance, the car could have an array of lights on its grill (think Knight Rider) that track in the direction of detected obstacles, if you don't see a light facing towards you then you know you are not detected and to use caution. Alternately a light on the front of the car could turn from yellow to blue to indicate its intent to yield.
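A sketch of the idea, with hypothetical states and colors since nothing like this is standardized:

```python
# Hypothetical external signal: map the car's internal state about a detected
# pedestrian to a light color they can read at a glance.
def grille_light(pedestrian_detected: bool, yielding: bool) -> str:
    if not pedestrian_detected:
        return "off"       # you are not in the car's model: use caution
    if yielding:
        return "blue"      # detected, and the car intends to yield
    return "yellow"        # detected, but the car intends to proceed

print(grille_light(True, yielding=True))    # "blue"
print(grille_light(False, yielding=False))  # "off"
```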


A small screen (or more) might be a better option. It seems Mercedes made a concept version of the Smart with one, and the cars launched by Drive.ai already have them.


There's a road near the University of Waterloo where students get hit, some killed, on pretty much an annual basis. Often it boils down to the large number of foreign students who come from a part of the world where for one reason or another, you just don't need to bother looking before crossing, so they don't and walk right in front of cars (and in one case a city bus).

To me this is an example of the overwhelming importance of locale-specific context. When I'm near UW, my driving behaviour changes significantly even though the laws do not.


> To me this is an example of the overwhelming importance of locale-specific context. When I'm near UW, my driving behaviour changes significantly even though the laws do not.

To me this is the incredible danger of roads and our current reliance on cars. Should it be OK to be killed if you aren't paying attention?


I love this question. And while I’m sure it’s a two-way street, I would expect traffic patterns to normalize as pedestrians learn to anticipate what a driverless car does. Pedestrians (and culture in general) will wrap themselves around the technology.

It’s sort of analogous to how wagon roads were different until fast cars required some standardization. The new technology shapes expectations for what normal means in Tempe and in NY and in Pittsburgh.

The present cultural context assumes that a human brain is driving a car. Take the brain away, and the context has already changed.

couldn’t resist.

P.s. standardization has beaten back the whimsical variability of culture in fast food, housing, cars, most clothing, computers, phones, etc. I want more whimsical stuff.


> I would expect traffic patterns to normalize as pedestrians learn to anticipate what a driverless car does

In this case normalizing means running people over less and less?


One can only hope. We're in a safety arms race, where to win, the car companies have to emphasize how much more safe they are than everyone else.

It's a nice change from the horsepower arms race of the past.


I think you're overselling the extent of the difficulty here. People from out of state usually manage to get through New York, pedestrians and all, without getting into an accident.

A car won't get through a block as quickly as it could if it merely obeys the laws and makes sure it doesn't run into anybody. I'm sure this will cause great joy among people crossing the street and consternation among other drivers, but the problem isn't a blocking one as long as the car can actually avoid hitting people - even if it's more cautious than it needs to be.


>I think you're overselling the extent of the difficulty here. People from out of state usually manage to get through New York, pedestrians and all, without getting into an accident.

At this point, "it's easy for a person to do it" is probably better evidence that it's a hard problem in AI.

In this case, I believe it's an inverse learning problem. We observe other agents acting in an environment, deduce their policy, then implement that policy ourselves.

A self-driving car that can reconfigure its software to mimic the behavior of other cars it observes sounds difficult to me.


> We observe other agents acting in an environment, deduce their policy, then implement that policy ourselves.

This is essential, especially internationally. If you're an American you think it's perfectly normal to be allowed to turn right even when the traffic light is red. You don't get told not to do that when you hire a car in Italy. Now an automatic car can be programmed with different rules depending on which country it's in, but even in the same country, you wouldn't enter a box junction when the exit isn't clear, at least in most places. Turn up in rush-hour London and you find there's no way to actually get through the junction unless you do that, because others fill the junction, so when your light is green the exit isn't clear.

I remember pulling up to a red light late at night. Empty road, but the red light only goes green when it detects a car. It didn't. What would a fully automatic car do in that case?


I always drive differently, more cautiously and slowly, near colleges, especially during the night. It is much more likely for college students to be intoxicated, and even if they are paying attention they might not be capable of realizing a car is coming, etc... This is also true near elementary schools, not because the children might be intoxicated but because little children are more likely not to pay attention. I really hope self-driving cars are trained for such nuances.


You don't need it. Companies will program the cars for the various regions where it's profitable to do so, and block the self-driving features elsewhere. Much like geo-restrictions in streaming platforms, and such. Will anyone in the US not buy a self-driving car because it doesn't work in, say, Portugal?


That sounds much harder than just stopping if a pedestrian is on a trajectory to enter your lane. I mean, obviously it would be better to deduce their intentions and act according to that but I don't see any reason to say that that's necessary to have useful self driving cars.


A car that defaults to braking without any deduction beyond detection will never make it through an intersection, let alone a whole trip, in some parts of the world.


New York looks like a well regulated army training course compared with somewhere like Cairo.

Never is a long time, but I don't see self driving cars coping well when traffic is coming the wrong way down the street, let alone when cars are driving along the sidewalk


Yes, Cairo will probably have self-driving cars over a decade after New York has them. But that doesn't mean self-driving cars aren't a commercially viable product before they work in Cairo. Very few technological innovations work better than the thing they replace in all circumstances initially. Back In The Day there were a lot of places a horse could go that a Model T couldn't, but that didn't prevent cars from replacing horses.


Works for one SDV in a stream of human-driven cars. Doesn't scale well.


George Hotz had a great point about this and how Comma.ai handles defining the driving problem and what "driving badly" actually means, aka, interacting with and driving as other humans are expected to drive, quirks included. His whole talk is interesting but the short part about that is here: https://youtu.be/IxuU5L2MEII?t=38m11s


I think legally, the car has to stop for an approaching pedestrian if it's physically able to stop, so if the pedestrian doesn't "stop about a foot away from your lane" as expected, you don't run into him.

Though I don't know the NYC traffic code. In California, I believe 21950 paragraph C applies... the pedestrian that stops just before walking into the path of the car might be violating paragraph B since the driver may reasonably expect the pedestrian to continue walking.

(b) This section does not relieve a pedestrian from the duty of using due care for his or her safety. No pedestrian may suddenly leave a curb or other place of safety and walk or run into the path of a vehicle that is so close as to constitute an immediate hazard. No pedestrian may unnecessarily stop or delay traffic while in a marked or unmarked crosswalk.

(c) The driver of a vehicle approaching a pedestrian within any marked or unmarked crosswalk shall exercise all due care and shall reduce the speed of the vehicle or take any other action relating to the operation of the vehicle as necessary to safeguard the safety of the pedestrian.


The law as written and the law in practice, are different matters.

NYC would not function with actual adherence to the law.

There is never a break in pedestrians long enough for a car to turn in a fully legal fashion at most intersections, which is why human drivers wait for the largest mass of the built-up crowd to go and then inch their way through the stream of those just arriving at the intersection (with pedestrians seeing what the car is going to do and then flowing around the car). Technically, the pedestrians have the full right of way.

On the pedestrian side, blocks are short, and if you don't jaywalk it would take forever to walk up/downtown.

And if you were to fully split pedestrian and vehicle phases (and get New Yorkers to start adhering to it), you'd just grind the city to a halt, because you'd be moving fewer people/cars than the "maximum throughput" flows we have now.


Imagine how much safer NYC would be if the cars and pedestrians were isolated. A dedicated car layer with an extra lane on each side (at current level) and a new pedestrian layer built above the cars with physical barriers preventing falling to the level beneath.


Actually, NYC is very safe compared to most metro areas in the US. On a 2016 ranking, it was the 95th most dangerous metro area out of the 105 ranked.

https://smartgrowthamerica.org/dangerous-by-design/


Another way to increase safety would be to slow down traffic... a lot.

It used to be that cars, trains (and horses) traveled at just over walking pace, pedestrian crossings (and jaywalking) didn't exist, pedestrians crossed the street anywhere.

https://youtu.be/aohXOpKtns0?t=4m4s

Though I guess today's traffic densities wouldn't allow for slowly meandering through traffic.


Final scene, General Motors Futurama, 1939 World's Fair.[1]

[1] https://youtu.be/1cRoaPLvQx0?t=1237


Maybe, but good luck ever getting that built.


Typically the law is broad enough and redundant enough that who's in violation is simply up to the discretion of whoever has the blue lights.


Almost, but not exactly true -- the cop can issue a ticket, but a judge decides who is actually in violation (assuming that the person that got the ticket contests it).

Always-on cameras in self-driving cars will help protect the innocent (assuming the footage is saved and available through subpoena, otherwise, like police dash and body cams, few people will let their own camera prove them guilty)


Oh, you mean like the "too-dark-to-see" scapegoat camera in that very case the article is about? Riiiiight.


The thing the ML fanboys need to figure out is that the underlying logic of ML, and of statistical methods in general, is that averages wash away quirks. In fact, that is their power, and it's what people who traditionally deal with numbers know and use daily when we rely on averaging. So ML and statistically derived math will work in the typical case but will not deal with exceptions well. This is why people (people whom I've found are NOT actual researchers) who predict we'll all be jobless by 2050 need to chill out and realize the limits of current statistical methods.
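A trivial numeric illustration of the point, not a claim about any particular system:

```python
# A predictor fit to the average case handles typical inputs well
# and the rare exception badly.
import statistics

# 99 pedestrians stop ~0.3 m short of the lane; 1 walks straight into it (0 m).
stop_distances = [0.3] * 99 + [0.0]
model = statistics.mean(stop_distances)     # 0.297 m "expected" stopping point

typical_error = abs(model - 0.3)            # ~0.003 m: negligible
exception_error = abs(model - 0.0)          # ~0.297 m: large, and it's the case that matters
print(round(typical_error, 3), round(exception_error, 3))
```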


There is an obvious solution: don't stop, just slow down to <25 mph. On the streets where it is normal for pedestrians to approach that close, the speed limit is almost always 25 mph or lower and often the typical speed is much lower because of traffic. On streets with higher speeds, if a pedestrian is approaching that close to traffic, they either don't see the car or they have a death wish.
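Something like this, with made-up numbers and my own reading of the high-speed case:

```python
# Illustrative: cap speed near a closely approaching pedestrian instead of
# stopping, except on fast roads where such an approach is itself a red flag.
def target_speed_mph(speed_limit_mph: int, pedestrian_near_lane: bool) -> int:
    if not pedestrian_near_lane:
        return speed_limit_mph
    if speed_limit_mph <= 25:
        return min(speed_limit_mph, 20)   # crawl past rather than slam the brakes
    return 0                              # fast road: assume they haven't seen you, stop

print(target_speed_mph(25, True))   # 20
print(target_speed_mph(45, True))   # 0
```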


The car could also honk to warn and check for a reaction. I wonder what the rules and social conventions will be for autonomous horns!


Not related to the article, but I want to chime in on your point. Having spent some time in VA, aggressive driving and dangerous scenarios were infrequent in my experience. Then having spent some time in FL, aggressive driving was the norm and driving laws did not exist for most people. So I wonder, do self-driving cars need something like driving profiles for certain regions, where defensive/aggressive response behavior is ramped up or down according to what is observed? In a passive region where you only have to slam your brakes or jerk the wheel in rare instances, the profile would be tuned to give gentle responses to changes in traffic conditions. However, in regions where aggressive driving is rampant, the profile would have to be tuned to expect sudden responses to other drivers frequently breaking the law and putting people in dangerous situations, and to respond in kind: for example, to not slam the brakes when you get cut off, or not to speed up when an impatient person is literally 1 foot away from your bumper even though you're doing 5 over the limit already. Now that can all be bundled into one, and I think it already is in current self-driving cars. I'm asking specifically whether it would help self-driving performance and decision making if the decisions could be influenced by knowing how rough things can or can't get in that area.
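A region profile along those lines might be nothing more than a handful of tuning knobs; the fields and values here are invented for illustration only:

```python
# Hypothetical "driving profile": a few per-region tuning knobs, in practice
# adjusted from observed behavior (cut-in rate, tailgating frequency, etc.).
from dataclasses import dataclass

@dataclass
class RegionProfile:
    follow_gap_s: float             # headway to keep behind the car ahead
    cut_in_brake_threshold: float   # how abrupt a cut-in must be before braking hard
    yield_patience_s: float         # how long to wait before asserting right of way

VIRGINIA = RegionProfile(follow_gap_s=2.0, cut_in_brake_threshold=0.3, yield_patience_s=4.0)
FLORIDA = RegionProfile(follow_gap_s=1.4, cut_in_brake_threshold=0.6, yield_patience_s=2.0)
```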


Personally, I’d rather see driving behavior normalize nationally than cater to regional differences in enforcement. The driving laws in each city & state the US are less different than the locally accepted behaviors, but it would be safer all around if drivers and pedestrians all expected the same things, and if we took traffic & safety laws more seriously. More enforcement of existing traffic laws will save many orders of magnitude more lives than customizing autonomous cars for various cities.


The "reality should conform to my ideal, not my ideal to the reality" approach is and has always been a guaranteed recipe for failure. "Let's change everyone's minds and behavior everywhere" is much harder than changing software.


That’s an ungenerous way to paraphrase what I said, given that we already have traffic laws, and enforcement of those laws.


But the point is those laws aren't enforced the same way in different areas, even when they're the same. Jaywalking is illegal in NYC. Come here and see if anyone, cops included, cares. But other cities seem to care much more about that. It's not as simple as having the same laws, or having them enforced, because the underlying cultural attitudes are different. If you tried to crack down on jaywalking in NYC you'd be laughed out of the city.


It doesn’t matter what the cultural attitudes are. All it takes to change them is a significant increase in fines; then people would, statistically speaking, break the law less often.

We don’t have to convince the public to accept more enforcement, that’s not how traffic laws are made or enforced.

What we have is a system of cultural attitudes that’s bad for us. People are dying all the time because of lax attitudes. What you’re advocating is status quo. Why? Is it a good thing that a lot of people in NYC who choose to jaywalk get hit by cars?

I’m not on a mission to change things, I don’t care that much if people jaywalk. I’d prioritize speeding enforcement far above jaywalking. I’m just pointing out the obvious — that fretting over how safe it is for autonomous cars to obey the law is just “enculer les mouches”, to use the French term. It’s nitpicking the tiniest of margins while ignoring the primary cause of the problem.


Sure. You be the politician who does that in NYC. And good luck on your next election. It's not going to happen.


I'm not running for office in NYC, and your sarcasm seems to indicate you didn't read past the first sentence of my comment.

Jaywalking was @Sangermaine's straw man issue, not mine. I was thinking more about speeding and aggressive driving than about what pedestrians do.

Your comment seems to indicate you might not know how traffic laws are made or enforced. They are rarely touched by elected officials.

I honestly don't know why I'm getting such vigorous push-back here. My original comment was about safety and nothing more. People hate speeding and seat-belt laws too, yet it's a fact that lower speed limits and mandatory seat-belts save lives.


> yet it's a fact that lower speed limits and mandatory seat-belts save lives.

Near where I live there was a bridge that lorries kept getting stuck under, and the local council were very confused.

Then they measured it, and found the warning sign wasn't right -- it said 15' but it should actually have been 14'.

They changed the sign, and accidents dropped dramatically. Having a lower limit reduced accidents.

Unfortunately someone looked at the statistics and thought "we're still getting accidents. Last time reducing the sign on the bridge by 1' caused them to go down, let's reduce it further and save even more lives!"

So the sign was reduced to saying 10'. And accidents went up because people ignored the signs.

Lower speed limits may save lives in some circumstances, but in the UK the safest roads are the fastest roads, which seems to contradict your statement.

If you put a 30mph limit on the motorway, it will cost more lives:

1) It would be ignored by many, so the difference in speeds will be higher.

2) People would route down more dangerous back roads rather than staying on the safest roads.

3) The extra time spent on the motorway has a cost too. The average life is, say, 40 million minutes long. If your speed limit adds 40 minutes to 1 million people's day, that's 1 life a day you're costing.
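To spell out the arithmetic in point 3 (a back-of-the-envelope sketch using the figures above, not a rigorous valuation of anyone's time):

    # Back-of-the-envelope check of the "time cost" argument in point 3.
    minutes_per_life = 40_000_000       # ~76 years expressed in minutes, per the figure above
    extra_minutes_per_person = 40       # delay added to each person's day
    people_affected_per_day = 1_000_000
    lifetimes_lost_per_day = extra_minutes_per_person * people_affected_per_day / minutes_per_life
    print(lifetimes_lost_per_day)       # 1.0, i.e. roughly one lifetime of delay per day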

You could argue that we should stop all vehicles, but then the economy collapses and you end up with civil war as we can't support this population level with an agrarian society


> “The reasonable man adapts himself to the world: the unreasonable one persists in trying to adapt the world to himself. Therefore all progress depends on the unreasonable man.”

― George Bernard Shaw


A massive enough influx of self-driving cars all driving with the same profile is perhaps one of the few ways reality could be coaxed into this particular ideal.


Nothing could possibly go wrong there. /s

In other words, you're assuming a massive (no, even "10 000 cars" is not "massive"), coordinated, correct deployment. I'd like a pony and world peace while we're in fantasyland.


It can't and won't happen without a significant increase in police and traffic enforcement spending, which is not always a popular position, and there is little motivation. Some states just can't afford it. But in Maryland, for example, 55 almost universally means 75, and if an autonomous car didn't know that, it would cause many accidents due to others' unsafe and aggressive driving, even if it is technically not "at fault".


I drive a maximum of 62 in a 55 on Baltimore's inner loop, and I have to leave the right lane to pass about as often as I am passed myself. It's a safe speed.


Why’s that? Fines are income, not expenditure.


People don't want more highway patrol, red light cameras, and traffic enforcement, and tend to see it as a waste of resources. It also disproportionately screws lower income people and breeds contempt for police: people feel they are being persecuted over victimless offenses while crimes with actual victims happen every day.

I can't find a source for this, but I imagine they never make a net profit on tickets when a trooper makes 75k per year minimum, not including overtime or benefits[1], plus 60k+ for a new cruiser[2], plus all the other training and administrative overhead (dispatcher, etc.).

[1] https://www.criminaljusticedegreeschools.com/state-trooper-r...

[2]http://www.latimes.com/business/autos/la-fi-hy-cop-cars-2015...


Fines can increase without hiring more people or buying more cars. Fines could be made proportional to income, at least one other country has already done that.

Why would they need to make a net profit? That standard isn't applied to anything else the police do. We already have traffic enforcement as well as many other police services paid via taxes. The argument that it costs too much to increase enforcement in any way doesn't really hold much water. Whether people want enforcement is a different story, but the point I brought up that you responded to is purely one of safety. We can debate all the myriad reasons why we can or can't change our cultural attitudes, but it's still true that increased enforcement of human drivers would have a far greater impact on overall safety than fine-tuning autonomous cars to break certain laws in certain locales.

Letting driverless cars break some laws or have "aggressive" driving profiles will never happen anyway, so it's moot. It would be too much liability for the car manufacturers. So how would you make roads safer?


>Fines can increase without hiring more people or buying more cars

Fair point.

>Why would they need to make a net profit?

At the end of the day the resources either come from somewhere else or it costs more taxpayer money. Yes, I agree it would improve safety.

>The argument that it costs too much to increase enforcement in any way doesn't really hold much water.

It does though, because it is the only solution and the fact that it costs too much is the main reason it isn't done already.

>So how would you make roads safer?

That isn't my job and it isn't a pressing political issue in most places. If autonomous cars exacerbate the problem, they should lobby the state and local government to spend more on traffic enforcement to enable them to operate safely.


> It does though, because it is the only solution and the fact that it costs too much is the main reason it isn't done already.

More spending is not the only solution. And cost is not the only reason we're not doing more. Despite my suggestions to increase fines, the people making and enforcing traffic laws are not out to make a profit, and they are sensitive to the costs to drivers.

Where I live, the state lawmakers mandated raising speed limits in opposition to the Highway Patrol's safety recommendation for lower speed limits and increased enforcement. The speeding laws and their enforcement are lax here because of politicians, regardless of the known safety consequences, and not because of costs.

> That isn't my job and it isn't a pressing political issue in most places.

This is a thread about car safety that we both chose to participate in. I would hope that in doing so, all participants might be willing to offer constructive ideas and not just criticism.

@Rotdhizon offered an idea, as did I. Since you suggest mine won't work, it's fair game to ask you for alternatives, isn't it?

FWIW, in my mind some kinds of "increased enforcement" don't involve punitive measures. "NSC estimates traffic fatalities in New York fell 3 percent last year and have dropped 15 percent over the last two years. Safety advocates say the decline may be due to New York City's push to eliminate traffic deaths by lowering speed limits, adding bike lanes and more pedestrian shelters." https://www.cnbc.com/2018/02/14/traffic-deaths-edge-lower-bu...

Establishing spaces for peds & bikes is something that factors into the existing costs, and doesn't necessarily mean any more spending at all. The increased safety means that the state gets back some money that was spent on emergency services. Lowering speed limits doesn't cost anything.


>state lawmakers mandated raising speed limits in opposition to the Highway Patrol's safety recommendation for lower speed limits and increased enforcement. The speeding laws and their enforcement are lax here because of politicians

"Your comment seems to indicate you might not know how traffic laws are made or enforced. They are rarely touched by elected officials."

>I would hope that in doing so, all participants might be willing to offer constructive ideas and not just criticism.

People try to dismiss criticism by saying "well, do you have a solution?" but that is fallacious; you can make a valid point without offering a different solution that addresses that point.


Exactly: act according to context. Driving in SF is totally different from nearby places like Lakeshore or San Leandro (people from those places complain about how badly people drive in SF).


> For example, in NYC, there's nothing at all uncommon about a pedestrian beginning to cross the street in front of you and approaching the area of your lane where you will shortly be. Pedestrians typically then stop about a foot away from your lane, let you drive past them, and then continue walking. If a car slammed on the brakes in this situation, it would likely cause more accidents than not braking would cause.

> On the contrary, in Tempe, a pedestrian starting to do the exact same thing as above is likely much more a case of them not realizing your car is coming at all, in which case slamming on the brakes is appropriate.

And in many SEA countries you're expected to just drive around a pedestrian crossing the road (or a cow lying on it).


Your examples are absurd. Would you program a car to assume that all New Yorkers won't jaywalk? How about that all people in Tempe will rush into the streets?

Cultural context definitely matters, but for none of the reasons you've mentioned.

Writing specific use cases is insane for more reasons than I should have to point out to you. What if someone from Tempe is on vacation in NY? Should they be in danger because they follow a different culture?

You need to rely on universal rules and implement the most cautious strategies.

Pedestrian shows up at the curb: slow to a speed at which you can safely stop on a dime, and resume cruising speed when the pedestrian's intent is known.
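A minimal sketch of that policy (the reaction time, braking rate, and cruise speed here are invented for illustration, not tuned values):

    # Sketch of "slow until the pedestrian's intent is known"; all numbers are assumptions.
    def target_speed_mps(dist_to_ped_m, intent_known,
                         cruise_mps=13.4, max_decel_mps2=6.0, reaction_s=0.5):
        if intent_known:
            return cruise_mps
        # Largest v such that reaction distance + braking distance fits before the pedestrian:
        #   v*reaction_s + v**2 / (2*max_decel_mps2) <= dist_to_ped_m
        a, b, c = 1.0 / (2 * max_decel_mps2), reaction_s, -dist_to_ped_m
        v = (-b + (b * b - 4 * a * c) ** 0.5) / (2 * a)
        return min(cruise_mps, max(v, 0.0))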


Or... make jaywalking punishable by death, and autonomous cars won't need to stop, or even look for random pedestrians.


This, absolutely this. It's always bothered me why pedestrians seem to be universally given the right of way, when the irrefutable laws of physics dictate otherwise: a human, although having a lower speed, has a far wider view of its surroundings, and can change direction much quicker than a car whose driver is relatively slow to react and has a much more restricted view. In a collision, the bare human is always going to lose.

To take another example, it's pretty well accepted that it's entirely your fault if you drive into the path of an oncoming train and get killed. Then why does it seem so controversial that the driver should not be at fault if you get hit because you didn't look and just stepped into the road expecting the oncoming car to avoid you?


Outside of designated crosswalks, I don't think drivers are usually considered liable when a pedestrian walks in front of them if stopping would violate the laws of physics. Stop if possible, yes. I'd do the same for a squirrel, so that doesn't seem like too much to expect. At least the human might get cited for jaywalking.


Because the train is on tracks.


That doesn't change the fact that pedestrians are still far more agile than cars.

(Anyone else want to explain the downvotes?)


Don't be absurd. Some pedestrians are Usain Bolt and some pedestrians are Stephen Hawking.


It’s really a question of probabilities more than anything. The chances that a pedestrian walking into the middle of the road will continue and walk in front of the car differ depending on whether you’re in NY or Tempe.


It's a hard problem to solve that doesn't need solving. Glorifying the complexity of the challenge shouldn't distract from the questionable ethics of endangering lives in the pursuit of luxury.


I think that cultural context definitely influences driving safely. And that's going to be a very hard problem for companies to solve. But I think it's solvable, and part of the solution may be to change how people interact with streets. The introduction of the car caused a lot of problems in cities. People didn't know how to safely drive them, pedestrians didn't know how to deal with drivers, regulation was poor to non-existent, and city infrastructure couldn't handle large numbers of cars. But those problems eventually got fixed. https://www.detroitnews.com/story/news/local/michigan-histor...


My impression of self-driving cars, given these incidents, is that the whole thing was a pump scheme. They are simply not ready. Roads are not rail tracks. They are highly random and dangerous. You'd need something with a high level of awareness and not just visibility.

Humans have that awareness and lack visibility (you can't see 360 degrees, you can't see a wide range, you can't see beyond your car's body...). The awareness was enough to let us move, but the lack of visibility did certainly kill people.

The best we could have done today was a system that helps humans see. Then the system reports back how the human interacted with that particular situation (ignore, brake, etc...). Then maybe after 5 more years of work we could have had something.


Soon, Uber et al will lobby government to outlaw normal pedestrian behaviour like this. Any accidents will be due to illegal behaviour.


Well, that wouldn't be without precedent either. Apparently in the 20s, pro-car groups pushed social and political agendas against "jay-walkers", and indeed were responsible for popularising the term.

Source: http://www.slate.com/articles/life/transport/2009/11/in_defe...


A similar thing happens with voice recognition systems trained on American English. There are lots of videos of frustrated Scottish or northern English people not being understood.


Thinking about it this way is interesting. It could be an issue with bias or unbalanced training data.


> But it seems like a very non-trivial problem to do this correctly.

I had this same thought. You could theoretically have a universal rule where the self-driving car stops 100% of the time if it detects an object that is determined to be a pedestrian. But there are unforeseen problems with this approach. What if the model is overly cautious and there are many false positives? That could be extremely dangerous too, if a self-driving car is constantly stopping and going.


Stopping is only one option; slowing down is another.

However, I suspect a car may be aware a person has stopped significantly before the person would enter its lane. At 30 mph a car takes ~2 seconds to physically brake, and at an average walking speed of ~3 mph a person moves about 9 feet in 2 seconds. To stop before entering the car's lane, they would need to start slowing down even sooner.

So, I guess you really need videos with calculated velocities of people jaywalking to really tell if this is going to be a problem or not.
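For concreteness, the rough timing arithmetic behind the numbers above (the braking rate and walking speed are assumptions, not measurements):

    # Rough numbers for the jaywalking timing argument; assumptions, not measurements.
    MPH_TO_FPS = 5280 / 3600                 # 1 mph = ~1.47 ft/s
    car_speed_fps = 30 * MPH_TO_FPS          # ~44 ft/s
    decel_fps2 = 20.0                        # ~0.6 g of braking on dry pavement (assumed)
    brake_time_s = car_speed_fps / decel_fps2               # ~2.2 s to stop once braking starts
    brake_dist_ft = car_speed_fps ** 2 / (2 * decel_fps2)   # ~48 ft of braking distance
    walk_speed_fps = 3.1 * MPH_TO_FPS        # average walking pace, ~4.5 ft/s
    walk_dist_ft = walk_speed_fps * brake_time_s            # ~10 ft covered while the car stops
    print(round(brake_time_s, 1), round(brake_dist_ft), round(walk_dist_ft))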


I'm looking forward to seeing the full NTSB report.

This hints that Uber's mindset is focused on "obstacles", not "flat road". The first job of automated driving is to drive only on flat surfaces. Doesn't matter why it's not flat. Then you worry about where to go. That's how the off-road DARPA Grand Challenge vehicles had to work. "Flat road" is a pure geometry thing. Not much AI needed.

On road, non-flat road is rare, and you don't have to worry much about going off cliffs, rocks in the road, and such. So it's tempting to focus on "obstacles" to be tallied and classified. Tesla definitely has an "obstacle" focus, and a limited class of obstacles considered. Waymo profiles the ground. Looks like Uber had the "obstacle" focus.

If you have a "flat road" detection system, things the obstacle detector doesn't understand get stopped for or bypassed. So the vehicle isn't vulnerable to this flaw. There's a higher false alarm rate, though. And a non false alarm problem. A piece of trash on the road is likely to result in a slowdown and careful avoidance, not a bump as the car rolls over it. Still, better to take the safe approach unless you're on a freeway and the cars ahead of you just got past the obstacle successfully.

The Udacity self driving car "nanodegree" trains people to build obstacle detectors and classifiers. That may be a problem.


Is another way of framing this:

"flat road" => assume you can't continue until you prove it's safe

"obstacle detection" => assume you can continue unless there is something blocking the current path

The bit of autonomous vehicle work that I did started from a set of potential trajectories that were then pruned by obstacle detection. So we always assumed you could continue moving forward unless every potential trajectory was pruned. We ignored the cliff problem as beyond our scope.
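In pseudocode-ish Python, that framing is roughly the following (the helper names are hypothetical stand-ins for the real components):

    # "Assume you can continue unless blocked": keep moving unless every candidate path is pruned.
    def pick_trajectory(candidates, blocked_by_obstacle):
        viable = [t for t in candidates if not blocked_by_obstacle(t)]
        if not viable:
            return None           # only now do we stop: nothing survived pruning
        return viable[0]          # otherwise keep going (a real planner would score and rank these)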


> The first job of automated driving is to drive only on flat surfaces. Doesn't matter why it's not flat. Then you worry about where to go.

The old adage applies. Aviate, Navigate, Communicate. In that order.


The disturbing thing about this crash is that it's such a basic thing to test on a test track. A pedestrian walking in front at night, with nothing else unusual happening, should be on the list of scenarios to get 5 9s reliability on before taking it out in public.

The things you have to test in public are the complex real-world interactions, like these: https://medium.com/kylevogt/why-testing-self-driving-cars-in...


> get 5 9s reliability

That's it? Humans have greater than 7 9's reliability when driving. (As measured by fatalities per time of driving, and assuming a fatality takes 5 minutes, and average speed is 30mph.)

If the cause of a fatality is 10 seconds of failure, not 5 minutes, humans have greater than 9 9s of reliability.

Math: one fatality per (100 million miles / 1.13) ≈ 88.5 million miles; at 30 mph that is ≈ 2.95 million hours ≈ 1.06 × 10^10 seconds of driving per fatality. 10 seconds / 1.06 × 10^10 seconds ≈ 10^-9.

Fatality rate is 1.13 per 100 million miles.

Good luck programming a computer to even stay on and not crash with that level of reliability.

There's this belief that humans are horrible at driving, etc, etc. Until you run that math and realize, no, they're not. There's just a lot of driving going on.
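Spelling the calculation out with the same figures and assumptions as above:

    # "Nines" of reliability implied by the US fatality rate, per the assumptions above.
    import math

    fatalities_per_100m_miles = 1.13
    avg_speed_mph = 30.0
    miles_per_fatality = 100e6 / fatalities_per_100m_miles            # ~88.5 million miles
    seconds_per_fatality = miles_per_fatality / avg_speed_mph * 3600  # ~1.06e10 seconds of driving

    for failure_window_s in (5 * 60, 10):        # 5-minute vs. 10-second failure windows
        nines = -math.log10(failure_window_s / seconds_per_fatality)
        print(failure_window_s, round(nines, 1))  # ~7.5 and ~9.0 nines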


It's only really 9 9s of reliability if you assume somebody will definitely be killed as soon as you have a 10-second lapse in concentration, which is obviously not true.

In motorway driving you could probably shut your eyes for 10 seconds every 20 seconds and be perfectly fine almost all of the time. Especially at night time, or other quiet times of day.


If people actually did that, then someone would crash, and that crash would be factored into the final accident rate.

So that final reliability rates includes any and all things humans do while driving. (Not what they could do, what they actually do.)


Yes but what I'm saying is you don't need 9 9s of good driving in order to get 9 9s of accident-free driving. You only have an accident when you're driving badly and are exceptionally unlucky (I suppose the amount of bad luck required decreases as the badness of the driving increases).

In the scenario I mentioned you'd have at best a 50% duty cycle of good driving, but the accident rate would be substantially lower than 50% every 10 seconds.


> There's this belief that humans are horrible at driving, etc, etc. Until you run that math and realize, no, they're not. There's just a lot of driving going on.

I've been hammering this concept for a while :) Unimpaired, non distracted humans are actually really good at driving. We're optimized for processing visual inputs and making fuzzy decisions, have high dexterity and coordination, and fear death. All of that is alien to software.


Oddly enough, even impaired drivers are probably pretty good compared to some current self-driving cars. I'd give good odds that someone with a 0.10% BAC would have swerved or stopped for that woman that the Uber car hit.


Yep, and sadly the Uber car didn't even slow after the hit, so even a drunk person probably would have done things differently... enough, maybe, to be non-fatal. Not saying we should replace Uber's SDV fleet with drunk drivers; obviously the entire program is scrapped now.


> If the cause of a fatality is 10 seconds of failure, not 5 minutes, humans have greater than 9 9s of reliability.

If you're driving by the textbook, then you have two seconds to react to the car in front braking; the assumption being that you take one second to react normally, and get one extra second for safety.

On the other hand, in something like a play street the law grants you exactly zero seconds of response time.

The question is now, of course, whether only those <<10 s intervals count as the failure, or your entire approach to driving (e.g. driving while texting).


> then you have two seconds

Which takes us to 10 9s of reliability. Are there any computers with that level of reliability?

> The question is now, of course, whether only those <<10 s intervals count as the failure, or your entire approach to driving (e.g. driving while texting).

I thought about this all day. I feel that the window from the moment you can no longer avoid the accident (no matter what you do) until it occurs is the correct timeframe to measure. Not sure what that number is though.

Say you texted, got into a dangerous situation, then corrected. Is that really a failure? It's a risk of course, and enough people doing that will increase the final death tally. But for each individual driver it's not a failure, and if it's not a failure then you cannot count it.

i.e. doing it any other way would be counting it twice: Once for those situations that actually caused an accident, and again for taking the risk.

Or put another way, taking a risk and winning is not a failure. (Otherwise where do you draw the line on what a risk is?)


Fatalities seem like a pretty low bar to measure failures by... I would prefer to measure reliability by reported accidents rather than fatalities. That stat seems to be about 183 per 100 million miles in 2009, compared to 1.13 fatalities per 100 million miles. So quite enough to shave 7 9's down to 5ish...


The problem is that a lot of accidents are not reported and even injury severity can be very subjective (especially if there's an interest to do so[1]). I prefer counting fatalities because it's a hard endpoint - bodies are hard to hide.

[1] https://www.youtube.com/watch?v=_ogxZxu6cjM


Collisions with trees and with other vehicles? Or collisions with pedestrians? Maybe we're fine with five 9s of reliability with inanimate objects and with other 2-ton metal crash protection cages but prefer seven 9s with fragile unprotected human bodies (which are far more likely to end up as fatalities than just crashes).


> So quite enough to shave 7 9's down to 5ish...

From other comments in this thread, the correct number is actually 10 9's, not 7. So using the accident rate instead of fatality takes you to 8 9's.


I agree with the basic point, but does the 1.13 per 100 million miles figure include freeway driving?


Yes.


Freeway driving causes fewer accidents than street driving.


I think you were downvoted to dead because people saw this as an objection to the parent; more charitably, I assume you're just adding more color - but you might want to telegraph your intentions a bit clearer.


He was not downvoted to dead, his account is banned, all his posts show up dead.


Gotcha


5 9s sounds impressive, until you think that's 1 failure per 100,000 interactions.

There are individual intersections in the US that have that many vehicles pass each day.

I haven't seen estimates on number of specifically vehicle-pedestrian interactions, but 5 9s is nowhere nearly safe enough.


How many times does a car see this particular interaction, though?


It needs to handle new interactions too. Humans aren't predictable


That's also a factor. I've been driving for decades now, yet still need to handle surprisingly novel situations now and then.


I think the disturbing thing to me is that there wasn't any sort of layered reliability approach to the testing process itself. When you know you're going to be testing changes to a very complex, multi-sensor real-world system, a standard approach would be to lay a more foundational set of technology and practices under the system tests to serve as a conservative safety net.

For example, an overriding set of heuristics to brake if needed could be one precaution, and in terms of practices, the human monitor should have been swapped out at short intervals to alleviate attention fatigue. One can go back and forth on the details, but the need for a separate and independent treatment of safety reviews, apart from the primary product development activity, in this kind of endeavour should have been drawn from any of a number of existing practices in other industries.

And this is something that a lot of the casual tech articles completely miss: they criticize the primary autonomy tech, which of course will have faults, but if you know you will have faults, then where is the serious plan to operate safely, knowing the dev system is anticipated to have faults of many types?


You mean they shouldn't have disabled the car's own autonomous, non-AI braking system and kept it as a backup for when theirs failed? Yeah, but that would take a responsible, caring player, and we're talking about Uber here, a company built on ignoring laws, best practices, and common sense. This could easily be a case of manslaughter, but their cozy relationship with the AZ government will probably prevent that from happening despite Uber's egregious recklessness.


> You mean they shouldn't have disabled the car's own autonomous, non AI braking system and kept it as a backup for when theirs failed?

I admit that was a thought, but I'd hold back from saying that simple, specific thing was the missed item, or even that it was a realistic expectation to maintain that setup. Without a built-up safety culture that looks at the issue from multiple directions in detail, and has the authority to halt testing if needed, it could have easily been something else. And there are multiple industries, from civil/military aviation to medical to architecture, as well as many industrial occupations, upon which one could have based a safety framework.


That, and the inattentive safety driver. It's quite infuriating really, letting loose a ton and a half of fast-moving mass without even the decency of looking where it's going. There's really no excuse.


Unfortunately I think it's one of those things that is both inexcusable and guaranteed to happen.

It absolutely makes sense to put a human in a car you're testing and don't have full confidence in yet but are sending into real traffic. Not doing so would be ludicrously negligent. However, humans simply don't maintain focus for a long period of time without some sort of extreme, years-long training. The ability of monks to concentrate on a single thought for hours is correctly seen as very, very difficult. There's no way any regular person could deal well with a scenario like "sometime in the next 8 hours, you may or may not need to smash this brake pedal with a second's notice, but other than that you have no task." That's one of the things that most worries me about "almost there" systems like Tesla's.

That said, if you've decided to attempt this anyway, checking your phone is definitely not okay.


> but other than that you have no task.

Then give them a task! Install a HUD drawing a box in real time with the classification of nearest detected object (bike, pedestrian, car, mailbox) and put buttons in the wheel for the driver to confirm/reject the classification. This way:

a) you improve your machine learning

b) driver has eyes on the road

c) driver has hands on the wheel

d) driver is focused on incoming objects
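A toy sketch of that confirm/reject loop (every interface name here is hypothetical):

    # Toy version of the "label the nearest detection" attention task; interfaces are hypothetical.
    def attention_task(hud, wheel_buttons, detections, training_log):
        nearest = min(detections, key=lambda d: d.distance_m)
        hud.draw_box(nearest.bounding_box, label=nearest.predicted_class)
        answer = wheel_buttons.wait_for_press()            # "confirm" or "reject"
        training_log.append((nearest.sensor_snapshot,      # (a) grows the ML training set
                             nearest.predicted_class,
                             answer == "confirm"))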


I think there's some room for improvement though. Instead of "just one human in there", they could put them out in teams of two, and have periodic "awareness checks" that you have to respond to by pressing a driving-irrelevant button.


Dead-man switches do not work.


Don't work for what purpose, according to whom? Are you sure that applies here?

What I suggested there isn't really a deadman's switch ("stop if no one is around to do task x"), but something to periodically, randomly check the driver's attentiveness, and can raise a red flag if they take too long to respond.

A deadman's switch is different and enforces "do X if person Y stops doing this thing at this predictable interval" -- not what I was proposing.


Vigilance control. But even these fail with more experienced users, similar to how alarm clocks fail some people (they wake up just enough to get up and turn the alarm off, then continue sleeping).


Vigilance control. But even these fail with more experienced users, similar to how alarm clocks fail some people (they wake up just enough to get up and turn the alarm off, then continue sleeping).


I was proposing it as a way to validate engagement by the testing teams, not the mass market.


Like I said, these are especially hard to get right for frequent users, presumably your testing teams would log a lot of hours on these - just like train drivers, which have had this exact same problem.


Yes, you said it several times, just not with a citation or other means of grounding your claims that might lead us to a way of resolving the disagreement productively.


They do, but they must be very random so you can't activate them while asleep.


However, if the car saw the pedestrian, even as a false positive (presumably from far enough away to have enough time to brake/avoid it), I think it should’ve alerted the driver. The driver would’ve most likely started paying attention and avoided the crash.


> I think it should’ve alerted the driver.

I like the idea, and if these pruned false positives are rare enough I think it would be a great safety measure - you could even have the driver verbally label things, which might have both training (in the AI sense) and awareness (for the humans) benefits (https://en.wikipedia.org/wiki/Pointing_and_calling).

Note that "rare enough" needn't be all that rare, just that if it is multiple every second or something it clearly wouldn't be practical.


>> I think it should’ve alerted the driver.

> I like the idea

I'm not sure how well I would respond to a sudden alarm when everything has been deemed safe until a moment ago. I'd potentially need several seconds to figure out what is going on.

I would hazard a guess that a more fruitful approach might be to have a meter that dynamically adjusts the "riskiness" of the situation between a yellow, orange, and red zone, so that the driver (a) has to pay attention constantly, (b) gets information constantly, and hence (c) has a better chance to react earlier. I know if I see a "danger meter" getting into the orange zone, I'm not going to wait until it goes into the red before I start paying attention.


"I like the idea" doesn't mean I will like any given implementation of the idea - I certainly agree that there's plenty of room to get the interface wrong.

I want the driver paying attention constantly, but I want them paying attention to the environment, not a meter. A periodic "things are unusually interesting, what's going on?" query seems like a way to motivate that. But any actual attempt at a solution should be validated in testing...


GM SuperCruise uses a driver-facing camera to confirm that the driver does in fact have eyes on the road. Something like that would be trivial for Uber to add to its self-driving car fleet.


> However, humans simply don't maintain focus for a long period of time without some sort of extreme, years-long training.

And anyone who's achieved this ability is unlikely to be willing to be hired as 1099 contractor for whatever low pay is being offered.


Immaterial. "Let's keep killing bystanders, that's cheaper" is a good way to rouse a mob with torches and pitchforks; conversely, not very useful for getting the technology approved.


Humans are terrible for supervising mostly functional machines.

This was researched in the context of security guards, who had a 95% miss rate after just 20 minutes on the job.

It is entirely unreasonable to expect a human supervisor to detect and correct this type of situation in time.


I both can and can't blame the safety driver.

To expect full attention from a person that is not supposed to have anything to do for hours on end (different from driving, which requires constant inputs and adjustments) is not something that's compatible with humans.

With that said, they were looking at their phone instead of zoning out eyes-front.


yes for sure.

I wish articles about this crash would continue to hammer on the basics, to keep front and center that this crash was completely avoidable.

Even if there were no technical issues,

(1) the driver was on her phone before and during the crash. The whole reason for the driver to be present is to intervene in a situation like this.

(2) the car was driving too fast for conditions. Nobody should drive this fast in the middle of the city in the middle of the night. It doesn't look like the car could have come to any kind of emergency stop. Self-driving cars should drive more carefully than humans, not push against and over the speed limit regardless of outside conditions.


It was 38 mph in a 45.


The overriding principle is normally about driving to the conditions, not achieving a "target" speed.


This is one of the core problems of the car culture: ignoring the envelope except where mandated by law. A speed limit means, literally, "you are not allowed to go any faster than this" - but is interpreted as "you are supposed to go at this speed, or better 5 above". Nowhere else is this so prevalent: "there are 50 apples here, so you can't eat any more" is "you must eat all 50 of them" by the same logic.


The question I want to know is:

Did Uber doctor the camera video to make the road seem darker than it actually was?

If you recall, there was initially doubt that Uber was at fault because the streets appeared extremely dark, but I think they were buying time to delay the outrage at incompetent Uber.

The police initially didn't fault Uber based on this video.


That was a bad call on the part of the police.

You're only allowed to drive as fast as conditions permit. If it's dark enough that you can't avoid an obstacle by the time your headlights illuminate it, you need to slow down.


A self-driving car doesn’t need visible light to see a pedestrian. Between Lidar and IR, she would have lit up like a Christmas tree.


Sure, but my point is that whether or not the video was darkened (and regardless of how the car "sees") overdriving your radius of obstacle detection is an unsafe way to operate a vehicle.


That's not really true. Thousands of deer get killed on the road every year, because deer aren't smart enough not to be an obstacle when it's dark. Do we therefore change all the speed limits at night? No. It's a driving risk taken, just like any other.


Those deer are killed because thousands of humans aren't smart enough to not overdrive their headlights so they hit obstacles in the dark. Hitting a deer is not a small thing. You will seriously damage your car, and may be severely injured or killed yourself if the deer is tall enough to come through the windshield. In open range country you can also hit a cow. In some areas you can hit a moose.

Do we therefore change all the speed limits at night?

No, but we shouldn't drive like idiots either. It's not just "a driving risk taken", it's reckless and negligent behaviour.


I've driven in Palo Alto near 280, and from the distance, in broad daylight, I saw a buck run across an entire field at full speed and straight across the road about 3-4 cars in front of me, and get hit by the car. There was no way the driver could see what I saw, and he had no chance to avoid it. The deer was simply acting irrationally. To say that people who hit deer are "driving like idiots" is wrong.


I should probably let this go as it's off topic, but I don't understand the description of this incident:

- Was this on 280, or near 280?

- If the car that hit the deer was 3-4 cars ahead and you could see the deer "run across an entire field", why was "there no way the driver could see what you saw"? I'm trying to imagine what sort of obstruction could block the drivers view but not yours.

- How fast was the car going?

- How long did it take the deer to "run across an entire field"?

- Normally deer are active around sunrise and sunset and bed down during the day (source: deer hunting). It seems odd to see a deer running in "broad daylight".


The point isn't that speed limits need to change, it's that it is incumbent upon the driver to slow down if environmental conditions prevent them from traveling safely at the speed limit.


What is that supposed to mean? You can't have 100% safety at any speed. If it really is incumbent upon the driver to always drive at night in such a way that no collision can occur at all, then why not change the speed limit?

My whole point is that it's not the driver's responsibility to account for every possible scenario. It can't be.


Um, do you have a driver's license?

Nobody said anything about 100% safety in some provable way. But certainly it is very much the driver's responsibility to pick a safe speed. Driver's Ed (at least in most? EU countries) teaches that the posted speed limit is just that, the maximum legally allowed speed. As a driver you -- and nobody else -- are responsible for picking a safe speed depending on the environmental conditions.

In particular, the following are explicitly taught in Driver's Ed:

1) Don't overdrive your visibility (headlights, curves)... you must be able to look sufficiently far ahead that you can stop in the event of an obstacle appearing. (In the case of two-directional traffic in the same lane, you must be able to stop within half your visible distance.)

2) Reduce speed according to the weather: rain (aquaplaning), snow/ice (slippery), fog (reduced visibility, as in 1).

3) You must be aware of situational dangers and be ready to stop if necessary (e.g. parked cars on the side of the road between which children could emerge...).

Nobody else will tell you what the maximum safe speed at any given moment is. You're supposed to learn how to handle your vehicle during driver's ed, and then act accordingly.

Overdriving your visibility is just plain stupid -- you simply will not be able to avoid a collision with an object in time; you're basically driving blind and hoping for the best.


What is that supposed to mean? What's prudent and what is safe?

The reality is that driving at night isn't safe at all and that people often can't avoid collision from an obstacle suddenly appearing at night.

Maybe if they drove at 15MPH at all times, it would be possible in 99% of cases, but in practice we don't expect that.

Hence, we have collisions. Not because the drivers are always guilty of recklessness, but because driving at night has a risk to it. That's the point.


I seriously think you need to retake your drivers ed class. It’s entirely possible to drive at a speed at night that ensures you won’t hit an object immediately in front of you. That’s the textbook definition of not overdriving your headlights. Most nights aren’t pitch black thanks to street lamps, which is why you can drive at a reasonable speed. And in the cases where the night is pitch black, you can use your high beams when there’s no oncoming traffic. If there is oncoming traffic and you can’t stop in the distance illuminated by your headlights then, yes, it’s incumbent on you to slow down. These are basic rules of the road, not some unobtainable standard to strive toward.


I already told you what is safe: your headlamps must illuminate a distance within which you can stop (including reaction time). At 50 km/h this is ~40 m. This is a distance that your low beams will illuminate even on a dark night. If you want to drive faster you need high beams or some other source of light (street lamps).
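For reference, the standard arithmetic behind that ~40 m figure (the reaction time and braking rate here are rule-of-thumb assumptions, not exact physics):

    # Rule-of-thumb stopping distance at 50 km/h; the constants are driver's-ed conventions.
    speed_kmh = 50
    v_mps = speed_kmh / 3.6                   # ~13.9 m/s
    reaction_time_s = 1.0                     # assumed reaction time
    decel_mps2 = 3.9                          # comfortable braking assumed by the (v/10)^2 rule
    reaction_dist_m = v_mps * reaction_time_s           # ~14 m
    braking_dist_m = v_mps ** 2 / (2 * decel_mps2)      # ~25 m
    print(round(reaction_dist_m + braking_dist_m))      # ~39 m, i.e. the "~40 m" above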

If you don't overdrive your headlamps, then objects will not just "suddenly" appear in front of you unless they are doing something genuinely stupid (e.g. jumping in front of the car). In that case the police or a judge will decide whether you were driving prudently or safely.

We have collisions (during day and night) mainly because people don't pay attention or don't follow some basic rules (visibility, distance to car in front,...). This isn't rocket science. Don't blame fate or "general risk" if you cause a collision.


>The reality is that driving at night isn't safe at all

Where did you learn that? Sources, please!

Because the reality as I know it disagrees with this claim.



And lo, that source's number one safety tip for night driving:

> 1. Allow for enough distance to stop. We recommend that you gauge this distance using your headlights. Low beams should allow you to see up to 160 feet away, while high beams should illuminate about 500 feet in front of you. Make sure that, if and when you must brake hard, that you can brake within those distances.


And what a great tip it is! "Drive so that if you need to brake, you don't collide". No shit, Sherlock.

In other words, people just need to always drive far slower than what everyone believes they can handle in a situation that they aren't trained for, and everything will be fine. Everyone who collides anyway is just reckless and stupid!

Now please excuse me, I have exhausted my sarcasm budget for today.


Good start, but that doesn't say that night driving isn't safe. It says that most accidents happen at night. Accidents are rare.

The page also lists drunk and drowsy driving as one of the major reasons why night driving is less safe. These are correlated risk factors, but neither has to do with the conditions being inherently unsafe.

The page doesn't talk about the extent to which drunk and drowsy drivers account for nights being dangerous.


>> Good start, but that doesn't say that night driving isn't safe.

It also doesn't say that it is safe. It lists all kind of risks that make it unsafe in one way or another. Whose definition of "safe" are we supposed to go by, anyway?

We can probably at least agree on what isn't safe: Crossing a road at night into incoming traffic, as a pedestrian.


It's not about accounting "for every possible scenario."

Arizona has a Basic Speed Law, which says "A person must drive at a speed that is reasonable and prudent under the existing conditions."

If the speed limit is 55 MPH, but conditions (rain, darkness, etc.) would prevent you from being able to safely stop in time to avoid a collision, then you are not allowed to drive at 55 MPH until conditions change.


> Do we therefore change all the speed limits at night?

No, but we legally require drivers to modulate their speed to suit the conditions.

> It's a driving risk taken, just like any other.

So generous of drivers to take up that risk to others.


In British Columbia, the speed limit is defined as the maximum speed when the road is bare and dry and visibility is good. From the driver's guidebook[1]:

> Aim for a speed that’s appropriate for the conditions in which you are driving. The posted speed is the maximum for ideal conditions only. Choose a slower speed if the conditions are not ideal — for instance, if the roads are slippery or visibility is limited.

If a driver was driving too fast to react and stop without hitting that deer, then that driver was driving too quickly.

[1] http://www.icbc.com/driver-licensing/driving-guides/Pages/Le...


That's not really what tends to happen with deer. The situation starts with them on one side of the road deciding to cross. Then partway through they freeze. There's often no time to react. Deer are very poorly adapted to dealing with roads. They're camouflaged, which makes them hard to see. And while freezing is a perfectly good strategy against wolves, it absolutely sucks against a Buick.


I have a question for you: did you create this account just to troll around?


I drive that stretch of road almost every single day, at night.

Yeah, the video looked different than what my eyes see because my eyes have better low light sensitivity than most cameras.

However (if this makes sense): the video released was an accurate representation of what you see driving on that road. A few weeks ago, I had somebody almost walk into my car out of the shadows (this person was paying attention though, and stopped in the empty lane next to me). It turns out it's REALLY difficult to see people in the streetlight shadows.


> However (if this makes sense): the video released was an accurate representation of what you see driving on that road.

There have been many videos on that same road which were super clear. Only the Uber video was really dark. The other videos weren't. Further, darkness doesn't impact LIDAR.


If you want to compare conditions, were any of those videos taken without moonlight? How do we know the city didn't increase the street lighting following the incident?


I can recall thinking the video was very bad quality. To the point where I was questioning if the video was from some dashcam they installed instead of the self driving car tech.


The video was from a cheap dashcam, not any of the sensors the car uses. The LIDAR would have seen it clearly. So would your eyes, which have wider dynamic range than any but the best cameras.


I recall noticing that the human driver of the car was not looking forwards immediately prior to the accident, which presumably makes him/her ultimately responsible for the accident?


If the brakes don't work because of an engineering defect, who is responsible? In this case it's the LIDAR that failed.


The article said the LIDAR was fine. It was the decision making software that failed.


The decision making for the LIDAR?


If I were to speculate: Decision making which takes LIDAR as one of the inputs.


That was just a dashcam, not the vehicle sensors. NTSB reported that.


I highly doubt it, considering the video came from Tempe police, not Uber. Regardless, does it matter at this point? Uber is basically saying here that the sensors did detect the pedestrian but the issue was that the software to ignore false positives was tuned incorrectly.


Tempe police obtained the video from Uber, Uber could have easily altered it before handing it over. Uber has behaved badly in the past, and they had the motivation and past experience to do what it takes to advance their agenda.

Remember, Uber terminated its autonomous driving testing as a result of this preventable tragedy.


> Tempe police obtained the video from Uber

If by "from Uber", you mean they popped the SD card from the Uber-owner car's dashcam, sure.

> Uber could have easily altered it before handing it over

See my comment from the last time someone had suggested the idea of doctored videos: https://news.ycombinator.com/item?id=16644663


What's the timeframe on the video being handled, given to police, etc. Do the police have the authority to take any video files recorded by a vehicle involved in an auto accident? Is a warrant needed?

It's more likely that the police opened an investigation, and as part of the investigation, handed over dashcam footage. We have no reason to believe the police simply confiscated the video files on the night of the accident.


IIRC the video was released by the police the next day

> We have no reason to believe the police simply confiscated the video files on the night of the accident.

From my own experience, the police will take dashcam video evidence the first chance they get because it's the best data they can get their hands on while on a scene. Source: I had interactions with the police on two separate occasions (once when my dashcam happened to record a car break-in taking place as I was driving into a parking lot, and once when some guy road raged me). On the second occasion, they were reviewing the video less than 15 minutes from the time of the incident and had the whole thing resolved within the hour.

Besides, as I said earlier, Uber itself is making a statement admitting it was at fault, and the NTSB investigation has way more forensic data than the crappy quality dashcam video police released, so I don't see how doctoring the dashcam video would fit into the narrative.


> so I don't see how doctoring the dashcam video would fit into the narrative.

Well, at the end of the day this all involves a strong element of PR & perceptions, so regardless of the quality of other data, it would be very much in Uber's interest to have a super dark video be in the initial stories about this accident.


> it would be very much in Uber's interest to have a super dark video

Why? As soon as the video came out, the first thing people (correctly) pointed out was that the dynamic range of the dashcam was crap. Besides, everyone knows that self-driving cars are essentially giant self-telemetry collection machines, so one would have to be either extremely short-sighted or incredibly stupid to think they would be able to get away with a "well it was too dark!" lie after the NTSB got on the case and the media caught wind of the shenanigans.

Also, as I said in my linked comment above, the logistics of actually attempting to tamper the video would be borderline comical.

It's much more plausible that the dashcam dynamic range was in fact crap and that the police officers at the scene were smart enough to figure out to just pop off the SD card off the dashcam.


>It's much more plausible that the dashcam dynamic range was in fact crap and that the police officers at the scene were smart enough to figure out to just pop off the SD card off the dashcam.

...and Uber's lawyers figured that most people have no idea what "dynamic range" is and will eat up "it was dark" as a legitimate excuse for this manslaughter.

There likely was no need to manipulate that video. It was still quite dishonest to release it without clarifying that it was nothing like what either a human or Uber's sensors would see in BIG BOLD RED LETTERS. Most people don't really know better.


> Most people don't really know better.

Well, if this thread is any indication, people seem to always be confusing the roles and actions of lawyers, PR and the police to fit some bizarre conspiracy theory. The fact is that the dashcam video was released by police, and Uber admitted to being at fault with this latest statement. I honestly don't get why you're talking about lawyers now.


I talked about "lawyers" because I don't know which part of Uber is responsible for damage control and would carefully vet all public statements in such scenarios. I assume it's the legal dept. because that's my understanding of procedures in the place I worked at.

The facts are that:

1. The dashcam video is highly misleading, as many people assumed that the accident would have been hard for a human to avoid based on the video. See threads like [1] - please read the top comment there.

2. The police released a misleading statement, echoing the same sentiment - that the accident was hard to avoid.

3. Uber sat silent for 50 days - long enough for people to stop caring - before admitting fault.

There is no conspiracy here. The actions of the police department have misled the public in Uber's favor (yes, we can assume incompetence as the reason), and Uber used this to their advantage by keeping silent for two months (as any company likely would). This is expected.

What I don't expect is the public cutting Uber any slack in this scenario.

[1] https://www.reddit.com/r/uberdrivers/comments/866xmv/video_f...


> which part of Uber is responsible for damage control

Damage control typically falls in the realm of public relations, not legal. Legal can help inform PR on what to say, but then again, so can any other relevant department, including engineering.

> Uber sat silent for 50 days

I was under the impression this was by request from NTSB. I mean, even your own criticism is that the dashcam video being released prematurely caused the public at large to reach inaccurate conclusions. Investigations take time. We can't have the cake and eat it too.

BTW, the comment you linked to seems pretty representative of the general response to the video: "yes it looks dark, but Lidar should've seen her"


>I mean, even your own criticism is that the dashcam video being released prematurely caused the public at large to reach inaccurate conclusions.

That is correct.

>BTW, the comment you linked to seems pretty representative of the general response to the video: "yes it looks dark, but Lidar should've seen her"

LIDAR is the red herring here. The less crappy camera would have seen her. The naked eye would have seen her. Pretty much anything but that dashcam would. Even the dashcam, perhaps, if it was set to higher exposure.

And that was someone who knows about LIDAR talking - most of people don't know what a LIDAR is. And so a lot of people really accepted the "it was dark" line of reasoning, never stopping to think that it would simply imply that Uber was driving beyond its headlights.

>I was under the impression this was by request from NTSB... Investigations take time.

Right, that's where "cutting Uber some slack" comes in. It can't feasibly take 50 days to come to the following conclusion: "we screwed up here, the pedestrian clearly shows up on vehicle cameras/sensors the moment she steps into the roadway" - which seems to be the case here (again, even now, we can't say that for sure!).

I can't blame Uber for not admitting fault - it's in their interest to do so. I do blame the city and the general public for letting Uber get away with that, and creating an overall victim-blaming sentiment (which was there from day 1 - including searching the public records of the victim).


To be fair, I doubt the police just "popped the SD card out of the dashcam"... more likely Uber had managers (and lawyers) at the scene within a short time window, and Uber had time for very careful deliberations before handing anything to the NTSB (though as you say, it's still more likely they got the raw dashcam footage instead of an altered video).

Here is the more likely order of events:

1. Driver calls supervisor

2. Supervisor redirects call to senior Uber lawyer

3. Uber lawyer says "Don't do anything until we get to the scene. Be polite to police but tell them to wait until our legal team arrives"

4. Legal team at scene, talks with uber CEO (and senior lawyers over phone) then after an hour or so tells NTSB they will hand over footage after engineer arrives and extracts the dashcam footage

5. Engineer hands dashcam footage to lawyers, who then hand it to NTSB


That still sounds pretty farfetched IMHO. You're basically saying that both the supervisor and a senior lawyer were on call at 10:30pm on some random day _and_ were immediately available upon request _and_ knew exactly what to do and who to call next _and_ the police were just twiddling their thumbs the whole time, despite there being a dead body on the scene. And lawyers/engineers can teleport from SF to Tempe somehow since there's no way they would've been able to fly in with that short notice so late at night.

If anything, I'd imagine that the police would be blocking access to the accident area and the car, especially to an alleged Uber employee trying to muck around with evidence without a government-sanctioned body overseeing forensics.

If I were to make a guess, I'd imagine it would've gone like this:

- "omg, I just hit someone"

- driver sees victim on the ground, panics and calls 911

- police and paramedics arrive

- police interviews driver, gets the account of the story, checks paperwork, gets driver's information

- driver tries calling supervisor to escalate the issue (potentially unsuccessfully)

- paramedics declare victim dead, driver gets upset, body eventually is taken away while police tries to calm the driver

- tow truck arrives, takes car into police custody

- next morning, local police investigators look at the car in the impound, find the dashcam and look at the video. Uber ATG wakes up to the bad news and scrambles to figure out what went wrong and what to do next. The engineering team is still nowhere near Arizona and probably too busy wading through TBs of sensor/AI data to even remember there was a cheap dashcam installed in the car.

- Police releases a statement (w/ the dashcam video)

- NTSB gets involved in the investigation and gets in contact with Uber to understand how to acquire forensic data out of the SDV's databases.


So our differences in interpretation are exactly what distinguish a "fantastic lawyer" from a "mediocre lawyer" - I agree that if Uber has mediocre lawyers, your scenario is more likely. A "fantastic lawyer" will have someone competent at ground zero within 45 minutes, dealing with the police on behalf of the company and handling the temp worker who was driving the car (otherwise the only representative of the Uber corporation, and likely to incriminate the company).


> A "fantastic lawyer" will have someone competent at ground zero in 45 minutes

Gonna be honest, that sounds like something straight out of a Hollywood movie. In reality, when suddenly presented with an unexpected event, real people simply aren't able to coordinate narrative-altering eloquence training so perfectly on 30 minutes' notice, close to midnight, on some random day. That's kinda like saying that if a McDonald's night-shift employee witnessed a customer choking to death on a lye-contaminated drink, a superstar legal team would be on the scene in 45 minutes to instruct the employee on how to avoid incriminating the company when talking to the police. That simply isn't how the world works.


If they are doing a self driving car pilot, it would be stupid not to have some junior legal staff in the same or nearby county. The senior lawyer just calls the junior staff (who are always potentially on the clock when it comes to that type of work) and says "get your ass down there!"

That doesn't seem like a Hollywood movie to me, that just seems like the way competent people operate.


Again, the whole notion seems completely misinformed. It would be ridiculously wasteful to have junior lawyers sit around in a completely different office from HQ just in case something happened.

And if you ever do on-call, you need a rotation system. In my team, on-call rotates among 9 people, none of whom is required to get their ass anywhere in the wee hours of the night even if the world is ending. Requiring such commitment on top of a regular workload is completely unrealistic, and the notion that it actually got carried out as efficiently as possible is borderline wishful thinking. The alternative is shift work, which is fundamentally incompatible with the nature of the legal industry, and thus doesn't actually happen.

What's more, talking with the police is not a skill that is cultivated by corporate lawyers at all, or any lawyer for that matter, especially if we're talking about a 911 call.

Perhaps another obvious indication that Uber would be unable to deploy SWAT-style legal teams is the number of outages they experience, and their durations, despite having literally hundreds of highly paid engineers on call who are able to roll back bad deploys from their phones.


You've convinced me you understand the situation better and are likely right- I stand corrected.


Um, nope. While this might be valid for HN, I see the scapegoat video paraded around as "proof" that nobody could have done anything, every.damn.time the collision is mentioned.

PR: mission accomplished.


I agree with this, Occam's Razor suggests your description of events is more likely.


It shouldn't matter at all how dark the roads were, the vehicle was equipped with LIDAR.


It's easy for Uber to point to this video and say: "Look how dark it was! If it was you driving, you'd be in the same position we're in!"

Look how much time it has taken to learn that it was Uber's sloppy programming at fault in this tragedy.

The good news for Uber is that this comes out far after the event when people are in general less likely to care, or even remember the details of the accident.


The whole point of this piece is to put the blame on a human programmer, of course.

What's there to "sloppily" program here? Most likely, somebody set an arbitrary threshold constant 0.05 too low. The data that threshold was working with would still have been provided by the machine learning system, which is opaque to humans.

We'll have to face it that if we want self-driving cars, there isn't some magic procedure we can follow to not have accidents due to false classifications.



I think it was just a choice of a crappy dashcam (intentional? IDK); someone coined the term "scapegoat camera" in one of the discussions.

In other words, off-the-shelf dashcam equipment sucks in the dark and doesn't convey a realistic scene, no doctoring required. Here's a random piece of dashcam footage from elsewhere - watch for the pedestrians! https://youtu.be/typj1asf1EM The video is unaltered - just crappy, they were actually clearly visible.


The cameras used in applications like these are not like the typical 24-bit RGB cameras in phones or webcams that need gain adjustment: their sensors support a very high range of brightness values ("high dynamic range"). But the video released to the public and the media would be MPEG/H.26x/etc., which doesn't support HDR - or at least most video players wouldn't. The police probably lacked the expertise to know to demand the raw high-range camera data and instead just asked for "the video", and it's possible Uber deliberately took advantage of their ignorance to release a video that mitigates their culpability.


The video was not from a "high dynamic range" sensor, regardless of whether the car actually had one installed. I believe it did have such a "high dynamic range" sensor, which shows the blatant disregard for the truth that Uber is showing here by releasing such misleading video.


The video was very obviously obtained from a dashcam solution that isn't actually used to do any of the autonomous driving. It's possible they don't even record the raw footage from the AI cameras, that's a huge bandwidth hog.


They could easily buffer the last N minutes of recorded data in a black box. Not doing so would seem pretty irresponsible.


Uber? Irresponsible? Inconceivable!


> We’re actively cooperating with the NTSB in their investigation. Out of respect for that process and the trust we’ve built with NTSB, we can’t comment on the specifics of the incident.

Hey look, somebody knows how to follow the script! Uber may be grossly negligent, but at least they respect the process unlike Tesla.


It's actually the rest of that statement which shocks me a bit:

> In the meantime, we have initiated a top-to-bottom safety review of our self-driving vehicles program,

So they actually think they can continue to work on self-driving vehicles after this, and not scrap the program entirely. That's...not what I expected. No municipality in the country should allow Uber to test on their streets after they killed someone, almost certainly because of their recklessly fast development. That's not something you come back from by promising to Do Better, especially not when there are more than a few competitors already way ahead of you.


> No municipality in the country should allow Uber to test on their streets after they killed someone

This is pitchfork mentality.

Many industries have to deal with unintentional deaths. Car manufacturers sometimes have to deal with deaths resulting from hardware failure. Food companies sometimes have to deal with recalls due to food poisoning deaths. If the response to a death is to ban all further production work, eventually nothing is ever going to get done because it's inherently impossible to be perfect all the time.

It sucks that it had to come to someone's death, but the rest of the statement said:

> we have brought on former NTSB Chair Christopher Hart to advise us on our overall safety culture

so it sounds like Uber is at least trying in earnest to put the house in order.


Or a cynic could interpret that as "we've hired a crony of the people who will be judging us so he can lobby them to let us off easy".


Said cynic sounds pitchforky. Even HIPAA enforcement tries to work with non-compliant companies to get them compliant before resorting to dishing out its notoriously expensive fines.

At the end of the day, the NTSB isn't in the game of shutting down companies. The NTSB would much rather see self-driving vehicle development happening safely, since the fact of the matter is that it _is_ happening, despite all of its imperfections.


Hiring a former NTSB chair while there's an active NTSB investigation sounds a lot like a revolving door to me.


Um, to go back to being NTSB chair, he would have to be re-appointed by the POTUS...

One does not just hire this guy as if he were some sort of money-chasing sleazeball. I mean, just google the guy's bio.


I may have abused the 'revolving door' term a bit, but I think you are missing the point of hiring former regulators.

You are not hiring them for their bio, achievements or abilities. You are hiring them to signal existing and future regulators that being nice to you can be highly profitable.


You keep saying Uber hired him, but where did you read that? The statement in the article said "we have brought on former NTSB Chair Christopher Hart to advise us". Given how high profile Hart is, and given the focus of his career, I understood that to mean that he agreed to review safety practices and make safety policy recommendations - a topic he's an expert on. Unlike wording like "joining to lead a safety program", this arrangement doesn't really imply profit-driven motivation, IMHO.


I don't see any other way to interpret their statement:

> Uber, which suspended testing of autonomous vehicles after the accident, on Monday said it was looking at its self-driving program and said it retained Christopher Hart, a former chairman of the NTSB, to advise it on safety.

> “We have initiated a top-to-bottom safety review of our self-driving vehicles program, and we have brought on former NTSB Chair Christopher Hart to advise us on our overall safety culture,” Uber said. “Our review is looking at everything from the safety of our system to our training processes for vehicle operators, and we hope to have more to say soon.”

https://www.reuters.com/article/us-uber-selfdriving/uber-hir...


> said it retained Christopher Hart

Ah I see, you're talking about "hiring" as in paying a retainer fee. I thought you were saying he was hired as a full time employee, my apologies.

Assuming Reuters' wording is accurate, that actually seems fairly reasonable and in line with how I understood it.


It seems like it was actually Uber that disabled the safety settings that were there to prevent this, according to the supplier of the vehicle's collision-avoidance hardware: "Uber Technologies Inc. disabled the standard collision-avoidance technology in the Volvo SUV that struck and killed a woman in Arizona last week, according to the auto-parts maker that supplied the vehicle’s radar and camera."

https://www.bloomberg.com/news/articles/2018-03-26/uber-disa...


This whole line of reasoning frustrates me somewhat. The story of "Uber Disabled the Safety System" paints Uber out to be negligent for doing so. Outside of consideration of all the other facts (which unlike this one, do indeed suggest that Uber are to blame), this element of the story is not indicative of negligence. Of course they disabled the existing safety system - they were aiming to build a new, safer one that could operate without reliance on or interaction with the existing one. That (again, on its own) seems like a totally reasonable engineering call.


They are absolutely negligent for doing so. I wouldn't have made that decision, because in a safety-critical system it's ALWAYS preferable to have a backup system, especially when the system you're working with is unproven software that you don't fully understand. It doesn't matter that the old system would have interfered with the new system, and in fact if the two systems did interfere it would behoove you to understand why. The decision speaks volumes about their engineering culture leading up to this incident.

You can't just call something a safety system. You have to prove that it is a safety system by testing it, which is something that Uber hadn't done before they disabled the existing system.


So I agree that the system was insufficiently tested to be operating on public roads and endangering lives, and that lack of testing was negligent; I also agree with your comment about their engineering culture.

But those two points seem distinct from the idea of disabling a system in order to have a better understanding of what's going on. Suppose they had built a car that was safe, conditioned on the presence of a black box system whose internals they likely didn't have access to - would that be satisfactory?


Even with the red herring about the safety overrides being a black box, yes, it would be satisfactory - see my other post. Not only would the triggering of the safety override generally indicate a failure of the autonomous system, the use of failsafe overrides to catch corner cases should be a feature of the final system.

If Uber could demonstrate, through the analysis of a statistically significant number of events, that its system was actually safer without the car manufacturer's override (e.g. if all the events were false positives), then it would be appropriate to disable it at that time. That's how you do it.


Replying to your other comment here as well - the inclusion of conceptually simple safety mechanisms is important (i.e. I agree), as is the broader scheme of including both hardware and algorithmic redundancy to improve safety. I also agree that "live" initial testing of such safeguards is inappropriate, and as above, Uber clearly failed to do appropriate testing.

However, you describe the (potential) black box nature of the existing system as a red herring - to be honest, this is what I'm most interested in. My opinion is that including a black box component in a safety-critical system would be inappropriate. Do you disagree with that? If your answer is "probe it until it's no longer a black box and then include it", would you not consider that to be overall semantically equivalent to "don't include a black box"?


It is a red herring because:

Firstly, it assumes Volvo is not sharing the parameters of the system. It seems unlikely that Uber is installing an automated driving system into these cars without the cooperation of Volvo, especially with the agreement to ultimately get 24,000 autonomous-system-equipped cars from them.

Secondly, if Uber could instead determine what it wants to know about the parameters by testing, then the question is irrelevant, as are the semantics.

Thirdly, it is presumably safe for humans to be driving cars without knowing the exact parameters, and so should not present any particular problem for the autonomous system - if the emergency brakes are triggered, it is likely to be a situation in which it is the right thing to happen, and possibly a result of the autonomous system failing. Just as for human drivers, an autonomous system is expected to usually stay within the parameters of the emergency system, without reference to those parameters, just by driving correctly. For example, if the emergency brakes come on to stop the car from hitting a pedestrian because the autonomous system failed to correctly identify the danger, what difference would it have made if the system knew the exact parameters of the emergency braking system?

Lastly, the road is an environment with a lot of uncertainty and unpredictability. If the system is so fragile that the tiny amount of uncertainty of not knowing the exact parameters of the automatic braking system raises safety concerns, then it is nowhere near being a system with the flexibility to drive safely.

It is possible that a competent autonomous driving system might supplant the current emergency braking system, in which case the way to proceed is to demonstrate it in the way I outlined in the last paragraph of my previous post.


Thanks for answering in so much detail - I think the last two points make a compelling case for not disabling the system, even in the true black box case, and the first two are very compelling in the real world, even if they don't apply to the thought experiment of an actual black box. You've broadly changed my mind on this issue :)


I should have said that your concern is valid where two systems might issue conflicting commands that could create a dangerous situation; it's just that I don't see that as likely in this particular combination of systems.


It's perfectly appropriate when both systems are intended to enhance safety. It does not matter how the internals work, just that the system enhances the safety of your solution. It could be a webcam beaming images back to a bunch of Mechanical Turk operators and it wouldn't matter as long as it was proven to work.

It's not like Software of Unknown Provenance where it's running in the same execution environment and you don't have any control. This was a completely separate system with independent sensors that was marketed to stop the car in the exact situation the car failed to stop in. Disabling it was foolhardy.


Seriously!?

The responsible position would be to retain the existing system in a passive standby mode, capable of overriding the active 'under test' one in the event certain safety/detection thresholds were exceeded.


IIRC someone had said that there are physical limitations to what one can actually do (namely, having to physically unplug the connector for one safety system in order to be able to add your own at all)


Sure, if you want to move fast and disrupt yada yada. They shouldn't be putting hack-job self-driving cars on public roads. Either do some engineering so that they keep the backup system in place or keep it on a test track. This is Chernobyl-like irresponsibility.


Are you an engineer or at least familiar with vehicle engineering? Do you know for a fact that Waymo/others don't disable built-in safety mechanisms in similar fashion? If not, perhaps it may be best to not pass armchair judgment...


As the vehicle can obviously be operated with the emergency braking system in place, it seems highly implausible that an autonomous system could not be designed to work with it in effect. If Uber did not do so merely because it would have been inconvenient or taken more time or effort, then that would be even more damning than a bad, but well-intentioned, engineering decision.


This argument frustrates me more than somewhat, especially if it is what Uber's engineers were thinking when they removed the last safeguard that could have avoided this fatal accident.

There is a reason for there being such a thing as a crash-test dummy, and for crash tests being done in elaborate facilities that simulate crashes. It is because testing how a vehicle handles a crash in live (literally) situations is both ineffective and unethical.

The purpose of the system under test is to drive safely. If the simple safety override was triggered, then that was probably a failure of the autonomous driving system. Any case where it was not could be identified afterwards, by analysis.

Furthermore, when you have a complex system controlling potentially dangerous machinery, it is a sound engineering principle to include simple 'censors' to catch the rare corner cases that are a feature of complex systems, no matter how carefully designed and thoroughly tested. Nuclear systems have a number of independent safety systems that can scram the reactor, each in response to a single specific condition. Disabling them for a test is what happened at Chernobyl.


This system could have been used as a backup and almost certainly would have prevented this death. There is no logical explanation for defending their disabling of this production-grade system that we know works in favor of their alpha-grade-at-best system that they knew did not always work. Not having a backup system, other than a human (which is a poor backup system), seems grossly negligent and reckless. Why couldn't the existing system work as a backup for the new system? Until Uber provides a compelling reason, disabling the old system can only be viewed as knowing, reckless endangerment that led to a manslaughter that could have and should have been prevented.


The exact circumstances of the struck pedestrian likely violated many [possibly crude] Bayesian priors for what to consider a positive collision threat: a rogue pedestrian at an unflagged portion of road, at an odd time of night, on a road that doesn't usually have pedestrians, with a bike moving perpendicular across the road (instead of parallel along it). Each one of these cues had likely been learned to have a high ratio of false positives to true positives.

The product of these rarely encountered events (even treated as independent) allowed a higher-level algorithm to "score" the approaching, unverified object, through Bayesian inference, below the level of a reasonable expectation of a human collision.

In a way, this system should never be expected to become overly reckless or feckless as compared with even top human drivers in the long run: close calls of this nature should be input to the system (perhaps deliberately through staged QA?) and added to the model of collision threat identification.


I am rather confused:

Should the car always respond by slowing down as the mandatory response when there is an object in front of it?

Are you implying that the algorithm would conclude that, because it's unlikely for there to be an object in that circumstance, it is OK to discount its own observation that there is something in front of the car? That sounds utterly nonsensical.


Exactly. I'd expect collision-prevention to have absolute priority over any other system. It should not be possible for any logic bug or combination of environmental factors to make the car run into a large object in its field of vision. There's no need for statistics here--an overriding 'don't run into stuff' rule will suffice.

If there's truly no possible route to avoid some collision and therefore the car needs to decide which collision is preferred, then the statistics can kick in, but in this Uber scenario, it shouldn't have gotten that far. From the outside, it seems like a fundamental design issue.


> I'd expect collision-prevention to have absolute priority ..to [not] make the car run into a large object in its field of vision.

This is a good point.

But I think we've all placed our "toes over the curb" with a car hurtling towards us in preparation to cross the street as soon as the car has passed. No system can stop at each instance of this, nor can it recover past the point-of-no-return created each time this event occurs.

What constitutes "the curb" is subjective when it's not a raised sidewalk. Surely you don't have to be "on the grass" of a multilane high-speed road, but merely somewhere on the shoulder, to claim expected-pedestrian status. But that (may!) also imply you're an 'awaiting street crosser', not an 'engaged street crosser'. That's where the high-level stats come in.


The toes-over-the-curb thing is a potential collision threat. It's fine to statistically categorize and prioritize these, as they're everywhere. But as soon as it becomes an imminent threat (i.e. is directly in the vehicle's path), avoiding the collision should immediately become the overriding objective. If the collision is truly unavoidable (this should very rarely be the case with an AV), then it should still be slamming on the brakes and doing everything possible to lessen the force of the impact.


I believe that your reasoning, if adopted by self-driving car companies as a norm, will delay the progress exponentially.

Innovation is an uphill battle.


In many of the cities where Uber would like to deploy self-driving cars, a "rogue pedestrian" crossing where there is no marked crosswalk would be at least a multiple-times-per-day event. Any system trained to treat that as an ultra-rare anomaly would be a killing machine if unleashed in such a city.


As much as "rogue pedestrian" seems like blaming the victim, if you actually did go "rogue" at any truck stop in this country, I suspect you would not survive one night. Truck stops and Times Square are two ends of a spectrum of expectations of erratic pedestrian behavior.

I want the system to account for this difference (and expect it already does). And I also suspect the model that applies this pedestrian-expectation modelling is imperfect and could use improvement. I don't know how to make it perfect, or if that's even a reasonable goal.


That is the classic self-driving tradeoffs question, no?

I believe that, rather than absolutes, the issue here is that a serious, injury-causing crash threat was missed. Regardless of whether it was ID'd as human, it was still someone pushing a bike loaded with bags across a highway intersection, and that could have been detected. Even if it wasn't human, and even if it was a totally dangerous and improper crossing at night on a busy highway, the car should have stopped, if only for the driver's own safety.


I think that I am qualified to say that cost-effective testing is and will continue to be the barrier to self-driving car adoption.

A big part of the solution to this problem is design-for-test; and the adoption and tight integration of a wide range of (quite conventional but sadly often overlooked) testing and systems engineering standards and processes.


Anybody still want to defend the police's claim within a day of the accident putting the blame on the woman that got killed?


Yes. They said that "preliminarily" the Uber driver was not at fault. Judging from the footage, that was reasonable. You can't take the blame entirely off the woman; you can't just cross a road at night like that and expect to be noticed.

Accidents like that happen all the time, it's only because of the self-driving car that this is newsworthy.

Imagine if there wasn't any dashcam footage and this was a normal car with a distracted driver, without any other witnesses they would likely be off the hook.


A distracted driver has the option to tell the police they were distracted. Or are you of the opinion that since the woman was dead it didn't matter anymore?


First of all, in this scenario, the woman would have at least shared some of the blame.

Second of all, what are the odds that a human who just killed a woman would admit even to themselves that they were the one to blame in such an accident? Practically zero. Cognitive dissonance won't allow it. In their minds, they weren't distracted, at least not enough to take any of the blame.


> Second of all, what are the odds that a human who just killed a woman would admit even to themselves that they were the one to blame in such an accident? Practically zero.

As a paramedic who sees fatality accidents? Hardly. "Oh my god, I've killed someone!", or some paraphrase of it, is a fairly common statement.


"Oh my god, I accidentally killed someone" is something quite different from "Oh my god, my willful negligence alone has caused a fatality and I'm ready to take full responsibility".

While not entirely impossible that the latter happens, it's highly unlikely. What's much more likely is that these people are saying this because they want to hear "Oh, but it's not your fault" from somebody else.


> First of all, in this scenario, the woman would have at least shared some of the blame.

Possibly. But a distracted driver is definitely at fault, no matter what. Driving is a full time occupation.

> Second of all, what are the odds that a human who just killed a woman would admit even to themselves that they were the one to blame in such an accident?

Pretty good. Most people are honest, even in a bad situation.

> Practically zero.

You speak for yourself.

> Cognitive dissonance won't allow it. In their minds, they weren't distracted, at least not enough to take any of the blame.

Well, you probably have your reasons for writing this under a novelty account. But it's clear that when you are at fault it is best for everybody involved to recognize that fact, even if it means discomfort for you.

I get that in today's climate it's what you can get away with, but I don't think that strategy will hold in the longer term. It's why each and every lawsuit is appealed to the highest courts - because nobody bothers to just admit their errors - and it's why, when there is a car accident, the perpetrators will sue the victims. But that's wrong on many levels and I refuse to subscribe to that view.


>> Possibly. But a distracted driver is definitely at fault, no matter what.

Both parties are at fault, I'm not denying that. That's not the point.

>> Pretty good. Most people are honest, even in a bad situation.

What gives you that idea? People are dishonest all the time, especially to themselves. It's a basic coping strategy.

>> Well, you probably have your reasons for writing this under a novelty account.

Yes, it's that I'm saying "controversial" things that inevitably collide with the ideas of all the "righteous" people around here.

>> But it's clear that when you are at fault it is best for everybody involved to recognize that fact, even if it means discomfort for you.

That's cute and all, it's just not how human psychology works. There are endless examples of this, from the pettiest thieves to the biggest mass murderers.

>> I get that in todays climate it's what you can get away with, but I don't think that strategy will hold in the longer term.

If you sincerely don't believe you are at fault (even if that is objectively wrong) and there is no evidence to prove otherwise, you'll get away with it. That's how the legal system works, if it works. In fact, if you were to get convicted despite believing to be innocent, that's (psychologically) the worst thing that could happen to you.

On the other hand, if you happen to be the odd one out that seeks atonement for their sins, getting convicted may help you out. That's probably a pathology in and of itself though.

>> But that's wrong on many levels and I refuse to subscribe to that view.

But... that's entirely irrelevant. Have you considered that maybe your moral ideals are distorting your view of physical reality? Some sort of magical thinking towards justice?


> That's cute and all, it's just not how human psychology works. There are endless examples of this, from the pettiest thieves to the biggest mass murderers.

This argument fails on both moral and statistical grounds: most people are not petty thieves or mass murderers nor should we accept that "getting away with it" is an acceptable basis for human behavior.


I'm making a prediction on what's likely to happen based on well-researched human psychology. Again, the keyword is "cognitive dissonance". It encompasses all aspects of human life, the petty theft is just an example.

The moral dimension to this is entirely beside the point; the fact that you even bring it up tells me you are probably "suffering" from what is called the "Moralistic Fallacy".

I'm not making a moral argument or judgment here, but there's certainly the related question of whether someone who believes their own lies is truly a liar.


Uber's ultimately at fault because it was their car. But the woman driving should be arrested for vehicular manslaughter. Uber knew its cars weren't autonomous; that's why they had a driver to take over when the car screws up. But she was texting when she should've been ready to take control of the vehicle, and she hit a pedestrian that any driver who was paying attention would've easily avoided.


> Uber's ultimately at fault because it was their car.

Uber, while hiring, should've specifically looked for people who can pay attention for hours on end - basically like a train driver. Further, they should have had a system in place to review whether drivers actually pay attention.

She was employed by Uber. If you make a huge error while working for someone, often it's the employer who is responsible. There's way too much which should've been done which wasn't to just blame this driver.


Is this a binary decision? I normally slow down when I see something at the side of the road.


If I understand it right, this is the biggest blind spot of all the automated driving systems. The software just ignores stationary objects because the sensors don't have the resolution to determine whether they are in front of the car or on the side of the road. And if it didn't ignore stationary objects the car would come to a halt at every garbage can, mailbox, and bush along the road. Until they can work lidar into the system this will not be preventable. And, correct me if I'm wrong, but I think the vast majority of accidents that were the fault of the self-driving car happened because it hit something stationary in the road (or off to the side of the road, in one case I recall).

Since I've learned about this blind spot in these system I no longer trust them. I think everyone with a self-driving car should understand this. But maybe it's not in the interest of the manufacturer to explain it, or people would demand they fix the problem, which they cannot do just yet.


That wasn't the case at all for this collision. As the article says, the sensors had detected the woman but the information had been disregarded. And the moving-object filtering is specific to radar; vision systems don't suffer from it, though they have their own problems.


The article linked mentioned "B was the problem", meaning it ignored the sensor input. Another article(1) about it mentions that Uber had reduced the number of LIDAR sensors from five to just one rooftop sensor. The president of the company that builds the sensors even said you would need side sensors to see pedestrians, especially at night. But LIDAR is expensive. So it seems that, like Tesla, they were trying to rely on radar for that kind of thing. But that does leave the blind spot(2) I mentioned.

Given that I don't see how this case is so different.

Though I still don't understand how, specifically, that blind spot problem works. They say when the car in front of you changes lanes and a stopped truck is now in your lane you'll hit it. What if no car changed lanes in front of you? Would you still hit a stopped truck in your lane? The guy whose Tesla hit the bridge column seems to tell me yes, but I'm not sure.

1. https://www.theregister.co.uk/2018/03/28/uber_selfdriving_de...

2. https://www.wired.com/story/tesla-autopilot-why-crash-radar/


I haven't worked with automotive radars, but the way it works in aeronautical radar is that you apply frequency binning in the electronics and ignore returns that have the wrong Doppler shift at the waveform level, long before step A in the article. If you didn't do it this way, your computer would be overwhelmed considering every tree and house you can see, instead of only moving objects like airplanes.

It's the same principle as the way your eyes only report things that move over your retina as your eye moves, letting you filter out imperfections in the lens of your eye and your blind spot.

Similarly, I assume that the computers in a car would be overwhelmed by the task of taking the returns from every single post on a guardrail and sorting them into distinct objects before deciding to ignore them due to relative velocity, so I'm fairly sure that a similar mechanism is at work within the hardware of the radar unit in the Uber car.

So given that the woman showed up on Uber's sensors she would have been detected by the camera system or the lidar. And apparently she was detected.


Thanks for the explanation. I hadn't heard of frequency binning(1) before. It makes sense that stationary objects would be filtered out before they even reach the signal processing software.

But Uber did seem to be relying more heavily on radar than the other sensors. From an article I linked in another comment "the number of LIDAR sensors were reduced from five to just one – mounted on the roof – and in their place, the number of radar sensors was increased from seven to 10. Uber also reduced the number of cameras on the car from 20 to seven." And for LIDAR this "results in a blind spot low to the ground all around the car."

They had to know radar wouldn't detect stationary objects. So the signal processor should be prioritizing camera and LIDAR reports of anything stationary in the road. If it really was programmed to ignore such a signal it seems like gross malfeasance to me. If not, then the speculation in the linked article is incorrect and it's the same problem Tesla's system has.

1. https://www.eetimes.com/document.asp?doc_id=1278779


I thought people were saying in previous comment threads that Teslas ignore stationary objects because of too many false positives from roadside objects that are not in the road. The combination of software and sensors on Teslas is not enough to distinguish stationary objects in the path from stationary objects not in the path.

Based on the slim evidence of the Chinese accident video and the LA post accident photo, it looks like that's correct - http://www.dailymotion.com/video/x4tes8u

https://cdn.teslarati.com/wp-content/uploads/2018/01/tesla-c...


Wow. That Chinese accident video confirms my worst possible understanding of this. There are no special conditions. If a truck is parked in your lane you're going to hit it. More people should know about this. Maybe people don't want to believe it. I don't know. But I do know that I'm not going to trust any system that doesn't have multiple LIDAR sensors. Though it may take a while for the price on those to drop enough to make it affordable.


Tesla doesn't build self-driving cars. They have a very fancy adaptive cruise control and lane assist. Teslas don't have LIDAR. They cannot see stationary objects on the road. They are also prone to killing the driver, but usually nobody else. There is also no potential for improvement without hardware upgrades.

Sure. Uber's cars have been involved in a deadly collision but they don't suffer from the same problems as Tesla. The primary problem with Uber is that the company is acting recklessly to gain a competitive advantage. If they had multiple fail safe mechanisms then accidents like this could be prevented.


I haven't read much about the observation that driving is a form of communication: whenever you drive, you send and receive information to and from the environment and other humans, sometimes establishing something similar to a dialogue.

Doesn't this imply that autonomous driving won't be solved until we're able to build "true" personal assistants?

The messages in driving are simpler than in regular voice communication, but the objects and concepts visually present seem just as diverse.


Didn't that car also have a builtin automatic braking system that was disabled?


I think so. But I'm not sure why that would be a bad thing. The Uber built system was designed to fully replace such systems.


As long as Uber's system logged which braking system triggered - or simply noted when its own braking system had failed to step in - it could have saved a life without impacting the data gathered.

They didn't put a tarp over the windshield; why get rid of other proven safety features?


You don't generally want multiple autonomous control systems in a single vehicle competing with each other for control. There are scenarios where the less-sophisticated collision avoidance radar could do the unsafe thing where the self-driving system has a more holistic plan.

Basically, there's no reason the self-driving tech shouldn't have been way more effective than the collision avoidance radar other than Uber royally screwed up. The LIDAR and other sensors should have (and by this article, likely did) detected the woman long before the vehicle was in striking distance, much longer range than the forward radar.


That argument applies equally to human drivers competing for control of a vehicle with automated braking systems. Are there not also scenarios where the less-sophisticated collision avoidance radar could do the unsafe thing where the human has a more holistic plan? If not, why not?


> You don't generally want multiple autonomous control systems in a single vehicle competing with each other for control.

And yet, every modern car has two such systems, competing for control, all the time.

1. The car's safety systems.

2. The human driver.


> There are scenarios where the less-sophisticated collision avoidance radar could do the unsafe thing where the self-driving system has a more holistic plan.

Like, for example, this plan? https://www.youtube.com/watch?v=od5YG6kBrJs

I believe the whole thing will require better conceptualization of something that we call crew resource management for human teams. That is, how do you respond if you have conflicting information?


Hubris is the reason it is bad.


Sorry for the snark, but there's a joke in here about:

"Uber business development saw but ignored laws it broke."


Snark aside, false-positive rejection will be an aspect of any lawful autonomous vehicle.

Incorrect false-positive rejection will be a reality, and will have to exist within lawful operation.


There is without doubt a grotesque irony in this story. Uber stands as a perfect symbol of the reckless capitalism of the 21st century: invests hundreds of millions in building a self-driving monster in the shape of a 2 ton luxury SUV, then puts behind its wheel- as a "safety driver"- a former convict for attempted robbery. And the capitalism-fueled metal golem immediately crushes a homeless woman pushing a humble bicycle. I mean, it could have been written as a satire of our times. Except, it really happened.


> This puts the fault squarely on Uber’s doorstep, though there was never much reason to think it belonged anywhere else.

Are you kidding me? At least 80% of the comments I saw online the following week were blaming the woman.

It seems to me like Uber's self-driving system is absolute trash. But they were in a hurry to "go live" with it, because they need it so that the company can become profitable ASAP. And thanks to the deregulation bill from last year (which was almost universally praised online), they could actually do that, too.


It's still the biker's "fault" in some sense. That same video would have 100% exonerated a human driver. There's no way that was a safe crossing. The fact that there was a clear automation fault also doesn't change the fact that traffic safety is everyone's job, including but not exclusively the computers.


You mean the video that showed the safety driver staring at a cell phone for 6 full seconds right up until nearly the moment of impact? Maybe that would have exonerated the driver, but only to prove how perverse our driving laws are. But in my state the driver would have been convicted.


Do we know what she was looking at? I haven't read anything conclusive of that.

What would the driver have been convicted of? Would you be willing to get second opinions from prosecutors in your state as to whether or not they would have pursued a conviction for that crime?


It was pretty clear in the video.

I myself was charged for a much lesser distraction a while back involving a crash in which I was the only one at risk and for which the other driver shared equal blame. If I had run down a pedestrian I have no doubt the charges would have been severe.


If the video correctly portrayed the conditions, the driver would have been squarely on the hook for overdriving their headlights. Nope, the scapegoat camera doesn't help Uber in any conceivable scenario.


I totally agree. There's a PR cover up going on.


I'm wondering how much more attentive the emergency driver would have been if they were one of the developers. Sometimes I worry about how much trust non-developers assign to software (in all sorts of areas)


"Saw" doesn't seem like the right word. Granted, apparently, the sensors picked something up. What it failed to do is recognize and categorize; and from there properly take action.

Given the failure of the recognition + categorization system, it would be interesting to know what it thought it saw. Was it filed under IDK, a plastic bag blowing in the wind, what?


ISTM that HN policy would replace TFA with the link cited in TFA:

https://www.theinformation.com/articles/uber-finds-deadly-ac...

...although, there is a paywall, so maybe not.


Not just any paywall either. It's $40/month.


Um...if I as a human see a plastic bag on the road ahead, I slow down to be able to carefully ascertain if it perhaps is a plastic bag full of nails. I don't drive full tilt over it.


I wonder how this info got out. I thought (learned this from Tesla) that companies are not allowed to release info about accidents under NTSB investigation, which this one is.


Sounds like the departments are fighting over the responsibility of the accident. Correct me if I am wrong.


What's worse: that it saw her but misinterpreted the data, or that the sensors had a blind spot and missed something?


Ethical dilemma: should a vehicle swerve to avoid an unavoidable pedestrian, potentially killing more people, or should it ignore that person? (Not saying that's what happened.)

Should an autonomous vehicle also prioritise its own passengers over the safety of others, e.g. in a head-on collision with a people carrier?


It's dramatic, but it's also a good reminder that "modern", "recent" wonder tech doesn't come free. It needs proper examination. This will surely push toward more serious work.


> This puts the fault squarely on Uber’s doorstep, though there was never much reason to think it belonged anywhere else.

> This is not good.

The state of journalism saddens me.



