
It seems it was actually Uber that disabled the safety features built into the Volvo to prevent exactly this, according to the supplier of the vehicle's radar and camera: "Uber Technologies Inc. disabled the standard collision-avoidance technology in the Volvo SUV that struck and killed a woman in Arizona last week, according to the auto-parts maker that supplied the vehicle’s radar and camera."

https://www.bloomberg.com/news/articles/2018-03-26/uber-disa...



This whole line of reasoning frustrates me somewhat. The story of "Uber Disabled the Safety System" paints Uber as negligent for doing so. Setting aside all the other facts (which, unlike this one, do indeed suggest Uber is to blame), this element of the story is not in itself indicative of negligence. Of course they disabled the existing safety system: they were aiming to build a new, safer one that could operate without relying on, or interacting with, the existing one. That (again, on its own) seems like a totally reasonable engineering call.


They are absolutely negligent for doing so. I wouldn't have made that decision, because in a safety-critical system it's ALWAYS preferable to have a backup system, especially when the system you're working with is unproven software that you don't fully understand. It doesn't matter that the old system would have interfered with the new system, and in fact if the two systems did interfere it would behoove you to understand why. The decision speaks volumes about their engineering culture leading up to this incident.

You can't just call something a safety system. You have to prove that it is a safety system by testing it, which is something that Uber hadn't done before they disabled the existing system.


So I agree that the system was insufficiently tested to be operating on public roads and endangering lives, and that lack of testing was negligent; I also agree with your comment about their engineering culture.

But those two points seem distinct from the idea of disabling a system in order to have a better understanding of what's going on. Suppose they had built a car that was safe, conditioned on the presence of a black-box system whose internals they likely didn't have access to - would that be satisfactory?


Even with the red herring about the safety overrides being a black box, yes, it would be satisfactory - see my other post. Not only would the triggering of the safety override generally indicate a failure of the autonomous system, the use of failsafe overrides to catch corner cases should be a feature of the final system.

If Uber could demonstrate, through the analysis of a statistically significant number of events, that its system was actually safer without the car manufacturer's override (e.g. if all the events were false positives), then it would be appropriate to disable it at that time. That's how you do it.
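
(To make that concrete, here is a minimal, hypothetical sketch of the kind of analysis being described - a confidence bound on how often the factory override would have intervened for a genuine hazard. The event counts, the threshold, and the function names are invented for illustration, not anything Uber actually logged.)

    import math

    def wilson_upper_bound(true_hazards, total_events, z=1.96):
        """Upper 95% confidence bound on the rate of genuine hazards
        among logged override activations (Wilson score interval)."""
        if total_events == 0:
            return 1.0  # no data: assume the worst
        p = true_hazards / total_events
        denom = 1 + z * z / total_events
        centre = p + z * z / (2 * total_events)
        margin = z * math.sqrt(p * (1 - p) / total_events
                               + z * z / (4 * total_events ** 2))
        return (centre + margin) / denom

    # Hypothetical log: 412 override activations, 3 judged genuine hazards.
    upper = wilson_upper_bound(true_hazards=3, total_events=412)

    # Only consider removing the factory override if, even at the upper
    # bound, genuine interventions are vanishingly rare.
    MAX_ACCEPTABLE_HAZARD_RATE = 0.001  # hypothetical policy threshold
    print(f"upper bound on genuine-hazard rate: {upper:.4f}")
    print("safe to disable" if upper < MAX_ACCEPTABLE_HAZARD_RATE
          else "keep the override enabled")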


Replying to your other comment here as well - the inclusion of conceptually simple safety mechanisms is important (i.e., I agree), as is the broader scheme of including both hardware and algorithmic redundancy to improve safety. I also agree that "live" initial testing of such safeguards is inappropriate, and, as above, Uber clearly failed to do appropriate testing.

However, you describe the (potential) black-box nature of the existing system as a red herring - to be honest, this is what I'm most interested in. My opinion is that including a black-box component in a safety-critical system would be inappropriate. Do you disagree with that? If your answer is "probe it until it's no longer a black box and then include it", would you not consider that to be semantically equivalent, overall, to "don't include a black box"?


It is a red herring because:

Firstly, it assumes Volvo is not sharing the parameters of the system. It seems unlikely that Uber is installing an automated driving system into these cars without the cooperation of Volvo, especially with the agreement to ultimately get 24,000 autonomous-system-equipped cars from them.

Secondly, if Uber could instead determine what it wants to know about the parameters by testing, then the question is irrelevant, as are the semantics.

Thirdly, it is presumably safe for humans to drive cars without knowing the exact parameters, so this should not present any particular problem for the autonomous system - if the emergency brakes are triggered, it is likely to be a situation in which that is the right thing to happen, and possibly a result of the autonomous system failing. Just as with human drivers, an autonomous system is expected to usually stay within the parameters of the emergency system, without reference to those parameters, simply by driving correctly. For example, if the emergency brakes come on to stop the car from hitting a pedestrian because the autonomous system failed to correctly identify the danger, what difference would it have made if the system had known the exact parameters of the emergency braking system?

Lastly, the road is an environment with a lot of uncertainty and unpredictability. If the system is so fragile that the tiny amount of uncertainty of not knowing the exact parameters of the automatic braking system raises safety concerns, then it is nowhere near being a system with the flexibility to drive safely.

It is possible that a competent autonomous driving system might supplant the current emergency braking system, in which case the way to proceed is to demonstrate it in the way I outlined in the last paragraph of my previous post.


Thanks for answering in so much detail - I think the last two points make a compelling case for not disabling the system, even in the true black box case, and the first two are very compelling in the real world, even if they don't apply to the thought experiment of an actual black box. You've broadly changed my mind on this issue :)


I should have said that your concern is valid where two systems might issue conflicting commands that could create a dangerous situation; I just don't see that as likely in this particular combination of systems.


It's perfectly appropriate when both systems are intended to enhance safety. It does not matter how the internals work, just that the system enhances the safety of your solution. It could be a webcam beaming images back to a bunch of Mechanical Turk operators and it wouldn't matter as long as it was proven to work.

It's not like Software of Unknown Provenance (SOUP), which runs in the same execution environment and over which you have no control. This was a completely separate system, with independent sensors, that was marketed to stop the car in exactly the situation the car failed to stop in. Disabling it was foolhardy.


Seriously!?

The responsible position would be to retain the existing system in a passive standby mode, capable of overriding the active 'under test' one in the event certain safety/detection thresholds were exceeded.
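
(A purely illustrative sketch of that arrangement, assuming the two systems' braking requests could be compared at a single arbitration point; the signal names and the 6 m/s² threshold are invented, not anything Volvo or Uber actually expose.)

    from dataclasses import dataclass

    @dataclass
    class BrakeCommand:
        deceleration: float  # requested deceleration in m/s^2, >= 0

    # Hypothetical threshold above which the factory system considers a
    # collision imminent and is allowed to override the system under test.
    LEGACY_OVERRIDE_THRESHOLD = 6.0  # m/s^2

    def arbitrate(test_cmd: BrakeCommand, legacy_cmd: BrakeCommand) -> BrakeCommand:
        """The system under test drives normally; the factory collision
        avoidance sits in passive standby and only wins the arbitration
        when it demands hard emergency braking."""
        if legacy_cmd.deceleration >= LEGACY_OVERRIDE_THRESHOLD:
            # Fail safe: take the stronger of the two braking requests.
            return BrakeCommand(max(legacy_cmd.deceleration, test_cmd.deceleration))
        return test_cmd

    # Example: the test system sees nothing, the legacy radar demands 7 m/s^2.
    print(arbitrate(BrakeCommand(0.0), BrakeCommand(7.0)))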


IIRC someone said that there are physical limitations on what one can actually do (namely, you have to physically unplug the connector for one safety system in order to add your own at all).


Sure, if you want to move fast and disrupt yada yada. They shouldn't be putting hack-job self-driving cars on public roads. Either do some engineering so that they keep the backup system in place or keep it on a test track. This is Chernobyl-like irresponsibility.


Are you an engineer or at least familiar with vehicle engineering? Do you know for a fact that Waymo/others don't disable built-in safety mechanisms in similar fashion? If not, perhaps it may be best to not pass armchair judgment...


As the vehicle can obviously be operated with the emergency braking system in place, it seems highly implausible that an autonomous system could not be designed to work with it enabled. If Uber did not do so merely because it would have been inconvenient, or would have taken more time or effort, then that would be even more damning than a bad but well-intentioned engineering decision.


This argument frustrates me more than somewhat, especially if it is what Uber's engineers were thinking when they removed the last safeguard that could have avoided this fatal accident.

There is a reason for there being such a thing as a crash-test dummy, and for crash tests being done in elaborate facilities that simulate crashes. It is because testing how a vehicle handles a crash in live (literally) situations is both ineffective and unethical.

The purpose of the system under test is to drive safely. If the simple safety override was triggered, then that was probably a failure of the autonomous driving system. Any case where it was not could be identified afterwards, by analysis.

Furthermore, when you have a complex system controlling potentially dangerous machinery, it is a sound engineering principle to include simple 'censors' to catch the rare corner cases that are a feature of complex systems, no matter how carefully designed and thoroughly tested. Nuclear systems have a number of independent safety systems that can scram the reactor, each in response to a single specific condition. Disabling them for a test is what happened at Chernobyl.
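
(The "one condition per safeguard" pattern described above might be sketched like this - the conditions and limits are hypothetical, and each check deliberately knows nothing about the main controller or the other checks.)

    from typing import Callable, Dict

    # Each safeguard watches exactly one condition, independently of the
    # complex control system and of the other safeguards.
    Safeguard = Callable[[Dict[str, float]], bool]

    def over_temperature(state: Dict[str, float]) -> bool:
        return state["coolant_temp_c"] > 350.0

    def over_pressure(state: Dict[str, float]) -> bool:
        return state["pressure_kpa"] > 16000.0

    def low_coolant_flow(state: Dict[str, float]) -> bool:
        return state["flow_lps"] < 20.0

    SAFEGUARDS = [over_temperature, over_pressure, low_coolant_flow]

    def should_trip(state: Dict[str, float]) -> bool:
        """Any single safeguard tripping is enough to shut the system down,
        regardless of what the (complex) main controller believes."""
        return any(check(state) for check in SAFEGUARDS)

    print(should_trip({"coolant_temp_c": 360.0, "pressure_kpa": 15000.0, "flow_lps": 30.0}))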


This system could have been used as a backup and almost certainly would have prevented this death. There is no logical explanation for defending the decision to disable a production-grade system that we know works in favor of an alpha-grade (at best) system that they knew did not always work. Not having any backup system other than a human - and a human is a poor backup - seems grossly negligent and reckless. Why couldn't the existing system work as a backup for the new one? Until Uber provides a compelling reason, disabling the old system can only be viewed as knowing, reckless endangerment leading to manslaughter - a death that could have and should have been prevented.



