
This is incorrect. The hardest part of developing a self-driving car is predicting the world around you in the immediate future. Knowing whether that object is a person is a lot easier than guessing whether that person is going to jump out in front of the car one second from now. You have to know who is going to run a stop sign, when a cyclist is about to cut you off, when someone is about to back into a parking spot.

I don't know whether or not AGI needs to be developed to make a useful self-driving car, but as time goes on I'm beginning to believe that's the case.



This is incorrect.

Predicting motion once you have small time slices and a very accurate 3D representation is easy: you can straightforwardly extrapolate expected paths. You have to remember that a computer sees the entire situation at once. A bike doesn't cut off a self-driving car the way it cuts off a human. Humans are slow; our increments of time are in the hundreds of milliseconds, and we can only focus on a couple of things at a time. A computer will notice a slight change in velocity or acceleration within single-digit milliseconds, and then it just has to estimate the probability of collision. These calculations are simple.
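To make that concrete, here's a toy sketch (Python, with made-up positions and velocities; the function names and thresholds are my own illustration, not anyone's real stack) of the kind of constant-acceleration extrapolation and collision check I mean:

    import numpy as np

    def predict_path(pos, vel, acc, horizon=1.0, dt=0.01):
        """Extrapolate a tracked object's 2D path under a
        constant-acceleration model: x(t) = x0 + v*t + 0.5*a*t^2."""
        ts = np.arange(dt, horizon + dt, dt)[:, None]  # time steps, shape (N, 1)
        return pos + vel * ts + 0.5 * acc * ts**2      # positions, shape (N, 2)

    def min_separation(ego_pos, ego_vel, obj_pos, obj_vel, obj_acc):
        """Closest predicted distance between ego vehicle and object
        over the next second; a crude proxy for collision risk."""
        ego_path = predict_path(ego_pos, ego_vel, np.zeros(2))
        obj_path = predict_path(obj_pos, obj_vel, obj_acc)
        return float(np.min(np.linalg.norm(ego_path - obj_path, axis=1)))

    # Cyclist 10 m ahead, drifting toward the ego lane at 1.5 m/s^2
    gap = min_separation(
        ego_pos=np.array([0.0, 0.0]), ego_vel=np.array([10.0, 0.0]),
        obj_pos=np.array([10.0, 3.0]), obj_vel=np.array([6.0, -0.5]),
        obj_acc=np.array([0.0, -1.5]),
    )
    print(f"minimum predicted separation: {gap:.2f} m")

Once the tracker hands you clean position/velocity/acceleration estimates, this is all cheap arithmetic you can rerun every few milliseconds.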

Deciding what to do in these situations can be efficiently hard-coded using decision trees. Nobody working on self-driving cars right now dares to use a neural network or any other unexplainable, unbounded ML algorithm for policy. You have to be able to hard-code new edge cases as they emerge. You have to be able to study specific crashes or incidents and then adjust the decision-making scheme to specifically avoid that situation in the future.
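Roughly what a hand-written policy looks like (a toy illustration; the rules, types, and thresholds are invented for the example):

    from dataclasses import dataclass
    from enum import Enum

    class Action(Enum):
        PROCEED = "proceed"
        SLOW = "slow"
        BRAKE = "brake"

    @dataclass
    class Situation:
        min_separation_m: float      # closest predicted gap to any object
        time_to_collision_s: float
        pedestrian_near_lane: bool

    def policy(s: Situation) -> Action:
        """Hand-written, auditable rules. Each branch can be traced back
        to a requirement or a specific incident review, and new edge
        cases get added as explicit branches."""
        if s.time_to_collision_s < 1.5:
            return Action.BRAKE      # imminent collision: hard stop
        if s.pedestrian_near_lane or s.min_separation_m < 2.0:
            return Action.SLOW       # uncertain actor nearby: reduce speed
        return Action.PROCEED

    print(policy(Situation(min_separation_m=1.2,
                           time_to_collision_s=4.0,
                           pedestrian_near_lane=False)))  # Action.SLOW

The point isn't that these two rules are enough; it's that every behavior is an explicit, inspectable branch you can point to after an incident.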

Truly, the hardest problem is taking in data from multiple sensors, segmenting it, and then labeling it, all in real time. The sensors are faulty and super expensive, and there are so many different objects out there. If you actually look at the ancillary startups in this industry, they're not working on "common-sense" general-intelligence algorithms. They're working to make better & cheaper lidar. They're working on computer vision problems. They're working on image segmentation.


You're focusing on the wrong part of the problem. You're thinking of everything as a giant physics simulation, and completely ignoring the hardest part: humans.

Let's say you're driving through an intersection with a green light, and there's a pedestrian waiting to cross. The robot has the right of way and goes, but suddenly the pedestrian decides to cross in front of the vehicle. Even if the reaction time were 0.00 seconds, it's too late to avoid a collision. The problem is that the robot didn't anticipate that the pedestrian would cross despite not having the right of way. Humans are better at reading social cues than robots. Maybe robots can learn that, but it's a significantly harder problem than path planning or image segmentation. And this goes beyond pedestrians: you also have to predict the behavior of other drivers on the road. If you try to drive cautiously enough to avoid this potential scenario, you effectively stop and crawl every time you see a pedestrian, and you're not very useful for getting from point A to point B (not to mention all the pissed-off traffic behind you).

The reason it's difficult is that it's an uncontrolled environment, and the robot has to be able to anticipate what other drivers, cyclists, and pedestrians will do. Robots have done wonders in controlled environments, but bringing them into the real world has always been a struggle.


I doubt that most human drivers are good enough to avoid a collision in that situation. You can always come up with a scenario that will fool a computer; you can just as easily come up with one that will fool a human driver.

The standard isn't "perfect under all conditions", it's "better than a human". Humans are, honestly, pretty bad at driving. The bar is not that high, perhaps unfortunately.


> Let's say you're driving through an intersection with a green light, and there's a pedestrian waiting to cross. The robot has the right of way and goes, but suddenly the pedestrian decides to cross in front of the vehicle. Even if the reaction time were 0.00 seconds, it's too late to avoid a collision.

Why does a robot driver need to anticipate this? Does a human driver need to?


Er, yes? Remind me not to walk close to your car :P


As both a pedestrian and a driver, I certainly have to read social cues.

If I'm walking up to a pedestrian crossing and a car is approaching, I don't just step out into the road, even though I have the right of way. I try to make eye contact with the driver to see if they recognize I'm crossing. They'll often nod or do something similar to signal that they're letting me cross.

A machine has to understand these social cues as well. It might even be helpful if the machine has a way to signal its intentions back to pedestrians.


You can probabilistically predict those events with machines much better than humans can. You don't really know someone will decide to run a stop sign, but you do know when the vehicle is past the point where it should have started slowing down. That's relatively "easy"; we were predicting the motion of physical objects even with analog computers. As the parent says, getting accurate data from the sensors is a much bigger problem. But once you have the data, you can model these objects with dumb algorithms.
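A back-of-the-envelope version of that check, assuming tracking already gives you the other vehicle's speed and distance to the stop line (the braking limit here is just an illustrative number):

    def likely_to_run_stop(speed_mps: float, dist_to_line_m: float,
                           max_comfortable_decel: float = 3.0) -> bool:
        """Flag a vehicle whose required deceleration to stop at the
        line (v^2 / 2d) already exceeds normal braking behavior."""
        if dist_to_line_m <= 0:           # already past the line
            return speed_mps > 0.5
        required_decel = speed_mps**2 / (2 * dist_to_line_m)
        return required_decel > max_comfortable_decel

    # 13.4 m/s (~30 mph) with only 10 m left needs ~9 m/s^2, so flag it
    print(likely_to_run_stop(13.4, 10.0))  # True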

Computers can also have a much faster reaction time, so a human may need to predict one second ahead, but computers may be able to get away with less.


> You can probabilistically predict those events with machines much better than humans can.

This is an assumption and has not been shown to be correct, or even probable.


I question the claim that you need to predict the immediate future. Human reaction times are pretty slow: by the time our foot hits the pedal, the light that triggered it hit our eyes up to two seconds earlier. Something that is prediction for us could be reaction for a machine. A human has to live two seconds in the future because our appendages and lower faculties lag behind in the past.
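To put rough numbers on that (a quick worked example; the speeds and reaction times are illustrative, not measured):

    # Distance covered before any braking begins: d = v * t_react
    speed_kmh = 100
    speed_mps = speed_kmh / 3.6            # ~27.8 m/s

    for t_react in (2.0, 0.1):             # human vs. hypothetical machine
        print(f"reaction time {t_react:>4} s -> "
              f"{speed_mps * t_react:5.1f} m traveled before braking")
    # reaction time  2.0 s ->  55.6 m traveled before braking
    # reaction time  0.1 s ->   2.8 m traveled before braking

That ~50 m gap is the part of the problem a human can only cover by predicting; a machine can cover most of it just by reacting.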


> I question the claim that you need to predict the immediate future.

Alternatively, you could develop braking technology that gives vehicles a stopping distance of 0 m, but that might be a bigger technological advance than full self-driving AI, and I'm not sure it would be that comfortable for the passengers...


For me, the definitive proof that we won't have self-driving anytime soon is the massive failure of the recent chatbot fad.

One key part of driving is communicating - with pedestrians, cyclists, other drivers. This happens through body language and other fairly subtle cues.

When you can't make AI work for responding to text questions in an extremely limited problem domain, how on earth would it work for something that's orders of magnitude broader and less well defined?



