Hacker News

Tesla will have 100,000 cars being trained in shadow mode for a billion miles per year across the globe within a year. Tesla will compare quite favorably.


One can scale up the first 99% of an autonomous driving OS pretty fast using pattern recognition alone, but the devil is in the edge cases.

We haven't heard much from Google because they've been programming and validating all the unglamorous little problems that training data and a neural net can't solve alone: four-way stops, passing motorcyclists, and the like, where old-fashioned hand-coded features are necessary to address situations that require specific kinds of reasoning.

Tesla has to solve all this boring stuff as well before they can offer comparable capabilities in an AV.


> old-fashioned hand-coded features are necessary

I don't believe your hand-coded features will outperform ML trained over a billion miles, even on the long tail of rare events.

Sure, ML is bad at rare events, but hand-coded features also map poorly to them, because it's unlikely you'll be able to imagine them all, or even handle whole categories of rare events well.

Furthermore, I believe it would be possible to train an ML-based rare-event detector that simply distinguishes "regular driving I'm familiar with" from "something funny is going on; I haven't seen this much in the training dataset," and then refers to a remote human to resolve the issue.
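To make the idea concrete, here's a minimal sketch of such a detector: learn the statistics of "familiar" driving features online, then flag anything far outside them for escalation. The feature values, class name, and z-score threshold are all hypothetical placeholders; a real system would use a learned density model over high-dimensional perception features, not a single scalar.

```python
# Hedged sketch of a rare-event ("novelty") detector: flag inputs that
# look unlike the familiar driving distribution and escalate to a human.
import math

class NoveltyDetector:
    def __init__(self, threshold=3.0):
        self.threshold = threshold  # z-score cutoff for "unfamiliar"
        self.n = 0
        self.mean = 0.0
        self.m2 = 0.0  # running sum of squared deviations (Welford's algorithm)

    def fit_one(self, x):
        # Online update of mean/variance over familiar driving features.
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)

    def is_novel(self, x):
        if self.n < 2:
            return True  # nothing learned yet: escalate by default
        std = math.sqrt(self.m2 / (self.n - 1))
        if std == 0:
            return x != self.mean
        return abs(x - self.mean) / std > self.threshold

detector = NoveltyDetector()
for sample in [10.0, 10.5, 9.8, 10.2, 10.1, 9.9]:  # "regular driving"
    detector.fit_one(sample)

print(detector.is_novel(10.0))  # within the familiar range -> False
print(detector.is_novel(42.0))  # far outside the training data -> True
```

The interesting design question is the hand-off: the detector only has to be good at knowing what it doesn't know, which is an easier problem than driving through the rare event itself.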


> and then refer to a remote human to resolve the issue

Successful remote control is a whole different project, very different from self-driving. To give the remote pilot enough situational awareness, you need plenty of streamed imagery (recreated or recorded), and you need a feedback loop with very low latency. I'm not saying it couldn't be done, but it won't happen "organically" as a side effect of cars becoming self-driving.


That's the benefit of having 100,000 cars with all the sensors while humans handle all of the edge cases. Tesla can collect data from every case where there was a deviation between what the car calculated it should do and what the driver actually did.
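The shadow-mode collection described above can be sketched in a few lines: the planner runs silently alongside the human, and only the frames where the two diverge get kept for training. The function name, frame format, and the 0.2 divergence threshold are illustrative assumptions, not Tesla's actual pipeline.

```python
# Hedged sketch of shadow-mode data collection: keep only the frames
# where the silent planner and the human driver disagree.

def shadow_mode_filter(frames, threshold=0.2):
    """Keep frames where planner and driver diverge beyond a threshold.

    Each frame is (sensor_snapshot, planned_steering, actual_steering),
    with steering in normalized units in [-1, 1].
    """
    interesting = []
    for sensors, planned, actual in frames:
        if abs(planned - actual) > threshold:
            interesting.append((sensors, planned, actual))
    return interesting

frames = [
    ("highway_clip_1", 0.02, 0.03),   # agreement: discard
    ("pothole_clip",   0.00, -0.45),  # driver swerved: keep
    ("merge_clip",     0.10, 0.55),   # driver yielded differently: keep
]
print(shadow_mode_filter(frames))  # only the two disagreement clips survive
```

The point of the filter is fleet economics: with 100,000 cars you can't upload everything, so you upload only the disagreements, which are exactly the edge cases the model is worst at.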

I don't think hand-coded features are necessary. No one hand-codes humans to drive.


And how many of those miles are in conditions like:

* that road with the one-foot deep pothole (it's more like a sinkhole at that point, though) in the middle of a lane

* the perpetual construction zone where there are five layers of lane markers, only one of which is the actual layer of lane markers

* the 10% grade two-lane winding road in the dark with 2 inches of snow on the road and more coming

* open highway with a constant 30mph crosswind

* dense fog with less than 100m visibility

* torrential downpour with so little visibility you can't even see the lane markings on the road right in front of you

Those are all conditions I've driven in, and I can think of several other horrible driving conditions that I haven't yet had the (mis)fortune to drive in. Just because you can drive well in good conditions doesn't mean you can drive well when things get tougher--as the driving records of many human beings can well attest.


There are some conditions in which a human shouldn't be driving, for example during a hurricane or while intoxicated.

We can likewise say there are some situations a machine shouldn't be driving.

As long as the machine can identify when it is incapable of driving safely, it's no different from a human pulling over for a nap because they can't stay alert.

Sure, if you ran a taxi service with such a system, you'd have to send a regular taxi to pick up the customer, but that's no different from any other kind of breakdown, like a puncture.


This is why Tesla's strategy is better than Google's. Tesla will get the data on those situations, because their data comes from every driver of their cars. Google gets data on Mountain View and downtown Austin.


That's a prediction, not a comparison.


That's a super conservative prediction, which assumes zero growth from current sales numbers. Tesla has historically grown at about 50% per year.


I'm not talking about the number of cars sold; I'm talking about that translating into self-driving capabilities. In that sense, it is a very aggressive prediction, given that Google has been doing fully autonomous, no-backup-driver testing on public roads for over a year.

So you can say that Tesla is at least a year behind. More like 4-5.


Tesla is definitely behind Google now, but I'm saying their trajectory is orders of magnitude better, because they are collecting real-world training data much faster. Humans demonstrate the minimal set of sensors needed for driving, and Tesla has a superset of that being delivered to customers today, which is being used to train their model. They get a lifetime of real-world driving experience on those sensors every day.


Your prediction is only as good as your assumptions: that Tesla can achieve human-like levels of intelligence in under 10 years.

That is extremely aggressive. Getting data is the easy part of self-driving.


Surely they will make all that data available to OpenAI to help democratize the future of AI?



