
No. Humans average something like 250,000 miles per insurance-worthy event. Even if we assumed an average speed of 80 mph, which is absurdly high, that is ~3,000 hours between accidents. At ~5 hours of driving per round trip, that system would need to make the intervention-free round-trip commute 600 times, 2.5 work years, without a single minor accident for the unassisted system to even be comparable to the average driver.
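
For anyone who wants to check the arithmetic, here it is as a quick Python sketch (the ~5-hour round trip is my own assumption, backed out of the 600-commute figure):

    # Hours between insurance-worthy events for an average human driver
    miles_per_event = 250_000        # claimed average miles per insurance-worthy event
    avg_speed_mph   = 80             # deliberately generous average speed
    hours_between   = miles_per_event / avg_speed_mph  # ~3,125 hours

    commute_hours   = 5              # assumed length of one round-trip commute
    commutes_needed = hours_between / commute_hours    # ~625 commutes
    work_years      = commutes_needed / 250            # ~2.5 years at ~250 commutes/year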


Why are you comparing it to a human driver?

Compare it to other self-driving cars and you’ll see that it is actually one of the better systems on the market.


Because people buy this feature precisely because they don't want to drive (as a human); what does it matter how it compares to other self-driving systems?


Because it undermines the argument made in the "criminal probe". Tesla is doing self-driving as well as it can possibly be done with today's technology. If it could be done better, it would be, by one of their competitors, but they can't beat it.

It is in fact so good that people can sleep in their cars and not crash -- and I wouldn't bet that the guy who sleeps in the car only did it once!

If you are going to compare it to human drivers, it also matters which humans you compare it to. Every time I drive I see people doing things far stupider than what a Tesla would do. Driving skill and judgment are so unevenly distributed in the human population that it would not be difficult to find a large number of humans who underperform the Tesla system.

It would be weird to punish Tesla for not being perfect. It has delivered self-driving features that are already better than a lot of the humans on the roads today, and better than all of its competitors.

Let's go a bit deeper into your argument:

> Because people buy this feature precisely because they don't want to drive (as a human); what does it matter how it compares to other self-driving systems?

Ok, pretend you are someone who is looking to buy a self-driving car because you don't want to do the driving. Wouldn't you want to buy one that works well? It's a matter of life and death, after all. So you would do some research to find out who had the best systems. If you did that you would find out that Tesla's system, despite all the criticism it gets, is one of the few on the market that is worth buying.


> It is in fact so good that people can sleep in their cars and not crash -- and I wouldn't bet that the guy who sleeps in the car only did it once!

This thinking is absolutely going to get someone killed. It’s also why Tesla faces so much criticism.

Any improvement under 99.999% is unremarkable because the end product is unusable. No one is going to buy a car that crashes once a year.

Disengagements should be unheard of if Elon wants to continue selling FSD.


Any improvement _over_ 99.999% is impossible. Saying something like this is completely detached from reality. People _do_ buy cars that crash once a year. All the time. And often the defects in the system aren't even related to self driving functionality. Further, most drivers who get in a car have records far worse than that.


No, on basically all accounts. In the US there is a fatality every 75,000,000 miles, an injury every 1,250,000 miles, and a reported accident every 550,000 miles on average [1]. From a "reliability" perspective, we can reasonably assume that without control over your car you would crash within a minute, so the minute is a sensible unit of exposure. The average driver probably averages ~40 mph while moving. So, on a minute basis that is one fatality per ~112,500,000 minutes (99.999999%, 8 9s), an injury every 1,875,000 minutes (99.99995%, 6 9s), and an accident every 825,000 minutes (99.99988%, 5 9s).

Even the worst component of the driving system (the driver) is solidly over 99.999%. And every single hardware component is vastly better than the driver. The Pinto, a classic example of a death trap, only resulted in a hardware-induced fatality something like every 1,000,000,000 miles, or, using the minute basis above, a fatality every 1,500,000,000 minutes (99.9999999%, 9 9s). The person you responded to is correct: 99.999% is unusably, criminally bad.
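
If you want to check those nines yourself, here is the conversion as a short Python sketch (assuming the same ~40 mph average as above):

    # Convert per-mile event rates into per-minute reliability ("nines"),
    # assuming an average speed of ~40 mph
    AVG_MPH = 40

    def per_minute_reliability(miles_per_event):
        minutes = miles_per_event / AVG_MPH * 60
        return minutes, 1 - 1 / minutes

    for label, miles in [("fatality", 75_000_000),
                         ("injury", 1_250_000),
                         ("reported accident", 550_000),
                         ("Pinto hardware fatality", 1_000_000_000)]:
        minutes, r = per_minute_reliability(miles)
        print(f"{label}: one per {minutes:,.0f} minutes, {r:.7%} reliable")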

[1] https://cdan.nhtsa.gov/tsftables/National%20Statistics.pdf


>Tesla is doing self-driving as well as it can possibly be done with today's technology. If it could be done better, it would be, by one of their competitors, but they can't beat it.

This comment is everything that's wrong with FSD and its marketing.

The bar isn't what self-driving cars in general can do; it's whether the system is safe enough for public roads (and that's besides the fact that you completely ignore that other companies are operating driverless taxis as we speak).

>It's a matter of life and death, after all.

Consider this: it's not just life or death for you. Do you understand that bystanders and other drivers don't want to be part of your experiment? The selfishness is unreal.


If you want FSD to improve then you need to allow experimentation in the real world. That means accepting risk. Our governments have wisely decided to do that and most people don’t seem to be worked up about it.

If you want to ban FSD, then go ahead and vote for a ban. But it’s not realistic to say that FSD is allowed only when it’s perfect — no technology reaches perfection without first passing through a phase of imperfection.


What I want from full self driving is something at least close to hiring a professional chauffeur. I’ve had to do this a few times in Europe. The driver pulls up in an immaculate Mercedes sedan, he drives precisely and carefully with an expert understanding of local traffic and road conditions. This is something I’ve also experienced in Tokyo in every taxi I’ve hailed on the street (except for the brand of vehicle).

Out of politeness I engage in chit-chat with the drivers, but I would be completely comfortable reading Hacker News in the back seat or even sleeping. I hope self-driving reaches this level of ability soon; I'm going to be too old to drive in 15 years.

To get a perspective on what this requires, reflect carefully on what your brain is attending to while you drive. Does it ever require higher levels of reasoning than pattern matching? It does for me. I regularly encounter issues that would confuse current technology: bad fog, reckless drivers, icing on overpasses, drivers going the wrong way, a marathon blocking my route, boisterous junior high-school students walking along the curb, detours, temporary lanes inconsistent with actual markings, broken traffic lights, and damaged or vandalized traffic signs. I could sleep soundly while being driven by a professional driver under such circumstances.


> Compare it to other self-driving cars

No one else sells self-driving cars to consumers because they know they cannot.

"Being unable to do something and not saying you can do it"

"Being unable to do something and saying you can do it"

Spot the difference?


As someone who could potentially be killed by one of Elon's homicidemobiles, I would very much prefer if vehicles that are more deadly than those driven by humans would not drive in places where they could potentially kill me.


You’re a lot more likely to be killed by a young idiot in a Dodge Charger.


Are we really going there?

About 40,000 people die per year in the US from traffic accidents. There are about 200,000,000 people licensed to drive in the US. So, about 1 in 5,000 drivers die per year. Tesla has released FSD Beta to ~160,000 drivers. So, to first order, we can estimate that ~32 Tesla drivers per year will die in traffic accidents if they constitute a representative sample (they do not, but we are doing some quick estimates here). So, a system used as a full-time, unassisted replacement for those human drivers will result in, on average, one extra death per year for every 3% worse it is than a human driver, and that is when applied to a mere 0.1% of the US driving population. Applied over the entire US driving population, a system that is 3% worse would result in ~1,200 extra people dying per year.
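
In code, with the same rough inputs (the 3% is the hypothetical from above, not a measured number):

    us_deaths_per_year = 40_000
    licensed_drivers   = 200_000_000
    fsd_drivers        = 160_000

    baseline = us_deaths_per_year / licensed_drivers  # 1 in 5,000 drivers per year
    expected_fsd_deaths = fsd_drivers * baseline      # ~32 deaths per year

    worse = 0.03                                      # hypothetical: system 3% worse than humans
    extra_fsd_deaths = expected_fsd_deaths * worse    # ~1 extra death per year
    extra_nationwide = us_deaths_per_year * worse     # ~1,200 extra deaths per year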

At the scales being discussed, extremely small differences applied over the entire domain result in very serious risks to human life. It is not okay to knowingly sacrifice people at the altar of moving fast and breaking things so that Elon Musk can make another couple billion dollars.

Also, it is not even close to being one of the better systems from a safety perspective. It is something like 400-800x worse than Waymo and 2,000-4,000x worse than Cruise [1]. It is so far behind so many other companies it is fair to say that they are not even in the race let alone a frontrunner.

[1] https://news.ycombinator.com/item?id=33227450


Though to be fair, the faster the system crosses the 3% mark, the faster that 40,000 number can come down.


They are not within 3% or even close. They are 2,000x worse than Cruise, which is itself still at least 8x worse than human drivers. Being generous, they are 16,000x (1,600,000%) worse at unassisted, intervention-less driving than humans. To put that into perspective: if every car in the US had Tesla FSD Beta and they were all using it as a fully autonomous system without babysitting, it would average ~1,750,000 deaths per day and everybody in the US would be dead in about 6 months. Within a week it would kill more people than have ever died in traffic accidents in the US. We are literally talking 1 deca-Hiroshima per day of badness. The only saving grace of this whole thing is that hardly anybody is insane enough to use it unattended more than a few times, so we do not see the sheer catastrophe that would occur if it were actually used as advertised.
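
The rough math behind those numbers, with the US population as my added input:

    human_deaths_per_day = 40_000 / 365   # ~110 traffic deaths per day baseline
    ratio_vs_human = 2_000 * 8            # 2,000x worse than Cruise, which is 8x worse than humans
    fsd_deaths_per_day = human_deaths_per_day * ratio_vs_human  # ~1,750,000 per day

    us_population = 330_000_000
    days_to_zero = us_population / fsd_deaths_per_day  # ~190 days, roughly 6 months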


I'm not sure flat risk per distance is the right model to use. If we instead consider a model with n classes of issues that all need to be handled one after another, then after most changes the incident rate will still be far above human, until it suddenly becomes much safer. In that model, the failures come from the software failing categorically, not probabilistically.
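
A toy version of that model (the per-class rates are made up purely for illustration): the fleet stays worse than a human driver until the last failure class is handled, then flips to safer all at once.

    # Toy model: incidents come from discrete failure classes; fixing a class
    # removes its incidents entirely instead of smoothly scaling down a flat rate.
    class_rates = [1/50_000, 1/80_000, 1/200_000]  # hypothetical incidents/mile per class
    human_rate  = 1/550_000                        # reported-accident rate from upthread

    for fixed in range(len(class_rates) + 1):
        remaining = sum(class_rates[fixed:])
        verdict = "worse than human" if remaining > human_rate else "safer than human"
        print(f"{fixed} classes fixed: {remaining:.2e} incidents/mile -> {verdict}")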



