It's probably why so many people keep resisting advances like autopilot: they can't admit that even they might someday fail and crash. They believe that if they're good drivers they will avoid it.
Sadly, very good drivers die every day, and not only because of someone else's mistakes.
Of course a computer or a machine can fail too, but it will never fail as often as humans do, because we can be careless, sleepy, drunk, unskilled, ...
Now the real problem we will face (and what scares me, even though I'm pro-autopilot) is accepting that we hand our lives to machines that will have to make choices in emergency situations (should it save its owner or the kids in front of it? Who is responsible in case of a crash? ...)
Maybe eventually, but what I'm interested in is right now: does Tesla's autopilot fail more often than a human driving in similar conditions? My suspicion is that it does.
Consider this anecdote: https://twitter.com/elonmusk/status/756004029239472132
Then reconsider whether you really believe that pedestrian deserves to be dead because you don't trust the data, or the world-class engineers for whom this is literally their everyday job.
This is the average for all cars on all roads, in all driving conditions.
To have a meaningful comparison, we would need to know the fatality rate for new luxury sedans on divided highways in good weather conditions.
Your statistic ignores the fact that autopilot is only used in relatively simple situations, and since you're measuring fatalities only, you need to compare it only with cars of equivalent safety ratings.
If you have statistics comparing humans and autopilot in relevant situations I'd be very interested, but the ones you quote are exactly the ones I'm complaining about as being nonrepresentative.
Even considering all miles equally, a rate of 1 per 130 million autopilot miles (far too small a sample to generalise from) still does not compare particularly well to a lot of cars driven by humans, e.g. the BMW 7 series.
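To make the "far too small a sample" point concrete, here's a minimal sketch (Python with scipy) of the exact Poisson confidence interval you get from a single observed fatality in 130 million miles. The baseline of roughly one fatality per 90-100 million vehicle-miles is the commonly cited US all-cars average, used purely for scale; none of this is Tesla's own analysis.

    # Exact (Garwood) Poisson confidence interval for a rate
    # estimated from a single event.
    from scipy.stats import chi2

    observed = 1      # fatalities observed on autopilot
    miles = 130e6     # autopilot miles driven
    alpha = 0.05      # 95% confidence interval

    lower = chi2.ppf(alpha / 2, 2 * observed) / 2
    upper = chi2.ppf(1 - alpha / 2, 2 * (observed + 1)) / 2

    scale = 100e6 / miles  # express per 100 million miles
    print(f"95% CI: {lower * scale:.2f} to {upper * scale:.1f} "
          f"fatalities per 100M miles")
    # -> roughly 0.02 to 4.3 per 100M miles. That interval easily
    #    contains the ~1 per 100M human average, so a single event
    #    is consistent with "much safer" and "much worse" alike.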
Incidentally, it's not at all clear that the situation described in your anecdote even involved the autopilot functionality; it sounds like it was autonomous braking, which is a standard feature on many cars.
Indeed, and that may often be because of taking warranted risks. It bothers me that, when comparing driving safety, people tend to suffer from absence blindness and discount other important things, like the cases where risk-taking avoids a death. Say you have a bleeding, injured person who needs urgent access to a medical facility: the driver can take some risks in order to save a life. Or any number of other causes that may warrant risk-taking. Now take that control away from the driver and leave it to an autopilot that may compute the driving parameters to satisfy minimum pollution, safety (from the perspective of the manufacturer's legal liability), and whatnot. Heck, I foresee cases where the autopilot won't approve any movement at all, due to whatever considerations, while its passengers are at risk of losing their lives if they don't reach somewhere soon enough. For now, people can take risks, which may be both good and bad. Don't look only at the bad side.
That's exactly my argument. In a rare (but very important) situation where risk-taking would be needed, you're forcing the moral dilemma of an even rarer occurrence with a presumed victim. We don't know that it would come to that; for all I care, it might be nothing more than some speeding on a rainy road. My car will recommend going slowly for my own sake, but in that moment I care less about myself and more about avoiding my wife's impending death. Actually, I fear that my car will not limit itself to recommending: it will force that on me, because it knows better, because the people behind those decisions won't only be engineers trying to do their best, but also lawyers doing a "mercantile calculation of legal liability", politicians trying to score on public safety through regulation, and so on!
A human, with proper training and heuristics, can reasonably be expected to never make a mistake that causes a terrible accident.
Unfortunately, a large part of "proper training" involves being driven around as a passenger for your entire childhood, immersed in a car culture. Further, the heuristics are commonly ignored by males younger than, say, 25.
Fighter pilots make mistakes that cause terrible accidents. And they're a group of humans who were pre-screened for physical/mental/emotional fitness and given extensive training.
And in my opinion, the only thing that can make you believe that is that there are, fortunately, few enough car crashes that you assume those who have never crashed avoided it through skill.
Of course, skill and training will get you through lots of situations that could otherwise have killed you, but my conviction is that if you go through life without a big accident, a huge part of that is luck: luck that you never encountered the situation in which training and skill couldn't have saved you.
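To put a rough number on the luck argument, here's a toy calculation; both figures below are made-up illustrative assumptions, not measured rates:

    # If some accidents are unavoidable regardless of skill, even a
    # tiny per-mile chance compounds over a driving lifetime.
    import math

    p_per_mile = 1e-7         # assumed chance per mile of a situation
                              # skill couldn't have saved you from
    lifetime_miles = 600_000  # ~13k miles/year over ~45 years

    p_at_least_once = 1 - math.exp(-p_per_mile * lifetime_miles)
    print(f"P(at least one such situation) = {p_at_least_once:.1%}")
    # -> about 5.8%: whether you personally hit that situation is
    #    a matter of exposure and luck, not driving skill.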
OK, I know this is my little cause of the moment, but seriously, why are computers a priori better at anything? Sure, they don't make poor life choices, but they are absolutely at the mercy of the quality of their algorithms, sensors, operating system, training data, and physical computational hardware.
But seriously, the most important part of that sentence, 'safe', is absolutely unproven.
If a driver ever has to make that choice, they were going too fast in the first place. You should never be going so fast that your stopping distance exceeds the distance to the nearest blind spot, unless you're driving on a controlled-access highway.
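For a back-of-the-envelope version of that rule, total stopping distance is reaction distance plus braking distance; the reaction time and friction coefficient below are typical textbook assumptions, nothing more:

    # Total stopping distance = reaction distance + braking distance:
    #   d = v * t_react + v^2 / (2 * mu * g)
    G = 9.81        # gravity, m/s^2
    T_REACT = 1.5   # assumed driver reaction time, s
    MU = 0.7        # assumed tyre-road friction, dry asphalt

    def stopping_distance_m(speed_kmh: float) -> float:
        v = speed_kmh / 3.6  # km/h -> m/s
        return v * T_REACT + v ** 2 / (2 * MU * G)

    for kmh in (30, 50, 70, 90):
        print(f"{kmh:>3} km/h -> {stopping_distance_m(kmh):5.1f} m")
    # 30 km/h needs ~18 m, 90 km/h ~83 m: if the blind corner is
    # 40 m away, anything much above 50 km/h is already too fast.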
The real shocking truth is that none of these systems is capable of driving when even skilled drivers might need the assistance. Think about it this way: car safety systems are designed to ensure recovery of safe driving in some of the worst conditions, yet self-driving systems cannot handle most of those poor conditions because they cannot accurately see and assess the situation.
I'm pretty sure that, on the contrary, autonomous car makers are actually testing their cars in the worst possible conditions, even some that are unlikely to ever happen to anyone. At least that's what Google says about its test process.
I seem to remember reading a paper postulating that drivers unconsciously try to avoid crashing their own side of the car into hard objects.
It would seem that the life expectancy of dwarves will go up.