Wow. The other driverless car players should be all over this lobbying to shut Uber down. If Uber massively deploys a commercial service with subpar quality in order to "win", and then those cars start getting into accidents, the entire field is going to be delayed by 10 years. The general public is not going to just think "Uber is bad", they are going to think "self-driving cars are bad". Politicians will jump all over it and we'll see very restrictive laws that no one will have the guts to replace for a long time.
And honestly, if that happens, it's probably what we need anyway. If the industry doesn't want to be handcuffed, it needs to figure out some really good standardized regulations on sharing data with law enforcement, on determining fault for self-driving vehicles, and on what the penalties should be, rules that are both fair and strict.
The alternative: they are not first to market, someone else is and immediately replaces Uber with a network of cheaper self-driving cars. Then either a) Uber goes out of business, or b) they somehow convince someone to license the tech or sell them the cars at a reasonable price, leaving them vulnerable and less profitable, with no market advantage.
Any engineer with this attitude needs to learn the lesson of the Therac-25. The issues in the Ars article are very similar to section 4 "Causal Factors" of the report[1].
> To get to that better software faster we should deploy the first 1000 cars asap.
Is that admitting that they do not have the "better software" and intend to deploy 1000 cars using "lesser software"? That's treading dangerously close to potential manslaughter charges, if this willful contempt for safety can be proven to a court.
To play devil's advocate on Kalanick's behalf, he might be arguing for more real-world data collection. Tesla famously equipped most of their cars with more sensors than were required at the time of delivery, using the data to drive development of the Autopilot function that was later added to the cars.
Btw, this whole approach of collecting sensor data and iterating on it is originally a MobilEye idea that they presented at conferences before the so-called Autopilot was available from Tesla.
He is clearly right about that. Human-driven cars are safety critical and already do fine without redundant brakes and steering. How many crashes are due to brake or steering failure? I'm guessing it's well under 10%.
Most human crashes are due to bad driving, and for computers it will be the same. I mean, even this fatal crash probably could have been prevented with better software. It's not like the brakes failed. They just weren't applied.
> To get to that better software faster we should deploy the first 1000 cars asap.
This is where he is totally mad.
• Self-driving car kills child
• Hackers send self-driving car on wild ride
• Empty self-driving semi found in warehouse lot, GPS pirates make off with entire cargo
Creating the perfect self-driving car, with redundant systems, safety everything & so on, will certainly help its safety record.
But it will also drive up the cost.
And put it out of reach for a lot of people.
If the goal is to save lives, the bar self-driving cars should be held to is what humans do driving today, not perfection.
> But zooming out from the specifics of Herzberg's crash, the more fundamental point is this: conventional car crashes killed 37,461 in the United States in 2016, which works out to 1.18 deaths per 100 million miles driven. Uber announced that it had driven 2 million miles by December 2017 and is probably up to around 3 million miles today. If you do the math, that means that Uber's cars have killed people at roughly 25 times the rate of a typical human-driven car in the United States.
I don't think there's enough evidence to say that self-driving cars are as safe as humans.
But that is human experimentation, something we as a culture generally agree is abhorrent.
Wow there goes that "safer than human drivers" argument.
Consider if there was a new lottery and you weren't sure what the odds of winning were. You play it three weeks in a row and the third time you win a million dollars. Conveniently, no one else tries the new lottery yet.
Does it follow then that the odds of winning a million dollars are 1 in 3? Or should you play it a few more times before you declare to all that one in three plays will make one a millionaire?
Assuming that accidents are independent, we can model this as a Poisson point process. If the accident rate is 1 per 100M miles and Uber has driven 3M miles, the probability of there being zero accidents in that time is P{N=n} = ((λt)^n / n!) · e^(-λt), which for n=0 reduces to e^(-λt), where λ = 1/100M and t = 3M. Doing the math, that's about 97.04%.
So, yes. It is possible that Uber's accident rate is 1 in 100 million. If so, this incident would fall in that remaining 3%. It's unlikely, but possible.
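A minimal sketch of that arithmetic (the 1-death-per-100M-miles rate and the 3M-mile figure are the assumptions from the comment above, not measured values):

```python
import math

def poisson_prob(n_events, rate_per_mile, miles):
    """Probability of exactly n_events under a Poisson model
    with expected count rate_per_mile * miles."""
    lam = rate_per_mile * miles
    return (lam ** n_events / math.factorial(n_events)) * math.exp(-lam)

# Assumed figures: human-level rate of 1 death per 100M miles, 3M miles driven.
rate = 1 / 100_000_000
miles = 3_000_000

p_zero = poisson_prob(0, rate, miles)       # ~0.9704: zero deaths ~97% of the time
print(f"P(0 deaths)  = {p_zero:.4f}")
print(f"P(>=1 death) = {1 - p_zero:.4f}")   # ~0.0296
```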
One thing we can say about the woman killed the other day by an autonomous Uber is that, unlike the other ~40,000 killed on America's roads over the past year, her death was not in vain.
Every day that we delay the widespread deployment of this technology, it's another 100 or so people dead. Of course, the public is unlikely to see it that way. They see one death as a tragedy, but 40,000 is just a statistic, business as usual, nothing to get excited about.
Bayesian approach: To make the math really simple, let's assume a discrete prior on Uber's death rate. Say 33% that Uber's cars are much safer than humans (0.1 deaths per 100M miles), 33% that they are equally safe (1 death per 100M miles), and 33% that they are much more dangerous (10 deaths per 100M miles). After observing one death at 3 million miles, your posterior should update to roughly {safer: 1%, equal: 11%, more dangerous: 88%}. This is a substantial shift in confidence.
Math: http://www.wolframalpha.com/input/?i=(1-10%2F100)%5E2*(10%2F...)
Frequentist approach: Let the null hypothesis be that Uber's self-driving cars have the same death rate as humans: 1 death per 100 million miles. Under that hypothesis, the probability of Uber killing someone within 3 million miles is about 3%. Therefore, we can reject the null hypothesis with a p-value of 0.03. One positive "data point" is statistically significant.
Statistically, one death after 3 million miles is not proof that Uber's death rate is higher than 1 in 100 million miles. But it is statistically significant, in both a frequentist and Bayesian framework. You have to get really, really unlucky to have a death at 3 million miles if your death rate is 1 per 100 million miles.
Bottom line: This collision isn't proof, but it's strong evidence. (To go along with all the evidence from crash rates, disengagement rates, engineers working at these companies, and the video of the crash itself.)
(I'm guesstimating on some of those numbers, but they should be somewhere in the ballpark.)
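A minimal sketch of that Bayesian update, under the same assumptions as above (uniform prior over the three discrete rates, Poisson likelihood of exactly one death in 3M miles):

```python
import math

miles = 3_000_000
# Hypothesized death rates (per mile), taken from the discrete prior above.
rates = {"safer": 0.1 / 100_000_000,
         "equal": 1.0 / 100_000_000,
         "more dangerous": 10.0 / 100_000_000}
prior = {k: 1 / 3 for k in rates}

def likelihood_one_death(rate):
    """Poisson probability of exactly one death in `miles` miles."""
    lam = rate * miles
    return lam * math.exp(-lam)

unnormalized = {k: prior[k] * likelihood_one_death(r) for k, r in rates.items()}
total = sum(unnormalized.values())
posterior = {k: round(v / total, 3) for k, v in unnormalized.items()}
print(posterior)  # roughly {'safer': 0.012, 'equal': 0.114, 'more dangerous': 0.874}
```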
They let a car on the road that couldn’t even stop at a red light ffs
https://www.theverge.com/2017/2/25/14737374/uber-self-drivin...
The fact that they were allowed to deploy in Arizona after this is really regrettable
And it’s totally unsurprising that Uber “got the first kill”
It only took two incidents to shut down the Concorde program.
Would it have been fair if Uber last week were to declare that they have a 0% probability of pedestrian deaths, since they'd never had one yet?
The goal of these statistics is to predict future outcomes. But with such a small data sample, you cannot fairly predict the future- just as in my lottery example.
I'm not defending Uber here- I'm defending statistics!
With more data, we may discover that Uber cars are 100X worse, not just 25X. Or we may discover they're better. But we don't have the statistical power to make that estimate when we've only seen the event happen once.
I would add that people represent (as a whole) various kinds of drivers, including the non-expert/freshly licensed, the elderly, the overconfident, those speeding or not respecting road signals (or driving in connection with a crime), the emotionally fragile, those under the effect of alcohol or drugs, those tired from not having had enough sleep, etc., etc.
The "model" for an autonomous vehicle should be instead a particular "perfect" subset, ideally it should replicate the behaviour of an hypothetical extremely prudent, healthy 30-something with some experience with safe driving, very familiar with the way his/her car handles, having had a nice night sleep, no use of alcohol or drugs, without any external pressure (to arrive in time, to do other chores apart driving) without a cellphone or tablet distracting him/her.
If we could find this latter kind of driver (should they actually exist) and isolate from the statistics the number of incidents they cause, that would be the reference benchmark.
Taking it a step farther, I’d expect that road conditions in the Uber tests so far have been more benign than average city driving. Less bad weather, straighter roads, etc.
A proper comparison in this case is comparing passenger car death rates.
And then you need to factor in other conditions, such as the fact that the weather was clear, and that you should be comparing pedestrian/bicyclist deaths specifically, and you see that this incident already throws the death rate for autonomous vehicles out of whack.
> I believe with perfect faith in the coming of the Messiah; and even though he may tarry, nonetheless, I wait every day for his coming.
The rational response for individual cities is to say: "Do your risky testing elsewhere, thank you very much. You may come back here once you are as safe as our average driver."
It's a pretty unfair comparison, with 1 death on one side and over 30k on the other...
This is because stopped vehicles often have people come out of them to fix the vehicle, and too many cops, tow operators, etc, were getting killed by careless motorists.
Guess this is news to you (and uber), update your driving style please for the sake of first responders!
The story in the linked article about an Uber whipping through an intersection at 38 mph next to two lanes of stationary traffic seems sufficiently conclusive to me that their self-driving system is not ruled by a sense of caution.
Here in the UK, we have speed limits, but the rules of the road also call drivers to consider "appropriate speed" - you slow down in situations where you might have to react with very little warning. This ought to be extremely easy for an automated system - it can measure its braking distance with high accuracy, it can measure distances to objects around it with high accuracy, it can determine exactly which areas of the world around it it can't see, that could pose a threat with relatively low warning, so just fucking slow down.
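As a rough illustration of what "appropriate speed" could mean for a machine (the deceleration and reaction-time figures here are made-up illustrative values, not anything from a real system):

```python
import math

def max_safe_speed(clear_distance_m, decel_mps2=6.0, reaction_time_s=0.2):
    """Highest speed (m/s) at which the car can still stop within the distance
    it can actually confirm is clear: reaction distance plus braking distance.
    Solves v*t + v^2/(2*a) <= d for v."""
    a, t, d = decel_mps2, reaction_time_s, clear_distance_m
    return a * (-t + math.sqrt(t * t + 2 * d / a))

# If the sensors can only confirm 25 m of clear road ahead:
v = max_safe_speed(25.0)
print(f"max safe speed ~ {v:.1f} m/s ({v * 3.6:.0f} km/h)")  # ~16 m/s, ~58 km/h
```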
I've long been bearish on full autonomous driving because I consider there to be so many corner cases in real world driving where ad-hoc non-verbal communication is required to solve traffic flow, that the computers would never catch up. Now I wonder if their solution is to just plough through every problem at 98% of the speed limit and then disclaim responsibility.
Set "too tightly", it will also have you slowing for every car approaching a stop sign from a side street.
Cars that randomly slow out of an excess of caution are also a hazard to other road users. Don't believe that? Go drive for a month and set a series of alarms on your phone every 5-10 minutes. Every time the phone goes off, abruptly slow to half of your prior speed. Do you think you'd make 1,000 miles without causing a road hazard or collision?
For example, unnecessarily stopping in a middle of a highway is extremely dangerous, especially if visibility is limited or roads are slippery.
Looks like Uber has attracted Levandowski due to his cultural fit.
"To get to that better software faster we should deploy the first 1000 cars asap. I don't understand why we are not doing that. Part of our team seems to be afraid to ship."
And from another email:
"the team is not moving fast enough due to a combination of risk aversion and lack of urgency"
On the other hand, redundant steering and braking seem like probable overengineering. Brakes are already somewhat redundant (dual-circuit master cylinders were common in the 70s and are almost certainly in any modern vehicle), and better software could periodically verify they're working and, if not, coast to a stop. Steering failure could be handled by engaging the brakes. Simultaneous failure is likely rare and catastrophic anyway -- losing a wheel and having the brake pressure go with it can happen, and when it does, you put on your blinkers and hope you come to rest in a safe manner.
And then, if their code results in a death, they are liable and can have their license completely revoked, and they would be unable to work on self driving cars again.
- If bad code makes it into production, that is a systemic failure not an individual one (Why didn't the bug get caught in code review, QA, etc.)
- No one is going to want to work on a project where a single failure can taint their career.
- What if I use a 3rd-party lib and that is where the bug is. Who is at fault then? What if the code isn't buggy, but I'm using it in an unexpected way because of a miscommunication? If I am only allowed to use code that I (or someone certified) has written, development is going to move at a snail's pace.
- What if I consult with an engineer who doesn't have a certification on a design decision and the failure is there, who is at fault?
- What if the best engineer on the project makes a mistake and ends up banned? Does he/she leave the project and take all their tribal knowledge with them, or are they still allowed to consult? If they can consult, what stops them from developing by proxy by telling other engineers what to write?
Not to be a dick, but this is an awful idea that would basically kill the self driving car.
In safety-critical fields, setting a much higher quality bar than the regular 'it seems to work, the tests pass' seems perfectly rational to me. We can now write provably-correct C compilers (CompCert) and OS kernels (seL4). There's no excuse for not putting similar levels of effort[0] into something as safety-critical as self-driving cars.
[0]: Note that I'm not advocating for "provably-correct self-driving car software" (that may not be the right approach, as a formal spec is likely unrealisable), but find the argument that "it's ok to write buggy spreadsheets, so it's ok to write buggy self-driving cars" to be morally unacceptable.
Yes, there are reviews, QA, and all of that. So, yes, there is no single person responsible (exceptions apply).
But there is no excuse for using 3rd-party libs. Just don't use them. If you don't know what's in it, do not use it.
That is what certifications are for. The same rules apply in medicine and other areas.
Wait, what? That goes against one of the core benefits of open source software--that having many eyes on a problem decreases the risk of bugs. I'm willing to bet that if Uber had to implement their own machine learning/vision libraries from the ground up, there would be significantly more issues.
Pretty much everything in development relies on the work of other people. I used a 3rd-party lib just as an example, but what if it's in the framework or even the language that an engineer uses, who would be at fault then? You can't expect every developer to have gone through the entire source code for whatever language they are writing in.
Sciences build on each other, and after a certain period of time you have to take things for granted in order to keep moving forward.
> The same rules apply for medical and other areas.
No, they don't. Doctors kill patients all the time and they aren't banned from medicine for it. There is an investigation, they make sure it wasn't intentional and there wasn't any gross negligence and that this isn't a repeating pattern, if none of those are the case they see what they can learn from it and move forward in hopes that what they learn can help other doctors.
"The engineers employed by Jack D. Gillum and Associates who had "approved" the final drawings were found culpable of gross negligence, misconduct and unprofessional conduct in the practice of engineering by the Missouri Board of Architects, Professional Engineers, and Land Surveyors. Even though they were acquitted of all crimes that they were initially charged with, they all lost their respective engineering licenses in the states of Missouri, Kansas and Texas and their membership with ASCE.[22] Although the company of Jack D. Gillum and Associates was discharged of criminal negligence, it lost its license to be an engineering firm." - https://en.m.wikipedia.org/wiki/Hyatt_Regency_walkway_collap...
From what we've seen so far, the Uber car failed to detect an obstacle, and we also had the Tesla crash where the car did not see a truck, so it is obvious that there are some major issues that are not tested for. They need to have better tests, and maybe better safety drivers inside the cars who don't text on the phone on the job.
Does that demonstrate that the cars are safe? Even a little bit?
To me it just demonstrates that a grunt is on the chopping block for what amounts to systemic failure.
https://insight.ieeeusa.org/articles/professional-licensure-...
https://ncees.org/ncees-discontinuing-pe-software-engineerin...
Likewise if you knowingly observe anyone else in your company breaching safety/regulatory guidelines then as a professionally certified engineer you have a legal responsibility around ethical disclosure.
See: http://www.professionalengineers.org.au/rpeng/ethical-commit...
I do not know how things work in the US but in Australia these rights are protected by law. The company legally can not fire an engineer in this situation.
If you want error-free software you need a blameless culture based around process, not individual ownership of code. It should not even be possible for an error to be one individual's mistake, because by the time it hits the road it should have gone through endless code review and testing cycles.
Just like it relocated its testing to get a more "business friendly regulatory environment".
For a mostly topical example of such rules: https://www.dmv.ca.gov/portal/dmv/detail/vr/checklists/outof...
That'd be extremely foolish. And regardless of the dumb things the previous Uber CEO has done in the past and the big deal people are making over a $150 license, they have still hired some of the best engineers in the world.
You basically have to find the brightest-of-the-brightest to build AI... and Uber pays very well and puts plenty of effort into recruiting that talent.
Not to mention the massive PR and monetary risks that are inherent in killing people with your products. That would make any company highly risk-averse.
https://jalopnik.com/safety-third-is-the-running-joke-at-ube...
1) They make it appear that Uber is a car manufacturer.
2) Even though Uber has not been determined to be at fault, the author seems to want to make it that way anyway.
I'd be fairly surprised if there's any real appetite at Uber to continue with this now. It was never anywhere near their core competency.
3 years ago Uber hired ~50 specialists from CMU to work on autonomous vehicles. I'd call that a core competency.
https://www.theverge.com/transportation/2015/5/19/8622831/ub...
Maybe not directly, but is Uber's current business model sustainable without some form of self-driving technology replacing their human drivers?
"Indeed, it's entirely possible to imagine a self-driving car system that always follows the letter of the law—and hence never does anything that would lead to legal finding of fault—but is nevertheless way more dangerous than the average human driver. Indeed, such a system might behave a lot like Uber's cars do today."
It doesn't matter if Uber makes cars that are technically not at fault; if they're mowing down pedestrians at a rate significantly higher than human drivers, then they should never be allowed on public roads. People mess up occasionally. The solution is not an instant death sentence administered by algorithm.
And whether Uber makes the entire car or not isn't germane to the discussion. They are responsible for the safety of said cars, which is what we're discussing here.
1) The physics of driving isn't random, so it could be said that there are no accidents in autonomous driving, only oversights.
2) It would set a minimum performance level by making it prohibitively expensive to have a dangerous car. Those who test responsibly would have a low enough injury rate that they could deal with the risk by taking out suitable insurance.
3) It would provide a strong incentive to make the best car possible and not to take expedient shortcuts.
4) Over time automatic liability would become irrelevant if it asymptotically forces the injury rate to zero.
5) We have an historic opportunity to create a culture that will eliminate the danger of cars. It might have an increased short-term, one-off cost, but a huge long-term payoff in the reduction of health costs and human misery. If we miss this opportunity we will be stuck with the long-term cost of an industry that will be competitively driven towards poorer performance, potentially against the will of the majority of players, by the actions of a few.
Additionally, if we train self-driving cars to always give way to pedestrians who even look like they might cross the street, they’re going to have a heck of a time getting through cities. Kids are going to learn that they can trigger a squealing emergency stop by lunging towards the curb - great fun!
What I think WILL happen is that autonomous cars will have to buy blanket insurance policies that cover their entire fleet. High accident/fatality rates will result in high insurance premiums, which will put bad actors out of business.
If you enforce your right of way and kill somebody, that's manslaughter in my book.
They are on the road with conditions because what they are doing is still somewhat experimental. There is a safety driver for a reason, and that driver did not respond. A human driver may have collided but would have responded and potentially avoided a fatality (if not a collision). The benefits of autonomous driving completely failed on all counts in this case, which implies that being on a public road is far too early for Uber, suggesting some fault lies with Uber or the regulators.
It seems unlikely that the Police will find any fault because they probably don't want to have to file a criminal charge against the driver, but that is who it would go against if there was fault.
There was a case some years ago in Boston where a subway rear-ended another one because the operator was texting. The driver was fired and probably would have been prosecuted if there had been fatalities. Taking eyes off the road for this long seems insane to me.
If you really cared about safety, there are far more immediate and impactful solutions than spending billions on self-driving cars. If they came out and said that they were doing it to make money or make driving easier, it would have carried more weight. But you just can't trust a word this company says.
* Flouted Taxi regulations
* Living in legal gray zones in regards to contractors vs employees
* Designed a system to avoid law enforcement
* Performed shady tactics with its competitors
* Illegally obtained the private medical records of a rape victim
* Created a workplace where sexual harassment was routine
* Illegally tested self-driving cars on public roads in California without obtaining the required state licenses.
* Possibly stole a LIDAR design from a competitor
Now their vehicle killed a pedestrian in a situation that the self driving vehicles should be much better than humans at (LIDAR can see in the dark, and the reaction time of a computer is much better than humans.)
Uber has exhausted their "benefit of the doubt" reserve. Maybe, they need to be made an example of with massive losses to investors and venture capitalists as an object lesson that ethics really do matter, and that bad ethics will eventually hurt your bank account.
Self-driving cars are currently in that state where they're always in accidents but never technically at fault. When individuals have this behavior pattern, their insurance company drops them, because if they're so frequently present when shit hits the fan, they're a time bomb from a risk perspective.
Edit: meant to reply to parent, oh well.
edit: wow this triggered some people. somehow 'if they are at fault they should be punished' got interpreted as 'they are not at fault and should not be punished'
> But zooming out from the specifics of Herzberg's crash, the more fundamental point is this: conventional car crashes killed 37,461 in the United States in 2016, which works out to 1.18 deaths per 100 million miles driven. Uber announced that it had driven 2 million miles by December 2017 and is probably up to around 3 million miles today. If you do the math, that means that Uber's cars have killed people at roughly 25 times the rate of a typical human-driven car in the United States.
We have a sample size of 1, granted, but it's not looking very good. At the very least they were expected not to be less safe than humans.
i'm not sure that's the right way to look at it (that uber is a sample of 1, or that the death is a sample of 1).
in this case, the metric is deaths per mile, so there are purportedly 3 million samples for uber self-driving cars, with one positive (negative?) result in that sample. you need so many samples because the positive observation rate is expected to be very low (as evidenced by the 1.18 deaths per 100 million miles driven by human drivers).
if you assume the death rate is roughly the same, you can (roughly) estimate the expected error or confidence interval with the 3 million sample size versus a 100 million known rate for human drivers. as more samples are gathered, the confidence interval gets tighter: if the confidence interval currently stands at 80% with 3 million samples (made up numbers), it might go up to 85% with 6 million samples.
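To put a number on how wide that uncertainty still is, here's a rough sketch of an exact Poisson confidence interval for the rate, using the same assumed figures (1 observed death in 3M miles); the scan-based computation is just one simple way to get it:

```python
import math

def poisson_cdf(k, lam):
    """P(X <= k) for a Poisson variable with mean lam."""
    return math.exp(-lam) * sum(lam ** i / math.factorial(i) for i in range(k + 1))

def exact_poisson_ci(k, alpha=0.05, hi=50.0, steps=200_000):
    """Exact CI for the Poisson mean given k observed events, by numeric scan."""
    grid = [hi * i / steps for i in range(1, steps + 1)]
    lower = 0.0 if k == 0 else max(lam for lam in grid
                                   if 1 - poisson_cdf(k - 1, lam) <= alpha / 2)
    upper = min(lam for lam in grid if poisson_cdf(k, lam) <= alpha / 2)
    return lower, upper

miles = 3_000_000
lo, hi = exact_poisson_ci(k=1)  # one observed death
# Convert the expected-count interval into deaths per 100M miles.
print(f"95% CI: {lo / miles * 1e8:.1f} to {hi / miles * 1e8:.1f} deaths per 100M miles")
# roughly 0.8 to 186 -- wide enough to include both "as safe as humans" and "far worse"
```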
Almost all the miles driven are going to be in near-ideal circumstances (daylight, no rain, good road surface, driver familiar with normal road traffic conditions and drives the route regularly). I have nearly no insight into the uber death, but I gather it happened at night. It could easily be that humans are also an order of magnitude more dangerous at night.
The point is that it is too early to test these cars on public roads. The fact that self-driving cars will kill fewer people is just a hope; we may never achieve that (there is no proof we can do it with current tech), or it may take more time. We need some actual numbers that are not tampered with before letting these self-driving cars on the road, and the safety drivers seem not to pay attention, so this testing is a risk.
There have been cars on the market for years now that can detect a dangerous situation ahead, even in the dark, and auto-brake. If they haven't caught up to several-years-ago's consumer model of car, then what on earth are they doing putting these things on public roads?
It will be interesting to see whether Volvo pulls back from that relationship, because even the up-to-date stock emergency brake assistant should have caught this. Except Uber has deactivated it in favor of their own technology (because it is superior).
EDIT: The German Automobile Club (ADAC) tested full-sized cars in 2016, which in IT terms means decade-old technology. The result was that the Subaru Outback was the only one that detected pedestrians in the dark, but it was very poor with bicyclists. https://www.adac.de/infotestrat/tests/assistenzsysteme/fussg...
Maybe testing of autonomous vehicles should be done off public roads (at least at this stage of development).
The way it would work would be: the human is driving and the software is, at the same time, watching the driver and figuring out an action to take. Every time the driver's and the software's behavior differ, the event is logged and analyzed to figure out why there was a difference and which of the two guessed better.
But the way testing is currently going on, it seems millions of miles are wasted where nothing happens and nothing is learned.
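A minimal sketch of that "shadow mode" logging loop; every name, type, and tolerance here is hypothetical, just to make the idea concrete:

```python
from dataclasses import dataclass

@dataclass
class Action:
    steering_deg: float  # steering wheel angle
    brake: float         # 0.0 (off) to 1.0 (full)
    throttle: float      # 0.0 to 1.0

def actions_differ(human: Action, planned: Action,
                   steer_tol_deg: float = 5.0, pedal_tol: float = 0.15) -> bool:
    """Crude disagreement test; the tolerances are made-up illustration values."""
    return (abs(human.steering_deg - planned.steering_deg) > steer_tol_deg
            or abs(human.brake - planned.brake) > pedal_tol
            or abs(human.throttle - planned.throttle) > pedal_tol)

def shadow_mode_step(sensor_frame, human_action, planner, log):
    """Run the planner on the same inputs the human saw, but never actuate.
    Only the disagreements are kept for later analysis."""
    planned = planner(sensor_frame)
    if actions_differ(human_action, planned):
        log.append({"frame": sensor_frame, "human": human_action, "planned": planned})
```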
Some argue that it is okay because it will decrease risks in traffic in the long run. This is not a valid argument to allow on-road bug-testing, as there is a lot of medical research that we as a society don't allow because of ethical concerns, even some research where the risk of death is essentially zero. Applying research ethics to the Uber situation, Uber's vehicles would under no circumstances be allowed on the road until it could be proven that they were at least as safe as the vehicles already operating on the road.
So while the technique suggested might not work well to solve the problem of safe autonomous cars, the more dangerous alternatives should absolutely not be allowed.
Yeah, that doesn't work though, basically because you would need an excellent situation representation to really understand the drivers' reactions to outside events. But that does not exist.
Perception and situation representation are key to mastering the driving task, and they both differ greatly between humans and machines.
This is the #1 reason I'm sceptical about self-driving cars becoming ubiquitous any time soon. Clearly they have potential advantages over a human driver in terms of not being tired or distracted, better sensors and better reaction times, but their judgement in any given situation will always be a function of some predetermined inputs. It's a brute force approach.
Until a self-driving car can recognise a pub door opening around throwing-out time where a drunk patron is about to stumble out into the road from a hidden position, or that it's about to pass a park and a nearby school just finished for the day so kids will be kicking balls around and running across the road to join their friends, or that the recent weather conditions make black ice likely and the cyclist it's about to pass doesn't seem very steady, and take corresponding actions to reduce both the risk and the consequences of a collision, it's going to take a lot of brute force to outperform an experienced and reasonably careful human driver.
In short, reacting to an emergency 100ms faster than a human driver is good, but sufficient situational awareness and forward planning that you were never in the emergency situation in the first place is better.
"win-at-any-cost" and "second place is first looser" (sic) do not cohere with safety.