Similar accidents are probably occurring every minute between human drivers, and as a rule they go unreported.
AVs might one day even avoid this "victimization," if these events keep following a predictable pattern. AVs could exaggerate the gap, leaving a precisely calibrated amount of extra space. When anticipating a rear-end collision, the AV would honk and flash its brake lights while scooting forward.
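A minimal sketch of what that mitigation policy might look like; the function, field names, and thresholds below are all invented for illustration, not anything Google has described:

    # Hypothetical rear-end mitigation policy; names and thresholds are
    # made up for illustration, not Google's actual system.
    def mitigate_rear_end(rear_gap_m, closing_speed_mps, front_gap_m):
        """Return warning/evasive actions when a rear impact looks likely."""
        actions = []
        ttc = rear_gap_m / closing_speed_mps if closing_speed_mps > 0 else float("inf")
        if ttc < 2.0:                  # impact likely within ~2 seconds
            actions += ["honk", "flash_brake_lights"]
            if front_gap_m > 3.0:      # calibrated extra space left up front
                actions.append("creep_forward")
        return actions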
Google's absolutely correct that its AVs are never at fault in any of these accidents, legally speaking. Does blame change though if there are ways the AI can prevent this series of similar accidents, but they choose not to?
The AV yields to those running a red light, even though getting t-boned wouldn't legally be the AV's fault. That seems wise to me. Is it inconsistent to expect the AV to avoid getting t-boned, but not expect it to avoid getting rear-ended? I'm not sure...
Or, more broadly: How do you divide blame between two parties when one has superhuman faculties? Is the AI responsible for everything it could have conceivably been programmed to prevent? Or do you just hold it to a human standard?
Like all hard problems, neither extreme is very satisfying.
Does blame change though if there are ways the AI can prevent this series of similar accidents, but they choose not to? [...] Is it inconsistent to expect the AV to avoid getting t-boned, but not expect it to avoid getting rear-ended?
While I was in college I worked on some wheeled robots that played a competitive ball game. We wanted to avoid collisions between robots, and to win the game. One of the things we found was: if one team has great collision avoidance and the other team has no collision avoidance, the team without collision avoidance always wins. When there's a contest for the ball, the team without collision avoidance just blasts in there, and when the team with collision avoidance backs off to avoid a collision, it loses the ball.
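A toy version of that dynamic, just to make the incentive explicit (rules and names invented):

    # Toy model of a ball contest: the robot that backs off to avoid a
    # collision concedes the ball to the one that keeps driving in.
    def contest(a_yields, b_yields):
        if a_yields and not b_yields:
            return "b"       # a backs off, b blasts in and takes the ball
        if b_yields and not a_yields:
            return "a"
        return "contested"   # both press in, or both back off

    # Team A has great collision avoidance, team B has none:
    print(contest(a_yields=True, b_yields=False))  # -> 'b', every time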
If autonomous cars were so good at avoiding accidents that you could merge aggressively and they'd always brake, and run red lights in front of them and they'd always stop, manual drivers might learn to do that.
Riding in a Google autonomous vehicle would be a pretty shitty experience if you knew you'd get four or five emergency stops in every journey when assholes decide to cut you up :)
By definition, we don't do that with human drivers who don't meet the legal test for being "at fault," even when, in the specific case, it would have been possible for them to avoid the accident but they didn't. Why would we do that for non-human drivers?
On the other side, well, we've been down some similar roads before. It's easier to see in the exploding Pintos and combustible Volkswagens. It's a common theme throughout the macabre history of modern (US) product liability: How much responsibility does a company have for accidents that it could prevent, but at some obnoxious cost? [1][2]
At one point, some judges had a simple rule. Just run the numbers. If you can prevent more harm than it costs you to fix, you better fix it. If not, eh, good call, save your money. Don't spend millions to prevent one stubbed toe every ten years, that'd be a waste.
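That's essentially the Hand formula from negligence law: fix the defect when the burden B of fixing is less than the expected harm P × L. A toy calculation, with figures loosely recalled from the infamous Pinto memo (treat them as illustrative, not as the memo's actual numbers):

    # Hand formula: fix when B < P * L (burden vs. expected harm).
    # All figures below are illustrative, not the memo's actual numbers.
    def should_fix(burden, expected_harm):
        return burden < expected_harm

    burden = 11 * 12_500_000           # ~$11 fix across ~12.5M vehicles
    expected_harm = 180 * 200_000      # ~180 deaths valued at ~$200k each
    print(should_fix(burden, expected_harm))
    # -> False: the math says save the money, which is exactly the problem.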
Seems sensible. Until it gets applied to harder cases.
Sure, they did the tests. They knew their Pinto would burn a few hundred people to death each year. But they ran the numbers. They used the right value for life, were careful in their choice of actuarial tables, and used conservative costs for the upgrades. It was just too expensive. I mean, they'd have to be hit just right, perfect angle and velocity. Better to put the exploding fuel tanks on the road.
Maybe not better for Lilly Gray, trapped in a burning vehicle, skin sloughing off, pain so extreme her heart eventually quit.
But you know. Better for society.
So the resulting case[1] tackles that inevitable question: is this really enough? Ford clearly forecasted the future. They saw Lilly Gray's charred corpse in the crystal ball, or someone like her. They didn't do anything to stop it. They followed the law instead, because it was cheaper or more expedient. Are we really going to let that slide?
Ok, that's verging towards polemic. Let's pause. Let's swing back the other way: How many accidents is a company expected to prevent? Should every car be wrapped in thousands of dollars of bubble wrap, or whatever the steel equivalent is? Should every car come equipped with roll cage and handy fire extinguisher, prepped for battle on the NASCAR track? Should these hefty tanks then run on nearly inert fuel to prevent engine fires completely, capping their range at a few paltry miles?
Clearly there are extremes.
So let's drop the absurd extremes and go back to the common, real issue. Back to Google's problem: avoiding the simple rear-end collision.
Given enough data, enough predictability, you're going to see some subtle shifts in the notion of blame.
Especially when you start looking at the issue of "fault."
In most (all?) US states, juries divine a "percentage at fault" for both parties before proceeding to divvy up damages.
You're going to get at least a few juries claiming that, sure, the rear-ender was 99% at fault, everyone can see that. But that savvy defense lawyer has a point: Google was at least a teensy weensy bit at fault, just barely. With all their data and slick technology, surely they could have done something. Surely they're responsible for not improving their algorithm ever so slightly to handle this particular case. Let's call it 1%. No more, certainly, but no less.
Conveniently, that 1% is enough to get the case dismissed without any damages whatsoever. At least in a few US states.[3]
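For the curious, the doctrines differ roughly like this (the doctrine names are real US rules; the dollar figures are made up):

    # How a 1% fault finding changes what the plaintiff recovers under
    # different US negligence doctrines. Figures are invented.
    def recovery(damages, plaintiff_fault, rule):
        if rule == "contributory":       # a few states: any fault bars recovery
            return 0 if plaintiff_fault > 0 else damages
        if rule == "pure_comparative":   # recover your share regardless of fault
            return damages * (1 - plaintiff_fault)
        if rule == "modified_51":        # barred only once you hit 51% fault
            return 0 if plaintiff_fault >= 0.51 else damages * (1 - plaintiff_fault)
        raise ValueError(rule)

    print(recovery(100_000, 0.01, "contributory"))      # -> 0
    print(recovery(100_000, 0.01, "pure_comparative"))  # -> 99000.0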
N.B. - I sharked you a bit here and I apologize. In my original post, I said Google wasn't at fault, "legally speaking," but "legally speaking," nothing's really ever that simple.
[1] http://users.wfu.edu/palmitar/Law&Valuation/Papers/1999/Legg...
[2] http://caselaw.findlaw.com/us-supreme-court/444/286.html - ok, doesn't really get all the way to this point, but the facts are startling, and prime the discussion. It's a harrowing story and explains a lot about how product liability works and why it exists.
This is called defensive driving. You as a human are supposed to avoid getting killed even in cases where you aren't legally liable. This includes making sure the intersection is clear before entering it. It includes checking whether anyone is coming the wrong way down a one-way street before entering or crossing it. It involves something as simple as making sure the car in the "right lane must turn" lane actually turned, rather than magically appearing out of your blind spot.
And, importantly, it also includes knowing what's behind you as much as knowing what's in front of you. If you see someone is going to rear-end you, you should at the very least step off the brake to make the collision less violent. If possible, accelerate forward.
And I know this isn't possible (or practical) in most American cars because they're automatics, but don't hold the brake when you're standing still; it makes potential rear-endings work out better. And never turn the wheel before you intend to turn: if you get rear-ended you could get pushed into oncoming traffic.
All this is to say that while only one vehicle is to blame for a collision, two vehicles are responsible. (Unless you hit a tree or some other static barrier; a tree cannot drive defensively.)
Or, more bluntly, it might be the other person's fault, but you're the one who's dead.
The bar is much higher than that. You are supposed to protect others, not just yourself, and including people who are at fault.
If a pedestrian is jaywalking and you provably have plenty of time and ability to avoid them, but you strike and kill them, you are definitely guilty of something like manslaughter. Although people sometimes believe otherwise, the law (and morality) does not have the property that an initial minor transgression by one party absolves the other party of all fault in the resulting interaction.
That doesn't rule out making the standard more stringent over time, but determining what events the AI gets blamed for probably doesn't need to be a huge gray area.
Perhaps the cars should constantly be planning escape routes, slowing down and stopping with enough distance from the vehicle in front to allow for escape should they detect an inevitable collision from behind. Even where the only possible escape route involves hitting another car, the AV should be able to decide that a light collision with a vehicle in another lane is preferable to a large truck hitting it at 70 mph.
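Something like a minimum-expected-harm choice, sketched below with invented options and severity scores:

    # Hypothetical minimum-harm escape choice for an unavoidable rear impact.
    # Options and severity scores are invented for illustration.
    def pick_escape(options):
        return min(options, key=lambda o: o["expected_harm"])

    options = [
        {"action": "stay_put",            "expected_harm": 9.0},  # truck at 70 mph
        {"action": "swerve_to_shoulder",  "expected_harm": 1.0},
        {"action": "nudge_car_next_lane", "expected_harm": 3.0},  # light collision
    ]
    print(pick_escape(options)["action"])  # -> 'swerve_to_shoulder'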
[1] http://www.wtsp.com/story/news/traffic/2015/04/12/fatal-cras...
Isn't that solved by the automatic collision detection with emergency braking, as implemented by Volvo?[1]
Look at public health data. There's a vast goldmine of data that could be collected, that could track, trace, and give early warning of diseases, but that is for legal reasons hidden behind a confidentiality barrier. I'd like it to be a simple check whether any of your partners was STD-positive; this is currently information that is hard to get reliably. (Sure, your partner can hand you test results, but not verifiable ones; the clinic won't attest to them if you call to reference your partner's results, so you can never be certain it's not a clever photoshop.) This is data that has a direct, tangible impact on those around you, and in many states it is a crime not to reveal certain STDs. Still, these spread, because we're afraid of making the STD-infected social pariahs, and I can't see a world where we don't have the same problem with bad drivers.
And yes, in 10 years time there will be no such notion as 'private travel'. Between your phone's GSM/CDMA, bluetooth and wifi signals and dashcams, security and traffic cameras, there will be dozens of parties who monitor every move any person makes outside of their home in urban or suburban areas (be it by car, foot or bike). With different forms of computer vision, that data is sorted and linked to other recordings of objects and there will be dozens of databases that have exact information on where everybody is, 95+% of the time.
One of my pet research projects (although I'm not making much progress in terms of actual work or publications) is a system for tracking 'people' in a generalized form. It's basically a concept of 'strands' of information along different axes ('location', 'finance', 'internet', 'health' and a few others) which can be joined by an overarching matching-algorithm infrastructure. Furthermore, each 'strand' has a 'source', and one can join datasets by deciding 'this source I know is reliable, take this as truth', or combine several less trusted ones using voting or Bayesian inference. It's basically a formalization of 'doxing': an overarching framework for working with personally identifying data from sources of varying degrees of trustworthiness.
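A very rough sketch of one such join, using a naive Bayesian update over source trust (every name and number here is hypothetical):

    # Hypothetical 'strand' fusion: combine independent assertions of the
    # same claim, weighted by how much each source is trusted.
    from dataclasses import dataclass

    @dataclass
    class Strand:
        axis: str           # 'location', 'finance', 'internet', 'health', ...
        claim: str
        source: str
        reliability: float  # P(claim is true | this source asserts it)

    def fuse(strands, prior=0.5):
        """Naive Bayesian fusion, assuming sources err independently."""
        odds = prior / (1 - prior)
        for s in strands:
            odds *= s.reliability / (1 - s.reliability)
        return odds / (1 + odds)

    sightings = [
        Strand("location", "at Main St, 14:02", "traffic_cam", 0.9),
        Strand("location", "at Main St, 14:02", "phone_wifi", 0.7),
    ]
    print(round(fuse(sightings), 3))  # -> 0.955: two weak sources compound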
I'm sure many people are working on something like this already, but in private and with the goal of using it against 'us' (for a broad definition of 'us'). The only way (ok, maybe not 'only' way, 'one of' the ways) to defend is to acknowledge that privacy is dead and to develop offensive capacities; much like the only last resort against tyranny is a well-armed populace.
If you want verifiable results, you can get them. Get yourself tested at the same place. Accompany your partner to pick up their results. You'll strain the relationship, but you can have the proof.
Given the time we’re spending on busy scripts, we’ll inevitably be involved in writing bad copy; sometimes it’s impossible to overcome the realities of speed and deadlines. Thousands of minor creative failures happen every day in typical American copy, 94% of them involving human error, and as many as 55% of them go unreported. (And we think this number is low; for more, see here.) In the six years of our project, we’ve been involved in 14 minor grammatical errors across more than 1.8 million words of autonomous and manual writing combined. Not once was the autonomous writer the cause of the error.
(CA regulations require us to submit CA WG form OL316 Report of Copy Error Involving an Autonomous Writer for all errors involving our writers. The following summaries are what we submitted in the “Copy Error Details” section of that form.)
[1]:https://www.ted.com/talks/chris_urmson_how_a_driverless_car_...
Driving rain, thick fog, heavy snow, sleet, the like. Maybe the answer they'll give is, "Don't drive, you dummy," but that's not really an acceptable solution for most people.
I have yet to see the new prototypes, which are allegedly making tours as well. Does anyone know what streets they frequent?
Up San Antonio to Leghorn and then to Rengstorff and over 101 is a pretty standard commute to Google.
A human driver, on the other hand, similarly updates his driving behavior throughout his life.
I've read reports that make me dread being behind an AV at a four-way stop or a blinking red light; one passenger described waiting at a light for several minutes while drivers honked angrily behind them. Maybe AVs will encourage traffic planners to implement more roundabouts, which might be less prone to other drivers taking advantage. Or maybe AVs just need to get better at estimating the speed of oncoming traffic.
The driver in front reported injuries and claimed damages; I'm still paying the consequences on my insurance rate. I still remember her face vividly, and I noticed the moment she started doing mental calculations about how much she could get out of it.
I hope she gets what she's got coming to her. Self-driving cars can't come soon enough, especially for the sensor arrays mounted on each car, which will provide ample proof of what accidents are really like for insurance purposes.
The lecturer is the same guy behind the Stanley car that won the DARPA challenge a few years ago (https://en.wikipedia.org/wiki/DARPA_Grand_Challenge).
*without physical connection like in Australian ones
https://docs.google.com/document/d/1vgjC5VeySpjqzcHj1_4_-5MN...
Side note: Google should whip up a Google-Docs-only URL shortener... so people know it's a document and not a risky link, but it's still short enough to share easily...
Similarly, although we might do better to choose a world with different incentives (like one where 1/3 or more of cars are rigid, rule-entitled automata), many of us have (selfishly but rationally) made room for incompetent lane changers even if it means violating lane rules ourselves.