Meanwhile Google has driven more than 200K miles on real highways and streets with their autonomous car.
I'm not sure how much autonomy Nissan is shooting for, and I think they're rather deliberately not defining it. You can draw a fairly smooth continuum between "antilock brakes" and "full autonomous car you'd put your unaccompanied children into in the middle of a Midwest winter", and I imagine they're not leaping straight towards the latter. But still, to get there, you need to be able to test things like "what does the car do on a street covered in a patchwork of ice when a dog jumps out in front of it?" without waiting for exactly those conditions to emerge.
If you want a fun thought exercise, imagine how an antilock brake system handles the left wheels being on ice and the right wheels being on dry pavement. And remember... torque. You can't "just" brake with the right wheels....
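To make the thought exercise concrete, here's a toy back-of-envelope sketch (all the numbers are made up for illustration, not from any real ABS spec) of why the split-friction case is nasty: braking each side as hard as its surface allows maximizes deceleration but produces a yaw torque that tries to spin the car, while the safe "select-low" strategy throws away most of your braking force.

```python
# Toy split-mu braking model. All constants are illustrative assumptions.
MU_ICE, MU_DRY = 0.1, 0.8   # friction coefficients: left wheels on ice, right on dry pavement
WEIGHT_PER_SIDE = 7500.0    # N, half the weight of a ~1.5 t car
TRACK_WIDTH = 1.5           # m, lateral distance between left and right wheels

def brake_forces(mu_left, mu_right):
    """Max braking force each side can transmit before its wheels lock."""
    return mu_left * WEIGHT_PER_SIDE, mu_right * WEIGHT_PER_SIDE

left, right = brake_forces(MU_ICE, MU_DRY)

# Braking asymmetrically creates a torque about the car's vertical axis:
yaw_torque = (right - left) * TRACK_WIDTH / 2
print(f"left: {left:.0f} N, right: {right:.0f} N, yaw torque: {yaw_torque:.0f} N*m")

# "Select-low": limit both sides to what the icy side can hold.
# No yaw torque, but far less total braking force - that's the trade-off.
select_low_total = 2 * left
print(f"select-low total: {select_low_total:.0f} N vs asymmetric {left + right:.0f} N")
```

With these numbers the asymmetric strategy gives over four times the total braking force, at the cost of a few thousand newton-meters of torque trying to rotate the car toward the dry side.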
I have got to say, I am pretty amazed they have managed to swing that. Permission to drive autonomous robot cars around the everyday streets of litigation- and safety-obsessed California? An accomplishment equal in size to any single technical challenge in building the car, I'd say!
The public thinks everyone else is going to ride the train and free the highway up for them... what a waste.
Tell me the odds of single-passenger road vehicles routinely travelling at speeds like that within the foreseeable future, because I'd say they're zero.
And furthermore, these slow-motion freeway "trains" will rely on networked, not merely autonomous, car technology, which isn't included in the 2020 timeframe.
Of course, I suppose you could just hope that planning for the project starts in this downturn, even if construction only starts in the next one....
Is it really that hard to be convinced that traveling 220mph on a train between cities 200-500 miles apart is a good thing?
California isn't getting the ideal system now but if it builds the first one, lots of people will use it, and the next generation will have an easier time building the 300-350 mph maglev.
Aside from traffic caused by accidents or unavoidable roadwork, the vast majority of traffic - caused by raw inefficiency and poor reaction to the aforementioned two - can be regulated away.
By the time that happens, the "poor" won't have to "replace" their car, because they'll already have dumped the expensive rust-buckets for time-shared rent-a-cars. It's the rich that will be the last holdouts, not the poor.
I hope self-driving cars will have hyper-efficient safety mechanisms, especially to protect pedestrians and bikers, not just the car's passengers. That would be huge progress for humanity.
They are basically promising to sell this as soon as possible. They have 1-3 years for the go or no go decision.
I hate to be that guy, but it took ~3 years to get the Leaf to production. And the Leaf uses well-known technology that had been shipped by other vendors a decade earlier. I'd be happy to be proven wrong, but this feels like a very long reach.
I am very dubious about other people's driving skills; I tend to assume everyone else on the road is out to kill me and will do the dumbest thing possible at any given moment.
But even so, I am also an experienced software developer, and I know that software is only as good as the author(s). Bugs happen. It's inevitable. And I don't want to die or be injured because of software errors. I'd rather it be human error.
Now you might say to this, "Planes fly on auto pilot constantly. Every time you fly you're basically in the hands of software." And this would be true. But my response to that is:
1) The air is much less densely packed than the roads and highways.
2) In the air, even though you are traveling much, much faster than in a car, the pilots have more time to react to a problem than a driver in a car.
3) The pilots are highly trained, experienced and hopefully alert. Drivers in automated cars will be complacent and texting on their phones.
I think this is a terrible, terrible idea and misuse of technology, despite the fact that humans are shitty drivers. I think it's only going to exacerbate the problem, not improve it.
The whole thing gets interesting when you look at the judicial side of it: Who is responsible in the case of an accident? The car manufacturer (if it was a software failure)? Then even one accident triggered by it is too much.
But there are two problems here:
1) We won't be replacing all the cars overnight. The problem comes with the interaction between terrible human drivers doing wildly unpredictable, insane things and the inflexible, unadaptable automated cars.
And
2) I'm not comfortable with dying or being injured due to a software error regardless of whether its likelihood is higher or lower. The roads are far more dangerous than the air, and this is why I'm uncomfortable with the whole concept of automated cars on the road and I'm not uncomfortable with autopilot in planes.
Eventually - not this year, maybe not this decade, but eventually - technology will be better than human drivers. How many lives, then, would you be willing to sacrifice, on the grounds that human error is somehow better than (less frequent) machine error? Six? Sixty? Six hundred? Six thousand? Since 1985, nine hundred thousand people have died in car crashes in the US alone. That's more than the entire population of San Francisco. Think about that. Nine hundred thousand people. Nine hundred thousand corpses. If technology can prevent that, I'd say we have a moral duty to not only develop it, but to do so as fast as possible, before too many more people die early, violent deaths.
If you're into robocars you can do a lot worse than checking out his robocar page: http://www.templetons.com/brad/robocars/
Yes if we had an automated car ready to replace everyone's normal car, we'd start saving tons of lives tomorrow and I'd be a monster for opposing it. But that's not reality, that's not what we're talking about.
And what I'm saying is, I'm uncomfortable and see problems ahead. And I seem to be totally alone in this, which is amazing to me.
Computer:
- Can see 360 degrees, 100% of the time
- Can read the movement of obstacles in real-time, with a very accurate estimate of each surrounding vehicle's speed
- Can run 100s of simulations per second to estimate the odds of an impact
- Can react almost instantly with correct inputs to the brake and steering based upon a real-time read of surface conditions

Human:
- Can react within ~0.7 seconds with correct inputs to the brake and steering based upon a real-time read of surface conditions, in ideal conditions
- Can see ~180 degrees, with significant focal-point limitations
- Can guess other vehicles' reactions based upon experience
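That reaction-time gap alone is worth a quick back-of-envelope calculation. Assuming highway speed, a hard-braking deceleration of 7 m/s^2, and the 0.7 s human reaction time above versus a (purely illustrative) 0.05 s for a computer:

```python
# Stopping distance = distance covered during reaction + braking distance v^2/(2a).
# All numbers are illustrative assumptions, not measured values.
V = 30.0      # m/s, roughly 67 mph
DECEL = 7.0   # m/s^2, hard braking on dry pavement (assumed)

def stopping_distance(v, reaction_time, decel=DECEL):
    return v * reaction_time + v**2 / (2 * decel)

human = stopping_distance(V, reaction_time=0.7)     # ~85 m
computer = stopping_distance(V, reaction_time=0.05) # ~66 m
print(f"human: {human:.1f} m, computer: {computer:.1f} m")
print(f"extra margin from faster reaction: {human - computer:.1f} m")
```

Under these assumptions the faster reaction buys almost 20 meters of stopping distance, several car lengths, before the brakes themselves even matter.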
Which do you think will have a better reliability rating in many typical driving scenarios?
I'm not saying which way I lean on this. :)
Will bugs happen eventually? Yes, then they'll be patched. New bugs will be found, but every accident that happens will make the cars safer. Each error improves how safe they are, and since they're currently safer than humans based on accident statistics, I'm more than happy to see something that's already safer get even more safe over time.
Some kind of Android for autonomous cars would be quite nice I guess.
But what about Latin or Asian messes of countries? Say Rome, Buenos Aires, Lima, Hanoi or Bangkok? Lots of motorcycle and bicycle and human traffic (and in the case of my native Montevideo, horse-drawn carriages!).
Driving in some of those countries is more psychology and playing chicken than anything else (and yes, accident rates are appalling).
I said people are terrible drivers. I think likely a fleet of all automated cars will be safer than a fleet of human drivers.
The problem comes in the interaction between automated drivers and human drivers. Among other things.
> I think this is a terrible, terrible idea and misuse of technology
What a bizarre attitude you have there. What possible rationalisation could you have for this extreme, arbitrary prejudice?
An autonomous vehicle kills a dozen people in a freak accident. Newspapers scream bloody murder. A panel of independent investigators determines a chain of causes. The immediate bug gets fixed, and the inadequacies in the testing methodology are remedied. We all get to live a tiny bit safer ever after.
1) Autonomous cars shouldn't really be impacted by density. You can only make a highway so dense...there are one to four lanes, and it's like minesweeper. You can be surrounded by six cars at a time, maximum (two on either side, one ahead and one behind). Density might increase the likelihood of an accident just because there are more objects in motion, but an individual autonomous car would theoretically account for this.
2) The entire point of an autonomous car is for drivers to not need to react to anything at all. So...I'm not sure what this has to do with risk. Maybe the drivers of non-autonomous cars wouldn't be safe...but that's the same situation as today. Autonomous car drivers could read a magazine in the middle of an accident. Who cares?
3) Again, my above comment, this doesn't seem relevant.
I think you're being a bit hysterical, to be honest. It's a problem of math. You can model every possible interaction between cars in a space with a small, finite number of dimensions: position, speed, and heading on what is essentially a 2-dimensional road surface. It might be complex, but we can program every single possible interaction and collision - and that's the first step to programming collision response.
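As a minimal sketch of what "modeling the interaction" means (function names and the scenario are made up for illustration): treat each car as a point moving at constant velocity in the road plane, and the closest future approach between any pair has a closed-form solution.

```python
import math

def time_of_closest_approach(p1, v1, p2, v2):
    """Time t >= 0 at which two constant-velocity points are closest."""
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]
    dvx, dvy = v2[0] - v1[0], v2[1] - v1[1]
    dv2 = dvx * dvx + dvy * dvy
    if dv2 == 0:              # identical velocities: the gap never changes
        return 0.0
    return max(-(dx * dvx + dy * dvy) / dv2, 0.0)

def min_separation(p1, v1, p2, v2):
    """Smallest future distance between the two cars, assuming no maneuvers."""
    t = time_of_closest_approach(p1, v1, p2, v2)
    dx = (p2[0] + v2[0] * t) - (p1[0] + v1[0] * t)
    dy = (p2[1] + v2[1] * t) - (p1[1] + v1[1] * t)
    return math.hypot(dx, dy)

# Car A doing 30 m/s closes on car B doing 20 m/s in the same lane, 50 m ahead.
# The model predicts contact (separation reaches 0) in 5 s unless someone brakes.
gap = min_separation((0, 0), (30, 0), (50, 0), (20, 0))
print(f"predicted minimum separation: {gap:.1f} m")
```

A real planner would add vehicle dimensions, uncertainty, and non-constant trajectories, but the point stands: pairwise interaction geometry is cheap, exact math, exactly the kind of thing computers are better at than people.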
Basically, the takeaway is, I don't think there's a model of car interaction that isn't improved by having an autonomous car, even if only one car out of the two is autonomous. And I'm pretty sure this scales.