Yes, Tesla is taking an incremental approach to releasing the feature sets that are required to have a fully autonomous vehicle, but no, the end product goals for Tesla and Google are not different in kind.
What certainly is different is the go-to-market approach the two companies are taking. Google seems to be aiming to release a fully autonomous vehicle at version 1.0, meaning every part of the operation (manufacturing process, sales, customer support) will be at version 1.0 at the same time. In contrast, when Tesla releases version 1.0 of its fully autonomous driving feature set, it will already have mature versions of the other components: its manufacturing process, battery and drivetrain technology, sales and marketing, customer support, etc.
Plus, the Google cars look like something one buys for their four year old niece or nephew.
You have to stop thinking like some guy from Mad Men and more like somebody buying AWS instances.
Imagine that instead of buying a single car that you drive everywhere, you reserve a car for your daily commute. You might get various options, e.g.:
$500/month - Tesla Sports car, 30min journey, unshared occupancy
$300/month - 2 seat Google-Car, 30min journey, unshared
$125/month - 8 seat van, 40min journey, shared, up to 1 vehicle change
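To make the tradeoff concrete, here is a small sketch of how a rider might weigh those options: pick the cheapest one within budget, with an optional per-minute dollar value for time. The names, prices, and scoring rule are illustrative, not part of any real service.

```python
# Hypothetical sketch: choosing a commute option by weighing monthly cost
# against journey time. All names and numbers are illustrative only.

from dataclasses import dataclass

@dataclass
class CommuteOption:
    name: str
    monthly_cost: int  # dollars per month
    minutes: int       # one-way journey time
    shared: bool

OPTIONS = [
    CommuteOption("Tesla sports car", 500, 30, shared=False),
    CommuteOption("2-seat Google car", 300, 30, shared=False),
    CommuteOption("8-seat van", 125, 40, shared=True),
]

def best_option(max_budget, value_of_minute=0.0):
    """Cheapest affordable option, penalizing longer journeys by a
    per-minute dollar value the rider chooses."""
    affordable = [o for o in OPTIONS if o.monthly_cost <= max_budget]
    return min(affordable,
               key=lambda o: o.monthly_cost + o.minutes * value_of_minute)

print(best_option(350).name)        # budget rider -> 8-seat van
print(best_option(350, 20.0).name)  # values time highly -> 2-seat Google car
```

The point of the sketch is that "which car wins" depends entirely on how much the rider values their minutes, which is the go-getter-vs-van choice described above.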
Now, if you are a go-getter gunning for VP, you'll pick the sports car. But others might not see the extra expense as worth it.

As far as the business model goes: as someone who does not own a car but instead gets around with a mix of public transit, bicycle, ZipCar (sometimes renting the VW Golf, sometimes the Jeep, etc.), and Uber, I still think the Google car looks like something one buys for their four-year-old niece or nephew, and I have faith that a car could check all the boxes functionality-wise and still be designed to look nice, at least. Maybe even really nice.
This is interesting -- it's basically a little autonomous bus, for essentially the same price as a regular one, that might pick up/drop off at a more convenient location, minus someone there to supervise. I kind of like having a bus driver.
Tesla has stated many lofty objectives, some of which they've met, some of which they haven't. These engineering problems are hard.
I agree with the article; what Tesla has introduced with "Auto Pilot" is an improvement on current technology, like adaptive cruise control and park assist. It's cool, but it isn't in the same sphere as what Google is doing with their autonomous cars.
That means Google is happy with cars that only work some of the time and only cover some routes, as long as they are fully autonomous and reliable when they do work.
Tesla, for its part, is happy with cars that are not fully autonomous, as long as they work every time and can handle any route.
Of course their products are different, and they won't become alike unless one of them changes its business.
As far as I'm aware, Tesla is selling cars to people who want to drive cars. Real cars, with muscle.
Google isn't selling anything, and again, as far as I'm aware, it's not settled whether they ever will; for instance, there's been a lot of speculation about a rent-a-ride model.
And the prototype they have is clearly aimed at people who don't want to drive cars, i.e. more or less exactly the opposite segment.
Not going to happen. Seatbelts and upright seats meant for crashes are not going away even if the robots do all the driving. And I have yet to see any attempt at a robocar capable of understanding and interacting with today's chaotic and haphazard parking rules.
Heaven help us should all passengers be required to sign user agreements and watch in-car safety vids, but that could be a reality. I've got a BMW that already asks me to "agree to" a contract for the navigation system every time I start it.
Self-driving cars don't have to park where parking rules are haphazard; they can drive to a garage or lot that caters to self-driving cars.
This assertion isn't obvious to me. In my experience incremental updates are often exponential in their impact (especially if enough resources are put into a problem). Moore's Law is an excellent example of this: at any given time, researchers are working on a fixed number of solutions that will generally make a fixed % impact. This is why we can see a doubling in transistor density without a huge increase in the size of the industry.
In the case of reducing accidents, I could see a similar exponential pattern. The first incremental step might take the accident rate from 10% to 1% by eliminating 90% of the possible sources of accidents. In the second step, researchers would again shoot to eliminate 90% of the remaining causes, bringing the rate to 0.1%. This could repeat every couple of years until the accident rate is sufficiently close to zero.
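The arithmetic in that comment is just repeated multiplication: each step that eliminates 90% of remaining causes divides the rate by ten. A two-line check:

```python
# The "eliminate 90% of remaining causes per step" arithmetic from the
# comment above: each step multiplies the accident rate by 0.1.

def accident_rate(initial_rate, steps, fraction_eliminated=0.9):
    rate = initial_rate
    for _ in range(steps):
        rate *= (1 - fraction_eliminated)
    return rate

print(round(accident_rate(0.10, 1), 6))  # 0.01  (10% -> 1%)
print(round(accident_rate(0.10, 2), 6))  # 0.001 (1% -> 0.1%)
```

So after n steps the rate is the initial rate times 10^-n, which is why a "fixed effort per step" still produces an exponential decline, the same shape as the Moore's Law analogy.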
Self-driving is much harder. The first-order problems of driving on an empty road were solved by the DARPA Grand Challenge, ten years ago. The second-order problems involve dealing with other road users. That's hard, and that's what Google is working on, with considerable success. So is the CMU/Cadillac consortium, which has demonstrated its self-driving car to politicians in Washington traffic.[3] Nobody seems to pay much attention to that effort, although they may be closer to a production product than anyone else. (Or not; Uber hired some of the people involved away from CMU.)
Self-driving cars need and have a lot more sensors than semi-auto cars. There's a lot more sensing to the sides and rear, and more forward sensing than just being able to detect the next obstacle ahead. Vision processing is far more elaborate. Google's vision system explicitly recognizes humans and bicycles.
Google's little 25MPH driverless car is a way for them to enter the market. At 25MPH, slamming on the brakes is a good solution to situations the system can't handle. Those things are going to be all over senior communities in a few years. Google already has higher-performance cars on the road; they can be seen all over Mountain View most days.
[1] http://www.bmw.com/com/en/insights/technology/connecteddrive...
[2] http://www.mercedes-benz-intelligent-drive.com/com/en/1_driv...
[3] https://www.washingtonpost.com/local/trafficandcommuting/dri...
I just YouTubed 25mph frontal crashes, and it doesn't look pretty...
I had a conversation about this with friends in Germany a few months back.
In most societies, a mistake that causes suffering to another individual is usually 'blamed' on the person causing the suffering. In many cases where causality is obvious, this assignment of blame is fairly straightforward. Example: Bob fell asleep, which caused him to lose control of his car, which hit the bus, which killed a child. Bob is now culpable for the child's family suffering. Bob remains one of many others who share culpability at this point, assuming others are also falling asleep at the wheel. FWIW, 103M people fell asleep at the wheel last year in the US, so Bob will likely have company.
Now put an autonomous piece of software written by company X into Bob's car. Bob engages the autopilot, falls asleep, the autopilot software experiences an error, the software fails to alert Bob, the software loses control of the car, which hits a bus, which kills a child. Who is culpable for the family's suffering now? The software? Company X?
The only way for company X to both a) allow Bob to fall asleep and b) bear the culpability for a family's suffering is to get the software to the point where it makes such mistakes at a rate that is, at a minimum, several orders of magnitude lower than Bob's.
The logic goes that, once a company's software kills a child, it's going to be pretty hard to keep the public from reacting negatively, even though overall suffering will decrease. The only option for company X is to require Bob to accept that he is "driving" the car and bears the culpability for any suffering the car's software may cause, or, alternately, to be ready to pay a substantial settlement that offsets the suffering.
Fast food is another. (Alcohol, tobacco?)
Side note: I would argue that your patch of ice example is not nearly as good as the deer one. Skidding on a patch of ice and crashing is, IMO, simply driving too fast for conditions.
Perhaps clauses banning class action suits and requiring arbitration will help them:
That seems to be more or less the case with the Google cars: 300,000 miles with no accidents caused by the cars. Of course, so far that's limited to more-or-less good weather.
I think that before deadly accidents become a problem for self-driving cars and their manufacturers, the public will already be convinced of their numerous benefits.
I personally think an Autopilot-like auto-cruise, just on highways and more established local roads, would be good enough. The convenience of having the robot take us from A to B, parking spot to parking spot, may not be worth the enormous price tag it would take to get there.
Relatively simple from a conceptual point of view: a human drives it the first time, and later times use sensors to apply basic rules and the original 'instructions' learned from driving it manually the first time.
Extend this to a network of cars sharing route information, and only a small sampling of the population ever needs to drive any given route manually.
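The "drive it once, share it with the fleet" idea above can be sketched as a shared map of recorded traversals. Everything here is hypothetical (the function names, the waypoint format, the fallback behavior); a real system would cross-check every replayed waypoint against live sensor data.

```python
# Toy sketch of "human drives a route once, the fleet replays it."
# All names and data structures are hypothetical illustrations.

shared_routes = {}  # (origin, destination) -> list of recorded waypoints

def record_manual_drive(origin, destination, waypoints):
    """Store a human-driven trace so any car in the network can reuse it."""
    shared_routes[(origin, destination)] = waypoints

def autonomous_drive(origin, destination):
    route = shared_routes.get((origin, destination))
    if route is None:
        # No one in the network has driven this route manually yet.
        return "request a manual drive first"
    # A real car would follow each waypoint only after sensors confirm
    # it is still safe (obstacles, signals, a tree across the road).
    return f"replaying {len(route)} recorded waypoints"

record_manual_drive("home", "office", ["wp1", "wp2", "wp3"])
print(autonomous_drive("home", "office"))   # replaying 3 recorded waypoints
print(autonomous_drive("home", "airport"))  # request a manual drive first
```

The design point is the one the comment makes: recording is expensive (a human drive) but replay is cheap and shared, so only a small sample of drivers ever needs to cover any given route.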
This isn't to say that trips taking longer will adversely impact our lives. I think what will end up happening is that we will rearrange our lives so that we use these longer driving trips to sleep, work, converse with friends, do homework on the way to class, and so on. The time it takes to get from point A to point B becomes moot once we can be orders of magnitude more productive in our vehicles.
Granted, this will not only reduce accidents, since vehicles can communicate with each other and know the other car's intentions instead of trying to anticipate them, but it will also reduce or eliminate speeding tickets and DUIs. Since speeding tickets and DUIs are a large source of revenue for municipalities, I'd expect this to evolve as well, unfortunately.
The author pretends that both companies had a choice and chose different strategies, but that's clearly not the case. Unless Google was planning on building a traditional car business first (a fairly ridiculous proposition), or partnering and integrating with the supply chain of a major manufacturer (a stretch, even just to introduce fancy cruise control), they were never going to be able to iterate toward a robocar.
Guess what... Apple didn't sell the first smartphone either.
Someone takes a small step into car driving automation, tries to create some buzz around it, then I've got to read about how it's not a big deal. The nuances between autonomous and auto-pilot need to be discussed. We need a taxonomy.
I guess writing these sorts of articles is a million times easier than adding any autonomous features to any vehicle.
Forward progress is extremely important. It comes technically and socially. Let's hope everyone demands a car with "that stuff" they have in a Tesla. We'll get a little arms race that'll pay for further development, lives will be saved (in total), and we'll asymptotically approach the vision.
Really interesting. I did not realize that.
Is continuous, gradual improvement the best way to fully autonomous cars? I don't know. But the author's argument is simply that we aren't there yet.
It's learning. That is an interesting approach. I wonder how far they get by that.
I guess Google's car will also collect data and help Google improve its performance. My impression so far, however, was that it's mostly engineered, and not so much learned (in a machine-learning sense).
I understand there are rules of the road and rights-of-way but a right-of-way for a pedestrian in a crosswalk with the walking signal is not going to stop a bus from running the light and killing the pedestrian.
Not that I'm blaming the pedestrian but it surely doesn't hurt to think defensively, look both ways and judge if that bus is going to be able to stop and act accordingly.
I sort of think they are collecting the super detailed sensor data so that they can play it back into new versions of their software and see if it notices things earlier and makes better choices and such. A mass market version needs to be able to function on environmental data (like a tree across the road), so I don't think they are building a perfect map to have as a crutch.
Which approach has the fastest exponential growth curve? The one with thousands of cars on the road, learning from each other, or the one with a few but more capable cars? We'll see. Just remember to think exponential not linear.
Tesla's Autopilot (mostly) keeps you within the lines and regulates your speed to match the car in front of you.
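Stripped to its essentials, "keep within the lines and match the car in front" is two feedback loops. Here is a minimal proportional-control sketch; the gains, units, and function names are made up for illustration, and a real Autopilot stack is vastly more involved.

```python
# Minimal sketch of lane keeping + adaptive cruise as two proportional
# controllers. Gains and units are illustrative, not Tesla's.

def steering_correction(lateral_offset_m, gain=0.5):
    """Steer back toward the lane center, proportionally to how far
    the car has drifted (positive offset = drifted right)."""
    return -gain * lateral_offset_m

def speed_adjustment(gap_m, desired_gap_m=30.0, gain=0.2):
    """Decelerate when too close to the lead car, accelerate when
    the gap opens up beyond the desired following distance."""
    return gain * (gap_m - desired_gap_m)

print(steering_correction(0.4))  # drifted 0.4 m right -> steer left
print(speed_adjustment(20.0))    # 10 m too close -> slow down
```

Framing it this way shows why this tier of automation is tractable today: both loops react to a single, directly sensed quantity, with no need to model what other road users intend to do.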
Does this really need a full article?
And they want these PR gravy trains running as long as humanly possible, so launching a real product isn't even a goal.