The inputs seem to be road line recognition, optical flow for the road, and solid object recognition, all vision-driven. Object recognition is limited. It doesn't recognize traffic cones as obstacles, either on the road centerline or on the road edge. Nor does it seem to be aware of guard rails or bridge railings just outside the road edge. It probably can't drive around an obstacle; we never see it do that in the video.
This looks like lane following plus smart cruise control plus GPS-based route guidance. That's nice, but it's not good enough that you can go to sleep while it's driving.
I suppose once statistics start to prove that these cars are safer than human-driven ones, we can chalk it up to an irrational fear, but for now it seems crazy to me to put my life in the hands of an AI, when a mistake means that I die or kill someone, rather than play the wrong song on Spotify.
I don't fully understand why more effort isn't put into a hardware solution, where roads are simply marked up for self-driving vehicles, e.g. magnets lining the lanes or something like that. It would of course be a more expensive solution, but it seems like it would make the vehicles themselves a whole lot simpler and safer. Begin with inner cities, where the area is limited and traffic is most complex.
Do you not drive using only your eyes? If it's not the sensors that terrify you, is it the software? Turing's central belief was that the human brain was 'just' a computer.
Regarding doing things like embedding reflectors in roads and other ways to simplify lane holding: completely agree. But we can't forgo the cameras etc. that deal with situations where there are no reflectors.
How would "magnets" be any improvement over visual markings? Is there some sensor that can track magnets at greater distance, or with greater reliability than visual markings?
Besides, how many millions of miles of road would need to be equipped with these magnets around the world? What would that cost? Would it be acceptable that autonomous cars could only drive on roads that have been properly equipped?
It's worth remembering that self-driving AI does not rely solely on lane markings to navigate. It can also see the road edge, follow other vehicles, and cross-reference with a (learned/crowd-sourced) navigation model of the laneway. Additional sensor types, like LIDAR and radar, are often used alongside cameras.
You can see a lot of this at work, in a complex London traffic environment, in this demo:
They probably use GPS+map for a low resolution location and trajectory planning. Vision should only be used for fine positioning and obstacle detection.
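That split can be sketched as a simple variance-weighted fusion: a coarse, noisy GPS/map estimate corrected by a precise vision-based lane offset. This is purely an illustrative sketch; the function name and the sigma values are my assumptions, not anything from Tesla's actual pipeline.

```python
# Hypothetical sketch: coarse GPS/map localization plus fine vision correction.
# All names and noise figures here are illustrative assumptions.

def fuse_position(gps_lat_offset_m, vision_lane_offset_m,
                  gps_sigma_m=3.0, vision_sigma_m=0.1):
    """Combine a coarse GPS/map lateral estimate with a precise
    vision-based lane offset, weighting each by inverse variance."""
    w_gps = 1.0 / gps_sigma_m ** 2
    w_vis = 1.0 / vision_sigma_m ** 2
    return (w_gps * gps_lat_offset_m + w_vis * vision_lane_offset_m) / (w_gps + w_vis)

# GPS thinks we're ~1.5 m left of lane center; the camera says 0.2 m.
# The fused estimate is dominated by the far more precise vision reading.
print(round(fuse_position(1.5, 0.2), 3))
```

With those (assumed) noise levels, the camera effectively owns fine positioning while GPS+map only anchors the rough location, which matches the division of labor described above.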
This is the Los Altos - Los Altos Hills - Palo Alto route to Tesla's HQ on Deer Creek Rd. A very expensive area to live in, and therefore well maintained.
That's a bit of an overkill.
Regardless, there were two points in the video, around the 0:48 and 1:40 marks, where the car stopped AFTER turning. I thought that was really strange behavior. Then, when it passed the pedestrians, it almost came to a complete stop instead of moving slightly to the left and slowing down mildly.
Overall, it's a great case study. I think your criticism is technical snobbery instead of considering what this means for the average person. The future is here, get ready. When techies complain that someone is doing something with 'x tool' and not 'y tool', I think those people are just throwing mud around and not actually contributing to the conversation. Build something better if you're going to nitpick over the technical details. It would serve you better, and all of humanity.
Here's Tesla's "Autopilot" killing someone.[1] This is the Mobileye "does not recognize obstacles protruding into left edge of lane" bug.
Here's another crash from that defect.[2]
Here's Tesla's "Autopilot" hitting a traffic barrier.[3] Again, it's an obstacle at the left edge of a lane.
This is what happens when you take Level 2 lane-keeping and automatic cruise control and hype it into automatic driving. Other carmakers have offered comparable systems, but with much stronger driver-must-have-hands-on-wheel enforcement. Tesla didn't do that, encouraging drivers to relax and let the computers drive.
Waymo and Volvo take the position that when the automatic system is in control, the manufacturer is responsible. Tesla tries to blame the driver.
(Been there, done that. Ran a DARPA Grand Challenge team 2003-2005. Too old now to work on this.)
[1] https://www.youtube.com/watch?v=fc0yYJ8-Dyo [2] https://www.youtube.com/watch?v=qQkx-4pFjus [3] http://video.dailymail.co.uk/video/mol/2017/03/02/5177969943...
Also, what is the car doing at 1:33. It takes a right turn, then just stops in the lane, then proceeds.
This shows why the Tesla approach isn't good enough. The vehicle is on a freeway, a supported environment for the old "Autopilot". Everything is going just fine, until something unexpected that the system can't handle appears, too fast for the driver to take over in time.
Tesla is somewhere between Level 2 and Level 3. Good enough that the driver is tempted to tune out, not good enough that they can.
[1] http://video.dailymail.co.uk/video/mol/2017/03/02/5177969943...
I would love to see some work made towards configuring HOV lanes that many interstates have (toll or otherwise) to have not only clear visible markings but in road indicators to facilitate easier autonomous driving in a more controlled environment.
It would have been so much safer for Tesla to have trademarked (not sure of the proper legal term) Autopilot for later use and called it CoPilot until it's safe on its own.
Yet here we are, driving forwards up hills.
"Please note also that using a self-driving Tesla for car sharing and ride hailing for friends and family is fine, but doing so for revenue purposes will only be permissible on the Tesla Network, details of which will be released next year."
Assuming Tesla eventually "catches" someone using a Tesla for ride sharing profit outside of their network they could remotely disable the car, at which point they will surely be sued and the courts will decide.
And they will very likely decide against Tesla, IMO. The immediate reaction to this statement is bound to make people who work purely in the software realm protest, but to most courts software is still a very nebulous thing. Disabling someone's car (after selling it to them outright) because they did something with it that you didn't like is EASY to understand, is anathema to the entire history of car ownership, and I really don't think it'll fly.
Autopilot is a service.
Autopilot Updates We just released the latest version of Autopilot. You can now experience Enhanced Autopilot features including Traffic-Aware Cruise Control, Autosteer, Auto Lane Change, Parallel + Perpendicular Autopark, and Summon. Automatic Emergency Braking, Forward + Side Collision Warning, and more advanced safety features are also active and standard.
All Tesla vehicles have the hardware needed for full self-driving capability at a safety level substantially greater than that of a human driver. And Tesla vehicles continue to improve with over-the-air software updates, introducing new features and improving existing functionality to make your vehicle safer and more capable over time.
As I understand it, the newer ones use a machine-learning-based approach and different hardware than the old ones. Does this mean Tesla is actively working in parallel on two completely different self-driving software stacks, one built on ML and one around a more traditional heuristic approach?
Scary stuff, really. No consumer wants (or needs) autopilot "features" - at least not from a marketing standpoint. If you ask me, the car either drives itself or you are driving it.
It's a bit ridiculous to expect people to use these things safely. If anyone has to think about when the car is or isn't in control, I'm sorry, but you've already lost in my book. Humans just aren't that good at switching context or remembering what a product does (or what this particular version of a product does).
There are moments when you have to think about what state the car is in. But in my opinion that is by far offset by how much less tiring it is to drive a long way.
It's like saying we shouldn't have released seatbelts because people might not have used them. Or maybe we shouldn't have released vaccines because you still need booster shots.
Safety features, even incremental ones, make the world safer for everyone.
An in-depth report on Tesla's code and system qualification framework would be a very interesting read, I'm sure.
In the Model S configuration page, "enhanced autopilot" and "full self-driving capability" are still optional extras that together cost $8,000, so I'm not sure what they're talking about here.
Apparently, these features can also be unlocked at a future time, for a higher fee of $10,000.
My theory is still that the demo video is actually from Nvidia's SDK and the actual autopilot they deployed is totally different and not actually in the 'self-driving' category at all at this point.
But they are very aggressively rolling out updates and new features for more autonomy and yes they do intend to push for a complete door-to-door self-drive ASAP, ideally before the end of 2017 (at least as a new alpha version they can demo). Otherwise they would not sell it as such. But they do not plan to take another year to get there, based on Musk's tweets and the fact so many already paid extra for a full self-driving ability.
There are two packages for the model S which add up to $9k extra[0]:
Enhanced Autopilot
Full Self-Driving Capability
The fact that those are add-ons, and their descriptions, seem at odds with the statements made on the page.
There are a few new features that my AP1 might not have, like Perpendicular Autopark, but I won't know till I get it back. From what it seems, it has just gotten to the level they were at with the previous generation, which was developed by or in conjunction with MobilEye.
I think they will need a hardware revision for actual full self-driving, perhaps 2 years away.
Since newer sensors are becoming more cost-effective, I could imagine them being incorporated into future hardware revisions.
This is a statement of intent, and production vehicles are a long way from having software that enables this.
Tesla Autopilot is cool, but the software is very limited at present. The original link is essentially what Tesla think they can achieve with the AP2 hardware, but the production software is nowhere near capable of these feats.
He got convicted because being inside the vehicle with the engine running met the definition of "operating" it, and nobody wanted to start down that slippery slope.
The number of objects to detect and avoid will be way too high.
The test shows almost ideal driving conditions. This should be tested on the streets of NYC or in a busy city like Mumbai.
I would _love_ to see some serious "red team" testing of these exact issues with driverless cars: pedestrian issues, weather issues, mixed-use roads, roads with construction crews, roads with simulated accident scenes, roads with physical damage/flooding/large debris.
I'd also be interested to know if the car drives differently between the rear-wheel only and the all-wheel versions and if environmental factors can frustrate these different control modes.
daytime: https://www.youtube.com/watch?v=SSc8AHIEG9o
night time: https://www.youtube.com/watch?v=XHIq0MMViZg
I think one big selling point of cars has always been that they grant the user a great amount of autonomy (unprecedented, in their time, taken for granted nowadays). You can ride your car and go anywhere you like! The cost of that autonomy of course is that some of us will be killed or maimed in road accidents, because you can't give silly little monkeys autonomy behind the controls of big powerful machines without death and carnage ensuing.
Self-driving cars propose to reduce this risk of death and injury by taking away the autonomy we traded it for in the first place. What remains would be just a mindless automatic system carting the user to and fro. Well, in that case- we don't need to wait around for full level-5 autonomy. We already have dumb machines that can do that: trains, trams, all sorts of vehicles-on-rails.
Why do we need self-driving cars, then?
Answer: we don't. And I haven't for a moment believed that any of this is anything to do with road safety. Note that nobody even discusses the other 900 pound gorilla in the room: pollution.
Guess what? Taking cars off roads completely would also reduce air and noise pollution tremendously.
A car that can drive itself does not remove the autonomy having a car gives you. The key part of the autonomy is going where you want, when you want. Not that you get to push the pedals and turn the wheel.
> We already have dumb machines that can do that: trains, trams, all sorts of vehicles-on-rails.
Which travel only from and to certain places, at certain times. That's why what would be a three-hour drive is likely to take me closer to five hours tomorrow, for example. For another example: the train station near my friend's house stops running trains at 7pm; the roads are open all night.
> We could replace them with, I don't know, some kind of overground system of personal pods on rails, or something like that.
Building an entirely new, country wide, overhead rail system that goes to or extremely close to every house? That sounds incredibly expensive.
> Why do we need self-driving cars, then?
They're an incremental improvement which requires no new major infrastructure.
This seems like such an obvious step... i.e. if you're at 20% battery capacity and your car reckons it would need 40% to get to your destination, then it would send out a request to local tanker vehicles and the nearest one would come to you, connect and refuel without you having to stop or break speed.
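The trigger logic described above can be sketched in a few lines. Everything here (the function name, the safety margin) is hypothetical; it just shows the "remaining charge won't cover the trip, so request a tanker" check.

```python
# Illustrative sketch of the mobile-refueling trigger described above.
# The function name and 5% safety margin are hypothetical assumptions.

def needs_mobile_charge(battery_pct, required_pct, margin_pct=5):
    """Return True when remaining charge (plus a safety margin)
    won't cover the rest of the trip."""
    return battery_pct < required_pct + margin_pct

# 20% left, trip needs 40%: request a tanker.
print(needs_mobile_charge(20, 40))   # True
# 60% left, trip needs 40%: no request needed.
print(needs_mobile_charge(60, 40))   # False
```

In a real system this check would feed into a dispatch request to the nearest tanker vehicle, as the comment above describes.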
People would probably pay a premium for that service.
That claim is strong and false. What about the Roadster, and the old Model S with the old AP1 hardware?
I wonder what the current status is, both in terms of software validation, and regulatory approval.
To give some comparison numbers for how normal production cars are tested at bigger automotive companies: there are test fleets of sometimes over 100 cars for a new car model, some of which are tested around the clock (>600 miles per day). All in all, that often adds up to tens of millions of kilometers.
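A quick back-of-envelope check of those fleet numbers (100 cars at >600 miles/day are from the comment above; a full year of round-the-clock testing is my assumption):

```python
# Back-of-envelope check of the fleet-testing figures above.
# CARS and MILES_PER_DAY come from the comment; DAYS is an assumption.
CARS = 100
MILES_PER_DAY = 600
DAYS = 365                # assume one year of round-the-clock testing
KM_PER_MILE = 1.609

total_km = CARS * MILES_PER_DAY * DAYS * KM_PER_MILE
print(f"{total_km / 1e6:.1f} million km")  # roughly 35 million km
```

So a single year of such a fleet already lands in the two-digit-millions-of-kilometers range, consistent with the figure quoted.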
"Radar-only braking on HW1 is getting better with every release. We're hoping to do the demo where it brakes for a UFO in the fog soon."
So maybe that has something to do with it.