Combine this with the fact that removing a standard driver-assistance sensor deliberately cripples the vehicle, and if I were dumb enough to have an open order, this would be a really good reason to cancel it.
From what I can find, Tesla vehicles are significantly safer in terms of accidents per mile and deaths per mile, and they have the highest safety rating of any vehicle.
Of course, I could be misled, and you may have evidence suggesting that Teslas are not only more dangerous, but that the company releases vehicles so dangerous and irresponsible that it should not be trusted in any way.
I'd like to see how you came to that conclusion.
There is evidence though. Look at the beginning of this Tesla marketing video (in particular the "person in the driver's seat is only there for legal reasons" quote):
Safer than what, though? The average driving cohort? That's not a great benchmark, since it includes old vehicles and all kinds of drivers, while the Tesla entry price acts as a filter.
> and they have the highest safety rating of any vehicle.
Vehicles get a flat set of points for "soft" safety features like active braking and avoidance. The number of people who have burned to death when the battery catches fire implies that real-world performance doesn't always correlate with the score.
The advertising works and some people truly believe in this supposed "full self driving". They start saying things like "Coming home from LA after work thank god for self drive" and "What would I do without my full self driving tesla after a long day at work".
And the result is: https://insideevs.com/news/507038/nhtsa-investigate-tesla-cr...
For one, this is just a dumb decision: it means both a crippled safety package today and one less sensor to use tomorrow. Considering Tesla hasn't actually developed self-driving capability, they can't say for certain whether the hardware on the cars today is enough. (Any time they promise you've purchased a car that has everything it needs to do this, Tesla is lying, because they can't know the requirements of a system they haven't yet produced.) And removing sensors only makes it less likely that it is.
I don't understand this: if it's only a few weeks, why not just wait? Tesla has been years late on this in the past, so it's not very reassuring.
Or perhaps they just don't want to give customers hardware they won't need but that will still add weight and potential maintenance. Or, cynically, why pay to put radar units into cars that won't need them?
Tesla has not done this. Instead, Tesla suddenly stops shipping radar units in their cars and then tells the customer that it's an upgrade.
For those outside the US (disabled the regional redirect)
My personal impression is that, given how cost-prohibitive putting LIDAR in every Tesla would be for their business model, they're relying on safety-assist features with the hope that ~~eventually~~ data can solve the problem? But are Waymo/Cruise/et al. completely convinced that that's impossible in the medium-term future? Outside the consumer market, is what Tesla is marketing considered impossible?
Waymo so far seems to be inclined towards maintaining a fleet of autonomous vehicles for rides, deliveries and trucking. I suspect it's easier to justify higher hardware costs per vehicle in that case.
The probability that they can safely pull this off is very low. Right now, progress in vision-based ML is incredible. If you look at AK's twitter feed [1], you'll be amazed almost every day. But I work in planning, and for planning, detecting / segmenting objects, even recreating 3d models, is not enough: you need a full contextual model of the world. Right now, I have 0 clue how this could be done, and I don't think anyone else does either.

Here's an example of what I mean. You see an image and can say "there's a person there" - that's detection. You see an image and can sort of imagine the person's shape from another viewpoint - that's 3d reconstruction / completion. You see an image and can imagine the person moving, obeying social and physical laws, waiting to cross a street, looking at a car and deciding to wait. You know trains will move on rails, trees will stay still unless cut, etc. You're mixing a physics engine, statistical inference, and heuristics in your head, and this gives you a full, contextual understanding of the world.
I think one approach to get there is to focus on prediction ability: create a model that learns to "see", then "predict", and train both at once (see for example World Models [2]). This prediction task alone was also enough for GPT-3 to seemingly learn really good models of the world. The hope is that if you can predict the future really, really well, somewhere inside you there must be a really good perception / simulation / contextual-understanding model. In public research we are incredibly far from being there: prediction ([5], [7]) and prediction-based planning works (e.g. [3], [4], [6]) are still at the proof-of-concept stage - we're nowhere near predicting high-res images or 3d models of the world with any accuracy. Still, I do see some chance of a GPT-scale endeavor by a well-funded group like Tesla leading to a big leap forward. But I'm not holding my breath.

To make things bleaker, some think prediction is not enough and that you actually need embodiment (skin in the game) to truly develop good contextual models (I used to be in this camp; GPT made me reconsider). This is even further away: adding hardware, safe exploration, and co. is just not something we're in any way close to. Another common thought is that there's a limit to current NN architectures, and that a drastic step is needed to get to the next level, though no one knows exactly what (moving away from feed-forward architectures and backprop, towards something like spike-timing-dependent plasticity? higher-level meta coordinators that decide when to connect trained models to each other's inputs and outputs, when to start and stop modifying their weights, etc., continually?)
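The "learn to see, then predict, train both at once" idea can be sketched in a toy numpy example. To be clear, this is not any of the cited systems: real world models use deep networks on images, whereas here everything is linear, the "world" is just a point rotating on a circle, and all the names are made up. The point it illustrates is that an encoder ("see"), a latent-dynamics predictor ("predict"), and a decoder are trained jointly purely on next-frame prediction error.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "world": a point rotating on the unit circle; each frame is its 2D position.
# The true dynamics are linear (a rotation), so a linear model can learn them.
theta = 0.2
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
xs = np.stack([np.linalg.matrix_power(R, t) @ np.array([1.0, 0.0])
               for t in range(60)])               # (60, 2) observation sequence

dim_obs, dim_z = 2, 4
W_e = rng.normal(scale=0.3, size=(dim_z, dim_obs))  # "see": observation -> latent
W_p = rng.normal(scale=0.3, size=(dim_z, dim_z))    # "predict": latent -> next latent
W_d = rng.normal(scale=0.3, size=(dim_obs, dim_z))  # decode latent -> observation space

x, y = xs[:-1].T, xs[1:].T                       # current frames and their successors

def pred_loss():
    pred = W_d @ (W_p @ (W_e @ x))               # encode, roll forward, decode
    return float(np.mean((pred - y) ** 2))

init_loss = pred_loss()
lr = 0.1
for _ in range(3000):
    z = W_e @ x
    zp = W_p @ z
    err = W_d @ zp - y                           # prediction error on the next frame
    n = err.size
    # Gradients of the mean squared error, backpropagated by hand through all
    # three linear maps, so "see" and "predict" are trained jointly.
    gW_d = 2 / n * err @ zp.T
    gW_p = 2 / n * (W_d.T @ err) @ z.T
    gW_e = 2 / n * (W_p.T @ (W_d.T @ err)) @ x.T
    W_d -= lr * gW_d
    W_p -= lr * gW_p
    W_e -= lr * gW_e

final_loss = pred_loss()
```

Note that the encoder never gets a reconstruction target of its own - the only gradient it receives flows through the prediction loss, which is the sense in which "seeing" is learned in service of "predicting".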
You could argue that it might be possible to skip this entirely, and just focus on training an algorithm to be good at driving from start to finish, but there I'd say that we are even further from having the approaches to reach that goal. I'll leave that topic for someone else.
And if you say you don't need this, then you'd better have some magical over-engineered planning method that can perfectly account for all possible errors / missing info in your perception pipeline. As far as I'm aware, Tesla has no edge there. I'd expect that's the approach other big manufacturers were throwing millions at for all these years, before ML suddenly became more than just edge detection / segmentation.
[1]: https://twitter.com/ak92501?ref_src=twsrc%5Egoogle%7Ctwcamp%...
[2]: https://worldmodels.github.io/
[3]: https://www.youtube.com/watch?v=w6TLbv_54GY
[4]: https://arxiv.org/pdf/2012.04406.pdf
[5]: https://arxiv.org/pdf/1808.06601.pdf
[6]: https://arxiv.org/pdf/1812.00568.pdf
[7]: https://www.youtube.com/watch?v=3UZzu4UQLcI
Note: I do think they could pull off FSD on motorways in a realistic amount of time, though, because there's a lot less variation there. You could even add a little external control, like road-side accident-detection cameras, full walls to keep animals / objects from getting in, etc.
Arbe claims to have a much higher-resolution radar than anybody else in automotive. That's a significant step forward: about 10x lower resolution than LIDAR, but better than most radar systems. Good enough to see humans, they claim. You can't actually order the thing, which would be a useful component for many robotic systems, and there don't seem to be third-party reports on it.
Their web site is all about "partnerships" and "announcements". They're currently about to go public through a reverse merger with an SPAC from Texas.[2] They've never publicly stated that they had a deal with Tesla. They have a deal with AutoX, which builds self-driving taxis in China, and some kind of arrangement with NVidia. So some real companies think they're real.
[1] https://electrek.co/2020/10/22/tesla-4d-radar-twice-range-se...
https://www.theverge.com/2021/5/24/22451404/tesla-luminar-li...
I only 1/2 follow Tesla stuff, but why are they still saying "Full-Self Driving" here? Aren't they asking for trouble?
I'm referring to this:
https://www.theverge.com/2021/5/7/22424592/tesla-elon-musk-a...
Anyway, he just shrugged his shoulders and said, "Elon Musk."
I really hope we move back away from having a set of people who aren't subject to the same rules as everyone else. I realize that life has always been like that to some extent, but it seems worse these days than in the (recent) past.
Tesla's marketing is... misleading. "Full-Self Driving" sounds like the car is fully capable of driving itself when it does nothing of the sort. I can see why some drivers assume it's more capable than it is. "Autopilot" was bad enough.
What is Tesla going to call their eventual level 4 autonomy feature? "Actually Full Self Driving"?
It's like if Apple were selling a smartphone with 80 Megapixel Photo Capability, and the fine print reads:
"Devices equipped with 80MP Photo Capability do not have an 80MP sensor. The capability refers to the hardware's ability to potentially upscale images to this resolution. We don't yet ship software that enables the 80MP resolution."
(I was stupid enough to pay for this bullshit FSD feature on a Model 3. Nice car otherwise, but with Musk it's always caveat emptor.)
But they advertise it as if you can sleep in the driver's seat (I think they literally have said that in the past).
wait, or is it the other way around?
>Basic Autopilot ...Included with every new Tesla, Autopilot enables your car to steer, accelerate and brake automatically for other vehicles and pedestrians within its lane. There to assist with the most burdensome parts of driving, Autopilot works alongside features like emergency braking, collision warning and blind-spot monitoring.
> Full Self-Driving Capability $10,000...The currently enabled features require active driver supervision and do not make the vehicle autonomous. The activation and use of these features are dependent on achieving reliability far in excess of human drivers as demonstrated by billions of miles of experience, as well as regulatory approval, which may take longer in some jurisdictions. As these self-driving features evolve, your car will be continuously upgraded through over-the-air software updates.
Technically that would be Actually Full Self Driving Sometimes.
"these will be the first Tesla vehicles to rely on camera vision and neural net processing to deliver Autopilot, Full-Self Driving and certain active safety features."
Future tenses all the way. Someone might make them clarify this, but the clarification won't be as high profile as this release.
That’s how almost all regulations for vehicle standard enhancements are implemented.