FSD is like ChatGPT: it works in many cases and it makes some mistakes, but it is certainly not “useless”. It won’t replace full-time humans yet (the same way that ChatGPT does not replace a developer), but it can still work in some scenarios.
To the investor, ChatGPT is sold as “AGI is just around the corner”.
Meaning that it's really not reliable enough to take your hands off the wheel.
Waymo shows that it is possible, with today's technology, to do much much better.
What they do claim is that with human supervision, it lowers the accident rate to one per 5.5 million miles, which is a lot better than the overall accident rate for all cars on the road. And unlike Waymo, it works everywhere. That's worthwhile even if it never improves from here.
Fwiw you can take your hands off the wheel now, you just have to watch the road. They got rid of the "steering wheel nag" with the latest version.
Tesla only counts pyrotechnic (airbag) deployments in its own numbers, which NHTSA states account for only ~18% of all crashes, a figure derived from publicly available datasets [1]. Tesla chooses not to account for even this literal ~5x discrepancy, derivable from public data, and makes no attempt to account for anything more complex or subtle. No competent member of the field would make errors that basic except to distort the conclusions.
The use of falsified statistics to aggressively push a product at the risk of their own customers makes it clear that their numbers should not merely be ignored, but assumed to be malicious.
[1] https://static.nhtsa.gov/odi/inv/2022/INCR-EA22002-14496.pdf
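To put rough numbers on that discrepancy (a back-of-the-envelope sketch: the ~18% airbag-deployment share is NHTSA's figure, the 1-per-5.5M-miles rate is Tesla's claim from upthread, and the assumption that uncounted crashes scale uniformly is mine):

```python
# Back-of-the-envelope adjustment of Tesla's claimed accident rate.
# Assumption (mine): crashes Tesla doesn't count occur at the same rate
# as those it does, so scaling by the reporting fraction approximates
# the all-crash rate.

claimed_miles_per_accident = 5_500_000  # Tesla's claim: 1 accident per 5.5M miles
reporting_fraction = 0.18               # NHTSA: airbag deployments ~18% of crashes

# If only 18% of crashes are counted, the implied all-crash rate is higher:
adjusted_miles_per_accident = claimed_miles_per_accident * reporting_fraction
undercount_factor = 1 / reporting_fraction

print(round(adjusted_miles_per_accident))  # ~990,000 miles per accident
print(round(undercount_factor, 1))         # ~5.6x undercount
```

Under that (crude) assumption, "1 accident per 5.5M miles" becomes roughly 1 per 1M miles, which is the ~5x gap being referred to.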
"By 2019 it will be financially irresponsible not to own a Tesla, as you will be able to earn $30K a year by utilizing it as a robotaxi as you sleep."
This was always horseshit, and still is:
If each Tesla could earn $30K profit a year just ferrying people around (and we'd assume more, in this scenario, because it could be 24/7), why the hell is Tesla selling them to us versus printing money for themselves?
To be more specific: a product either requires a certification to use, like a driving license, or it must be foolproof.
where there's enough bandwidth
> you just have to watch the road
... and then react in a split second, or what? It's simpler to say your goodbyes before the trip.
> They just think they'll get there.
Of course. I think so too. Eventually they'll hire the receptionist from Waymo and he/she will tell them to build a fucking world model that has some object permanence.
Yes, it was a stupid system and you are right to criticize it. And as a Tesla driver in a country that still only has that same Autopilot system and not FSD, I'm very aware of it.
But the current FSD is rebuilt from the ground up to be end-to-end neural, and they now have the occupancy network (which is damn impressive) giving a 3D map of occupied space, which should stop that problem from occurring.
Soooo just like ChatGPT then? As the parent comment said.
https://www.huffingtonpost.co.uk/entry/tesla-driverless-cars...
Oct 2014: "Five or six years from now we will be able to achieve true autonomous driving where you could literally get in the car, go to sleep and wake up at your destination."
Users generally have time to decide if the output ChatGPT provides is accurate and worth actioning.
We found out about the lawyers citing ChatGPT because they were called out by a judge. We find out about Google Maps errors when someone drives off a broken bridge.
https://edition.cnn.com/2023/09/21/us/father-death-google-gp...
For other LLMs we see mistakes bold enough that everyone can recognise them — the headlines about Google's LLM suggesting eating rocks and putting glue on your pizza (at least it said "non-toxic glue").
All it takes is some subtle mistake. The strength and the weakness of the best LLMs is that their domain knowledge sits partway between a normal person's and a domain expert's: good enough to receive trust, not good enough to deserve it.
No such luxury is granted to the driver using FSD who has just collided with another vehicle.