Tesla, the company that got fired as a customer by Mobileye for abusing its L2 tech, is your yardstick?
Anyways, Waymo's DC launch is next year, I wonder what the new goalpost will be.
LiDAR and radar assistance feel crucial
https://fortune.com/2025/08/15/waymo-srikanth-thirumalai-int...
The bottleneck for self-driving technology isn't sensors - it's AI. Building a car that collects enough sensory data to enable self-driving is easy. Building a car AI that actually drives well in a diverse range of conditions is hard.
I think there is a good chance that what we currently call "AI" is fundamentally not technologically capable of human levels of driving in diverse conditions. It can support and it can take responsibility in certain controlled (or very well known) environments, but we'll need fundamentally new technology to make the jump.
Between brightly sunlit snow and a starlit night, we can cover more than 45 stops with the same pair of eyeballs; the very best cinematographic cameras reach something like 16.
In a way it's not a fair comparison, since we're taking into account retinal adaptation, eyelids/eyelashes, pupil constriction. But that's the point - human vision does not use cameras.
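Taking the figures in the comment above at face value, the gap is easy to quantify: each photographic stop doubles the amount of light, so N stops of dynamic range corresponds to a contrast ratio of 2^N. A minimal sketch (the 45- and 16-stop numbers are the comment's, not measured values):

```python
def stops_to_contrast(stops: float) -> float:
    """Convert dynamic range in stops to a linear contrast ratio.

    One stop = a doubling of light, so N stops spans a 2**N : 1 ratio
    between the brightest and darkest distinguishable levels.
    """
    return 2.0 ** stops

# Figures from the comment: ~45 stops for adapted human vision,
# ~16 stops for a top-end cinema camera.
human_ratio = stops_to_contrast(45)   # roughly 3.5e13 : 1
camera_ratio = stops_to_contrast(16)  # 65,536 : 1

# The human range exceeds the camera's by 29 stops, i.e. a factor of 2**29.
print(f"human exceeds camera by a factor of {human_ratio / camera_ratio:.3g}")
```

The point stands either way: the difference is not incremental but many orders of magnitude, because stops compound exponentially.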
Most are minor, but even so - beating that shouldn't be a high bar.
There is no good reason not to use LiDAR with other sensing technologies, because cameras-only just makes the job harder.
Not true. Humans also interpret the environment in 3D space. See a Tesla fail against a Wile E. Coyote-inspired mural that humans immediately recognize as a painted wall:
Even at that point, why would you possibly use only cameras though, when you can get far better data by using multiple complementary systems? Humans still crash plenty often, in large part because of how limited our "camera" system can be.
Even if what you're saying is true, which it's not, cameras are so inferior to eyes it's not even funny
Our cameras (also called eyes) have way better dynamic range, focus speed, resolution and movement detection capabilities, backed by reduced-bandwidth peripheral vision that is also capable of detecting movement.
No camera, including professional medium-format still cameras, is that capable. I think one of the car manufacturers made a combined tele/wide lens system for a single camera which can see both at the same time, but that's it.
Dynamic range, focus speed, resolution, FoV and motion detection are still lacking.
...and that's when we imagine that we only use our eyes.
That’s the mistake Elon Musk made and the same one you’re making here.
Not to mention that humans driving with cameras only is absolutely pathetic. The number of completely avoidable accidents that occur doesn’t exactly inspire confidence that all my car needs to be safe and get me to my destination is a couple of cameras.
Crazy that billions of humans drive around every day with two cameras. And they have various defects too (blind spots, foveated vision, myopia, astigmatism, glass reflection, tiredness, distraction).
But there is a lot of expenditure relative to each mile being driven.
> The goalpost will be when you can buy one and drive it anywhere.
This won't happen any time soon, so I and millions of other people will continue to derive value from them while you wait for that.
So if we're asking how many times it would have crashed without a human: zero.
They generally intervene when the vehicles get stuck and that happens pretty rarely, typically because humans are doing something odd like blocking the way.