If you can train a policy that drives well on cameras, you can get self-driving. If you can't, you're fucked, and no amount of extra sensors will save you.
Self-driving isn't a sensor problem. It always was, is, and always will be an AI problem.
No amount of LIDAR engineering will ever get you a LIDAR that outputs ground truth steering commands. The best you'll ever get is noisy depth estimate speckles that you'll have to massage with, guess what, AI, to get them to do anything of use.
Sensor suite choice is a side issue. Camera-only 360° coverage? Good enough to move on. The rest of the problem lies with AI.
It always goes back to my long-standing belief that we need dedicated lanes with roadside RFID tags to really make this self-driving thing work well enough.
Making a car that drives well on arbitrary roads is freakishly hard. Having to adapt every single road in the world before even a single self-driving car can use them? That's a task that makes the previous one look easy.
A learned sensor fusion policy that can compensate for partial sensor degradation, detect severe dropout, and handle both safely? Very hard. Getting a world that can't even fix the low-tech potholes on every other road to set up and maintain machine-specific infrastructure everywhere? A nonstarter.
Also, 99% of roads in civilized areas have something alongside them already that you can attach RFID tags to. Quite a bit easier than setting up an EV charging station (another significant infrastructure thing which has rolled out pretty quickly). And let's not forget, every major metro area in the world has multi-lane superhighways which didn't even exist at all 50-70 years ago.
Believe me, I've thought about this for a lot more than 15 minutes. Yes, we should improve sensor reliability, absolutely. But it wouldn't hurt to have some kind of backup roadside positioning help, and I don't see how it would be prohibitively expensive. Maybe I am missing something, but I'm gonna need more than your dismissive comment to be convinced of that.
AI + cameras have relevant limitations that LIDAR-augmented suites don't. You can paint a photorealistic roadway onto a brick wall and AI + cameras will try to drive right through it - the so-called "Wile E. Coyote" problem.
If you are doing an end-to-end driving policy (i.e. the wrong way of doing it), having LIDAR is important as a correction factor for the cameras.
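As a toy illustration of that correction-factor idea (the function name and the tolerance threshold are made up for this sketch, not anyone's actual stack):

```python
def fused_free_distance(camera_depth_m: float, lidar_depth_m: float,
                        tolerance_m: float = 2.0) -> float:
    """Combine a camera depth estimate with a LIDAR return.

    If LIDAR sees an obstacle much closer than the camera believes
    (the painted-wall / "Wile E. Coyote" case), trust the active
    range sensor; otherwise take the more conservative of the two.
    """
    if lidar_depth_m + tolerance_m < camera_depth_m:
        return lidar_depth_m
    return min(camera_depth_m, lidar_depth_m)

# A photorealistic wall: camera sees 80 m of "road", LIDAR sees 12 m.
fused_free_distance(80.0, 12.0)  # → 12.0
```

Real stacks fuse at the feature or occupancy-grid level rather than with a scalar rule like this, but the asymmetry is the same: an active range sensor can veto a fooled depth network.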
Every time you pit the sheer violent force of end to end backpropagation against compartmentalization and lines drawn by humans, at a sufficient scale, backpropagation gets its win.
I fully agree, but your statement is quite ironic.
For driving, humans drive well because we operate more like a MuZero model does - we can "visualize" possible future states depending on what we do and pick the best path. We don't need to know what the specific object on the road is; the fact that we can recognize that it's there, in our path, and understand the physical interaction of a car hitting something taller than a bump means we can avoid it.
The way to implement self-driving is exactly that - train a model to take sensor data and reconstruct a 3D space in latent dimensions, train another model to predict how that space evolves over time with probabilistic output, and then inference is a probabilistic guided search in that space, with time constraints set by the hardware. MuZero is nothing new, and it already proved that you don't even need a hardcoded model of the environment to operate in.
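The search step described above might be sketched roughly like this - a toy random-shooting planner over a latent state space, with a hand-written point-mass standing in for the learned encoder and dynamics models (all names, shapes, and costs here are illustrative assumptions, not a real system):

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(sensor_obs):
    # Stand-in "encoder": here the latent state is just [x, y, vx, vy].
    # A real system would map raw sensor data into learned latents.
    return np.asarray(sensor_obs, dtype=float)

def dynamics(z, a, dt=0.1):
    # Stand-in "dynamics model": deterministic point-mass physics,
    # in place of a learned probabilistic predictor of state evolution.
    x, y, vx, vy = z
    ax, ay = a
    vx, vy = vx + ax * dt, vy + ay * dt
    return np.array([x + vx * dt, y + vy * dt, vx, vy])

def cost(z, goal, obstacle, radius=1.0):
    # Penalize distance to goal; heavily penalize predicted collisions.
    d_goal = np.linalg.norm(z[:2] - goal)
    d_obs = np.linalg.norm(z[:2] - obstacle)
    return d_goal + (1000.0 if d_obs < radius else 0.0)

def plan(obs, goal, obstacle, horizon=10, n_samples=256):
    # Random-shooting search: sample action sequences, roll each one
    # out through the dynamics model, keep the first action of the best.
    z0 = encode(obs)
    best_a, best_c = None, np.inf
    for _ in range(n_samples):
        seq = rng.uniform(-1.0, 1.0, size=(horizon, 2))
        z, c = z0, 0.0
        for a in seq:
            z = dynamics(z, a)
            c += cost(z, goal, obstacle)
        if c < best_c:
            best_c, best_a = c, seq[0]
    return best_a
```

MuZero proper uses a learned value/policy network to guide an MCTS rather than uniform random sampling; random shooting is just the simplest way to show "search over predicted futures" in a few lines.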
And you don't even need human driving data for this, as the model will be able to predict collisions based on pure physics alone. As a bonus, if you enhance it with things like a physical model of the car, so it can reconcile what it thinks the system is going to do against what the physics calculations predict, you can even make it drive well in snow with low traction.
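A crude version of that reconciliation step could look like this (everything here is a made-up toy, not a real vehicle-dynamics model): when measured acceleration keeps falling short of what the physics model predicts for the commanded input, shrink the friction estimate the planner uses.

```python
def slip_ratio(wheel_speed_mps: float, vehicle_speed_mps: float) -> float:
    # Longitudinal slip: how much the wheel surface outruns the body.
    if wheel_speed_mps == 0.0:
        return 0.0
    return (wheel_speed_mps - vehicle_speed_mps) / wheel_speed_mps

def update_friction(mu: float, predicted_accel: float,
                    measured_accel: float, lr: float = 0.5) -> float:
    # Moving-average update of the friction estimate toward the observed
    # ratio of achieved vs. predicted acceleration (clamped to [0, 1]).
    if predicted_accel <= 0.0:
        return mu
    ratio = max(0.0, min(1.0, measured_accel / predicted_accel))
    return (1.0 - lr) * mu + lr * mu * ratio
```

On dry asphalt the estimate holds steady; on snow, where the car achieves only a fraction of the predicted acceleration, mu decays toward that fraction, and the planner's predicted futures get correspondingly more cautious.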
The irony of your statement is that everyone going end to end is manually hand-coding all these hacks (like image warping in the case of Comma AI) to make the training work, all because the training data just isn't sufficient - which is the exact same exercise as humans drawing lines.
And if you doubt that what I'm saying is true: again, MuZero was proven to work. Driving is just another game where you can easily define a winning scenario, the board, and the moves you can make, and apply the same concepts. The only hard technical part becomes accurately determining the board from sensor data.
Source: trust me, bro? That statement has no factual basis. And calling the most common approach of every self-driving developer except Tesla a wank isn't an argument either - it's just hostility.