There are quite a few color-space transforms (HSV, HSL, etc.) you can apply to images for tasks like lane detection in different conditions (bright light or darkness). Also, pipelines typically take input from three camera sources: left, right and center.
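To illustrate the idea, here is a minimal sketch of why a saturation channel helps: lane paint stays saturated even as overall brightness changes. This hand-rolls the saturation computation in NumPy purely so the snippet is dependency-free; in a real pipeline you would use something like cv2.cvtColor with COLOR_BGR2HSV instead. The function name and threshold are illustrative assumptions.

```python
import numpy as np

def saturation_mask(rgb, s_min=0.4):
    """Binary lane-candidate mask from the HSV saturation channel.

    rgb: HxWx3 float array with values in [0, 1].
    Saturation = (max - min) / max per pixel, so brightly painted
    lane lines stay above s_min even when lighting shifts.
    (Illustrative sketch; a real pipeline would use cv2.cvtColor.)
    """
    maxc = rgb.max(axis=-1)
    minc = rgb.min(axis=-1)
    # Guard against division by zero on black pixels.
    sat = np.where(maxc > 0, (maxc - minc) / np.where(maxc > 0, maxc, 1), 0.0)
    return sat >= s_min
```

A saturated yellow lane marking passes the threshold while gray asphalt does not, regardless of how bright or dark the frame is overall.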
On top of that, there are data-augmentation techniques like applying an artificial shadow to a bright image, so the model learns to handle both.
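A common way to do this augmentation is to darken everything on one side of a random line across the frame, simulating a cast shadow. The shape of the shadow region and the darkening factor below are assumptions for illustration; real pipelines vary.

```python
import numpy as np

def add_random_shadow(rgb, rng=None, darken=0.5):
    """Darken a random wedge of the image to simulate a cast shadow.

    rgb: HxWx3 float array in [0, 1]. A boundary line is drawn between
    a random point on the top edge and a random point on the bottom
    edge; every pixel to the left of that line is scaled by `darken`.
    (Illustrative augmentation sketch, not a specific library's API.)
    """
    rng = rng or np.random.default_rng()
    h, w, _ = rgb.shape
    x_top, x_bot = rng.uniform(0, w, size=2)
    ys = np.arange(h)[:, None]
    xs = np.arange(w)[None, :]
    # x-coordinate of the shadow boundary at each row.
    boundary = x_top + (x_bot - x_top) * ys / max(h - 1, 1)
    shadow = xs < boundary
    out = rgb.copy()
    out[shadow] *= darken
    return out
```

Applied randomly during training, this forces the network not to rely on uniform road brightness.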
When we combine some of these techniques, the task does become possible, even in adverse conditions.
Also, the various techniques often complement each other. For example, we may derive a steering angle from a machine learning system and cross-check it against classical lane detection, vehicle detection (collision avoidance), and so on.
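One hypothetical way to wire up such a cross-check: convert the lane detector's estimate of the car's offset from lane center into a rough geometric steering angle, and flag the network's prediction when the two disagree too much. Every name, gain, and threshold here is an illustrative assumption, not a standard API.

```python
def steering_sanity_check(model_angle_deg, lane_center_offset_m,
                          gain_deg_per_m=10.0, max_disagreement_deg=5.0):
    """Cross-check a learned steering angle against lane geometry.

    model_angle_deg: angle predicted by the ML system.
    lane_center_offset_m: car's lateral offset from lane center,
        as measured by a classical lane-detection pipeline.
    gain_deg_per_m: assumed proportional gain turning offset into a
        corrective angle (illustrative value).
    Returns True when the two sources roughly agree.
    """
    geometric_angle = gain_deg_per_m * lane_center_offset_m
    return abs(model_angle_deg - geometric_angle) <= max_disagreement_deg
```

A disagreement could then trigger a fallback behavior, such as trusting the geometric estimate or slowing down.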
Come to think of it, when we drive ourselves we use only our eyes, so there is a view that cameras alone are sufficient. But Radar and Lidar definitely complement the cameras and make the driving system more robust. As Lidar gets cheaper, we are bound to see it used more, even by teams that currently favor camera-only approaches.