I am not a self-driving car expert (software or hardware). Nor am I an expert in ML, computer vision, etc. But I do have minor experience in each of those. Currently, I am enrolled in Udacity's new "Self-Driving Car Engineer" nanodegree. For my cohort (November), we just completed the first basic project: lane finding.
The project used Python and OpenCV, and the result was far from robust enough to run on a real vehicle. For one thing, it wasn't fast enough: it does well with still photos and non-real-time video, but it would likely need to be rewritten in C/C++ to stand a chance in a real-time situation. It also couldn't deal with curves, and it has other corner cases it doesn't handle.
We had several still pictures to work with, two required videos, and one bonus "challenge" (i.e., extra credit) video. If we had time, we could also try our hand at our own videos. It was easy to get things working with the still images and the two regular videos, but the challenge video was something else entirely.
In that video, you had to deal not only with differently colored lane lines, but also with a stretch of differently colored pavement, shadows, and varying light conditions. With enough effort and thought it was possible to get things working properly, but it highlighted the fact that humans do some amazing things when they drive.
For one thing, our eyes handle subtle differences in color, shadow, and lighting (particularly brightness and contrast) better than most traditional image sensors. Furthermore, we can "fill in the gaps" and infer where and how things should be, based on other information in the environment. In rain or snow, for instance, we can watch where other cars go, follow the "tracks" they leave in front of us, trace road edges, and use other subtle cues to guide our vehicles. Another example: they recently repaved the road near my work but haven't painted new lane markings, so I (and probably others) used the "seams" between the asphalt runs as lane markers instead.
These and many other issues are likely "edge cases" that haven't been fully explored in this relatively new cycle of self-driving car engineering. I am confident the problems will be solved, likely with deep learning systems of various kinds, as well as better sensors. Sensor fusion and prioritization could also help (e.g., use LIDAR to "follow" the car in front of you, after first filtering out the noise from the falling rain or snow).
So far, this course has shown me just how hard a problem self-driving vehicle systems are to solve overall. Furthermore, there aren't many players (companies or individuals) in the market, which is why I am taking the course: to expand on my current skills. Solving these kinds of hard problems will be paramount for any self-driving vehicle to last in the market.