I suppose you could make ML avoid sharp changes in recognition, but I don't trust the current neural-network models to do so reliably.
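Something like an exponential moving average over the per-frame class probabilities would be the obvious first attempt. A toy sketch, where the classes, alpha, and the fake detector stream are all invented:

    import numpy as np

    CLASSES = ["car", "truck", "pedestrian", "elephant"]
    ALPHA = 0.2  # weight of the newest frame; lower = smoother

    def smooth(prev_probs, frame_probs, alpha=ALPHA):
        # Blend the newest softmax output into a running estimate,
        # then renormalize so it stays a probability distribution.
        blended = (1 - alpha) * prev_probs + alpha * frame_probs
        return blended / blended.sum()

    # Stand-in for a detector's per-frame output: mostly "car",
    # with one glitchy frame that briefly screams "elephant".
    frames = [np.array([0.85, 0.08, 0.05, 0.02])] * 10
    frames[5] = np.array([0.10, 0.05, 0.05, 0.80])

    state = np.full(len(CLASSES), 1.0 / len(CLASSES))  # uniform start
    for frame in frames:
        state = smooth(state, frame)
        print(CLASSES[int(np.argmax(state))], np.round(state, 2))

With these numbers the single glitch frame never flips the label, which is the point; but tuning that alpha for a safety-critical system is exactly the part I don't trust yet.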
This is the same thing.
The truth is people really, really suck at driving, but it's easy enough that the vast majority of mistakes don't result in any problems.
For a naive definition of "viable" that does not take into account the other people who need to use the roadways. A car stopping, or even slowing drastically, in a lane has the same negative effect on the transportation system as a minor accident.
There are plenty of conditions that are more difficult for humans: cars painted the same color as the asphalt seem to disappear on highways. Also, fog.
Also, I'm more worried about temporary, unexpected configurations of things that suddenly make a NN see something weird.
Humans don't suddenly see elephants, because a prior tells them that an elephant is improbable in that context.
Something like that in a computer system shouldn't be beyond current capabilities.
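For instance, you could multiply the raw classifier scores by a scene-dependent prior and renormalize (plain Bayes). A toy sketch with invented numbers:

    import numpy as np

    CLASSES = ["car", "truck", "deer", "elephant"]

    # P(class | highway scene) -- in a real system this would come
    # from maps/context; these numbers are invented.
    HIGHWAY_PRIOR = np.array([0.60, 0.35, 0.049, 0.001])

    def apply_prior(raw_scores, prior):
        # posterior = likelihood * prior, then renormalize
        posterior = raw_scores * prior
        return posterior / posterior.sum()

    # A confused detector that is 70% sure it sees an elephant:
    raw = np.array([0.15, 0.10, 0.05, 0.70])
    post = apply_prior(raw, HIGHWAY_PRIOR)
    print(dict(zip(CLASSES, np.round(post, 3))))
    # "car" wins; "elephant" only survives overwhelming evidence.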
As a human, if you asked me to match these synthetic images to their most likely real-world counterparts, my responses would also look "confused."
https://codewords.recurse.com/issues/five/why-do-neural-netw...
Yeah, instead you get hallucinations, muscle twitches, seizures, mental breaks, and many more ways a brain can fail to do its job.
A computer isn't going to look down at a text message, fall asleep at the wheel, or get in a car while intoxicated, all of which occur literally every day. People take driving for granted because (at least in the US) we have to. Our sprawling suburbs couldn't function without it, so the barrier to entry is simply being a teenager.
Would I completely trust Tesla's auto-driving? No, not at all. But I trust it at least as much as I do an average driver, which is still just barely. The difference with Tesla's system is that one failure leads to improvement of the entire fleet, whereas one human failure maybe improves that one human driver. That means self-driving cars will continuously improve, while humans will just continue to be terrible drivers.
Whereas humans get into the same kinds of fatal crashes over, and over, and over...
The Australian government has a "Black Spot" [1] program to provide federal funds (often millions) to fix up locations with a history of crashes. Bet it's cheaper to make those fixes to software.
[1] http://investment.infrastructure.gov.au/funding/blackspots/
For a human, the noise isn't bad recognition; it's corrupted sensor data via distraction, one of the most common causes of accidents. For the AI agent, an elephant will never appear unless it's been trained to recognize one. More likely it's recognized as a generic obstacle, which is OK. But keep in mind that Google's years of autonomous-vehicle success make this pretty improbable, certainly less likely than with the average driver.
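The fallback I mean is roughly: if no known class explains a detection well enough, don't guess, just call it a generic obstacle and plan around it. A toy sketch, with an invented threshold and score format:

    MIN_CONFIDENCE = 0.6  # invented threshold

    def classify(scores):
        # scores: dict of known class -> detector confidence
        best = max(scores, key=scores.get)
        if scores[best] >= MIN_CONFIDENCE:
            return best
        return "obstacle"  # unknown thing: don't guess, avoid it

    print(classify({"car": 0.9, "truck": 0.05}))       # -> car
    print(classify({"car": 0.3, "pedestrian": 0.35}))  # -> obstacle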
Much of the fear here is irrational and natural when humans lose control.
I was just answering the concerns about self-driving systems relying solely (or primarily) on visual data from cameras. It's true this is how humans drive, but the human visual system is much more complicated than what the current (published) state of the art in image processing seems to be. I do not trust visual-only systems today (especially if they employ deep learning shenanigans).
I know those problems aren't insurmountable, but I'd feel much safer if they threw a LIDAR in there too.
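Even a dumb OR-fusion would buy real redundancy: brake if either sensor reports something inside stopping distance. A toy sketch with an invented sensor interface and numbers:

    STOPPING_DISTANCE_M = 40.0  # invented

    def should_brake(camera_obstacles_m, lidar_obstacles_m):
        # Each argument: obstacle distances (meters) from one sensor.
        # OR-fusion: a camera miss is covered by LIDAR and vice versa,
        # at the cost of more false positives.
        nearest = min(camera_obstacles_m + lidar_obstacles_m,
                      default=float("inf"))
        return nearest < STOPPING_DISTANCE_M

    print(should_brake([], [35.0]))   # camera missed it -> True
    print(should_brake([80.0], []))   # nothing close -> False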
You'd be surprised. All kinds of conditions can do really weird things to your vision, with migraine auras probably being the most common.