The status quo is indefensible, so setting up moving, unknowable goalposts for anything that might replace it doesn't make sense to me.
This particular problem could be solved with undercarriage cameras that check whether a human has been shoved underneath by another bad driver. I wouldn't mind making that a requirement across the board and moving on to the next challenge that the unpredictability of human drivers throws at a repeatable robotic system.
There is no evidence that there is a magical different approach that will work better.
And even with supposedly* perfectly consistent awareness, the automation still failed catastrophically.
> The status quo is indefensible so setting up moving unknowable goal posts for something to replace them doesn’t make sense to me.
AVs are not better than the status quo, which makes them even less defensible. No human would have dragged that poor woman for 20 feet just to complete a pull-over maneuver. Even an OCD psychopath knows better.
* None of these things run actual realtime operating systems with fixed, predictable deadlines. Compute requirements can vary wildly depending on the circumstance. When compute spikes, consistency drops. A robot can only approximate constant awareness by massively undersubscribing the compute budget.
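A toy sketch of that undersubscription point (my own illustration, not anything from an actual AV stack): model each perception frame as a planned compute cost plus an occasional heavy-tailed spike, and count how often a fixed deadline is blown. The spike probability, spike size, and budgets are made-up numbers chosen only to show the shape of the trade-off.

```python
import random

def deadline_misses(budget_ms, deadline_ms=100, frames=10_000, seed=0):
    """Count frames whose variable compute time blows a fixed deadline.

    Toy model: per-frame cost is the planned budget, plus (on 5% of
    frames) a heavy-tailed spike, e.g. a dense scene triggering extra
    perception work. All parameters here are illustrative assumptions.
    """
    rng = random.Random(seed)
    misses = 0
    for _ in range(frames):
        cost = budget_ms
        if rng.random() < 0.05:              # occasional compute spike
            cost += rng.expovariate(1 / 50)  # spike averaging ~50 ms
        if cost > deadline_ms:
            misses += 1
    return misses

# Fully subscribed budget: nearly any spike blows the 100 ms deadline.
full = deadline_misses(budget_ms=95)

# Heavily undersubscribed budget: most spikes still fit under the deadline.
slack = deadline_misses(budget_ms=50)

print(full, slack)
```

Running it shows the fully subscribed budget missing far more deadlines than the undersubscribed one — which is the point of the footnote: on a best-effort OS, "consistent awareness" is bought by leaving most of the compute idle most of the time.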
Yeah, it would probably be a lot more [1]. This is from just last week. It's a pretty constant occurrence. It's only in the paper of record because it happened in New York.
[1]: https://www.nytimes.com/2023/10/26/nyregion/nypd-tow-truck-b...
"Mr. Hayes said that after the collision, the child’s mother chased the tow truck down the street, screaming that the driver had killed her baby."
We don't have the data to make this claim this confidently, and the only way to get the data is to let the experiment keep running in the real world (the only place that matters).
There will obviously be holes in the awareness (literal missing cameras under the car); that's what the testing is for. If someone says they can sit in a room, in a simulation environment, and come up with all the potential crazy things humans can do around autonomous cars, they are lying to you.
To me, it's either this, or we pull all human drivers off the road, restructure our cities, and put everyone on public transit (which I wholly support).
I reiterate: the status quo is unacceptable and indefensible. The human driver who actually caused the accident has still not been held to account (and probably never will be).
P.S.: I accept your point about the system being non-realtime, though I think there are some critical safety systems (LIDAR/RADAR cutoffs, etc.) that might have a real-time response?
How about we start with something simpler: have Waymo, Cruise, and their like produce a rigorous safety case[1] arguing why their vehicles are safe.
Once the safety case is in the open, we can also evaluate how well their systems satisfy the claims in it, and if the assumptions do not hold, we can stop the experiment.
They are experimenting on humans. The usual requirement is informed consent.
But if we're doing this, let's also make human drivers do it, and for real parity, make sure all human drivers are kitted out with the same cameras and logging systems we ask of autonomous car companies, auto-submitted to the DMV.
Then analyze all the reports on an annual basis to see if the human and/or autonomous agent should be allowed to continue to operate on the road.
I think people forget that driving is not a right but a privilege. I agree that both humans and autonomous agents should earn that privilege.
P.S.: If the claim is that a one-time DMV driving test is enough, then that should be enough for autonomous cars as well. (I'm not making that claim.)
I don't have to prove it. It's incumbent on the AV evangelists to prove they are better. I signed up to be a driver on roads with other humans. I have zero interest in being part of this experiment, especially not when it comes out of Silicon Valley.
For an industry that claims to be all about safety and fixing how dangerous driving is, I expected them to take inspiration from Boeing and the commercial airlines. The remarkable, steadily improving safety record of the airline industry should have been the paragon. Instead, they've copied the move-fast-and-break-things playbook from the Silicon Valley tech bros, which makes all of these claims hard to take seriously.
Cruise took 8 years before putting its truly driverless car on the road. “Move fast and break things” is a laughable idea here.
I will happily accept the safety record of flying, slowly achieved over a century of actually flying in the real world.
I know someone who was dragged several feet by a human driver who didn't realize he was under her car. He was very fortunate to survive.
I'm skeptical about AVs, and this was definitely a bad look for them, but your response gives far too much credit to people.