But that is human experimentation, something we as a culture generally agree is abhorrent.
This seems unreasonably optimistic.
First, this particular crash is an egregious counter-example. The car doesn't even seem to slow down when it first sees the pedestrian's foot. Nor does it try to swerve. This is basic stuff for a human driver, never mind the more complex avoidance and risk-mitigation maneuvers a human driver can perform.
Second, we've had years of training various AI content-curation algorithms on social networks, videos, blogs, etc. - and the most advanced AI and search company in the world still can't keep adult-oriented conspiracy videos off of YouTube Kids. And while you might counter that content is a human problem, driving is too! Dangerous driving situations happen at the periphery of traffic rules, where someone is doing something the drivers around them don't expect. I've seen people run solid red lights, drive the wrong way down a one-way street, pedestrians start crossing when their light turns red, etc. Short of creating special roads for autonomous cars only - how does an autonomous car deal with all of this successfully?
In fairness here: Google could do this easily by throwing people at the problem. A whitelisted set of content producers would work, with people dedicated to curating the whitelist. Google can keep that content off, and quite readily. Keeping it off automatically and cheaply is the issue...
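To be concrete about how cheap the mechanical part is: the gate itself is a set-membership check, and all the real cost lives in the human curation behind it. A minimal sketch in Python (the channel IDs and function names here are hypothetical, purely for illustration):

```python
# Hypothetical whitelist gate: only serve videos from channels a human
# curator has vetted. The expensive part is maintaining APPROVED_CHANNELS,
# not running this check.

APPROVED_CHANNELS = {"sesame_workshop", "pbs_kids"}  # curated by humans

def is_allowed(channel_id: str) -> bool:
    """Return True only for channels on the human-curated whitelist."""
    return channel_id in APPROVED_CHANNELS

print(is_allowed("sesame_workshop"))          # True
print(is_allowed("spooky_elsa_claymation"))   # False
```

The lookup is trivial; the open question the thread is debating is whether the curation labor scales, not whether the filter can be built.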
On that front: automated content recognition that can parse the nuances between regular claymation Elsa and spooky claymation Elsa who talks about jamming things in her "happy spot" is AI-hard. These are not random videos; they are content tailor-made and adapted to pass whatever filters are in place.
For data scientists there is a massive gap between working with concrete sensor data, which can be managed reasonably, and undefined philosophical/moral/sexual boundaries in dirty, noisy, counter-programmed video content... One replaces our eyes and ears and reflexes; the other seeks to replace our fabulous brains.
I'm not sure which side of this I fall on just yet, but something strikes me here: you're assuming a human driver with no impairment (e.g. poor eyesight or fatigue) who is paying complete attention to what they're doing. We know that this isn't always the case (and might even be a minority of drivers!), so this doesn't seem to be a great argument.
It is irrelevant what average humans actually do. The traffic rules are the current standard that all drivers are held to.