I am sorry but I sincerely hope you are never made responsible for the release of anything remotely safety-critical.
Please re-read what you wrote. You are saying that because a business does not have enough money or resources to scale a process in real-world conditions, the solution is to release something and verify it in the real world, on public roads, risking real human lives?
If you don't have enough money to test it then you don't have an actual product/business.
Wild to infer that from someone asking a hypothetical ethical question.
Why are so many on this site so aggro?
> I considered this before posting ...
To me this reads not as someone asking a hypothetical but as someone who is aware that what they are about to write is controversial yet still considers it the best way forward. I happen to strongly disagree with that.
First, before we assume that it is inherently risky to release this tech to the public, let's consider where the risk comes from.
There is some risk arising from the technology's inability to handle certain edge cases, which can result in the vehicle making dangerous maneuvers. However, this risk is mitigated by allowing the driver to take control as soon as the car does something wrong. I'd imagine that the vast majority of errors like this can be handled safely by a human driver who is vigilant and ready to take over.
Some unknown proportion of these might be unavoidable accidents involving the self-driving software that a human driver would probably have been involved in too (e.g. a deer running onto the road).
Another possibility is that the self-driving car makes an error that a human would otherwise not have made and that could not have been avoided by the human taking control. If you group the latter two cases together and the resulting accident rate is lower than that of drivers without self-driving enabled, you could argue that the technology has a net positive impact.
The other source of error is human error. Some argue that self-driving makes people complacent, so they might not be as vigilant as if they were operating the vehicle themselves. Companies are trying to address this by implementing driver monitoring systems; in any case, this risk is entirely avoidable by the person behind the wheel, and it's a stretch to say that self-driving cars are risking human lives because of it.
Hopefully I have conveyed why I don't think public testing is necessarily an inherent risk to human lives (this of course depends on the state of the tech being released). I'm sure you understand that every company has a limited runway and a window of opportunity to scale its technology. I am 100% with you on making sure the product doesn't recklessly risk people's lives. However, I think the optics make it seem like far more lives are at risk than is actually the case. I'm open to changing my mind as more information about the safety of the tech comes out, but I am not de facto against it, for the reasons above.