Especially for machine learning scenarios, don't expect a show of force to prove an adequate deterrent. Posture and presumptive correctness aren't enough to protect you from entities ignorant of fear and indifferent to wastefulness.
It's not clear that their goal is deterrence. If they are transparent with heuristics for ranking vendors, it could provide fabled market-based incentives for a vendor race to the top, narrowing the security gap between "best" and "worst" vendors. If highly rated vendors advertise their achievement, buyers could factor the rating into purchase decisions. The heuristics would need to evolve as the floor of vendors' security practices is raised.
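A minimal sketch of what such a ranking heuristic might look like. Every metric name, weight, and the "floor" mechanism below is invented purely for illustration — no real rating scheme is being described:

```python
# Toy vendor-ranking heuristic. All metric names and weights are
# hypothetical, chosen only to illustrate a score buyers could compare.
def security_score(vendor_metrics, floor=0.0):
    """Weighted score over hypothetical security-practice metrics.
    Subtracting a rising 'floor' models the heuristic evolving as the
    baseline of vendor practices improves."""
    weights = {
        "patch_latency_days": -0.5,        # slower patching lowers the score
        "disclosed_vulns_fixed_pct": 0.3,  # fixing reported vulns raises it
        "has_secure_boot": 0.2,
    }
    raw = sum(weights[k] * vendor_metrics.get(k, 0) for k in weights)
    return raw - floor

vendor_a = {"patch_latency_days": 10, "disclosed_vulns_fixed_pct": 90, "has_secure_boot": 1}
vendor_b = {"patch_latency_days": 90, "disclosed_vulns_fixed_pct": 40, "has_secure_boot": 0}

# Rank vendors best-first; transparency means publishing both the
# weights and the resulting ordering.
ranked = sorted([("A", vendor_a), ("B", vendor_b)],
                key=lambda v: security_score(v[1]), reverse=True)
```

The interesting design question is the `floor` parameter: if it rises over time, yesterday's top score becomes tomorrow's baseline, which is what would drive the race to the top rather than a one-time compliance checkbox.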
The bigger problem: what happens if the market-based approach fails? Will regulators step in for certain classes of software? Regulators are less likely to understand Turing-completeness.
There are at least four broad, complicated areas of effect that need to be addressed to begin to grasp the scope of what's at stake here.
1. Hardware interfaces for software control.
This is the entry point that puts lives in harm's way. Software can do whatever it wants to a terabyte of RAM, but there's no outcome if that RAM isn't driving a real-world system. This is the point at which the tires spin and the steering wheel turns. Machine learning is mostly about probability and statistical likelihood, so good luck unit testing that.

2. Decision making and autonomy.
This is where the system, after coming online, checking its supplies, and determining it has everything it needs to attempt an action, decides to do something. After Windows boots up, the autonomous entity sits at the keyboard, checks for disk space, battery life, time of day, gas in the tank, oil pressure, and antifreeze, and then considers options for where to go for a drive. Perhaps the corporation that owns the entity configures a default strategy of taxi service, before hauling cargo from a mine to a warehouse, or trash pickup assistance. Taxi service reduces wear on the vehicle chassis, so the entity opts to drive as a taxi. It checks in with a ride-hailing service, offers its resources to the pool, and queues up for an assignment. Since we're talking about an autonomous system, there's a division of activity between deciding to try something and carrying out the behavior that effects outcomes. This phase is simply parsing resources and allocating them, not using them to do a thing. Budgeting capacity is a different form of autonomy than performing specialized work.

3. Succeeding at a discrete task.
This is where the car negotiates the entire trip from garage to destination and back, without damaging property, harming others, or getting stranded, while hopefully turning a profit, or at least fulfilling its role as an appliance with a warranty (90 days, one year, 5 years or 100,000 miles?). This may be vertically dependent on multiple sub-products originating from multiple corporations, all interacting to produce the talent or skill exhibited by the system. Maybe it's a smart refrigerator, and it defaults to always keeping a gallon of fresh milk available, but you never drink milk, and you don't know how to turn the feature off. Who's to blame for all that spilt milk? Maybe your autonomous taxi can deliver milk to the houses without smart refrigerators. Who's to say it's a problem or not? Maybe the self-driving vehicle wouldn't have run over that pedestrian if you could have convinced the fridge to stop replenishing the milk. Which brings us to...

4. Economic realities, sociological effects, propaganda and psychological operations.
Yeah, great. Your car can drive itself around 24/7 earning a passive income for you while you surf the internet and bandy about more great ideas. You and 20 million other people are all doing the same thing. This has a transformative effect in aggregate. Your car works as advertised. No bugs. No accidents. No injuries. A perfect driving record. But there could be deeper sociological ramifications to the introduction of such technology that are unanticipated, and thus hard to envision. Did Facebook drastically augment the outcome of an election, if not willfully then perhaps by sin of omission? Would we have imagined such a conversation in 1999, before the dot-com bust? Why didn't AOL produce this sort of acrimony? So, if Uber's gig economy and side hustle isn't disruptive, while we work out the kinks that run over pedestrians, what about the decentralized Mastodon for self-driving side hustles? What about self-driving transoceanic zeppelins, over international waters, that launch their own weather satellites? Who will stop them from polluting low earth orbit? Some of this isn't about software bugs. Some of it is behavior and psychology.
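Back on the "good luck unit testing that" point from area 1: a probabilistic component can't be pinned to a single expected output, but it can be tested statistically — assert that an aggregate metric stays within a tolerance over many trials instead of asserting one exact value. A minimal sketch, where the noisy controller and its error model are entirely made up for illustration:

```python
import random

def noisy_controller(target, noise=0.05, rng=random):
    """Stand-in for a probabilistic component: returns the target plus
    Gaussian noise. Purely illustrative; not any real control or ML system."""
    return target + rng.gauss(0, noise)

def test_tracks_target_within_tolerance(trials=10_000, tolerance=0.01):
    """Statistical test: the *mean* error over many trials must be small,
    even though any single output is unpredictable."""
    rng = random.Random(42)  # fixed seed so the test run is reproducible
    errors = [noisy_controller(1.0, rng=rng) - 1.0 for _ in range(trials)]
    mean_error = sum(errors) / trials
    assert abs(mean_error) < tolerance, mean_error

test_tracks_target_within_tolerance()
```

This doesn't prove the system safe — it only bounds average behavior under a fixed seed and an assumed error distribution, which is exactly why statistical testing is a weaker guarantee than the deterministic unit tests we're used to relying on.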