In the railway signalling industry (which for historically obvious reasons is obsessed with reliability) there is even a pattern of running different software implementing the same specification, written by different teams, running on different RTOSes and different CPU architectures.
In the context of modern vehicle safety standards, this still makes me cringe when I consider the "safety" engineered into modern autonomous vehicle systems.
An interesting case study in this domain is to compare the Saturn V Launch Vehicle Digital Computer (LVDC) with the Apollo Guidance Computer (AGC).
Now the LVDC, that was a real flight computer: triply redundant, every stage in the processing pipeline had to be vote-confirmed, the works.
https://en.wikipedia.org/wiki/Launch_Vehicle_Digital_Compute...
Compare the AGC, with no redundancy: a toy by comparison. But the AGC was much faster and lighter, so they just shipped two of them (three if you count the one in the lunar module) and made sure it was really good at restarting fast.
There is a lesson to be learned here, but I am not sure what it is. Worse is better? "Cannot fail" vs. "fail gracefully"?
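The vote-confirmed redundancy described above can be sketched as a simple triple-modular-redundancy voter. This is just an illustrative toy, not the actual LVDC logic; the function name and error handling are invented for the example:

```python
from collections import Counter

def tmr_vote(a, b, c):
    """Majority vote over three independently computed results.

    Models the LVDC-style scheme (and the railway pattern of
    diverse implementations): each value comes from a separate
    channel, and the voter passes on whatever at least two
    channels agree on, masking a single faulty channel.
    """
    winner, count = Counter([a, b, c]).most_common(1)[0]
    if count < 2:
        # All three channels disagree: no safe output exists.
        raise RuntimeError("no majority: all three channels disagree")
    return winner

# One channel suffers a transient bit flip; the voter masks it.
print(tmr_vote(42, 42, 43))  # -> 42
```

The key property is that a single-channel fault (hardware or, with diverse implementations, software) is masked, while total disagreement is detected rather than silently passed through.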
I think this is because an AGC failure is recoverable in most phases of flight, while an LVDC failure is not.
Maybe, if you know what the tradeoffs are and are ready to deal with the deficiencies (by rebooting fast). And didn't they have issues with the lunar module's guidance computer on the first Moon landing?
A transient bit failure in the digital circuits? Reboot and away you go.
A coding or algorithmic defect? Reboot and you are back in the same place.
Also, the AGC was directly operated by an astronaut, who could decide to ignore erroneous outputs from it.
Restart your Claude Code sessions as often as possible
That has led to some of the best rockets ever developed, and the largest satellite constellation by far. But part of the secret sauce is creating situations where you can take risks. Traditionally, anything space-related deals in one-offs or tiny production volumes, so any risk is expensive. A lot of SpaceX's strategy is about changing this, whether that's by testing in flight phases the customer doesn't care about, being their own best customer to have lower-risk flights, or building constellations so big that certain failure scenarios aren't a big issue (while other scenarios still have to be treated as high-risk, high-impact).
I would love to update myself if anyone has a good source.
For better or worse, it's hard to argue with results.
Their advantage in the satellite-internet industry is that they can launch stuff fast and cheap; very likely this drives different tradeoff decisions than the regime this article talks about.
We know Glenn is loquacious.