Agreed - This is essentially the cornerstone of systems failure analysis - something I wish architects thought about more in the software space.
I'm a product manager for an old (and, if I'm being honest, somewhat crusty) software system. The software is buggy - all of it is - but it's also self-healing and resilient. So while it fails with somewhat alarming regularity, and leaves lots of concerning-looking error messages in the logs, it never causes an outage, because it self-heals.
Good systems design isn't making bug-free software or a bug-free system, but rather a system where a total outage requires N+1 (maybe even N+N) things to fail before the end user notices. Failures should be driven, at most, by edge cases - basically where the system is being operated outside of its design parameters - and those parameters need to reflect the real world and be known by most stakeholders in the system.
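The N+1 idea above can be sketched in a few lines - this is a minimal, hypothetical illustration (the replica/failover names are mine, not from any particular system): individual replicas fail noisily in the logs, but the caller only sees an error if every one of them fails.

```python
def call_with_failover(replicas, request):
    """Try each replica in turn. Individual failures are logged and
    swallowed; the end user only sees an outage when ALL replicas
    fail - the N+1 (or N+N) property described above."""
    errors = []
    for replica in replicas:
        try:
            return replica(request)
        except Exception as exc:
            # Noisy, alarming-looking log line - but no outage yet.
            errors.append(exc)
    raise RuntimeError(
        f"total outage: all {len(replicas)} replicas failed: {errors}"
    )

# Hypothetical usage: two buggy replicas plus one healthy one.
def flaky(request):
    raise ValueError("boom")

def healthy(request):
    return "ok"

result = call_with_failover([flaky, flaky, healthy], request=None)
```

The point isn't the retry loop itself - it's that the design budget goes into making a user-visible failure require multiple independent faults, rather than into chasing zero bugs.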
My gripe with software engineers is that they're often too divorced from real users and real use cases, and too devoted to the written spec over what their users actually need to do with the software. I've seen some very elegant (and, on paper, well-designed) systems fall apart because of simple things like intermittent packet jitter or latency swings (say, between 10ms and 70ms). These are real-world conditions, often encountered by real-world systems, but spec-driven systems fall apart once confronted with reality.
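Conditions like that latency swing are cheap to simulate in tests. A minimal sketch, assuming nothing beyond the standard library - `fetch_record` is a hypothetical stand-in for any network-facing call:

```python
import random
import time

def with_jitter(func, min_delay=0.010, max_delay=0.070):
    """Wrap a call with a random delay between 10ms and 70ms,
    mimicking the real-world latency swings described above."""
    def wrapper(*args, **kwargs):
        time.sleep(random.uniform(min_delay, max_delay))
        return func(*args, **kwargs)
    return wrapper

# Hypothetical service call - stands in for a real RPC or HTTP request.
def fetch_record(record_id):
    return {"id": record_id, "status": "ok"}

jittery_fetch = with_jitter(fetch_record)
result = jittery_fetch(42)
```

Running a test suite against the jittery version (instead of the instant, spec-perfect one) is often enough to expose the timeout and ordering assumptions that only surface in production.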