Sure, usually the most graceful thing to do is exit and hope a human fixes it. But that's only the usual advice because the usual operating condition is that sudden failure is NBD and a human is right there to sort it out.
That's becoming less common, though. When software was mostly something running on a PC doing some boring office task, reliability didn't matter much. But now that software is running our airplanes, our cars, our medical devices, and even, as with implanted pacemakers and insulin pumps, our bodies, reliability goes from NBD to BFD.
We see the way forward in things like Chaos Monkey [1], crash-only software [2], and the design-for-failure approach you see in actor supervisor hierarchies [3]: the way to reliability is to design for failure recovery from the beginning and then test thoroughly to make sure recovery really happens.
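The supervisor idea is simple enough to sketch in a few lines. Here's a minimal, hypothetical Python version (the `Worker` class, the 30% fault rate, and the restart limit are all made up for illustration): when a worker crashes, the supervisor throws it away and starts a fresh one with clean state, rather than trying to repair it, and only escalates when restarts exceed a budget.

```python
import random

class Worker:
    """Hypothetical worker that sometimes crashes mid-task."""
    def run_task(self, task):
        if random.random() < 0.3:  # simulated transient fault
            raise RuntimeError("worker crashed")
        return f"done: {task}"

def supervise(tasks, max_restarts=10):
    """Crash-only style: on failure, discard the worker and restart it
    with clean state instead of trying to repair it in place."""
    worker = Worker()
    restarts = 0
    results = []
    pending = list(tasks)
    while pending:
        try:
            results.append(worker.run_task(pending[0]))
            pending.pop(0)  # only advance once the task succeeded
        except RuntimeError:
            restarts += 1
            if restarts > max_restarts:
                raise  # escalate to the next supervisor up the hierarchy
            worker = Worker()  # fresh worker, clean state
    return results
```

The key property is that recovery is the normal code path, exercised on every transient fault, not a rarely-tested corner case; real systems like Erlang/OTP and Akka build whole trees of these supervisors.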
[1] https://github.com/Netflix/SimianArmy/wiki/Chaos-Monkey
[2] https://en.wikipedia.org/wiki/Crash-only_software
[3] http://doc.akka.io/docs/akka/snapshot/scala/fault-tolerance....