For ML systems, it's an engineering mistake to deploy a complex model without first establishing a simpler baseline (e.g., does it actually outperform a basic n-gram model?). Similarly, it's a strategic mistake to deploy a deep learning model without measuring the baseline of human performance on the same task (including human bias).
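As a minimal sketch of the baseline check above: compare a model's accuracy against a trivial majority-class baseline before trusting it. The labels and predictions here are illustrative, and `accuracy` is a hypothetical helper, not a specific library's API.

```python
from collections import Counter

def accuracy(preds, labels):
    """Fraction of predictions that match the labels."""
    return sum(p == y for p, y in zip(preds, labels)) / len(labels)

# Toy evaluation data (hypothetical).
labels = ["spam", "ham", "ham", "ham", "spam", "ham"]
model_preds = ["spam", "ham", "spam", "ham", "spam", "ham"]  # hypothetical model output

# Majority-class baseline: always predict the most common label.
majority = Counter(labels).most_common(1)[0][0]
baseline_preds = [majority] * len(labels)

print(f"baseline accuracy: {accuracy(baseline_preds, labels):.2f}")  # 0.67
print(f"model accuracy:    {accuracy(model_preds, labels):.2f}")     # 0.83
```

If the complex model only barely clears a baseline this crude, the added deployment cost and opacity are hard to justify.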
I see the problem of inexplicability as less salient than (1) responsible, informed deployments of models, and (2) ongoing measurement (especially against a human baseline).
You can deploy explainable models without (1) and (2) and end up with a much, much worse result.