Yes, answering the "why / how did the SAT solver produce this answer?" question can be harder than explaining some machine learning model outputs.
You can literally watch the execs' excitement and faith evaporate the moment explainability comes up, because "blame the solver" isn't enough to save their own hides. I've seen projects hit a dead end at multiple $bigcos exactly this way.
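For what it's worth, here's roughly what "the explanation" looks like in practice. This is a minimal sketch using the Z3 Python bindings (my choice of tool, not anything sanctioned above): for a SAT result you at least get a model you can check, but for UNSAT the best the solver hands back is an unsat core, i.e. a conflicting subset of constraints, which is a long way from a story an exec will accept.

```python
from z3 import Bool, Solver, Implies, Not, unsat

a, b = Bool('a'), Bool('b')

s = Solver()
# Track each constraint under a label so it can show up in the core.
s.assert_and_track(a, 'a_required')
s.assert_and_track(Implies(a, b), 'a_implies_b')
s.assert_and_track(Not(b), 'b_forbidden')

if s.check() == unsat:
    # The core names the constraints that clash, but offers no
    # narrative about *why* -- turning it into one is still on you.
    print(s.unsat_core())  # [a_required, a_implies_b, b_forbidden]
```

And that's the friendly case: on real industrial instances the core can run to thousands of clauses, none of which map cleanly onto anything a business stakeholder recognizes.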