Currently, self-driving AIs are kind of like chess AIs, or the Watson AI that won at Jeopardy: they have some inhuman strengths that mostly compensate for their sub-human understanding of inputs.
So, for example, chess AIs are better chess players than humans because they have far deeper lookahead, even though they're worse at evaluating any single board position. The Watson AI didn't understand questions as well as a human, and had obvious comprehension failures, such as its famous Final Jeopardy! answer of "Toronto" in a category about U.S. cities, but when its comprehension was good enough, its vast database and near-instant, near-perfect recall more than compensated for its comprehension problems.
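The lookahead point can be made concrete with a toy sketch (mine, not from the post): depth-limited minimax with a deliberately ignorant evaluation function. In this take-1-to-3 stones game, a shallow search with zero positional knowledge misjudges losing positions, while the same crude evaluator plus a deeper horizon finds the right answer — strength in search compensating for weakness in understanding.

```python
def minimax(n, depth):
    """Value of the game for the player to move with n stones left.

    Players alternately remove 1-3 stones; whoever takes the last stone wins.
    The evaluation at the search horizon is a know-nothing constant 0, so all
    of the program's 'skill' comes from lookahead depth, not understanding.
    """
    if n == 0:
        return -1          # opponent just took the last stone: we lost
    if depth == 0:
        return 0           # crude evaluation: no positional knowledge at all
    # Negamax form: our value is the best of the negated child values.
    return max(-minimax(n - k, depth - 1) for k in (1, 2, 3) if k <= n)

def best_move(n, depth):
    """Pick the removal (1-3 stones) whose resulting position scores best."""
    return max((k for k in (1, 2, 3) if k <= n),
               key=lambda k: -minimax(n - k, depth - 1))

# A position with 4 stones is a forced loss for the player to move, but a
# depth-1 search with the blank evaluator can't see that; a depth-4 search can.
print(minimax(4, 1))       # shallow search: looks like a draw-ish 0
print(minimax(4, 4))       # deep search: correctly sees -1 (a loss)
print(best_move(7, 4))     # deep search takes 3, leaving the opponent at 4
```

The same evaluation function, applied deeper, plays perfectly within its horizon; that is roughly the trade chess engines make against human positional judgment.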
Driving AIs are not as good as humans at understanding what's happening around them, but they are constantly attentive in all 360 degrees and have fast reactions, which may someday compensate for their imperfect understanding of the world, though it does not yet.
This kind of AI does not generalize to the many other circumstances in which there is no obvious way to parlay its inhuman strengths into compensating for its weaknesses. So while there may be some other places where a driving-AI-like intelligence could be used profitably, there probably aren't many of them.