Abstractions get leaky when their underlying assumptions fail to hold. For example, comparing insurance rates and coverage across companies via AI is of limited use if the comparison ignores counterparty risk: the chance that an insurer fails to keep its end of the contract. Of course you can add that, or any other specific factor I bring up, to the model, but you can't include everything, because the model is always simpler than reality.
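To make the leak concrete, here is a minimal sketch (all names and numbers are hypothetical, invented for illustration): a comparison that ranks policies purely by premium per unit of coverage implicitly assumes every insurer pays out with certainty. Making one hidden assumption explicit, a default probability, can flip the ranking.

```python
from dataclasses import dataclass

@dataclass
class Quote:
    insurer: str
    annual_premium: float   # what you pay per year
    coverage_limit: float   # maximum payout on a claim

def rank_by_price(quotes: list[Quote]) -> list[Quote]:
    # The simple model: cheapest premium per unit of coverage wins.
    # Counterparty risk is not a variable here, so it silently equals zero.
    return sorted(quotes, key=lambda q: q.annual_premium / q.coverage_limit)

def rank_with_default_risk(quotes: list[Quote],
                           p_default: dict[str, float]) -> list[Quote]:
    # One leak patched: discount coverage by the chance the insurer never
    # pays out. Many other leaks (claim disputes, delays, exclusions)
    # are still missing -- the model remains simpler than reality.
    def effective_cost(q: Quote) -> float:
        expected_coverage = q.coverage_limit * (1 - p_default[q.insurer])
        return q.annual_premium / expected_coverage
    return sorted(quotes, key=effective_cost)

# Hypothetical insurers and made-up default probabilities.
quotes = [
    Quote("CheapCo", 800.0, 500_000.0),
    Quote("SolidCo", 950.0, 500_000.0),
]
p_default = {"CheapCo": 0.20, "SolidCo": 0.01}

print([q.insurer for q in rank_by_price(quotes)])
# ['CheapCo', 'SolidCo'] -- cheaper premium wins
print([q.insurer for q in rank_with_default_risk(quotes, p_default)])
# ['SolidCo', 'CheapCo'] -- the ranking flips once the assumption is surfaced
```

Each variable you add patches one leak and leaves the rest, and every omitted variable is an implicit assumption that it doesn't matter.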
The desire for AI decisions to be explainable forces them to rest on even simpler models, which makes the problem worse.