Not the OP, but my best guess is that it’s an alignment problem, much like a gun killing something the owner didn’t intend to. The power of AI to make decisions that are out of alignment with society’s needs is the “something, something.” As in the healthcare examples above, AI can be very efficient at denying claims, and the lack of good validation can obscure the fact that the system is aligned with bad incentives.