This feels like the elephant in the room when it comes to AI bias. We develop an AI that accurately predicts outcomes and discover it is biased. Then, instead of asking whether this means our current system is deeply biased and needs to change, we say, "don't use the AI; keep using the people, who might or might not be biased, but we can't know because we can't measure them the way we can measure an AI."
If it isn't acceptable to use an AI that produces biased outcomes, how is it acceptable to use people who produce the same outcomes? AI decision making can be examined and tuned in ways that human decision making cannot.
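To make the "can be examined" point concrete, here is a minimal sketch of the kind of audit you can run on a model but not on a committee of humans: replay a batch of decisions and tabulate approval rates and wrongful-rejection rates by group. Everything here is hypothetical (the column names, the toy data, the choice of demographic parity and error-rate balance as metrics); it is just one illustration of measurable bias, not a complete fairness audit.

```python
import pandas as pd

# Hypothetical audit data: one row per applicant, with the model's decision,
# the eventual true outcome, and a protected attribute for grouping.
df = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B"],
    "approved": [1,   0,   1,   0,   0,   1],   # model's decision
    "repaid":   [1,   0,   1,   1,   0,   1],   # observed ground truth
})

# Demographic parity: does the approval rate differ across groups?
approval_rate = df.groupby("group")["approved"].mean()

# Error-rate balance: among applicants who did repay, how often
# did the model wrongly reject them, broken out by group?
would_repay = df[df["repaid"] == 1]
false_reject_rate = 1 - would_repay.groupby("group")["approved"].mean()

print(approval_rate)       # approval rate per group
print(false_reject_rate)   # wrongful-rejection rate per group
```

You can run that audit over every decision the model has ever made, then retrain or re-threshold and run it again. There is no equivalent way to replay ten thousand past applications through a loan officer's head and tabulate the disparity.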