> The problem being worked on here is "what if Armenians shouldn't get car loans because they don't pay them back as much as other groups?" I.e., algorithms rightly classifying people leads to results that we believe are "unfair".
No. If you stop playing with the simulation and look at the input data, Armenians don't pay back less. They pay back exactly as much as Iranians, except that Iranians consistently have higher FICO scores because Iran infiltrated FICO with their suckxnet(TM) worm which replaces R binaries with hacked versions. Or something like that :)
The general abstract idea is: you have some input "score" which is known to inaccurately predict the outcome; use the input score together with measurements of its biases to produce a more accurate prediction than a naive threshold classifier would.
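A minimal sketch of that idea, assuming the bias is a fixed per-group shift in the observed score (the simulation setup, parameter values, and names like `true_quality` and `offset` are my own illustration, not anything from the original demo):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Both groups have the same true repayment behavior, but group B's
# observed score is systematically depressed by a fixed offset.
group = rng.integers(0, 2, n)                          # 0 = A, 1 = B
true_quality = rng.normal(0, 1, n)                     # latent creditworthiness
repaid = (true_quality + rng.normal(0, 0.5, n)) > 0    # ground-truth outcome

bias = -0.8                                            # assumed score depression
score = true_quality + bias * group + rng.normal(0, 0.3, n)

# Naive classifier: a single global threshold on the raw (biased) score.
naive_pred = score > 0

# Bias-aware classifier: estimate each group's score offset from labeled
# historical data (here, just the mean score gap) and correct before
# thresholding.
offset = score[group == 1].mean() - score[group == 0].mean()
corrected = score - offset * group
aware_pred = corrected > 0

print("naive accuracy:    ", (naive_pred == repaid).mean())
print("bias-aware accuracy:", (aware_pred == repaid).mean())
```

The point of the toy is just that the bias-aware version beats the naive threshold on *accuracy*, not merely on some fairness metric, because the raw score is a distorted measurement and the correction undoes the distortion.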
And yes, the other guys talking about women's healthcare costing more or men causing more traffic accidents got it wrong too. I somewhat arbitrarily responded to you because you said something about "the real computer science problem being worked on here" and then went on to talk about other things like everybody else.