The algorithm is biased if it gives the wrong score due to race, or to redundantly encoded race. To show that the algorithm is biased, you need to show that (score, race) pairs are more predictive than score alone.
Lines [36] and [46] both attempt to address this question. The only coefficient among these that is statistically significant is "race_factorOther:score_factorHigh" on line [46].
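To make the test concrete, here's a toy simulation (with made-up probabilities, not ProPublica's data): if the score is calibrated identically for both groups, then conditioning on race adds nothing, and an interaction term like race:score carries no extra signal.

```python
import random

random.seed(0)

# Hypothetical calibrated probabilities: outcome depends on score only.
# If the score is unbiased, P(recidivate | score, race) == P(recidivate | score).
P_RECID = {"Low": 0.2, "High": 0.6}

population = []
for _ in range(100_000):
    race = random.choice(["Black", "White"])
    score = random.choice(["Low", "High"])
    recid = random.random() < P_RECID[score]  # race plays no role here
    population.append((race, score, recid))

# Empirical recidivism rate in each (score, race) cell.
rates = {}
for s in ("Low", "High"):
    for r in ("Black", "White"):
        cell = [rec for race, score, rec in population
                if race == r and score == s]
        rates[(s, r)] = sum(cell) / len(cell)

# With a calibrated score these rates agree across races (up to sampling
# noise), so a race-by-score interaction would not be significant.
print(rates)
```

A biased score would show up here as a systematic gap between the Black and White rates at the same score level, which is exactly what the interaction terms in the regression are testing for.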
The other things you bring up are interesting, but do not show bias. At best they show disparate impact, which isn't remotely the same thing.
Orwell would have loved "disparate impact isn't bias" :)
https://en.wikipedia.org/wiki/Bias_of_an_estimator
https://en.wikipedia.org/wiki/Disparate_impact
To understand this intuitively, here's a simple thought experiment.
Consider Captain Hindsight, a predictor which returns the right answer 100% of the time. By definition, E[\hat{\theta} - \theta] = 0, i.e. zero bias. (Also zero variance.)
Now suppose that blacks have a higher recidivism rate (hardly implausible, ProPublica's analysis suggests they do with p < 0.01).
Captain Hindsight - being 100% accurate and having no bias - must predict that blacks have a higher recidivism rate. Yet because Captain Hindsight predicts a higher recidivism rate for blacks, he now has disparate impact.
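The thought experiment can be checked numerically. This is a toy sketch with invented base rates: a perfect predictor has exactly zero bias, yet its predicted-positive rates differ by group whenever the underlying base rates differ.

```python
import random

random.seed(1)

# Hypothetical base rates: group B recidivates more often than group A.
BASE_RATE = {"A": 0.3, "B": 0.5}

people = [(g, random.random() < BASE_RATE[g])
          for g in ("A", "B") for _ in range(50_000)]

# Captain Hindsight: predicts the true outcome every single time.
predictions = [(group, outcome, outcome) for group, outcome in people]

# Zero bias: E[theta_hat - theta] = 0, since theta_hat == theta exactly.
bias = sum(pred - actual for _, actual, pred in predictions) / len(predictions)

# Disparate impact: the predicted-positive rate still differs by group,
# purely because the underlying base rates differ.
def positive_rate(group):
    preds = [pred for g, _, pred in predictions if g == group]
    return sum(preds) / len(preds)

print(bias)                                    # exactly 0.0
print(positive_rate("A"), positive_rate("B"))  # unequal rates
```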
Seriously, you are calling standard mathematical terminology Orwellian? What's your angle here?
Your claims about the empirical means of recidivism rates do not prove what you think they prove. Different races might be misclassified at different rates for a variety of reasons: one race might be affected more by some high-variance predictor, or there could be composition effects (e.g., the distribution of blacks given a high score might differ from that of whites given a high score).
The way to factor out whether the scores are biased is to do the Cox survival analysis with interaction terms. Which they did. You just don't like the result.
Could you clearly lay out the statistical argument that you believe implies that E[\hat{\theta} - \theta] > 0?
And no, I'm not calling standard mathematical terminology Orwellian. What I'm calling Orwellian is your describing a biased system as unbiased (by attempting to reframe the discussion around a specific statistical definition, chosen by you).
If the predictor were biased, then you could build a more accurate score from both the original scores and race_factorBlack:score_factorHigh (and other interaction terms). I.e., you'd be building in a new bias to cancel the old bias, leaving an accurate predictor.
Their analysis doesn't show that this is possible.
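Here's what that signature would look like if it existed (a toy sketch with invented numbers): inject a known bias against one group, and a race-interaction correction then yields a strictly more accurate predictor. That improvement is what the interaction terms would have to demonstrate.

```python
import random

random.seed(2)

# Hypothetical setup: true recidivism probability is 0.4 for everyone,
# but the score overestimates group B by 0.15 -- a built-in bias.
TRUE_P = 0.4
BIAS = {"A": 0.0, "B": 0.15}

data = [(g, TRUE_P + BIAS[g], random.random() < TRUE_P)
        for g in ("A", "B") for _ in range(50_000)]

# Brier score: mean squared error of the predicted probability.
def brier(predict):
    return sum((predict(g, s) - y) ** 2 for g, s, y in data) / len(data)

original = brier(lambda g, s: s)
# "Build a new bias in to cancel the old bias": subtract a group-specific
# offset, i.e. use the race-by-score interaction to correct the score.
corrected = brier(lambda g, s: s - BIAS[g])

print(original, corrected)  # corrected score has lower error
```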
I have no interest in playing but-what-does-the-exact-dictionary-definition-say semantics games.
That's certainly reasonable, but actually you're responding to a statistical argument about a statistical study, made by a statistician.
It would seem tendentious to argue that these numbers are unconnected to systemic social bias, but he is not making such an argument.
bias: prejudice in favor of or against one thing, person, or group compared with another, usually in a way considered to be unfair.