A trivial example: what if you trained a classifier to predict whether a person will be re-arrested before they go to trial? Some communities are policed more heavily, so you would tend to reinforce the bias that already exists and provide more ammunition to those arguing for further bias in the system - a feedback loop, if you will.
Or what if some protected group ends up needing a higher down payment because the group isn't understood well enough for you to distinguish those who will repay your loans from those who won't? Maybe educational achievement is a really good predictor for one group, but less effective for another. Is it fair to use the protected class (or any information correlated with it) when doing so is essentially machine-enabled stereotyping?
Recently it has been noted that NLP systems trained on large corpora of text tend to exhibit society's biases: they assume that nurses are women and programmers are men. From a statistical perspective the correlation is there, but we tend to be more careful than a machine about how we use this information. We wouldn't want to use it to constrain our search for people to hire to just those who fulfil our stereotypes, but a machine would. This paper has some details on such issues: http://arxiv.org/abs/1606.06121
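You can see this effect for yourself. Here's a minimal sketch, assuming you have gensim installed and are willing to download a small pretrained GloVe model (the specific model name and the exact neighbours it returns are illustrative, not taken from the paper above):

```python
# Sketch: probing gender associations baked into pretrained word embeddings.
# Requires gensim; downloads ~65 MB of GloVe vectors on first run.
import gensim.downloader as api

vectors = api.load("glove-wiki-gigaword-50")  # pretrained GloVe embeddings

# Classic analogy arithmetic: man -> programmer as woman -> ?
print(vectors.most_similar(positive=["woman", "programmer"], negative=["man"], topn=5))

# Compare how strongly each occupation associates with gendered pronouns.
for job in ["nurse", "programmer"]:
    print(job,
          "similarity to 'she':", round(vectors.similarity(job, "she"), 3),
          "similarity to 'he':", round(vectors.similarity(job, "he"), 3))
```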
I don't think there are any easy solutions here, but I think it's important to be aware that data is only a proxy for reality and fitting the data perfectly doesn't mean you have achieved fair outcomes.
With biases about people based on immutable characteristics (sex and race), we need to be clear about why stereotypes are bad and what we hope to achieve by eliminating stereotype-based reasoning. There is a great deal of hypocrisy and pretense around this subject, but only by being explicit and unapologetic can we explain to a computer what it is we want to achieve.
Stereotypes are not bad because they are false. Many stereotypes, even negative and unpleasant ones about vulnerable minorities, are statistically true at this time. A stereotype is nothing but a certain kind of model, and indeed, models built on sex and race stereotypes may perform better than those that aren't.
Nonetheless, we have strong norms against using stereotypes in law, public life and employment, because the outcome of such reasoning would be intrinsically unjust, and because of a long history of political struggle against a society that explicitly discriminated on the basis of race, sex, sexual orientation and so on. Conservatives will disagree with these premises, but we implicitly reject a conservative, discriminatory vision of society. Rejecting oppressive and unjust stereotypes is an unavoidably political act.
We recognize that:
[0] It is a category error to treat humans like other kinds of objects which can be measured, because human beings are intelligent and can alter their behavior. Telling humans that science has discovered certain facts about their behavior may well change their behavior, or even re-order society around these new 'facts'. There is no neutral ground; doing statistics on humans has ethical implications. An awful example of this was the eugenics movement that inspired the Nazis.
[1] Stereotypes can reify themselves. A society which treats women as less than men will end up as a society where women are less than men, and are systematically harmed - they will be less educated, and will get treated as less intelligent. This is a kind of positive feedback loop between the widespread endorsement of a stereotype and its being 'confirmed as true'.
[2] It's intrinsically unjust to judge individuals, especially in a negative way, based on the behavior of others. This is a matter of justice, and it overrides considerations of predictive accuracy.
[3] We live in a society that is still unjust, racist, sexist and so on. We want a society where fair and equal opportunities are given to everyone, and where everyone has a chance to escape from negative stereotypes. This overrides efficiency. In that sense, we can hope to change the nature of society by changing the nature of claims that are widely held as 'truth'.
These are sophisticated premises for rejecting stereotype-based reasoning, and they come from rejecting the society-wide outcomes of treating stereotypes as truth. We know what these effects are, because we know what a discriminatory society looks like. But this is the kind of reasoning we can use to build non-discriminatory, socially just models that do not harm people simply for being who they are.
It can be self-reinforcing. Imagine some new demographic group of customers appears, and without any data you make some loans to them. The actual repayment rate will be low, not because that group has a worse distribution than other groups, but simply because you couldn't identify the lowest-risk members. A simplistic ML model would conclude that the new group is more risky.
Of course, smart lenders understand that in order to develop a new customer demographic they need to experiment by lending, with the expectation that their first loans will have high losses, but that in the long run learning about how to identify the low-risk people from that demographic is worthwhile. And they correct for the fact that the first cohort was accepted blind when estimating overall risk for the group.
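A minimal simulation of that selection effect (all numbers are made up; the point is that both groups are drawn from the exact same risk distribution, and the only difference is whether the lender could screen):

```python
# Sketch: why a group you lend to "blind" looks riskier than it really is.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# True default probability per applicant, identical distribution in both groups.
established_risk = rng.uniform(0.0, 0.4, n)
new_group_risk = rng.uniform(0.0, 0.4, n)

# Established group: the lender has a (noisy) risk score and lends only to the safer half.
score = established_risk + rng.normal(0, 0.05, n)
approved_established = established_risk[score < np.median(score)]

# New group: no data yet, so the first loans go to a random (unscreened) sample.
approved_new = rng.choice(new_group_risk, size=len(approved_established), replace=False)

defaults_established = rng.random(len(approved_established)) < approved_established
defaults_new = rng.random(len(approved_new)) < approved_new

print("Observed default rate, screened established group:", defaults_established.mean())
print("Observed default rate, unscreened new group:      ", defaults_new.mean())
# A model trained naively on these outcomes would "learn" that the new group is
# riskier, even though the two underlying populations are identical.
```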
However, if blacks and whites need to be treated fundamentally differently in order to make accurate loan decisions, then this argument applies. I.e., perhaps whites need a 20% down payment for a loan to be a good financial risk but blacks need 40% (or vice versa).
I wonder how many people calling algorithms racist will endorse this conclusion. It sounds kind of...racist.
(Note that I don't use "racist" as a synonym for "factually incorrect" or "we should not consider this idea", but merely "this sounds like the kind of thing a white nationalist might say, or that Trump would be criticized for saying".)
http://www.nytimes.com/2015/10/31/nyregion/hudson-city-bank-...
> The government’s analysis of the bank’s lending data shows that Hudson’s competitors generated nearly three times as many home loan applications from predominantly black and Hispanic communities as Hudson did in a region that includes New York City, Westchester County and North Jersey, and more than 10 times as many home loan applications from black and Hispanic communities in the market that includes Camden, N.J.
That's, of course, just recent history. Redlining from the 1960s on would be enough to adversely affect the housing history data of minority groups even today. Treating everyone equally in the eyes of the algorithm is certainly the easy route to take, but as the non-algorithm expert MLK Jr. pointed out:
> Whenever the issue of compensatory treatment for the Negro is raised, some of our friends recoil in horror. The Negro should be granted equality, they agree; but he should ask nothing more. On the surface, this appears reasonable, but it is not realistic.
One hypothetical example: suppose there existed a group G that was not able to attend the top n% of universities due to discrimination. Your company uses some rank of university attended as one of the features fed into its favorite machine learning algorithm, but the dataset you trained on excluded group G. Within this group, the best university its members have been able to attend is X, which is by definition not in the top n%. Had the algorithm been trained on data from this group, it would have observed that school X is highly correlated with success within the group, even though X doesn't show up that way in the original training set. As it stands, your ML system assigns a low probability of success to members of group G.
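A sketch of that hypothetical with synthetic data (every number and rank below is made up for illustration): a model trained only on the majority group learns "higher university rank means success", then scores group G low even though, within G, the best available school X is just as predictive.

```python
# Sketch: a feature learned on the majority group misfires on an excluded group G.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 10_000

# Majority group: university percentile rank 0-100 (higher = better);
# success probability rises with rank.
rank_majority = rng.uniform(0, 100, n)
success_majority = rng.random(n) < rank_majority / 100

# Group G: barred (in this hypothetical) from the top 20% of universities,
# so rank is capped at 80; within G, relative rank is what predicts success.
rank_g = rng.uniform(0, 80, n)
success_g = rng.random(n) < rank_g / 80
print("Within G, rank still predicts success:", np.corrcoef(rank_g, success_g)[0, 1])

# Train only on the majority group, as in the scenario above.
model = LogisticRegression().fit(rank_majority.reshape(-1, 1), success_majority)

# G's best members (rank ~80, i.e. "school X") are scored well below
# comparable majority-group members at rank ~100.
print("P(success) at rank 100 (majority's best):   ", model.predict_proba([[100]])[0, 1])
print("P(success) at rank 80 (G's best, school X): ", model.predict_proba([[80]])[0, 1])
```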
Issues like this will be hard to prevent. That doesn't mean we shouldn't work hard to make real innovations in ML, but I think the legal approach of a "right to explanation", as analyzed in http://arxiv.org/pdf/1606.08813v3.pdf and recently added to European law, is nonetheless a helpful tool for ensuring accountability.
For instance, when dealing with immigrants, US banks often fail to see any signal at all because their credit reporting only covers US institutions, and they don't know how to verify employment or schooling abroad. So to start making loans to immigrants from any given country, they need to figure out what the signals are (job, schooling, ...) and how they correlate with risk.
Is that really necessary?
Some of us are treating the political system like a black box; I'm just sending a different corrupt payload at it to see what the output is.
There is an old joke about how people use statistics like a drunk uses a lamp post: for support, not illumination. Given this, we can expect people to use AI like everything else in statistics - to support the agenda of whoever is operating it while deflecting personal accountability for the results, because artificial intelligence. It's just an obfuscated and sophisticated version of "Computer says no."
The alternative is the near-future headline, "AI confirms racists, sexists on to something."
This is pretty much the only important concept for figuring out how we will use this tech politically.
Because it takes genius-level intelligence to figure out whether you're just telling yourself what you want to hear, and incredibly rare discipline to remember to [keep on trying to] do so, individuals and tiny groups may be able to use AI for these sorts of things, but large groups, municipalities, states, corporations, etc. never will.
The systems we can understand and manage as a group are vastly simpler than those we can understand and manage as individuals.
If machine learning algorithms are unfairly discriminating against some group, then they are making sub-optimal decisions and costing their users money. This is a self-righting problem.
However, a good machine learning algorithm may uncover statistical relationships that people don't like; for example, perhaps some nationalities have higher loan repayment rates. In these cases, the algorithm is not at odds with reality; the angsty humans are. If some people want to force machines to be irrational, they should at least be honest about their motivations and stop pretending it has anything to do with "fairness".
For unfair stereotypes it's simple: you just ignore them. But some group differences will be real - it would be a mighty coincidence if so many diverse groups magically happened to be identical in all respects.
So it's up to society to decide what we will do if it turns out that, other observable factors being equal, people of race/religion/ethnic background/etc. X actually are 10% more likely to default on a loan.
The issue here isn't that machine learning gives wrong answers, it's that our definition of 'fair' is irrational.
Hypothetical possibility: members of group X are not perceived as being as effective as members of group Y because of pervasive bias among employers that presumes their incompetence. They are generally perceived to be 80% as effective as a standard Y member despite actual 100% performance, and are paid accordingly. A member of X therefore needs to be 125% as effective as a Y member to be perceived at 100% Y efficiency, because stereotypes color the perception of their work and their performance can't be evaluated objectively.
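To make the arithmetic behind that hypothetical explicit (a minimal sketch; the 0.8 discount factor is just the number assumed above):

```python
# Sketch of the hypothetical above: perception discounts group X's output by 20%.
bias_factor = 0.8                    # perceived output = 0.8 * actual output
actual_needed = 1.0 / bias_factor    # actual output required to be *perceived* at 100%
print(actual_needed)                 # 1.25 -> a member of X must perform at 125%
```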
Some non-hypothetical studies touching on this:
http://www.nber.org/papers/w9873.pdf
http://www.pnas.org/content/109/41/16474.full.pdf+html
http://advance.cornell.edu/documents/ImpactofGender.pdf
http://www.socialjudgments.com/docs/Uhlmann%20and%20Cohen%20...
However, I think a case can be made that certain protected attributes should be censored: not to prevent the algorithm from making optimal decisions, but to prevent it from overfitting on those attributes. Which, if you think about it, is essentially what discrimination is.
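One way to operationalize that (a minimal sketch with made-up column names; note that dropping the column alone is a weak guarantee, since correlated features can leak the same information):

```python
# Sketch: "censoring" a protected attribute by excluding it from the feature set.
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Hypothetical applicant data; all columns and values are made up for illustration.
df = pd.DataFrame({
    "income":         [40_000, 85_000, 52_000, 31_000, 77_000, 60_000],
    "credit_score":   [620, 710, 680, 590, 740, 655],
    "protected_attr": ["A", "B", "A", "B", "A", "B"],
    "repaid":         [0, 1, 1, 0, 1, 1],
})

features = df.drop(columns=["protected_attr", "repaid"])  # censor the attribute
model = LogisticRegression().fit(features, df["repaid"])
print(model.coef_)

# Caveat: proxies such as zip code or school can still encode the censored
# attribute, so censoring by itself doesn't guarantee non-discrimination.
```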
(Good) ML algorithms don't suffer from human biases; they don't know that there's a categorical difference between e.g. race and shoe size, so we don't need to hide race from these algorithms. That is, of course, unless one's explicit goal is to cripple and pessimize the algorithm for political reasons.
Specifically, we should train a classifier on non-Asian minorities. We should train a different classifier on everyone else. Then we should fill our quotas from the non-Asian minority pool and draw from the primary pool for the rest of the students.
As this blog post describes, no matter what you do you'll reduce accuracy. But every other fairness method I've seen reduces accuracy both across special interest groups and also within them. Quotas at least give you the best non-Asian minorities and also the best white/Asian students.
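Concretely, the scheme would look something like this (a rough sketch; the pool sizes, the quota, and the two score arrays are placeholders for whatever classifiers you train on each group):

```python
# Sketch: fill a fixed quota from one pool using its own classifier's scores,
# then fill the remaining seats from the other pool using a second classifier.
import numpy as np

def admit(quota_pool_scores, main_pool_scores, quota, class_size):
    """Return indices of admitted candidates from each pool, best-scored first."""
    take_quota = np.argsort(quota_pool_scores)[::-1][:quota]
    take_main = np.argsort(main_pool_scores)[::-1][:class_size - quota]
    return take_quota, take_main

# Hypothetical scores produced by the two separately trained classifiers.
rng = np.random.default_rng(2)
quota_pool_scores = rng.random(500)    # e.g. non-Asian minority applicants
main_pool_scores = rng.random(2_000)   # everyone else

admitted_quota, admitted_main = admit(quota_pool_scores, main_pool_scores,
                                      quota=100, class_size=1_000)
print(len(admitted_quota), "admitted from the quota pool,",
      len(admitted_main), "from the main pool")
```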
Quotas also have the benefit of being simple and transparent - any average Joe can figure out exactly what "fair" means, and it's also pretty transparent that some groups won't perform as well as others and why. In contrast, most of the more complex solutions obscure this fact.
[1] Here "best" is within the framework of requiring a corporatist spoils system. I don't actually favor such a system, but I'm taking the existence of such a spoils system as given.
Small data is actually kind of the problem. When you have limited ability to process data, or limited data density, your segmentation ability is restricted to coarse attributes like state, county, zip code, credit score, whether you own a home, etc.
Big data processing, big bad ML algorithms and the ubiquity of data are making advanced segmentation available, which arguably lets us produce more equitable outcomes.
If this is the case, then it should be detected and ML should NOT be used for the minority class. There are many classifiers out there which work on one-class problems.
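For example (a minimal sketch; the features, thresholds and `nu` parameter are arbitrary), scikit-learn's one-class estimators can be fit on the well-represented class alone and then used to flag whether a new case even resembles the data the main model was trained on:

```python
# Sketch: detect applicants who don't resemble the training population and
# route them to manual review instead of trusting the main model's score.
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(3)

# Features of the population the main model was actually trained on (hypothetical).
training_population = rng.normal(loc=0.0, scale=1.0, size=(5_000, 4))

novelty_detector = OneClassSVM(nu=0.05, gamma="scale").fit(training_population)

new_applicants = np.vstack([
    rng.normal(0.0, 1.0, (3, 4)),   # looks like the training population
    rng.normal(5.0, 1.0, (3, 4)),   # looks like a group the model never saw
])

flags = novelty_detector.predict(new_applicants)  # +1 = inlier, -1 = outlier
for i, flag in enumerate(flags):
    decision = "use model score" if flag == 1 else "route to manual review"
    print(f"applicant {i}: {decision}")
```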