Not everyone shares your politics.
I'm perfectly happy with algorithms detecting that certain people are more likely than average to be safe drivers, giving them lower rates, and concentrating premiums on the groups more likely to be in accidents, even if I don't understand why Armenians (in your example) get in more crashes.
The distinction is that with young male drivers, we have two supporting classes of information:
* A clear statistical observation
* A conceptual understanding of why the observation is likely to be valid
With machine learning, we might have neither of these classes of information.
<shrug> What's the null hypothesis here? From my point of view, you're the one defining into existence a problem that isn't actually a problem.
A great example is how resentful many college-age white men are that universities require them to take sensitivity courses designed to reduce the incidence of campus rape, even though, strictly speaking, men of that age are the overwhelming majority of bad actors in that environment. Statistically and logistically speaking, it's smarter and cheaper to require all college-age men to take courses reminding them that rape is not okay than to deal with the moral, legal, and healthcare costs of the alternative.
In some cases, our relatively primitive algorithms pick up on correlations that should not be acted on because we're actively working to correct them. For example, it would be inappropriate to pre-reject job applicants based on skin color just because, in a given culture, people with that skin color are less likely to hold a college degree.
Even if that insight is statistically correct, it usually reflects something society is hoping to correct, or something applicants should be given the benefit of the doubt about; otherwise, very serious backlash will emerge.
Acting on existing categories may reinforce them, or it may not. For now, we'll have to decide case by case. Maybe one day modeling techniques and data sources will become sophisticated and robust enough to make every decision for us. That day is not today.
I'm sure that I've experienced increased costs based on this.
The issue is that I don't think the morality of forcing my will on other people depends at all on whether their current behavior happens to be advantageous or disadvantageous to me.
Why on earth would I believe anyone who says this? I don't think humans are capable of that kind of abstraction. It's not a matter of wanting to; biology itself is at odds with this mindset.
But also, many markets are not effectively free markets. Insurance is a good example, and health care is an especially good one. In those cases, it's very dangerous to start agreeing that insurance agencies can play "pass the puck" with human life.
And I'm not sure what sexual orientation has to do with any of this.
Nice bait, I guess?