We've seen this exact thing with other machine learning algorithms - there was the one in the news recently that kept classifying a photo of a man in a kitchen as a woman, because its training data had overwhelmingly associated kitchens with women.
I guess the question worth asking explicitly is whether biases that arise for entirely logical reasons are defensible - for instance, if you start with an industry where (for whatever reason) men are paid substantially more than women, is it okay to poach people by offering their current salary plus a fixed delta? I would say no, because the goal of legal policies like this is to achieve a specific result in society, not to punish bad people in charge of companies; so the question is not whether anyone had a bad motive, but whether the result is being achieved. If you're allowed to apply an unbiased algorithm to biased starting data and have it yield an equally biased output, you're not actually solving the bias.
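To make the arithmetic concrete, here's a minimal sketch (the salaries and the delta are hypothetical numbers, not from any real industry) of how a perfectly even-handed offer rule carries the input gap straight through to the output:

```python
FIXED_DELTA = 10_000  # the same bump for everyone - no bias in the rule itself

def poaching_offer(current_salary: int) -> int:
    """Offer current salary plus a fixed delta, regardless of who you are."""
    return current_salary + FIXED_DELTA

# Biased starting data: men in this hypothetical industry earn more.
current_salaries = {"male avg": 100_000, "female avg": 80_000}

for group, salary in current_salaries.items():
    print(f"{group}: {salary} -> offer {poaching_offer(salary)}")

# Output:
#   male avg: 100000 -> offer 110000
#   female avg: 80000 -> offer 90000
# The 20,000 gap in the inputs survives untouched in the outputs.
```

Note that a fixed delta at least keeps the absolute gap constant; a percentage-based bump (as raises usually are) would actually widen it over time.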