One hypothetical example: suppose there exists a group G whose members, due to discrimination, cannot attend the top n% of universities. Your company uses the rank of the university a candidate attended as one of the features fed to its favorite machine learning algorithm, but the dataset it was trained on excluded group G. Within this group, the best university its members can attend is some school X, which by definition is not in the top n%. Had the algorithm been trained on data including group G, it would have observed that school X is highly correlated with success within that group, even though it is not in the original training set. As is, your ML system assigns a low probability of success to members of group G.
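A minimal sketch of this effect, under assumed synthetic data: the school ranks, success rates, and the choice of rank 60 for school X are all invented for illustration, and logistic regression stands in for the company's unspecified algorithm. The model, trained only on the majority group, rates school X's graduates poorly no matter how well they actually do.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Majority group: attends schools ranked 1 (best) through 100.
n = 5000
rank = rng.integers(1, 101, size=n)
# Success probability declines with rank for the majority group.
p_success = 1 / (1 + np.exp((rank - 30) / 10))
y = rng.random(n) < p_success

# Group G is absent from the training data entirely.
model = LogisticRegression().fit(rank.reshape(-1, 1), y)

# Group G is confined by discrimination to school X, here ranked 60.
# Within G, school X graduates may succeed often -- but the model
# never saw them, so it only knows rank 60 as a weak signal.
p_top = model.predict_proba([[5]])[0, 1]   # elite school
p_X = model.predict_proba([[60]])[0, 1]    # group G's school X
print(f"P(success | rank 5)  = {p_top:.2f}")
print(f"P(success | rank 60) = {p_X:.2f}")
```

The model is not "wrong" about its training distribution; the harm comes from the sample-selection bias in the data it was given.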
Issues like this will be hard to prevent. That doesn't mean we shouldn't work hard to make real innovations in ML, but I think the legal approach of a "right to explanation", as analyzed in http://arxiv.org/pdf/1606.08813v3.pdf and recently added to European law, is nevertheless a helpful tool for ensuring accountability.