Of course data contains biases. But again, please read the article I linked; algorithms will have a tendency to correct that bias.
The examples in the article you link to are not algorithmic bias at all. They consist of:
1) Humans at Facebook manipulating trending results.
2) Google's keyword algorithm (accurately) reflecting the fact that people with black names are more likely to have arrest records.
Let's distinguish "bias" from "accurately learning things you wish it wouldn't learn" or "accurately learning things you wish weren't true."
None of what I'm saying is remotely controversial. If I told you statistics could detect and correct bias in a mobile phone compass, you'd just think "cool stats bro". Is this article remotely controversial? https://www.chrisstucchio.com/blog/2016/bayesian_calibration...
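To make the compass example concrete, here's a toy sketch (my own illustration, not the linked post's code; the bias value, noise level, and conjugate Normal-Normal model are all assumptions) of statistically estimating and correcting a constant sensor bias:

```python
# Toy sketch: estimating a constant compass bias with a conjugate
# Normal-Normal model, given calibration readings at known headings.
import random

random.seed(0)
TRUE_BIAS = 7.5   # degrees the sensor is systematically off (unknown to us)
NOISE_SD = 3.0    # per-reading Gaussian noise

# Simulated calibration data: (known reference heading, sensor reading) pairs.
data = []
for _ in range(200):
    heading = random.uniform(0, 360)
    reading = heading + TRUE_BIAS + random.gauss(0, NOISE_SD)
    data.append((heading, reading))

# Normal prior on the bias: mean 0, sd 30 ("probably roughly calibrated").
post_mean, post_var = 0.0, 30.0 ** 2
noise_var = NOISE_SD ** 2

# Conjugate update: each residual (reading - heading) is a noisy
# observation of the bias, so the posterior tightens with each one.
for heading, reading in data:
    residual = reading - heading
    new_var = 1.0 / (1.0 / post_var + 1.0 / noise_var)
    post_mean = new_var * (post_mean / post_var + residual / noise_var)
    post_var = new_var

print(f"estimated bias: {post_mean:.2f} degrees (true: {TRUE_BIAS})")
```

Once you have the posterior on the bias, you just subtract it from future readings. Nothing about this machinery cares what the sensor is measuring.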
The specific feedback loop you describe - variable detection probability => variable # of detections - can be directly mitigated. For a non-controversial example drawn from sensor networks (sensors report events with a delayed reaction; the longer you wait, the more events you detect), see here: https://www.chrisstucchio.com/blog/2016/delayed_reactions.ht...
(You can find similar examples all over the place. I just link to the ones I wrote because they spring immediately to mind.)
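As a concrete sketch of that kind of detection-probability correction (my own toy model, not the linked post's: I'm assuming reporting delays are exponentially distributed, which is the simplest case), you divide the observed count by the probability that an event would have been detected by now:

```python
# Toy sketch: correcting an undercount caused by delayed event reports,
# assuming reporting delays are Exponential(RATE).
import math
import random

random.seed(1)
RATE = 0.5           # delay rate: mean reporting delay = 1/RATE = 2 time units
TRUE_EVENTS = 1000   # events that actually occurred (unknown in practice)
WINDOW = 3.0         # how long we've waited since the events occurred

# Simulate: each event's report arrives after an exponential delay;
# we only observe the reports that arrived within our window.
observed = sum(1 for _ in range(TRUE_EVENTS)
               if random.expovariate(RATE) <= WINDOW)

# The expected fraction detected by time WINDOW is the exponential CDF,
# so dividing the observed count by it de-biases the estimate.
detection_prob = 1.0 - math.exp(-RATE * WINDOW)
estimated = observed / detection_prob

print(f"observed {observed}, estimated {estimated:.0f}, actual {TRUE_EVENTS}")
```

The raw count is biased low in a perfectly predictable way, so the correction is one division. The same move works whenever you can model why detection probability varies.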
For a compass, a sensor network, adtech, or quant finance, the idea that machine learning can correct biased inputs is not remotely controversial. The notion that statistics suddenly stops working when the bias in question is racism is just silly anthropomorphism.