> , it's going to be "We showed 7 billion people tweet X and 7 billion people tweet Y and tweet X caused 5% more people to engage so we tweaked this value".
There is an entire new subfield of ML tackling this problem, with whole conferences now dedicated to the topic. It is not an easy problem, but it is not an impossible one.
There are hundreds of researchers working on fairness, interpretability, trust, and explainability in ML, and many of them are working on models much, much bigger than anything Twitter's feed might involve.
This is a good starting point:
[1] https://www.fatml.org
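To give a flavor of what this research produces in practice, here's a minimal sketch of permutation importance, one common interpretability technique: shuffle one feature at a time and see how much the model's error grows. The "model", data, and numbers below are all made up for illustration.

```python
import random

# Pretend ranking model: engagement score driven mostly by feature 0.
def model(features):
    return 3.0 * features[0] + 0.5 * features[1]

random.seed(0)
data = [[random.random(), random.random()] for _ in range(1000)]
targets = [model(row) for row in data]

def mse(rows):
    return sum((model(r) - t) ** 2 for r, t in zip(rows, targets)) / len(rows)

baseline = mse(data)  # zero here, since targets come from the model itself

# Shuffle each feature column in turn; the error increase is its importance.
for i in range(2):
    shuffled = [row[:] for row in data]
    col = [row[i] for row in shuffled]
    random.shuffle(col)
    for row, v in zip(shuffled, col):
        row[i] = v
    print(f"feature {i}: importance {mse(shuffled) - baseline:.3f}")
```

Feature 0 comes out far more important than feature 1, matching its larger weight in the model. The same idea applied to a real feed-ranking model would tell you which signals actually drive what people see.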
> And sure, you can say "Well that's a bad way of designing the algorithm" but then what you're really saying is that you don't want to open source the algorithm at all, you want to re-write the algorithm to satisfy your sense of how the world should work with no evidence it'll actually work.
You can still open source multiple steaming piles of shit and then let the community improve them until they are more widely understandable and trusted. See [1] again.