The problem being worked on here is "what if Armenians shouldn't get car loans because they repay them at lower rates than other groups?" That is, an algorithm that classifies people accurately can produce results we consider "unfair".
This research illustrates that you can't simultaneously have maximal accuracy and fairness. You need to explicitly decide how much accuracy you are willing to give up to get fairness. I.e., it's computing the tradeoffs needed to evaluate the ethical question: how many Armenian deadbeats should you extend credit to in order to be "fair"?
Go play with the simulation to see. The various fairness criteria all yield lower-than-maximal profit.
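The tradeoff can be sketched in a few lines. This is a minimal toy model, not the simulation's actual one: the score distributions, payoffs, and the two hypothetical groups "A" and "B" are all made-up assumptions. A lender picks a credit-score cutoff per group; one "fair" criterion (group-unaware: a single shared cutoff for everyone) necessarily earns no more than separate profit-maximizing cutoffs.

```python
import numpy as np

scores = np.arange(1, 100)  # uniform grid of credit scores (toy assumption)

def repay_prob(score, group):
    # Hypothetical repayment probabilities: group "B" repays
    # slightly less often than group "A" at every score.
    base = score / 100.0
    return base * (0.95 if group == "B" else 1.0)

def profit(threshold, group, gain=300, loss=700):
    # Expected profit from lending to every applicant at or above
    # the cutoff: a repayment earns `gain`, a default costs `loss`.
    accepted = scores[scores >= threshold]
    p = repay_prob(accepted, group)
    return float(np.sum(p * gain - (1 - p) * loss))

# Max-profit policy: each group gets its own optimal cutoff.
t_a = max(scores, key=lambda t: profit(t, "A"))
t_b = max(scores, key=lambda t: profit(t, "B"))
max_profit = profit(t_a, "A") + profit(t_b, "B")

# Group-unaware "fair" policy: one shared cutoff for both groups.
t_shared = max(scores, key=lambda t: profit(t, "A") + profit(t, "B"))
fair_profit = profit(t_shared, "A") + profit(t_shared, "B")

print(f"separate cutoffs A={t_a}, B={t_b}: profit {max_profit:.0f}")
print(f"shared cutoff {t_shared}: profit {fair_profit:.0f}")
```

Because the groups' repayment curves differ, their optimal cutoffs differ, so forcing one shared cutoff strictly reduces profit here. That gap is the price of this particular fairness criterion; other criteria (demographic parity, equal opportunity) impose different constraints and different costs.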