The entire purpose of the chooser system is to discriminate between people; they want to investigate only those people likely to be cheating. If they really wanted to avoid discrimination, they should be choosing whom to investigate by drawing straws.
They have laws against certain kinds of discrimination, e.g. on the basis of race or gender. If those facts are used as input to the chooser, then race- and gender-discrimination is inevitable. There's not usually any protection against discrimination for e.g. being short, or having red hair, or speaking with a regional accent; I have no idea how such characteristics are correlated with cheating on welfare claims.
It's unclear if "inability to sue" applies in this case. For example:
>Discriminating against people based on their gender and ethnic background is illegal in the Netherlands, but converting that information into data points to be fed into an algorithm may not be. This gray area, in part the result of broad legal leeway granted in the name of fighting welfare fraud, lets officials process and profile welfare recipients based on sensitive characteristics in ways that would otherwise be illegal. Dutch courts are currently reviewing the issue.
It’s called “money”, which together with credit scoring functions to make sure you abide by the rules society sets: no pay, no play.
I can get money from almost anybody who's willing to give it to me, for any reason; the government's influence over and visibility into this are limited. I can earn money in some other currency or jurisdiction and convert it, too. I agree the US credit bureaus are problematic (they are not a "Western" thing, they're an American thing), but they're nothing like the social credit score.
With China's social credit score there's one number in a database which the government can adjust as they see fit; if they decide to penalize you, you're no longer able to participate in a vast array of social services and functions. It is total control, and you become a pariah overnight.
This comment was whataboutism, I guess. https://en.wikipedia.org/wiki/Whataboutism
It’s already looking like a bad piece of journalism in the first part:
” Being flagged for investigation can ruin someone’s life, and the opacity of the system makes it nearly impossible to challenge being selected for an investigation, let alone stop one that’s already underway. One mother put under investigation in Rotterdam faced a raid from fraud controllers who rifled through her laundry, counted toothbrushes, and asked intimate questions about her life in front of her children.”
Here the problem is not the algorithm; it's the investigators.
Another ethical problem for me: the flagging system relied partly on anonymous tips from neighbors. I am not an expert, but I feel more at ease with a system that relies on a selection algorithm plus randomness than with one based on informants.
I think the problem was the processes around the algorithm, not its existence itself. The journalist seems to assume throughout the piece that the algorithm will become the main or only way to identify fraudsters. If that's the case, it's terribly wrong, because how are you training your algorithm then?
Most of the time, the piece tries to put the reader in an emotional state of fear and anger and does no real analysis at all, while faking it with a lot of numbers and graphs.
Sorry for the long rant, but I am surprised that this came from Wired, which I consider quite good on tech topics, and that it's on HN's second page.
I am against government scoring and algorithms for legal / police cases precisely because it can be badly used by powerful people.
Am I the only one who feels that it's not a good article?
Indeed. Going deeper, the 'problem' is a social/cultural belief that doing X at scale, using a computer, is somehow more ethical than having a bunch of people do the same immoral thing. Computer automation and algorithms become a moral justification (or at least a Hail Mary) for immoral acts.
There is at once a diffusion of responsibility, a causal disconnection of consequential acts, a reassignment of responsibility, and - given that we bow down to machines as our masters rather than our tools - a change from choice to a belief in the "inevitability" of unstoppable processes.
Together these make us no longer question whether:
1) Computers are more reliable, consistent and accurate than people
2) Computers are fairer/just
3) Computers are more comprehensive/inclusive or selective/prejudicial
4) Computers are actually economically more effective
Of course, this has been going on since the 1960s at least and was part of "systems analysis for automation". I think we have regressed. Whereas it was commonplace to sceptically question technology in the 60s and 70s, today we start with the assumption that it must be "better" and then have to figure out how maybe it's not.
Humans seem more subject to bias than algorithms are. Algorithms only look at data, but humans are additionally vulnerable to stereotypes and prejudices from society.
Furthermore, using an algorithm gives voters an opportunity to have a debate regarding how best to approach a problem like welfare fraud.
Human judgment relies on bureaucrats who are often biased and unaccountable. It's infeasible for voters to audit every decision made by a human bureaucrat. Replacing the bureaucrat with an algorithm and inviting voters to audit the algorithm seems a heck of a lot more feasible.
I give the city of Rotterdam a lot of credit for the level of transparency they demonstrated in this article. If they want to be successful with algorithmic risk scores, I think they should increase the level of transparency even further. Run an open contest to develop algorithms for spotting welfare fraud. Give citizens or representatives information about the performance characteristics of various algorithms, and let them vote for the algorithm they want.
In the same way politicians periodically come up for re-election, algorithms should periodically come up for re-election too. Inform voters how the current algorithm has been performing, and give them the option to switch to something different.
One might think that, but algorithms are built by humans, so they (algorithms) automatically have the same biases as the humans that built them.
If I'm a chemist, and I write an algorithm to do something related to chemistry, that algorithm does not "automatically" know everything I know about chemistry.
Bias works the same way.
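To make this disagreement concrete, here's a minimal sketch (toy data, all rates hypothetical) of the usual mechanism: bias doesn't transfer "automatically" from an algorithm's author, but it does creep in through skewed training data. If both groups commit fraud at the same true rate but one group was historically investigated twice as often, a model trained on those historical labels faithfully reproduces the investigators' skew.

```python
import random

random.seed(0)

TRUE_FRAUD_RATE = 0.05  # identical for both groups, by construction

def historical_label(group):
    # Historical data: both groups commit fraud at the same rate, but
    # group B was investigated twice as often, so twice as much of its
    # fraud ends up recorded in the labels.
    investigated = random.random() < (0.8 if group == "B" else 0.4)
    fraud = random.random() < TRUE_FRAUD_RATE
    return fraud and investigated  # only investigated fraud gets labelled

training = [(g, historical_label(g))
            for g in ("A", "B") for _ in range(20_000)]

def labelled_rate(group):
    # The "algorithm" just scores each group by its labelled fraud rate.
    flags = [flag for g, flag in training if g == group]
    return sum(flags) / len(flags)

print(f"A: {labelled_rate('A'):.3f}")  # close to 0.05 * 0.4 = 0.02
print(f"B: {labelled_rate('B'):.3f}")  # close to 0.05 * 0.8 = 0.04
```

The model's author never wrote anything biased; the 2:1 skew comes entirely from who got investigated in the past. That's the sense in which "bias works the same way" as domain knowledge: it enters only through what you feed the system.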
I think it is morally justifiable as a residency requirement, but not justifiable to let people live there without being able to receive government support.
I think it's a situation where the government wants to be racist, or at least xenophobic, and the citizens agree, but the law prevents it. Accenture was drafted in to get around the law.
Given the choice between accepting a small number of immigrants with government support, or a large number of immigrants without government support, the second option seems more humane to me.
By choosing the first option, you are effectively creating a privileged class based on whatever criteria your country uses to accept immigrants. The underclass might be out of your sight, but it still exists.
As long as you are using some criteria to accept immigrants, "willing to work without government support" (at least until they become fluent in the local language) seems like a perfectly reasonable criterion to me. And it is a criterion that gives your nation the capacity to accept a larger number of migrants without breaking the budget -- thereby helping more people from developing countries improve their economic situation.
Must be a Cloudflare outage?
IMO, the article is only good for better understanding the abysmal failure of the Rotterdam, Netherlands 2017-2021 government benefits random audit program. The authors allege that they contacted other cities that set up something similar but don't name any. Related reading: https://www.wired.com/story/welfare-algorithms-discriminatio...
Poor = suspicious
Algorithms give the rank and file the option to defer all accountability to a machine. The algorithms make mistakes, but no one gets blamed or fired for trusting them in the first place.