- Attempted coup - Radicalized and misinformed supporters (majority) with an extremist, armed element (minority) - Extremely dangerous moment for free and fair elections in the West
What would the world have looked like if, in 1923, the disaffected Germans who took part in that putsch had been identified en masse, and serious efforts had been made to re-integrate them into society while addressing the systemic issues they faced?
Instead, we ended up doing https://en.wikipedia.org/wiki/Denazification several years later after a lot of death.
I think there is a very good argument for identification and holding people to account. There also needs to be very, very robust adherence to due process - AI identification is not proof, and a suitable alibi should be step 1 in invalidating it.
Edit: if you're referencing just how people treat one another one-on-one in the culture, well, the forever-Trumpers just need to learn to treat people like people again. Apologizing helps, but simply being friendly and polite goes a long way.
Are you arguing violent extremism and trying to overthrow the government is synonymous with "right-leaning views"?
No one is deplatforming Mitch McConnell, Mike Lee, Ben Sasse, John Thune and many others, all of whom hold "right-leaning views", some even further to the right than the president.
Your argument is a strawman. It's like when Twitter bans jihadist leaders and someone asks "why are Muslims being deplatformed?" That's not what's happening.
As much as they hate the far left, I believe that progressives such as Bernie and AOC are actually far better for them, and everyone they know, than the current GOP leadership.
Here's a crazy thought. How about we do nothing and just arrest anyone that breaks laws, right or left.
Expulsion can be appropriate for dealing with people who harm society.
It should be used with caution but does have its place.
De-integrating (so to speak) may sometimes be a required first step before effective "re-integrating" can occur.
In any case, mass democracy is on the way out, everywhere. It is just not compatible with post-industrial society and will be progressively less so in the future. The question is about the particular way in which it will happen (anything from outright dictatorship to some form of democracy with prequalification)...
> "From 1945 to 1950, the Allied powers detained over 400,000 Germans in internment camps in extrajudicial fashion in the name of denazification."
It’s worked out pretty well for the Germans, wouldn’t you say?
The US is a two party state. So extremists just need to infiltrate one party to corrupt the system, especially if the other party is not providing competent opposition.
Currently, the security of democracy in the US (and therefore the balance of power between Russia, China and the US, which is the basis of world peace insofar as it exists today) rests in the constitutional procedures of the Republican Party and the Democratic Party as much as it does in the US Constitution. Who sets those rules? Do Americans memorise them in school?
Edit: I completely agree there is nothing special about this case. No exigent circumstances.
If it happens, it should be done by an institution that is under the supervision of congress and staffed by public servants. This emerging, largely unaccountable surveillance industrial complex with ties to extremist political figures worries me more.
However, I think it's perhaps more worrying that one day this evidence will be presented in court and the jury will trust the accuracy of "AI" over their own judgment.
To clarify, the situation I'm thinking of is where a grainy CCTV image is shown to the jury. The AI expert comes in and talks about all of the technical details of their algorithm and how it determines that the grainy image is of defendant A with 96% probability.
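As an aside, a "96% probability" match can be far less convincing than it sounds when the candidate pool is large. A minimal base-rate sketch (all numbers invented for illustration, nothing to do with any real vendor's scoring) makes the point:

```python
# Base-rate sketch: how likely is a "96% accurate" face match to be the
# right person, if the true perpetrator is one of pool_size candidates?
def posterior_match_probability(sensitivity, false_positive_rate, pool_size):
    """P(flagged person really is the one in the image), assuming exactly
    one true match hidden somewhere in a pool of pool_size faces."""
    prior = 1.0 / pool_size  # chance any one candidate is the true match
    p_flagged = sensitivity * prior + false_positive_rate * (1 - prior)
    return sensitivity * prior / p_flagged

# Hypothetical rates: finds the true face 96% of the time,
# wrongly flags an innocent face 1% of the time.
for pool in (100, 10_000, 1_000_000):
    p = posterior_match_probability(0.96, 0.01, pool)
    print(f"pool={pool:>9,}  P(correct match) = {p:.4f}")
```

With these made-up rates, the chance that a flagged person is actually the right one drops from roughly 49% in a 100-person pool to well under 1% in a million-person pool, which is why "the algorithm is 96% accurate" and "this match is 96% likely to be correct" are very different claims in front of a jury.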
I think so. That mechanism is judicially auditable. The AI is not. We should not be arresting people based on the output of an unauditable mechanism.
Are we, though? If “ClearView AI gave us a hit” is being treated as probable cause sufficient for an arrest warrant, that's a problem.
If it's being used as a tool to generate leads to investigate and traditional evidence is gathered and presented, I don't see a big problem.
Edited the original.
However, it's hard to put the genie back in the bottle with technology. Like we've seen this summer with local police using military hardware, there will always be the urge to inappropriately deploy military-grade technologies in a civilian setting.
This technology was of course also used to identify those laying siege to various federal buildings over the summer, but I guess it’s okay now. This is to say that the context in which this technology is used obviously matters a lot and is directly related to who holds the reins of cultural power. Yay.
What are the bounds for such technology and/or companies like Clearview?
Are there quality of life differences in places where facial recognition crime technology is used vs not used?
???
> one of those situations where I'm supposed to be supportive of the use of facial recognition
No, it's not. This is Hoan Ton-That, supposedly protecting us from the crazies... "Founder Hoan Ton-That has links to the far-right movement that move right past suspicious into obvious, according to HuffPo. He reportedly attended a 2016 dinner with white supremacist Richard Spencer, organised by alt-right financier Jeff Giesea, an associate of Palantir founder and Trump-supporting billionaire Peter Thiel. (Thiel secretly bankrolled a lawsuit that bankrupted Gizmodo’s former parent company, Gawker Media.) Ton-That was also a member of a Slack channel run by professional troll Chuck Johnson for his now-defunct WeSearchr, a crowdfunding platform primarily used by white supremacists; that channel included people like the webmaster of neo-Nazi site Daily Stormer, Andrew Auernheimer, and conspiracy theorist Mike Cernovich"
https://www.gizmodo.com.au/2020/04/creepy-face-recognition-f...
https://news.ycombinator.com/item?id=25562321
...
> I got my file from Clearview AI (onezero.medium.com)
Regardless, I've had this theory about cancel culture. I don't necessarily agree with cancel culture, for the aforementioned problem of it being mob social justice. But it seems to me like it has arisen out of a failure by the real justice system. Issues like sexism in particular, which affect half of the population, have been ignored and marginalized. It took how long for Bill Cosby's heinous crimes to finally be prosecuted? Moreover, how likely would it have been for his crimes to yet again have been swept under the rug had cancel culture not fostered an environment where the victims felt comfortable coming forward?
The very topic we're discussing, the terrorist attack on our Capitol, is another example of racist failures of our police force.
So is it really any surprise that society has collectively taken matters into its own hands?
Again, I don't _agree_ with the idea that society at large should pass their own judgements. I'd rather the courts do that. But they haven't been. And aren't. And we just suffered through one of the worst years on record of blatant police abuse and court inaction.
If we want to get rid of cancel culture I think we need to fix our policing and justice system to the extent that society feels it doesn't need to take up the mantle of justice itself.
In other words, I don't see value in deriding cancel culture. If one feels that cancel culture is wrong, my belief is that one should be calling for action to repair the _cause_ of cancel culture, not the symptoms. And that cause is a prejudiced justice system.
It is not society. The virtual mobs that hunt people online, even though they look massive (1:N is scary even for N == 100), are absolutely tiny compared to society as a whole.
And as for the reasons: every mob in history, including the ones that did absolutely horrible things (such as pogroms), had some reasoning as to why its activity was virtuous and noble. And putative inefficiency or corruption of the legal system was one of them. Check the history of lynching: it was done because the legal system was perceived to be "soft".
> In other words, I don't see value in deriding cancel culture.
This is apologism, and starting such an argument with "I don't agree with this but..." is just a way to trick a few more people into taking your ideas seriously enough to read to the end of the paragraph.
And terrorists, really? You could use your same EXACT logic, in fact even more justifiably, as a defense for what happened at the U.S. Capitol.
Yep. I lived in China for a few years, speak Chinese, and never heard of anybody being kicked out of a job at a private company because of their social credit score.
Also, how were those events a racist failure of the police force?
If I don't want to buy a Musk-mobile because I don't agree with Musk on issues like racism[0] or don't like how he treats his employees[1], I am allowed to act on it. I am also allowed to share these thoughts with people on appropriate platforms, like I'm doing here.
I can understand the frustration coming from people digging up problematic tweets from 4 years ago, but this whole idea that "the internet forgets nothing" isn't new. I learned it in grade school. If you want to post racist/sexist/homophobic things on your public Twitter, so be it, but don't be surprised when someone finds it. There's also a delete button.
[0] https://www.theverge.com/2018/11/30/18119832/tesla-elon-musk...
[1] https://www.ibtimes.com/elon-musk-hot-water-after-tesla-empl...
So, what is going to be the kiss of death in 2050? Do you know in advance? How many of your contemporary comments are going to run afoul of the standard of 2050 and will you remember to delete them all, including from Internet archive/wayback sites, in 2049?
Remember, a senior manager at Boeing was forced to step down 33 years after he wrote an offending article. The article was published before WWW even existed.
https://www.nytimes.com/2020/07/08/business/boeing-resignati...
I don't have any particularly noteworthy political opinions; most people would probably put me in the "surprisingly ordinary" box if they had to categorize me. Despite that, I wouldn't dream of getting into something like a political discussion on Twitter. It's simply impossible to know whom the mob will go after next, or in 10 or 20 years.
I don't know how many others feel the same way, but I do know there are others out there. There is a non-zero chance that one of us has something really important to say, but won't, because we don't want to face a potential angry online mob in the future.
So, “cancel culture” has been a matter of fact, at least in the US, for quite some time now. It’s just given a scary name when those who have been “canceled” in the past are doing the “canceling”.
There’s a baseline among progressives which, once reached, will create entirely new divisions, trust me.
Did most of the federal judiciary die or resign? Was the Constitution amended to remove supermajority requirements for certain actions?
Social actions will always have social consequences; that's a necessary corollary to freedom of speech. What's different today is that social conservatives and especially the radical right are starting to experience those same social consequences. If people are going to use their freedom of speech and association to, say, support Nazis, then other people are free to use their freedoms of speech and association in response.
Not necessarily. Belarusian protesters are working on methods to unmask anonymous police officers [1].
[1] https://meduza.io/en/feature/2020/10/01/you-have-no-masks
I think prohibiting recognition and tracking for commercial reasons without explicit consent, plus allowing recognition and tracking only when acting on a court order, would be a good start.
What happened was a shame, but I truly hope this doesn’t evolve into something worse than what it already is.
I think the horse has left that particular barn.
Although, of course, the barrel of badguy-stupidity is bottomless, so I suppose they'll keep catching the dumber and dumber ones pretty much forever...
No, it's not.
> Imagine the stasi had it?
Yes, the problem there is the Stasi, not the technology. There is literally no technology that the Stasi having would be a good thing, including pen and paper. Or things so basic we don't tend to think of them as “technology”, like, say, language.
Not sure I agree with that analogy... pen and paper doesn't scale!
Having the ability to do something at global scale, like facial recognition or real-time tracking and saying "Honest! We won't use it for dodgy things" is not sufficient...
It'd be naive to say I'd rather it didn't exist; however, that cat is out of the bag now, so there _must_ be incredibly robust and tamper-proof checks and balances around its use, and the penalties for subverting them should be incredibly severe.
https://en.wikipedia.org/wiki/Vasily_Blokhin#Role_in_the_Kat...
Most of what the Stasi used didn't scale. They used it anyway.
This technology is here, being used openly today. If it has a chilling effect, where is it?
Why is that a problem? Are there any investigative tools that are perfect or is there a reason why facial recognition should be held to a higher standard?
> It's no better than fingerprinting, which often leads to unjust arrests.
You need to be more specific. Why is it no better? Has facial recognition been demonstrated to lead to more unjust arrests than other investigative methods?
> There is no replacement for detective work and the government shouldn't be lazy to exclude it.
Who's saying that facial recognition is a replacement for detective work? It's just another investigative method, like looking up a license plate or asking people at the crime scene what they saw.
> The best it can do is narrow down suspects.
It can also help find suspects when you don't have any other leads. Why isn't that good enough?
Facial recognition has also time and time again proven to be racially biased[1][2].
Not to mention how easy it is to create a surveillance state with facial recognition[3].
[0] https://threatpost.com/lawsuit-claims-flawed-facial-recognit...
[1] https://www.cnn.com/2019/12/19/tech/facial-recognition-study...
[2] http://sitn.hms.harvard.edu/flash/2020/racial-discrimination...
[3] https://www.washingtontimes.com/news/2019/dec/9/social-credi...
But is facial recognition itself the problem here? It seems to me the problem is the human who makes a decision based on flawed evidence. This is likely to happen based on other investigative methods as well and not just facial recognition (e.g. "the old lady across the street was sure she saw you breaking into that house the other day").
> Facial recognition has also time and time again proven to be racially biased[1][2].
If we know about the bias, we can correct for it. First in training and decision making and then through improving the facial recognition models.
And again, I'm sure there's racial bias in other investigative methods as well.
> Not to mention how easy it is to create a surveillance state with facial recognition[3].
So long as you have the ability to install millions/billions of cameras throughout the state and put them under centralized control. If that's legal in the US, the problem is that the law allows it, not facial recognition. In most European countries, such a thing would be incredibly illegal.
Facial recognition is like watching footage from a bank robbery and then recognizing the person in it, except a computer does the initial work and a human being verifies it before making any moves. I’d be worried if high-def cameras were on every corner, but this footage was taken at the scene of a crime, by reporters and criminals alike. So what if a private group runs it through a filter?
The problem only happens if law enforcement believes that the matches are somehow infallible and refuse to look for or believe in other evidence that would rule out a suspect.
EDIT: Nevermind there is an active HN thread on just that
Hell, Apple's FaceID makes a mistake every million faces, and that system is both professional and has an order of magnitude more data to work with from the FaceID scanner. Clearview is just using blurry photographs.
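Even taking that oft-quoted one-in-a-million FaceID false-match figure at face value, the arithmetic turns ugly at scale: applied across a large gallery of scraped photos, a tiny per-comparison error rate still yields huge numbers of spurious hits. A back-of-the-envelope sketch (gallery and query counts are hypothetical):

```python
# Expected spurious matches when a 1-in-1,000,000 false-match rate is
# applied to every face in a large gallery (illustrative arithmetic only).
FALSE_MATCH_RATE = 1e-6  # the commonly cited FaceID figure

def expected_false_hits(gallery_size, queries):
    """Expected number of wrong matches across all query images,
    treating each gallery comparison as an independent trial."""
    return gallery_size * queries * FALSE_MATCH_RATE

# Hypothetical: a 3-billion-photo gallery, 1,000 query images from one event.
print(expected_false_hits(3_000_000_000, 1_000))  # on the order of 3 million
```

The independence assumption is crude, but the point stands: a system that is extraordinarily accurate per comparison can still bury investigators in false leads when run against billions of faces, let alone one working from blurry crowd photographs.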
I don't think that is the case. Facial recognition can drastically speed up the process of nailing down suspects when combined with other information sources.
I don't really see facial recognition as the sole reason to be worried here. Information collection and sharing is already ubiquitous; that is what leads to all of this.
Also, it's much easier to get people's photos than fingerprints.
Narrows the number of haystacks to search for needle => Reduced resources required for successful search => More crimes prosecuted.
Problem occurs when:
* Users of FRT assume all in haystack are needles
* Crimes on the books must not be universally prosecuted
The first part can't be helped. US policing, like most US government jobs, is a rest-and-take-it-easy job: in aggregate, unexceptional people doing an unexceptional job. The second part is because people want other people prosecuted but not themselves.
I'm in second category myself. For instance, I am quite capable of using all sorts of drugs and maintaining a productive life. Other people are not. So it's important to prosecute other people and not to prosecute me.
Therefore, for these two reasons, I don't want FRT to be used universally. I want to preserve inequitable outcomes in policing because society is stronger with inequitable outcomes: they permit a good life for high-percentile individuals and constrain the operations of low-percentile individuals. Demarcating crime from uncrime is a Sorites paradox.
Thanks for being clear about your perspective. Do you think there's potential for abuse with different rulesets for different people?
> Demarcating crime from uncrime is Sorites paradox.
I disagree. The measure of a crime is both subjective and objective. Subjectively, the victim notices they have been wronged. Objectively, there is a claim by a plaintiff against a defendant. A claim either exists or it does not; there is no sorites.
Oh, most certainly. The same structure allows for racial discrimination, which I do not believe is a sensible angle of discrimination: i.e. I think Ben Carson should not be discriminated against for being black. Too high value as a top surgeon.
On the whole I accept it, though, because I don't want Elon Musk prosecuted by the SEC and the instrument that permits both is blunt.
> The measure of a crime is subjective and objective. Subjectively, the victim notices they have been wronged. Objectively, there is a claim by a plaintiff against a defendant. A claim either exists or it does not, there is no sorites.
Indeed. When there is a threshold. However, the costs imposed on society by drug users are dispersed. You can't Categorical Imperative them because some people are not capable enough to handle the responsibility.
Other times the crime is exposure to increased risk: no actual harm may occur. For instance, if you do burnouts on city roads there is little concrete harm, only increased exposure to risk.
It's the same with many things: public drunkenness, drink driving, jaywalking. And society reacts to these by permitting these activities in practice for high-value individuals while proselytizing against them at the same time.
I don't drink-drive but I happily do the other two.
Unfortunately I'm not convinced that anyone is capable of handling the responsibility of determining who is / is not capable of having responsibility. There's just too much potential for abuse.
Thanks for the reply.
If it helps, I am familiar with the Veil of Ignorance, the Categorical Imperative, and every other basic tool of ethics you can think of.
I'd like to rob a bank.
I want enough bank robberies to succeed that the fictional movies and books which are an important part of my life remain plausible.
In general I don't want just anyone to be able to rob a bank.
If anything, far FEWER mistakes will be made. Not more.
It's this weird thing where people hear about this one scary story of a guy misidentified and think "OMG facial recognition is terrible!" but they don't realize that happens to XYZ number of people a day via human error.
But we're all MORE comfortable with it if it's good old fashioned...human error?
Okay they get arrested, maybe charged with a crime. Others will get doxxed and most will lose their jobs. They'll become social outcasts.
Do you really want a bunch of very angry people to effectively be pushed into a corner, demonized by society and so on?
This will end up having very big unintended consequences.
My understanding was that there were 100k+ people in attendance at this rally and thousands of people were in the Capitol building itself.
At the next Trump rally of 100k everyone will know not to break the law. It is pretty simple.
Sarcasm aside, we are moving quickly to a straight-up authoritarian apparatus of state-sanctioned discourse enforced by a handful of corporations. This will be used against _anything_ that challenges those in power or goes against the approved narrative. For those cheering this on, realize this will be applied to you in one form or another with no recourse. Here are a few scenarios off the top of my head: being a pacifist or home-schooling advocate; posting something tacky on social media like "Prominent figure X is a fat idiot"; or personal medical decisions having a direct impact on your ability to travel or to access jobs, housing and the financial system.
Yeah there are crazies that will be fine with that, but what they can do without the support of “normalish” people is limited.
The alternative is to allow the escalation to continue.
I'm diametrically opposed to Trump and his supporters on a lot of issues, but I recognize that a functional society needs to accept their right to protest as well. They should be able to have their marches just like we have ours.
However, the stated goals and actions of many of those in last week's march and rioting are explicitly violent and seditious. Many of the protestors were heavily armed. They killed a policeman.
> Do you really want a bunch of very angry people to effectively be pushed into a corner, demonized by society and so on?
I understand what you're saying: by pushing individuals into a corner, we may make them more desperate and feed the overall "persecution" complex that motivates their movement as a whole. I think that's true. On the other side of things, if we do nothing, we legitimize their extremist views. Those extremist views become the new normal, or at least move the goalposts for the range of views considered normal.
Admittedly, it's a "damned if you do, damned if you don't situation."
But this isn't about denying their views or their right to protest. It's about targeting specific criminal actions that we don't want my side, their side, or any side to do.
I have to admit, my gut feeling is that governments using facial recognition at scale to round up their citizens is something you'd find under an authoritarian regime; it reminds me of https://news.ycombinator.com/item?id=14643433
I think it's important to try to frame an opinion on tech like this from a well-informed viewpoint, striving to lend appropriate weight to the latest incidents while avoiding the temptation for tunnel vision to fixate on immediate goals to the exclusion of broader, long-term consequences.
I agree acts that crossed too far past the line of civil disobedience ought to be held to account.
I just hope our collective response doesn't erode the willingness of people of good conscience to take a stand when they see their institutions behave in a legitimately unsanctionable manner.
I think the bottom line is how the tech is employed.
Are we tracking ordinary citizens en masse, for some potential future use? Are we tracking folks just because they're dissenting/protesting? We would probably all agree those uses are bad. Very bad.
I'm not terribly worried about using them to identify specific criminal targets. Put another way, how absurd would it be for us to have perfectly clear video of people committing specific criminal acts, and not use the available technology to identify them?
On a related note, the (lack of) opsec displayed by the Capitol rioters is... really something. These people were happily mugging for the cameras, sans masks. I'm not sure if it was stupidity or entitlement. The easy and snarky answer would be "stupidity", but there were clearly intelligent folk among them, or at least people who should have known to cover their tracks better: lawyers, military and police officers.
I think there was a rather stunning sense of entitlement there: a lot of these folks honestly cannot believe they're being charged with crimes. As if they expected to be greeted as liberators!
God help us if and when they sharpen up their tactics. The sickening feeling in my stomach tells me that this was a hell of a practice run.
Is it regional, based on where many such supporters live? I certainly don't see any ideological link to untrimmed beards, so it must be some other co-occurring factor.