(edit: on further reading, it looks like GDPR requires opt-in, while CCPA seems to only require opt-in for people younger than 16, and requires opt-out for people 16 or older, which is some bullshit)
Regardless, I should be able to say "I'm opting out" and the onus should be on them to figure out how to do it, rather than me submitting enough pictures for them to recognize and exclude me from their crawler. This seems against even CCPA.
(another edit since this pisses me off so much: The page linked says "Alternatively, you can email: privacy-requests@clearview.ai" so I'm just going to email them to tell them I opt out. They can figure out how to comply with my opt out)
They'd have to crawl all incoming data and compare it with you (i.e., identify you) in order to know whether that picture they just slurped up was of someone who had opted out. Opt-outs don't work, by definition, when what you are opting out of is a system of identification.
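The paradox above can be sketched in a few lines. This is purely illustrative (the function name, embeddings, and threshold are all made up, not anything Clearview actually runs): to honor an opt-out, the crawler has to run the very face-matching step the person was trying to opt out of.

```python
import numpy as np

def should_store(scraped_embedding, opted_out_embeddings, threshold=0.85):
    """Decide whether a scraped face image may be kept.

    To respect opt-outs, every scraped face must be compared against
    every opted-out face -- which is exactly the identification step
    the opted-out person wanted to avoid.
    """
    for known in opted_out_embeddings:
        # cosine similarity between the scraped face and a known opt-out
        similarity = np.dot(scraped_embedding, known) / (
            np.linalg.norm(scraped_embedding) * np.linalg.norm(known)
        )
        if similarity >= threshold:
            return False  # recognized an opted-out person: must discard
    return True  # nobody matched, so the image is kept
```

In other words, the opt-out list only works if the system first identifies everyone it scrapes.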
The fact of the matter is, the system shouldn't have been made in the first place, and the mere existence of it guarantees the capability for it to be abused.
I casually know the guys who made this, and they're, well, unorthodox thinkers in terms of ethics. I highly doubt that, say, government regulations would've prevented them from building this app (although it may prevent them from using it in certain regions).
It's very hard to put neural network and image recognition tools into public hands and also prevent anyone in the world from building something like this. Such is the Pandora's Box of technology.
We need laws. (Or just let everybody know everything: "Privacy is dead".)
[1]: https://arstechnica.com/tech-policy/2020/04/french-regulator...
When a person uploads a profile picture or appears in a security camera feed, they typically have an intent that doesn't match Clearview's use case, and an expectation that what Clearview.ai is trying to do with the image is humanly impossible. Historically, it has been impossible. True, some people are good with faces, and I'm sure some of them work in law enforcement or advertising, but no one can cross-reference 7 billion profile pictures to every security camera on the planet, and remember who went where at what time.
I'd argue that there's a fundamental difference in whether a right does or does not apply based on scale. A human looking at one data point needs to be approached ethically and legislatively differently from a machine looking at a million identical data points, because the use cases are different.
Clearview.ai is trying to make a land grab on human rights, asserting that because the things that they're trying to do have not yet been prohibited (because they're complicated, and because no one realized they were feasible) that they ought to continue to be allowed to do them.
By your logic:
1. train a model with A-list actors' tabloid photos
2. make a CGI feature film
3. ???
4. Profit!
That’d be a fascinating case!
If these were people rather than computers, I don’t think you could ask them to forget a face. But they could be asked to destroy notes or derived work.
(Not exactly the same) Just like when Facebook announced a few years ago that it was asking users to send it nude photos just so it could take down nude photos of that person, this too is ripe for abuse. It’s a matter of when, not if, the opt-out information also leaks.
https://brandyourself.com/protect-your-privacy-online
Full disclosure: I helped develop the BrandYourself platform.
Once with my real info (against all reason); the second time I entered some random name and address that couldn't possibly be real.
In both cases, I was told that I was highly exposed and that I should pay hundreds of dollars per year to protect my privacy.
Also, I recognized several dark/shady design patterns.
I can only conclude that this is a very elaborate way to scare people out of money.
https://twitter.com/alexstamos/status/1249202002875297792?s=...
I mean good grief.
And this is before we even get into the fact that we’re talking about Facebook.
IMHO, if these kinds of measures are required to protect the user, it's _generally_ a better idea to not have a system which requires them.
If not, was it too difficult for them to think of this option?
It’s essentially a filter bubble for finance ideas
You’re all doing it again, making “too big to fail” ideas that filter the benefits towards a minority (aka TV, and religion to an extent; utilitarian value aside, bubbles controlled by a minority, rent seeking on attention at scale)
Stop being a market for human rights abuse at scale
Name ONE facet of human existence Facebook has improved? Or Instagram? Other than generating wealth for a minority?
Did we have a huge problem organizing birthdays before?
I for one am excited to have front row seats from my cushy biotech job as the attention economy collapses
I don’t owe society my time in promise of a future it can’t honestly guarantee
Still waiting on my nuclear powered car, personal helicopter, and AI that does my job for me
You needn't use your real name of course, but for HN to be a community, users need some identity for others to relate to. Otherwise we may as well have no usernames and no community, and that would be a different kind of forum. https://hn.algolia.com/?sort=byDate&dateRange=all&type=comme...
I feel like there's a meaningful legal difference between downloading a totally public, openly available image of you from the internet (even storing it forever) and then using that image in a product.
It would be like taking something with a GPL license - totally legit to download and use and modify and repost, with the original license/copyright attached - and using it in a closed source commercial product.
1. Using my/anyone's profile picture in an AI system for profit is commercial use. 2. A neural network is a derivative work of all images used to train that network.
I suspect things will have to get worse before they get better. But then again we have the 3 big credit reporting agencies, and they don't seem to be going anywhere.
And you have to send them a clear photo of yourself to opt out... there is something very wrong with that. I don't believe for one second that they don't have, for example, my email address.
If it really riles you up, which it should, you should go and pressure your legislators into creating legislation that prevents them from doing what they're doing.
The Ad Choices opt-out is the same thing - better not delete your cookies if you want to 'opt-out' - total BS.
Scumbag authoritarians.
I just submitted an access request under the EU resident category. If I don't get a response I will consider making another request in writing.
At what point does an investigator take a look at this?
Are there legal implications if you cannot submit an opt out request because of their technical choices?
Not much of an "opt-out".
My question is how do we get people to stop working for places like clearview and google and facebook that all work against the public good?
Sadly, for the general public (the rest of the world), you can only "opt out" if your image's source was removed.
They require a photo of a government issued ID for some stuff, so if you have an old ID that may work.
"If your company receives a request from an individual who wants to exercise their rights, you should respond to this request without undue delay and in any case within 1 month of receiving the request. This response time may be extended by 2 months for complex or multiple requests, as long as the individual is informed about the extension. Requests should be dealt with free of charge. If a request is rejected, then you must inform the individual of the reasons for doing so and of their right to file a complaint with the Data Protection Authority."
https://europa.eu/youreurope/business/dealing-with-customers...
It's directly in the GDPR itself, which everyone should take the time to read in full anyway. It's not very complex. And I mean everyone, including people from non-GDPR countries!
https://gdpr-info.eu/art-12-gdpr/ Article 12, Paragraph 3
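The Article 12(3) deadline arithmetic quoted above (one month, extendable by two more for complex requests) can be sketched as a quick calculation. The function names and the day-clamping behavior here are illustrative assumptions, not anything prescribed by the regulation itself:

```python
from datetime import date
import calendar

def add_months(d, months):
    """Add calendar months, clamping the day of month
    (e.g. Jan 31 + 1 month -> Feb 28/29)."""
    month_index = d.month - 1 + months
    year = d.year + month_index // 12
    month = month_index % 12 + 1
    day = min(d.day, calendar.monthrange(year, month)[1])
    return date(year, month, day)

def gdpr_deadline(received, complex_request=False):
    """Article 12(3): respond within one month of receipt; this may be
    extended by two further months for complex or multiple requests,
    provided the requester is informed of the extension."""
    return add_months(received, 3 if complex_request else 1)
```

So a request received on 15 January 2020 would be due by 15 February 2020, or 15 April 2020 if the controller invokes the extension.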