> "It would also inject the potential for a slippery slope of unintended consequences. Scanning for one type of content, for instance, opens the door for bulk surveillance and could create a desire to search other encrypted messaging systems across content types."
Yes, and it was patently obvious from the outset. Why did it take a massive public backlash to actually reason about this? Can we get a promise that future initiatives will be evaluated a bit more critically before crap like this bubbles to the top again? Come on, you DO hire bright people, what's your actual problem here?
Do you have a link to that issue? I hadn't heard about it (I'm from a third world country).
It worries me that Sarah Gardner seems to be truthful when she hints that no one at Apple reached out to her after Apple killed the plan in December 2022. She must have also missed articles like the one at The Verge [1].
Gardner’s message to Tim is dated 2023-08-30 03:24:37 (CEST).
Erik Neuenschwander's reply was printed out 2023-08-31 14:41:00 (probably PDT).
So that’s a period of about 44 hours that the print out [0] represents.
It would be helpful to know the time when Apple’s reply was sent to deduce whether Apple had time for additional deliberations before writing the email to Gardner.
Regardless, as reported by The Verge they obviously had deliberations shortly after the public outcry, killed the plan in December 2022, and then presumably failed to inform Gardner about the reasons.
If that’s the case, far smaller companies have checks and balances in place to keep that from happening.
[0]: https://s3.documentcloud.org/documents/23933180/apple-letter...
[1]: https://www.theverge.com/2022/12/9/23500838/apple-csam-plans...
In the old days, and I assume it is still the case, judging from observing Apple for the past 20+ years, every single public response has to go through Apple PR. Even before public interviews they were given lots of preparation beforehand.
It is likely Apple had this prepped for a long time. Or it may even have been another PR scenario [1] where the CEO was baited into sending this email before a response was given, which then somehow "leaked" to the press.
[1] Read Submarine PR by Paul Graham.
Whether this was the right response to such concern is something I’m not unsympathetic towards. Certainly I think it’s reasonable to say that Apple was trying to thread a needle in a way which was never going to please everyone, even if it somehow turns out to have been the least-worst outcome.
How could they not see that they would have a giant backlash on their hands? Did they overestimate their ability to get away with the "it's for our children" excuse this badly?
When it was made public people shot all sorts of holes both in the technical mechanisms themselves as well as their insufficiency even if they all worked as intended.
We should have some sympathy-- they did have some neat technology. But neat technology isn't enough.
Post-Steve Jobs Apple, especially after Scott Forstall and Katie Cotton left (along with a few other top executives), Tim Cook's Apple was left with people of harmony. These left-aligned, DEI-focused people share the same characteristic: their way is a force for good, hence their way is the only way. Same as early Google in the 00s. (Privacy is a fundamental human right? While actively securing and promoting Chinese components in their supply chains.)
I am sure CSAM scanning started with good intentions, as most ideals do. But fundamentally they don't work in this complex world.
All roads to hell are paved with good intentions.
I hope there is still enough of Steve Jobs' conviction left inside Apple.
Ironic that this opinion is just as reductive and reactionary as the thing that it decries.
I'm curious why they also withdrew those. For those who don't remember the parental controls, which were largely overshadowed by the controversy over the cloud stuff, they were to work like this:
1. If parents had enabled them on their child's device, they would scan incoming messages for sexual material. The scan would be entirely on-device. If such material was found, it would be blocked, the child would be notified that the message contained material that their parents thought might be harmful, and asked if they wanted to see it anyway.
2. If the child said no, the material would be dropped and that would be the end of it. If the child said yes what happened next depended on the age of the child.
3. If the child was at least 13 years old, the material would be unblocked and that would be the end of it.
4. If the child was not yet 13 they would be given another warning that their parents think the material might be harmful, and again asked if they want to go ahead and see it. They would be told that if they say "yes" their parents will be notified that they viewed the material.
5. If they say no the material remains blocked and that is the end of it.
6. If they say yes it is unblocked, but their parents are told.
There wasn't a lot of discussion of this, and I only recall seeing one major privacy group object (the EFF, on the grounds that if it reaches step 6 it violates the privacy of the person sending sex stuff to your pre-teen because they probably did not intend for the parents to know).
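To make the branching concrete, here's a rough sketch of the flow described above, in Python. The function name and structure are mine and purely illustrative; this is not Apple's implementation, just the decision tree restated as code:

    # Illustrative only: the Communication Safety decision flow restated as code.
    # Names are made up; Apple's real feature did the classification on-device.
    def handle_flagged_image(child_age: int,
                             accepts_first_warning: bool,
                             accepts_second_warning: bool) -> dict:
        """Returns whether the image is shown and whether parents are notified."""
        # Steps 1-2: material is blocked and the child is warned first.
        if not accepts_first_warning:
            return {"shown": False, "parents_notified": False}
        # Step 3: children 13 and older can unblock with no further consequence.
        if child_age >= 13:
            return {"shown": True, "parents_notified": False}
        # Steps 4-5: under-13s get a second warning; declining ends it.
        if not accepts_second_warning:
            return {"shown": False, "parents_notified": False}
        # Step 6: viewing anyway means the parents are told.
        return {"shown": True, "parents_notified": True}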
Not quite the same but see: https://www.forbes.com/sites/thomasbrewster/2023/04/06/sex-t...
Apple’s updated system allows children to ask for help from an adult using the Communication Safety features, which strikes a good balance to me.
uh, by preventing them from seeing sexual content?
“Apple knew my 14 year old was receiving sexual material and did nothing about it?!”
Big fan of keeping different forms of content separate.
Let's say the parents are abusive, and someone wants to talk with the child about that (via chat, for some reason).
Now, if the algorithms sometimes incorrectly flag private messages that were in fact safe -- could that be mitigated by letting the sender know: "Your message will be scanned and possibly shown to the parents of the recipient" before they hit Send?
(O.t.o.h. that leaks info about the age of the recipient.)
ETA: looks like they directly provide documents from Apple at the bottom of the article
She's using PR to pressure Apple into implementing the kind of solution her previous company is selling. Won't someone think of the children??
[1] https://www.linkedin.com/in/sarah-gardner-aba90013/ [2] https://www.thorn.org/our-work-to-stop-child-sexual-exploita...
There is only one correct answer though and that is what they have clarified.
I would immediately leave the platform if they progressed with this.
This is not to say they should scan locally, but my understanding of the CSAM proposal was that content would only be scanned on its way to the cloud anyway, so users who didn't use iCloud would never have been scanned to begin with.
Their new proposed set of tools seems like a good enough compromise from the original proposal in any case.
I speculated (and now we know) when this new scanning was announced that it was in preparation for full E2EE. Apple came up with a privacy-preserving method of trying to keep CSAM off their servers while also giving E2EE.
The larger community arguments swayed Apple from going forward with their new detection method, but did not stop them from moving forward with E2EE. At the end of the day they put the responsibility back on governments to pass laws around encryption - where they should be, though we may not like the outcome.
At the time I also thought it was obvious it was in preparation for e2ee (despite loud people on HN who disagreed).
I do wonder if they had intended to have it be on by default, though; maybe not, since it's probably better for most users to have a recovery option.
To counter the "think of the children" argument governments use to justify surveillance, Apple tried scanning stuff on-device, but the internet threw a collective hissy fit of intentionally misunderstanding the feature and it was quickly scrapped.
They basically did. If you turn on Advanced Data Protection, you get all of the encryption benefits, sans scanning. The interesting thing is that if you turn on ADP though, binary file hashes are unencrypted on iCloud, which would theoretically allow someone to ask for those hashes in a legal request. But it's obviously not as useful for CSAM detection, as, say, PhotoDNA hashes. See: https://support.apple.com/en-us/HT202303
how was it misunderstood? your device would scan your photos and notify apple or whoever if something evil was found. wasn't that what they were trying to do?
> scanning stuff on-device
What do you think they were going to do once the scanning turned up a hit? Access the photos? Well that negates the first statement.
I don’t see the problem with this status quo. There is a clear demarcation between my device and their server. Each serving the interests of their owner. If I have a problem with their policy, I can choose not to entrust my data to them. And luckily, the data storage space has heaps of competitive options.
The generic space does, yes. But if you want native integration with iOS, your only choice is iCloud. It would certainly be nice if this was an open protocol where you could choose your own storage backend. But I think the chances of that ever happening are pretty much zero.
No outright statement confirming or denying this has ever been made, to my knowledge, but the implication, based both on Apple's statements and the statements of stakeholders, is that this isn't currently the case.
This might come as a surprise to some, because many companies scan for CSAM, but that's done voluntarily because the government can't force companies to scan for CSAM.
This is because, based on case law, companies forced to scan for CSAM would be considered deputized, and thus it would be a breach of the 4th Amendment's safeguards against "unreasonable search and seizure".
The best the government can do is force companies to report "apparent violations" of CSAM laws. This seems like a distinction without a difference, but the difference is between being required to actively search for it (and thus becoming deputized) and reporting it when you come across it.
Even then, the reporting requirement is constructed in such a way as to avoid any possible 4th amendment issues. Companies aren't required to report it to the DOJ, but rather to the NCMEC.
The NCMEC is a semi-government organization, autonomous from the DOJ, albeit almost wholly funded by the DOJ, and they are the ones that subsequently report CSAM violations to the DOJ.
The NCMEC is also the organization that maintains the CSAM database and provides the hashes that companies, who voluntarily scan for CSAM, use.
This construction has proven to be pretty solid against 4th Amendment concerns, as courts have historically found that this separation between companies and the DOJ, and the fact that only confirmed CSAM makes its way to the DOJ after review by the NCMEC, create enough distance between the DOJ and the act of searching through a person's data that there aren't any 4th Amendment concerns.
The Congressional Research Service did a write-up on this last year for those who are interested[0].
Circling back to Apple, as it stands there's nothing indicating that they already scan for CSAM server-side and most comments both by Apple and child safety organizations seem to imply that this in fact is currently not happening.
Apple's main concerns however, as stated in the letter by Apple, echo the same concerns by security experts back when this was being discussed. Namely that it creates a target for malicious actors, that it is technically not feasible to create a system that can never be reconfigured to scan for non-CSAM material and that governments could pressure/regulate it to reconfigure it for other materials as well (and place a gag order on them, prohibiting them to inform users of this).
At the time, some of these arguments were brushed off as slippery slope FUD, and then the UK started considering something that would defy the limits of even the most cynical security researcher's nightmare, namely a de facto ban on security updates if it just so happens that the UK's intelligence services and law enforcement services are currently exploiting the security flaw that the update aims to patch.
Which is what Apple references in their response.
> Nothing in this section shall be construed to require a provider to—
> (1) monitor any user, subscriber, or customer of that provider;
> (2) monitor the content of any communication of any person described in paragraph (1); or
> (3) affirmatively search, screen, or scan for facts or circumstances described in sections (a) and (b).
The core of 18 U.S. Code § 2258A - Reporting requirements of providers is available at https://www.law.cornell.edu/uscode/text/18/2258A.
Since it all went down they added the advanced security option that encrypts photos, messages, and even more.
But that option is opt-in since if you mess it up they can’t help you recover.
It’s objectively better than what google does but I’m glad we somehow ended up with no scanning at all.
Companies don't want to employ people. People are annoying. They make annoying demands like wanting time off and having enough money to not be homeless or starving. AI should be a tool that enhances the productivity of a worker rather than replacing them.
Fully automated "safety" systems always get weaponized. This is really apparent on TikTok, where reporting users you don't like is clearly brigaded because a certain number of reports in a given period triggers automatic takedowns and bans, regardless of assurances that there is human review (there isn't). It's so incredibly obvious when you see a duet with a threatening video get taken down while the original video doesn't (with reports showing "No violation").
Additionally, companies like to just ban your account with absolutely no explanation, accountability, right to review or right to appeal. Again, all those things would require employing people.
False positives can be incredibly damaging. Not only could this result in your account being banned (possibly with the loss of all your photos on something like iCloud/iPhotos) but it may get you in trouble with law enforcement.
Don't believe me? Hertz falsely reported their cars as stolen [1], which created massive problems for those affected. In a better world, Hertz executives would be in prison for making false police reports (which, for you and me, is a crime) but that will never happen to executives.
It still requires human review to identify offending content. Mass shootings have been live streamed. No automatic system is going to be able to accurately differentiate between this and, say, a movie scene. I guarantee you any automated system will have similar problems differentiating between actual CSAM and, say, a child in the bath or at the beach.
These companies don't want to solve these problems. They simply want legal and PR cover for appearing to solve them, consequences be damned.
[1]: https://www.npr.org/2022/12/06/1140998674/hertz-false-accusa...
The whole uproar about this system was made by people who didn't know the most basic things about it.
The tl;dr is that despite this man ultimately having his name cleared by the police after having his entire Google account history (not just cloud) searched, as well as his logs from a warrant served to his ISP, Google closed his account when the alleged CSAM was detected and never reinstated it. He lost his emails, cloud pictures, phone number (the loss of which prevented the police from contacting him via phone), and more, all while going through a gross, massive invasion of his privacy because he was trying to do right for his child during a time when face-to-face doctor appointments were difficult to come by.
This should be a particularly salient reminder to people to self-host at the very least the domain for their primary and professional e-mail.
[0] https://www.nytimes.com/2022/08/21/technology/google-surveil...
The Google one actually does try to detect new ones, and there are reported instances of Google sending the police after normal parents for photos they took for the doctor.
Pick your favourite other example of when Apple and Google have faced roughly the same problem as each other, and hold up their respective solutions next to those in the example of CSAM scanning above. I bet they'll look similar.
And there's another dynamic where telling your customers you're going to scan their content for child porn is the same as saying you suspect your customers of having child porn. And your average non-criminal customer's reaction to that is not positive for multiple reasons.
> (2) Civil liability
> No provider or user of an interactive computer service shall be held liable on account of—
> (A) any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected
Both of these arguments are absolutely, unambiguously, correct.
The other side of the coin is that criminals are using E2EE communication systems to share sexual abuse material in ways and at rates which they were not previously able to. This is, I argue, a bad thing. It is bad for the individuals who are re-victimised on every share. It is also bad for the fabric of society at large, in the sense that if we don't clearly take a stand against abhorrent behaviour then we are in some sense condoning it.
Does the tech industry have any alternate solutions that could functionally mitigate this abuse? Does the industry feel that it has any responsibility at all to do so? Or do we all just shout "yay, individual freedom wins again!" and forget about the actual problem that this (misguided) initiative was originally aimed at?
Much like other 'tough on crime' measures (of which destroying E2EE is one) the real problems need to be solved not at the point of consumption (drugs, guns, gangs, cartels) but at the root causes. Getting rid of E2EE just opens the avenue for the abuse of us by the government but in no way guarantees we'll meaningfully make children safer.
And no, we are not 'condoning' it when we declare E2EE an overall good thing. Real life is about tradeoffs not absolutes, and the tradeoff here is protection from the government for potentially billions vs. maybe arresting a few thousand more real criminals. This is a standard utilitarian tradeoff that 'condones' nothing.
Governments are not good at this type of thing. It requires careful analysis, planning and actual decisions.
But slap on a regulation and require private companies to do the hard work for you - now we are talking!
Yes, and you can tell because the proposed solutions attack privacy when alternative solutions exist.
For example, simply deleting CSAM material from devices locally without involving any other parties could have achieved the goals without privacy violations.
It makes me somewhat uncomfortable to argue for not involving other parties (like the police) in cases where real CSAM is found on someone's device. Same as most people, I think that CSAM is morally reprehensible and really harmful to society. But just deleting it en masse would have been an effective and privacy-respecting solution.
I think it’s important to see nuance even in things we don’t like to think about. Not everything that has a price tag has a price. We were told we needed to give up privacy, but that wasn’t necessary to take CSAM out of circulation.
Yes. Since becoming an abuser is a process and not a moment, part of the solution must be making access to CSAM much harder.
> And no, we are not 'condoning' it when we declare E2EE an overall good thing.
Agreed. I'm sorry if I worded things in a way that caused you to see an implication which was not intended. To be clear: E2EE is a good thing. Championing E2EE is not equivalent to condoning CSAM.
What I did say is that in failing to try and provide any meaningful solutions to this unintended consequence of E2EE, the industry is effectively condoning the problem.
> This is a standard utilitarian tradeoff
If that's the best we can do, I'm very disappointed. That position says that to achieve privacy, I must tolerate CSAM. I want both privacy and for us not to tolerate CSAM. I don't know what the solution is, but that is what I wish the industry were aiming for. At the moment, the industry seems to be aiming for nothing but a shrug of the shoulders.
There is no parallel to be drawn between better encryption and worse outcomes for kids. Should we also outlaw high-performance cars because these sometimes serve as effective getaway vehicles for criminals?
CSAM producers and consumers should be found and punished via old-fashioned methods. How was this done in the past? Did we just never catch any human traffickers / rapists? No, we had detectives who went around detecting and presumably kicking down doors.
To outlaw large sections of mathematics because of this is absurd. And from the amount of power it would give big governments / big businesses, the fabric of society doesn't stand a chance.
The "old-fashioned methods" that they used in the past included intercepting communications of people that were suspected of crimes, such as by getting a warrant allowing them to force the person's phone company to record and turn over the person's calls, or by getting a warrant to intercept and inspect the contents of the person's mail at the post office.
> To outlaw large sections of mathematics because of this is absurd
No one has or is proposing outlawing large sections of mathematics, or even small sections of mathematics. The laws are outlawing some applications that make use of mathematics.
Calling that outlawing mathematics is as absurd as saying that building codes that won't let me use asbestos insulation in new construction are banning sections of thermodynamics. Or saying that laws that restrict how high I can fly a drone are banning large sections of aerodynamics.
Recently invented encrypted chat rooms allow people to coordinate and transfer CSAM without any government official being able to infiltrate it. And just being able to freely discuss has been shown to make the problem worse as it facilitates knowledge transfer.
This is all completely different to in the past where this would have been done in person. So the argument that we should just do what we did in the past makes no sense. As technology advances we need to develop new techniques in order to keep up.
Yea, LE/IC clearly have gone too far in many modern tactics.
Yea, it's possible to build a surveillance/police state much more efficiently than ever before.
Yea, we should be vigilant against authoritarianism.
Yes. If consumer cars’ speed was capped to 100 mph it would affect a very small percentage of people and zero legitimate use.
The difference with encryption is that while it protects criminals from the police, it also protects legitimate users from criminals.
What if we change the last bit after the "because" to "these sometimes are used at unsafe speeds and, intentionally or not, kill people who are not in cars?"
Because, at least for me, the answer is an unambiguous yes.
I agree that privacy and security should be available to everyone. But we also shouldn't count on being able to find people who are doing vile things--to children or adults--because the person messed up their opsec. I think Apple is correct here but as an industry we have to be putting our brains to thinking about this. "To outlaw large sections of mathematics" is hyperbole because we use mathematics to do a lot of things, some useful and some not.
I don't believe tech has an overweighted responsibility to solve society's problems, and in fact it's generally better if we don't try and pretend more tech is the answer.
Advocating for more money and more prioritization for this area of law enforcement is still the way to go if it's a priority area. Policing seems to be drifting towards "mall cop" work, giving easy fines, enabled by lazy electronic surveillance casting a wide net. Let's put resources towards actual detective work.
We deceive ourselves honestly by pretending like we have not created new realities which are problematic at scale. We have. They are plentiful. And if people aren’t willing that we walk back tech to reduce the problems and people aren’t willing to accept technical solutions which are invasive then what are we to do? Are we just to accept a new world with all these problems stemming from unintended consequences of tech?
Apple's proposed solution would have theoretically only reported cases that were overwhelmingly likely to be already-known instances of CSAM (i.e. not pictures of your kids), and if nothing else is reported, can we say that they were really surveilled? In some very strict sense, yes, but in terms of outcomes, no.
I think we’re well beyond that point now. Whether or not encryption is allowed or not and however private you believe your virtual life to be, in the physical world surveillance is the norm. Your physical location, biometric information, and relationships can and will be monitored and recorded.
If illegal data (CP) is being transferred on the net, wiretapping that traffic and bringing hits to the attention of a human seems like a proportional response.
(Yes, I know, it's not going to be 100% effective, encryption etc, but neither is actual detective work.)
Blah blah blah, the same old argument given by the "think of the children" people.
There are many ways to counter that old chestnut, but really, we only need to remember the most basic fundamental facts:
1) Encryption is mathematics
2) Criminals are criminals
Can you ban mathematics ? No. Can you stop criminals being criminals ? No.
So, let's imagine you are able to successfully backdoor E2EE globally, on all devices and all platforms.
Sure, the "think of the children" people will rejoice and start singing "Hallelujah". And the governments will rub their hands with glee with all the new data they have access to.
But the criminals ? Do you honestly think they'll think "oh no, game over" ?
No of course not. They'll pay some cryptographer in need of some money to develop a new E2EE tool and carry on. Business as usual.
[Citation needed]
This mindset—that assigns people into immutable categories, "criminal" and "not criminal"—is actually one of the biggest things that needs to change.
We absolutely can stop criminals from being criminals. We just can't do so by pointing at them and saying "Stop! Bad!" We have to change the incentives, remove the reasons they became criminal in the first place (usually poverty), and make it easier, safer, and more acceptable to go from being a criminal to being a not-criminal again.
> No of course not. They'll pay some cryptographer in need of some money to develop a new E2EE tool and carry on. Business as usual.
I used to think this, but I changed my mind: just as it's difficult to do security correctly even when it's a legal requirement, only the most competent criminal organisations will do this correctly.
Unfortunately, the other issue:
> And the governments will rub their hands with glee with all the new data they have access to.
Is 100% still the case, and almost impossible to get anyone to care about.
How many times will actual abusers be allowed to go free while your own family is victimized by the authorities and what ratio do you find acceptable?
Federal and state police, some of the best funded, equipped, and trained police in the world, are so inundated with cases that they are forced to limit their investigations to just toddlers and babies. What use is it to add more and more cases to a mountain of uninvestigated crimes? What's needed is more police clearing the existing caseload.
That's an unbelievably depressingly low bar for society.
That is a bigger problem and it will take a long time to fix. So long that I suspect anybody reading this will be long dead, but it's like the saying about planting trees.
Indeed, they are correct. And they were also brought up when Apple announced that they would introduce this Orwellian system. Now they act like they just realized this.
Absolutely.
> It is also bad for the fabric of society at large, in the sense that if we don't clearly take a stand against abhorrent behaviour then we are in some sense condoning it.
Much less clear. There's always been the argument: does it provide an outlet with no _new_ (emphasis on "new" so people don't skim that word) victims, or does it encourage people to act out their desires for real?
I don't have any answer to this question; but the answer matters. I do have a guess, which is "both at the same time in different people", because humans don't have one-size-fits-all responses.
Even beyond photographs, this was already a question with drawings; now we also have AI, creating new problems with both deepfakes of real people and ex-nihilo (victimless?) images.
> Does the tech industry have any alternate solutions that could functionally mitigate this abuse?
Yes.
We can build it into the display devices, or use a variation of Van Eck phreaking to the same effect.
We can modify WiFi to act as wall-penetrating radar with the capacity to infer pose, heart rate, and breathing of multiple people nearby even if they're next door, so that if they act out their desires beyond the screen, they'll be caught immediately.
We can put CCTV cameras everywhere, watch remotely what's on the screens, and also through a combination of eye tracking and (infrared or just noticing a change in geometry) who is aroused while looking at a forbidden subject and require such people to be on suppressants.
Note however that I have not said which acts or images: this is because the options are symmetrical under replacement for every other act and image, including (depending on the option) non-sexual ones.
There are places in the world where being gay has the death penalty. And if I remember my internet meme history right, whichever state Mr Hands was in, accidentally decriminalised his sex when the Federal courts decided states couldn't outlaw being gay and because that state had only one word in law for everything they deemed "unnatural".
your source for proof of that?
>> in the sense that if we don't clearly take a stand against abhorrent behaviour then we are in some sense condoning it.
No. This narrative of "silence is violence" and "no action is support" etc. is 100% wrong.
You started out great, but I cannot get behind this type of thinking...
>>Does the tech industry have any alternate solutions that could functionally mitigate this abuse?
Why is it a "tech industry" problem?
>>Or do we all just shout "yay, individual freedom wins again!"
For me the answer is simple... Yes, individual freedom is more important than everything. I will never support curbing individual freedom on the altar of any kind of proclaimed government solution to a social safety problem. Largely because I know enough about history to understand that not only will they not solve that social safety problem, many in government are probably participating in the problem and have the power to exempt themselves, while abusing the very tools and powers we give them to fight X for completely unrelated purposes.
Very quickly any tool we would give them to fight CSAM would be used for drug enforcement, terrorism, etc. It would not be long before the AI-based perceptual hashes detect some old lady's tomato plants as weed and we have an entire DEA paramilitary unit raiding her home...
Yet the only justification around for mass surveillance of camera pictures, the important societal matter that literally rests on "the other side of the coin", is that "some minority of people could be sharing naked photos of young members of society for highly unethical acts of viewing".
wtf.
You mean at rates greater than in the 1970s when it was legal in the US and there were a bunch of commercial publications creating and distributing it (prior to the "Protection of Children Against Sexual Exploitation Act" in 1977 (which targeted the production) and "Child Protection and Obscenity Enforcement Act" in 1988 (which targeted the distribution))? I find that doubtful.
> Does the industry feel that it has any responsibility at all to do so?
Why should it? Should every home builder and landlord install a camera in every bedroom to stream back to the government to be sure that no children are being molested? Why not? It's technologically possible today.
The fact that garbage people do garbage things to vulnerable people doesn't mean that the entire world must be commandeered to stop it.
E2E encryption has replaced a lot of in person meeting that also would have been secure against dragnet surveillance. We shouldn't lose our privacy just because it became technically possible to take it away, no different than the cameras in bedrooms mentioned above.
https://www.nbcphiladelphia.com/news/local/child-porn-victim...
Oh, please. As if we couldn't just compare the hashes of the pictures people are storing against a CSAM database of hashes that gets regularly updated
When this was proposed people would respond "But they could just mirror the pictures or cut a pixel off!"
Who cares? You got that picture from some place on the dark web, and eventually someone will stumble upon it and add it to the database. Unless the person individually edits the pictures as they store them, you're never sure if your hashes will retroactively start matching against the DB.
People who wank off to CSAM have user behavior similar to any other porn user's: they don't store one picture, they store dozens, and just adding that step makes them likely to trip up, or straight up use another service altogether.
"What if there's a collision?" I don't know, go one step further with hashing a specific part of the file and see if it still matches?
This whole thing felt like an overblown fearmongering campaign from "freedom ain't free" individualists. I've never seen anything wrong with content hosters using a simple hash against you like this.
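For what it's worth, the exact-hash version of what I'm describing is a few lines of code. This is only a toy sketch; real systems like PhotoDNA or Apple's NeuralHash use perceptual hashes precisely so that re-encoding or trimming a pixel doesn't change the fingerprint:

    # Toy sketch: exact SHA-256 matching against a set of known-bad hashes.
    # Real deployments use perceptual hashing, which tolerates minor edits.
    import hashlib
    from pathlib import Path

    def sha256_of(path: Path) -> str:
        h = hashlib.sha256()
        with path.open("rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                h.update(chunk)
        return h.hexdigest()

    def find_matches(photo_dir: Path, known_hashes: set[str]) -> list[Path]:
        # Return every stored file whose hash appears in the known-bad set.
        return [p for p in photo_dir.rglob("*")
                if p.is_file() and sha256_of(p) in known_hashes]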
But even if there was an MD5 hash collision back when MD5 was the only hash in use, it still doesn't matter, because upon viewing the image that matched, if it's not CSAM, it doesn't matter. Having said that, the chance of dozens of images matching hashes known to be associated with CSAM is also so unlikely as to be unthinkable. Where there is smoke, there is fire.
And further, a hash alone is meaningless, since in court there must be a presentation of evidence. If the image that set off the csam alarm by hash collision is say, an automobile, there is no case to be had. So all this talk about hash issues is absolutely moot.
Source: I have worked as an expert witness and presented for cases involving csam (back when we called it Child Pornography, because the CSAM moniker hadn't come about yet), so the requirements are well known to me.
Having said all that, I am an EFF member, and I prefer cryptography to work, and spying on users to be illegal.
Cops need to investigate the same way they always have, look for clues, go undercover, infiltrate, find where this stuff is actually being made, etc.
Scanning everyone's phones would make their jobs significantly easier, no doubt, but it simply isn't worth the cost to us as a society and there is simply no good counter-argument to that.
"Apple" wasn't scanning your phone, neither was there a "backdoor".
If you had iCloud upload enabled (i.e. you'd be uploading all your photos to Apple's servers, a place where they could scan ALL of your media anyway), the phone would've downloaded a set of hashes of KNOWN and HUMAN VERIFIED photos and videos of sexual abuse material. [1]
After THREE matches of known and checked CSAM, a check done 100% on-device with zero data moving anywhere, a "reduced-quality copy" would've been sent to a human for verification. If it was someone sending you hashbombs of intentional false matches, or an innocuous pic that matched because of some mathematical anomaly, the actual human would notice this instantly and no action would've been taken.
...but I still think I was the only HNer who actually read Apple's spec and didn't just go with the Twitter hot-takes, so I'm tilting at windmills over here.
Yes, there is always the risk that an authoritarian government could force Apple to insert checks for stuff other than CSAM to the downloaded database. But the exact same risk exists when you upload stuff to the cloud anyway and on an even bigger scale. (see point above about local checks not being enabled unless iCloud sync is enabled)
[1] It wasn't a SHA-1-style hash where changing a single bit in the source would break the match; the people designing it were actually competent.
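Stripped of the cryptography, the threshold logic in the spec amounted to something like this. This is my own loose sketch with made-up names; the real design wrapped every result in an encrypted "safety voucher" using private set intersection and threshold secret sharing, so neither the device nor the server learned anything below the threshold:

    # Loose illustration of the threshold idea only, not Apple's actual protocol.
    THRESHOLD = 3  # number of matches before anything is escalated

    def matches_to_escalate(photo_hashes: list[str],
                            known_csam_hashes: set[str]) -> list[str]:
        """Return hashes to send for human review, or nothing below the threshold."""
        matches = [h for h in photo_hashes if h in known_csam_hashes]
        if len(matches) < THRESHOLD:
            return []      # below threshold: nothing is revealed or escalated
        return matches     # at/over threshold: reduced-quality copies go to a reviewer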
Creating backdoors that allow encryption schemes to be subverted is _fundamentally_ going to cause harm on the internet, and eventually fail the weakest users/those that need privacy/security the most.
A mechanism that can subvert cryptographic protocols can be used by any party, including oppressive regimes, private entities etc. that have the resources/will/knowledge to use the backdoor etc. Backdoors harm both the trust on the web (which can have an impact on economic transactions among many others) and the people that need security/privacy the most. In the meantime, criminals will wise up and move their operations elsewhere where no backdoors exist.
We basically end up with a broken internet, we are putting people in harm's way and the criminals we are targeting are probably updating their OPSEC/MO not to rely on E2EE.
A small percentage of people are involved in this crime, and subjecting every single person to illegal searches, to possibly being wrongly identified, or to nation-states using this to imprison their enemies, is wrong.
Sometimes there is no solution that satisfies everyone, and sometimes the only good solution is the least shitty solution, but still a shitty solution. In this case, I believe that my and everyone else's freedoms and privacy are worth it, and we should instead spend the money, time, and effort trying to catch these criminals instead of scanning everyone's phones, which won't ultimately work.
I don't really buy any "slippery slope" arguments for this stuff. Apple already can push any conceivable software it wants to all of its phones, so the slope is already as slippery as it can possibly be.
It just doesn't make sense to say "Apple shouldn't implement this minimal version of photo-scanning now even though I don't think it's bad, because that's a slippery slope for them to implement some future version of scanning that I do think is bad." They already have the capability to push any software to their phones at any time! They could just skip directly to the version you think is bad!
...regardless of whether Apple rolls out E2EE right? End to end encryption is available through a whole host of open-source tools, and should Apple deploy CSAM scanning the crooks will just migrate to a different chat tool.
I think that companies might need to enable some kind of mechanism for offline investigation of the devices though. CSAM is a real problem, there are real predators out there, CSAM isn't the only risk, and law enforcement does really need to have a way to investigate devices. Previously, my proposal was the ability to force the device to scan the user's content for fingerprints of the suspected content, but only with physical access. Physical access forces law enforcement to actually have a real and official investigation with strong enough reasons to spend resources and to risk repercussions when done improperly.
However, the project of scanning all the user content to police the users was one thing that irked me, and I was relieved when Apple later abandoned it.
Apple's explanation is good and I agree with them but IMHO the more important aspects are:
1) Being able to trust your devices to be on your side. That is, your device shouldn't be policing you and shouldn't be snitching on you. At this time you might think that the authorities who would have controlled your device are on your side, but don't forget that those authorities can change. Today the devices may be catching CSAM; some day the authorities can start demanding they catch people opposing vaccines, and an election or a revolution later they can start catching people who want to have an abortion or who have premarital sexual relations or other non-kosher affairs.
2) Being free of the notion that you are always watched. If your device can choose to reveal your private thoughts or business, be it by mistake or by design, you can no longer have thoughts that are unaligned with the official ones. This is like the idea of a god who is always watching you, but instead of a creator and angels you get C-level businessmen and employees who go through your stuff when the device triggers decryption of your data (by false positives or by true positives).
Anyway, policing everyone all the time must be an idea that is rejected by the free world, if the free world doesn't intend to be as free as the Democratic People's Republic of Korea is democratic.
I'd suggest there's a lot the not-tech industry could do to stop condoning abhorrent behavior that stops short of installing scanners on billions of computing devices. It's become a bit of a trope at this point, but it's bizarre to see a guy who is/was spokesman for a "minor attracted persons" (i.e. pedos) advocacy group getting published negatively reviewing the controversial new sex trafficking movie ... in Bloomberg:
https://www.bloomberg.com/opinion/articles/2023-07-15/qanon-...
For some background:
https://www.opindia.com/2021/08/meet-noah-berlatsky-prostasi...
His 501(c)(3) non-profit also advocates in favor of pedophilic dolls and has a "No Children Harmed certification seal" program for pedophilic dolls/etc:
https://prostasia.org/blog/dolls-prevent-child-sexual-abuse/
https://prostasia.org/no-children-harmed/
I'm not sure you can criminalize stuff like this, but it sets off my alarm bells when pedophile advocates are being published in mainstream news at the same time there's a moral panic around the need to scan everyone's hard drives. Is society actually trying to solve this problem, or is this more like renewing the Patriot Act to record every American's phone calls at the same time we're allied with al Qaeda offshoots in Syria? Interesting how terrorism has been the primary other argument for banning/backdooring all encryption.
----
As an aside I couldn't find ~anything about this group "Heat Initiative" Apple is responding to? Other than a TEDx talk by the founder a couple years ago which again seems very focused on "encrypted platforms" as the primary problem that needs solving: https://www.ted.com/talks/sarah_gardner_searching_for_a_chil...
> Does the tech industry have any alternate solutions that could functionally mitigate this abuse? Does the industry feel that it has any responsibility at all to do so? Or do we all just shout "yay, individual freedom wins again!" and forget about the actual problem that this (misguided) initiative was originally aimed at?
The issue at large here is not the tech industry, but law enforcement agencies and the correctional system. Law enforcement has proven time and time again themselves that the most effective way to apprehend large criminal networks in this area is by undercover investigation.
So no, I don't think it is the tech industry's role to play the extended arm of some ill-conceived surveillance state. Because Apple is right: this is a slippery slope, and anyone who doesn't think malicious political actors will use this as a foot in the door to argue for more invasive surveillance measures, using this exact pre-filtering technology, is a naive idiot, in my opinion.
I'm all for privacy, but those who put it above all else are already likely not using Apple devices because of the lack of control. I feel like for Apple's target market the implementation was reasonable.
I think Apple backed down on it because a vocal minority of privacy zealots (for want of a better term) decided it wasn't the right set of trade-offs for them. Given Apple's aim to be a leader in privacy they had to appease this group. I think that community provides a lot of value and oversight, and I broadly agree with their views, but in this case it feels like we lost a big win in the fight against CSAM in order to gain minor, theoretical benefits for user privacy.
> a new child safety group known as Heat Initiative
Doesn't even have a website or any kind of social media presence; it literally doesn't appear to exist apart from the reporting on Apple's response to them, which is entirely based on Apple sharing their response with media, not the group interacting with media.
> Sarah Gardner
on the other hand previously appeared as the VP of External Affairs (i.e. Marketing) of Thorn (formerly DNA Foundation): https://www.thorn.org/blog/searching-for-a-child-in-a-privat...
So despite looking a bit fishy at first, this doesn't seem to come from a christofascist group.
Why would you assume this in the first place?
So yeah. When these things pop up I assume malicious intent.
You can rest assured that they are scanning all of it serverside for illegal images presently.
The kerfuffle was around clientside scanning, something that it has been reported that they dropped. I have thus far seen no statements from Apple that they actually intended to stop the deployment of clientside scanning.
Serverside scanning has been possible (and likely) for a long time, which exposes their "slippery slope" argument as farce (unless they intend to force-migrate everyone to E2EE storage in the future).
Approximately no one uses it.
Hopefully Apple will begin prompting users to migrate in future updates.
Whatever they said, it was probably worded to give you this impression, without actually saying that. Apple is extremely careful and goes to great pains to actively mislead and deceive whilst avoiding actual lies.
Could you please link to or quote these statements from Apple? I would bet any money they say something different than what you claim, a "not wittingly"-style hedge.
The earlier design was a hybrid model that scanned for CSAM on device, then flagged files were reviewed on upload.
Note that by design the hashes cannot be audited (though in the legitimate case I don't imagine doing so would be pleasant), so there's nothing stopping a malicious party from inserting hashes of anything they want - and then the news report will be "person X brought in for questioning after CSAM detector flagged them".
That's before countries just pass explicit laws saying that the filter must include LGBT content (in the US several states consider books with LGBT characters to be sexual content, so an LGBT teenager would be de facto CSAM); in the UK the IPA is used to catch people not collecting dog poop, so trusting them not to expand scope is laughable; in Iran a picture of a woman without a hijab would obviously be reportable; etc.
What Apple has done is add the ability to filter content (e.g. block dick pics) and, for child accounts, to place extra steps (incl. providing contact numbers, I think?) if a child attempts to send pics with nudity, etc.
What does this mean? What is IPA? I tried Googling for it but I’m not finding much. I would love to learn more about that
In a word, decentralization.
By detecting unsafe material on-device / while it is being created, they can prevent it from being shared. And because this happens on individual devices, Apple doesn’t need to know what’s on people’s iCloud. So they can offer end-to-end encryption, where even the data on their servers is encrypted. Only your devices can “see” it (it’s a black box for Apple servers, gibberish - without the correct decryption key).
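A toy sketch of the end-to-end part, assuming a symmetric key that only the user's devices hold (this uses the third-party "cryptography" package purely for illustration; Apple's actual key management is obviously far more involved):

    # Toy end-to-end encryption sketch: the key never leaves the user's devices,
    # so the server only ever stores opaque ciphertext.
    from cryptography.fernet import Fernet

    device_key = Fernet.generate_key()     # synced between the user's devices, never uploaded
    box = Fernet(device_key)

    photo_bytes = b"...raw image data..."
    ciphertext = box.encrypt(photo_bytes)  # this is all the server receives

    # Only a device holding device_key can recover the original:
    assert box.decrypt(ciphertext) == photo_bytes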
It is amazing that so much counter-cultural spirit remains in Apple. They are probably going to ban likes and other vanity features in all iOS applications, prohibit access to popular media, put “pop stars” into rehabs, and teach their users to disobey (the hardest of all tasks).
A lot of people try really hard not to see that “unusual” abuse of the children is the same as “usual” abuse of everyone. Conveniently, the need for distinction creates “maniacs” that are totally, totally different from “normal people”, and cranks up the sensation level. The discussion of “external” evil then can continue ad infinitum without dealing with status quo of “peaceful, normal life”.