imo diminishing ppl's privacy is a goal in itself. Apple's CSAM detection could be tricked in different ways, especially with generative algorithms. A malicious person could send you an album of 100+ normal-looking photos (to the eye) that have been altered to trigger the CSAM match; now the reviewers need to check 100+ photos per person per send and dismiss the false positives. Since this can be replicated, imagine them needing to scan 100k such flagged photos for just 1k people; that's insane. Either they stop checking, and the system becomes obsolete (because then ill-intentioned ppl can just send an album of 5k photos that all trigger the match while only a handful are real CSAM; multiply that by the number of such ppl and you see the system is easy to game), or they spend thousands of hours checking all those photos and investigating each person. Another attack vector is generating legit-looking CSAM, because generative algorithms are too good now; and in that case (afaik) it may not even be a crime, since the image is fully generated (either using only a person's face as a starting point, or using a description of their face tweaked enough to look realistic).
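To make the collision attack concrete, here's a toy sketch. This is nothing like Apple's actual NeuralHash (a neural perceptual hash); it uses a made-up fixed-threshold hash purely to show the idea that a perceptual hash maps many images to one value, so an attacker can nudge an innocuous image until its hash matches a blacklisted one. All names here are invented for illustration.

```python
# Hypothetical 8x8 grayscale "image" = a flat list of 64 pixel values (0-255).
# Toy hash: one bit per pixel, set if the pixel is above a fixed threshold.
# (Real perceptual hashes are far more robust; this only illustrates collisions.)
THRESHOLD = 128

def toy_hash(pixels):
    """64-bit toy perceptual hash: bit i = 1 iff pixel i > THRESHOLD."""
    return tuple(1 if p > THRESHOLD else 0 for p in pixels)

def force_collision(pixels, target):
    """Return a copy of `pixels` whose toy_hash equals `target`.

    Each pixel already on the correct side of the threshold is left
    untouched; wrong-side pixels are moved just across it. (In this toy,
    a pixel far from the threshold gets a visible change; real attacks
    use gradient methods to keep perturbations imperceptible.)
    """
    out = []
    for p, bit in zip(pixels, target):
        if bit and p <= THRESHOLD:
            out.append(THRESHOLD + 1)   # force bit to 1
        elif not bit and p > THRESHOLD:
            out.append(THRESHOLD)       # force bit to 0
        else:
            out.append(p)               # already matches target bit
    return out

# An arbitrary innocuous image and an unrelated "blacklisted" hash.
innocuous = [10 * i % 256 for i in range(64)]
blacklisted = tuple(i % 2 for i in range(64))

adversarial = force_collision(innocuous, blacklisted)
print(toy_hash(adversarial) == blacklisted)  # True: the hashes collide
```

The point is that the reviewer sees a harmless-looking photo while the scanner sees a match; scale that to thousands of senders and the human review step either drowns or gets skipped.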
So what we get is:
- a system that can be gamed in different ways
- a system that wasn't proven effective before release
- a system that may drive those ppl to other E2EE platforms without CSAM scanning (I assume that since they know what E2EE is, they can find a platform without it), so again obsolete
AND:
- a system that can't be verified by users (is the CSAM list legit? can it trigger on other things? is the implementation safe?)
- a system that could be altered by a government tweaking the CSAM list to target specific ppl (say, Snowden, or some journalist who found something sketchy)
- a system that could be altered by Apple or another company tweaking the CSAM list for ad-targeting purposes
Idk, maybe I'm overreacting, but I've seen what a repressive government can do, and with such an instrument it's frightening what surveillance vectors could be opened.