Now, how well NeuralHash actually preserves privacy is a different question, and /not/ one that the original post here answers. In fact, I've not seen anybody look at the hash distribution over natural images, which would be an actual argument against the system.
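Measuring that distribution is not hard in principle. Here is a minimal sketch, assuming you have some extraction of the model (the neuralhash stub below is a placeholder, not a real API): hash a large corpus of natural images and compare the observed colliding pairs against what a uniform 96-bit hash would produce.

    from collections import Counter
    from pathlib import Path

    def neuralhash(path: Path) -> int:
        # Placeholder: plug in an extracted NeuralHash model here and
        # return the 96-bit hash as an integer.
        raise NotImplementedError

    def collision_stats(image_dir: str) -> None:
        hashes = Counter(neuralhash(p) for p in Path(image_dir).glob("*.jpg"))
        n = sum(hashes.values())
        observed = sum(c * (c - 1) // 2 for c in hashes.values())
        # Birthday bound: a uniform 96-bit hash yields about
        # n*(n-1)/2 / 2**96 colliding pairs.
        expected = n * (n - 1) / 2 / 2**96
        print(f"{n} images: {observed} colliding pairs, "
              f"{expected:.3g} expected if uniform")

If natural images cluster in a small region of hash space, the observed count will dwarf the expected one; that, not a single contrived collision, would be real evidence against the system.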
Apple now has the ability to encrypt the images before sending them to iCloud, with a private key you own. Except that some percentage of images that their neural feature extractor matches against the CSAM fingerprint will be sent to a CSAM filter on the server side (whose workings we don't have many details about).
This whole thing backfired on Apple entirely due to psychological effects, not because they are really doing anything more "panopticon" than what they could already do today with their iCloud storage (after all, people are already sending their photos to Apple).
Therefore, why trust any of their other claims?
Just because someone has found an image of a nearly featureless diagonal thing that collides with another image of a nearly featureless diagonal thing doesn't disprove Apple's claims.
Given that people can now generate colliding images on demand, the statistical likelihood seems to have changed drastically since the system was originally announced.
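For context, the published collisions came from gradient attacks on the extracted model, roughly along these lines. This is a hedged sketch, not the attackers' actual code: model is assumed to be a differentiable port of the network (e.g. of the ONNX export floating around) that returns the pre-binarization hash outputs, and target_bits the 0/1 hash to hit.

    import torch

    def make_collision(model: torch.nn.Module, src: torch.Tensor,
                       target_bits: torch.Tensor,
                       steps: int = 1000) -> torch.Tensor:
        img = src.clone().requires_grad_(True)
        opt = torch.optim.Adam([img], lr=1e-2)
        signs = 2.0 * target_bits - 1.0  # map {0,1} -> {-1,+1}
        for _ in range(steps):
            opt.zero_grad()
            logits = model(img)
            # Hinge loss: zero once every output clears the sign
            # boundary with margin 1, i.e. binarizes to target_bits.
            loss = torch.relu(1.0 - logits * signs).sum()
            loss.backward()
            opt.step()
        return img.detach()

Note what this does and doesn't show: second preimages are cheap once the model leaks, which is a different threat from naturally occurring collisions, and it says nothing by itself about the distribution over natural images.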
So again the actual argument becomes: what is that distribution like?
I doubt that's what they did. I think they ran tests on huge numbers of pictures, got an estimate of the false-positive rate, applied a safety factor, and set the match threshold to hit their target (and then added another safety buffer on top).
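The arithmetic behind that kind of tuning is simple. A back-of-envelope sketch with made-up numbers (the per-image rate p, library size n, and threshold k are illustrative assumptions, not Apple's figures): the chance that an innocent account accumulates k false matches is a binomial tail, approximated below by its first term, which dominates when n*p is far below k.

    from math import comb

    def flag_probability(n: int, p: float, k: int) -> float:
        # P(at least k of n independent images falsely match),
        # taking the leading term of the binomial tail.
        return comb(n, k) * p**k * (1 - p)**(n - k)

    # e.g. 10,000 photos, 1-in-a-million per-image rate, threshold 30:
    print(flag_probability(10_000, 1e-6, 30))  # ~4e-93, far below 1e-12

Raising the threshold by even a few matches buys orders of magnitude of headroom, which is presumably how they can promise to hold the account-level rate even if the per-image rate turns out worse than estimated.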
Naturally occurring collisions are not going to be an issue, and neither are adversarial ones, I predict. Just as with current cloud providers.
Apple never made a false claim. They have never stated anywhere that NeuralHash produces false positives at a rate of 1 in a trillion, only that that is the rate at which the system as a whole flags accounts for review. They explicitly mention that they will vary the number of matches needed to maintain this if the per-image rate turns out to be higher or lower on images in the wild.
There are good arguments against this system, but most of the technical debate seems to have devolved into amplifying lies now.
Still. Why trust them after that?
If a company can make my own smartphone report me to the police, and they want my business, they better prove I can trust them. Apple has plainly done the opposite.
The whole ordeal is just utterly 1984.
In contrast, a NH system believed to have a collision chance of 1 in a trillion trillion may well be treated as infallible, with any detection reported directly as CSAM and the 'backend verification' amounting to nothing more than a rubber stamp.
Of course, if you implicitly trust Apple not to do the second, then you're right: the NH collision rate doesn't matter too much.