hahaha. What a coincidence, Google!
So you got hold of the neural hashes, and then used an error function and gradient descent to generate images that match a 'hash'?
It feels wrong to call them 'hashes' when they're so vulnerable to pre-image attacks. They're not the same idea as cryptographic hashes at all.
Also want to underline how spooky it is that some of them do resemble human forms.
First is veg&butthole, then boobs, next is doggy style etc etc (edit: it seems the order isn't consistent, so I'm likely seeing different images than you).
You can go through them all and see the original pornography if you look at the shapes. To me, it looks more like they started with the real images and tweaked them to make them artsy.
These images could be a joke, as I don't think we have clear technical documentation of how these hashes are generated. Computer vision? Vectors? Face recognition software? It's definitely not a naive hash.
Edit: seeing the other comments in this thread referencing Twitter, it looks like it's more naive than expected, as the hash is resistant to resizing, but not to cropping. The implementation can change at Apple's discretion, though.
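For anyone who wants to poke at that resize-vs-crop behaviour themselves, an off-the-shelf perceptual hash like pHash (from the Python imagehash library, purely as a rough stand-in, since NeuralHash isn't public) shows the same pattern. "photo.jpg" is just a placeholder for any local test image:

    # Rough illustration with an off-the-shelf perceptual hash (pHash),
    # not Apple's NeuralHash. "photo.jpg" is any local test image.
    from PIL import Image
    import imagehash

    original = Image.open("photo.jpg")
    resized = original.resize((original.width // 2, original.height // 2))
    cropped = original.crop((0, 0, original.width // 2, original.height // 2))

    h = imagehash.phash(original)
    # Subtracting two hashes gives the Hamming distance (0 = identical, out of 64 bits)
    print("resize distance:", h - imagehash.phash(resized))   # typically tiny
    print("crop distance:  ", h - imagehash.phash(cropped))   # typically much larger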
There is a reason cryptographic hashes are distinguished; some applications of hashing are only concerned with minimizing non-malicious collisions.
(Arguably, this is an application where malicious collisions are an issue, but perceptual hashes don't purport to be cryptographic.)
Long before they claimed that they scan email for child porn, they built an email scanner to appease China, on the condition that it would not be used to target dissidents.
I think we all remember how it went. Seeing their intimidation work only fired up the Chinese government further, and led them to step up the arm-twisting, until Google clumsily pretended to "be tough" while still making one last attempt at behind-the-scenes negotiations, which, to their big surprise, got them banned overnight.
Even though Google could have made a ton more money by helping China to build the tools of repression.
Don't forget that other large US corporations like Microsoft, Apple and Activision do build censorship tools and participate in repressing dissent.
Not sure exactly how you'd go about doing it, but it seems like there might be a process for 'evening out' areas into solid color while maintaining the hash? In which case you're running extensive image processing on illegal images and making variations from those very images.
More info on how this is done?
I would assume these were engineered by taking the perceptual hash values, using distance from the hash values in the DB as an error function, starting with an innocuous image and its hash value, and iterating to a collision for each.
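A rough sketch of what that recipe could look like, using a made-up toy network as the differentiable hash (Apple's NeuralHash is not public, so this only illustrates the general idea, not the OP's actual method):

    # Toy second-preimage sketch against a *differentiable* perceptual hash.
    # ToyHash and target_hash are stand-ins; nothing here is Apple's pipeline.
    import torch
    import torch.nn as nn

    class ToyHash(nn.Module):
        def __init__(self, bits=96):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(3, 16, 5, stride=4), nn.ReLU(),
                nn.Conv2d(16, 32, 5, stride=4), nn.ReLU(),
                nn.AdaptiveAvgPool2d(4), nn.Flatten(),
                nn.Linear(32 * 16, bits),
            )
        def forward(self, x):
            # tanh as a soft, differentiable stand-in for the final binarisation step
            return torch.tanh(self.net(x))

    model = ToyHash().eval()
    target_hash = torch.sign(torch.randn(1, 96))          # pretend this came from the DB

    img = torch.rand(1, 3, 224, 224, requires_grad=True)  # start from an innocuous image
    opt = torch.optim.Adam([img], lr=0.01)

    for step in range(500):
        opt.zero_grad()
        loss = ((model(img) - target_hash) ** 2).mean()   # distance to target as the error
        loss.backward()
        opt.step()
        img.data.clamp_(0, 1)                             # keep pixels in a valid range

    print("final hash distance:", loss.item())

The swirly, abstract look of the published images would be consistent with that kind of unconstrained pixel-space optimisation.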
I'm sure not interested in proving they can. Mind furnishing the info about how it's really done, then? Since according to you (for very obvious reasons) you can never compare these images to the source for the hashes, where did you extract the hashes from?
If you can so easily reverse engineer false positives from random data without ever seeing or using genuine porn to produce it, shouldn't you be disseminating this content as widely as you possibly can, rather than warning people about the danger of interacting with these false-positive images?
Still puzzled how and why this is being done. Are you trying to render Apple's system useless, or not?
Summary: I'm saying "there may be a way to take existing images that are illegal even to possess, and process them to obliterate the image while maintaining the hash. Is that what's being done here?" and the response is "AM NOT!!"
Is there an archived link?
Edit: I guess this? https://gist.github.com/unrealwill/c480371c3a4bf3abb29856c29...
This is not true. They may match the hash, but they will not match the visual derivative.
The system is not as easily fooled as you think.
I would like to believe that is true, but the negative consequences of even generating a false positive are enough to not attempt to upload any image.
The database of 200,000 images used by Apple (and others?) is private, and I did not find any trace of the hashes (but I could have made a mistake here). So how do you know that these correspond exactly (or within a certain threshold that has NOT been disclosed by Apple) to the CSAM DB?
Also, NeuralHash has NOT been released by Apple yet (https://www.apple.com/child-safety/pdf/CSAM_Detection_Techni...), so...
The way I see it, this is the best photo backup approach one can possibly take. Just get flagged for child porn, and have all your iPhone photos stored indefinitely on FBI servers.
Does the FBI have geo-redundancy?
Assuming the images do as claimed match the hash, they must also match the ‘visual derivative’ in order to trigger a match.
The system isn’t as easily fooled as is being claimed here.
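To make that concrete, here is a crude, runnable sketch of the two-layer idea as described above. Everything in it (the hash functions, the "visual derivative", the threshold) is a stand-in; per Apple's technical summary the real matching happens server-side behind PSI, and the derivative only becomes visible for human review past a voucher threshold:

    # Crude stand-in for the "hash match AND derivative match" idea; not Apple's pipeline.
    from PIL import Image
    import imagehash

    def visual_derivative(img, size=(64, 64)):
        # stand-in for the low-res "visual derivative"
        return img.convert("L").resize(size)

    def derivatives_match(a, b, max_dist=8):
        # stand-in second check, using a different perceptual hash on the derivatives
        return (imagehash.dhash(a) - imagehash.dhash(b)) <= max_dist

    def is_flagged(candidate, known_image):
        stage1 = imagehash.phash(candidate) == imagehash.phash(known_image)
        stage2 = derivatives_match(visual_derivative(candidate),
                                   visual_derivative(known_image))
        # An adversarial blob that only collides on stage 1 would fail stage 2.
        return stage1 and stage2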
The NeuralHash is what matters, solely. [1]
[1] https://www.apple.com/child-safety/pdf/CSAM_Detection_Techni...
Thanks for the heads up.
The current practice is that Apple, Google, Microsoft, etc scan the content of your cloud storage.
The scenario that you described is a risk and has been since cloud providers started scanning 10-15 years ago. Some large companies scan their file servers as well.
Both must match to cause a positive.
These images may match the NeuralHash, although we have no proof of that at all. They will not, however, also match the visual derivative.
This whole post is based on incomplete information.
Google has been recently focused on cultivating the image that they care about user privacy. The last thing they want to do is call the cops on a bunch of HN users for looking at some abstract swirly pics.
The images shown do appear to be adversarially generated inputs against some NN-based image hash or classifier, but there is no evidence to suggest that this is at all related to Apple's NeuralHash, or that the colliding hashes are from a real CSAM database (the target hashes are not public).
OP claimed they would "release 5 pieces of proof in the next 5 days" [1], and guess what, 11 days later they still haven't.
Look at OP's post and comment history, it's quite clear that they are a troll.
In the meantime, it has actually been proven that hash collisions against NeuralHash are trivially possible; see [2].
EDIT: Oh, I mixed up tabs. This is a link to a google drive of pictures. Because I have scripts disabled, I got no thumbnails, and I'm thinking since this was flagged, maybe I really don't want to get any thumbnails.
/r/Apple talks about this topic a lot and, similar to HN, is not happy about it. This drive link brings very little additional light to what was already known and discussed.
It’s certainly not correct.
The "perceptual hash" should be able to say "no, that's still the same image" while the file data has been entirely transformed.
On what basis is a set of forest-like, post-alien-invasion, and post-apocalyptic abstract art going to get flagged (my poor eyes see one or two that could have some symbolism)?
There really is a Simpsons quote for everything.
Then the list gets more accurate and we move on.