Fat chance. I can think of exactly zero examples where a photo shared on social media, or even on WhatsApp, has its metadata intact. This is frustrating, because it's often the only way to get a photo from the computer-illiterate, and I like my EXIF data: specifically, the exact time/date the picture was taken.
Inserting fake sensor readings is plausible but complicated enough I doubt any but state actors would or could bother.
Even if this standard only raises the cost of creating a fake image to $100K, that would be fantastic for journalism and democracy. Much better than the current cost of $0.
What DoF/focal length metadata?
Then why not replace the sensor entirely and send fake sensor data? It would be difficult to fake depth-of-field changes in response to the camera's focus motor moving. If I were a security researcher, I would really try to see if the camera is smart enough to tell when the sensor data is fake.
> take a photo of a sufficiently high-resolution display
A 60 MP display? (10000x6000 resolution? I want one!!!) Even that would still leave pixels visible. I suspect a display with at least 4x more pixels would be needed. And it would need to be curved, or the out-of-focus corners will ruin the illusion.
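Back-of-the-envelope check on those numbers (reading "4x larger" as 4x the pixel count, which is the commenter's own guess):

```python
# Pixel count of the hypothetical 10000x6000 display.
display_w, display_h = 10000, 6000
megapixels = display_w * display_h / 1_000_000
print(megapixels)  # 60.0

# With the suggested ~4x oversampling, so that individual display
# pixels fall below the camera sensor's resolving power:
print(megapixels * 4)  # 240.0
```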
The signature just says who took the picture, not that the image isn't AI-generated.
However, I think the digital signing is a good way to provide the photographer with proof that the picture in the paper/competition is the picture that they made with the camera and not their pal Midjourney.
Why they put it in such a sluggish contraption, I don’t know, but probably a good place to start compared to a Z9 and trying to tag pictures at 500 frames a second or whatever.
Canon (maybe others) offered a similar feature from the very early days of digital, with a module for the 1Dx at least that would cryptographically sign files the camera generated to say "this is what the sensor saw". Typically it was marketed to law enforcement, because there was a wariness around digital photography as evidence even in the early days.
So you can sign a fake photo that you modified and its provenance from that point on is traceable. But you still wouldn’t know if the photo was captured authentically.
I guess it doesn’t matter — this is about traceability rather than authenticity.
The point is to prove that the image was taken in the real world, to stakeholders that might not necessarily take your word for it otherwise.
But here is an alternative:
You display the AI-generated image, pass it through a lens, and take a photo with the camera. Now, comparing the original AI-generated input against the photo and its artifacts, you calculate what changes your camera introduces. You invert those changes to work out how the original image must be modified so that the photo comes out exactly as you want. If one pass does not suffice, you do multiple passes.
Yes, it is easy, as in a CS or math student can do it.
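A minimal sketch of that iterative pre-compensation, with a made-up 1D "camera transform" (mild smoothing plus vignetting) standing in for the real lens/sensor effects, which an attacker would measure rather than simulate:

```python
def camera_transform(signal):
    # Hypothetical stand-in for lens + sensor effects: mild smoothing
    # plus edge falloff (vignetting). The real transform would be
    # measured from actual camera output, not simulated like this.
    n = len(signal)
    out = []
    for i in range(n):
        left = signal[max(i - 1, 0)]
        right = signal[min(i + 1, n - 1)]
        smoothed = 0.9 * signal[i] + 0.05 * (left + right)
        falloff = 1 - 0.3 * abs(i - n / 2) / (n / 2)
        out.append(smoothed * falloff)
    return out

def precompensate(target, passes=30):
    # Repeatedly nudge the displayed image by the residual error until
    # the captured photo matches the target (the "multiple passes").
    display = list(target)
    for _ in range(passes):
        captured = camera_transform(display)
        display = [d + (t - c) for d, t, c in zip(display, target, captured)]
    return display

target = [0.2, 0.8, 0.5, 0.9, 0.1, 0.6, 0.4, 0.7]
display = precompensate(target)
residual = max(abs(c - t) for c, t in zip(camera_transform(display), target))
print(f"max residual after 30 passes: {residual:.2e}")
```

This only works to the extent the camera transform is approximately invertible; heavy blur, clipping, or sensor noise destroys detail that no number of passes can restore.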
As ever, yes, it can be faked (by taking a picture of a screen, for example), but it raises the cost of faking massively from where it is currently.
https://it.slashdot.org/story/10/12/03/2133218/canons-image-...
You can already sign a JPEG with your GPG key; that won’t convince anyone that it wasn’t actually photoshopped or outright generated by AI.
The point here is to have a more trusted hardware vendor vouch that their camera is not easy to trick into signing arbitrary data, and will only sign actual images it took itself.
Of course that also puts a lot of pressure on the key generation, storage and processing mechanism of that vendor; trusted computing in this scenario (i.e. the adversary has unrestricted and persistent access to the system) isn’t easy to get right.
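To make the first point concrete, here's a toy sketch (using HMAC from the standard library as a stand-in for a real GPG/asymmetric signature; the key and image bytes are made up): a signature binds a key holder to exact bytes, and verifies identically whether those bytes came off a sensor or out of a generator.

```python
import hashlib
import hmac

# Hypothetical key held by whoever signs -- photographer or camera.
signing_key = b"example-signing-key"

def sign(image_bytes):
    # Binds the key holder to these exact bytes -- nothing more.
    return hmac.new(signing_key, image_bytes, hashlib.sha256).hexdigest()

def verify(image_bytes, signature):
    return hmac.compare_digest(sign(image_bytes), signature)

real_photo = b"\xff\xd8 bytes straight off a sensor"       # placeholder
ai_render = b"\xff\xd8 bytes straight out of a generator"  # placeholder

# Both verify equally well; the math cannot tell provenance apart.
print(verify(real_photo, sign(real_photo)))  # True
print(verify(ai_render, sign(ai_render)))    # True
```

Hardware attestation tries to close exactly this gap: the key lives in the camera and is only ever applied to data arriving from the sensor path.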