- Alter/manipulate the audio/video as much as you want.
- Play it.
- Have an "attested" microphone/camera record the "manipulated" audio/video.
For video, maybe you can actually try to prove it's recording a screen rather than the real event, but that seems much harder for audio: you can just claim the echoes/distortions came from the environment...
Trying to secure that regime on the input side would seem to be even more fraught with problems. At least using the analog hole for the output side causes quality degradation from reencoding the content (the most common goal is digital redistribution). Whereas on the input side, the content begins in the digital domain so it's not even adding an extra analog step.
I can see a GPG-like registry where news stations publish their public keys, so that their audio/video snippets can be verified as untampered.
I wish the article were clearer about which threat model they're trying to address.
Unless counter-forensics are applied, AI-generated audio is not going to have the right hum.
It’s not cryptographically secure, but it gives good assurance and doesn’t require tamper-resistant hardware (which would make the cryptographic security dependent on how secure the resident keys are).
https://en.m.wikipedia.org/wiki/Electrical_network_frequency...
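The core of the ENF idea is just tracking the mains-hum frequency over time in the recording. A rough sketch of the extraction step (hypothetical code, assuming numpy and a 50 Hz grid; real forensic ENF matching also compares the track against logged grid data):

```python
import numpy as np

def enf_track(samples, sr, mains=50.0, win_s=1.0):
    """Estimate the mains-hum frequency in successive windows of an
    audio signal. mains=50.0 assumes a European grid; use 60.0 in
    North America."""
    n = int(sr * win_s)
    track = []
    for i in range(0, len(samples) - n + 1, n):
        win = samples[i:i + n] * np.hanning(n)
        # Zero-pad the FFT for finer frequency resolution near the hum.
        spec = np.abs(np.fft.rfft(win, n * 8))
        freqs = np.fft.rfftfreq(n * 8, 1.0 / sr)
        # Only search a narrow band around the nominal mains frequency.
        band = (freqs > mains - 1.0) & (freqs < mains + 1.0)
        track.append(freqs[band][np.argmax(spec[band])])
    return track
```

The per-window frequency wanders a few tens of millihertz with grid load, and that wander is what gets matched against utility records; synthetic audio with no hum (or a hum that doesn't match any logged grid trace) stands out.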
I'm pretty sure the private key is accessible, since it's used at runtime; it's just not easy to extract.
Their verification system also seems to include a smartcard or SD card-like thing (which might be doing something special, or might just be DRM)
I don't understand how this works, or what exactly is being proven here. For instance, you could silence the given inputs and inject some other unrelated audio, so the fact that your output hash incorporates the input hashes doesn't seem very meaningful.
I figure I must be missing something here.
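As I understand the description, the scheme is roughly this (a toy sketch, not their actual implementation): publish a hash that commits to both the input hashes and the output. The objection above is that nothing binds the committed input to what was physically captured:

```python
import hashlib

def sign_off(input_audio: bytes, output_audio: bytes) -> str:
    # The scheme as described: the published hash commits to both
    # the captured input and the produced output.
    h_in = hashlib.sha256(input_audio).digest()
    h_out = hashlib.sha256(output_audio).digest()
    return hashlib.sha256(h_in + h_out).hexdigest()

real_input = b"what the microphone actually heard"
injected = b"completely unrelated audio"

# Both commitments verify equally well; the hash chain alone cannot
# tell an honest pipeline from one that silenced the real input and
# injected something else.
honest = sign_off(real_input, real_input)
forged = sign_off(real_input, injected)
```

So unless something in their smartcard/attestation layer binds the input hash to the physical capture path, incorporating input hashes only proves consistency of the bookkeeping, not provenance of the audio.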