So far that’s been exactly it. Now AI generated videos are primarily used to scam, deceive, and ragebait.
I really don't see the argument for this tech being any kind of good, unless you think moving into an era where you cannot trust any image or video is somehow a neutral outcome, AND you're happy about the people in control of this tech. Which, I guess, captures a larger part of the HN crowd than I'd hoped.
GenAI has presented tangible proof of such risks and is forcing society to reevaluate the way we trust evidence. In my eyes, it serves as an opportunity to move our foundations of trust away from the good will of random authorities and toward something more objective.
Also, I haven't really seen anyone celebrating the large corporations who control AI tech. Could be simply the people I'm involved with, but most AI enthusiasts I've seen care more about, at the very least, open-weights AI models.
You could have said the same about say, pre-AI deceptively edited/ragebait/made up content going viral on FB, "actually this is good because soon people will realize they are being tricked/lied to, they'll think extra-critically before sharing dubious content next time".
Which has not happened. I can only see AI videos/images making the problem worse as people are fed personalized, narrowly targeted content that seems to perfectly appeal to their own beliefs/biases/emotions/etc.
Also, if anything it seems like we will have to trust authoritative groups more thanks to GenAI. If I have to consider every video on the internet from e.g. Iran as fake, I'm going to turn to NYT or WSJ who can be relied on to (usually) share only original content, or highly vetted 3rd party content.
I can't really provide a truly good solution, as this problem has large ramifications in philosophy and ethics, but I'd think it would involve mechanisms like attestation and certificates, and, above all, treating shared media (text, images, videos, etc.) not as facts but strictly as allegations.
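To make the attestation idea concrete, here is a minimal sketch: a publisher binds claims (who, when) to a hash of the media and signs the result, so any later edit to the media invalidates the attestation. All names are hypothetical, and HMAC with a shared key stands in for a real asymmetric signature scheme (e.g. Ed25519, as used by systems like C2PA) to keep the example stdlib-only.

```python
import hashlib
import hmac
import json

SIGNING_KEY = b"publisher-secret-key"  # stand-in for a real private key

def attest(media_bytes: bytes, claims: dict) -> dict:
    """Produce a manifest binding the claims to the media's hash, then sign it."""
    manifest = dict(claims, sha256=hashlib.sha256(media_bytes).hexdigest())
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return manifest

def verify(media_bytes: bytes, manifest: dict) -> bool:
    """Check the signature AND that the media still matches the attested hash."""
    claims = {k: v for k, v in manifest.items() if k != "signature"}
    payload = json.dumps(claims, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(expected, manifest["signature"])
            and claims["sha256"] == hashlib.sha256(media_bytes).hexdigest())

video = b"raw video bytes..."
manifest = attest(video, {"publisher": "example.org", "captured": "2024-01-01"})
assert verify(video, manifest)           # untampered media passes
assert not verify(video + b"x", manifest)  # edited media fails verification
```

Note that even a valid signature only proves *who* vouched for the media and *that it hasn't changed since* — it says nothing about whether the content is true, which is exactly why signed media should still be read as an allegation by the signer rather than a fact.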