https://www.hitc.com/en-gb/2022/10/21/paris-hilton-fans-conf...
Still super impressive but it’s not two deep fakes at the same time.
This is either going to send a shockwave through society in the trust department (i.e. we'll have to absolutely distrust everything, and everyone will have to adapt immediately), or we're in for a very, very rocky road where different people walk around living in different realities where different things have happened (this is already the case, but it will get worse when you can send a conspiracy theorist videos that *prove* everything they've been saying and look completely real).
9/10 people will just accept it, and probably 7/9 won't have much other option, without any practical ability to authenticate the hundreds of things they see in a day.
I'll give you one decent example from recent popular media: Sandy Hook. Alex Jones didn't even need a deepfake to convince people.
PS: People being manipulated by propaganda is really old news. The only thing that the internet has changed is that all the village idiots who easily fall for propaganda discovered that each village has its idiot and they started to communicate and coordinate. But that's unrelated to deep fake and was already a problem before.
Photoshop didn't cause society to collapse and photoshop-for-video won't either.
(And even then I sometimes wonder if a hardware hack was involved)
If some major news org said they contacted Tom Cruise and asked him if the video was real, and he said it wasn't him, then as an average person I'd probably believe them, because I have at least some degree of trust that they'll either tell the truth or get called out on the lie, and ultimately, what do I care either way.
It's extraordinary claims that require extraordinary evidence and "Paris Hilton and Tom Cruise do rich people things on camera" doesn't meet that threshold. If someone posted a video of the president eating a live baby I'd probably be more skeptical of my sources.
It's scary to think about how AI will boost the already way too effective politics of "fake".
One recent example: our education system has been neglected for a long time. Now that we have inflation of ~25% (food inflation is around 50%), teachers literally can't make ends meet. They started to fight for themselves, and instead of taking the problem seriously, the government fights back with its power (e.g. by firing or silencing teachers who demonstrate). Teachers are leaving for other jobs in huge numbers. The buildings of even some of the best schools in the country are in catastrophic shape, on the brink of causing major damage to those inside. All this because it is not a real priority to have good schools. That priority is only advertised; it is a lie. The whole education system is gradually shifting into a mode of babysitting kids while the parents work.
Another example is the prosecution system. Interestingly, it is very quick and effective in investigating the smallest wrongdoing if that helps those in power. If an investigation would hurt those in power, it is very quickly abandoned, with funny and obviously fake reasons. Again: the prosecution system looks like a real one, but it's not. It has purposes different from what is officially advertised.
The closest advisor of the prime minister openly said this week that "if you control the media, you control the thoughts of people". This, sadly, seems to be true. It really seems that the point of the government is not to run the country decently, but only to fake it. And it works.
It's not necessarily a bad thing.
Right now, I hope the truth will be revealed by police releasing body camera footage. In the near future, that footage will satisfy nobody and we'll all be left wondering what really did happen.
If it's too easy and accepted to wave off real images and videos as fake, that's also problematic.
To pick the example from the other user: if video footage of a shooting matches neither ballistics, eyewitness accounts, nor other evidence on site, it would be very easy to spot a fake without even technically analyzing the video itself.
But deepfakes used by governments, police or citizens to frame innocent civilians would be really scary. There wouldn’t be any witnesses, but we wouldn’t necessarily expect there to be any either.
With current tech, deepfakes can be really good. The linked one is pretty much there; it's super convincing. Knowing it's fake, something seems a bit off about how "Cruise's" head sits in relation to his body as he walks through the doorframe, but that's it. If I wasn't actively looking for anything fake, I wouldn't have registered that either.
Whoever runs for POTUS in '24, it's certain that both candidates will be well-known people with lots of video material for training a model. It's also certain that many groups and individuals will be strongly vested in the outcome of the election, and some of them will have the resources to produce deepfakes at least as convincing as "Tom Cruise and Paris Hilton" here - this is no longer something that requires hardware worth millions to run for two months straight.
There will be fake videos of the candidates doing/saying highly scandalous stuff. Going on a racist rant, expressing corrupt intent, promising illegal things, etc. And those videos, if somewhat intelligently produced, will have a big impact once they air, no matter if they can be definitively proven to be fake later.
"Hi PornGPT, make me a film with X actor and Y actor doing so-and-so, culminating after 5 minutes - I'm in a rush".
The difference between the illegality of porn and of deepfaked porn is that the former is usually based on a purely moral argument (e.g. "porn is sinful" and allowing sinful behavior "corrupts people with sin", leading to more - and worse - sinful behavior) whereas the latter is based on a lack of consent.
Consent is the cornerstone of most societies (it's what allows for contract law and thus the "social contract" to begin with). Note that deepfaked porn is illegal even in countries in which regular porn is legal, so clearly the concern isn't with the porn aspect of it. It's categorically more similar to revenge "porn" or CSAM in that the subjects of it don't (or can't) consent to its distribution (or even creation). Also note that in countries where deepfaked porn is illegal but regular porn is not, deepfaked porn produced with the consent of the actors and those lending their faces would usually be legal. Consent matters.
AI becomes so commoditised so quickly. It doesn't feel like we are a long way from text-to-video available in open source. Then all it takes is someone with a big stash of training data to train it, and voila.
Deepfake porn seems like more of a cottage industry.
- https://en.wikipedia.org/wiki/Aeolipile
- the first batteries were used at parties to run current through people for fun
- the first use of uranium was in phosphorescent paint
Some video storage service or ‘authentication service middleman’ is going to make a mint with blockchain authenticity proofing. They’ll provide a seal of authenticity that the feed being viewed came direct from a camera. Feed will be checked during viewing as well to verify its ‘camera direct’ state.
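Setting the blockchain part aside, the core of such a "camera direct" seal is just cryptographic signing at capture time. Here's a minimal, entirely hypothetical sketch of the idea: the camera tags each frame when it's recorded, and the viewer's player checks the tag before displaying it. HMAC with a shared key stands in here for the asymmetric device certificates a real provenance scheme (such as C2PA) would use; all names are made up.

```python
import hashlib
import hmac

# Hypothetical device secret. A real scheme would use a per-device
# private key with a verifiable certificate chain, not a shared secret.
CAMERA_KEY = b"device-secret-key"

def sign_frame(frame_bytes: bytes) -> str:
    """Camera side: compute an authenticity tag over raw frame data at capture."""
    return hmac.new(CAMERA_KEY, frame_bytes, hashlib.sha256).hexdigest()

def verify_frame(frame_bytes: bytes, tag: str) -> bool:
    """Viewer side: recompute the tag; any pixel-level edit invalidates it."""
    expected = hmac.new(CAMERA_KEY, frame_bytes, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)

frame = b"\x00\x01\x02"          # stand-in for raw frame data
tag = sign_frame(frame)
assert verify_frame(frame, tag)             # untouched footage verifies
assert not verify_frame(frame + b"x", tag)  # tampered footage fails
```

The catch, as the next comment points out, is key extraction: the seal only proves the footage was signed by something holding the camera's key, not that the key is still inside a camera.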
Someone will figure out how to crack it though, or the possibility will be plausible enough, and then we'll be at zero again.