Money will have to be wasted on unnecessary flights to see stuff or meet people in person instead of over video, and the availability of actual information will become more and more limited as the sea of online information gets polluted with crap. It may never be possible to calculate the full extent of the damage in monetary terms.
For me the solution is in signed e-mails and signed documents. If a person invites me to an online meeting with a signed e-mail, I can trust that it's really them.
Same for footage of wars, etc. The journalist taking it basically signs the videos and vouches for their authenticity. If it turns out to be AI generated, then we would lose trust in that person and wouldn't use their material anymore.
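To make the signing idea concrete, here's a minimal sketch, assuming Python with the "cryptography" package (the key handling and file name are made up for illustration; a real setup would publish the journalist's public key through some trusted channel):

```python
# Rough sketch: a journalist signs the SHA-256 digest of a video file with an
# Ed25519 key, and anyone holding the published public key can verify later
# that the footage hasn't been altered. File name is hypothetical.
import hashlib
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

def file_digest(path: str) -> bytes:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.digest()

# Done once; the public key is published out of band (website, keyserver, ...).
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

# Signing happens when the footage is published; the signature travels with it.
signature = private_key.sign(file_digest("war_footage.mp4"))

# Anyone can verify the copy they received:
try:
    public_key.verify(signature, file_digest("war_footage.mp4"))
    print("Signature checks out: this is the file the journalist signed.")
except InvalidSignature:
    print("File was altered, or the signature isn't from this journalist.")
```

Whether it's the e-mail or the video being signed, the hard part is the same: getting people to actually check the signature and to know which public key to trust.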
There will be some regulatory capture in between.
The world will kick into gear only when something really bad happens. Maybe an influential person - rich or a politician - gets fooled into doing something catastrophic by a deepfake video/image. Until then, normal people being affected isn't going to move the needle.
Most information you can access publicly, including Wikipedia, is the result of an astroturfing fight. Most information online hasn't been trustworthy for a double-digit number of years now.
> we already experience misleading articles today
Again, this has been happening for decades.
> footage of some incident somewhere may have been entirely fabricated by AI
It's not as if we didn't already have doctored footage plaguing the public.
> Money will have to be wasted on unnecessary flights to see stuff or meet people in-person instead of video
The need to inspect the supply chain for snake oil has been a thing since at least EA (the Nasir one).
We may now be dealing with these problems at the scale of spam, but the problems themselves were already there.
Maybe Apple will be able to pull it off? I.e., if you FaceTime me, I know that you are a person.
Or the opposite, where people attempt to get out of trouble by dismissing real evidence as “AI”.
You know those incriminating Epstein photos with his associates? A few years from now, a common defense from people like that will be that the photos were AI generated, and it will be difficult to prove them wrong beyond reasonable doubt.
People in previous cases already attempted to dismiss incriminating pics of themselves as being the work of clever Photoshop artists.
What damage are you talking about?
I'm not sure I understand why it matters that there is no real person there if you can't actually tell the difference. You're just demonstrating that you don't actually need a human for whatever it is you're doing.
“Auntie, it’s me! N*** k** f**! X is really a man! ** did 9/11!”
“Oh it really is you Johnny!”
We’re all going to have to start communicating this way. Best of luck.
I offer consulting services on the side to help professionals hone these skills. $250 / hour.
There is a thing many people do. I don't remember the phenomenon's name, if it has one, but it goes like this:
Given enough time to reconsider their options, people will endlessly flip-flop between them, grabbing onto various features over and over in a loop.
A summary, courtesy of chess dot com:
> The name of this "syndrome" comes from GM Alexander Kotov, author of the classic chess book Think Like a Grandmaster. In the book, Kotov described an incorrect yet very common calculation process that often leads players to select a suboptimal or bad move.
> According to Kotov, in positions where the lines are complex and there are numerous candidate moves and variations to calculate, it's easy to make a hasty move. A player in that situation might spend too much time going over two moves and all of their ramifications without finding a favorable ending position. In that process, the player is likely to go back and forth between the two different lines, always coming to the same unsatisfying conclusion—this wastes precious mental energy and time.
> After spending too much time evaluating the first two options, the player gives up the calculation due to time pressure or fatigue and plays a third move without calculating it. According to the author, that sort of move can cause tremendous blunders and cost the game.
People will default to believing something is AI if there's no downside to that opinion. It's a defence mechanism. It stops them being 'caught out' or tricked into believing something that's not true.
As soon as there's a potential loss (e.g. missing out on getting rich, not helping a loved one) people will switch off that cynical critical thinking and just fall for AI-driven scams.
This is the downside of being a human being.
Easy to replicate by asking someone something obvious, like the weather, and when they reply ask “are you sure?” - they won’t be so sure any more (believing it’s a trick question)
If I ask my mother if I’m real, she’ll have a pause because she has never had to entertain such a question, or the possibility her son over the phone is an impostor. Good way to push someone towards paranoia and psychosis.
I'm sure I'm not the first to use this technique, but I don't know what it's called.
So at each stage in the loop they are always super convinced of the position.
It's not even close.
It's easy to "pass the Turing test" for 5 minutes. It's extremely hard if you try to hold a longer, continuous conversation. Anything longer than 10 minutes the user will immediately know it's not human. Some problems you'll encounter:
- The bot needs to handle all situations, especially the nonsensical ones. This is when the user types "EEEEEEEEEEEEE...", or curse words, repeatedly.
- Who would've thought that it's extremely hard to decide when to stop talking?
- No matter how well you build the "persona" for the bot, they'll eventually converge to the same one, which is that of the LLM itself.
- You'll notice that the bot is ignoring something obvious (e.g. it's not remembering past convo), and then give it some instructions to help with that. And then that'll be THE ONLY THING it does.
Then came the time when I wanted to use it. They didn’t remember. Not the phrase. Nor that we ever talked about this in the first place.
Imagine your crying grandson who caused a traffic accident in Mexico and the police planted drugs in his car and now he needs money to pay them off. He is in pain and probably has a concussion (an explanation of why he can't remember what you are asking), and the police are hassling him to get off the phone (time pressure, and an explanation of why the quality of the call is terrible). Will you get hung up on some code word he asked you to memorise years ago, which you can't even find anymore? And if you bring it up he just starts crying and tells you that you are his last chance to turn his life around. And you remember when he was a wee little kid and he fell and scraped his knee and you comforted him. Just the thought of pressing him on the code makes you feel like a terrible person. Or not. And then the scammer just finds someone more gullible. Theirs is a numbers game, after all.
https://www.linkedin.com/posts/fabianhemmert_handwriting-vs-...
It feels good to connect with humans that way.
I'm trying to do the same with my (vibe coded!) site "jetzt" (German for "now"), where I photo-blog impressions from everyday life. Only insiders will know what they mean beyond their aesthetic, and it also feels like a good way of human connection in these times.
(No food, no plane wings, just ugly banalities and beautiful nothingness from everyday life.)
https://ars.electronica.art/panic/de/view/reverse-turing-tes...
(I.e. trying to hide the fact that you're human, among a group of AIs)
How was this solved, actually? More training data, or was there more to it?
More training on fingers specifically.
Image VAEs (variational autoencoders) are functions that compress the image down into a latent (working) image. The earlier VAEs would mess up fine details. At the most basic level, just picture compression issues.
Training against bad previous work with six fingers.
Models working at 1024 instead of 512 - there's a rough arithmetic sketch of why that helps below.
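As a rough illustration of the resolution point, here's a back-of-the-envelope sketch assuming the ~8× spatial downsampling typical of Stable-Diffusion-style VAEs (the downsampling factor and the hand size are assumptions for illustration only):

```python
# Back-of-the-envelope: how much latent "real estate" a hand gets at different
# image sizes, assuming an 8x spatial downsampling VAE (assumption; varies by model).
DOWNSAMPLE = 8

for image_size in (512, 1024):
    latent_size = image_size // DOWNSAMPLE   # side length of the latent grid
    hand_px = image_size // 10               # assume a hand spans ~1/10 of the frame
    hand_latent = hand_px // DOWNSAMPLE      # latent cells covering that hand
    print(f"{image_size}px image -> {latent_size}x{latent_size} latent; "
          f"a ~{hand_px}px hand covers only ~{hand_latent} latent cells across")
```

At 512 that hand is only a handful of latent cells wide, so fingers sit near the limit of what the VAE can represent; at 1024 it gets roughly twice the detail.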
Mexed Missaging.
Though, if you believe that Netanyahu is dead, then it will look to you like an attempt to convince you otherwise, but I don't think this was the goal of the author. Still, if you are in this situation, try to run with the opposite hypothesis and think of ways Netanyahu could prove he is alive. Or, if that seems difficult, imagine any other prime minister who accidentally posted a six-fingered video of herself and now faces the problem of proving that she is alive. You'll get the idea of the article easily.
So it's all context clues really - i.e. if the video tracking shot is sort of within the constraints of the models, plays to obvious agendas, etc., then I might be inclined to go looking for artifacts... but in the propaganda game? That's already game over. And we're all vulnerable to the ground shifting beneath us - i.e. how much power would there be if you had a model which could just slightly exceed those "well known" limitations?
IMO the failure to implement strong distributed cryptography much earlier in the digital age is going to punish us hard for this - i.e. we haven't built a societal convention of verifying and authenticating digital communications amongst each other, and technology has finally caught up to the point where it can fool our wetware. It was needed well before this - e.g. the rise of the telephone scam and VOIP should've been when we figured out how to get people into the habit of comprehending digital signatures and authentication. We didn't, though, and now something much more dangerous is out there.
It also included personal details only her closest friends and family would know. I assume this is being done at scale now. These are NOT Nigerian prince scams of yesteryear; this is something entirely different.
I thought I'd get at least some traction, considering part of the family works for No Such Agency. Nope. <shrug>
Somewhat related: over the last few weeks at work we've started having people call our customer support asking for their e-mail addresses to be changed. The first one went through, but the scammer somehow messed it up and the address bounced. They called back in, and the support person they talked to recognized by voice that it wasn't the same person they'd talked to in the past. Now we've had this happen to 3 different accounts; the first two times it was people with thick Indian accents, and the most recent one was suspected of being an AI-generated voice.
The people you'd want to be wary of would be the ones that'd look legit.
e.g. "yes i guess i will send my son $400,000 in cash tonight because he's been kidnapped, and i know it's real because there's no AI watermark that all the nice US/EU companies use."
Necessity is the mother of invention.
It’s absolutely asinine that we’re still relying on paper birth certificates, social security numbers, and stupid tax systems. I’m interested in breaking everything we have to see what comes next.
I truly believe that it is a crime against humanity
Because no frontier model is allowed to go against the popular narratives of the day.
That's why it always falls back on the same tired formalistic clichés, like "Not this, but that", rampant baiting, and sensationalism, because that's what would get high marks from your typical low-rent liberal arts annotator.
Tell us more about this axe you appear to need to grind.
Really? The coffee in his cup, filled to the brim, did the most bizarre dance possible. And he handled the cup as if it were empty, without any care.
But about deepfakes: things like these exist to re-add a sixth finger. Once you do that, you can claim the video was generated.
https://www.etsy.com/listing/1667241073/realistic-silicone-s...
Perhaps we need tamper-proof authenticated cameras in all major cities worldwide that publish a livestream 24/7, and you could then stand in front of one to prove your human existence...
This could be something that notaries around the world could offer as a service.