> on the contrary, it must be developed.
No, it mustn’t. There’s not a gun to your head, forcing you to do this. You want to develop this technology. That’s why it’s happening.
Technology isn’t inevitable. It’s a choice we make. And we can go in circles about whether it’s good or bad, but at least cop to the fact that you have agency and you’re making that choice.
[1] https://github.com/iperov/DeepFaceLive/issues/41#issuecommen...
Also, some entertainment works by artificially instilling a desire that cannot be fulfilled. If people can use deepfakes and masturbation to defuse that desire, it might be a moral positive for them.
Let's flip the argument upside down. Let's imagine you want to produce porn content, but don't want to be recognized. This technology allows that.
You might not like the art, but deepfakes are not harmful.
On the other hand, somebody can say "if we don't, then others will". So regulate and ban the technology, then.
I'm not convinced this represents "progress". That aside, the goal of resisting it isn't to allow you to become complacent some day. The goal is to avoid being harmed.
How many generations did it take for society to adapt to the industrial revolution? What were the spasms which occurred during that period?
As for deepfakes, we know before we even begin that it breaks (utterly moots) social norms about identity, trust, reputation.
How many people are going to die while societies adapt?
Is that price acceptable for the benefit of more amusing viral videos on TikTok?
That is a nice piece of history.
Ethics is important. And one consequence is: I may die but at least it was not because I made it possible.
Saying “everyone with means to develop would need to choose not to” is like saying “every writer would need to choose not to write this book”. It’s not inevitable.
Let's imagine you have a horrific face malformation, or went through a terrible accident that left you disfigured. Why not allow you to pass for an ordinary person, at least in video calls, so you can have a somewhat normal life?
It took me two minutes to come up with a legitimate use for this technology. I imagine there are many more only a couple minutes away.
If I've understood that right, both your comments can agree. There are strong arguments to say we _should_ develop this technology, but there also many good counter-arguments.
I disagree; society could stop inventing and humanity would likely continue to exist. However, that would require the population to stop being interested in the development of technology, which I think won't happen.
The point is, if this engineer doesn't write this code, someone else will. If a person can imagine a tool, someone will eventually want to make it a reality.
I think deepfakes and AI generation are going to change the world in a way that I'm a bit leery of. It feels safer to just stop here, but that isn't a possibility: even if deepfakes are made illegal, people will still create their own tools and have their own hard drives for storage.
Maybe these things should be illegal or something; hard to say. But the tech will move forward either way, IMO.
I’ll promote here one of my favorite reads, The Technological Society by Jacques Ellul. It feels very apropos here as we embark on a new smorgasbord of technologies that can do us harm and good. There will always be people who want to develop technologies even if they are harmful. We need not only ask ourselves “what is the benefit”, but “at what cost?”
I think it is inevitable actually. The history of humanity seems to suggest so at least.
See, and that's the problem: you can only speak for "you", not "we", because someone else will do it. Technology, and with it information, cannot be stopped.
Knowledge isn't useful if you can't apply it in practice.
My work's IT department has a habit of sending out phishing tests that closely match legitimate emails our employees and vendors send. We're supposed to stay vigilant by checking the Reply-To field, which is hidden by default on mobile devices. (As is the URL things link to.)
I will never stop triggering those. Why? My brain can't tap the header field for every single email I read, and I can't ignore legitimate emails.
Every time on my phone, I'll click the link, then check the URL in the browser. I can't get my brain to check before clicking the link. (Trust me, I'm neuro-divergent and I've tried for years. It won't reprogram.)
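The manual Reply-To check described above could, in principle, be automated client-side. A minimal sketch using only the standard library; the sample message, addresses, and function name are made up for illustration:

```python
# Hedged sketch: flag messages whose Reply-To domain differs from the
# From domain, a common phishing tell. All addresses are illustrative.
from email import message_from_string
from email.utils import parseaddr

def reply_to_mismatch(raw_message: str) -> bool:
    """Return True if Reply-To points at a different domain than From."""
    msg = message_from_string(raw_message)
    _, from_addr = parseaddr(msg.get("From", ""))
    _, reply_addr = parseaddr(msg.get("Reply-To", ""))
    if not reply_addr:
        return False  # no Reply-To header: nothing to compare
    from_domain = from_addr.rpartition("@")[2].lower()
    reply_domain = reply_addr.rpartition("@")[2].lower()
    return from_domain != reply_domain

suspicious = reply_to_mismatch(
    "From: Payroll <payroll@example.com>\r\n"
    "Reply-To: payroll@examp1e-payroll.net\r\n"
    "Subject: Urgent\r\n\r\nPlease review."
)
print(suspicious)  # True: the Reply-To domain does not match
```

Of course, real mail clients could surface this automatically; the point of the commenter is precisely that humans should not have to do this check by hand.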
This could also spur the development of more and better image/video signing technology.
And yes just because someone develops this in the open doesn't mean that someone else is not developing it in parallel but hidden.
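The signing idea above can be sketched in a few lines. This is a toy: real provenance schemes (e.g. C2PA-style signing) use asymmetric key pairs and embedded metadata, whereas this stand-in uses an HMAC with a shared key purely to show the shape of the idea. The key and sample bytes are invented:

```python
# Toy sketch of content authentication: a publisher tags media bytes,
# and a verifier holding the key can detect any tampering afterwards.
# Real signing tech uses asymmetric signatures; HMAC is a stand-in here.
import hashlib
import hmac

KEY = b"publisher-secret"  # illustrative; real systems use key pairs

def sign(media: bytes) -> str:
    return hmac.new(KEY, media, hashlib.sha256).hexdigest()

def verify(media: bytes, tag: str) -> bool:
    return hmac.compare_digest(sign(media), tag)

frame = b"\x00\x01video-frame-bytes"
tag = sign(frame)
print(verify(frame, tag))               # True: untampered
print(verify(frame + b"edited", tag))   # False: any edit breaks the tag
```

The design point is that verification, not visual inspection, becomes the basis for trusting footage once deepfakes are cheap.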
During WWII, the protagonist's parents are shot by Germans after the body of a collaborator is found in front of their house. The parents were arguing with the soldiers about something when arrested and ended up shot during the arrest.
The collaborator was shot by the resistance in front of their neighbours' house, and the neighbours moved the body in front of the protagonist's house.
Over the years, he encounters many people involved in this event, and starts seeing things from many sides. One of the themes explored is who bears moral responsibility for his parents' death. The Germans for shooting them? His mother for arguing? The neighbours for moving the body? The resistance for shooting the collaborator? The collaborator for collaborating? All of their actions were a necessary link in the chain that led to their death.
One of the characters utters a simple and powerful way of dealing with that morality: "He who did it, did it, and not somebody else. The only useful truth is that everybody is killed, by who he is killed, and not by anyone else."
It's a self-serving morality, because the character was part of the resistance group that shot the collaborator in a time when reprisals were very common. But it's also very appealing in its simplicity and clarity.
I find myself referring back to it in cases like this. In the imagined future, where this tech is used for Bad Things, who is responsible for those Bad Things? The person that did the Bad Thing? The person that developed this tech? The people that developed the tech that led up to this development?
I'm much inclined to only lay blame on the person that did the Bad Thing.
For example, in the story above, the argument against the Germans being to blame would be that the protagonist has no agency with respect to them being there and shooting people. The counter argument "He who did it, did it, and not somebody else" is shifting perspective from the protagonist to society, which does have agency with respect to the Germans shooting people.
Sometimes you just didn't have the complete picture and you might have been acting ideally given the limited information you had; that is, if you changed your behavior, it might change for the worse in other situations. It's not about self-punishing, it's about learning!
It was Germany who created the situation as part of their quest for world dominance, it was Germany who trained their soldiers to be violent and cruel, who created the policies and it was German soldiers who shot.
In the US, we have the Second Amendment, which has probably prevented many, many lawsuits against gun manufacturers and dealers (but there have still been a few).
Unless there's a constitutional amendment, protecting the tech, it's pretty likely that some hungry lawyers will figure out how to go up the food chain, and get some money.
If the tech is used to go after politicians and oligarchs (almost guaranteed), you can bet that some laws will appear, soon.
In the US, threat of lawsuits govern many decisions. It's a totally legitimate fear, and politicians are notorious for protecting their hunting grounds, and their privileges.
Look at the fight over encryption. It is a "no-brainer," if you are aware of the tech, yet politicians have a very real chance of doing great damage to it.
If you'll allow me to be cynical for a moment: I believe the PLCAA was passed because the people in power felt safe from guns, behind their security and metal detectors. I don't think they feel safe from deep fakes or hackers (nor is there much lobbying money), so a similar law protecting this technology would never pass.
[1]: https://en.m.wikipedia.org/wiki/Protection_of_Lawful_Commerc...
There's no technology that can cure humanity of moral failing. Most things contain potential good and evil uses.
You remind me of a story I was told years ago which describes a series of "good" and "bad" events and each subsequent event changes the seeming meaning of the prior one.
The part I remember is where a man is given a horse and rejoices at his good fortune and then he is thrown from the horse, breaking his leg, and he decries his misfortune. Then war breaks out and he is not conscripted because his leg is in a cast and he again feels relieved and fortunate.
Imagine a car suddenly swerves in front of a motorcycle rider, causing him to crash. Perhaps it was a rainy day, the rider was talking on the phone, traveling slightly above the speed limit, and sitting in the car’s blind spot. The car was at fault for causing the crash, but if any one of those other factors had been eliminated, the rider might have found a safe escape path, performed an emergency maneuver, and avoided the crash.
Ultimately, the final piece in the chain of events directly causes the crash, but a multitude of variables is necessary to lead to that situation. Blame can be assigned to a higher degree to a single party, but I believe it can also be applied, to varying degrees, to multiple parties.
It seems the authors of the tool in question do not see where their technology fits into a larger puzzle. They may not be the ones who ultimately use it for malice, but it is worrying that they so readily shrug off ethical considerations when they have agency over this piece of the equation.
A better analogy might be a guy who likes to hand out free bolt cutters on the street.
A better heuristic, and almost as simple, is to rate responsibility in proportion to power, ie those with the most leverage to steer events take the most blame. Blame is information. It has no mass, and can be divided effortlessly.
If this is used to scam old people out of their belongings then you really have to question your actions and imho bear some responsibility. Was it worth it? Do the positive uses outweigh their negatives? They use examples of misuse of technology as if that would free them of any guilt. As if previous errors would allow them to do anything because greater mistakes were made.
You are not, of course, completely responsible for the actions others take but if you create something you have to keep in mind bad actors exist. You can't just close your eyes if your actions lead to a strictly worse world. Old people scammed out of their savings are real people and it is real pain. I can't imagine the desperation and the helplessness following that. It really makes me angry how someone can ignore so much pain and not even engage in an argument whether it's the right thing to do.
In fact, anecdotally, it seems the people with the technical ability are least likely to have a nuanced understanding of the ethical impact of their work (or, more optimistically, it's only people with the conjunction of technical ability and ethical idiocy who would work on this, and we're not seeing all the capable people who choose not to).
Also, what's with all the people in this thread coming up with implausible edge cases in which deep fake tech could be used ethically to justify a technology that will very obviously be used unethically in the vast majority of cases? It's almost useless for anything except deception—it is intrinsically deceptive. All the 'yeah but cars kill people so should we ban all cars?' comments miss the obvious point that cars are extremely useful, so we accept the relatively small negatives. The ethical balance is the other way around for deep fake tech. It's almost entirely harmful, with some small use cases that might arguably be valuable to someone.
Correct. There is a tendency to think that because a person is exceptionally intelligent or skilled in one area, they must also be intelligent in other areas. It's simply not the case. An expert is authoritative in the areas of their expertise, but outside of those, their opinions are no more likely to be correct than anyone else's.
This error is often leveraged in persuasion campaigns -- thinking that, for instance, a brilliant physicist's opinions on social policies are more likely to be accurate than any random person on the street.
"Moral intelligence" (or "MQ") and "moral cripples" are my provisional terms for talking about this.
This deepfake stuff is a difference in degree, not kind, and once these people figure out how to use AI to help them, everyone is going to have to level up their defenses all over again. You ain't seen nothing yet.
I'm thinking now this is to justify away the collective guilt of bringing harmful products into the mainstream.
It seems to come from the same origins as "crypto can't be regulated", "government can't do anything", and "it's OK because it's legal", and it always worries me to not see any sort of moral stance being taken anymore.
In my view, it's one of the things that makes HN great.
Let's just get it over with... Technology is evil. Let's ban technology.
Every time serious regulators have made a strong move, crypto markets and general population access to crypto products have actually been affected. I'm pretty sure if the US government would try to ban all crypto networks, most activity would stop, leaving only a few die-hard activists and some actual criminals.
Using fake media to trick people into believing anything used to be a privilege reserved for nation states and the ultra rich. Now that _ANYONE_ and their cat can do it, it should follow that nobody can believe anything that's on a screen anymore (this comment included).
I think this in general (text, audio, video) will produce a societal earthquake and, in a way, send us back to the Middle Ages: you can't really verify things yourself, because everything can be faked; all you can do is anchor yourself to some trusted authority.
Imagine you read on hacker news an (AI generated) article about a new breakthrough in physics - new convincing evidence for the cyclical universe hypothesis. In the discussion, there will be a lot of seemingly informed comments arguing about this (all AI generated), links to video presentations from reputable scientists (all AI generated) and papers (all AI generated). It will be all wrong (= there wasn't any breakthrough in the first place), but impossible for a non-physicist to assess correctly.
In a way it will lead to centralization of internet and knowledge, people will stick only to their trusted sources. For some it may be Wikipedia and NYTimes, for others some AI-generated island of knowledge/manipulation.
I also wonder what effects this will have on social platforms when 99% of content is generated by AI.
Marshall McLuhan - Understanding Media
Even outright banning this technology won't make a dent in the bad uses, since those individuals are highly motivated, highly competent, and don't care in the least about the ban.
It's not like the development won't happen, it will just be hidden, making it less obvious to the potential victims (increasing the size of this group, since you then need to be "in the know").
So no, the only way forward is to have this out in the open as much as possible so that as many people as possible become aware of how trivial it already is to fake stuff.
So, what are you saying ?
The church has been around for years and despite the millennia of lies we are all still here and thriving and improving.
Bullshit spreads faster, but truth wins out over centuries because it doesn't go away when you stop looking at it.
Once you let the genie out of the bottle, a wish will be made. A technology might not be inherently bad, but neither are knives, and we don't leave those lying around.
That said, it is the human species that develops technology, rarely is one human individual capable of holding back a technology.
Knives seem like the perfect example, really. We do leave them lying around in our drawers, usually under zero real security against any malicious guest, but we recognise that the real danger lies in the choices of the people we invite into our houses, not in the existence of the knives themselves.
You can certainly argue though as you have that humans simply shouldn't have the choice to do certain things - what comes to mind immediately is nuclear weapons.
The problem comes from the scale, it is a knife vs nuke question, but you seem to stop a bit too short in your reasoning. Yes both can kill, they just don't operate on the same scale.
You could forge documents 400 years ago; it doesn't mean it would be ethical to release software that lets you forge any document at scale, instantly and for free, in a single click.
One steam engine is a marvel, 1.4B ICE cars on earth is a nightmare
It's always about scale, almost never about the original purpose/intent of the tech. Modern tech develops and spreads infinitely faster than anything from even 20 years ago.
Neil Postman has two amazing books on this subject.
It is not even that technology is good or bad, any technology will have good and bad aspects but the main issue is that we have surrendered culture and agency to technology as a society.
There is no going back or fix now. The fix for all was a culture that devalues bad uses of technology so that there is no money in creating something shitty. We basically have the opposite of that. Even completely useless technology is worth a ton of money because of our culture.
The solution is not Luddism either. Especially a ridiculous techno-Luddism.
For me, the Faustian bargain with technology has already been signed in blood and there is nothing to do other than enjoy the roller coaster ride.
Technology is not neutral and never will be because the nature of technology is to be a means to an end, and as such, to be inseparable from its end.
Technologies enable certain future outcomes.
That some of those outcomes are deemed "good" or "bad" just depends on whether they're compatible with the ends that were pursued initially and the worldview that supports them.
Some technologies have unexpected outcomes, but that doesn't make those outcomes independent of those technologies, either.
>The solution is not Luddism either. Especially a ridiculous techno-Luddism.
Cough.
It's worth noting that nuclear weapons seem to have prevented more deaths than they caused. The MAD doctrine has prevented direct confrontation between the superpowers and limited their military conflicts to proxy wars.
And this is a technology that has almost no application beyond vaporizing cities almost instantaneously.
If our children all die in a nuclear holocaust, it doesn’t really matter how many lives we saved, does it?
edit: I’ll take your knee-jerk DV, and any others, as an admission of an inability to speak to positive utility of this technology.
1. CGI for video editing - lower the bar of entry to de-age actors, or use a stand-in. Actor can't make it to a shoot that day? No worries, replace their face in post easily.
2. Identity protection - a cold call with someone who reached out to you, where you're not sure if they're safe or dangerous; could be a good way to protect yourself.
3. Social media content for clients - become a fake avatar for hire essentially, customize your narrator for any video or brand. Video call centers with fake video (they already have voice modifiers and fake names), Enhanced VTuber sort of things (virtual avatars for streaming).
4. Unexpected outcomes: for example Holly Herndon created (and sold) access to an AI replica of her singing voice (n1), and I could see artists selling or renting access to their faces.
Obviously this can and will be used maliciously, but I personally could see myself using it for more positive reasons.
n1. https://holly.mirror.xyz/54ds2IiOnvthjGFkokFCoaI4EabytH9xjAY...
With that said, almost every use case cited was financial or monetary gain, whereas I enquired about social utility and value.
That dishonesty (i.e. the creation of a fake avatar) is cited as being of social utility strikes me as a reach. I don’t see how adding more dishonesty and facades to the world adds social value, but then I may just be of limited imagination.
- Representing assistive robots/software with friendly human faces
- Reconstructing the likeness of people with permanent facial injuries when connecting with family
Other, questionably "legitimate", commercial uses are already in production:
- auto-generated corporate training videos
- "Personalized" advertising
I'm hating it already.
I could see it being used in AR to conceal identity to facilitate more equitable medical outcomes, I suppose.
Thank you again for the input! I was honestly at a loss for positive applications outside of financial gain.
I haven’t seen any ads driven by deepfake, or at least I don’t think I have. That advertising bit does sound rather obnoxious though!
[0] https://collider.com/trey-parker-matt-stone-almost-made-sass...
I suspect it's going to become popular for both consensual-deepfake of oneself (PR, magazines, actors, pop stars, any form of public speaker) and "bought out" deepfake (actors selling out their image rights and then losing creative control; dead actors, etc.)
The political-deepfake is really going to accelerate the debate over how much free speech permits you to just lie about people, though.
Another analogy. Say somebody makes some hacking kit. Say it uses zero day exploits to compromise Windows, Mac, and Linux. Would any of us take issue with that? Would it be a different story if it was made into a push-button tool like WinNuke was in the 1990s? Or automated to the extent that somebody who can make a word doc could employ it against your systems? Is there really no feasible line of distinction here, in your eyes?
Think about it: people choose to trust or not trust based on a face. When deepfaking becomes a tool easily available to every average joe, appearance will lose some of its power. People will learn to lose their irrational trust in faces.
This means that the incentive to develop this technology is already there, and so it WILL be developed no matter how much people wish it wouldn't.
The only difference at this point is whether some of the implementations are developed in public view or not. If none are public, then all of them will be done in secret, and our opportunities to develop countermeasures will be severely hampered by having fewer eyes on it, and a smaller entry funnel for potential white hats.
Sure, but there's a value in delaying that development. Delaying harm is a valid tactic.
> Sure, but there's a value in delaying that development. Delaying harm is a valid tactic.
It is a horrible tactic when your adversary has ultra-deep pockets & mountains of bodies to throw at the problem.
Whatever's being developed here has 1/10th the capability of the tech being developed in black site facilities at the behest of national militaries.
But software is not "tech". It is the explicit expression and projection of cultural objectives and values onto a particular type of tech. You can take the exact precise hardware we have today and reprogram a million different worlds on it, some better, some worse.
Developers are simply the willing executioners of prevalent power structures. Deal with it. If you have a moral backbone (i.e., you don't agree with the prevalent morality as expressed in what the software industry currently does) do something about it.
[0] Of course, upon deeper examination, overall system design (e.g. how client- or server-heavy the configuration is, what kind of UI is promoted, etc.) is not neutral either. Cultural/political/economic choices creep in *everywhere*.
But all of them with computing as a cultural artifact of some kind — and certainly not a "neutral" one. I've basically come to think of the idea of "general purpose computing" as a trojan horse for the naive "tech is neutral" POV. "Making things computable faster" is not a "neutral" change of the universe; it's a step in a specific direction, defining the boundaries of a particular future cone to the exclusion of many others.
Machine guns: an advanced piece of engineering widely known to have been developed purely as an academic exercise. No one could have expected other uses.
This technology is going to be developed regardless of what we do here. Please realize that you are not advocating for it not to be developed: rather, you are advocating for it not to be developed in the open.
Let this technology be made by someone else. Let it be made by governments and whoever in secret. We know it's possible, but just because it's possible doesn't mean you have to be the person who does it. If we can delay scammers using this stuff for a few years, we'll keep millions of people from being scammed in the intervening time. That's a win in my book.
We actually do blame them, except for airplanes. Most of these were invented at a time when lives had much less value, and they are of no use unless some half-minded pig attacks you or tries to undermine your defenses.
I’d like to see how this line of reasoning changes when someone releases a virus for your DNA in your backyard, made with funnyjokes/easy-create-virus-for-a-drone-app.
So, what's the possible scenario for that outcome? Well, look at the upcoming elections in Nigeria. The BBC writes: "With an estimated 80 million Nigerians online, social media plays a huge role in national debates about politics. Our investigation uncovered different tactics used to reach more people on Twitter. Many play on divisive issues such as religious, ethnic and regional differences." ABC News writes: "At least 800 people died in post-election violence after the 2011 polls."
Adding deepfakes into this mix can trigger violent reactions. Should that happen, the creators of the deepfakes are obviously to blame, but those who enabled them, including the original researchers, also bear responsibility. Ignoring that is just putting your head in the sand.
No one can be blamed for unintended consequences (most of the time, I guess). However, the fact remains that one is a crucial part of the chain of events that led to the very existence of the consequence.
But you wouldn't blame the man who puts a poison pill near a playground, and claims "I didn't make them eat it"?
FOSS comes without warranty and liability. Read the license.
But as the barrier to entry for really convincing output goes up (768px /1024px training pipelines, and beyond), and it suddenly becomes something that one person alone can't really do well any more, the 'amateur' stuff is going to look far worse to people than it does now. You just have to wait for that barrier to rise, and I can tell you as a VFX insider that that is happening right now.
Deepfakes are the reverse of CGI, which began as an inaccessible technology and gradually became accessible, before the scale of its use in VFX reversed that again.
Now, assuming you can either afford or will pirate the right software, you could probably match any CGI VFX shot in a major blockbuster if you gave up your job and worked on it non-stop for a year of 18-hour days (assuming you'd already been through the steep learning curve of the pipeline). So it's out of reach, really, and so will the best deepfakes be.
This stuff everyone is so scared of will end up gate-kept, if only for logistical reasons (never mind any new laws that would address it) - at least at the quality that's so feared in these comments.
Those people possess a complete lack of a moral compass.
Whether it is evil or not, we can debate, but let's start by assuming good faith by all debate participants.
Do you trust videocall participants because you recognize their faces and voices? ...Or because a server certified by a root CA has authenticated the other participants?
The age of deepfakes has started, nobody can stop it. Improving our mental security models will become as essential as literacy.
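The point about CA-certified servers can be made concrete. In practice, "trusting the other participant" on a videocall already rests on TLS certificate verification rather than on faces; a minimal sketch of what Python's default client context enforces (no network access needed to inspect it):

```python
# Sketch: the trust anchor in a videocall is the TLS handshake, not the face.
# Python's default context refuses connections unless the peer presents a
# certificate chaining to a trusted root AND matching the requested hostname.
import ssl

ctx = ssl.create_default_context()
print(ctx.check_hostname)                     # True: name must match the cert
print(ctx.verify_mode == ssl.CERT_REQUIRED)   # True: a CA-signed cert is mandatory

# Wrapping a real socket with this context, e.g.
#   ctx.wrap_socket(sock, server_hostname="example.com")
# raises ssl.SSLCertVerificationError if either check fails.
```

Improving "mental security models" largely means internalizing that this machine-level verification, not a familiar face or voice, is what remains hard to fake.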
The alternative is that it's developed hidden and used by the most vile and evil, without many people being aware of it (which it most definitely will be).
As with nuclear, the cat's out of the bag (or is it the baby's bathwater already spilled?); there's no way to turn back the clock on technological innovation.
Bullying is alive and well even without this tech. I guess scissoring a photo out of the yearbook and gluing it into a porn magazine would work just as well. When it comes to bullying, it's not about believability but about the psychological damage.
If somehow we could get the United Nations to agree to ban deepfake development worldwide, then surely we should enter an ethical discussion about whether we should do it or not. In a world where sophisticated actors already have access to these (and much better) tools, having an open-source GitHub repo is a good thing in my view.
I know that it's expected that a man of a certain age will begin to say these things, but I think it's true now in a way that it was not true about "those young whippersnappers with their motorcars": we've taken a wrong turn, and I don't like where we're headed.
I personally think the internet inherently devalues the humanity of interactions - even on voice chat on Discord or some other service. There's something missing from what we get in person that we're losing out on, and I think that's part of why the internet is, generally, a cesspool (doubly so with anonymity thrown in). But we can't escape -- go to a concert, for instance, and how many people are actually paying attention versus just capturing the moment to prove they were there and share it on Snap/Insta?
Like you, I fear we're going down a bad road... and they're all just "I can't wait to see how this changes things!" without ever seeming to stop and seriously question it. Their arguments seem to come down to "Well, it's a perfect connection for me and I think it's neat"... but then they complain about being depressed and not getting out and doing anything. Discord and the internet aren't a substitute for IRL, and we're about to take the people who assume they are to the extreme with this new tech. A reckoning is coming, I fear.
So, deepfake authors want credit for their work. That's perplexing.
What's more, this is happening while they seem to be ignoring the ethical concerns raised in the issue, citing that people can do whatever they want with the tech.
Completely inappropriate and unethical.
What value does this technology add? Bringing Princess Leia back to another Star Wars movie? Anything more than that?
If they do not make them FOSS in public, then the Conspiracy will invent their own and use it for bad uses only.
Furthermore, even if a program is written, you can decide not to use it; that it is written (as FOSS) means that you can read how it works, now that someone else has written it. You can also execute it on a computer, if that is what is desired. Also, if it is well known enough, then hopefully if someone does use it deceptively against you, you might be able to guess, or to figure it out (although it might be difficult, at least it might be possible if it is known enough).
I have no intention of using such a thing, but someone else might figure out uses for it.
(For example, maybe there are some uses that can be used with movies, for example, if the original actor has been injured for an extended period of time (including if they are dead) or if they want to make up a picture of someone who does not exist. (Although, they should avoid being deceptive. For example, include in the credits, the mention of using such a thing.) Even if it is considered acceptable though, some people will prefer to make movies without it, and such a thing should be acceptable too anyways.)
(I think even in Star Trek, in story, in some episodes they made deepfake movies of someone. And even in Star Trek, both good and bad uses are possible. Or, am I mistaken?)
Nevertheless, there may be some dangers involved, but there are potential dangers with anything; if you are careful, you can hopefully avoid them.
Also, as mentioned in some other comments: "Alternatively, having tools like this easily available makes it easier to raise awareness and build teams to combat them." Another thing mentioned in another comment: "The entities with interest in it have bigger pockets than some random open source project. ... There are many small entities with interest in such a technology which don't have huge budgets. Small terrorist / extremist organizations pushing a specific agenda." However, in the small case it is perhaps less impactful, now that it is FOSS and allows others to raise awareness and combat them as the other comment describes. In the big case, where others could make it independently, having this FOSS implementation helps even more against them, since they would otherwise just build it themselves.
Can you believe a politician saying something on TV? Hell no! You should reason about the whole political play he is a part of. Should you think badly of a person you find on a porn site? Absolutely not; what good could come of that in any case?
It has always been like this, but now there is a technology that can push this awareness into common sense.
Why should people believe anything you say on this topic? How can they know you're not a bot or a psy-ops troll working for Xi or Putin with the mission to undermine the very fabric of Democratic societies?
What truths/facts should people NOT doubt when they're using "common sense" and thinking "critically". Are we not all affected by propaganda that directly targets what we consider "facts" and "common sense"?
So, to complete the paradox, I will claim one "fact": as humans there are VERY FEW "facts" we can know from first principles, and there are even fewer principles we can know for a fact to be universal, or even useful.
We DEPEND on at least some authorities, whether those are people, institutions, ideas or beliefs.
When someone says "think critically!", they mean that you should put a higher burden of evidence on SOME of your beliefs. But not all of them, and definitely not those that this someone takes as axiomatic truths.
And my main worry is not finding some celebrity's face on PornHub. My main concern is the day when almost everything that claims to be news has been tampered with in such a way. If we have no way to tell a deepfake from a true video, we can be made to believe absolutely anything. This can be used to ruin lives, trigger wars (including civil wars), even cause nuclear holocaust. And it's already happening. Twitter is full of lies of every kind, from all parties and countries.
We may need to find a way back to a world where there is one main shared narrative that we can all more or less trust. Where the custodians of the institutions that provide the narrative understand the need to maintain trust and the risks of undermining it for personal gain, and where there are checks and balances that remove bad actors from such positions.
Without this, I believe we're f'ed, but HOW to obtain it, I don't know.
When was such time in history? I'm trying to think of one and I can't. Misinformation has spread in the past too, albeit slowly. The checks and balances did nothing to stop the custodians of the institutions from dismissing heliocentrism, the germ theory, or the continental drift. And for a modern example, the checks and balances failed when the authorities shared a narrative about weapons of mass destruction in Iraq.
All of that was possible without today's technology.
They shouldn't. I would just be glad if they would analyze my opinion and try to understand what I mean. Whenever I mention a fact, they should check it if they consider it important.
> How can they know you're not a bot or a psy-ops troll working for Xi or Putin with the mission to undermine the very fabric of Democratic societies?
I am a psy-ops troll working for myself; my mission is to destroy unconscious beliefs, help people choose their own perceptions and beliefs consciously, and develop habits of valid logic so they become unmanipulatable and happy. Perhaps this indeed can undermine the fabric of Democratic societies. Democracy (let alone autocracy) is not freedom; it generally is a dictate of opportunist manipulators who manage to mesmerize the majority (and the majority is never too bright). Isn't it?
> As humans there are VERY FEW "facts" we can know from first principles
"A person looking like this is being shown on TV saying that right now" is a fact (not 100% reliable, perhaps I'm dreaming, but most probably it is). The statement he announces is not a fact; it is probably a lie. But now we can speculate about why he would say that now, given the current context, how probable it is that he is not a deepfake, and whether this matters.
Only five regular polyhedra can exist in Euclidean 3D space; that is another fact.
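That second fact does follow from first principles; a sketch of the standard argument via Euler's formula (not in the original comment):

```latex
% Euler's formula for a convex polyhedron, with p-gon faces
% and q faces meeting at each vertex:
V - E + F = 2, \qquad pF = 2E, \qquad qV = 2E
\;\Rightarrow\; \frac{2E}{q} - E + \frac{2E}{p} = 2
\;\Rightarrow\; \frac{1}{p} + \frac{1}{q} = \frac{1}{2} + \frac{1}{E} > \frac{1}{2}.
```

With $p, q \ge 3$, the only integer solutions are $(p,q) \in \{(3,3),(4,3),(3,4),(5,3),(3,5)\}$: the tetrahedron, cube, octahedron, dodecahedron, and icosahedron.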
1. I want to deepfake myself to have an avatar for online interaction.
2. I want to generate videos instead of filming by pasting people into existing videos.
3. Prevent a Face/Off scenario.
Imagine what that would mean for dubbed movies/TV if it gets good enough.
There are legit use cases, and that justifies the technology's existence. Bad actors don't make it immoral to develop a technology, IMO.
In Germany everything is dubbed. I have never cared about the mouth movement; it's all about the voice and the voice acting. But OK, it will be used for everything, and everyone will get used to the fake.
Still, it is (not will, already is) mostly used for revenge porn, synthetic CP and CEO fraud.
Face swapping + voice swapping + auto translate = your customer support can be anyone on the planet but look and sound familiar to you. Maybe you're getting over a facial injury.
Face swapping = you no longer have to put on make up. Just swap your made up face for meetings.
Face swapping + voice recordings + AI that learns = that scene in Contact where Jody Foster talks to the alien - but he takes the form of her father to make her feel more comfortable.
It's like releasing software that cracks all encryption (or, more accurately, authentication), to which previously only elite members of society had access.
It's not a physical weapon, it's an information-security weapon; I don't think the gun metaphor is appropriate.
Here it's obvious what's going to happen without robust legislation protecting the likeness of all individuals (and not just special-case celebs) from the non-consensual generation of new material.
The fact that such legislation is unlikely to arrive before an awful lot of suffering has occurred is a testament both to the naive belief that everything new is good, and to legislative processes that run at age-of-sail bandwidth and are riddled with vested interests, leaving them unable to handle the downsides once scaling breaks the happy-path assumptions.
Focusing efforts on the legislative process seems likely to be more productive than point solutions that rely on techies and scientists not to develop tech that can be used for nefarious purposes.
What if this same repo were owned by Nvidia, with a commercial interest in the product and lawyers ready to litigate? Would everyone still pile on it?
Is it not, on some level, disdain that it's just run by a bunch of guys who can be pushed around without much consequence?
Would we have a thread saying "screw it, shut down ChatGPT, it doesn't fit my moral world-view"? Why is that absurd, but this is fair discussion?
You might argue that the technology to make pixels on a screen resembling real humans is bad, but then you have to actually make that argument (and "some people got scammed" is indeed such an argument, albeit a pretty weak one), not just shift it to "this is technology, machine guns are technology, machine guns are bad".
It’s not going to end the argument though, not least because you’ll then have to assign a relative value to abstract things like “personal freedom” or “artistic expression”.
Even if you could quantify the losses to crypto scams, you can’t put an objective value on some of its more ideological benefits.
I guess the nonsensical argument "it's not the <insert technology> that <does the bad thing>, but the person using the <technology>" will never die out.
If you don't have <the technology>, it's much harder to <do the bad thing>; it has to be done hands-on, from a very close distance, with much higher risk for the perpetrator.
Even if the government regulates that I can’t physically have one, I could rent a rig of A100X8 servers in Russia, VPN tunnel into it, and download whatever artifacts the server generates.
How in the world could any government short of North Korea effectively regulate such a thing? Even if they went full authoritarian, the blowback from such draconian measures would be intense.
Edit: I really don't like the idea that I have to be paranoid about every interaction with tech.
If Company A / Country A / Person A won't do it, then Company B / Country B / Person B will do it and use it to bankrupt you / attack and possibly kill you / take advantage of you.
It's that simple.
Aspirin is unambiguously a technological good. As is the bicycle. There are many technological advances which are exclusively (or almost exclusively) used for the benefit of mankind.
Then there is morally neutral technology. Think of a hammer or a knife. Can be used just as easily for good as for evil. It's up to the person who wields it.
The third category is evil technology. Technology that just makes the world a worse place. Think of landmines, nerve gas, or biologically engineered viruses. If we could uninvent things, these are the things we would uninvent in a heart beat.
Broadly yes, but this severe simplification glosses over the fact that some technologies have more potential for abuse than others.
A machine gun has more potential for evil than a dessert spoon.
For me everything (image, text, sound etc) that comes from a computer is suspect nowadays.
Could crypto unironically be the way out of this mess? If a document isn't signed by a wallet associated with you, it should not be considered authentic?
This is far too dramatic. What's actually happening is that we simply no longer give _unattributed_ audio and video the benefit of the doubt. Data pedigree and attribution will become more important, but we're already in a place where real media is routinely misrepresented by bad actors (as a lot of purported Ukraine war footage has shown), and this will simply make us more skeptical still.
> Could crypto unironically be the way out of this mess
I did pitch a PhD study area on this[0] before deciding I didn't want to do a PhD or get into crypto. But I do think signing media and signed attribution chains will become more important ... but I don't see much need for it to be distributed. I (along with a huge number of other people) already sign documents digitally using a 3rd party service
0: https://github.com/pjlsergeant/multimedia-trust-and-certific...
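The "signed attribution chains" idea need not be heavy machinery. As a rough sketch (the record layout and hash-chain scheme here are my own illustration, not the linked proposal; a real system would use asymmetric signatures rather than bare hashes), each party handling a media file appends a record committing to the file's hash and the previous record:

```python
import hashlib
import json


def chain_entry(prev_hash: str, media_bytes: bytes, metadata: dict) -> dict:
    """Append one attribution record: who handled the media, chained to the prior record."""
    record = {
        "prev": prev_hash,
        "media": hashlib.sha256(media_bytes).hexdigest(),
        "meta": metadata,
    }
    # Commit to the record itself; canonical JSON keeps the hash deterministic.
    record["hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()
    ).hexdigest()
    return record


def verify_chain(entries: list, media_bytes: bytes) -> bool:
    """Re-derive every hash; tampering with media or any record breaks the chain."""
    prev = "genesis"
    media_hash = hashlib.sha256(media_bytes).hexdigest()
    for e in entries:
        if e["prev"] != prev or e["media"] != media_hash:
            return False
        body = {k: e[k] for k in ("prev", "media", "meta")}
        if hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest() != e["hash"]:
            return False
        prev = e["hash"]
    return True


clip = b"raw video bytes"
e1 = chain_entry("genesis", clip, {"source": "camera-01"})
e2 = chain_entry(e1["hash"], clip, {"editor": "newsroom"})
assert verify_chain([e1, e2], clip)
assert not verify_chain([e1, e2], b"tampered bytes")
```

A third-party signing service (as the comment suggests) would just countersign each record; nothing about the chain itself requires a distributed ledger.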
LOL
BTW.
It's not a "technology" in any classical sense of this word.
This is a funny and technologically useless rattle that can nonetheless be used like a Chinese-made Kalashnikov assault rifle.
> “Your scientists were so preoccupied with whether they could, they didn’t stop to think if they should.”
BUT.
What are people using deepfakes for, in good faith? Can someone provide one example that isn't malicious?
The best I can imagine is amateur filmmakers deepfaking their faces onto existing footage to cut costs, but that doesn't seem to outweigh the drawbacks.
These are just a few good-faith applications, but I think the list goes on.
They've cried wolf enough times. Everything is dangerous and everything is a crisis.
Consequently, I will ignore their warnings about this as well. It'll be okay. Tomorrow the community will forget about it. Tomorrow the crisis will be that some one-person blog is not GDPR compliant.
There will be a break-in period, but the conclusion will be: check the source of the information.
Making this easily available will make the break-in period easier.