It's important to note that a technology can also have more potential for abuse than for good.
Sure, you can reach a philosophical position where nothing is good or bad, or where the good is perfectly balanced by the bad, but if we're looking at increasing freedom, peace, and trust, it's hard to see how the upside of this tech is equivalent to its potential for abuse.
The best argument might be that eventually no one will trust video.
As such, tools like this just prove the conclusion: No one should trust video.
Sure, there was a "requirement," but I wouldn't really consider it legitimate. It took concerted effort to "earn" any grade lower than an A. I wasn't personally acquainted with any other student who seriously considered the social and ethical implications of computers during or after taking a 400-level course titled "Social and Ethical Implications of Computers."
On the bright side, I heard through the grapevine that rigor, or at least workload for the course, has increased since I took it.
Although anecdotal and perhaps specific to my institution, every recollection of my university career makes me feel deeply thankful that I eventually ended up pursuing a double major in mathematics. The quality and dedication of the instructors wildly surpassed those of the CS department. Mathematics professors were there to share knowledge. CS professors were pressured by hiring statistics into "preparing us for the workforce" by essentially quizzing specific interview questions like "what's the difference between interfaces and subclasses in Java."
You can't go overseas to get a bridge built across the river in your town. You can't get a foreign barrister to represent you in court in your country.
You can easily get a new social media website or IoT server built and hosted in any country you want.
The scenarios where the effects of bad ethical decisions most often make the news are not in domains that lend themselves well to local licensing regulation, like embedded avionics, medical devices, or internal banking systems. Problems do happen in those spaces, but they are rare.
None of this even touches on usability and the difficulty of restricting tool use to the "good guys". The jslint licence got a lot of stick for that.
I assumed all universities had a similar program; if they don't, students are really missing out.
The main problem is that there is no certifying organization that standardizes ethics (and handles licensing). A single developer refusing something on ethical grounds is pretty meaningless. A company will just find someone else who will do it.
I do agree with your premise, though, that programmers have a tendency not to think about ethics and should be held to some sort of code in the same way doctors and lawyers are supposed to be.
https://www.uio.no/studier/emner/hf/ifikk/EXPHIL03E/index.ht...
I totally support the idea that all professionals need to be ethical and moral, and I definitely try to do so in my life, but I've given up hope that society at large is interested in this. I think individuals generally are, but any company, once it becomes powerful, seems to also become evil.
Man if only we could make all our political science grads take an ethics course, then there'd be no more war!
Edit: see also https://www.acm.org/code-of-ethics
So maybe it's best this way. Computer scientists will still do bad stuff, but at least they'll be embarrassed about it when caught, rather than come up with clever justifications.
I first thought it was a lot of philosophical bullshit about good and bad, but the history part was an eye-opener.
Like the census data being abused by the Nazis to exterminate Jews. It always starts with good intentions.
Our job is to prevent the worst case. The worst case eventually will happen.
Which is why it is crucial everyone realizes how easy it really is to create these fakes, before the masses are duped in favor of the next war or genocide by these techniques.
Obama talked about it this afternoon. He said "This is bad, blah blah oh no." Of course, you don't believe me because I made this up. That doesn't preclude you from believing written quotes, given the right chain of trust. It's been great to have formats like video that didn't require the chain of trust for a while, but if that time has passed, there's nothing we can do. It is hard, but in the context of text where quotes have been easy to fake for ages, we have dealt with it. It's good for everyone to be on the same page.
E.g.: we all saw the close shots used by journalists to magnify a so-so event and make it newsworthy. Yet when people see one, many still consider it "news". We all know which politician lied last year. Yet when he speaks again, many still listen. We all know which company abused consumers. Yet when a new product is advertised, many still buy.
It's possible a video doesn't reveal the appropriate context. (e.g. what happened before the start of the video, and maybe what happened afterwards; or what's happening out of view).
That said, that isn't inherent to video. (And, sure, "swapping faces" doesn't lead to a more accurate portrayal).
Check out the "CaptainDisillusion" channel on YouTube; it's full of examples of people deceiving others using video editing software. To my knowledge, none of the examples he's talked about have used face swapping.
This is never the case independent of context.
You can imagine in a major famine that people may start killing each other over scraps of food. In that context a kitchen knife becomes more likely to be used as a murder weapon than to prepare the food that nobody actually has.
But nothing about the knife has changed, it's the context that has changed. And you don't solve the problem by banning cutlery and every other thing with a point or some heft, you solve it by resolving the famine.
You don't solve deepfakes by restricting information, you do it by adapting to their existence. Because they're not going away.
Github has an infamous history with imposing their feelings on projects they don't like.
Can you elaborate on this part? I don't remember seeing something like this before.
[0] - https://github.com/FeministSoftwareFoundation/C-plus-Equalit...
[1] - https://github.com/TheFeministSoftwareFoundation/C-plus-Equa...
Guidelines refresher:
“You agree that you will not under any circumstances upload, post, host, or transmit any content that:
is unlawful or promotes unlawful activities; is or contains sexually obscene content; is libelous, defamatory, or fraudulent; is discriminatory or abusive toward any individual or group; ...”
https://www.techdirt.com/articles/20150802/20330431831/githu...
There's lots more examples of their employees getting triggered and offended by various things and then arbitrarily banning or censoring projects.
Honestly it's a lost battle to try to censor hurtful projects names. At best you can moderate US-centric ones.
If the thing OpenAI made isn't an interesting enough discovery without its data (because it's all arbitrary anyway), but is very useful to spammers as a piece of code, OpenAI has truly achieved the exact opposite of the goals it was aiming for.
I mean, the Faceswap people have the same problem. They couldn't care less about porn. But that's what people used it for.
Human ingenuity will not be contained like this. I'm almost certain that somewhere between 10 and 100 people who saw the censored OpenAI release took it as a challenge to recreate it on their own.
This is fine. Maybe this makes things significantly more chaotic in the short term. But we have to take the long view on this. Ten years from now this tech will be seen as a joke compared to whatever they will have. It's time to start preparing for that.
There's probably a word for this sentiment that I'm not aware of.
What this might usher in is the era of cryptographically signed news articles. Not just credibility but verifiability.
Actually, how about cryptographically signing videos as they get written on the recording device?
Maybe there are even ways to sign data so that integrity can be validated on shorter segments, so that clips can be cut. Write a signature every 5 seconds covering the past 5 seconds?
Edit: This exists and the term for it is 'video authentication'.
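The per-segment idea above can be sketched in a few lines. This is a minimal illustration, not a real video-authentication scheme: HMAC with a shared secret stands in for the device's signing key (a real device would use an asymmetric key like Ed25519), and chaining each signature into the next makes dropping or reordering interior segments detectable while still letting a contiguous clip be verified on its own.

```python
import hashlib
import hmac

# Assumption: stands in for a per-device signing key; a real recorder
# would use an asymmetric key pair so verifiers don't hold the secret.
DEVICE_KEY = b"device-secret"


def sign_segments(segments):
    """Sign each segment, chaining in the previous signature so that
    removing or reordering interior segments breaks verification."""
    sigs = []
    prev = b""
    for seg in segments:
        digest = hashlib.sha256(seg).digest()
        sig = hmac.new(DEVICE_KEY, prev + digest, hashlib.sha256).digest()
        sigs.append(sig)
        prev = sig
    return sigs


def verify_clip(segments, sigs, prev=b""):
    """Verify a contiguous run of segments, given the signature that
    immediately precedes the clip (empty at the start of a recording)."""
    for seg, sig in zip(segments, sigs):
        digest = hashlib.sha256(seg).digest()
        expect = hmac.new(DEVICE_KEY, prev + digest, hashlib.sha256).digest()
        if not hmac.compare_digest(sig, expect):
            return False
        prev = sig
    return True
```

A clip cut from the middle still verifies as long as you supply the signature of the segment just before the cut, which is what makes editing-without-forging possible.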
That wouldn't prove much besides that the person sharing the video had access to the device's private key. I think the best you can do is timestamp the video by uploading a hash of it to a blockchain, but even then that only proves the video existed sometime before that instant.
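The timestamping idea above only requires publishing a fixed-size digest; the video itself never goes on-chain. A minimal sketch (the anchoring transaction is out of scope here):

```python
import hashlib


def timestamp_digest(video_bytes: bytes) -> str:
    """Return the SHA-256 digest to anchor publicly. Later, anyone
    holding the exact same bytes can recompute it and prove the video
    existed before the anchor's block time."""
    return hashlib.sha256(video_bytes).hexdigest()
```

As the comment notes, this only proves the video existed before a point in time, not who made it or that it is genuine.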
Huh, I'd never even considered that you could do that.
- a fact that has been distorted to be interpreted in a 180 degree way (Americans paying tariffs to the US Gov for buying Chinese goods = Trump saying "China is finally paying us!"), or
- a total untruth slipped in between valid concerns (like the fake Russian Black Lives Matter pages piggybacking off of civil rights abuses mentioned by the American Black Lives Matters campaigns), or just
- incitement of uncertainty in more or less solved problem domains (anti-vaxxers)
If you are interested in learning about more (failed attempts at) verified news platforms, though, try looking up verrit, and pravduh
https://www.reddit.com/r/github/comments/99aovq/unable_to_ac...
Proof: When searching "github code search login" on hn.algolia.com, it turns up this HN thread from September 2016, nearly 2 years before MS bought GitHub: https://news.ycombinator.com/item?id=12581068
> We are aware that some researchers have the technical capacity to reproduce and open source our results. We believe our release strategy limits the initial set of organizations who may choose to do this, and gives the AI community more time to have a discussion about the implications of such systems.
Excerpt from the recent OpenAI blog post about the GPT-2 text models. It seems like a valid concern, since releasing the code (or even a web app) would let anyone easily create malicious content online.
I feel these tools are worth having on their own, and it seems widely accepted at this point that the tools themselves aren't at fault for their users' actions, even if those actions are the most popular use of the tools.
Personally, I'm much more concerned about the ethical actions of internet advertisers and social media giants: those who are making direct ethical decisions that affect their users' privacy and access to information.
As far as I know, at least Hex-Rays screens customers very carefully before selling IDA Pro.
Your stance against so called "dangerous knowledge" worries me - would you encourage banning books about cryptography and software development?
These technologies are to the detriment of enforcement. Because enforcement is far from a universal good, these technologies are far from a universal bad. Contrast this with faceswapping, where the upside is far less clear. Same goes for e.g. Stuxnet. It is a beautiful piece of technology, but not really a force for good given that it is widely available.
A bio-weapon delivery vessel might be a great essential oil diffuser, but I'd argue that the tool still has an essential immoral quality by virtue of its specialised design.
Does that apply here? I don't think so, I don't think this tech was created for the purpose of fomenting unrest and committing frauds, but maybe I'll be corrected on that.
In fact, it's not so hypothetical. The vast majority of iron maidens ever created have only ever been used as novelties, not for torture.
It seems like an extremely bad idea to me.
Clearly it is a dangerous tool that must be restricted to select users.
The morality and behavior come from the humans that use it.
It doesn't refer to who's actually physically using the tech.
Censorship: bad.
That's Life: the tech will get created by someone else, so censoring does almost nothing. Better to put it out there so we can try to build defenses. Maybe make a bunch of fakes with famous people's permission to spread the word that you can't trust video anymore.
Dangerous: to take an extreme example, imagine you figured out how to make some kind of E=mc^2 bomb, such that anyone with the knowledge could build a device that could blow up a city for $100 and a few hours of time. Would it be OK to upload those instructions to the internet for any disgruntled teen to reproduce?
Deepfakes are certainly not at that extreme, but we can also clearly imagine the harm they could do as they progress.
There have been several examples recently of people seeming to react to arguably false perceptions. I'm actually thinking of ones in the last 2-3 days but I'm sure there are plenty of others.
- Community creates a project that makes it impossible to track faces in social media and anywhere online.
yghmmm, no, not that kind of AI
If they're not and they're hoarded by tech companies or intelligence agencies then we'll just have a lopsided system where people aren't aware of how capable such technologies are, what their limitations are, how to analyze them to spot issues, etc.
Imagine if only nationstates knew about these sorts of technologies and used them for war or if only certain elites in tech had access to them and used them to implicate competitors in crimes? The technology is out there now - at this point, public knowledge is our best defense - people always question if a contentious image is photoshopped, we want that same level of questioning to happen for videos.
In terms of this being used as an excuse to get someone out of a criminal charge, it might make us take a better look at the chain of custody on video evidence, but I don't think it would invalidate it completely.
This might seem nice against ever-growing CCTV, but probably state security cameras will be deemed "trustworthy" while all media evidence gathered by private persons will be dismissed...
The potential for manipulation is huge given how many people trust pictures. I know that I don't distrust most pictures I see.
What GP is describing is the long-term consequence of not being able to trust video evidence. Now even if you film someone red-handed, they can deny it.
Another dire consequence is that the entire archive of video ever filmed is now tainted by doubt. Any past politician's speech, any past horror caught on film, etc., can now be said to have been crafted recently.
Possibly it was only enforced for very generic search queries returning thousands of results, but it has been around a long time; the GitHub acquisition only closed in October 2018.
!gh or !git anywhere in a ddg search will restrict it to github.
Censoring will just draw more attention and traffic. What’s really unsettling is that GitHub is playing politics with its users, without even informing them or communicating with them. You would think they would have the courtesy to tell the owner.
Hard to guess at the intention.
I can show verifiable, witnessed audio recordings of a guy saying he likes to grab women by the pussy, but that won't stop that guy from becoming President. Powerful tools don't run societies, people do.
P.S. and yes, before the obligatory "it's a private business" comments come in, I know I can build my own Internet and avoid all this. Thanks for reminding.
1) One day somebody posts a handful of really obviously faked janky looking porn videos. We all have a good laugh, briefly imagine the possibilities, and then move on
2) Like 3 weeks later, every social media platform explicitly bans this dumb toy that wasn't even any good
3) a year or so passes
4) Now governments are passing dramatic legal bans on these things, and there's all kinds of shady things happening. Like, this is the first instance of this kind of public restriction I have _ever_ seen on github.
So: which major news events were completely fabricated?
Notice how that says "Application", not website. It amazes me how people want to make their WordPress site into an SPA simply because someone told them to do so or it was the next "hip" thing to do.
SPAs have their place... migrating a desktop application to the web and making it an SPA makes perfect sense to me.
While I agree technology isn't inherently good or evil, this feels more harmful than helpful.
Why not change the license to enforce the use restrictions?
When I think about people I know who have been long-time users of GitHub and how this kind of censorship resonates with them... Oh my.
These early adopters could migrate away very quickly.
I have no opinion about whether or not that is a better title, but I thought it should be known that it was modified from its original.
While censorship may not be an appropriate word, this is weird. Why would Github do something like that, except to force people accessing the repo to leave a trail leading to their PII?
Anyone can fork and mirror it where they want, and make it accessible to anonymous users. Sure, that would "inconvenience" some users, but so what? Github doesn't exist to please every single person out there.
Create your own mirror, and let us know the URL. Don't just whine and try to manufacture outrage if you aren't willing to contribute the resources required to host the code yourself.
I fully support Github's right to use their property (github.com) as they please, because I want the same right for myself.
— definitely Voltaire, for sure. /s
Works with clone though. Wonder how many more such repos exist?
Do they have a transparency report which includes such action?
As a Microsoft employee, it would be even more enormously disappointing if this were a top down rather than internal org decision.