From where I stand, it looks like Pandora's box has already been opened anyway. The era of Hugging Face and/or Llama2 models is only going to grow from here.
I'm not in the piracy scene, but my impression was that they routinely pass full-res movies around the Internet without much barrier to discovering and downloading them, at least for technically competent users. Is that still true?
Well yes, that’s precisely why they are lobbying for it.
It's a homesteading land grab, plain, simple and pure.
There is a competitive landscape, with first-mover advantages, incumbent effects, et al., being anchored in e.g. Sam Altman's interests and desires at this very moment. If you want a vision of the future of garage AI, imagine a boot stomping on Preston Tucker's face over and over. The current AI industry's goal is to preserve an image of openness and benefit while getting ready to pull the ladder up when the time is right.
With Llama2 we're at Meta's mercy, as it cost ~$20M to train. There's no guarantee Meta will continue to give us next-gen models. And even if it does, we're stuck with their training biases, at least to some extent. (I know you can fine-tune, etc.)
I'll argue that between Stable Diffusion and Llama 2, there is nothing specific that prevents a [very] large number of people from adopting these models and specializing them for their own needs.
The tragedy would be if those went away.
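To make that concrete: "specializing for your own needs" today is closer to a short script than a research project. A minimal sketch, assuming the Hugging Face transformers/peft/datasets stack and access to the gated Llama2 checkpoint; the model ID, the file name my_domain_corpus.txt, and the hyperparameters are illustrative assumptions, not a tested recipe:

    # Hypothetical sketch: parameter-efficient (LoRA) fine-tuning of an open
    # causal LM on your own text. Model ID and file name are placeholders.
    from transformers import (AutoModelForCausalLM, AutoTokenizer,
                              DataCollatorForLanguageModeling, Trainer,
                              TrainingArguments)
    from peft import LoraConfig, get_peft_model
    from datasets import load_dataset

    base = "meta-llama/Llama-2-7b-hf"  # gated checkpoint; any open causal LM works
    tok = AutoTokenizer.from_pretrained(base)
    tok.pad_token = tok.eos_token      # Llama tokenizers ship without a pad token
    model = AutoModelForCausalLM.from_pretrained(base)

    # LoRA trains a few million adapter weights instead of all ~7B parameters,
    # which is what brings fine-tuning within reach of a single consumer GPU.
    model = get_peft_model(model, LoraConfig(r=8, lora_alpha=16,
                                             task_type="CAUSAL_LM"))

    data = load_dataset("text", data_files="my_domain_corpus.txt")["train"]
    data = data.map(lambda x: tok(x["text"], truncation=True, max_length=512),
                    batched=True, remove_columns=["text"])

    Trainer(
        model=model,
        args=TrainingArguments(output_dir="out", per_device_train_batch_size=1,
                               num_train_epochs=1),
        train_dataset=data,
        data_collator=DataCollatorForLanguageModeling(tok, mlm=False),
    ).train()

The point isn't the specific libraries; it's that the marginal cost of adaptation is a weekend and a gaming GPU, not a $20M training run.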
That situation will change as technology evolves. We'll eventually reach a point where a normal desktop PC can train AI. The wealthy will always be able to do it faster, but the gap will shrink with time.
The trick is making sure that laws aren't put in place now that would restrict our ability to do that freely once the technology is there, and that massive data sets are compiled and preserved where the public has free access to them.
By the way, I'm not sure how easy it will be to stop bad actors, since the barriers to entry for developing a malicious AI tool are exponentially lower than for, say, developing a nuke.
The wrong hands will have the same access to whatever "superpowers" AI gives regardless of what regulations are or are not put in place. Regulations can't and won't stop potential bad actors with state-level resources, like China, from using any technology they decide they want to use. So trying to regulate on that basis is a fool's errand.
The real question is, what will put the good actors in a better position to fight the bad actors if it ever comes to that: a big tech monopoly or decentralized open source anarchy? The answer should be obvious. No monopoly is going to out-innovate decentralized open source.
> I'm not sure how easy it will be to stop bad actors, since the barriers to entry for developing a malicious AI tool are exponentially lower than for, say, developing a nuke.
Since some bad actors already have nukes, the answer to this should be obvious too: it's what I said above about the wrong hands getting access to technology.
All of the AI danger propaganda being spread (see [1], for example) serves the purpose of regulatory capture. You could have said all the same things about PageRank if it had come out in 2020. A malicious AI tool is harder to assemble than straight-up cracking. The people who can do it are highly trained professional criminals taking in millions of dollars. Those people aren't going to be stopped because the source is closed. (I'm thinking of that criminal enterprise based in Israel that could manipulate elections, blackmail any politician anywhere in the world, etc. They were using ML tools two years ago to do this.)
The ML tools are already in the wrong hands. The already powerful are trying to create a "moat" for themselves. We need these models and weights to spread far and wide because the people who can't run them will become the have-nots.
The ones in control of the models also control which sentences are sanctioned, and this becomes a bigger problem the more widely LLMs are used. To add insult to injury, while we are not allowed private use of the models, government and ad-tech surveillance capabilities will skyrocket.
Do you see the problem here? The capabilities of open-source models are nowhere near high enough to justify such a cost, now or anytime soon.
And it won't end there. As the march of progress continues, we will see the AI doom crowd agitate for tighter surveillance of money flows, limits on private compute, bandwidth limits to homes, tracking what programs we run on our computers, on who is allowed to read the latest in semiconductor research and on and on.
There are no superpowers, and the wrong hands are the ones least affected by any effort at restricting distribution by “strong regulation”.
Big companies are easier to regulate.
But the problem isn't regulating the big companies, or the smaller companies, or underground entities. The problem is state-level adversaries like China who might misuse a technology, whether it's AI or anything else. Such adversaries can't be regulated by laws or executive orders or UN declarations; they have proven that many times in the past. The only way to control them is to have sufficient counter-capability against whatever assets they have. And government regulation is a terrible way to try to achieve that goal.
The US has shown time and time again its complete incompetence when it comes to meaningful regulation of large companies.
A bunch of good actors agreeing not to do bad things won't help.
Sometimes I see the question "should we put a pause on AI development?" It doesn't mean anything; AI can't be "paused".
Some countries, like China, may say they're on board with pausing it, but would they actually do so? Or would they just sign on and allow their companies to get an edge by not enforcing the pause?
Same thing with the existing AI companies.
https://www.lesswrong.com/posts/TxcRbCYHaeL59aY7E/meditation...
Yann LeCun is one of the loudest AI open-source proponents right now (which of course jibes with Meta's very deliberate open-source stab at OpenAI). And when you listen to smart guys like him talk, you realize that even he doesn't really grasp the problem (or if he does, he pretends not to).
This is what I believe. The only moat they have is their lobbying power and bank account. They want it tightly controlled: "you can do it in your own way, just as long as it's done exactly how I say."
The whole regulating-AI thing is a farce; only the good guys follow laws and regulations. Do you think the bad guys are just going to throw up their hands and say "oh well, an agreement was made, I guess we'll just go home"? The only thing blocking the way is hardware performance and data. Hardware is always getting faster and there are tons of new data created every minute; the genie is out of the bottle.
Stopping people from using AI, which is something they can trivially download and run on their own private compute, is simply infeasible. We would need to lock all computing behind government control to have any hope, and that is, to me, clearly too high a price to pay. Even then, hardware that doesn't have this lock-down will already have been amassed.
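For a sense of how low that bar already is, here is roughly what "download and run on private compute" amounts to. A minimal sketch, assuming the Hugging Face transformers library; the model ID is just one example of an openly distributed checkpoint:

    # Hypothetical sketch: local inference with openly distributed weights.
    # Once the files are on disk, no gatekeeper is involved in this call.
    from transformers import pipeline

    generate = pipeline("text-generation",
                        model="TinyLlama/TinyLlama-1.1B-Chat-v1.0")
    print(generate("The genie is out of the bottle because",
                   max_new_tokens=50)[0]["generated_text"])

Nothing in that snippet phones home or asks permission, which is the enforcement problem in a nutshell.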
So I think the right approach here is to not worry about regulating the AI to prevent it from doing bad things, we should just regulate the bad things themselves.
However I do think there is also room for some light regulation around certification. If we can make meaningful progress on AI Alignment and safety it may make sense to require some sort of "AI license". But this is just to avoid naive people making mistakes, it won't stop malicious people from causing intentional harm.
And that's true whichever way you cut it. Whether those "good" actors are everybody or a few selected for privilege. We have no basis on which to trust big technology companies more than anybody else. Indeed quite the opposite if the past 10 years are anything to go by.
It is such a god-awful shame that corporations have behaved so disgracefully that we now face this bind. But at least we know what we'll be getting into if we start handing them prefects' badges: organised and "safe" abuse by the few, as opposed to risky chaos amongst the many.
A saner solution might be a moratorium on existing big tech and media companies developing AI, while granting licenses to startups with the proviso that they are barred from acquisition for 10 years.
I'm not supporting this, of course. That's just what "good" actors seem to do.
I feel these kinds of statements by Mozilla reflect exactly that lack of caution that may end us.
No, GPT-4 is not AGI and is not going to spell the end of the human race, but neither is pseudoephedrine itself methamphetamine, and yet we regulate its access. Not as a matter of protecting corporate profits, but for public safety.
You'll need to convince me first that there is in fact no public safety hazard from forcing unrestricted access to the ingredients in this recipe. Do I trust OpenAI to make all the morally right choices here? No, but I think their incentives are in fact more aligned with the public good than are the lowest common denominator of the general public.
I don't think it's a correct argument.
I believe it's not a secret how methamphetamine is made (though I don't know it myself, and I'm too lazy to research it for the sake of argument), and it's known that pseudoephedrine is a precursor chemical. Thus, the regulation.
No one, I believe, knows how to build an AGI. There is no recipe. It is unknown whether transformer models, deep learning, or something else is a component, and whether that's even a correct path or a dead end. What is known is that neither of those is an AGI, and there's no known way to turn them into one. Thus, I'd say the comparison is not correct.
> it's not a secret how methamphetamine is made
I didn't say it was a secret. But try sharing the recipe to methamphetamine virtually anywhere online and see how long it stays up; we do in fact try to at least make it moderately difficult to find this information. Perfect is the enemy of good here.
> No one - I believe - knows how to build an AGI.
The problem is that if we wait until someone does know how to build an AGI, then the game is up. We do not know if the weights behind GPT-4 will be a critical ingredient. If you make the ingredients common knowledge before you know they were in fact The Ingredients, you've let the cat out of the bag and there's no retrieving it.
But here's a super off-hand idea, if I'm trying to be creative: the AGI formulates a chemical that kills humans 90 days after inhalation, hires a chemical lab to synthesize it and forward it to municipalities across the world, and convinces them it's a standard treatment that the WHO has mandated be introduced.
> And how would regulating models thwart this?
I don't think it would. The OP article is about regulation that increases the access to the ingredients for AI, and I'm simply unconvinced that is a recipe for increasing AI safety.
I do not. How does superhuman intelligence, on its own, represent any sort of risk to "humanity"?
> then the recipe for how to make AGI is of itself an infohazard [...] nuclear weapons
I know how to make nuclear weapons; however, I cannot enrich the fuel enough to actually produce a working version.
> and yet we regulate its access
Does that actually achieve what it claims to achieve?
> Do I trust OpenAI to make all the morally right choices here?
OpenAI has a language model. They do not have AGI or anything approaching AGI.
> No, but I think their incentives are in fact more aligned with the public good
We could debate that, but for your assertions to hold any water, we'd have to agree that they're incapable of making mistakes as well. Far easier to skip the lofty debate and recognize the reality of the world we live in.
It does not; I was speaking to AGI. But assuming you were referring to AGI as well, you don't have to think very creatively to consider scenarios where it would be harmful.
If you have an agent that can counter your every move with its own superior move, and that agent wants something – anything – different from what you want, then who wins? Maybe it wants the money in your bank account; maybe it wants to use the atoms in your body to make more graphics cards to reproduce itself.
Think about playing a game of chess against the strongest AI opponent. No matter which move you are considering playing, your opponent has already planned 10 steps ahead and will make a better move than you. Now extrapolate outside the chess board and into the realm where you can use the internet to buy/trade stocks, attack national infrastructure, send custom orders to chemists to follow whatever directions you want, etc.
This is unlike pseudoephedrine or nuclear weapons in that literally anyone with a computer could potentially create it.
It is not by ensuring that literally anyone with a computer has an equal chance at creating it.
I am not advocating a position on this particular issue; I am pointing out the flaw in your metaphor -- you can't restrict the math, but unless people figure out how to build their own wafer fabs, you can at least try to restrict the hardware needed to run it.
The concern is not that AI is harmful on its own, and that it will unleash a Skynet scenario. It's precisely that it will be abused by humans in ways we can't even predict yet. The same can be said about any technology, but AI is particularly concerning because of its unlimited potential, extremely fast pace of development, and humanity's lack of preparation on a political, legal and social level to deal with the consequences. So I think you're severely underestimating "some harm" there.
I believe a more realistic concern is that some workers will lose their jobs to automation as it becomes capable of running more and more complicated tasks, because it's cheaper to run or, more likely, contract to run some fancy software that can be taught its job in a natural language. And if this scales up it may require significant socioeconomic changes to work around the associated issues.
If anything, restrictions at any scale only allow competition to catch up to and surpass it at that scale. Restrict an AI to be polite, and its competitor has a chance to surpass it in the sphere of rudeness. This principle can be applied to any application of AI.
and "safety" is such a loaded term these days. What exactly do they mean by safety? Prevention of launching ICBMs or prevention of not using the gender neutral pronoun "they"?
The executive order [1] spells out what they mean by safety; the list of concerns includes:
* creating chemical/biological/nuclear weapons
* biohazards
* outputs that could pose a threat to critical infrastructure (e.g. energy infrastructure)
* threats to national security
* cyberattacks, e.g. automatic discovery of vulnerabilities/exploits
* software that would "influence real or virtual events" (I'm guessing they mean elections?)
* social engineering
* generating fake news / propaganda
* generating child porn or deep fakes
Note that the EO is not banning these, but asking US government departments, as well as private companies, AI experts, and academia, for their input on what regulations could/should be. In that sense, this Mozilla letter is a response to the EO.
Also, there's no mention of the use of certain pronouns, nor of an AI-caused apocalypse (not even a mention of paperclips!).
[1] https://www.whitehouse.gov/briefing-room/presidential-action...
But not to worry. For better or worse, in short order these concerns will seem like they were a foolish delusion: if not because of their questionable morality, then because they were so wildly unrealistic in application.
That's not to say that AI safety won't be a topic for decades to come or longer, simply because it confers political and harassment power, even if it remains ineffective against AI in practice.
We all know that these same companies and others are going to ignore any rules anyway, so open source and academics need to just be left alone to innovate.
Regulate by the risk the processing environment presents, not by the technology running on it.
I do wish my government would get tougher on user privacy issues, but that is a very different subject.
More like "we heard some ruckus, so we need to investigate". Of course, this is concerning because who knows what the advice will be with all the corporate lobbying - but I think there is no capture, not yet. Or have I missed something?
https://web.archive.org/web/20231102190919/https://open.mozi...
There has to be a way to rescind my contributions, whether they're voluntary or not (I still think these companies are dancing with copyright violations), and to have ALL DERIVATIVES of those contributions removed: everything created via the processing of the works in question, regardless of form or generation (derivatives of derivatives, etc.).