But as the various regulatory, judicial, and legislative processes grind through the intellectual property questions made so abundantly legible by the AI training-data gold rush, it seems ever clearer that, one way or another, we’re going to get a new social contract on IP.
Leaving aside for a moment the thicket of laws, precedents, jurisdictions, and regulatory inertia: we can vote with our feet as both customers and contributors for common sense now.
So how about the following compromise: promote innovation by liberalizing the posture around training on roughly “the commons”, but insist that the resulting weights are likewise available to the public. Why should I have to take someone’s word that they’ve got a result on superposition or whatever in mech interp? I’d like to see it work, given that it’s everyone’s data pushing those weights.
I speak only for myself but plenty of people seem to agree: I don’t mind big companies training on generally available data, I mind the IP-laundering. Compete on cost, compete on value-added software stacks, compete on vertical integration. There is lots of money to be made building a better mousetrap in terms of code and infrastructure and product innovation.
Conduct the research in the open. None of this would be possible without an ocean of research and data subsidized in whole or in part by the public. Asserting any form of ownership over the result might end up being legal, but it will never be ethical.
Meta isn’t perfect on this stuff, but they’re by far the actor pulling the conversation hardest in that direction. Let’s encourage them to keep pushing the pace with releases like LLaMA 3.
What do you mean by that? As far as I'm aware, for ANYTHING you publish, whether on the internet or not, if there isn't a copyright notice you should assume -> "all rights reserved".
However, does an LLM count as a derivative work or a transformative one? That's something for the lawyers to answer.
Meta really does not need to be subsidized when they have so many resources at hand—if LLMs are really hard to train without that much data, then perhaps that's a flaw with the approach instead of something the world has to accommodate.
Large platforms saw AI coming and instantly closed themselves off, making it hard or impossible for external actors to mine that "generally available data," hurting their own users and the open web in the process, and then they mined the data themselves.
The internet routes around censorship. It's impossible to hide information as long as it's meant to be accessed by a human. If companies want to spend engineering hours building locks, then that's their waste.
Many businesses will fail by wasting time and money creating locks that can and will be circumvented.
I agree that a new social contract is inevitable because the only way to prevent data from being mined is to not produce it to begin with. Period. This I know.
The proposal in the article, however, is not about "the commons", it's about content that the users themselves produced, and then they voluntarily gave permission to Meta to use.
Or are you saying that if I produce some type of material, I shouldn't be able to license it for someone else to use it freely?
That's a funny way to describe DNT headers, disallowed Meta cookies, DNS blocking all their domains, and maintaining copyright over my content.
Your compromise is exactly the situation I desire but seems untenable to most people.
This removes the big scary emotional part of the debate. Without this, it's weakened quite a bit.
How much would you personally invest in a startup which would spend billions of dollars on a compute cluster only to release the weights publicly after the training is complete?
$8,820 * 365 ≈ $3.2 million a year is pretty cheap for Meta to be able to do whatever they want with the data of all 200 million Brazilians. Their annual net income is $39.10 billion, so that's about 0.008% of it.
The fine for each privacy infraction is 2% of the company's earnings for the previous year, capped at 50 million BRL (~9 million USD). If 500 Brazilians had their privacy violated by a platform, that platform needs to pay 500 of these fines once per day until the violation is fixed. There are also all sorts of extra punishments for not fixing it in time (like mandatory suspension of services).
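At those numbers the cap dominates. A quick sketch of the exposure, using only the figures quoted in this thread (the ~9M USD cap and Meta's $39.10 billion net income), not the statute text itself:

```python
# Rough sketch of the LGPD fine exposure described in the thread.
# Assumptions: the ~9M USD cap and the $39.10B net income are the
# figures quoted in the comments above, not official values.

FINE_CAP_USD = 9_000_000    # 50 million BRL cap, ~9M USD per the comment
NET_INCOME_USD = 39.10e9    # Meta's annual net income quoted above

# Each infraction: 2% of the previous year's earnings, capped.
fine_per_infraction = min(0.02 * NET_INCOME_USD, FINE_CAP_USD)

# 500 affected users, each fine repeating once per day until fixed.
daily_exposure = 500 * fine_per_infraction

print(f"fine per infraction: ${fine_per_infraction:,.0f}")  # cap binds: $9,000,000
print(f"daily exposure:      ${daily_exposure:,.0f}")       # $4,500,000,000
```

Since 2% of $39.10 billion is about $782 million, the 50 million BRL cap is what actually binds, and the daily exposure for 500 infractions would be on the order of $4.5 billion.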
Facebook is not forbidden from using your data for AI. It can do so, as long as it provides a means for you to delete it: a button to clear your data, for example. That would be legal. We know that's not so easy for LLMs, though.
Fair use is evaluated based on the purpose of use, the nature of the copyrighted work, the amount used, and the effect on the market. These factors generally favor the free use of openly published web content. The transformative nature of many reuses, the public availability of original works, the necessity of using entire works in some cases, and the absence of a traditional market for such content all support this interpretation.
This longstanding practice has driven unprecedented innovation and information dissemination, establishing a social contract between content creators and users that treats open web content as "freeware." Any move to impose strict copyright limitations now would stifle innovation and contradict decades of established legal precedent and digital norms.
The article only mentions that data could be used to train AI to make CSAM... which seems needlessly alarmist and inflammatory.
> The decision stems from “the imminent risk of serious and irreparable or difficult-to-repair damage to the fundamental rights of the affected data subjects,” the agency said in the nation’s official gazette.
Meta spared no expense in hiding the opt-out page. The agency says that “there were excessive and unjustified obstacles to accessing information and exercising this right”. This was one of the main reasons that compelled the agency to act.
The steps to get to the hidden opt-out page are below. They force users to read the privacy policy to find a link buried deep in the text, and require 2FA by email to opt out, even for already-logged-in users - they should require 2FA to log in, not to opt out of AI training. There is no justification for requiring all this:
* Access your profile and go to the settings section, signaled by three bars in the top right corner
* Click on "about" at the bottom of the page
* Select the privacy policy. On this new page, the three bars in the top right corner lead to the privacy center
* Click on the arrow next to other policies and articles and select the option "How Meta uses information for generative AI features and models"
* In the nineteenth paragraph, not counting topics, is the "right to object" option. Click on it.
* Fill in and send the form. Meta confirms your identity with a numerical code sent to the email address registered on your account. Then just wait for the opt-out to be confirmed. This can take a few minutes.
TFA (with emphasis added):
> Brazil’s national *data protection* authority determined on Tuesday that Meta, the parent company of Instagram and Facebook, cannot use data originating in the country to train its artificial intelligence.
> The decision stems from “the imminent risk of serious and irreparable or difficult-to-repair damage to the fundamental rights of the affected data subjects,” the agency said in the nation’s official gazette.
https://www.theregister.com/2024/06/14/meta_eu_privacy/ (with emphasis added):
> The decision to halt AI training using EU content follows complaints to *data protection* agencies in 11 European countries – and those agencies, led by Ireland, telling the Facebook giant to scrap the slurp.
While there is no shortage of IP, licensing, and copyright moral quandaries in training LLMs and their ilk, Meta/FB is not being regulated on those grounds! They are being regulated on privacy issues. It's even there in The Register URL path.
I'm seeing a lot of comments in these threads about IP, copyright, and licensing (which, please do take note, are well-defined legal terms and are not to be used interchangeably), but all that is irrelevant, because that is not the question Meta is being made to answer for.
Even more frustrating are the threads/arguments about what "irrevocable (copy)rights" you give FB per their TOS, without even bothering to cite the relevant bits of the TOS to prove the point. Exercise for the reader: prove/disprove that [a] FB users retain copyright of their content even when posted to FB, [b] you are merely licensing FB for specific (not universal!) uses of the content you post on their platform, and [c] said license is revocable at any time. The astute reader is referred to the Berne Convention, but Facebook's TOS will also do just fine. Standard question, one point per answer.
Bonus point question: if you have proven the points above, what action allows you to revoke the license you have granted FB?
(Of course, at the end of the day, I'm again playing lawyer in an online forum. I'm no better than anyone else here; what do I know.)
The policy update seems to be global: https://www.facebook.com/privacy/policy
But ads are net negative and I'd argue that the influence of ads and paid actors on social media has been the single most destabilizing force in the world recently.
It applies only to Meta, but I think that's because Meta caught the regulator's attention. The ban is due to the lack of a legal basis, under the LGPD (the Brazilian privacy law - https://www.planalto.gov.br/ccivil_03/_ato2015-2018/2018/lei...), for the change in Meta's privacy policy.
They already have all the names, pictures, face biometrics, social graph, location information, political affiliation, relationships and everything that goes into an advertising profile. What else is needed?
They can just use the data already mined, which is probably 99% of everything they will ever need for many years to come. They probably have so much data they can use AI to predict what's missing with a fair degree of accuracy, like what your face is going to look like in 20 years.
And due to the highly corrupt nature of politics, a few dollars here and there will undo this regulation fairly quickly. Or they could buy the data from another company, because the law must be so poorly constructed that a clever lawyer will surely find a workaround and they will be OK.