the United States declined an invitation to a pivotal conference in Berne in 1883, and did not sign the 1886 agreement of the Berne Convention which accorded national treatment to copyright holders. Moreover, until 1891 American statutes explicitly denied copyrights to citizens of other countries and the United States was notorious in the international sphere as a significant contributor to the "piracy" of foreign literary products. It has been claimed that American companies for the most part "indiscriminately reprinted books by foreign authors without even the pretence of acknowledgement" (Feather, 1994, 154). The tendency to freely reprint foreign works was encouraged by the existence of tariffs on imported books that ranged as high as 25 percent (see Dozer, 1949).
[1] http://socialsciences.scielo.org/scielo.php?script=sci_artte...
Sure, perhaps they would need a license to get the material, but I don't see how broken copyright laws will be of any help here.
China's leading AI efforts, on the other hand, take the long view: by releasing their models as open source, Chinese companies can build on each other's work. No one controls the market, but there is constant competition and innovation.
All this is beside the point, however, for this article claims that OpenAI is using China as an excuse to get unfettered access to all copyrighted works through the fair use loophole.
So the crux is whether we believe in "innovation über alles" or in intellectual property rights.
There is no winner-take-all in an ecosystem. If one emerges, that ecosystem collapses!
It is strange that you use this as an example yet fail to understand it fully.
Nature is a complex system with adaptive feedback. Every process is a cycle, with feedback loops that amplify or regulate it. Yes, there are apex predators, but there is no winner-take-all in an ecosystem. Living beings coexist.
It's a race with winners and losers because ego and money. Ego because … well, ego.
Money because whoever develops the most powerful AI, and gets enough people to buy into it, will probably retain the top spot for quite a while due to inertia (sort of like how Google got to be where it is).
I'm sure some level of paranoia feeds into it too. Whoever gets locked into the public's mind will rule the world, and if it's not a Silicon Valley magnate, then the magnates are the losers.
OpenAI doesn't want to do that.
Why, though? I can understand the companies involved wanting to be first in order to maximize their profit, but why should that matter to anybody else?
Perhaps the future of Silicon Valley is to be the home of defense contractors.
https://watson.brown.edu/costsofwar/papers/2024/SiliconValle...
At some point, it becomes a national security issue. This technology is going to be leveraged in ways we can't even dream up today. Copyright law needs to be re-imagined in a way that won't restrict advancement in AI and AI-adjacent technology. It's not because we want to; it's because we have to.
For general questions, you can use the free wiki that's ingested into the LLM or pay a fee for general content like current events.
You keep the LLM free in the third world out of necessity. OpenAI, in the first world, cannot ask to be treated as if it were a third-world company; we are too rich to be that ridiculous.
We cannot let that happen with AI technology, and it is a very difficult conversation when we're talking about technology that has likely already displaced hundreds of thousands of jobs by extending how much productivity an individual can produce.
To you, this is a moral issue, and one I absolutely agree with at its core. But this technology, in my opinion, risks eventually triggering a form of social stratification. The focus should be on keeping the technology ubiquitous, accessible, and unrestricted.
Why is it a national security issue? Because people who could make billions of dollars say so?
How is it that we can theorize that the model would get better with more data, but we can't theorize that the business model would need to get bigger (pay the content creators) to train the model? Shoot first and ask questions later (or rather, BEG later).
Allow OpenAI and other AI companies to use all data for training, but require that they pay it forward by charging royalties on profits beyond X amount of profit, where X is a number high enough to imply true AGI was reached.
The royalties could go into a fund that would be paid out like social security payments for every American starting when they were 18 years old. Companies could likewise request a one time deferred payment or something like that.
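The mechanics of the proposed scheme can be sketched in a few lines. Every number below (the profit threshold, the royalty rate, the eligible population) is hypothetical, since the comment deliberately leaves X unspecified:

```python
# Illustrative sketch only: the proposal above does not specify numbers,
# so the threshold, rate, and population figures here are invented.

PROFIT_THRESHOLD = 100_000_000_000  # hypothetical X: $100B annual profit
ROYALTY_RATE = 0.10                 # hypothetical 10% royalty above threshold

def royalty_owed(annual_profit: float) -> float:
    """Royalty due on profit beyond the threshold; zero below it."""
    return max(0.0, annual_profit - PROFIT_THRESHOLD) * ROYALTY_RATE

def per_person_payout(total_fund: float, eligible_adults: int) -> float:
    """Equal split of the pooled fund across all eligible adults."""
    return total_fund / eligible_adults

# $150B profit -> royalty on the $50B above the threshold -> $5B into the fund
fund = royalty_owed(150_000_000_000)
payout = per_person_payout(fund, 250_000_000)  # ~$20 per eligible adult
```

The one design question this makes obvious: until profits clear the threshold, the fund pays out nothing, which is presumably the point of tying it to "true AGI" levels of profit.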
It's having your cake and eating it too. It would also help ease some tensions around job loss.
Sadly, what we'll likely get is a bunch of tech leaders stumbling into wild riches, hoarding it, and then having it taken from them by force after they become complacent and drunk on power without the necessary understanding of human nature or history to see why they've brought it on themselves.
Another would be that they couldn't sell access to customers directly but rather must license it out to various entities at rates set by regulators. Those entities then would compete with each other for end customers. This of course might be prone to regulatory capture like happens with utilities.
Who is we? How do you know? Never is a strong word.
> If we hold our principles to OpenAI (pay who you took from), they will go bankrupt.
i.e., their business wasn't feasible to begin with? Sounds fine to me. What's wrong with them going bankrupt, if it comes to that?
I personally think the odds of my being able to both publicly publish my words and code and keep them out of training data are pretty close to zero. Since that's unacceptable to me, my only option is not to publish that stuff at all.
And then there's the red herring that "China will steal it if we don't do it first."
Even if copyright owners can't point to how much damage, if any, they suffer from AI, it's seen as wrong and bad. I think it's getting boring to hear that story about copyright repeat itself. For most crimes, you need to be able to point to damage that was done to you.
Also, while there are edge cases with some LLMs where you can make them spew verbatim training material, often through jailbreaks or whatnot, an LLM is a lossy, "fuzzy" process in which the content is generally not perfectly memorized, and it seems no more of a threat to copyright than recording broadcasts onto cassette tapes or VHS was back in the day. You'd be insane to use that stuff as a source of truth on par with the original article.
But for other classes of user generated content, the problem is suddenly "impossible".
Everyone WILL start using hosted frontier Chinese models if they are demonstrably better at answering scientific questions than ChatGPT, sending essentially all US research questions into a Chinese data dump. This is even worse than the national security catastrophe that is TikTok (even aside from the EVEN BIGGER issue that China will have models that are staggeringly better than those in the US, because they are up to date on the science).
I understand the reflexive reaction against AI companies "stealing content," but we need to stay competitive and figure out the financial compensation later. This is not a case where our unbelievably generous copyright laws should take precedence over US competitiveness.
If OpenAI’s leadership weren’t saying precisely this, they wouldn’t be doing their jobs.
This isn't true at all. It has an obligation to follow the law of the society it operates in, even if that results in lower profits.
Unfortunately, the society this company operates in is highly Machiavellian and can't improve because people are too busy hating each other. The rules it does have aren't being enforced very well. And this type of lobbying is so much a part of US culture that it would be weird if they didn't do it.
[0]https://en.m.wikipedia.org/wiki/Dowling_v._United_States_(19...
No, it's always been about commercialization. That's why we have exceptions for political commentary, satire, and in particular the arts and sciences. The issue is making money from someone else's work.
You can still disagree with it of course, but let’s have an honest discussion.