Our "foreparents" weren't competing with corporations with unlimited access to generative AI trained on their work. The times, they are a-changin'.
You're rehashing the argument made in one of the articles which this piece criticizes and directly addresses, while ignoring the entirety of what was written before the conclusion that you quoted.
If anyone finds themselves agreeing with the comment I'm responding to, please, do yourself a favor and read the linked article.
I would do no justice to it by reiterating its points here.
It seems like the answer is to adjust IP owner rights very carefully, if that's possible. It sounds very hard, though.
The point the author was making was that the intent of GPL is to shift the balance of power from wealthy corporations to the commons, and that the spirit is to make contributing to the commons an activity where you feel safe in knowing that your contributions won't be exploited.
The corporations today have the resources to purchase AI compute to produce AI-laundered work, which wouldn't be possible without the commons the AI got its training data from, and give nothing back to the commons.
This state of things disincentivizes contributing to the FOSS ecosystem, as your work will be taken advantage of while the commons gets nothing.
The share-alike clause of the GPL was the price that was set for benefitting from the commons.
Using LLMs trained on GPL code to "reimplement" it creates a legal (but not a moral!) workaround to circumvent the GPL and avoid paying the price for participation.
This means that the current iteration of GPL isn't doing its intended job.
The GPL has had to grow and evolve before. Internet services using GPL code to provide access to software without, technically, distributing it were a similar legal (but not moral) workaround, and it was addressed with an update to the GPL.
The author argues that we have reached another such point. They don't specify what exactly needs to be updated, or how.
They do bring up one suggestion: make the LLM input that is sufficient to create a piece of software copyrightable, because in the current legal landscape, creating the prompt is deemed equivalent to creating the output.
You can't have your cake and eat it too.
A vibe-coded API implementation created by an LLM trained on open source, GPL licensed code can only be considered one of two things:
— Derivative work, and therefore subject to the requirement to be shared under the GPL license (something the legal system disagrees with); or
— An original work of the person who entered the prompt into the LLM, which is a transformative fair use of the training set (the current position of the legal system).
In the latter case, the input to the LLM (which must include a reference to the API) is effectively deemed to be equivalent to the output.
The vibe-coded app, the reasoning goes, isn't a photocopy of the training data, but a rendition of the prompt (even though the transformativeness came entirely from the machine and not the "author").
Personally, I don't see a difference between making a photocopy by scanning and printing, and making one by "reimplementing" an API by vibe coding. A photocopy looks different under a microscope too, and is clearly distinguishable from the original. It can be made better by turning the contrast up, and by shuffling the colors around. It can be printed on glossy paper.
But the courts see it differently.
Consequently, the legal system has currently decided that writing the prompt is where all the originality and creative value lies.
And consequently, de facto, the API is the only part of an open source program that can be protected by copyright.
The author argues that perhaps it should be — to start a conversation.
As for who would benefit from a change like that: that, too, is not clear-cut.
The entities that benefit the most from LLM use are the corporations which can afford the compute.
It isn't that cheap.
What has changed since the first days of GPL is precisely this: the cost of implementing an API has gone down asymmetrically.
The importance of having an open-source compiler was that it put corporations and contributors to the commons on equal footing when it came to implementation.
It would take an engineer the same amount of time to implement an API whether they do it for their employer or themselves. And whether they write a piece of code for work or for an open-source project, the expenses are the same.
Without an open compiler, that's not possible. The engineer having access to the compiler at work would have an infinite advantage over an engineer who doesn't have it at home.
The LLM-driven AI today takes the same spot. It's become the tool that software engineers can and do use to produce work.
And the LLMs are neither open nor cheap. Both creating them and using them at scale are privileges that only wealthy corporations can afford.
So we're back to the days before the GNU C compiler toolchain was written: the tools aren't free, and the corporations have effectively unlimited access to them compared to enthusiasts.
Consequently, locking down the implementation of public APIs will asymmetrically hurt the corporations more than it does the commons.
This asymmetry is at the core of GPL: being forced to share something for free doesn't at all hurt the developer who's doing it willingly in the first place.
Finally, looking back at the old days ignores today's reality. Back then, proprietary software established the APIs, and the commons grew by reimplementing them to produce viable substitutes.
The commons did not even have its own APIs worth talking about in the early 1990s. But the commons grew way, way past that point since then.
And the value of open source software today lies not in the fact that you can hot-swap UNIX components with open source equivalents, but in the existence of an entire interoperable ecosystem.
The APIs of open source programs are where the design of this enormous ecosystem is encoded.
We can talk about possible negative outcomes from pricing it.
Meanwhile, the outcome that is already happening is that a large corporation like Microsoft can throw a billion dollars of compute at "creating" MSLinux, refabricating the entire FOSS ecosystem under a proprietary license and enacting the Embrace, Extend, Extinguish strategy they never quite abandoned.
It simply didn't make sense for a large corporation to do that earlier, because it's very hard to compete on cost with the free labor of open source contributors. It would not have been a justifiable expenditure.
What the GPL accomplished in the past was ensuring that Embracing the commons led to Extending it without Extinguishing it, via a Midas-touch clause: once you embrace open source, you are it.
The author of the article asks us to think about how the GPL needs to be modified so that today, embracing and extending open-source solutions wouldn't lead to the commons being extinguished.
Which is exactly what happened in the case of the formerly-GPL library in question.
If you want to build a new world without this, we can't do it while we are supporting the very companies that are creating the problem. The more power you give them, the stronger they get and the weaker we become.
I think the focus needs to shift completely off of for-profit companies. Although I'm not sure how that's going to happen... lol
[citation needed]
Where does your confidence come from?
GPL itself was precisely the "intellectual property nonsense" adding which made FOSS (free as in freedom) software possible.
The copyright law was awfully broken in the 1980s too. Adding "nonsense" then was the only solution that proved viable.
Historically, nothing but adding "more IP nonsense" has ever worked.
>The real solution is to force AI companies to open up their models to all.
Sure. Pray tell how you would do that without some "intellectual property nonsense".
We don't exactly get to hold Sam Altman at gunpoint to dictate our terms.
>We need free as in freedom LLMs that we can run locally on our own computers
Oh, on that note.
LLMs take a fuckton of compute to train and to even run.
Even if all models were open, we're not at the point where it would create an equal playing field.
My home computer and my dev machine at work have the same specs. But I don't have a compute farm to run a ChatGPT on.
From the fact that copyright infringement is trivial and done at massive scales by pretty much everyone on a daily basis without people even realizing it. You infringe copyright every time you download a picture off of a website. You infringe copyright every time you share it with a friend. Everybody does stuff like this every single day. Nobody cares. It is natural.
> GPL itself was precisely the "intellectual property nonsense"
Yes. In response to copyright protection being extended towards software. It's a legal hack, nothing more. The ideal situation would have been to have no copyright to begin with. The corporation can copy your code but you can copy theirs too. Fair.
> Pray tell how you would do that without some "intellectual property nonsense".
Intellectual property is irrelevant to AI companies.
Intellectual property is built on top of a fundamental delusion: the idea that you can publish information and simultaneously control what people do with it. It's quite simply delusional to believe you can control what people do with information once it's out there and circulating. The tyranny required to implement this amounts to a totalitarian dictatorship.
If you want to control information, then your only hope is to not publish it. Like cryptographic keys, the ideal situation is the one where only a single copy of the information exists in the entire universe.
AI companies are not publishing any information. They are keeping their models secret, under lock and key. They need exactly zero intellectual property protection. In fact such protections have negative value to them since it restricts the training of their models.
> We don't exactly get to hold Sam Altman at gunpoint to dictate our terms.
Sure you do. The whole point of government is to do just that. Literally pass some kind of law that forces the corporations to publish the model weights. And if the government refuses to do it, people can always rise up.
> Even if all models were open, we're not at the point where it would create an equal playing field.
Hopefully we will be, in the future.
Pretty sure no one (well, no one but me) saw overt theft of IP coming, accomplished by ignoring IP law through redefinition. Admittedly, I couldn't have articulated for you how capital would take skill, transfer it, and commoditize it in the form of pay-to-play data centers, but give me a break, I was a teenager/twenty-something at the time.
If "more freedom" is your goal, then this rewrite is inherently in that direction. It didn't "close" the old library down. The LGPL version remains under its license, for anyone to use and redistribute exactly as it always has. There is just now also an alternative that one can exercise different rights with. And that doesn't even get into the fact that "increased freedom" was never a condition of being allowed to clone a system from its interfaces in the first place. It might have been a fig leaf, but some major events in the legal landscape of all this came from closed reimplementations. Sony v. Connectix is arguably the defining case for dealing with cloning from public interfaces and behavior as it applies to emulators of all kinds, and Connectix Virtual Gamestation was very much NOT an open source or free product.
But to go a step further: the larger idea of AI-assisted rewrites being "good", even if the human developers may have seen the original code, seems to broadly increase freedoms overall. Imagine how much faster WINE development can go now that everyone who has seen any Microsoft source code can just direct Claude to implement an API. Retro gaming and the emulation scene are sure to see a boost from people pointing AIs at any tests in source leaks and letting them go to town. No, our "foreparents" weren't competing with corporations with unlimited access to AI trained on their work; they were competing with corporations with unlimited access to the real hardware and schematics and specifications. The playing field has always been un-level, which is why fighting for the right to re-implement what you can see with your own eyes and measure with your own instruments was so important. And with the right AI tools, scrappy and small teams of developers can compete on that playing field in a way that previous developers could only dream of.
So no, I agree with the comment that you're responding to. The incredible mad dash to suddenly find strong IP rights very, very important, now that it's the open source community's turn to see its work commoditized and used in ways it doesn't approve of, is off-putting and, in my opinion, a dangerous road to tread that will hand back years of hard-fought battles in an attempt to stop the tides. In the end it will leave all of us in a weaker position while solidifying the hold large corporations have on IP in ways we will regret in the years to come.