I would argue this is inherent to the training and compute cost of all large language models.
There are also providers now that will let you upload low-rank adapters you have trained on top of open foundation models, so that you can use their efficient serverless infrastructure with your fine-tuned models. This requires even less capital.
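For anyone wondering why hosting adapters is so cheap for these providers: a low-rank adapter (LoRA) replaces a full weight update with the product of two skinny matrices, so the file you upload is a tiny fraction of the base model. A minimal NumPy sketch with illustrative shapes (the sizes here are made up, not from any particular model):

```python
import numpy as np

d, r = 4096, 8  # hidden size and adapter rank (illustrative numbers)

W = np.random.randn(d, d)          # frozen base weight, hosted once by the provider
A = np.random.randn(r, d) * 0.01   # trained low-rank factors: this is all you upload
B = np.zeros((d, r))               # conventionally initialized to zero

W_adapted = W + B @ A              # effective weight applied at inference time

full_params = W.size               # parameters in the base matrix
adapter_params = A.size + B.size   # parameters in the adapter
print(f"adapter is {adapter_params / full_params:.2%} of the base matrix")
```

With these shapes the adapter is under half a percent of the base matrix's parameters, which is why a provider can keep one copy of the foundation model in memory and hot-swap thousands of customers' adapters on top of it.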
None of this would exist had OpenAI’s vision of centralized, locked-down API access become the reality.
(As Microsoft explained years ago)
Unfortunately it's past the deadline for you to add your own comments on this, but I'm sure there will be future RFCs if you have thoughts.
When I say "the White House has quite competent people working on this" it's because the NTIA RFC I linked was based on a direct executive order from President Biden.
On October 30, 2023, President Biden issued an Executive Order on “Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence,” which directed the Secretary of Commerce, acting through the Assistant Secretary of Commerce for Communications and Information, and in consultation with the Secretary of State, to conduct a public consultation process and issue a report on the potential risks, benefits, other implications, and appropriate policy and regulatory approaches to dual-use foundation models for which the model weights are widely available. Pursuant to that Executive Order, the National Telecommunications and Information Administration (NTIA) hereby issues this Request for Comment on these issues. Responses received will be used to submit a report to the President on the potential benefits, risks, and implications of dual-use foundation models for which the model weights are widely available, as well as policy and regulatory recommendations pertaining to those models.
Here's a bit of help for them: development happens with or without them. For every regulated repository that's removed, there's a mirror/fork.
I don't really have a point... just having a little fun at the expense of the latest thing in the harbor. I'll honestly stay apprised.
Open weights vs training code is not a distinction worth drawing for the average person.
Perhaps I'm wrong; willing to discuss. I'm simple, so my thoughts on this are too.
Weights, training code, or... CAD files that just so happen to make child-size cages. Until I'm actually caging children - buzz off.
If it is open, then everyone has access. Any restrictions would be akin to demanding that Linux not use strong encryption or that Firefox implement content censorship.
The license really does nothing to protect your project from regulation; it's just that the government doesn't care about open source yet.
A lot of open source efforts depend on big companies training million-dollar models and giving them away. These companies will often apply some censoring adjustment to the weights, which the open source community then undoes through fine-tuning.
But perhaps in the future new methods of censorship will be developed, which are radically harder to undo?
And of course, there are always heavy-handed options available - if we can require hairdressers to hold professional licenses, we could require the same of anyone who wants to upload to huggingface or civitai.
Apple is similarly partnering with OpenAI.
Google has released nerfed versions of its Gemini models (Gemma 2B and Gemma 7B).
It seems the only company truly setting an example for open weights is Meta. Hopefully the other companies realize that it’s in their best interest to do the same (as it seems Google is begrudgingly realizing with its nerfed releases).
Eventually open weights will win, and if OpenAI (the most ironically named company) continues to rely on closed models as its moat, it will lose.
Not only have all three of these companies released LLM weights, but Google and Microsoft in particular have released nearly countless model architectures, weights, evaluation/training/inference frameworks, and toolkits across an extremely wide spectrum.
Phi-3?
Do the Phi series models not count?
I think this is too generous of an interpretation for their deal.
If the govt wants to regulate commercial AI, it can do that by going to the companies responsible for building that AI and saying "do X, Y, and Z or else we will punish you with A, B, or C". But what do you say to a collective of developers that could be spread all around the globe? What stops that group from just saying "F* You" and continuing on?
Updated link.