I can absolutely guarantee you that neither DeepSeek nor Alibaba's highly talented Qwen group will care even a little bit, in the long run. Not if there's value to be had in AI. (And I can tell you down to the dollar what LLMs can save in certain business use cases.)
If the US decides to unilaterally shut down LLMs, that just means that the rest of the world will route around us. Whether this is good or bad is another question.
LLMs are a drop in the bucket compared to the countless other reasons the world is already routing around the US - but I don't want to get into politics or economics.
You're talking as if they are some kind of nationalized or publicly-owned asset, as opposed to a bunch of for-profit, privately-owned silos.
Even if ChatGPT, Huggingface, etc. died, we would still have the models and we would still be able to run them.
The firms pouring trillions of dollars into them want to own their creative output, and charge rent for access to it.
They won't be leapfrogging 'us'.
They'll be leapfrogging some privately owned, for-profit business.
Frankly, I don't give two shits about that, and I fail to see why anyone who doesn't own one of them should give one.
If they want me to have skin in the game, they should share either the profits, or the models. Until they do, it's not 'us', it's 'them'.
My fear is that US tech won't be able to compete with state-sponsored open source out of China and will move to ban open source or suppress it somehow.
And Alibaba's Qwen team seems to be quite genuinely talented at "small" models, 32B parameters and below. Once you get Qwen3 properly configured, it punches well above its "weight class." I'm still running real benchmarks, but subjectively, the 32B model feels like it performs somewhere between 4o-mini and 4o on "objectively measurable" tasks. It's a little "stodgy" and formal by default, though. We'll see what it looks like when people start fine-tuning it.
If the US dropped off the planet, it would maybe set LLM technology back a year.
Meta did this first I believe?
Please do!!
But seriously, LLMs are useless for anything except the most basic secretarial tasks, and even there they still require as much reviewing as a high school intern. The reason executives want to use AI instead of humans is that AI is a capital expense, which does not hit the financials the same way wages do. (Capital expenses are excluded from EBITDA, which is the most common metric for measuring a company's financial performance. But it's become so popular for companies to push expenses into ITDA whenever they can that financial analysts are starting to push back and include ITDA in their analyses.)
In a nutshell, there are 3 ways to look at company financials: the PR way (EBITDA), the financial reporting way (GAAP, which includes ITDA), and the tax way (starts from GAAP or IFRS but with numerous rules on what items are included or excluded).
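The capex-vs-wages point above can be sketched with toy numbers (all figures hypothetical, interest and taxes ignored for simplicity): spending the same amount per year on depreciation instead of wages leaves net income unchanged but makes EBITDA look better.

```python
# Hypothetical illustration: wages reduce EBITDA directly,
# while depreciation on a capital expense is added back.

REVENUE = 100.0

# Scenario A: pay human workers 20/year in wages (an operating expense).
wages = 20.0
ebitda_wages = REVENUE - wages           # wages hit EBITDA
net_income_wages = ebitda_wages          # ignoring interest and taxes

# Scenario B: buy an AI system for 100 up front, depreciated over 5 years.
depreciation = 100.0 / 5                 # 20/year of depreciation
ebitda_capex = REVENUE                   # depreciation is excluded from EBITDA
net_income_capex = REVENUE - depreciation

print(ebitda_wages, ebitda_capex)        # 80.0 100.0  (EBITDA looks 25% better)
print(net_income_wages, net_income_capex)  # 80.0 80.0  (net income identical)
```

Same economic cost per year, but scenario B reports 25% higher EBITDA, which is exactly why the metric invites this kind of expense-shifting.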
Shouldn't they have to follow the law?
US AI companies will either make sure that a similar ruling will never be made or they will ignore it and pay the fines. They won't let anybody stop the gravy train.
But then they'd have to actually communicate with people and negotiate consent instead of just hoovering up everything they can get their hands on in their quest to replace it.