"Now AI has made everything more complex!" "AI is embedded in everything we do"...
Sounds like marketing gibberish and obfuscation, combined with self-promotion.
That's just my read at first sniff.
I also recently wrote a blog explaining how reinforcement fine-tuning works, which is likely at least part of the pipeline used to train o1: https://openpipe.ai/blog/openai-rft
The correct URL is https://cdn.openai.com/o1-system-card-20241205.pdf , at least according to https://arxiv.org/html/2412.14135v1 (which contains typos that OP's submission doesn't).
All of them are here though. No .openai. https://www.iana.org/domains/root/db
.open is among the worst. :/
> Purchasing a .open domain name isn't available to the general public. This particular extension is owned by American Express and currently isn't for sale or open to registration, limiting its use to only selected entities associated with American Express. It's primarily designed to serve the interests of the corporation and its customers' claims or needs.
https://tld-list.com/tld/open - Who is able to buy a .open domain name?
You could also choose to enrich the discussion by elaborating on why you think this is worthless instead.
https://www.msn.com/en-us/money/markets/bytedance-plans-to-s...
- creating more efficient models, such as the MoE-based DeepSeek
- getting their hands on cutting-edge GPUs all the same
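For anyone unfamiliar with why MoE makes models cheaper to run: a gating network routes each token to only its top-k experts, so per-token compute scales with k rather than with the total parameter count. Here's a minimal sketch of top-k gating in plain Python (the expert and gate functions are made-up stand-ins, not DeepSeek's actual architecture):

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def moe_forward(token, experts, gates, top_k=2):
    """Route a token to its top-k experts and mix their outputs.

    Only top_k experts actually run, so per-token compute grows with
    top_k, not with len(experts) -- the core MoE efficiency argument.
    """
    scores = softmax([g(token) for g in gates])
    ranked = sorted(range(len(experts)), key=lambda i: scores[i], reverse=True)
    chosen = ranked[:top_k]
    norm = sum(scores[i] for i in chosen)  # renormalize over selected experts
    return sum(scores[i] / norm * experts[i](token) for i in chosen)
```

In a real model the experts are feed-forward blocks inside each transformer layer and the gate is a learned linear projection, but the routing math is the same idea.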
I think it was Dylan Patel (from semianalysis) on Dwarkesh that mentioned one scam is for a Chinese source to arrange for a SOTA NVidia cluster to be bought/installed in some non-embargoed country, then dismantled and shipped to China.
"HarmonyOS NEXT (Chinese: 鸿蒙星河版; pinyin: Hóngméng Xīnghébǎn) is a proprietary distributed operating system and a major iteration of HarmonyOS, developed by Huawei to support only HarmonyOS native apps."
https://www.youtube.com/watch?v=-haWhgmUheA
The detail starts about 7 minutes in.
They spent a lot of money to find a lot of shallow gradients. Everyone else can climb those same gradients by putting in a little bit of money. Every single funded vertical and research org is proving this. Players in third place and below are incentivized to release their weights to develop an ecosystem around them. Meta and Tencent get to ensure the technology doesn't evolve beyond them by commoditizing their complement and releasing stuff like Llama and Hunyuan for free.
Furthermore, OpenAI hasn't stumbled across a defensible moat. There's zero switching cost to move to another product, and they don't control any major panes of glass to stay as a default.
If OpenAI doesn't find a moat soon, they're gonna be cooked. The value of foundation models will plummet.
But also, if you want people to stop mocking "Open" AI, then maybe they should stop being such a mockable caricature of themselves.
Perhaps we should just string-sub to IAnepO or some such, so we can engage with the models and company as it is, without dealing with the (empty) semantics of the name.
Typos in the first sentence of the paper don't give me confidence that I am about to read something worthwhile.
In an ideal world would second-language speakers of English proofread assiduously? Of course, yes. But time is finite, and in cases like this, so long as a threshold of comprehensibility is cleared, I always give the benefit of the doubt to the authors and surmise that they spent their limited resources focusing on what's more important. (I'd have a much different opinion if this were marketing copy instead of a research paper, of course.)
Not dismissing work for trivially avoidable mistakes risks wasting your precious, limited lifespan investing effort into nonsense. These signals are useful and important. If they couldn't be bothered to proofread, what else couldn't they be bothered to do?
>spent their limited resources focusing on what's more important
Showing that you give a crap is important, and it takes seconds to run through a spell checker.
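And the check really is trivial to automate. Here's a toy sketch of what any spell checker does: flag words absent from a dictionary. The tiny word set below is a stand-in for illustration; real tools load a full word list (e.g. /usr/share/dict/words) and it still catches "Inteiligence":

```python
import re

# Stand-in dictionary for illustration only; a real spell checker
# loads a full word list rather than this hand-picked set.
KNOWN = {"openai", "o1", "represents", "a", "significant",
         "milestone", "in", "artificial", "intelligence"}

def flag_typos(text):
    """Return the words in text that don't appear in the known-word set."""
    words = re.findall(r"[a-zA-Z0-9]+", text.lower())
    return [w for w in words if w not in KNOWN]

print(flag_typos(
    "OpenAI o1 represents a significant milestone in Artificial Inteiligence"
))
# -> ['inteiligence']
```

Any editor's built-in squiggles, or `aspell check paper.txt` from the command line, does the same job in seconds.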
Two of the authors are from "Shanghai AI Labs" rather than students, so one might hope it had at least been proofread and passed some sort of muster.
This is a terrible heuristic for evaluating AI papers. If you use it, you will miss a lot of good work by very strong researchers with below-average English writing skills.
I have not read this paper carefully so claim nothing one way or the other about its quality. It superficially seems like a pleasant and timely survey although a little flag-planty.
> OpenAI o1 represents a significant milestone in Artificial Inteiligence,
Inteiligence
Safe to say OpenAI has nothing to worry about
> the main techinique behinds o1 is the reinforcement learining.
How long until we see DeepSeek-o1 ?
What is going on?
I'm guessing English isn't their first language.
But also: no one knows what the red squiggle marks in a textarea mean anymore, because no one uses textareas to write anything.