Better HN
0 points
cherioo
3mo ago
GPT-4.5 was allegedly such a pre-train. It just didn’t perform well enough to announce and productize it as such.
htrp
3mo ago
It wasn't economical to deploy, but I expect it wasn't wasted; expect the OpenAI team to pick that back up at some point.
mips_avatar
3mo ago
The scoop Dylan Patel got was that partway through the GPT-4.5 pretraining run the results were very good, but they leveled off, and OpenAI ended up with a huge base model that really wasn't any better on their evals.