Better HN
0 points
jjmarr
4mo ago
Seeing a task-specific model be consistently better at anything is extremely surprising given the rapid innovation in foundation models.
Have you tried Aristotle on other, non-Lean tasks? Is it better at logical reasoning in general?
runeblaze
4mo ago
Is it, though? There's a reason GPT has Codex variants: RL on a specific task raises performance on that task.
jjmarr
OP
4mo ago
Post-training doesn't transfer when a new base model arrives, so anyone who adopted a task-specific LLM gets burned when the next generational advance comes out.
runeblaze
3mo ago
Resources permitting, if you are chasing the frontier on some niche task, you just redo your training regime on the new-generation LLMs.