leetharris
1y ago
They likely continue to train dense models because they are far easier to fine-tune, and fine-tuning is a huge use case for the Llama models.
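For context, here is a minimal PyTorch sketch (illustrative only, not Llama's actual architecture or code) contrasting a dense feed-forward block with a top-1-routed MoE block. The router and the per-expert weights are the extra moving parts that tend to complicate MoE fine-tuning: experts may see very little of a small fine-tuning dataset, and load balancing adds another objective to manage.

```python
import torch
import torch.nn as nn


class DenseFFN(nn.Module):
    """Standard dense feed-forward block: every token goes through the same
    weights, so full or adapter-style fine-tuning touches one contiguous set
    of parameters."""

    def __init__(self, d_model: int, d_ff: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(d_model, d_ff),
            nn.GELU(),
            nn.Linear(d_ff, d_model),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)


class Top1MoEFFN(nn.Module):
    """Sparse MoE block with top-1 routing: each token is dispatched to a
    single expert. Fine-tuning now also has to deal with the router and with
    experts that receive uneven amounts of data."""

    def __init__(self, d_model: int, d_ff: int, n_experts: int = 8):
        super().__init__()
        self.router = nn.Linear(d_model, n_experts)
        self.experts = nn.ModuleList(
            [DenseFFN(d_model, d_ff) for _ in range(n_experts)]
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (tokens, d_model); pick one expert per token.
        expert_idx = self.router(x).argmax(dim=-1)
        out = torch.zeros_like(x)
        for i, expert in enumerate(self.experts):
            mask = expert_idx == i
            if mask.any():
                out[mask] = expert(x[mask])
        return out
```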
whimsicalism
1y ago
It probably also has to do with their internal infra. If it were just about dense models being easier for the OSS community to use & build on, they should probably be training MoEs and then distilling to dense.
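A rough sketch of what "distilling to dense" could mean here: the dense student is trained to match the MoE teacher's softened output distribution, in the style of standard knowledge distillation. The function and temperature below are illustrative assumptions, not taken from any Llama release.

```python
import torch.nn.functional as F


def distillation_loss(student_logits, teacher_logits, temperature: float = 2.0):
    """Soft-label KL loss: push the dense student's distribution toward the
    MoE teacher's temperature-softened distribution."""
    t = temperature
    teacher_probs = F.softmax(teacher_logits / t, dim=-1)
    student_log_probs = F.log_softmax(student_logits / t, dim=-1)
    # KL(teacher || student), scaled by T^2 as in the usual distillation setup.
    return F.kl_div(student_log_probs, teacher_probs, reduction="batchmean") * (t * t)
```

In training, this term would typically be added to the ordinary next-token cross-entropy loss on the fine-tuning data, so the student learns from both the hard labels and the teacher's soft targets.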