hn8726
1y ago
Which models do you recommend for that amount of memory?
rpastuszak
1y ago
I asked the same question a few days back and I'm keeping the responses here:
https://bsky.app/profile/potato.horse/post/3lejngewfmc2n
mark_l_watson
1y ago
For reasoning: qwq:latest (19 GB file)
For coding: qwen2.5-coder:14b (9 GB file)
Misc. experiments, runs fast: llama3.2:latest (2 GB file)
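The `name:tag` format of the models above suggests they are Ollama tags. A minimal sketch of fetching and trying them, assuming Ollama is installed and its daemon is running (the exact download sizes are the ones quoted in the comment):

```shell
# Fetch each model locally (assumes the Ollama daemon is running):
ollama pull qwq:latest            # ~19 GB; reasoning
ollama pull qwen2.5-coder:14b     # ~9 GB; coding
ollama pull llama3.2:latest       # ~2 GB; fast general-purpose

# Start an interactive session with one of them:
ollama run qwen2.5-coder:14b

# List installed models with their on-disk sizes. As a rough rule of
# thumb, a model's file size approximates the RAM/VRAM needed to load
# it, plus extra headroom for the context (KV cache).
ollama list
```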