karel-3d
6d ago
Can I... somehow run this locally? DeepSeek is open source? Do I even need their API key?
(I have no experience with running anything locally, maybe it's a stupid question)
zozbot234
6d ago
Waiting for official support in llama.cpp. There is a fork that can run a lightly quantized (Q2 expert layers) DeepSeek V4 Flash in 128GB of RAM, without having to stream weights from disk.
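A rough rule of thumb (my addition, not from the thread): quantized weight size ≈ parameter count × bits per weight ÷ 8. A minimal sketch, using a hypothetical parameter count and average bit width, since the comment doesn't state either:

```python
def weight_footprint_gib(n_params: float, bits_per_weight: float) -> float:
    """Approximate in-RAM size of quantized model weights in GiB."""
    return n_params * bits_per_weight / 8 / 2**30

# Hypothetical example: a 400B-parameter MoE at an average of ~2.5 bits/weight
# (Q2 expert layers plus higher-precision shared layers):
print(round(weight_footprint_gib(400e9, 2.5)))  # roughly 116 GiB
```

This ignores KV-cache and activation memory, which add on top of the weights, so a figure like "128GB RAM" leaves only modest headroom above the weights themselves.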
karel-3d
OP
6d ago
Ouch. Can't run that on my M4 Mac with 48GB of RAM.