Better HN
0 points
Scaevolus
2mo ago
GGML is another neat ML abstraction layer, but I don't think much work has been dedicated to the Windows port.
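For context on what that abstraction layer looks like: GGML exposes a C API where you symbolically build a compute graph, then evaluate it. A minimal sketch of that style, adding two vectors (function names are from the GGML C API, but the graph-compute entry points have shifted across versions, so treat this as illustrative rather than copy-paste ready):

```c
// Minimal GGML sketch: define a graph that adds two vectors, then compute it.
#include <stdio.h>
#include "ggml.h"

int main(void) {
    struct ggml_init_params params = {
        /*.mem_size   =*/ 16 * 1024 * 1024,  // single arena for tensors + graph
        /*.mem_buffer =*/ NULL,
        /*.no_alloc   =*/ false,
    };
    struct ggml_context * ctx = ggml_init(params);

    struct ggml_tensor * a = ggml_new_tensor_1d(ctx, GGML_TYPE_F32, 4);
    struct ggml_tensor * b = ggml_new_tensor_1d(ctx, GGML_TYPE_F32, 4);
    struct ggml_tensor * c = ggml_add(ctx, a, b);  // symbolic: no math runs yet

    for (int i = 0; i < 4; i++) {
        ggml_set_f32_1d(a, i, (float) i);   // a = [0, 1, 2, 3]
        ggml_set_f32_1d(b, i, 10.0f);       // b = [10, 10, 10, 10]
    }

    // Build the forward graph ending at c, then evaluate it.
    struct ggml_cgraph * gf = ggml_new_graph(ctx);
    ggml_build_forward_expand(gf, c);
    ggml_graph_compute_with_ctx(ctx, gf, /*n_threads=*/1);

    for (int i = 0; i < 4; i++) {
        printf("%g ", ggml_get_f32_1d(c, i));  // 10 11 12 13
    }
    printf("\n");

    ggml_free(ctx);
    return 0;
}
```

The same define-then-compute structure is what lets GGML swap in different backends (CPU, CUDA, Metal) underneath one graph, which is what the CUDA discussion below is about.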
tom_0
2mo ago
llama.cpp still builds on GGML, and GPU acceleration still requires CUDA to be installed, unfortunately. I saw a PR for DirectML, but I'm not really holding my breath.
lostmsu
2mo ago
You don't have to install the whole CUDA toolkit. Nvidia ships a redistributable runtime you can bundle instead.
tom_0
2mo ago
Oh, I can't believe I missed that! That makes whisper.cpp and llama.cpp valid options if the user has an Nvidia GPU, thanks.
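For anyone following along, a sketch of what a CUDA-enabled llama.cpp build looks like. The CMake flag name has changed across versions (`GGML_CUDA` in current trees, `LLAMA_CUBLAS` in older ones), so check the repo's build docs for your checkout:

```shell
# Build llama.cpp with CUDA offload. Assumes the CUDA toolkit (or the
# redistributable runtime mentioned above) is present on the system.
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
cmake -B build -DGGML_CUDA=ON    # older versions: -DLLAMA_CUBLAS=ON
cmake --build build --config Release
```

Without the CUDA flag the same build produces a CPU-only binary, which is why CUDA is only a hard requirement for GPU acceleration.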