metalliqaz
1mo ago
Perhaps I should just google it, but I'm under the impression that ollama uses llama.cpp internally, not the other way around.
Thanks for that data point. I should experiment with ROCm.
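(That impression is correct, for what it's worth, and it's easy to confirm from the source. A rough sketch of checking, assuming a recent checkout — the exact directory has moved around between ollama versions, so treat the paths as illustrative:)

    git clone https://github.com/ollama/ollama
    cd ollama
    # llama.cpp/ggml sources are vendored into the tree; recent
    # versions keep them under llama/, while older ones pulled
    # llama.cpp in as a git submodule
    ls llama/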
naasking
1mo ago
From what I understand, ROCm has gotten a lot buggier in the 7.x series, with performance regressions on many GPUs. Vulkan performance for LLMs is apparently not far behind ROCm, and it is far more stable and predictable at this time.
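For anyone who wants to compare the two backends directly, llama.cpp can build against either one behind a CMake flag. A minimal sketch, assuming a recent llama.cpp release (flag names have changed over time, and model.gguf is a placeholder for whatever model you test with):

    # Vulkan backend
    cmake -B build-vulkan -DGGML_VULKAN=ON
    cmake --build build-vulkan --config Release

    # ROCm (HIP) backend; older releases used GGML_HIPBLAS, and the
    # build docs may also call for hipcc and an AMDGPU_TARGETS setting
    cmake -B build-rocm -DGGML_HIP=ON
    cmake --build build-rocm --config Release

    # run the same model through both builds with all layers offloaded
    ./build-vulkan/bin/llama-bench -m model.gguf -ngl 99
    ./build-rocm/bin/llama-bench -m model.gguf -ngl 99

llama-bench reports prompt-processing and token-generation throughput separately, which is where any gap between the two backends tends to show up.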
cpburns2009
1mo ago
I meant ollama uses llama.cpp internally. Sorry for the confusion.