samspenc
2y ago
Ah, fascinating. Just curious: what's the technical blocker? I thought most of the Llama models were optimized to run on GPUs.
mayankchhabra
2y ago
It's fairly straightforward to add GPU support when running on the host, but LlamaGPT runs inside a Docker container, and that's where it gets a bit challenging.
stavros
2y ago
It shouldn't be: NVIDIA provides a Docker runtime (the NVIDIA Container Toolkit) that lets you expose your GPU to the container, and it works quite well.
dicriseg
2y ago
See above if you're interested in that. It does work quite well, even with nested virtualization (WSL2).
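For context on what that looks like in practice: with the NVIDIA Container Toolkit installed on the host, a Compose service can request the GPU via `deploy.resources.reservations.devices`. A minimal sketch (the service and image names here are illustrative, not LlamaGPT's actual configuration):

```yaml
# Requires Docker 19.03+ and the NVIDIA Container Toolkit on the host.
# CLI equivalent: docker run --rm --gpus all <image> nvidia-smi
services:
  llm:                          # hypothetical service name
    image: example/llm:latest   # hypothetical image
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all        # or an integer to limit how many GPUs
              capabilities: [gpu]
```

The container then sees the host GPU(s) through the NVIDIA runtime, which is what makes CUDA-accelerated inference possible without running directly on the host.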