narrator
1y ago
Isn't it much better to get a Mac Studio with an M2 Ultra, 192GB of RAM, and 31 teraflops for $6,599 and run llama.cpp?
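For context: llama.cpp ships with a Metal backend for Apple Silicon, so no CUDA is involved. A minimal sketch using the llama-cpp-python bindings (my choice of illustration, not something the poster names; the model path and prompt are placeholders):

    # Sketch: Metal-accelerated inference on Apple Silicon via llama-cpp-python.
    # Assumes the package was built with Metal support, e.g.:
    #   CMAKE_ARGS="-DGGML_METAL=on" pip install llama-cpp-python
    from llama_cpp import Llama

    llm = Llama(
        model_path="./models/llama-3-8b-instruct.Q4_K_M.gguf",  # placeholder path
        n_gpu_layers=-1,  # offload all layers to the GPU (Metal on macOS)
        n_ctx=4096,       # context window size
    )

    out = llm("Q: Why run LLM inference locally? A:", max_tokens=128)
    print(out["choices"][0]["text"])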
magic_hamster
1y ago
Macs don't support CUDA, which means all that wonderful hardware will be useless for anything AI-related for at least a few years. There's Metal, but it has its own set of problems, the biggest being that it isn't a drop-in CUDA replacement.
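To make the "not a drop-in replacement" point concrete: in PyTorch, for example, Apple GPUs are reached through the separate MPS (Metal Performance Shaders) backend, so device-selection code written against CUDA needs an explicit fallback. A small sketch (PyTorch is my example, not one the commenter names):

    import torch

    # CUDA-only code like torch.device("cuda") fails on a Mac; Apple GPUs
    # are exposed through the separate MPS backend instead.
    if torch.cuda.is_available():
        device = torch.device("cuda")
    elif torch.backends.mps.is_available():
        device = torch.device("mps")  # Metal-backed GPU on Apple Silicon
    else:
        device = torch.device("cpu")

    x = torch.randn(1024, 1024, device=device)
    print(device, (x @ x).sum().item())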
adam_arthur
1y ago
You can do LLM inference without CUDA just fine. Download Ollama and see for yourself.
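Ollama is built on llama.cpp and serves a local HTTP API on port 11434 by default. A quick sketch of calling it from Python, assuming a model has already been pulled (the model name here is a placeholder):

    import json
    import urllib.request

    # Sketch: query a locally running Ollama server (default port 11434).
    # Assumes a model has been pulled beforehand, e.g. `ollama pull llama3`.
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=json.dumps({
            "model": "llama3",              # placeholder: any locally pulled model
            "prompt": "Why is the sky blue?",
            "stream": False,                # return one JSON object, not a stream
        }).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        print(json.loads(resp.read())["response"])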
doublepg23
1y ago
I'm assuming this won't support CUDA either?
egorfine
1y ago
For LLM inference, yes, absolutely.