It runs reasonably fast on CPU with ~8GB of RAM in full 32-bit precision. There's plenty of room to speed it up and reduce memory consumption by quantizing the model.
I posted a video of it running on my M2 MacBook Air (on CPU, not MPS, so performance should be comparable on other hardware) on Twitter to demonstrate inference speed: https://twitter.com/vikhyatk/status/1740910503323734448