Better HN
0 points
StevenWaterman
3y ago
0 comments
Their models range from 70 MB to 3 GB. The largest model is smaller than the optimised Stable Diffusion. Not sure what the inference speed is like; I haven't tried it myself yet.
IceWreck
3y ago
I just tested it myself. It's fast enough on Colab (a couple of seconds), but I'm not sure if it's fast enough to transcribe realtime audio yet.
lynguist
3y ago
"small" runs in realtime on a MacBook Air M1 CPU.
MacsHeadroom
3y ago
Colab is using one of the larger models. Tiny probably runs in realtime on a single core of an RPi.
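The "realtime" question the replies keep circling comes down to the real-time factor (RTF): processing time divided by audio duration, where an RTF below 1.0 means the model keeps up with live audio. A minimal sketch; the timings below are illustrative assumptions for the sake of the arithmetic, not measurements of any particular model:

```python
def real_time_factor(processing_seconds: float, audio_seconds: float) -> float:
    """Return the real-time factor: values below 1.0 mean the transcriber
    processes audio faster than it is spoken, i.e. realtime-capable."""
    return processing_seconds / audio_seconds

# Assumed numbers: a 30 s clip transcribed in 6 s gives RTF 0.2 (realtime-capable).
fast = real_time_factor(6.0, 30.0)
assert fast < 1.0

# Assumed numbers: the same clip taking 45 s gives RTF 1.5 (too slow for live audio).
slow = real_time_factor(45.0, 30.0)
assert slow > 1.0
```

Note that an RTF just under 1.0 still adds latency for streaming use, since the transcript only catches up at the end of each chunk.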