Otek
3y ago
Possibly but I think we’re talking years if not decades.
ultra_nick
3y ago
Everyone would need a next-gen GPU to run it, and someone would have to drop $500k on a GPU rack to train it. A decade for general accessibility sounds right.
suyash
3y ago
No, the ML inference can be done in the cloud, or you can use tiny models embedded in the IDE itself. That's possible today.
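(A "tiny model embedded in the IDE" doesn't have to mean a neural network at all. As a minimal, purely illustrative sketch of the idea, here is a toy word-level bigram completer small enough to ship inside an editor; the class and training text are made up for the example, not taken from any real IDE.)

```python
from collections import Counter, defaultdict

class NgramCompleter:
    """Toy word-level bigram model, tiny enough to embed in an editor."""
    def __init__(self):
        # Maps each word to a Counter of words observed immediately after it.
        self.next_words = defaultdict(Counter)

    def train(self, text):
        words = text.split()
        for prev, nxt in zip(words, words[1:]):
            self.next_words[prev][nxt] += 1

    def suggest(self, word, k=3):
        # Return the k most frequent continuations seen after `word`.
        return [w for w, _ in self.next_words[word].most_common(k)]

completer = NgramCompleter()
completer.train("import numpy as np import pandas as pd import os")
print(completer.suggest("import"))  # continuations observed after "import"
```

Real local completion engines use far better models, but the point stands: inference like this costs microseconds and zero cloud round-trips.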