reissbaker
9mo ago
It was natively trained in FP4. Probably both to reduce VRAM usage at inference time (fits on a single H100), and to allow better utilization of B200s (which are especially fast for FP4).
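A minimal sketch of the VRAM arithmetic behind "fits on a single H100": weights alone at a given precision take params × bits / 8 bytes. The 140B parameter count below is a hypothetical stand-in (the thread doesn't state the model's size); the 80 GB is the H100's actual HBM capacity.

    # Back-of-the-envelope VRAM for weights alone, ignoring KV cache,
    # activations, and framework overhead.
    H100_VRAM_GB = 80    # real H100 HBM capacity
    N_PARAMS = 140e9     # hypothetical model size, for illustration only

    def weight_vram_gb(n_params: float, bits_per_param: int) -> float:
        return n_params * bits_per_param / 8 / 1e9

    for name, bits in [("FP16", 16), ("FP8", 8), ("FP4", 4)]:
        gb = weight_vram_gb(N_PARAMS, bits)
        verdict = "fits" if gb < H100_VRAM_GB else "does not fit"
        print(f"{name}: {gb:.0f} GB of weights, {verdict} on one 80 GB H100")

For the hypothetical 140B model, FP16 needs about 280 GB just for weights, FP8 halves that to 140 GB, and FP4 brings it to 70 GB, under the H100's 80 GB.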
irthomasthomas
9mo ago
Interesting, thanks. I didn't know you could even train at FP4 on H100s.
reissbaker
OP
9mo ago
It's impressive they got it to work; the lowest I'd heard of thus far was native FP8 training.
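For context on what training "natively" in FP4 means numerically: each value is stored in a 4-bit E2M1 format whose representable magnitudes are just {0, 0.5, 1, 1.5, 2, 3, 4, 6}, typically paired with a per-block scale factor. Below is a minimal NumPy sketch of that quantize-dequantize step; the block size of 32 and the max-to-6.0 scaling rule are illustrative assumptions, not the model's actual training recipe.

    import numpy as np

    # Representable magnitudes of E2M1 FP4 (sign is a separate bit).
    FP4_GRID = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0])

    def fake_quantize_fp4(x: np.ndarray, block: int = 32) -> np.ndarray:
        """Round each block to the nearest FP4 value after scaling the
        block so its max magnitude maps to 6.0, the largest FP4 value."""
        out = np.empty_like(x, dtype=np.float64)
        for i in range(0, x.size, block):
            chunk = x.flat[i:i + block]
            scale = max(np.abs(chunk).max() / 6.0, 1e-12)
            scaled = chunk / scale
            # index of the nearest grid magnitude for each element
            idx = np.abs(np.abs(scaled)[:, None] - FP4_GRID).argmin(axis=1)
            out.flat[i:i + block] = np.sign(scaled) * FP4_GRID[idx] * scale
        return out

    w = np.random.randn(4, 8)
    w_q = fake_quantize_fp4(w)  # every entry now lies on the scaled FP4 grid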