If customers leave these things idle, then Oxide is going to shine. But a busy rack is going to be dominated by CPU heat.
* https://www.youtube.com/watch?v=hTJYY_Y1H9Q
From their blog post:
> Compared to a popular rackmount server vendor, Oxide is able to fill our specialized racks with 32 AMD Milan sleds and highly-available network switches using less than 15kW per rack, doubling the compute density in a typical data center. With just 16 of the alternative 1U servers and equivalent network switches, over 16kW of power is required per rack, leading to only 1,024 CPU cores vs Oxide’s 2,048.
* https://oxide.computer/blog/how-oxide-cuts-data-center-power...
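Taking the quoted figures at face value, the claimed density advantage is easy to check with back-of-the-envelope arithmetic (the numbers below come straight from the quote; the workloads and configurations behind them are not specified):

```python
# Figures quoted in the Oxide blog post comparison.
oxide_cores, oxide_kw = 2048, 15.0    # 32 Milan sleds, under 15 kW/rack
vendor_cores, vendor_kw = 1024, 16.0  # 16x 1U servers, over 16 kW/rack

oxide_density = oxide_cores / oxide_kw    # cores per kW
vendor_density = vendor_cores / vendor_kw

print(f"Oxide:  {oxide_density:.0f} cores/kW")   # ~137 cores/kW
print(f"Vendor: {vendor_density:.0f} cores/kW")  # ~64 cores/kW
print(f"Ratio:  {oxide_density / vendor_density:.1f}x")
```

So on the blog post's own numbers, the win is roughly 2x cores per kW, which tracks the "doubling the compute density" claim but says nothing about per-core performance or the chosen baseline.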
Going from 40mm fans to 80mm fans will not take energy usage from 25% to 1-2%. They must have taken an extreme example to compare against. What they’re doing is cool, but this is a marketing exaggeration targeted at people who aren’t familiar with the space.
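For context on why bigger fans help at all: under the idealized fan affinity laws, airflow scales with RPM while power scales with roughly the cube of RPM, so a larger fan delivering the same airflow at lower RPM draws much less power. A rough sketch (the speed ratio below is illustrative, not a measurement from Oxide or anyone else):

```python
# Idealized fan affinity laws for a given fan:
#   airflow ∝ rpm, power ∝ rpm^3.
# A larger fan moves the same air at lower rpm, so its power
# falls roughly with the cube of the speed ratio.

def relative_power(speed_ratio: float) -> float:
    """Power draw relative to baseline when running at speed_ratio of baseline RPM."""
    return speed_ratio ** 3

# Hypothetical example: if an 80mm fan needs half the RPM of a
# 40mm fan for the same airflow, it draws about 1/8 the power.
print(relative_power(0.5))  # 0.125
```

That cubic scaling makes large savings plausible in principle, but it still doesn't get you from 25% of rack power down to 1-2% on its own; the baseline being compared against matters a lot.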
Oxide also isn’t the only vendor using form factors other than 1U or focusing on high-density configurations. Using DC power distribution is also an increasingly common technique.
To be honest, a lot of this feels like Apple-esque marketing where they show incredible performance improvements, but the baseline used is something arbitrary.
It's just plain old engineering, optimized to sell whole racks rather than individual servers or <=8U units, sprinkled with opinions about low-level firmware and the like, with a bespoke OS and management stack.
It's also about what we don't have. We don't have a UEFI, for example, which means we don't have UEFI vulnerabilities.
Yes, "just".