Allegedly, they have broken this barrier. There is a reason that science is conducted in the open, in a reproducible and traceable manner: those systems might not function properly at scale, or might never have run at an exaflop in double-precision compute.
Frontier is certainly the first publicly verified system to achieve exascale on the internationally accepted standard measurement (the HPL benchmark used for the TOP500 list).
https://www.tomshardware.com/news/chinese-exascale-supercomp...
Among other scientific purposes, of course. But the quiet part is that a lot of this comes down to simulating nukes. (Much like how the space program was really a nuke-delivery project.)
These computers remain useful for other physics simulations, of course: atom-to-atom interactions, protein folding, weather modeling. So they also serve the scientific community.
So supercomputer speed went up roughly 1000x from 2008 [1] to 2022. But home computer speeds definitely did not go up that much; it was maybe around 10x. Does this mean there is more potential headroom for home computers in the future?
Of course the supercomputers are massively parallel, but it's not like they got a building 100x larger, or did they?
[1] https://en.wikipedia.org/wiki/Roadrunner_(supercomputer)
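As a quick sanity check on the ~1000x figure, here is a back-of-the-envelope comparison using the publicly reported Rmax (HPL) numbers for Roadrunner (first petaflop system, 2008) and Frontier (first exaflop system, 2022); treat the exact values as approximate:

```python
# Rough check of the ~1000x supercomputer speedup claim, 2008 -> 2022.
# Rmax figures are the reported HPL results; treat them as approximate.
roadrunner_rmax = 1.026e15  # FLOPS: Roadrunner, June 2008 TOP500
frontier_rmax = 1.102e18    # FLOPS: Frontier, June 2022 TOP500

speedup = frontier_rmax / roadrunner_rmax
print(f"Supercomputer speedup 2008 -> 2022: ~{speedup:.0f}x")
```

So the 1000x figure checks out almost exactly over those 14 years.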
A lot of it is simply scale. This computer has ~8 million cores, compared to ~12k full cores and ~100k "processing units" on Roadrunner.
Secondly, we have fundamentally changed how we do computation by learning how to utilise GPUs (and GPU-like architectures) better. This alone gives a far greater than 10x boost between 2008 and 2022.
A GeForce 9800 GTX from March 2008 had 432.1 GFLOPS of FP32 compute. An RTX 3060 from February 2021 sits at a similar price point and does 12.74 TFLOPS FP32. An RTX 3090 Ti does 40 TFLOPS FP32, or roughly 100x the performance of the 9800 GTX.
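The ratios above are easy to verify from the quoted spec-sheet numbers (note these are theoretical peak FP32 figures, not measured throughput):

```python
# Per-GPU FP32 speedup ratios, computed from the quoted spec-sheet peaks.
gtx_9800 = 432.1        # GFLOPS FP32, GeForce 9800 GTX (2008)
rtx_3060 = 12_740.0     # GFLOPS FP32, RTX 3060 (2021)
rtx_3090_ti = 40_000.0  # GFLOPS FP32, RTX 3090 Ti (2022)

print(f"3060 vs 9800 GTX:    ~{rtx_3060 / gtx_9800:.1f}x")
print(f"3090 Ti vs 9800 GTX: ~{rtx_3090_ti / gtx_9800:.1f}x")
```

That comes out to roughly 29x and 93x per card, which is where the "roughly 100x" at the top end comes from.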
Was going to say that it's always great to see GPU machines performing well. Looking forward to seeing how far off theoretical peak the benchmark hit.