> Microarchitecture design skills are not theoretical numbers you manage to put on a spec sheet
not only can you measure this, the industry does measure it: it's literally the first factor in the Rayleigh resolution equation (the k1 process factor), and everyone is constantly optimizing for it.
https://youtu.be/HxyM2Chu9Vc?t=196
https://www.lithoguru.com/scientist/CHE323/Lecture48.pdf
in the abstract, why does it surprise you that the semiconductor industry would have a way to quantify that?
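For reference, this is the form it takes in lithography - straight out of the textbook material linked above, nothing NVIDIA-specific:

    \[ \mathrm{CD} = k_1 \, \frac{\lambda}{\mathrm{NA}} \]

CD is the smallest printable feature (critical dimension), lambda is the exposure wavelength, NA is the numerical aperture of the projection optics, and k1 is the process factor - the number that OPC, inverse lithography, and the rest of the computational bag of tricks push down toward its single-exposure floor of 0.25. For example, at lambda = 193 nm and NA = 1.35 (standard immersion), getting k1 from 0.40 down to 0.30 moves the printable CD from roughly 57 nm to roughly 43 nm.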
like, realize that NVIDIA being on a tear with their design has specifically coincided with the point when they decided to go all-in on AI (the 2014-2015 era). Maxwell was the first architecture that showed what a stripped-down architecture could do with neural nets, and it's pretty clear NVIDIA has been working on this ML-assisted computational lithography and computational design stuff since around then - they've been public about it for several years now at least (it might go back further, I'd have to look).
https://www.newyorker.com/magazine/2023/12/04/how-jensen-hua...
https://www.youtube.com/watch?v=JXb1n0OrdeI&t=1383s
Since that mid-2010s moment, it's been Pascal vs Vega, Turing (a significant redesign with an explicit focus on AI/tensor) vs RDNA1 (a significant focus on crashing to desktop), Ampere vs RDNA2, etc. Over that stretch NVIDIA has almost continuously done more with less: beaten custom advanced tech like HBM with commodity products and small evolutions thereof (GDDR5X/6X), matched or beaten the efficiency of extremely expensive TSMC nodes with the junk Samsung silicon they got for a song, etc. Quantitatively, by any metric, they have done much better than AMD.

Like, Vega is your example of AMD design? Or RDNA1, the architecture that never quite ran stable? RDNA3, the architecture that still doesn't idle right, and whose MCM approach uses so much silicon it raises costs instead of lowering them? Literally the sole generation from The Competition that hasn't been a disaster is RDNA2, so yeah, solid wins and steady iteration are enough to say NVIDIA is doing quantitatively better, especially considering they were overcoming a node disadvantage for most of that. They were focused on bringing costs down, and frankly they were so successful at it that AMD kinda gave up on trying to outprice them.
Contrast that with the POSCAP/MLCC problem in 2020 (the RTX 3080 launch crash-to-desktop issue): despite a lot of hype from tech media that it was going to be a huge scandal or cost real performance, NVIDIA patched it dead within a week at basically no perf cost. Gosh, do you think they might have run some GPGPU-accelerated simulations to figure out so quickly how the chip was going to boost and what the transient current surges were going to be?
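To be concrete about what kind of simulation I mean (a toy sketch of my own, with made-up numbers - not anything NVIDIA has published): the POSCAP-vs-MLCC question boils down to how far the core rail droops through the series resistance and inductance of the cap bank when the boost algorithm steps the load current, before the VRM loop catches up. Even a dumb forward-Euler model shows the shape of it:

    # Toy power-delivery transient sketch. Illustrative only: all component
    # values are made up, and this is just the *kind* of analysis I mean,
    # not NVIDIA's actual methodology.

    def simulate_droop(c_bank, esr, esl, i_step=50.0, t_rise=1e-6,
                       v_in=1.0, dt=1e-9, t_end=20e-6):
        """Forward-Euler sim of a ~1 V GPU rail: VRM -> ESL/ESR -> cap bank -> die.

        c_bank : total decoupling capacitance behind the die (F)
        esr    : equivalent series resistance of the path (ohm)
        esl    : equivalent series inductance of the path (H)
        i_step : size of the load-current step (A), e.g. a boost-clock jump
        Returns the worst-case rail voltage seen during the transient.
        """
        v_c = v_in   # rail/capacitor voltage
        i_l = 0.0    # current through the series inductance
        worst = v_in
        t = 0.0
        while t < t_end:
            i_load = i_step * min(t / t_rise, 1.0)   # ramped load step at t=0
            di = (v_in - esr * i_l - v_c) / esl      # inductor current slope
            dv = (i_l - i_load) / c_bank             # capacitor voltage slope
            i_l += di * dt
            v_c += dv * dt
            worst = min(worst, v_c)
            t += dt
        return worst

    # Same total capacitance, different series impedance: a sluggish bulk-only
    # (POSCAP-style) bank vs. one with more low-impedance ceramics in parallel.
    for name, esr, esl in [("bulk-heavy bank", 2e-3, 2e-9),
                           ("MLCC-heavy bank", 0.5e-3, 0.5e-9)]:
        droop = simulate_droop(c_bank=1e-3, esr=esr, esl=esl)
        print(f"{name}: worst-case rail voltage {droop:.3f} V")

Obviously the real thing is a full board/package model with extracted parasitics and the actual boost algorithm in the loop, which is exactly the kind of workload you'd throw your own GPUs at.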
literally, they do have better design skills: some of it is their systems thinking, some of it is their engineers (they pay better, offer better QOL and working conditions, and get the cream of the crop), and some of it is the better design and computational-lithography tooling they have been dogfooding for 3-4 generations now.
people don't get it: startup mentality, founder-led, with a $3t market cap. Jensen is built different. Why wouldn’t they have been using this stuff internally? That’s an extremely Jensen move.