I sympathize. Software and hardware are littered with one person's edge cases that are another person's entire world. But in the grand scheme, yes, subnormals are exceedingly rare. Intel's microarchitecture designers clearly think so, since they seem perfectly willing to keep punishing some applications with a massive performance cliff. Their mitigation should never have been "we'll add a cheat switch for speed" but rather "we'll work as hard as our competitors do to make these cases fast." Standards are supposed to force that, but cheaters abound (and yes, I am being a bit pejorative; cheaters don't think of themselves as cheating, they merely have important use cases that demand special dispensation).
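To make the "cheat switch" concrete, here is a minimal sketch: time a multiply whose operand is subnormal, then flip the FTZ (flush-to-zero) and DAZ (denormals-are-zero) bits in MXCSR and time it again. It assumes Linux on x86-64 with SSE; the intrinsics are the standard ones from xmmintrin.h/pmmintrin.h, the loop bound is arbitrary, and you'd compile without -ffast-math (GCC's -ffast-math links in crtfastmath.o, which sets these bits behind your back at startup).

    /* Sketch: the subnormal performance cliff, and the MXCSR "cheat switch". */
    #include <stdio.h>
    #include <time.h>
    #include <xmmintrin.h>   /* _MM_SET_FLUSH_ZERO_MODE */
    #include <pmmintrin.h>   /* _MM_SET_DENORMALS_ZERO_MODE */

    static double time_muls(float v) {
        volatile float x = v, y;   /* volatile: keep the multiplies honest */
        struct timespec t0, t1;
        clock_gettime(CLOCK_MONOTONIC, &t0);
        for (int i = 0; i < 20000000; i++)
            y = x * 0.5f;          /* subnormal operand -> microcode assist on Intel */
        clock_gettime(CLOCK_MONOTONIC, &t1);
        (void)y;
        return (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) * 1e-9;
    }

    int main(void) {
        printf("normal operand:     %.3f s\n", time_muls(1.0f));
        printf("subnormal operand:  %.3f s\n", time_muls(1e-40f));  /* 1e-40f is subnormal */
        _MM_SET_FLUSH_ZERO_MODE(_MM_FLUSH_ZERO_ON);          /* flush subnormal results to 0 */
        _MM_SET_DENORMALS_ZERO_MODE(_MM_DENORMALS_ZERO_ON);  /* treat subnormal inputs as 0 */
        printf("subnormal, FTZ+DAZ: %.3f s\n", time_muls(1e-40f));
        return 0;
    }

On the Intel parts I'm complaining about, the middle run should be dramatically slower than the other two; the last line is the cheat switch in action, trading away correct tiny values for speed.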
GPU hardware is a different but similar story, from what I can see. Flushing subnormals to zero (FTZ) saves transistors, and the originally niche use of floating point to put pixels on a screen didn't much care about such niggling details. But GPUs became general-purpose and important, and application demands have dragged them into full compliance. That's the only sane outcome in the end. Meanwhile, all this FTZ machinery has just made a mess at the layers above. None of it would be necessary if subnormals were as fast as AMD, ARM, IBM, and other chip manufacturers have managed to make them.
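And by "a mess at layers above" I mean things like this: a library routine that actually needs gradual underflow can't trust the caller's MXCSR state, so it ends up babysitting the control register around its own work. A sketch, again assuming x86-64 with SSE (careful_sum is a made-up name for illustration, not any real library's API; the bit positions are the architectural FTZ and DAZ flags):

    /* Sketch: defensive save/clear/restore of FTZ+DAZ inside a library routine. */
    #include <xmmintrin.h>

    #define MXCSR_FTZ (1u << 15)   /* flush-to-zero: subnormal results become 0 */
    #define MXCSR_DAZ (1u << 6)    /* denormals-are-zero: subnormal inputs read as 0 */

    double careful_sum(const float *a, int n) {
        unsigned int saved = _mm_getcsr();
        /* Restore IEEE gradual underflow for the duration of this routine,
           in case the caller (or its -ffast-math startup code) set FTZ/DAZ. */
        _mm_setcsr(saved & ~(MXCSR_FTZ | MXCSR_DAZ));
        double s = 0.0;
        for (int i = 0; i < n; i++)
            s += a[i];             /* tiny inputs now contribute instead of flushing */
        _mm_setcsr(saved);         /* hand the caller's mode back on the way out */
        return s;
    }

And since MXCSR is per-thread state, every thread, callback, and plugin boundary is another place where these bits can silently change underneath you. That's the tax everyone above the hardware pays for the cheat switch.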