Exactly. Demand for FP64 is significantly lower than for ML/AI.
Pascal isn’t incredibly cheap by comparison because it’s some secret hack. It’s cheap by comparison because most of the market (AI/ML) doesn’t want it. Speaking of which…
At the risk of a “No True Scotsman,” what qualifies as HPC gets interesting. But just today I was at a Top500 site where they were saying their Volta system isn’t worth the power draw, which supports the parent comment’s point but is still problematic for other reasons.
I mentioned llama.cpp because the /r/locallama crowd, etc., has actually driven up the price of used Pascal hardware; they treat it as a way to get VRAM on the cheap for their very narrow use cases.
If we’re talking about getting a little FP64 for CFD, that’s one thing. ML/AI is another. HPC is yet another.
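To make the FP64 point concrete, here’s a minimal sketch (pure Python, illustrative only; the `f32` helper is just a stand-in for single-precision hardware) of why precision-sensitive workloads like CFD lean on double precision: the same naive running sum drifts badly when every step is rounded to float32, but stays tight in float64.

```python
import struct

def f32(x):
    # Round a Python float (IEEE-754 double) to single precision,
    # simulating what FP32 hardware would store after each op.
    return struct.unpack('f', struct.pack('f', x))[0]

n = 1_000_000
step = 0.1

acc32 = 0.0  # simulated single-precision accumulator
acc64 = 0.0  # native double-precision accumulator
for _ in range(n):
    acc32 = f32(acc32 + f32(step))
    acc64 = acc64 + step

exact = 100_000.0  # n * 0.1
err32 = abs(acc32 - exact)
err64 = abs(acc64 - exact)
print(err32, err64)
```

The float32 accumulator ends up off by hundreds because once the sum is large, each rounded addition of 0.1 carries a systematic bias; the float64 error stays far below a thousandth. Long iterative solvers (the CFD case) accumulate exactly this kind of error, which is why FP64 throughput still matters there even though ML/AI is happy with lower precision.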