This is what makes it puzzling: CUDA is clearly an advantage, but higher-VRAM, lower-cost cards with decent open library support would still be compelling.
But Intel is still lost in its hubris, still thinks it's a serious player and "one of the boys", so it doesn't seem like they want to break ranks.
Businesswise? Because Intel management are morons. And because AMD, like Nvidia, doesn't want to cannibalize its high end.
Technically? "Double the RAM" is the most straightforward way to differentiate (that doesn't necessarily make it easy ...), because training sets you couldn't run yesterday, since they wouldn't fit on the card, can be run today. It also takes a direct shot at how Nvidia does market segmentation by RAM size.
Note that "double the RAM" is necessary but not sufficient.
You need to get people to port all the software to your cards to make them useful, and to do that, you need something compelling about the card. These Intel cards have nothing compelling about them.
Intel could also make these cards compelling by cutting the price in half, or by dropping two dozen of them on every single AI department in the US for free. Suddenly, every grad student in AI would know everything about your cards.
The problem is that Intel institutionally sees zero value in software and is incapable of making the moves it needs to compete in this market. Since software isn't worth anything to Intel, there is no way to justify any business action that isn't just "sell (kinda shitty) chips".
Given the high demand for graphics cards, is this a plausible scenario?
Given how young and volatile this domain still is, it doesn't seem unreasonable to be wary of it. Big players (Google, OpenAI, and the like) are probably pouring tons of money into trying to do exactly that.
- fewer people care about VRAM than HN commenters give the impression of
- VRAM is expensive and wouldn't make such cards profitable at the price points HN wants