There are a lot of local AI hobbyists: just visit /r/LocalLLaMA to see how many people are running models on 8GB cards, or asking for higher-VRAM versions of existing cards.
That makes the situation puzzling: CUDA is clearly an advantage, but higher-VRAM, lower-cost cards with decent open-source library support would still be compelling.