Mirrors the geohot rants about AMD at the time, though as others point out, this (2024) is ancient news in the AI world, and I'm not sure what value it adds to the current discussion.
There are better learning resources and a more mature ecosystem around Nvidia cards and software (CUDA).
Long answer: it depends. It will add more challenges and require significantly more effort (even outside the GPU programming itself; debugging the toolchain etc. is a somewhat separate skill). The smaller, less mature ecosystem also means you will have fewer examples to look at for reference.
Still rocking a 3090, so I can't speak from experience, but the general vibe around simple at-home inference is that it has improved (especially since both Vulkan and ROCm are now viable paths on newer cards).
>development using pytorch
I would probably still play it safe with Nvidia for anything more adventurous than token generation, even if things have improved.
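
For what it's worth, ROCm builds of PyTorch reuse the same torch.cuda namespace, so a basic sanity check looks identical on AMD and Nvidia. A minimal sketch (device names and version strings will obviously vary by machine):

    import torch

    # ROCm builds of PyTorch expose the torch.cuda API, so this same
    # check works whether the backend is CUDA or ROCm/HIP.
    if torch.cuda.is_available():
        print("Device:", torch.cuda.get_device_name(0))
        # torch.version.hip is set on ROCm builds, None on CUDA builds.
        if torch.version.hip:
            print("Backend: ROCm", torch.version.hip)
        else:
            print("Backend: CUDA", torch.version.cuda)
    else:
        print("No GPU backend available; check your PyTorch build and drivers.")
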
RDNA is a whole different (and much more poorly supported) animal than CDNA. As someone with extensive experience in both: if you're asking the question, then no.
(If you're just looking to learn, use the free Kaggle/Google Colab T4s/TPUs to get started.)
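
A minimal starter on one of those free accelerators (a sketch, assuming you've selected a GPU runtime in the notebook settings; it falls back to CPU otherwise):

    import torch

    # Use whatever accelerator the free tier handed out (typically a T4).
    device = "cuda" if torch.cuda.is_available() else "cpu"
    x = torch.randn(1024, 1024, device=device)
    y = x @ x.T  # one matmul to confirm work actually runs on the device
    print(device, y.shape)
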