https://github.com/ROCm/ROCm/issues/1714
With Nvidia cards, I know that if I buy any Nvidia card made in the last 10 years, CUDA code will run on it. Period. (Yes, different language levels require newer hardware, but Nvidia docs are quite clear about which CUDA versions require which silicon.) I have an AMD Zen3 APU with a tiny Vega in it; I ought to be able to mess around with HIP with ~zero fuss.
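To be concrete, "messing around" means being able to build and run something like this minimal HIP kernel with hipcc (a sketch; assumes your ROCm install actually targets your gfx architecture, which is exactly the fuss in question):

    // The "hello world" of GPU runtimes; build with: hipcc add_one.cpp
    #include <hip/hip_runtime.h>
    #include <cstdio>

    __global__ void add_one(float* x, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) x[i] += 1.0f;
    }

    int main() {
        const int n = 256;
        float* x = nullptr;
        hipMalloc(&x, n * sizeof(float));
        hipMemset(x, 0, n * sizeof(float));
        add_one<<<(n + 63) / 64, 64>>>(x, n);  // HIP accepts the CUDA-style launch syntax
        float host[n];
        hipMemcpy(host, x, sizeof(host), hipMemcpyDeviceToHost);
        std::printf("x[0] = %.1f\n", host[0]); // expect 1.0 if the kernel actually ran
        hipFree(x);
        return 0;
    }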
The will-they-won't-they and the rapid dropping of hardware support are hurting the otherwise excellent ROCm and HIP projects. There is a huge API surface to implement, and it looks like they're making rapid gains.
I'm not particularly optimistic that ecosystem support will ever pan out well enough for AMD to be viable, but this seems to give Nvidia a bit too much credit: calling what they did "democratizing AI development" is a stretch.
This is, obviously, way overdue, and it might not be enough to let AMD get back into the race, but it's a start.
Where can you rent time on one? Traditionally, AMD has only helped build supercomputers, like Frontier and El Capitan, out of these cards.
This time around, Azure [0] and other CSPs (cloud service providers) are working to change that. They will have the best of the best of AMD's cards/systems for rent soon.
[0] https://techcommunity.microsoft.com/t5/azure-high-performanc...
Case in point: OpenAI closed new signups because they couldn't keep up with demand, and they literally have all the resources in the world to make things happen.
>The important thing to remember is don’t use possessive apostrophes with any pronouns, either possessive pronouns or possessive adjectives.
>If you see an apostrophe with a pronoun, it must be part of a contraction.
>its—possessive adjective of it
>it’s—contraction for “it is”
"its" would be correct in the root comment.
"it's" means "it is" or "it has"
I look at their financial performance and it’s staggering how they’ve missed the boat - and this is during a huge boom in gaming, crypto, and AI.
More likely, they wait to see how the AI HW startups shake out and then acquire the ones that have anything worth paying for.
Seems like a common thing at hardware companies: they chronically underpay, which for some reason hardware/electrical engineers seem to accept, but it makes them a last choice for competent software engineers, who have much better-paying options.
You could probably get 80% of the way there by dedicating enough AMD developers to improving AMD support in existing AI frameworks and software, in parallel with improving drivers and whatever CUDA equivalent they are betting on right now. But it would take a massive, concerted effort that few companies seem able to pull off (probably because it's hard to align the company on the right goals).
Salaries at semiconductor companies are not even close to this.
Also, why would you even need people this good? People who earn $1M offer way, way more than just technical skills.
I’m just one guy, but my experience carries over into my subsequent business decisions, and there are many like me.
https://www.amd.com/en/newsroom/press-releases/2023-10-10-am...
At this point it's almost like it has to be intentional, like some perceived tradeoff ingrained in the culture that generates shit software.
They're underpaying their hardware engineers, and if they wanted to hire good software engineers they'd need to pay more, which would cause their hardware engineers to demand better pay too.
This is the actual source[1]:
> The AMD Instinct M1300A APU was launched in January 2023 and blends a total of 13 chiplets, of which many are 3D stacked, creating a single chip package with 24 Zen 4 CPU cores fused with a CDNA 3 graphics engine and eight stacks of HBM3 memory totaling 128GB.
It's literally a typo (or a renamed SKU?) for the MI300A. So... the street is jumping on AMD because of a typo echoed by a ton of outlets?
https://www.datacenterdynamics.com/en/news/genci-upgrades-ad...
The discussion of the MI300X was on HN like 12 hours ago (after the AMD announcement event yesterday).
Are they just talking about MI300X availability?
2% is a "leap"?
It looks like NVDA is up ~1.5% since yesterday.
Maybe the LLM space is better about this, but the generative media side definitely isn't.
AMD has a market share of 0% here, and nobody publishes models with AMD support.
And I got PyTorch working on my AMD 7900 XT graphics card recently, though it was a bit of a hassle to do so.
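For what it's worth, most of the hassle tends to be convincing the ROCm runtime to see the card at all, before PyTorch even enters the picture. A HIP-level sanity check looks something like this (a sketch; assumes hipcc is installed, and note that HSA_OVERRIDE_GFX_VERSION is the workaround people commonly cite when a consumer card isn't recognized):

    // Quick sanity check: list the devices the HIP runtime can see.
    // Build with: hipcc list_devices.cpp
    #include <hip/hip_runtime.h>
    #include <cstdio>

    int main() {
        int count = 0;
        if (hipGetDeviceCount(&count) != hipSuccess || count == 0) {
            std::printf("no HIP devices visible\n");  // the usual starting point of the hassle
            return 1;
        }
        for (int i = 0; i < count; ++i) {
            hipDeviceProp_t prop;
            hipGetDeviceProperties(&prop, i);
            // A 7900 XT should report gfx1100 in gcnArchName.
            std::printf("device %d: %s (%s)\n", i, prop.name, prop.gcnArchName);
        }
        return 0;
    }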
AMD has its own Thrust GPU implementation (rocThrust), so from a high level they are somewhat interchangeable.
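For example, because rocThrust keeps Thrust's headers and the thrust:: namespace, the same source can build with either nvcc (NVIDIA Thrust) or hipcc (rocThrust); a sketch, assuming a standard install of either toolchain:

    // Builds unchanged with nvcc (NVIDIA Thrust) or hipcc (rocThrust).
    #include <thrust/device_vector.h>
    #include <thrust/functional.h>
    #include <thrust/reduce.h>
    #include <cstdio>

    int main() {
        thrust::device_vector<float> v(1 << 20, 1.0f);  // a million ones, on the device
        float sum = thrust::reduce(v.begin(), v.end(), 0.0f, thrust::plus<float>());
        std::printf("sum = %.0f\n", sum);               // expect 1048576
        return 0;
    }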
AMD is so far behind on this.
On the other hand, their 3D V-Cache chips are amazing.
Don't make me start submitting AOL links.
Do you have examples of both sides of this claim?
The other side of this claim: sales numbers of GPUs.
The stated claim was "According to the news and tech blogs AMD is always ahead of nvidia in all regards." That necessitates sources other than the parent article being discussed, because "always" and "in all regards" were invoked, not just AI.
> The other side of this claim: sales numbers of GPUs.
Your claim "But then, you have real life..." implies that the news sites you didn't cite are wrong in claiming AMD technical supremacy, not that sales numbers are off.
To restate the question: do you have news references stating that AMD is always and in all regards ahead of nvidia? And then do you have real life data points to prove that said news sites are wrong?