I don't think the reckoning will come from AMD stealing Nvidia's market share; it'll come when the hype bubble collapses and businesses start treating neural networks like commodities, running whatever is cheapest instead of the absolute most powerful. AMD is in a great position, because they make both great GPU/NPU hardware and great CPU hardware.
AMD trades at a price-to-earnings ratio of ~99, while NVDA trades at a P/E of ~38. P/E isn't everything when evaluating companies, but I don't see other reasons to think AMD is undervalued.
Keep in mind that Nvidia gets to charge astronomical prices because AMD's software is crap. Nvidia charges 2-5x as much for equivalent or worse hardware compared to what AMD is selling. That PE ratio will collapse if AMD ever manages to get its act together.
I'm still astounded AMD hasn't fired every executive from the CEO down for this obvious multi-year failure.
[1] Co-developed proprietary software stacks running on highly proprietary and non-standard hardware targeting very specific workloads.
People think that because a company has grown very large very quickly, it can't grow much more. But on the other hand, there is clear evidence that Nvidia continues to dominate AMD's offerings despite the latter having a competitive product now. So the metric for Nvidia isn't Nvidia vs. AMD but the growth of the AI market overall.
Take a look at their last quarter's income statement graph: https://i.imgur.com/mQwZ5o4.png - Only once every few years do I see a Sankey graph that looks like that. And it has only kept growing over the last 10 years.
Then NVDA pumped by trillions and AMD did not, and even AMD's crack team of trained denial specialists could no longer stay the course, so they started to turn the ship. But that was only a year or two ago, and even the tiniest changes take years in hardware land, so the biggest sins are still baked into the latest chips, but at least the software has started to suck a bit less.
I no longer see 100% of the people who try to build on AMD compute run away screaming. Many do, but not all. That's a change, an important and positive one. If AMD keeps it up maybe they can save us from 80% margins on matrix multiplication after all.
No, it is not, actually. They are making insane amounts of money and have very strong forward guidance. With the drop in the last month it is actually cheap (low PEG ratio). When the market turns, Nvidia is likely to soar once again.
Tell me you haven't looked at Nvidia's financials (especially the margins) without telling me. It basically prints money, now and in the foreseeable future, and all of its products are permanently sold out, even at the insane prices Nvidia is charging.
I think this is the arguable part. The more valuable AI compute becomes, the more reason customers have to invest in getting out from under their software moat. Their hardware is good, but not unassailable. I think their modest (by tech hype standards) P/E is recognition of this.
What happens to industries that have huge margins? They get compressed by competition. The biggest companies on the planet, literally, are figuring out how to not pay Nvidia those huge margins.
The MI300X was lacking in a few areas vs. the H100 at the time; overall perf and power efficiency were two big ones. Power efficiency is critical atm; that's seriously the biggest barrier to scaling datacenter rollout right now.
AMD's next card might be better on this front, or it might not. But this article doesn't say anything about the next card; it's referring to a card from 2023.
I didn’t even notice that. That obviously means George is living in the future.
So basically he got 2 MI300s and is currently trying to pump AMD?
Not two cards, two boxes (with 8 cards each I'd assume).
I'd love to see AMD get a multiplatform product so mature that I can pip install PyTorch on Windows or MacOS with an AMD card (https://pytorch.org/get-started/locally/). But I don't think their market cap will change quickly even if this happens. Many people have bought AMD cards in the past because they are cheaper, and then died waiting for AMD to ship a mature CUDA equivalent. Nobody is going to rush to buy AMD cards as soon as the software is good; they will change gradually, as they replace NVDA hardware, and not everyone is going to make that choice.
If I were making a bet (and I'm not), I'd bet that NVDA is overvalued right now and their growth will slow to correct this but it won't crash, and I'd bet that AMD will gradually increase in value to reward them for software investments, but it won't spike as soon as their software looks good. Neither of these things would I want to put a lot of money on, since they are long term bets, and if you're going long then you might as well just invest in the broader market. And even if I thought that NVDA was going to crash and AMD was going to spike, I still wouldn't bet because I have no idea whether it would happen in the next 6 months or 6 years.
AMD is not undervalued, rather it is Nvidia that is overvalued.
It’s the same processor that is in that new Framework desktop that was on HN recently, but for some reason Framework is not putting it in laptops: https://frame.work/desktop
A lot of readers on this site have good insight into this; it's a key question that financial people are asking without the domain knowledge many people here possess.
"As fast as AMD tries to fill in the CUDA moat, NVIDIA engineers are working overtime to deepen said moat with new features, libraries, and performance updates."
Rewriting CUDA programs to run using ROCm is expensive and time consuming. It is difficult to justify this expense when in all likelihood the ROCm version will be less efficient, less performant, and less stable than the original. In the grand scheme of things, AMD hardware is indeed cheaper but it's not that much cheaper. From a business standpoint, it's just not worth it.
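For a sense of where the cost actually sits: the mechanical part of such a port is often small, because HIP (ROCm's CUDA analogue) mirrors the CUDA API almost one-to-one, and tools like hipify automate most of the renames. A toy sketch of what a ported kernel looks like (illustrative only, not from the article):

```cpp
// Toy SAXPY after a CUDA -> HIP port: the kernel body is unchanged;
// only the host-side API prefixes move from cuda* to hip*.
#include <hip/hip_runtime.h>
#include <cstdio>
#include <vector>

__global__ void saxpy(int n, float a, const float* x, float* y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) y[i] = a * x[i] + y[i];
}

int main() {
    const int n = 1 << 20;
    std::vector<float> hx(n, 1.0f), hy(n, 2.0f);
    float *dx = nullptr, *dy = nullptr;
    hipMalloc(&dx, n * sizeof(float));   // was: cudaMalloc
    hipMalloc(&dy, n * sizeof(float));   // was: cudaMalloc
    hipMemcpy(dx, hx.data(), n * sizeof(float), hipMemcpyHostToDevice);
    hipMemcpy(dy, hy.data(), n * sizeof(float), hipMemcpyHostToDevice);
    // The familiar triple-chevron launch syntax also works under hipcc.
    saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, dx, dy);
    hipMemcpy(hy.data(), dy, n * sizeof(float), hipMemcpyDeviceToHost);
    printf("y[0] = %f\n", hy[0]);        // expect 4.0 (2*1 + 2)
    hipFree(dx);
    hipFree(dy);
    return 0;
}
```

The translation itself is not the expensive part; re-validating, re-tuning, and supporting the result on a far less battle-tested stack is, which is exactly why the business case rarely closes.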
Knowing what I know about how management thinks, even if AMD managed to make an objectively superior product at a much better price, institutional momentum alone would keep people on CUDA for a long time.
AMD has been hobbled by the quality of their drivers
I always hear this and I believe it, but I've never been able to find any insight into what exactly is holding them back. Given the way nVidia is printing money, surely it absolutely cannot be a lack of motivation on AMD's part?
This is a very uninformed thought as I have no experience writing drivers, nor am I familiar with the various things supported by CUDA and ROCm. But how is AMD struggling with ROCm compute drivers, when their game drivers have been plenty stable as far as I have experienced? Surely the surface area of functionality needed for the graphics drivers is larger and therefore the compute drivers should be a relatively easier task? Or am I wrong and CUDA has a bunch of higher-level stuff baked into it and this is what AMD struggles to match?
and because they sold less performant hardware.
Does anybody have any insight into specifically what part of compute performance AMD is struggling to match? Did AMD bet on the wrong architectural horse entirely? Are they unable to implement really basic compute primitives as efficiently as they want because nVidia holds key patents? Did nVidia lock down the entire pool of engineers who can implement this shit in a performant way? I mean, aside from GPU compute stuff, it sure looks to me like AMD is executing well. It doesn't seem like they're a bunch of dunces over there. Quite the opposite?
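To make the "higher-level stuff" from the question above concrete: CUDA isn't just a kernel language: it ships heavily tuned libraries (cuBLAS, cuDNN, NCCL) that frameworks call directly, and AMD has to match each of them (rocBLAS/hipBLAS, MIOpen, RCCL) in both correctness and performance. A minimal sketch of the kind of library call in question, assuming the CUDA toolkit is installed:

```cpp
// Minimal cuBLAS SGEMM (C = alpha*A*B + beta*C). Matching the API is easy;
// matching years of per-architecture kernel tuning behind it is the hard part.
#include <cublas_v2.h>
#include <cuda_runtime.h>
#include <cstdio>
#include <vector>

int main() {
    const int n = 512;
    std::vector<float> hA(n * n, 1.0f), hB(n * n, 1.0f), hC(n * n, 0.0f);
    float *dA, *dB, *dC;
    cudaMalloc(&dA, n * n * sizeof(float));
    cudaMalloc(&dB, n * n * sizeof(float));
    cudaMalloc(&dC, n * n * sizeof(float));
    cudaMemcpy(dA, hA.data(), n * n * sizeof(float), cudaMemcpyHostToDevice);
    cudaMemcpy(dB, hB.data(), n * n * sizeof(float), cudaMemcpyHostToDevice);

    cublasHandle_t handle;
    cublasCreate(&handle);
    const float alpha = 1.0f, beta = 0.0f;
    // Column-major, like classic BLAS; hipBLAS exposes the same call shape
    // (hipblasSgemm) on AMD hardware.
    cublasSgemm(handle, CUBLAS_OP_N, CUBLAS_OP_N, n, n, n,
                &alpha, dA, n, dB, n, &beta, dC, n);

    cudaMemcpy(hC.data(), dC, n * n * sizeof(float), cudaMemcpyDeviceToHost);
    printf("C[0] = %f\n", hC[0]);  // all-ones inputs: expect 512.0
    cublasDestroy(handle);
    cudaFree(dA); cudaFree(dB); cudaFree(dC);
    return 0;
}
```

At least part of the answer to the question above is likely this library surface: every framework hot path bottoms out in a call like the one above, tuned per GPU generation, rather than in exotic primitives or patented tricks.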
There's movement to implement CUDA libraries that work on non-Nvidia cards, but I guess adoption could be hindered by legal fears.
What you're looking for is SCALE...
and they are making amazing progress.
The actual challenger is Cerebras: no need to load all the parameters (VRAM -> SRAM) for every batch. But they have yet to prove they can scale and support the custom stack. We'll see.
It seems AMD is just sending more hardware. As far as I know the drivers are still lacking.