AMD's just taking advantage of Nvidia's blunder.
Can someone explain why they think nvidia really did this?
Hardware manufacturers have imposed weird arbitrary feature limits on things forever. It's how they segment out valuable markets. Even in graphics you get output limits to protect workstation cards. Nothing new.
I don't know, I just find producing something and then breaking it to be wasteful and borderline immoral.
https://www.overclock3d.net/news/gpu_displays/amd_s_rx_6700_...
It's not like AMD is above artificially limiting the performance of certain workloads when it suits them; they do it with CAD applications unless you pay the workstation tax.
I'm curious about that; do you have an example? I only know of nvidia consumer drivers giving you abysmal performance if you try to use fixed-function-pipeline line or wireframe rendering with OpenGL. Or at least they did a couple of years ago. As soon as you used shaders to render the same lines, everything got super fast. I always suspected it's their way to force users of CAD systems to buy their pricey professional cards. No game would use OpenGL line rendering without shaders, and if any game did in the long-gone past, it's probably so old and the line count so low that it would still be fine despite the artificial slowdown.
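If anyone wants to poke at it themselves, here's a rough, untested sketch of the kind of comparison I mean (freeglut + GLEW assumed; the line count, GLSL version, and shader are arbitrary choices of mine): it draws the same random lines either through the legacy immediate-mode path or through a trivial VBO + shader path, and prints the frame rate so you can see which path the driver punishes.

    /* Rough sketch: draw 100k random lines either via the legacy immediate-mode
     * path (default) or via a VBO + trivial shader (run with any argument).
     * Assumes freeglut + GLEW: gcc lines.c -o lines -lGLEW -lglut -lGL */
    #include <GL/glew.h>
    #include <GL/freeglut.h>
    #include <stdio.h>
    #include <stdlib.h>

    #define N_LINES 100000

    static GLuint vbo, prog;
    static GLfloat verts[N_LINES * 4];   /* two (x, y) endpoints per line */
    static int use_shader = 0;

    static const char *vs_src =
        "#version 120\n"
        "attribute vec2 pos;\n"
        "void main() { gl_Position = vec4(pos, 0.0, 1.0); }\n";
    static const char *fs_src =
        "#version 120\n"
        "void main() { gl_FragColor = vec4(1.0); }\n";

    static GLuint compile(GLenum type, const char *src)
    {
        GLuint s = glCreateShader(type);
        glShaderSource(s, 1, &src, NULL);
        glCompileShader(s);              /* error checks omitted for brevity */
        return s;
    }

    static void init(void)
    {
        for (int i = 0; i < N_LINES * 4; i++)      /* random endpoints in clip space */
            verts[i] = (GLfloat)rand() / RAND_MAX * 2.0f - 1.0f;

        glGenBuffers(1, &vbo);
        glBindBuffer(GL_ARRAY_BUFFER, vbo);
        glBufferData(GL_ARRAY_BUFFER, sizeof verts, verts, GL_STATIC_DRAW);
        glBindBuffer(GL_ARRAY_BUFFER, 0);

        prog = glCreateProgram();
        glAttachShader(prog, compile(GL_VERTEX_SHADER, vs_src));
        glAttachShader(prog, compile(GL_FRAGMENT_SHADER, fs_src));
        glLinkProgram(prog);
    }

    static void display(void)
    {
        static int frames = 0, last_ms = 0;

        glClear(GL_COLOR_BUFFER_BIT);

        if (!use_shader) {
            /* Legacy fixed-function path: the one consumer drivers allegedly keep slow. */
            glBegin(GL_LINES);
            for (int i = 0; i < N_LINES * 2; i++)
                glVertex2f(verts[i * 2], verts[i * 2 + 1]);
            glEnd();
        } else {
            /* Same geometry as a VBO + shader: usually dramatically faster. */
            GLint loc = glGetAttribLocation(prog, "pos");
            glUseProgram(prog);
            glBindBuffer(GL_ARRAY_BUFFER, vbo);
            glEnableVertexAttribArray(loc);
            glVertexAttribPointer(loc, 2, GL_FLOAT, GL_FALSE, 0, 0);
            glDrawArrays(GL_LINES, 0, N_LINES * 2);
            glDisableVertexAttribArray(loc);
            glUseProgram(0);
        }

        glutSwapBuffers();

        frames++;                                  /* crude FPS counter */
        int now = glutGet(GLUT_ELAPSED_TIME);
        if (now - last_ms >= 1000) {
            printf("%s path: %d fps\n", use_shader ? "shader" : "fixed-function", frames);
            frames = 0;
            last_ms = now;
        }
        glutPostRedisplay();                       /* redraw continuously */
    }

    int main(int argc, char **argv)
    {
        use_shader = argc > 1;                     /* any argument selects the shader path */
        glutInit(&argc, argv);
        glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB);
        glutCreateWindow("fixed-function vs shader lines");
        glewInit();
        init();
        glutDisplayFunc(display);
        glutMainLoop();
        return 0;
    }

If the driver does what I suspect, the immediate-mode run crawls on a consumer card while the shader run is basically free, even though it's the same geometry.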
The consumer RX580 and workstation WX7100 are based on the same Polaris silicon, and the consumer card is configured with a higher power limit of 185W vs 130W, but the workstation card usually performs similarly or much better anyway. It's most egregious in the Siemens NX test where the workstation card is about 8 times faster.
I think AMD strategically throttles legacy OpenGL paths much like Nvidia does. Autodesk Max/Maya use modern shader-based viewports, and in those the consumer card pulls ahead, as you'd expect given its higher power limit.
I don’t know what AMD is doing, to be honest. You would think that they’d want to get this all working ASAP so that they can serve as an alternative to nvidia for ML and gpgpu.
I guess there’s so much demand for Nvidia between ML and cryptocurrency and gaming that AMD having feature parity with nvidia in only one subcategory (gaming) is enough to still sell all their production capacity.
"device made specifically for gaming is not useful for people uninterested in gaming"
It seems like Econ 101: prices rise when demand outstrips supply. But that isn't happening at the retail level. Instead, scalpers are buying them en masse and reselling at whatever the market will actually bear.
Something I've always wondered about, but I was never sure who to ask that would know the nuances of it.
Gamers are also perceived as being loyal to a given brand, which is always good for ensuring a certain level of demand and interest in their future products. If they can't buy a given product, they can hardly be loyal to that particular brand.
Miners are more rational actors, and will pay as much as needed to secure as much supply as they possibly can, as long as they can turn a profit. If the price of cryptocurrency is high, so is the price that miners are willing to pay.
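To make that calculus concrete, here's a toy break-even sketch; every number in it is an invented placeholder I picked for illustration, not real market data.

    /* Toy break-even sketch: how much would a rational miner bid for a card?
     * Every figure here is an invented placeholder, not real market data. */
    #include <stdio.h>

    int main(void)
    {
        double revenue_per_day   = 5.00;  /* hypothetical mining income per card, USD/day */
        double card_power_kw     = 0.25;  /* hypothetical card power draw, kW */
        double electricity_price = 0.10;  /* hypothetical USD per kWh */
        double payback_days      = 180;   /* horizon the miner wants the card paid off in */

        double power_cost_per_day = card_power_kw * 24.0 * electricity_price;
        double profit_per_day     = revenue_per_day - power_cost_per_day;

        /* The most a rational miner would pay is whatever the card earns back
         * within the payback window; any price below that still turns a profit. */
        double max_bid = profit_per_day * payback_days;

        printf("daily profit: $%.2f  ->  max rational bid: $%.2f\n",
               profit_per_day, max_bid);
        return 0;
    }

Crank revenue_per_day up (a coin price spike) and the maximum bid scales right along with it, which is exactly why miners outbid gamers at MSRP.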
While this sounds good for nvidia, miners will also dump unprofitable cards on the used market, therefore competing against new products from nvidia. If the price of cryptocurrency crashes, as it has done before, there might be tens of thousands of used GPUs flooding the market all at once. As these would be priced to sell, and the miners have likely already turned a profit on the card, any money they can get from selling it is pure profit.
Nvidia probably doesn't want to have this sword dangling above their heads. It would be pretty bad for them to announce a new series of GPUs at the same time that the cryptocurrency market crashes, as they'd have to compete against used GPUs at fire sale prices.
The reason this happens is that AMD/Nvidia can't sell for $X to person A and $X*10 to person B depending on how much each wants the product (there are some caveats here, such as being able to price-discriminate between consumers and enterprises, but some distinction must exist within the product for that). As such, if they price all their items at $X*10, they run the risk of not being able to sell their stock - a risk they are taking for no reason, since they already priced in the profit they wanted at $X.
Additionally, the appearance of scarcity actually works in their favor, since it keeps up demand.
Tesla is doing supply/demand based pricing though, so I think NVIDIA could get away with it as well as long as it gets the communication right.
They're probably selling a dozen PlayStation or Xbox APUs for every one 6700XT dGPU. But by building a better dGPU today, they have the technology to make a better APU when it comes time to bid on the PS6 and "Xbox Series Q", or for the next generation of Ryzen "G" APUs.
Also, I am sure they share ideas and optimizations, but I think overall it is a fairly different architecture. The APUs in the consoles, as well as the ones they sell in laptops (or for OEM desktops), are chiplet-based, similar to Ryzen. So they probably have smaller dies for GPU, CPU, and I/O connected via something like Ryzen's Infinity Fabric (in contrast, Intel and Apple put their GPU and CPU on the same die). Their dGPUs currently do not have this architecture, although I have heard they want to move to it in the future, since it has worked extremely well for Ryzen.
Source: https://www.investopedia.com/how-nvidia-makes-money-4799532 https://s22.q4cdn.com/364334381/files/doc_downloads/2021/03/...
It is true that in some quarters their datacenter revenue is more than 50%, but that is because many consumers buy around Christmas in Q4; overall, consumer GPUs still make up 60% of revenue.
It is not just PC gaming; it is also all the developers of these games, plus a lot of artists, architects, mechanical engineers who do CAD models, etc. They primarily buy GeForce or Quadro graphics cards. Oh, and of course miners buy these cards too, but even when mining declined, consumer sales were still >50% of their revenue.
Datacenter sales are usually custom solutions, but they have also been focusing on their DGX boxes. I work at a competitor of Nvidia in the datacenter space, so I know this quite well. They do have 100% of the datacenter market right now (this will change - but that is an opinion), but that does not mean the majority of their revenue comes from the datacenter.
Now, as far as growth is concerned, that comes primarily from the datacenter. That's why, I guess, their marketing focuses on being an AI company. But their product is not an AI chip. It is a large vector unit designed for rendering frames and pushing pixels, no matter how much they try to repurpose it (which, evidently, they have done quite well).