In my experience, CPU/GPU power gets used up as much as possible. Increased efficiency just leads to more demand.
Assets depreciate over time. But they usually get replaced.
My 286 was replaced by a faster 386 and that by an even faster 486.
I’m sure you see a naming pattern there.
That's why "those chips are very valuable" is not necessarily a good way to value these companies - it only holds if they can extract the value from the chips before the chips become worthless.
> But they usually get replaced.
They usually produce enough income to cover depreciation, so you actually have the cash to replace them.
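As a back-of-envelope sketch (every number below is an illustrative assumption, not real market data), the question is whether rental income over the chip's useful life covers the purchase price:

    # Back-of-envelope: does a data-center GPU earn back its cost
    # before it is fully depreciated? All figures are assumptions.
    purchase_price = 30_000.0       # assumed GPU cost, USD
    rental_rate = 2.50              # assumed income per GPU-hour, USD
    utilization = 0.70              # assumed fraction of hours rented
    useful_life_years = 4           # assumed depreciation horizon

    hours = useful_life_years * 365 * 24
    lifetime_income = hours * utilization * rental_rate

    print(f"lifetime income: ${lifetime_income:,.0f}")
    print(f"purchase price:  ${purchase_price:,.0f}")
    print(f"covers cost:     {lifetime_income >= purchase_price}")

Under those assumptions the chip brings in roughly twice its cost before it's fully depreciated; cut the rental rate or utilization far enough and the math flips.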
Given that inference time will soon be extremely valuable with agents and <thinking> models, H100s may yet be worth something in a couple years.
How much was your 286 chip worth when you bought your 486?
Isn't it because we insist on only using the latest nodes from a single company for manufacture?
I don't understand why we can't use older process nodes to boost overall GPU making capacity.
Can't we have tiers of GPU availability?
Why is Nvidia not diversifying aggressively to Samsung and Intel, regardless of the process node?
Can someone explain?
I've heard packaging is also a concern, but can't you get Intel to figure that out with a large enough commitment?
TSMC was way ahead of anyone else in introducing 5nm. There's a long lead time for porting a chip to a new process from a different manufacturer.
> I don't understand why we can't use older process nodes to boost overall GPU making capacity.
> Can't we have tiers of GPU availability?
Nvidia does do this. You can get older GPUs, but for performance-sensitive applications like training or running LLMs, more performance is always better.
Higher performance needs better manufacturing processes.
That isn't true in the AI chip space (yet). And much of this isn't just about compute; it's about memory.
Chiplets have slowed the slowdown for AI chips, but look at how much progress has slowed in the gaming space to get an idea of what's coming for enterprise.
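To make the memory point concrete: for batch-size-1 LLM decoding, every weight has to be streamed from memory for each generated token, so memory bandwidth rather than raw compute is usually the ceiling. A rough roofline sketch in Python (chip specs are approximate public numbers; the model size is an assumption):

    # Roofline-style sketch: is single-stream LLM decoding
    # compute-bound or memory-bound on one GPU?
    # Chip specs are approximate; model size is an assumption.
    params = 70e9                 # assumed model size (parameters)
    bytes_per_param = 2           # fp16/bf16 weights
    flops_per_token = 2 * params  # ~2 FLOPs per parameter per token

    mem_bandwidth = 3.35e12       # H100 SXM HBM3, ~3.35 TB/s
    peak_flops = 1e15             # H100 dense bf16, ~1 PFLOP/s (approx)

    t_memory = params * bytes_per_param / mem_bandwidth   # stream weights once
    t_compute = flops_per_token / peak_flops              # do the matmuls

    print(f"memory-limited:  {1 / t_memory:8.1f} tokens/s")
    print(f"compute-limited: {1 / t_compute:8.1f} tokens/s")

At batch size 1 the memory bound comes out a couple of orders of magnitude tighter than the compute bound, which is why HBM bandwidth and capacity matter so much for inference hardware.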