not a flex
If you don't need the latest DLSS or first-class AI support, get an AMD card.
If you want it for AI, your best bet is a used 3090 for $700-800. The VRAM matters more, and that card is still faster than a 4070.
And Nvidia has enough mindshare that they could piss on consumers for the next 3 release cycles and still have more than half the market. I don't like it but it's reality.
So yeah, they left $2,600 of mine on the table, money that is now more likely to be spent on a bootleg 48GB 4090 than a 5090. And if I get that, they won't see money from me for many years, until they beat 48GB in a consumer form factor.
nVidia GPUs have basically been an inflation hedge for the past few years lol
My 4070 Ti that I bought (new, oops) this past December has appreciated 20 - 25%! At least according to what people have them listed for... no idea if anyone is actually buying them at those prices.
Of course even if I managed to sell it, everything else has gone up in that time, so it's not like I'd get to make money on the deal. Pretty wild, nonetheless!
Reminds me of the current egg "crisis".
Egg prices rose because the government ordered the culling of a hundred million chickens for fear of spreading bird flu. It may have been the right call; we'll never know.
Small difference, but important.
Here, CNN is obscuring the fact that the chickens killed were not tested or confirmed to have the flu; some birds around them might have been exposed, so they had to go too.
https://www.cnn.com/2025/01/28/business/chickens-avian-flu/i...
Well... Yes but no.
https://www.thebignewsletter.com/p/hatching-a-conspiracy-a-b...
On the one hand, this is a great situation to be in for Nvidia in terms of overall revenue.
On the other hand, this has allowed AMD to grab market share with the RX 9000 series launch, at least in the short term. So the narrow point that Geforce is sold out is decidedly not a flex.
The idea that it's hard to buy a standard Nvidia GPU in the consumer lineup is absurd.
An outcome of greed.
https://www.nvidia.com/en-us/products/workstations/dgx-spark...
DGX Spark has the same memory as AMD Strix Halo, a weaker CPU, and perhaps a stronger GPU, except that for now there is no data about the GPU beyond a hint that it might be stronger for AI inference (only FP4 speed is given). For now it is not known whether it will be better than Strix Halo for graphics.
While DGX Spark may be weaker than AMD Strix Halo for everything except AI inference, it should still be stronger than any mini-PC built with Intel Arrow Lake H or AMD Strix Point.
20 Arm cores: 10 Cortex-X925 + 10 Cortex-A725.
Anandtech's 2024 article about the Cortex-X925 and Cortex-A725 Arm CPU cores:
https://www.anandtech.com/show/21399/arm-unveils-2024-cpu-co...
So DGX Spark is equivalent to at most 10 × 3/4 + 10 × 1/4 = 10 Zen 5 cores, versus the 16 Zen 5 cores of Strix Halo.
In reality DGX Spark will be even slower, because it will have a lower clock frequency (especially on the Cortex-A725 cores) and a weaker cache hierarchy.
For irregular code that does only pointer and integer operations, AMD Strix Halo's advantage will be significantly smaller. But even then, the 10+10 cores of DGX Spark are unlikely to match more than 15 Zen 5 cores at the same clock frequency, and fewer at the real clock frequencies.
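The core-equivalence estimate above can be sketched in a few lines. The 3/4 and 1/4 weights are the commenter's rough guesses for how a Cortex-X925 and a Cortex-A725 compare to a Zen 5 core at equal clocks, not measured values:

```python
# Rough CPU core-equivalence estimate from the comment above.
# Weights are assumptions, not benchmarks: Cortex-X925 ~ 3/4 of a
# Zen 5 core, Cortex-A725 ~ 1/4, at the same clock frequency.
X925_CORES, A725_CORES = 10, 10
X925_WEIGHT, A725_WEIGHT = 3 / 4, 1 / 4

dgx_spark_zen5_equiv = X925_CORES * X925_WEIGHT + A725_CORES * A725_WEIGHT
strix_halo_zen5_cores = 16

print(dgx_spark_zen5_equiv)                          # 10.0
print(strix_halo_zen5_cores / dgx_spark_zen5_equiv)  # 1.6
```

So even under these optimistic weights, Strix Halo's 16 Zen 5 cores come out roughly 1.6x ahead before accounting for clocks and caches.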
On the other hand, as I said, DGX Spark should be faster than Intel Arrow Lake H, the best Intel can offer in a mini-PC of this size.
Could be 2x200 Gbit/s, could also be much less.
The GPU of Strix Halo is many times faster than the GPU of the biggest Orin AGX.
Strix Halo has 25% more "CUDA cores" (2560 vs. 2048), which run at more than double the clock frequency (2.9 GHz vs. 1.3 GHz) and can have double the throughput for some operations even at the same clock. Memory throughput is also 25% higher.
The GPU of DGX Spark will have to be 4 or 5 times faster than Orin's to match Strix Halo as a GPU. That is not at all certain, because NVIDIA has stressed only AI/ML applications without saying anything about graphics.
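The "4 or 5 times" figure follows from the numbers quoted above. A quick sketch, using only the core counts and clocks the comment gives:

```python
# Back-of-the-envelope GPU comparison: Strix Halo GPU vs. Orin AGX GPU,
# using the figures quoted above.
strix_cores, orin_cores = 2560, 2048
strix_clock, orin_clock = 2.9, 1.3  # GHz

# Raw shader throughput ratio, assuming equal work per core per clock.
ratio = (strix_cores / orin_cores) * (strix_clock / orin_clock)
print(round(ratio, 2))      # 2.79

# For the operations where Strix Halo has double per-clock throughput:
print(round(ratio * 2, 2))  # 5.58
```

That puts the Strix Halo GPU roughly 2.8x to 5.6x ahead of Orin depending on the operation, which is where the "4 or 5 times faster to match" requirement comes from.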
It is a profound change, but it will take time for a significant number of digital twins to be built. A digital twin is CAPEX in a way, though: you build the 3D model of your warehouse once and then use it for years to manage operations, robots, and everything else.
Compare the Disney robots at the end of 2024 and 2025
If the venue where the presentation took place also has a readily available 3D model, made with OpenUSD at a SimReady level of detail, Pixar could just download that model into their tools, train a virtual model of the robot in the virtual model of the venue before the event, and troubleshoot any issues with the program.
Isn't it mindblowing?
It's significant in the scheme of things: LLMs have gotten quite good at text chat, but for AI to do real-world things like build you a house or fix your car, it has to get good at physical robot stuff too.
How much is it expected to cost? My guess barely fits in 5 digits. It would be nice to have something in between Spark and Station, i.e. a desktop within $20K.
https://www.theverge.com/news/631957/nvidia-dgx-spark-statio...
10 Gb vs. 5 Gb Ethernet
1 HDMI 2.1 + 4 USB Type-C vs. 1 HDMI 2.1 + 2 DisplayPort + 2 USB Type-C
I think a better comparison might be the Mac Studio with the Ultra chip.