Apple really stumbled into making the perfect hardware for home inference machines. Does any hardware company come close to Apple in terms of unified memory and single machines for high throughput inference workloads? Or even any DIY build?
When it comes to previous “pro workloads,” like video rendering or software compilation, you’ve always been able to build a PC that outperforms any Apple machine at the same price point. But inference is unique because its performance scales with memory throughput, and you can’t assemble that by wiring together off-the-shelf parts in a consumer form factor.
It’s simply not possible to DIY a homelab inference server better than the M3+ for inference workloads, at anywhere close to its price point.
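To put rough numbers on the “scales with memory throughput” claim, here is a minimal back-of-envelope sketch; the bandwidth and model-size figures are illustrative assumptions, not benchmarks:

    # Rough ceiling for LLM decode speed when it's purely memory-bandwidth bound:
    # each generated token streams (roughly) all active weights through the
    # memory bus once, so tokens/s ~= bandwidth / model size in bytes.
    def decode_tokens_per_sec(bandwidth_gb_s, model_size_gb):
        return bandwidth_gb_s / model_size_gb

    # Illustrative figures (assumptions, not measurements); 70B params at 4-bit ~= 35 GB.
    configs = {
        "M3 Ultra-class unified memory (~800 GB/s)": (800, 35),
        "Dual-channel DDR5 desktop (~90 GB/s)": (90, 35),
        "24 GB GDDR6X card (~1000 GB/s)": (1000, 35),  # only counts if the model fit, which it doesn't
    }
    for name, (bw, size) in configs.items():
        print(f"{name}: ~{decode_tokens_per_sec(bw, size):.0f} tok/s ceiling")

At batch size 1 the compute side barely matters; the box with the wide memory bus wins as long as the weights actually fit behind it, which is the part you can't buy off the shelf.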
They are perfectly positioned to capitalize on the next few years of model architecture developments. No wonder they haven’t bothered working on their own foundation models… they can let the rest of the industry do their work for them, and by the time their Gemini licensing deal expires, they’ll have their pick of the best models to embed with their hardware.
Nvidia significantly outperforms Macs on diffusion inference and many other workloads. It’s not as simple as the current Mac chips being entirely better for this.
I never liked Apple hardware, but they’ve become untouchable since their shift to their own silicon for home hardware.
https://www.jeffgeerling.com/blog/2025/15-tb-vram-on-mac-stu...
That’s also why Swift nowadays has to have good Linux support: app developers want to share code with the server.
If you want to get usable speeds from very large models that haven't been quantized to death on local machines, RDMA over Thunderbolt enables that use case.
Consumer PC GPUs don't have enough RAM; enterprise GPUs that can handle the load well are obscenely expensive; Strix Halo tops out at 128 GB of RAM and is limited on Thunderbolt ports.
I have a feeling that Mac fans obsess more about being able to run large models at unusably slow speeds instead of actually using said models for anything.
For LLMs. For inference with other kinds of models, where the amount of compute needed relative to the amount of data transfer is higher, Apple is less ideal, and systems with lower memory bandwidth but more FLOPS shine. And if things like Google’s TurboQuant work out for efficient kv-cache quantization, Apple could lose a lot of that edge for LLM inference too, since that would reduce the amount of data shuffling relative to compute.
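For a feel of how much of that data shuffling is the kv-cache (and what quantizing it would buy), here is a rough sizing sketch; the dimensions are illustrative, loosely 70B-class, and not taken from any specific paper:

    # KV-cache size: per token you store one K and one V vector per layer,
    # each of length (num_kv_heads * head_dim).
    def kv_cache_bytes(layers, kv_heads, head_dim, context_len, bytes_per_elem):
        return 2 * layers * kv_heads * head_dim * context_len * bytes_per_elem

    # Assumed 70B-class dimensions: 80 layers, 8 KV heads (GQA), head_dim 128, 32k context.
    dims = dict(layers=80, kv_heads=8, head_dim=128, context_len=32768)
    fp16 = kv_cache_bytes(**dims, bytes_per_elem=2)    # ~10 GiB
    int4 = kv_cache_bytes(**dims, bytes_per_elem=0.5)  # ~2.5 GiB
    print(f"fp16 KV cache: {fp16 / 2**30:.1f} GiB, 4-bit KV cache: {int4 / 2**30:.1f} GiB")

Less cache to stream per decode step shifts the bottleneck back toward raw compute, which is exactly the regime where bandwidth-heavy, FLOPS-light hardware loses its edge.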
https://marketplace.nvidia.com/en-us/enterprise/personal-ai-...
I don't think they expect anyone to actually buy these.
Most companies looking to buy these for developers would ideally have multiple people share one machine and that sort of an arrangement works much more naturally with a managed cloud machine instead of the tower format presented here.
Confirming my hypothesis, this category of devices is more or less absent from the used market. The only DGX workstation on eBay has a GPU from 2017, several generations old.
Maybe you spend $1000 more for a PC of comparable performance, but tomorrow when you need more power you can change or add another GPU, add more RAM, add another SSD. A workstation you can keep upgrading for years, paying a small cost for each bump in performance.
An Apple machine is basically throwaway: no component inside can be upgraded. You need more RAM? Throw it away and buy a new one. You want a new GPU technology? You have to replace the whole thing. And if something inside breaks? You of course throw away the whole computer, since everything is soldered onto the mainboard.
There is then the software issue: with Apple devices you are forced to use macOS, which kind of sucks, especially for server usage. True, nowadays you can install Linux on it, but the GPU isn't that well supported, so you lose all the benefits. You're stuck with an OS that sucks, while in the PC market you have plenty of OS choices: Windows, a million Linux distributions, etc. If I need a workstation to train LLMs, why do I care about an OS with a GUI? It's only a waste of resources; I just need a thing that runs Linux that I can SSH into. I also don't get the benefit of using containers, Docker, etc.
Macs suck even on the hardware side from a server point of view: for example, you can't rack-mount them, you can't have redundant PSUs, they don't offer remote KVM capability, etc.
Or sell it, which is much easier to do with Macs because they're known quantities and not "Acer Onyx X321 Q-series Ultra".
> There is then the software issue: with Apple devices you are forced to use macOS, which kind of sucks, especially for server usage
That's a fair point. Apple would get a ton of goodwill if they released enough documentation to let Asahi keep up with new hardware. I can't imagine it would harm their ecosystem; the people who would actually run Linux are either not using Macs at all, or users like me who treat them as Unix workstations and ignore their lock-in attempts.
It isn't 2005 anymore, where RAM/CPU/etc. progress benefits from upgrading every 6 months. It's closer to 6 years to really notice.
On the upgrade path I don’t think upgrades are truly a thing these days. Aside from storage for most components by the time you get to whatever your next cycle is, it’s usually best/easiest to refresh the whole system unless you underbought the first time around.
you can just install linux?
Windows is 10x more enshittified than OSX
> An Apple machine is basically throw away: no component inside can be upgraded, you need more RAM? Throw it away and buy a new one.
Tell that to all the people rocking 5-10 year old MacBooks that still run great.
I really don't get why anybody would want that. What's the use case there?
If someone doesn't care about privacy, they can use for-profit services, because those services are basically losing money trying to corner the market.
If they care about privacy, they can rent cloud instances to set up, run, and shut down, and it will be both cheaper and faster (if they can afford it), with no upfront cost per project. This can be done with a lot of scaffolding, e.g. Mistral, HuggingFace, or without, e.g. AWS/Azure/GoogleCloud, etc. The point being that you do NOT purchase the GPU or even dedicated hardware, e.g. Google TPU, but rather rent what you actually need, and when the next gen is up, you're not stuck with the "old" gen.
So... what use case is left: somebody who is both technical, very privacy conscious, AND wants to do it offline despite having 5G or satellite connectivity pretty much anywhere?
I honestly don't get who that's for (and I did try dozens of local models, so I'm actually curious).
PS: FWIW https://pricepertoken.com might help but not sure it shows the infrastructure each rely on to compare. If you have a better link please share back.
I'm a somewhat tech-heavy guy (I compile my own kernel, use online hosting, etc).
Reading your comment, it doesn't sound appealing at all. I do almost no cloud stuff. I don't know which provider to choose. I have to compare costs. How can I trust they won't peek at my data (no, a Privacy Policy is not enough - I'd need encryption with only me having the key)? What do I do if they suddenly jack up the rates or go out of business? I suddenly need a backup strategy as well. And repeat the whole painful loop.
I'll lose a lot more time figuring this out than with a Mac Studio. I'll probably lose money too. I'll rent from one provider, get stuck, and having a busy life, sit on it a month or two before I find a fix (paying money for nothing). At least if I use the Mac Studio as my primary machine, I don't have to worry about money going to waste because I'm actually utilizing it.
And chances are, a lot of the data I'll use it with (e.g. mail) is sitting on the same machine anyway. Getting something on the cloud to work with it is yet-another-pain.
Similarly, if your use case depends on a whole lot of fast storage (e.g., the 4x NVMe to PCIe x16 bifurcation boards), well, that's also now something Apple just doesn't support. They didn't figure out something else. They didn't do super innovative engineering for it. They just walked away from those markets completely, which they're allowed to do of course. It's just not exactly inspiring or "deserves credit" worthy.
Apple removing/adding something to their product line means nothing; for all we know, they have a new version ready to be launched next month, or whatever. Unless you work at Apple and/or have any internal knowledge, this is all just guessing, not a "testament" to anything.
But yeah, right now Apple actually has price <-> performance captured a lot if you’re buying a new computer just in general.
I can live without the RAM for a couple of months to get a good price for it, especially since Apple don’t sell that model (with the RAM) any more.
Wish you a speedy recovery for your back!
There are none currently on eBay.co.uk, so I'm going to try there. I'll also try some of the reddit UK-specific groups.
As far as not being scammed - it's a really high value one-off sale, so it'll either be local pickup (and cash / bank-transfer at the time, which happens in seconds in the UK) or escrow.com (for non-eBay) with the buyer paying all the fees etc.
I'd prefer local pickup because then I have the money, the buyer can see it working, verify everything to their satisfaction etc. etc.
> Wish you a speedy recovery for your back!
Thank you :) It is a little better today. Sitting down is now tolerable for short periods... :)
(I speak as an experienced third-party seller on Amazon/Walmart/eBay.)
https://appleinsider.com/articles/26/03/06/forget-512gb-ram-...
You may want to hold on to your M3 Ultra! There's no guarantee there will be an M5 Ultra with 512 GB of RAM.
> I bet there’s gonna be a banger of a Mac Studio announced in June. Apple really stumbled into making the perfect hardware for home inference machines.
This I'm not actually as sure about. The current Studio offerings have done away with the 512GB memory option. I understand the RAM situation, but they didn't change pricing; they just discontinued it. So I'm curious to see what the next Studio is like. I'd almost love to see a Studio with even one PCIe slot, make it a bit taller, have a slide-out cover...
Your point would have been largely correct in the first half of 2025.
Now, you're going to have a much better experience with a couple of Nvidia GPUs.
This is for two reasons: reasoning models require a pretty high number of tokens per second to do anything useful, and we are seeing small quantized and distilled reasoning models working almost as well as the ones needing terabytes of memory.
Is the Mac Studio great? Yeah, for Apple users.
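A quick footprint sketch of why the small-model trend mentioned above favors a couple of consumer GPUs; parameter counts and quantization levels here are illustrative assumptions:

    # Do quantized weights fit in GPU VRAM? weights ~= params * bits / 8 bytes,
    # i.e. params_in_billions * bits / 8 gigabytes; leave ~10% headroom for
    # the KV cache and activations.
    def weight_gb(params_billion, bits):
        return params_billion * bits / 8

    for params_b, bits in [(8, 4), (14, 4), (32, 4), (70, 4)]:
        gb = weight_gb(params_b, bits)
        print(f"{params_b:>3}B @ {bits}-bit: ~{gb:4.1f} GB "
              f"(one 24 GB card: {gb < 24 * 0.9}, two cards: {gb < 48 * 0.9})")

Once the model you actually want fits in one or two cards, GDDR bandwidth plus far more compute wins; the big-unified-memory pitch only matters when the weights don't fit.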
That's a pretty good deal I would think
https://frame.work/de/de/products/desktop-diy-amd-aimax300/c...
So even if the model fits in the memory buffer on the Ryzen Max, you're still going to hit something like half the tokens/second just because the GPU will be sitting around waiting for data.
Personally, I'd rather have the Framework machine, but if running local LLMs is your main goal, the offerings from Apple are very compelling, even when you adjust for the higher price on the Apple machine.
At best we probably get a chassis to awkwardly daisy chain a bunch of Mac Studios together
Seems odd that a computer from a decade ago could have more than 1 TB more RAM than what we can buy today from Apple.
A cluster of four Apple M3 Ultra Mac Studios, by comparison, will consume nearly 1100 W under load.
The market for this use case is tiny
If the OpenAI domino falls, and I'd be happy to admit if I'm wrong, we're going to see a near catastrophic drop in prices for RAM and demand by the hyperscalers to well... scale. That massive drop will be completely and utterly OpenAI's fault for attempting to bite off more than it can chew. In order to shore up demand, we'll see NVidia and AMD start selling directly to consumers. We, developers, are consumers and drive demand at the enterprises we work for based on what keeps us both engaged and productive... the end result being: the ol' profit flywheel spinning.
Both NVidia and AMD are capable of building GPUs that absolutely wreck Apple's best. A huge reason for this is Apple needs unified memory to keep their money maker (laptops) profitable and performant; and while, it helps their profitability it also forces them into less performant solutions. If NVidia dropped a 128GB GPU with GDDR7 at $4k-- absolutely no one would be looking for a Mac for inference. My 5090 is unbelievably fast at inference even if it can't load gigantic models, and quite frankly the 6-bit quantized versions of Qwen 3.5 are fantastic, but if it could load larger open weight models I wouldn't even bother checking Apple's pricing page.
tldr; competition is as stiff as it is vicious-- Apple's "lead" in inference is only because NVidia and AMD are raking in cash selling to hyperscalers. If that cash cow goes tits up, there's no reason to assume NVidia and AMD won't definitively pull the rug out from under Apple.
None of the things people care about really get much out of "unified memory". GPUs need a lot of memory bandwidth, but CPUs generally don't and it's rare to find something which is memory bandwidth bound on a CPU that doesn't run better on a GPU to begin with. Not having to copy data between the CPU and GPU is nice on paper but again there isn't much in the way of workloads where that was a significant bottleneck.
The "weird" thing Apple is doing is using normal DDR5 with a wider-than-normal memory bus to feed their GPUs instead of using GDDR or HBM. The disadvantage of this is that it has less memory bandwidth than GDDR for the same width of the memory bus. The advantage is that normal RAM costs less than GDDR. Combined with the discrete GPU market using "amount of VRAM" as the big feature for market segmentation, a Mac with >32GB of "VRAM" ended up being interesting even if it only had half as much memory bandwidth, because it still had more than a typical PC iGPU.
The sad part is that DDR5 is the thing that doesn't need to be soldered, unlike GDDR. But then Apple solders it anyway.
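The bandwidth tradeoff described above is easy to put rough numbers on; the transfer rates below are representative assumptions rather than exact specs:

    # Peak memory bandwidth ~= bus width (bits) / 8 * transfer rate (GT/s).
    def bandwidth_gb_s(bus_bits, gt_per_s):
        return bus_bits / 8 * gt_per_s

    examples = [
        ("Desktop PC, dual-channel DDR5-6400 (128-bit)", 128, 6.4),
        ("M3 Ultra-style LPDDR5x on a 1024-bit bus", 1024, 6.4),
        ("RTX 4090-style GDDR6X on a 384-bit bus", 384, 21.0),
    ]
    for name, bits, rate in examples:
        print(f"{name}: ~{bandwidth_gb_s(bits, rate):.0f} GB/s")

So Apple lands in the same rough bandwidth ballpark as a high-end GDDR card, but with hundreds of GB sitting behind that bus instead of 24.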
the bottleneck in lots of database workloads is memory bandwidth. for example, hash join performance with a build side table that doesn't fit in L2 cache. if you analyze this workload with perf, assuming you have a well written hash join implementation, you will see something like 0.1 instructions per cycle, and the memory bandwidth will be completely maxed out.
similarly, while there have been some attempts at GPU accelerated databases, they have mostly failed exactly because the cost of moving data from the CPU to the GPU is too high to be worth it.
i wish aws and the other cloud providers would offer arm servers with apple m-series levels of memory bandwidth per core, it would be a game changer for analytical databases. i also wish they would offer local NVMe drives with reasonable bandwidth - the current offerings are terrible (https://databasearchitects.blogspot.com/2024/02/ssds-have-be...)
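A quick way to see that behaviour without writing a whole hash join: random gathers from a table much larger than cache are already bandwidth/latency bound, and a few numpy lines (array sizes here are arbitrary assumptions) reproduce it; run it under perf stat to see the low instructions-per-cycle described above:

    # Minimal sketch of the "build side doesn't fit in cache" access pattern:
    # random 8-byte gathers from a ~0.8 GB table.
    import time
    import numpy as np

    table = np.random.randint(0, 2**62, size=100_000_000, dtype=np.int64)
    probes = np.random.randint(0, table.size, size=20_000_000, dtype=np.int64)

    start = time.perf_counter()
    hits = table[probes]  # one effectively random memory access per probe
    elapsed = time.perf_counter() - start

    # Each probe touches at least one 64-byte cache line.
    print(f"{probes.size / elapsed / 1e6:.0f} M probes/s, "
          f"~{probes.size * 64 / elapsed / 2**30:.1f} GiB/s of cache-line traffic")
    print(int(hits[0]))  # keep the result live so the gather isn't skipped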
Apple needs to solder it because they are attaching it directly to the SoC to minimize lead length, and that is part of how they are able to get that bandwidth.
Isn't that also because that's the world we have optimized workloads for?
If the common hardware had unified memory, software would have exploited that, I imagine. Hardware and software are in a co-evolutionary loop.
These companies always try to preserve price segmentation, so I don’t have high hopes they’d actually do that. Consumer machines still get artificially held back on basic things like ECC memory, after all...
https://docs.nvidia.com/cuda/cuda-programming-guide/04-speci...
Can we also stop giving Apple some prize for unified memory?
It was the way of doing graphics programming on home computers, consoles and arcades, before dedicated 3D cards became a thing on PC and UNIX workstations.
Apple are winning a small battle for a market that they aren’t very good in. If you compare the performance of a 3090 and above vs any Apple hardware you would be insane to go with the Apple hardware.
When I hear someone say this it’s akin to hearing someone say Macs are good for gaming. It’s such a whiplash from what I know to be reality.
Or another jarring statement - Sam Altman saying Mario has an amazing story in that interview with Elon Musk. Mario has basically the minimum possible story to get you to move the analogue sticks. Few games have less story than Mario. Yet Sam called it amazing.
It’s a statement from someone who just doesn’t even understand the first thing about what they are talking about.
Sorry for the mini rant. I just keep hearing this apple thing over and over and it’s nonsense.
For me, aesthetics and size are important. That workstation on your desk should justify its presence, not just exist as some hulking box.
When Apple released the Mac Studio, it made perfect sense from a form-factor point-of-view. The internal expansion slots in the M2 Mac Pro didn't make any sense. It was like a bag of potato chips - mostly air. And far too big and ugly to be part of my work area! I'm surprised that Apple didn't discontinue it sooner.
Even so, the ARM Mac Pro felt more like a halo car rather than a workhorse. The ARM Mac Pro may have been more compelling had it supported GPUs. Without this support, the price premium of the Mac Pro over the Mac Studio was too great to justify purchasing the Pro for many people, unless they absolutely needed internal expansion.
I’d love a user-upgradable Mac like my 2013 Mac Pro, but it’s clear that Apple has long moved on with its ARM Macs. I’ve moved on to the PC ecosystem. On one hand ARM Macs are quite powerful and energy-efficient, but on the other hand they’re very expensive for non-base RAM and storage configurations, though with today’s crazy prices for DDR5 RAM and NVMe SSDs, Apple’s prices for upgrades don’t look that bad by comparison.
Between cloud computing and server racks, is this still a real niche?
Opinions are my own obvs.
SR-IOV is just that? and is well supported by both Windows and Linux.
Whose else would they be?
They're trying to make it very clear they're not speaking on behalf of Apple Inc, despite having worked (or working) there.
Big companies like to give employees some minimal "media training", which mostly amounts to "do not speak for the company, do not say anything that might even slightly sound like you're speaking for the company".
> > Opinions are my own obvs.
> Whose else would they be?
On the internet? Often the opinions of others they see getting upvotes.
takes a look at the user profile
Oh, they are a journalist/writer for a big name outfit
The allure of the Mac Pro is that you could dodge the Apple Tax by loading it up with RAM and compute accelerators Apple couldn't mark up. Well, Apple Silicon works against all of that. The hardware fabric and PCIe controller specifically prohibit mapping PCIe device memory as memory[0], which means no GPU driver ever will work with it. Not even in Asahi Linux. And the RAM is soldered in for performance. An Ultra class chip has like 16 memory channels, which even in a 1-DIMM per channel routing would have trace lengths long enough to bottleneck operating frequency.
The only thing the socketed RAM Mac Pros could legitimately do that wasn't a way to circumvent Apple's pricing structure was take terabytes of memory - something that requires special memory types that Apple's memory controller IP likely does not support. Intel put in the engineering for it in Xeon and Apple got it for free before jumping ship.
Even then, all of this has gone completely backwards. Commodity DRAM is insanely expensive now and Apple's royalty-bearing RAM prices are actually reasonable in comparison. So there's no benefit to modularity anymore. Actually, it's a detriment, because price-discovery-enforcing scalpers can rip RAM out of perfectly working computers and resell the RAM. It's way harder to scalp RAM that's soldered on the board.
[0] In violation of ARM spec, even!
CAMM fixes this, right?
> Actually, it's a detriment, because price-discovery-enforcing scalpers can rip RAM out of perfectly working computers and resell the RAM. It's way harder to scalp RAM that's soldered on the board.
Scalping isn't a thing unless you were selling below the market price to begin with which, even with the higher prices, Apple isn't doing and would have no real reason to do.
Notice that in real life it only really happens with concert tickets, and that's because of the scam sandwich that is Ticketmaster.
It's dumb from a practical perspective. But I keep hoping they'll vertically compress their trashcan design so it looks like their Cupertino headquarters.
"It does the work you want it to do" is not enough to justify its presence?
Under your desk, right? Right?!
Nothing as swish looking as a Mac Pro though, it's a plain black Lian Li behemoth from the late 00s.
I have a Lian Li anniversary edition snail case and I don’t think any moveable desk could hold it.
Here's a good video of how it looks: https://www.youtube.com/watch?v=kIQINCWMd6I&list=PLi2i2YhL6o... (at 1:40 Neil Parfitt shows his Mac audio setup before and after).
I don't know if such a solution exists right now, but I'm thinking there's a fair chance it will soon as the Mac Pro disappearing creates a demand for something like it.
This is a big reason why things like eGPUs kinda suck. Thunderbolt is fast for external I/O, but it's quite pathetic compared to internal PCI-E.
It really could have been a bigger market for them than even the iPhone.
Intel should have shipped their GPUs with much more VRAM from day one. If they had done this, they'd have carved out a massive niche and much more market share, and it would have been trivially simple to do.
AMD should have improved their tools and software, etc.
Apple should have done as you say.
Google had nigh on a decade to boost TPU production, and they're still somehow behind the curve.
Such a lack of vision. And thus Nvidia is, now quite durably, the most valuable company in the world. Imagine telling that to a time traveler from 2018.
And as of now I do believe AMD is in the second strongest position in the datacenter space after Nvidia, ahead of even Google.
Nvidia is the most valuable company in the world right up until the AI bubble pops. Which, while it's hard to nail down when, is going to happen. I wouldn't call their position durable at all.
For all the faults of them leaning in hard on these things for stock market and personal gains, Nvidia still has some of the best quality products around. That is their saving grace.
They will not be the world's most valuable company once the bubble pops, and will probably never get back there again, but they will continue to be a decent enough business. I just want them to go back to talking about graphics more than AI; that would be nice.
As hand-writing code is rapidly going out of fashion this year, it seems likely AI is coming for most knowledge work next.
And who is to say that manual labor is safe for long?
For money, probably.
Apple is presumably leaving a lot of money on the table by not trying to sell Apple Silicon for AI inference and training. They're the only ones who can attach reasonably large GPUs (M3 Ultra) to very large amounts of cheaper memory (512 GB of unified LPDDR per GPU). Apple could e.g. sell server SKUs of Mac Studios; heck, they could sell M3 Ultra chips on PCIe cards. And they could further develop Apple Silicon in that direction. Presumably they would be seen as a very legit competitor to Nvidia that way, perhaps more so than Intel and AMD. I'd assume that in the current climate this would be extremely lucrative.
Now, actually doing this would disrupt Apple's own supply chain as well as force it to spend significant internal resources and cultural change for this kind of product line. There's a good argument to be made it would disproportionally negatively affect its Mac business, so this would be a very risky move.
But given that AI hardware is likely much higher margin than the Mac business an argument could probably (sadly) be made that it'd be lucrative for them to try it. I personally don't think Apple is inclined to take this kind of risk to jeopardize the Mac, but I'm sure some people at Apple have considered this.
From inside news: They were not breaking even on their existing GPUs. The strategy was to take a loss just to have a presence in the space.
But for some reason Apple thought the sound recording engineer or the video editor market was more important... like, WTF dude? Have some vision at least!
Even if Apple had an amazing GPU for AI it wouldn’t matter hugely - local inference hasn’t taken off yet and cloud inference and training all uses servers where Apple has no market share and wasn’t going to get it since people had already built all the stacks around CUDA before Apple could even have awoken to that.
Sound recording engineers and video editors will not disappear after the AI bubble bursts, and Apple is wise to keep that market. Bursting the AI bubble will not make AI disappear, it will just end the crazy cashflows we are seeing now. And in that regard, with the capabilities of their hardware, Apple is in a pretty good spot I think.
$1t backlog in orders in next 2 years.
Remember when a $1 billion valuation used to be a big thing? That is nothing compared with nowadays.
They want to be able to sell handsets, desktops and laptops to their customer base.
Pursing a product line that would consume the finite amount of silicon manufacturing resources away from that user base would be corporate suicide.
Even nvidia has all but dropped support for its traditional gaming customer base to satisfy its new strategy.
At any rate, the local inference capabilities are only going to get cheaper and more accessible over the coming years, and Apple are probably better placed than anyone to make it happen.
At some point, they will converge and an inflection for local LLMs will happen. Local LLMs will never be as smart or fast as cloud LLMs but they will be very useful for lower value tasks.
Obviously Siri from WWDC two years ago was a disaster for Apple. Other than that they seem to have done pretty well navigating the new LLM world. I do think they would benefit from having their own SOTA LLM, but I don’t think it is necessary for them. My mental model for LLMs and Apple is that they are similar to GarageBand - “Now everyone can play an instrument” becomes “now anyone can make an app”. Apple owns the interface to the user (I don’t see anyone making nicer-to-use consumer hardware) and can use whatever stack in the background to deliver the technical features they decide to.
Apple is counting on something else: model shrink. Everyone is now looking at "how do we make these smaller".
At some point a beefy Mac Studio and the "right sized" model is going to be what people want. Apple dumped a 4 pack of them in the hands of a lot of tech influencers a few months back and they were fairly interesting (expensive tho).
The most powerful AI interactions I've had involved giving a model a task and then fucking off. At that point, I don't actually care if it takes 5 minutes or an hour. I've queued up a list of background tasks it can work on, and that I can circle back to when I have time. In that context, smaller isn't even the virtue at hand; user patience is. Having a machine that works on my bullshit questions and modelling projects at one tenth the speed of a datacentre could still work out to being a good deal even before considering the privacy and lock-in problems.
It's pretty clear that this isn't going to happen any time soon, if ever. You can't shrink the models without destroying their coherence, and this is a consistently robust observation across the board.
Smaller models have gotten much more powerful over the last 2 years. Qwen 3.5 is one example of this. The cost/compute requirements of running the same level of intelligence are going down.
Give every iPhone family an in-house Siri that will deal with canceling services and pursuing refunds.
Your customer screw-up results in your site getting an agent-driven DDoS on its CS department till you give in.
Siri: "Hey User, here's your daily update, I see you haven't been to the gym, would you like me to harass their customer service department till they let you out of their onerous contract?"
For multi-gpu you can network multiple Macs at high speed now. Their biggest disadvantage to Nvidia right now is that no one wants to do kernel authoring in Metal. AMD learned that the hard way when they gave up on OpenCL and built HIP.
Inference has never been an issue for M series, and MLX just ramped it up further.
You can do training on the latest MBPs, although for any serious models you're going to the cloud anyway.
What are they wasting, exactly?
The G5 was the thing. And companies were buying G5s and other Macs like that all the time, because you could actually extend them with video cards and some special equipment.
But now we have M chips. You don't need video cards for M chips. You kinda do, but truthfully, it's cheaper to buy a beefier Mac than to install a video card.
The Pro was a great thing for designers and video editors, those freaks who need to color-calibrate monitors. And right now even a Mini works just fine for that.
And as for extensions - gone are the days of PCIe. Audio cards and other specialized equipment work and live just fine on USB-C and Thunderbolt.
I remember how many months I spent trying to make a Creative Labs Sound Blaster work on my 486 computer. At that time you had to have a card to extend your system. Right now I'm using a Scarlett 2i2 from Focusrite. It works over USB-C with my iPhone, iPad and Mac. DJI's mics work just as well.
Damn, you can buy an oscilloscope that works over USB-C or the network.
It's not the Mac's or Apple's fault. We actually live in an age where systems are quite independent and do not require direct installations.
Grumble grumble. Well, there used to be more than audio cards, back before the first time Apple canceled the Mac Pro and released the 2013 Studio^H^H Trash Can^H^H Mac Pro.
Then everyone stopped writing Mac drivers because why bother. So when they brought the PCIe Pro back in 2019, there wasn't much to put in it besides a few Radeon cards that Apple commissioned.
The nice thing about PCIe is the low latency, so you can build all sorts of fun data acquisition and real time control applications. It's also much cheaper because you don't need multi-gigabit SERDES that can drive a 1m line. That's why LabVIEW (originally a Mac exclusive) and NI-DAQ no longer exist on Mac.
USB-C oscilloscopes work because the peripheral contains all the hardware, so it doesn't particularly matter that the device->host latency is high. They also don't require much bandwidth because triggering happens inside the peripheral, and only the triggered waveform record is sent a few dozen times per second.
> It's not the Mac's or Apple's fault. We actually live in an age where systems are quite independent and do not require direct installations.
It is, and we don't. Maybe you don't notice it, but others do.
Yeah, that's basically the way accessories have gone. Powerful MCUs and SoCs have gotten cheap enough to make it viable. Makes me a little sad though; I liked having low-latency "GPIOs" straight to software running on my PC (but I'm thinking as far back as the parallel port... love how simple that was).
With USB4/TB you can get quite far in both latency and throughput. Actually, there are network adapters with a TB connection that are just a TB-to-PCIe adapter and a PCIe network card.
My GPU, NVMe drives and motherboard might disagree.
…so what do you actually need PCIe for?
Thunderbolt is also too slow for higher-end networks. A single port is already insufficient for 100-gigabit speeds.
I/O expansion
Networking
This is a wild and very wrong take.
Just about every single consumer computer shipped today uses PCIe. If you were referring to only the physical PCIe slots, that's wrong too: the vast majority of desktop computers, servers, and workstations shipped in 2025 had physical PCIe slots (the only ones that didn't were Macs and certain mini-PCs).
The 2023 Mac Pro was dead on arrival because Apple doesn't let you use PCIe GPUs in their systems.
That's what happens when you quote only part of a statement. Taken in context, it was referring to a very real decline in expansion cards. Now that NICs (for WiFi) and SSDs have been moved into their own compact specialized slots, and Ethernet and audio have been integrated onto the motherboard itself as standard for decades, the regular PCIe slots are vestigial. They simply are not widely used anymore for expanding a PC with a variety of peripherals (that era was already mostly over by the transition from 32-bit PCI to PCIe).
Across all desktop PCs, the most common number of slots filled is one (a single GPU), and the average is surely less than one (systems using zero slots and relying on integrated graphics must greatly outnumber systems using more than one slot).
Even GPUs themselves are a horrible argument in favor of PCIe slots. The form factor is wildly unsuitable for a high-power compute accelerator, because it's ultimately derived from a 1980s form factor that prioritized total PCB area above all else, and made zero provisions for cards needing a heatsink and fan(s).
Unless the one it comes with isn't as fast as the one you want, or they didn't integrate one at all, or you need more than one.
> Across all desktop PCs, the most common number of slots filled is one (a single GPU), and the average is surely less than one (systems using zero slots and relying on integrated graphics must greatly outnumber systems using more than one slot).
There is an advantage in having an empty slot because then you can put something in it.
Your SSD gets full, do you want to buy one which is twice as big and then pay twice as much and screw around transferring everything, or do you want to just add a second one? But then you need an empty slot.
You bought a machine with an iGPU and the CPU is fine but the iGPU isn't cutting it anymore. Easy to add a discrete GPU if you have somewhere to put it.
The time has come to replace your machine. Now you have to transfer your 10TB of junk once. You don't need 100Gbps ethernet 99% of the time, but using the builtin gigabit ethernet for this is more than 24 hours of waiting. A pair of 100Gbps cards cuts that >24 hours down to ~15 minutes. If the old and new machines have an empty slot.
I don't see it disappearing, at most we'll get PCIe 6/7/etc.
I don't understand how this is a response to anything I said.
With USB3 you have 94 i/o…
For years PCI has not been mandatory for audio. UAD, Apogee, RME and other high-end brands will push you toward their external interfaces, or even only provide them as USB devices… even Thunderbolt is not needed here.
And that’s been the case for a while! My Fireface UC from 15 years ago can deal with 16 channels at 96 kHz with a 256-sample buffer. On PC and Mac.
Thunderbolt is external PCIe.
Thunderbolt can kinda-sorta mimic PCIe, but it needs to chop up the PCIe signal into smaller packets, transmit them, and then put them back together, and this introduces a big jump in latency, even though bandwidth can be rather high.
For many applications this isn't a big deal, but for others it causes major problems (gaming being the big one, but really anything that's latency sensitive is going to suffer a lot).
The M5 generation Pro and Max chips have moved to a chiplet based architecture, with all the CPU cores on one chiplet, and all the GPU cores on another.
https://www.wikipedia.org/wiki/Apple_M5
So what will the M5 Ultra look like?
If you integrate two CPU chiplets and two GPU chiplets, you're looking at 36 CPU cores, 80 GPU cores, and 1228 GB/s of memory bandwidth.
Still, there are a few things which could be improved relative to the current Studio. First, the ability to easily clean the internals of dust: you should be able to just lift the lid and clean the computer. Also, it would be great to have one Mac into which you could just plug a bunch of NVMe disks.
On the other side, they might replace the Mac Pro with a rack-mountable machine as the demand for ARM servers in the cloud rises.
• Multiple hard drive bays for easy swapping of disks, with a side panel that the user could open and close
• Expandable RAM
• Lots of ports, including audio
• The tower took up no desktop space
• It was relatively affordable, starting at $2500. Many software developers had one. (The 2019 and later Mac Pros were insanely expensive, starting at $6000.)
The Mac Studio is affordable, but it lacks those other features. It has more ports than other Macs but fewer in number and kind than the old Mac Pro, because the Mac Studio is a pointlessly small desktop instead of floor tower.
I knew it was all over when third party companies had to develop the necessarily-awkward rack mount kits for those contraptions. If Apple actually cared about or understood their pro customers, they would have built a first party solution for their needs. Like sell an actual rack-mount computer again—the horror!
Instead, an editing suite got what looked like my bathroom wastebasket.
Then they said they couldn't upgrade the components because of heat. Everyone knows that wasn't true.
By the time Apple said they had issues with it in 2017, AMD were offering 14nm GCN4 and 5 graphics (Polaris and Vega) compared to the 28nm GCN1 graphics in the FirePro range. Intel had moved from Ivy Bridge to Skylake for Xeons. And if they wanted to be really bold (doubtful, as the move to ARM was coming) then the 1st gen Epyc was on the market too.
Moore's Law didn't stop applying for 6 years. They had options and chose to abandon their flagship product (and most loyal customers) instead.
If you take one apart you'll see why, it's not the case that you could have ever swapped around the components to make it dual-CPU instead; it really was "dual GPU or bust".
Somewhat ironically, in today's ML ecosystem, that architecture would probably do great. Though I doubt it could possibly do better than what the M-series is doing by itself using unified memory.
At which point I'll decide whether to replace my Mac Pro with a Mac Studio or a Linux workstation; honestly, I'm about 60/40 leaning towards Linux at this point, in which case I'd also buy a lower-end Mac, probably a MacBook Air.
https://macdailynews.com/2012/06/12/rush-limbaugh-okay-apple...
Although to be fair the latest two eps have been refreshingly technical
This Mac Pro was about resetting and giving a clear signal that Apple was willing to invest in the Mac far more than it was about ‘slots’.
Today, Mac hardware is the best it has ever been, and no one is reasonably questioning Apple’s commitment to Mac hardware.
So it makes sense for the Mac Pro to make a graceful exit.
But it’s a great product, does fulfill the bulk of needs for most “Pro” desktop use cases, and what’s left isn’t interesting or profitable enough to sustain a separate product line.
It'd be nice if the people in charge of the software would get the message.
It had many hardware upgrades over the years - upgraded CPUs, 128GB RAM, 4TB NVME storage, a modern AMD GPU, USB3/c, thunderbolt, etc
The only reason it got replaced is because it became too much of a PITA to keep modern OSX running on it (via OCLP)
Replaced it with an M4 Max Mac Studio, which is a nice, faster machine, but with no ability to upgrade anything and much worse hardware resale value on M-series, I'll have to replace it in 2-3 years.
Absolutely recommend you purchase the 4-bay Terramaster external enclosure - it gives you four SATA slots that are hot-swappable (unlike the Mac Pro's). 10 Gbps via USB-C.
If you're self employed, the cost of equipment and depreciation make hanging on to that 2009 system even more of a poor choice.
If you were still using a 2009 system I don't see why you'd "have to replace in 2-3 years."
The most notable feature was that there were Mac-specific graphics cards, and you could also run PC graphics cards (without a nice boot screen). They had a 1.4 kW power supply I believe, and there was extra PCIe power for higher-end graphics cards. You could upgrade the memory and add up to 6 or more SATA hard disks (2 in the DVD slot). You could run Windows, dual-booting if you wanted, and Apple supported the drivers.
The 2013 was kind of a joke. Small and quiet, but expansion was minimal.
The 2019 looked beefy, but the expansion was more like a cash register for Apple, not really democratic. There were 3rd-party SATA hard disk solutions.
The 2023 model was basically a joke. I think maybe the PCIe slots were OK for NVMe cards, not a lot else (unless Apple made it).
Nowadays an Apple computer is more like an iPhone: Apple would prefer if everything was welded shut.
Funny timing to say that
https://www.ifixit.com/News/116152/macbook-neo-is-the-most-r...
The only real drawback that I’ve experienced with the Mac Pro has been the lack of support for large language models on the AMD GPU due to Apple's lacklustre Metal drivers, but I’ve been working with a couple of other developers to port a MoltenVK translation layer to Ollama that enables LLMs on the GPU. We’re trying to get it on the main branch since testing has gone well.
One thing a lot of commenters in this thread are overlooking is that this is the death knell for repairable and upgradable computing on the Mac, which is super disappointing.
Apple's new "Pro" definition seems more like "Prosumer".
None of the Apple Silicon hardware can seemingly justify this form factor, though. The memory isn't serviceable, PCIe devices aren't really supported, the PSU doesn't need much space, and the cooling can be handled with mobile-tier hardware. Apple's migration path is "my way or the highway" for Mac Pro owners.
One of those with an M* Ultra, and some sort of Thunderbolt storage expansion would probably cover most of the Pro's use cases. And Apple probably doesn't want to deal with anything more exotic than those.
https://www.youtube.com/watch?v=x4_RsUxRjKU or something
It seemed like the guts of the Mac Pro were essentially shoved inside of a box and stuck in the corner of the tower. It would seem like they could decouple it and sell a box that pro users could load cards into (like other companies do for eGPUs). It wouldn’t feel like a very Apple-like setup, but it would function and allow Apple to focus where they want to focus without simply leaving those users behind.
I suppose the other option would be to dispense with the smoke and mirrors and let people slot a Mac Studio right into the Mac Pro tower, so it could be upgraded independently of the tower.
The alternative is people leave the platform or end up with a bunch of Thunderbolt spaghetti. Neither of which seem ideal.
I always hoped we’d get a consumer version of what they have internally - 10 or 20 or more Apple Silicon chips for 1000 cores or so.
That's a cute way of saying that GPUs aren't supported.
Apple tried before to push everything out into external PCIe enclosures and people hated it. Maybe this'll go differently this time, the Mac Studio is certainly a much more compelling offering than the trashcan Mac Pro. But I think this is still a shitty and painful situation for a lot of specific users.
> But once again, Steve Jobs objected...
> He just left it in there and no one bothered to mention it to Steve
I'm still not going to use Windows or Linux. I don't want to be an IT guy on the side just to keep Linux machines working. This may not be obvious to some unless you try to use printers and scanners that are more than 5 years old and want them to be on the network. And you don't install virtualization tools like VMware that require compiling and loading kernel drivers, which end up being incompatible with new OS releases... etc.
Windows is just too much of a painful acceptance of mediocrity and apathy in product design for me.
For the SSD, no. For the memory, yes. The memory lives on the same chip as the CPU and the GPU, it's even more tightly bound than just being soldered on. The memory being there has legitimate technical benefits that make it much easier/cheaper for them to reach the extremely high memory bandwidths that they do.
Same reason as a) GDDR on dGPUs (I think I read somewhere that GDDR is very much like regular DDR, just with much tighter paths and thus soldered in) and b) Framework Desktop (performance would reportedly halve if RAM were not soldered)
SSD reasons I seem to recall are architectural for security: some parts (controller?) that usually sit on a NVMe SSD are embedded in the SoC next to (or inside?) the secure enclave processor or whatever the equivalent of the T2 thing is in Mx chips, so what you'd swap would be a bank of raw storage chips which don't match the controller.
The soldering does serve a purpose though, the shorter traces allow for better signal integrity at higher speeds. This isn't something special about what Apple is doing though, Intel and AMD are doing the exact same thing with the exact same LPDDR5 chips on their respective APUs.
HBM is still almost purely reserved for datacentre GPUs.
No. There is a reason for it but no, it's just soldered on the same carrier board as the APU, in order to be really close to it. Apple could have used a form factor like CAMM2 and it would have worked the same, be it at slightly higher cost. The reason is simply to kill upgrade options and cut manufacturing costs - same as for any other soldered ram.
Well, not exactly. Apple’s desktop Macs actually all have modular SSD storage, and third parties sell upgrade kits. And it’s not like Thunderbolt is a slouch as far as expandability.
I can see why the Mac Pro is gone. Yeah, it has PCIe slots…that I don’t really think anyone is using. It’s not like you can drop an RTX 5090 in there.
The latest Mac Pro didn’t have upgradable memory so it wasn’t much different than a Mac Studio with a bunch of empty space inside.
The Mac Studio is very obviously a better buy for someone looking for a system like that. It’s just hard to imagine who the Mac Pro is for at its pricing and size.
I think what happened is that the Studio totally cannibalized Mac Pro sales.
Every PCIe card I have requires its own $150+ PCIe-to-Thunderbolt dock and its own picoPSU plus 12 V power supply.
External PCIe is convenient for portables. Not for desktops. It's a piss-poor replacement for a proper PCIe slot.
We should demand better of our computer-manufacturing overlords.
> It’s not like you can drop an RTX 5090 in there.
Why not? Oh, right, because Apple won't let you. Sad.
It was exactly as modular as the Mac mini and Mac Studio.
The only difference is that it had some PCIe slots that basically had no use since you couldn’t throw a GPU in there, and because thunderbolt 5 exists.
Yeah, sure, there were some niche PCIe things that two people probably used. Hence the discontinuation.
I am an ex-Mac user, I own a Framework. Don’t worry, you’re preaching to the choir.
"Modular" does not mean that it's serviceable, repairable or upgradable. Apple's refusal to adopt basic M.2 spec is a pretty glaring example of that.
I get the ideological angle, but in practical terms that's not a barrier: https://www.aliexpress.us/w/wholesale-apple-ssd-adapter.html...
Gonna miss it, though. If they had reduced the add-in card slots to something more reasonable, lowered the entry price, and given us multi-socket options for the CPU (2x M# Ultras? 4x?), it could have been an interesting HPC or server box - though they’ve long since moved away from that in software land, so that was always but a fantasy.
At least the Mac Studio and Minis are cute little boxes.
What I find fascinating is how people pay so much for Apple-related products. Perhaps the quality requires a premium (I don't share that opinion, but for the sake of thinking, let's have it as an option here), but this seems more deliberate milking by Apple with such price tags. People must love being milked it seems.
https://www.macrumors.com/2026/03/26/mac-pro-wheels-kit-disc...
To me, this discontinuation is less about the product and more about making a statement. The M2 Mac Pro was a dysfunctional product of an internal conflict of interests, but it cast a ray of hope that the M series would develop past the current scaled-up-but-still-disposable phone/embedded SoCs and that Apple had some interest in bringing them closer to the offerings of the competitors from the workstation/server market. Now, with this move, they've made it clear that they would rather give up an entire segment than make at least a narrow part of their ecosystem open enough for the PCIe slots of the Mac Pro to find any serious use.
The Mac Pro was at the same time bizarrely over the top while also weirdly limited in some ways - while also being way too expensive…
As for not having a Pro or otherwise expandable system? It’s shit. They make several variations of their chips, and I don’t think it would hurt them to make an SoP for a socket, put a giant cooling system in it, and give it 10 or 12 PCIe slots. As for what would go in those slots? Make this beast rack mountable and people would toss better network cards, sound/video output or capture, storage controllers, and all kinds of other things in there. A key here would be to not charge so much just because they can. Make the price reasonable.
The Xserve has been dead for 15 years now, and it was never tremendously amazing (though it was nice kit).
Apple apparently has some sort of "in-house" xserve-like thing they don't sell; but turning that into a product would likely be more useful than a Mac Pro, unless they add NUMA or some other way of allowing an M5 to access racks and racks of DIMMs.
Would be a killer local AI setup...for $40k.
I bought a GPU maybe a decade ago for this, and it's not worth the hassle (for me at least), but a nice out-of-the box solution, I would pay for.
If they had done more with NUMA in the M series maybe you could have a Mac Pro with M5 Ultras that can take a number of M5 "daughter cards" that do something useful.
I don't find the external GPU enclosures for the Mac Studio that appealing to use.
Believe t-shirts at WWDC were not enough.
Thus the workstation market joins OS X Server.
They made a shirt. It was fun.
Hardly workstation class.
If you bought the $35k Mac Pro in 2023 when it was released and have a $50/hr rate it's been paid off for about 30 months. So as of today those owners probably aren't too broken hearted. They'll likely get at least another three years out of them.
People buying $35k Mac Pros probably paid them off after a single contract. So they've just been making money rather than costing money.
If you spend $35k on a nice computer, and then earn $35k from doing some work using it, that doesn't mean that buying the computer has paid for itself unless the computer is solely responsible for that income. It probably isn't.
It's not necessarily even true that after doing that work it's "paid for", in the sense that getting the $35k income means that you were able to afford the $35k computer: that only follows if you didn't need any of that income for other luxuries, such as food and shelter.
If you're earning $50/hour, 40hr/week then what you've done after 17.5 weeks is earned enough to buy that $35k computer. Assuming you don't need any of that money for anything else, like food and shelter.
If the fancy computer helps you get that income then of course it's perfectly legit to estimate how much difference it makes and decide it pays for itself, but it's not as simple as comparing the price of the computer with your total income.
Regardless of how much it contributes, if you have plenty of money then it's also perfectly legit to say "I can comfortably afford this and I want it so I'll buy it" but, again, it's not as simple as comparing the price of the computer with your total income.
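For what it's worth, the disagreement is easy to parameterize; the only number below that isn't from the thread is the attributable fraction, which is the assumption you have to pick yourself:

    # Weeks until a machine "pays for itself" if you only credit it with the
    # fraction of your billable output it actually enables.
    def payback_weeks(price, rate_per_hr, hrs_per_week, attributable_fraction):
        return price / (rate_per_hr * hrs_per_week * attributable_fraction)

    price, rate, hours = 35_000, 50, 40
    for frac in (1.0, 0.25, 0.05):
        print(f"crediting {frac:.0%} of income: {payback_weeks(price, rate, hours, frac):.1f} weeks")
    # 100% -> 17.5 weeks, 25% -> 70 weeks, 5% -> 350 weeks (~7 years)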
Are you working 996 weeks or something?
At standard 40h work-week the math works out to 8.75 weeks to "pay for itself".
Mac OS is a horrible experience.
(but yes, Apple seems happy to ship buggy software these days)
Apple's hardware is great, but without choice of software, they need to provide an amazing default option.
I like Apple when they make pretty stuff. Especially small, shiny, and quiet.
The money's all in selling phones to teen girls now, and taking their mafia cut of app store sales.
They replaced it with the Mac Neo. Did you notice the wonderful build quality, the accessible price, and that everyone is buying it? And it has USB: the U is for universal.