My daily workhorse is an M1 Pro that I purchased on release day. It has been one of the best tech purchases I have made; even now it handles anything I throw at it. My daily workload regularly has an Android emulator, an iOS simulator, and a number of Docker containers running simultaneously, and I never hear the fans. Battery life has taken a bit of a hit, but it is still very respectable.
I wanted a new personal laptop and was debating between a MacBook Air and a Framework 13 with Linux. I wanted to lean into learning something new, so I went with the Framework, and I must admit I am regretting it a bit.
The M1 was released back in 2020, and I bought the Ryzen AI 340, one of AMD's newest 2025 chips, so AMD has had five years of extra development; I had expected them to get close to the M1 in terms of battery efficiency and thermals.
The Ryzen uses TSMC's N4P process, versus the M1's older N5 process. I managed to find a TSMC press release showing the performance/efficiency gains from the newer process: “When compared to N5, N4P offers users a reported +11% performance boost or a 22% reduction in power consumption. Beyond that, N4P can offer users a 6% increase in transistor density over N5”
I am sorely disappointed; using the Framework feels like using an older Intel-based Mac. If I open too many tabs in Chrome I can feel the bottom of the laptop getting hot; open a YouTube video and the fans will often spin up.
Why haven’t AMD/Intel been able to catch up? Is x86 just not able to keep up with the ARM architecture? When can we expect an x86 laptop chip to match the M1 in efficiency/thermals?!
To be fair, I haven’t tried Windows on the Framework yet; it might be my Linux setup being inefficient.
Cheers, Stephen
If you fully load the CPU and calculate how much energy an AI 340 needs to perform a fixed workload and compare that to an M1, you'll probably find similar results, but that only matters for your battery life if you're doing things like Blender renders, big compiles, or gaming.
Take for example this battery life gaming benchmark for an M1 Air: https://www.youtube.com/watch?v=jYSMfRKsmOU. 2.5 hours is about what you'd expect from an x86 laptop, possibly even worse than the fw13 you're comparing here. But turn down the settings so that the M1 CPU and GPU are mostly idle, and bam you get 10+ hours.
Another example would be a ~5-year-old mobile Qualcomm chip. It's on a worse process node than the AMD AI 340, much, much slower, with significantly worse performance per watt, and yet it barely gets hot and sips power.
All that to say: M1 is pretty fast, but the reason the battery life is better has to do with everything other than the CPU cores. That's what AMD and Intel are missing.
> If I open too many tabs in Chrome I can feel the bottom of the laptop getting hot, open a YouTube video and the fans will often spin up.
It's a fairly common issue on Linux to be missing hardware acceleration, especially for video decoding. I've had to enable GPU video decoding on my FW16 and haven't noticed the fans on YouTube.
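If you want to sanity-check whether decode is actually happening in hardware, here's a rough sketch (assumes a Mesa/VA-API setup; package names and browser flags vary by distro and release, so treat the specifics as examples):

    # Does the driver expose hardware decoders at all? (vainfo is in libva-utils)
    vainfo | grep -iE 'h264|hevc|vp9|av1'

    # Chromium/Chrome: open chrome://gpu and look for "Video Decode: Hardware accelerated".
    # If it says software only, launching with the VA-API feature enabled sometimes helps,
    # but the exact feature name changes between releases:
    google-chrome --enable-features=VaapiVideoDecodeLinuxGL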
Apple spent years incrementally improving the efficiency and performance of their chips for phones. Intel and AMD were more desktop-focused, so power efficiency wasn't the goal. When Apple's chips got so good they could transition into laptops, x86 wasn't in the same ballpark.
Also, the iPhone is the most lucrative product of all time (I think), and Apple poured a tonne of that money into R&D and into poaching top engineers from Intel, AMD, and ARM, building one of the best silicon teams.
Apple is vertically integrated and can optimize at the OS level and in many of the applications they ship with the device.
Compare that to how many cooks are in the kitchen in Wintel land. Perfect example is trying to get to the bottom of why your windows laptop won't go to sleep and cooks itself in your backpack. Unless something's changed, last I checked it was a circular firing squad between laptop manufacturer, Microsoft and various hardware vendors all blaming each other.
> All that to say: M1 is pretty fast, but the reason the battery life is better has to do with everything other than the CPU cores. That's what AMD and Intel are missing.
This isn't true. Yes, uncore power consumption is very important, but so is CPU load efficiency. The faster the CPU can finish a task, the faster it can go back to sleep, aka race to sleep. Apple Silicon is 2-4x more efficient than AMD and Intel CPUs during load while also having higher top-end speed.
Another thing that makes Apple laptops feel way more efficient is that they use a true big.LITTLE design, while AMD's and Intel's little cores are actually designed for area efficiency rather than power efficiency. In the case of Intel, they stuff in as many little cores as possible to win MT benchmarks. In real-world applications, the little cores are next to useless because most applications prefer a few fast cores over many slow cores.
I've worked in video delivery for quite a while.
If I were to write the law, decision-makers wilfully forcing software video decoding where hardware is available would be made to sit on these CPUs with their bare buttocks. If that sounds inhumane, then yes, this is the harm they're bringing upon their users, and maybe it's time to stop turning the other cheek.
A good demonstration is the Android kernel. By far the biggest difference between it and the stock Linux kernel is power management. Many subsystems down to the process scheduler are modified and tuned to improve battery life.
To be fair, usually Linux itself has hardware acceleration available, but the browser vendors tend to disable GPU rendering except on controlled/known perfectly working combinations of OS/hardware/drivers, and they do much less testing on Linux. In most cases you can force-enable GPU rendering in about:config, try it out yourself, and leave it on unless you get recurring crashes.
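For Firefox specifically, these are the about:config switches people usually flip (pref names drift between versions, so treat this as a starting point rather than gospel):

    media.ffmpeg.vaapi.enabled = true                    # use VA-API for video decode
    media.hardware-video-decoding.force-enabled = true
    gfx.webrender.all = true                             # force GPU compositing

Then check the Media section of about:support to confirm a hardware decoder is actually being used.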
Incredible discipline. The Chrome graph in comparison was a mess.
Looks like general purpose CPUs are on the losing train.
Maybe Intel should invent desktop+mobile OS and design bespoke chips for those.
At least on mobile platforms, Apple advocates the other way with race to sleep: do the calculation as fast as you can with powerful cores so that the whole chip can go back to sleep earlier and take naps more often.
Like, would I prefer an older-style Macbook overall, with an integrated card reader, HDMI port, ethernet jack, all that? Yeah, sure. But to get that now I have to go to a PC laptop and there's so many compromises there. The battery life isn't even in the same zip code as a Mac, they're much heavier, the chips run hot even just doing web browsing let alone any actual work, and they CREAK. Like my god I don't remember the last time I had a Windows laptop open and it wasn't making all manner of creaks and groans and squeaks.
The last one would be solved I guess if you went for something super high end, or at least I hope it would be, but I dunno if I'm dropping $3k+ either way I'd just as soon stay with the Macbook.
I work in IT and every new machine for our company comes across my desk for checking, and I've observed the exact same points as the OP.
The new machines are either fast and loud and hot and with poor battery life, or they are slow and "warm" and have moderate battery life.
But I have yet to see a business laptop, whether ARM, AMD, or Intel, that can even compete with the M1 Air, not to speak of the M3 Pro! Not to mention all the issues with crappy Lenovo docks, etc.
It doesn’t matter if I install Linux or Windows. The funny part is that some of my colleagues have ordered a MacBook Air or Pro and run their Windows or Linux in a virtual machine via Parallels.
Think about it: Windows 11 or Linux in a VM is even faster, snappier, and quieter, and has even longer battery life than these systems running natively on a business machine from Lenovo, HP, or Dell.
Well, your mileage may vary, but IMHO there is no alternative to a Mac nowadays, even if you want to use Linux or Windows.
With Nvidia GeForce Now I can even play games on it, though I wouldn't recommend it for any serious gamers.
I need to point this out all the time these days it seems, but this opinion is only valid if all you use is a laptop and all you care about is single-core performance.
The computing world is far bigger than just laptops.
Big music/3d design/video editing production suites etc still benefit much more from having workstation PCs with higher PCI bandwidth, more lanes for multiple SSDs and GPUs, and high level multicore processing performance which cannot be matched by Apple silicon.
I guess I'd slightly change that to "MacBook" or similar, as Apple are top-in-class when it comes to laptops, but for desktop they seem to not even be in the fight anymore, unless reducing power consumption is your top concern. But if you're aiming for "performance per money spent", there isn't really any alternative to non-Apple hardware.
I do agree they make the best hardware in terms of feel, though, which is important for laptops. But computing is so much larger than laptops, especially if you're always working in the same place every day (like me).
AMD kind of has, the "Max 395+" is (within 5% margin or so) pretty close to M4 Pro, on both performance and energy use. (it's in the 'Framework Desktop', for example, but not in their laptop lineup yet)
AMD/Intel hasn't surpassed Apple yet (there's no answer for the M4 Max / M3 Ultra, without exploding the energy use on the AMD/Intel side), but AMD does at least have a comparable and competitive offering.
That said Hardware Canucks did a review of the 395 in a mobile form factor (Asus ROG Flow F13) with TDP at 70w (lower than the max 120w TDP you see in desktop reviews). This lower-than-max TDP also gets you closer to the perf/watt sweet spot.
The M4 Pro scores slightly higher in Cinebench R24 despite being 10P+4E vs a full 16 P-cores on the 395, all while using something like 30% less power. The M4 Pro scores nearly 35% higher in the single-core R24 benchmark too. The 395's GPU performance is comparable to the M4 Pro in productivity software. More specifically, they trade blows based on which is more optimized in a particular app, but AMD GPUs have way more optimizations in general, and gaming should be much better with x86 + an AMD GPU vs Rosetta 2 + GPU translation layers + Wine/Crossover.
The M4 Pro gets around 50% better battery life for tasks like web browsing when accounting for battery size differences, and more than double the battery life per watt-hour when doing something simple like playing a video. Battery life under full load is a bit better for the 395, but doing the math, this definitely involves the 395 throttling significantly down from its 70W TDP.
Rule of thumb is roughly a 15% advantage to distribute between power and performance there.
Catching up while remaining on older nodes is no joke.
Second, the x86 platform has a lot of legacy, and each operation on x86 is translated from an x86 instruction into RISC-like micro-ops. This is an inherent penalty that Apple doesn't have to pay, and it is also why Rosetta 2 can achieve "near native" x86 performance; both platforms translate the x86 instructions.
Third, there are some architectural differences even if the instruction decoding steps are removed from the discussion. Apple Silicon has a huge out-of-order buffer, and it's 8-wide vs x86 4-wide. From there, the actual logic is different, the design is different, and the packaging is different. AMD's Ryzen AI Max 300 series does get close to Apple by using many of the same techniques like unified memory and tossing everything onto the package, where it does lose is due to all of the other differences.
In the end, if people want crazy efficiency Apple is a great answer and delivers solid performance. If people want the absolute highest performance, then something like Ryzen Threadripper, EPYC, or even the higher-end consumer AMD chips are great choices.
1) Apple Silicon outperforms all laptop CPUs in the same power envelope on 1T on industry-standard tests: it's not predominantly due to "optimizing their software stack". SPECint, SPECfp, Geekbench, Cinebench, etc. all show major improvements.
2) x86 also heavily relies on micro-ops to greatly improve performance. This is not a "penalty" in any sense.
3) x86 is now six-wide, eight-wide, or nine-wide (with asterisks) for decode width on all major Intel & AMD cores. The myth of x86 being stuck on four-wide has been long disproven.
4) Large buffers, L1/L2/L3 caches, etc. are not exclusive to any CPU microarchitecture. Anyone can increase them—the question is, how much does your core benefit from larger cache features?
5) Ryzen AI Max 300 (Strix Halo) gets nowhere near Apple on 1T perf / W and still loses on 1T perf. Strix Halo uses slower CPUs versus the beastly 9950X below:
Fanless iPad M4 P-core, SPEC2017 int / fp / geomean: 10.61 / 15.58 / 12.85
AMD 9950X (Zen 5), SPEC2017 int / fp / geomean: 10.14 / 15.18 / 12.41
Intel 285K (Lion Cove), SPEC2017 int / fp / geomean: 9.81 / 12.44 / 11.05
Source: https://youtu.be/2jEdpCMD5E8?t=185, https://youtu.be/ymoiWv9BF7Q?t=670
The 9950X & 285K eat 20W+ per core for that 1T perf; the M4 uses ~7W. Apple has a node advantage, but no node on Earth gives you 50% less power.
There is no contest.
Can we please stop with this myth? Every superscalar processor is doing the exact same thing, converting the ISA into the µops (which may involve fission or fusion) that are actually serviced by the execution units. It doesn't matter if the ISA is x86 or ARM or RISC-V--it's a feature of the superscalar architecture, not the ISA itself.
The only reason that this canard keeps coming out is because the RISC advocates thought that superscalar was impossible to implement for a CISC architecture and x86 proved them wrong, and so instead they pretend that it's only because x86 somehow cheats and converts itself to RISC internally.
ARM processors ALSO decode instructions to micro-ops. And Apple chips do too. Pretty much a draw. The first stage in the execution pipeline of all modern processors is a decode stage.
* Apple has had decades optimizing its software and hardware stacks to the demands of its majority users, whereas Intel and AMD have to optimize for a much broader scope of use cases.
* Apple was willing to throw out legacy support on a regular basis. Intel and AMD, by comparison, are still expected to run code written for DOS or specific extensions in major Enterprises, which adds to complexity and cost
* The “standard” of x86 (and demand for newly-bolted-on extensions) means effort into optimizations for efficiency or performance meet diminishing returns fairly quickly. The maturity of the platform also means the “easy” gains are long gone/already done, and so it’s a matter of edge cases and smaller tweaks rather than comprehensive redesigns.
* Software in x86 world is not optimized, broadly, because it doesn’t have to be. The demoscene shows what can be achieved in tight performance envelopes, but software companies have never had reason to optimize code or performance when next year has always promised more cores or more GHz.
It boils down to comparing two different products and asking why they can’t be the same. Apple’s hardware is purpose-built for its userbase, operating systems, and software; x86 is not, and never has been. Those of us who remember the 80s and 90s of SPARC/POWER/Itanium/etc recall that specialty designs often performed better than generalist ones in their specialties, but lacked compatibility as a result.
The Apple ARM vs Intel/AMD x86 is the same thing.
Apple also has a particular advantage in owning the OS and having the ability to force independent developers to upgrade their software, which makes incompatible updates (including perf optimizations) possible.
They could also just sit down with Microsoft and say "Right, we're going to go in an entirely different direction, and provide you with something absolutely mind-blowing, but we're going to have to do software emulation for backward compatibility and that will suck for a while until things get recompiled, or it'll suck forever if they never do".
Apple did this twice in the last 20 years - once on the move from PowerPC chips to Intel, and again from Intel to Apple Silicon.
If Microsoft and enough large OEMs (Dell, etc.) thought there was enough juice in the new proposed architecture to cause a major redevelopment of everything from mobile to data-centre-level compute, they'd line right up, because they know that if you can significantly reduce power consumption while smashing benchmarks, there are going to be long, long wait times for that hardware and software, and it's pay day for everyone.
We now know so much more about processor design, instruction set and compiler design than we did when the x86 was shaping up, it seems obvious to me that:
1. RISC is a proven entity worth investing in
2. SoC & SiP is a proven entity worth investing in
3. Customers love better power/performance curves at every level from the device in their pocket to the racks in data centres
4. Intel is in real trouble if they are seriously considering the US government owning actual equity, albeit proposed as non-voting, non-controlling
Intel can keep the x86 line around if they want, but their R&D needs to be chasing where the market is heading - and fast - while bringing the rest of the chain along with them.
Each time they had a pretty good emulation story to keep most stuff (certainly popular stuff) working through a multi-year transition period.
IMO, this is better than carrying around 40 years of cruft.
But as you mention, they've changed the underlying architecture multiple times, which surely would render a large part of prior optimizations obsolete?
> Software in x86 world is not optimized, broadly, because it doesn’t have to be.
Does ARM software need optimization more than x86 software does?
This is why I get so livid regarding Electron apps on the Mac.
I’m never surprised by developer-centric apps like Docker Desktop — those inclined to work on highly technical apps tend not to care much about UX — but to see billion-dollar teams like Slack and 1Password indulge in this slop is so disheartening.
Given that videos spin up those coolers, there is actually a problem with your GPU setup on Linux, and I expect there'd be an improvement if you managed to fix it.
Another thing is that Chrome on Linux tends to consume an exorbitant amount of power with all the background processes, inefficient rendering, and disk IO, so updating it to one of the latest versions and enabling "memory saving" might help a lot.
Switching to another scheduler, reducing interrupt rate etc. probably help too.
Linux on my current laptop reduced battery life 12x compared to Windows, and a bunch of optimizations like that improved the situation to something like 6x, i.e. it's still very bad.
> Is x86 just not able to keep up with the ARM architecture?
Yes and no. x86 is inherently inefficient, and most of the progress over the last two decades has been about offloading computations to more advanced and efficient coprocessors. That's how we got GPUs, and DMA on M.2 and Ethernet controllers.
That said, it's unlikely that x86 specifically is what wastes your battery. I would rather blame Linux, suspect its CPU frequency/power drivers are misbehaving on some CPUs, and unfortunately have no idea how to fix it.
Nothing in x86 prohibits you from an implementation less efficient than what you could do with ARM instead.
x86 and ARM have historically served very different markets. I think the pattern of efficiency differences of past implementations is better explained by market forces rather than ISA specifics.
Before the M1, I was stuck using an Intel Core i5 running Arch Linux. My Intel Mac managed to die months before the M1 came out. Let's just say that the M1 really made me appreciate how stupidly slow that Intel hardware was. I was losing lots of time doing builds. The laptop would be unusable during those builds.
Life is too short for crappy hardware. From a software point of view, I could live with Linux but not with Windows. But the hardware is a show stopper currently. I need something that runs cool and yet does not compromise on performance. And all the rest (non-crappy trackpad, amazingly good screen, cool to the touch, good battery life, etc.). And manages to look good too. I'm not aware of any windows/linux laptop that does not heavily compromise on at least a few of those things. I'm pretty sure I can get a fast laptop. But it'd be hot and loud and have the unusable synaptics trackpad. And a mediocre screen. Etc. In short, I'd be missing my mac.
Apple is showing some confidence by just designing a laptop that isn't even close to being cheap. This thing was well over 4K euros. Worth every penny. There aren't a lot of intel/amd laptops in that price class. Too much penny pinching happening in that world. People think nothing of buying a really expensive car to commute to work. But they'll cut on the thing that they use the whole day when they get there. That makes no sense whatsoever in my view.
Yeah, those glossy mirror-like displays in which you see yourself much better than the displayed content are polished really well
Hah, it's exactly the other way around for me; I can't stand Apple's hardware. But then again I never bought anything Asus... let alone gamer laptops.
First two years it was solid, but then weird stuff started happening like the integrated GPU running full throttle at all times and sleep mode meaning "high temperature and fans spinning to do exactly nothing" (that seems to be a Windows problem because my work machine does the same).
Meanwhile the manufacturer, having released a new model, lost interest, so no firmware updates to address those issues.
I currently have the Framework 16 and I'm happy with it, but I wouldn't recommend it by default.
I for one bought it because I tend to damage stuff like screens and ports and it also enables me to have unusual arrangements like a left-handed numpad - not exactly mainstream requirements.
Apple is just off the side somewhere else.
If you're willing to spend a bunch of die area (which directly translates into cost) you can get good numbers on the other two legs of the Power-Performance-Area triangle. The issue is that the market position of Apple's competitors is such that it doesn't make as much sense for them to make such big and expensive chips (particularly CPU cores) in a mobile-friendly power envelope.
What makes Apple silicon chips big is that they bolt a fast GPU onto them. If you include the die of a discrete GPU with an x86 chip, it'd be the same size as or bigger than an M-series chip.
You can look at Intel’s Lunar Lake as an example where it’s physically bigger than an M4 but slower in CPU, GPU, NPU and has way worse efficiency.
Another comparison is AMD Strix Halo. Despite being ~1.5x bigger than the M4 Pro, it has worse efficiency, ST performance, and GPU performance. It does have slightly more MT.
notebookcheck.com does pretty comprehensive battery and power efficiency testing - not of every single device, but they usually include a pretty good sample of the popular options.
Most Linux distributions are not well tuned, because this is too device-specific. Spending a few minutes writing custom udev rules, with the aid of powertop, can reduce heat and power usage dramatically. Another factor is Safari, which is significantly more efficient than Firefox and Chromium. To counter that, using a barebones setup with few running services can get you quite far. I can get more than 10 hours of battery from a recent ThinkPad.
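As a concrete example of what those udev rules can look like: powertop's Tunables tab will flag things like SATA link power management and USB autosuspend, which you can then make permanent with something like this (hypothetical file name; adjust the matches to your hardware):

    # /etc/udev/rules.d/90-powersave.rules
    # SATA link power management
    ACTION=="add", SUBSYSTEM=="scsi_host", KERNEL=="host*", ATTR{link_power_management_policy}="med_power_with_dipm"
    # Autosuspend idle USB devices
    ACTION=="add", SUBSYSTEM=="usb", TEST=="power/control", ATTR{power/control}="auto"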
The cost is flexibility and I think for now they don't want to move to fixed RAM configurations. The X3D approach from AMD gets a good bunch of the benefits by just putting lots of cache on board.
Apple got a lot of performance out of not a lot of watts.
One other possibility on power saving is the way Apple ramps the clock speed. It's quite slow to increase from its 1GHz idle to 3.2GHz, taking about 100ms, and it doesn't even start for 40ms. With tiny little bursts of activity like web browsing and such, this slow transition likely saves a lot of power at the cost of absolute responsiveness.
No, it's not. DRAM latency on Apple Silicon is significantly higher than on the desktop, mainly because they use LPDDR which has higher latencies.
Not necessarily. Running longer at a slower speed may consume more energy overall, which is why "race to sleep" is a thing. Ideally the clock would be completely stopped most of the time. I suspect it's just because Apple are more familiar with their own SoC design and have optimised the frequency control to work with their software.
On package memory increases efficiency, not speed.
However, most of the speed and efficiency advantages are in the design.
Framework does not have the volume, it is optimized for modularity, and the software is not as optimized for the hardware.
As a general-purpose computer Apple is impossible to beat, and it will take a paradigm shift for that to change (a completely new platform, similar to the introduction of the smartphone). Framework has its place as a specialized device for people who enjoy flexible hardware and custom operating systems.
What about all the money that they make from abusive practices like refusing to integrate with competitors' products thus forcing you to buy their ecosystem, phoning home to run any app, high app store fees even on Mac OS, and their massive anti repair shenanigans?
So this is precisely what Apple did, and we can argue it was long time in the making. The funny part is that nobody expected x86 to make way for ARM chips, but perhaps this was a corporate bias stemming from Intel marketing, which they are arguably very good at.
Only if all you care about is having a laptop with really fast single-core performance. Anything that requires real grunt needs a workstation or server, which Apple silicon cannot provide.
It's made worse on the Strix Halo platform, because it's a performance first design, so there's more resource for Chrome to take advantage of.
The closest browser to Safari that works on Linux is Falkon. Its compatibility is even lower than Safari's, so there are a lot of sites where you can't use it, but on the ones where you can, your battery usage can be an order of magnitude less.
I recommend using Thorium instead of Chrome; it's better but it's still Chromium under the hood, so it doesn't save much power. I use it on pages that refuse to work on anything other than Chromium.
Chrome doesn't let you suspend tabs, and as far as I could find there aren't any plugins to do so; it just kills the process when there aren't enough resources and reloads the page when you return to it. Linux does have the ability to suspend processes, though, and you can save a lot of battery life if you suspend Chrome when you aren't using it.
I don't know of any GUI for it, although most window managers make it easy to assign a keyboard shortcut to a command. Whenever you aren't using Chrome but don't want to deal with closing it and re-opening it, run the following command (and ignore the name, it doesn't kill the process):
    killall -STOP google-chrome

When you want to go back to using it, run:

    killall -CONT google-chrome
This works for any application, and the RAM usage will remain the same while suspended, but it won't draw power reading from or writing to RAM, and its CPU usage will drop to zero. The windows will remain open, and the window manager will handle them normally, but what's inside won't update, and clicks won't do anything until resumed. https://birchtree.me/blog/everyone-says-chrome-devastates-ma...
That might be different on other platforms
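If you'd rather have a single hotkey than two commands, a tiny toggle script works with most window managers (untested sketch; the script name and location are made up):

    #!/bin/sh
    # chrome-toggle.sh - pause Chrome if it's running, resume it if it's stopped
    if ps -C google-chrome -o stat= | grep -q '^T'; then
        killall -CONT google-chrome    # processes are in the stopped state, wake them
    else
        killall -STOP google-chrome    # otherwise freeze them
    fi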
Auto Tab Discard exists and works fine, but I am not sure it's what people call "suspending tabs". They need to reload when you click them and they objectively free the memory they used (I watch my memory usage closely).
It's also probably worth putting the laptop in "efficiency" mode (15W sustained, 25W boost per Framework). The difference in performance should be fairly negligible compared to balanced mode for most tasks, and it will use less energy.
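On Linux that mode is usually exposed through power-profiles-daemon, so assuming it's installed and the firmware advertises platform profiles, something like this should do it:

    powerprofilesctl list              # show the profiles the machine supports
    powerprofilesctl set power-saver   # cap sustained power for cooler, quieter running
    powerprofilesctl get               # confirm which profile is active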
You call five hours good?! Damn... For productivity use, I'd never buy anything below shift-endurance (eight hours or more).
I feel like I've tried several times to get this working in both Linux and Windows on various laptops and have never actually found a reliable solution (often resulting in having a hot and dead laptop in my backpack).
As a layman there’s no way I’m running something called “Pop!_OS” versus Mac OS.
There's probably a lot still missing: Apple integrated the memory onto the same package as the SoC, and built Metal for software to directly take advantage of that design. That's the competitive advantage of vertical integration.
I'm planning to buy HP ZBook Ultra G1a laptop with AMD Ryzen Strix and it seems to be a very good alternative to Apple M series laptop [1]. It can support up to 128 GB RAM (up to 96 GB VRAM) and should be able to run GPT-OSS 120B model.
[1] HP ZBook Ultra G1a 14" Mobile Workstation PC:
Also, especially the MacBook Pros have really large batteries, on average larger than the competition. This increases the battery runtime.
- A properly written firmware. All Chromebooks are required to use Coreboot and have very strict requirements on the quality of the implementation set by Google. Windows laptops don't have that and very often have very annoying firmware problems, even in the best cases like Thinkpads and Frameworks. Even on samples from those good brands, just the s0ix self-tester has personally given me glaring failures in basic firmware capabilities.
- A properly tuned kernel and OS. ChromeOS is Gentoo under the hood and every core service is afaik recompiled for the CPU architecture with as many optimisations enabled. I'm pretty sure that the kernel is also tweaked for battery life and desktop usage. Default installations of popular distros will struggle to support this because they come pre-compiled and they need to support devices other than ultrabooks.
Unfortunately, it seems like Google is abandoning the project altogether, seeing as they're dropping Steam support and merging ChromeOS into Android. I wish they'd instead make another Pixelbook, work with Adobe and other professional software companies to make their software compatible with Proton + Wine, and we'd have a real competitor to the M1 Macbook Air, which nothing outside of Apple can match still.
Windows does a lot of useless crap in the background that kills battery and slows down user-launched software
2. Much more cache
3. No legacy code
4. High frequencies (to be 1st in game benchmarks; see what happens when you're a little behind, like the last Intel launch - the perception is that Intel has bad CPUs because they are some percentage points behind AMD in games, a pressure Apple doesn't have, since comparisons are mostly Apple vs. Apple and Intel vs. AMD)
The engineers at AMD are the same as at Apple, but both markets demand different chips and they get different chips.
For some time now the market has been talking about energy efficiency, and we see
1. AMD soldering memory close to the CPU
2. Intel and AMD adding more cache
3. Talks about removing legacy instructions and bit widths
4. Lower out of the box frequencies
Will take more market pressure and more time though.
How many iterations to match Apple?
This assumes Apple's M series performance is a static target. It is not. Apple is iterating too.
I don't think many people have appreciated just how big a change the 64 bit Arm was, to the point it's basically a completely different beast than what came before.
From the moment the iPhone went 64 bit it was clear this was the plan the whole time.
However, this doesn't really hold up as the cause for the difference. The Zen4/5 chips, for example, source the vast majority of their instructions out of their uOp trace cache, where the instructions have already been decoded. This also saves power - even on ARM, decoders take power.
People have been trying to figure out the "secret sauce" since the M chips have been introduced. In my opinion, it's a combination of:
1) The apple engineers did a superb job creating a well balanced architecture
2) Being close to their memory subsystem with lots of bandwidth and deep buffers so they can use it is great. For example, my old M2 Pro macbook has more than twice the memory bandwidth than the current best desktop CPU, the zen5 9950x. That's absurd, but here we are...
3) AMD and Intel heavily bias on the costly side of the watts vs performance curve. Even the compact zen cores are optimized more for area than wattage. I'm curious what a true low power zen core (akin to the apple e cores) would do.
ARM has better /security/ though - not only does it have more modern features, but x86's variable-length instructions also mean you can reinterpret them by jumping into the middle of one.
I agree that it's unfortunate that the power usage isn't better tuned out of the box. An especially annoying aspect of GNOME's "Power Saver" mode is that it disables automatic software updates, so you can't have both automatic updates and efficient power usage at the same time (AFAIK)
In the x86 laptop space, the 'big' vendors like Dell, HP, Asus, Lenovo, etc. can do that sort of thing. Framework doesn't have the leverage yet. Linux is an issue too, because that community isn't aligned either.
Alignment is facilitated by mutual self interest, vendors align because they want your business, etc. The x86 laptop industry has a very wide set of customer requirements, which is also challenging (need lots of different kinds of laptops for different needs).
The experience is especially acute when one's requirements for a piece of equipment have strayed from the 'mass market' needs so the products offered are less and less aligned with your needs. I feel this acutely as laptops move from being a programming tool to being an applications product delivery tool.
- Try running powertop to see if it says what the issue is (quick sketch after this list).
- Switch to firefox to rule out chrome misconfigurations.
- If this is wayland, try x11
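A minimal powertop session for that first suggestion, assuming it's installed (run on battery, otherwise the discharge estimate is meaningless):

    sudo powertop                      # interactive: check the Overview and Tunables tabs
    sudo powertop --html=report.html   # one-shot report you can read later
    # 'sudo powertop --auto-tune' applies every suggested tunable; re-test peripherals afterwards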
I have an AMD SoC desktop and it doesn't spin up the fans or get warm unless it's running a recent AAA title or an LLM. (I'm running Devuan because most other distros I've tried aren't stable enough these days.)
In scatterplots of performance vs wattage, AMD and Apple silicon are on the same curve. Apple owns the low end and AMD owns the high end. There’s plenty of overlap in the middle.
First, the OP is talking about Chrome, which is not Apple software. And I can testify that I have observed the same behavior with other software that is really not optimized for macOS, or at all. JetBrains IDEs are fast on M*.
Also, processor manufacturers are contributors to the Linux kernel and have an economic interest in having Linux run as fast as it can on their platforms if they want to sell them to datacenters.
I think it’s something else. Probably the unified memory?
You can run Linux on a MacBook Pro and get similar power efficiency.
Or run third party apps on macOS and similarly get good efficiency.
Differences are more software-related in my opinion. And it might be just appearance, as Apple is used to doing tricks. For example, it was shown back in the day that people thought the iPhone was faster at loading things because it played an animation at load time, despite taking the same time as other phones.
For example, with the many tabs in Chrome, the difference might be that macOS is aggressively throttling things while your Linux laptop gives you the maximum performance possible, and so produces more heat. I often noticed that with OS X, especially when you don't have a lot of RAM. The OS will readily put to sleep and evict other programs, but also other windows and other tabs I guess, as parts of them are separate processes. Then, when you need them, it reloads the memory. Good in terms of power efficiency, but in my experience it gave terrible latencies, like taking a second just to switch from one window to another. Not obvious if you are not used to better.
In the same way, a lot of people are used to Electron-based IDEs like VS Code and feel perfectly OK, but for me the latency between typing code and it showing on the screen is awful compared to my native IDE.
In the same way for macOS, you can see how often the laptop will go to sleep or dim the display unexpectedly with default settings. Like those guys who suddenly drop out of Google Meet meetings because the Mac went to sleep despite the active call.
The only real annoying thing I've found with the P14s is the Crowdstrike junk killing battery life when it pins several cores at 100% for an hour. That never happened in MacOS. These are corporate managed devices I have no say in, and the Ubuntu flavor of the corporate malware is obviously far worse implemented in terms of efficiency and impact on battery life.
I recently built myself a 7970X Threadripper and it's quite good perf/$ even for a Threadripper. If you build a gaming-oriented 16c ryzen the perf/$ is ridiculously good.
No personal experience here with Frameworks, but I'm pretty sure Jon Blow had a modern Framework laptop he was ranting a bunch about on his coding live streams. I don't have the impression that Framework should be held as the optimal performing x86 laptop vendor.
Weirdly, the choice pool seems to have shrunken in the last few years.
Oh you've gotten lucky then. Or somehow disabled crowdstrike.
It runs Debian headless now (I didn't have particular use for a laptop in the first place). Not sure just how unpopular this suggestion'd be, but I'd try booting Windows on the laptop to get an idea of how it's supposed to run.
One of the things Apple has done is to create a wider core that completes more instructions per clock cycle for performance while running those cores at conservative clock speeds for power efficiency.
Intel and AMD have been getting more performance by jacking up the clock speeds as high as possible. Doing so always comes at the cost of power draw and heat.
Intel's Lunar Lake has a reputation for much improved battery life, but also reduces the base clock speed to around 2 gigahertz.
The performance isn't great vs the massively overclocked versions, but at least you get decent battery life.
This isn't the only no-show position I've heard about at Intel. That is why Intel cannot catch up. You probably cannot get away with that at Apple.
Take a look at the mobile arena: the average everyday Joe would be hard-pressed to notice much of an actual difference between Apple iOS and Android. They're both highly refined mobile OSs that offer remarkably similar core experiences: instant on, home screens with apps that run sandboxed and are only installable via an app store, and extremely efficient hardware inside that offers amazing performance and battery efficiency compared to a notebook.
It's that Apple took one step further and decided to make a Smartphone INTO a Laptop with their M-series MacBooks. It was easy to do with the iPadOS, as it's really just a giant phone, but it was impressive they were also able to do that with MacOS as well, leading me to believe it should be possible with Windows too.
Qualcomm and its X Elite chips are a step in the direction Microsoft needs to go to offer a competitive experience to the MacBook, but I'm not sure it's enough, not without a complete rewrite of their OS, to overcome all its WinTel baggage.
I think a completely new PC Laptop OS is needed, one that runs as efficiently as MacOS does with modern ARM hardware (Qualcomm, MediaTek, Samsung Exynos). Maybe Google via some sort of full Desktop Android OS (not Chromebook) or perhaps Microsoft with a completely built-from-scratch Linux-derived Windows? Or, indeed, maybe SteamOS?
Regardless, if the last five years has shown us anything, the divide between Apple's MacOS/iPadOS+M-seriesChips and the old WinTel world has just widened and everyone thinks it will continue that trend. How long will the general public stay with a clearly inferior PC product? History has suggested not long (iPhone vs BlackBerry for example!)...
- Linux is a patchwork of SW, written in a variety of programming languages, using a variety of libraries, some of which having the same functionality. There is duplication, misalignment, legacy.
- MacOS is developed by a single company. It is much more integrated and coherent.
Same for the CPU:
- x86 accesses memory through an external bus. The ability to install a third party GPU requires an external bus, with a standardized protocol, bus width, etc. This is bound to lag behind state of the art
- Apple chips have on die memory, GPU (actually same package but not same die). Higher speeds, optimization, evading from standardized protocols: all this is possible.
This has an impact on kernel/drivers/compilers:
- x86: so many platforms, CPU versions, and protocol revisions to support, often with limited documentation. This wastes a hell of a lot of engineering time!
- Apple: limited number of HW platforms to support, full access to internals.
My Apple friends get 12+ hrs of battery life. I really wish Lenovo+Fedora or whoever would get together and make that possible.
I'm a Linux guy too, when I have to use a Mac I turn all the gloss off and it's ok, but without going to Nix I miss a system wide package manager and I like an open-as-possible community OS that runs everywhere. It's a shame Apple doesn't license their chips.
About a year ago I got a maxed out Macbook Pro, but the above combined with the fact I wasn't comfortable travelling with something that cost as much as a good used car made me return it.
Now I'm using a Thinkpad that was ¼ the price and it's great, AMD chip, 64GB of RAM, replaceable storage, fantastic screen, keyboard (and Trackpoint) means it can do just about anything. Yes, battery life is limited, around four hours with the 16" OLED (I haven't put any work into optimizing it, and this isn't a battery-first model), but I can handle it. I'll maybe get a Strix Halo laptop since I like running LLMs, but otherwise x86 has improved enough that it's pretty good. That said, I won't complain if it matches/surpasses Apple chips, and I'd consider running a headless Apple 'server' at home.
Now, 7.5 years later, the battery is not so healthy any more, and I'm looking around for something similar, and finding nothing. I'm seriously considering just replacing the battery. I'll be stuck with only 8GB RAM and an ancient CPU, but it still looks like the best option.
Another useful thing is that you can buy small portable battery packs that are meant for jump-starting car engines, and they have a 12V output (probably more like 14V), which could quite possibly be piped straight into the DC input of a laptop. My laptop asks for 19V, but it could probably cope with this.
That doesn't sound super secure to me.
> for five hours.
My experience with anything that is not designed to be an office is that it will be uncomfortable in the long run. I can't see myself working for 5 hours in that kind of place.
Also it seems it is quite easily solved with an external battery pack. They may not last 12hours but they should last 4 to 6 hours without a charge in powersaving mode.
Don't you drink any coffee in the coffee shop? I hope you do. But, still, being there for /five/ hours is excessive.
I'm guessing you're well aware, but just in case you're not: Asahi Linux is working extremely well on M1/M2 devices and easily covers your "5 hours of work at a coffee shop" use case.
It's not 8-12, and the fans do kick up. The track pad is fine but not as nice as the one on the MacBook. But I prefer to run Linux so the tradeoff is worth it to me.
just... take your charger...
HP has Ubuntu-certified strix halo machines for example.
Intel provides processors for many vendors and many OS. Changing to a new architecture is almost impossible to coordinate. Apple doesn't have this problem.
Actually, in the 90s Intel and Microsoft wanted to move to a RISC architecture, but Compaq forced them to stay on x86.
Windows NT has always been portable, but didn't provide any serious compat with Windows 4.x until 5.0. At that time, AMD released their 64-bit extension to x86. Intel wanted to build their own; Microsoft went "haha no". By that time they had been dictating the CPU architecture.
I guess at that point there was very little reason to switch. Intel's Core happened; Apple even went to Intel to ask for a CPU for what would become the iPhone - but Intel wasn't interested.
Perhaps I'm oversimplifying, but I think it's complacency. Apple remained agile.
RAM in particular can be a big performance bottleneck. Apple M has way better bandwidth than most x86 CPUs; having well-specified RAM chips soldered right next to the CPU instead of having to support DIMM modules certainly helps. AMD AI Max chips, which also have great memory bandwidth and are the most comparable to Apple M, also use soldered RAM.
Maybe some details like ARM having a more efficient instruction decoder plays a part, but I don't believe it is that significant.
Windows on the other hand is horribly optimized, not only for performance, but also for battery life. You see some better results from Linux, but again it takes a while for all of the optimizations to trickle down.
The tight optimization between the chip, operating system, and targeted compilation all come together to make a tightly integrated product. However comparing raw compute, and efficiency, the AMD products tend to match the capacity of any given node.
I would try it with Windows for a better comparison, or get into the weeds of getting Linux to handle the ryzen platform power settings better.
With Ubuntu properly managing fans and temps and clocks, I'll take it over the Mac 10/10 times.
I've got the Framework 13 with the Ryzen 5 7640U and I routinely have dozens of tabs open, including YouTube videos, docker containers, handful of Neovim instances with LSPs and fans or it getting hot have never been a problem (except when I max out the CPU with heavy compilation).
The issue you're seeing isn't because x86 is lacking but something else in your setup.
It's a design choice.
Also, different Linux distros/DEs prioritize different things. Generally they prioritize performance over battery life.
That being said, I find Debian GNOME to be the best on battery life. I get 6 hours on an MSI laptop that has an 11th gen Intel processor and a battery with only 70% capacity left. It also stays cool most of the time (except gaming while being plugged in) but it does have a fan...
In special cases, such as not caring about battery life, x86 can run circles around M1. If you allow the CPU rated for 400W to actually consume that amount of power, it's going to annihilate the one that sips down 35W. For many workloads it is absolutely worth it to pay for these diminishing returns.
I'm learning x86 in order to build nice software for the Framework 12 i3-1315U (Raptor Lake). Going into the optimization manuals for Intel's E-cores (apparently Atom) and AMD's 5c cores. The efficiency cores on the M1 MacBook Pro are awesome. Getting Debian or Ubuntu with KDE to run like that on a FW12 will be mind-boggling.
Apple M1: 23.3
Apple M4: 28.8
Ryzen 9 7950X3D (from 2023, best x86): 10.6
All other x86 were less efficient. The Apple CPUs also beat most of the respective same-year x86 CPUs in Cinebench single-thread performance.
[1] https://www.heise.de/tests/Ueber-50-Desktop-CPUs-im-Performa... (paywalled, an older version is at https://www.heise.de/select/ct/2023/14/2307513222218136903#&...)
Your memory served you wrong. The experience with Intel-based Macs was much worse than with recent AMD chips.
My 2019 Thinkpad T495 (Ryzen 3600) does get hot under load, but it's still fine to type on.
That's a legacy of iPhone. And that's a fundamental philosophical difference between Apple and everyone else.
I suspect that's why Apple's caches are structured the way they are: the goal is to stop working. More instruction cache means more of the work is served from cache, which leads to less work overall.
(Edit, I read lower in the thread that the software platform also needs to know how to make efficient use of this performance per watt, ie, by not taking all the watts you can get.)
[0] https://www.phoronix.com/review/ryzen-ai-max-395-9950x-9950x...
In terms of performance though, those N4P Ryzen chips have knocked it out of the park for my use-cases. It's a great architecture for desktop/datacenter applications, still.
That being said, my M2 beats the ... out of my twice-as-expensive work laptop when compiling an Arduino project. Literal jaw drop the first time I compiled on the M2.
Same, I just realized it's three years old, I've used every day for hours and it still feels like the first day I got it.
They truly redeemed themselves with this, as their laptops had been getting worse and worse and worse (keyboard fiasco, Touch Bar, ...).
If you actually benchmark said chips in a computational workload I'd imagine the newer chip should handily beat the old M1.
I find both windows and Linux have questionable power management by default.
On top of that, snappiness/responsiveness has very little to do with the processor and everything to do with the software sitting on top of it.
The biggest quality of life issue for me personally is the trackpad. Although support for gestures and so on has gotten quite decent in Linux land, Parallels only sends the VM scroll wheel events, so there's no way to have smooth scrolling and swipe gestures inside the VM, so it feels much worse than native macOS or Asahi Linux running on the bare metal.
OTOH if you're fine with macOS GUI but you want something like WSL for CLI and server apps, there's https://lima-vm.io
Here's a video about it. Skip to 4:55 for battery life benchmarks. https://www.youtube.com/watch?v=ymoiWv9BF7Q
An Airbook sets me back €1000, enough to buy a used car, and AFAICT is much more difficult to get fully working Linux on than my €200 amd64 build.
Why hasn't apple caught up?
And he was right. Netbooks mostly sucked. Same with Chromebooks.
There’s nothing to be gained by racing to the bottom.
You can buy an m1 laptop for $599 at Walmart. That’s an amazing deal.
Precisely because of that they haven't caught up. They don't want to compete in the PC race to the bottom that nearly bankrupted them in the 90s before they invented the iPod.
Apple got rich by creating its own markets.
pcpartpicker link?
On the Mac, you can fix neither the hardware nor the software; it's like a car with the hood welded shut.
On the Framework, you can fix (or change) both, and there is no built-in expiry date beyond which you cannot update the software.
Is performance really the only thing that matters?
Looking beyond Apple/Intel, AMD recently came out with a cpu that shares memory between the GPU and CPU like the M processors.
The Framework is a great laptop - I'd love to drop a mac motherboard into something like that.
My M1 Air still beats top of the line i7 MacBook Pros.
The build quality of the Surface Laptop is superb also.
My experience has been to the contrary. Moving to Linux a couple months ago from Windows doubled my battery life and killed almost all the fan noise.
Note those Docker containers are running in a Linux VM!
Of course they are on Windows (WSL2) as well.
So literally, VW partially donates the Veyron to its clients, selling it under-priced.
I think the same happens with the Apple M architecture: it is extraordinary and different from anything else on the market, but Apple sells it under-priced, so to limit losses they decided to limit it to very few models.
How do such things happen? Well, hardware is hard - usually such a sophisticated SoC needs 7-8 iterations to reach production, and this can cost a million or even more. And the most common problem is simply low yield, meaning, for example, you make 100 cores on one die but only 5-6 work.
How do AMD/Intel deal with such things? It's hard, meaning complex.
First, they just have huge experience and a very wide portfolio of different SoCs, plus some tricks, so they could, for example, downgrade a Xeon to a Core i7 with jumpers.
Second, for large structures like RAM/cache, they can disable broken parts of the die with jumpers, or even disable whole cores. That's why there are so many DRAM PCB designs - they are usually made as 6 RAM fields with one controller, and with jumpers you can sell chips with literally 1, 2, 3, 4, 5, or 6 fields enabled; some AMD SoCs exist with an odd number of cores because of this (for example, 3 cores), and other tricks, which make for decent averaged profits across a wide line of SoCs.
Third, for some designs Intel/AMD reuse already-proven technologies: the Atom was basically just the first Pentium on a new semiconductor process, and for a long time the i7 series was basically the previous generation's Xeons.
Unfortunately for Apple, they don't have the luxury of making such a wide product line, and don't have a significant place to dump low-grade chips, so they limited the M line to the one that, I think, just happened to have the largest yield.
From my experience, I would speculate that Apple's top management is considering a wider product line once they achieve better yields, but for now without much success.
Apple often lets the device throttle before it turns on the fans for "better UX"; Linux plays no such mind games.
That's actually wild. I think we're in a kind of unique moment, but one that is good for Apple mainly, because their OS is so developer-hostile that I pay back all the performance gains with interest. T_T
I wonder what specs a MacBook would need to give me similar performance. For example, on Linux with 32 GB of RAM, I can sometimes have 4 or 5 instances of WebStorm open and forget about them running in the background. Could a MacBook with 16 GB of RAM handle that? Similarly, which MacBook processor would give me the real-world, daily-use performance I get from my 14700H? Should I continue using cheap and powerful Windows/Linux laptops in the future, or should I make the switch to a MacBook?
(Translated from my native language to English using Gemini.)
There are different kinds of transistors that can be used when making chips. There are slow, but efficient transistors and fast, but leaky transistors. Getting an efficient design is a balancing act where you limit use of the fast transistors to only the most performance critical areas. AMD historically has more liberally used these high performance leaky transistors, which enabled it to reach some of the highest clock frequencies in the industry. Apple on the other hand designed for power efficiency first, so its use of such transistors was far more conservative. Rather than use faster transistors, Apple would restrict itself to the slower transistors, but use more of them, resulting in wider core designs that have higher IPC and matched the performance of some of the best AMD designs while using less power. AMD recently adopted some of Apple’s restraint when designing the Zen 5c variant of its architecture, but it is just a modification of a design that was designed for significant use of leaky transistors for high clock speeds:
https://www.tomshardware.com/pc-components/cpus/amd-dishes-m...
The resulting clock speeds of the M4 and the Ryzen AI 340 are surprisingly similar, with the M4 at 4.4GHz and the Ryzen AI 340 at 4.8GHz. That said, the same chip is used in the Ryzen AI 350 that reaches 5.0GHz.
There is also the memory used. Apple uses LPDDR5X on the M4, which runs at lower voltages and has tweaks that sacrifice latency to an extent for a big savings in power. It also is soldered on/close to the CPU/SoC for a reduction needed in power to transmit data to/from the CPU. AMD uses either LPDDR5X or DDR5. I have not kept track of the difference in power usage between DDR versions and their LP variants, but expect the memory to use at least half the power if not less. Memory in many machines can use 5W or more just at idle, so cutting memory power usage can make a big impact.
Additionally, x86 has a decode penalty compared to other architectures. It is often stated that this is negligible, but those statements began during the P4 era when a single core used ~100W where a ~1W power draw for the decoder really was negligible. Fast forward to today where x86 is more complex than ever and people want cores to use 1W or less, the decode penalty is more relevant. ARM, using fixed length instructions and having a fraction of the instructions, uses less power to decode its instructions, since its decoder is simpler. To those who feel compelled to reply to repeat the mantra that this is negligible, please reread what I wrote about it being negligible when cores use 100W each and how the instruction set is more complex now. Let’s say that the instruction decoder uses 250mW for x86 and 50mW for ARM. That 200mW difference is not negligible when you want sub-1W core energy usage. It is at least 20% of the power available to the core. It does become negligible when your cores are each drawing 10W like in AMD’s desktops.
Apple also made the design choice of building its own NAND flash controller and integrating it into the SoC, which saves power by eliminating some of the overhead of an external NAND flash controller. Integration into the SoC also means signals do not have to be driven very far, which saves further energy compared with more standard designs that assume a long run across a PCB must be supported.
Finally, Apple implemented an innovation for timer coalescing in Mavericks that made a fairly big impact:
https://www.imore.com/mavericks-preview-timer-coalescing
On Linux, coalescing is achieved by adding a default 50-microsecond slack to traditional Unix timers. This can be changed per thread, but I have never seen anyone actually do it:
https://man7.org/linux/man-pages/man2/pr_set_timerslack.2con...
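To make that concrete, here is a minimal sketch (mine, not from the man page) of what opting into a larger slack looks like on Linux; the 50 ms value is arbitrary:

    #include <stdio.h>
    #include <sys/prctl.h>

    int main(void) {
        /* Widen this thread's timer slack from the default ~50 us to 50 ms.
           The value is in nanoseconds; the kernel may then delay this thread's
           timer expirations by up to that much to batch them with other wakeups. */
        if (prctl(PR_SET_TIMERSLACK, 50UL * 1000 * 1000, 0, 0, 0) != 0)
            perror("prctl(PR_SET_TIMERSLACK)");

        /* Subsequent nanosleep()/poll()/epoll_wait() timeouts on this thread
           are now candidates for coalescing. */
        return 0;
    }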
That was done to retroactively add coalescing to the UNIX/Linux APIs that did not support it (which was all of them). Apple, however, made its own new API for event handling, Grand Central Dispatch, which exposes coalescing in a very obvious way via the leeway parameter while leaving the UNIX/BSD APIs untouched, and this is now the preferred way of doing event handling on macOS:
https://developer.apple.com/documentation/dispatch/1385606-d...
Thus, a developer of a background service on macOS that can tolerate long delays can easily set the leeway to multiple seconds, which essentially guarantees the timer will be coalesced with some other timer. A developer of a similar service on Linux could do the same, but probably will not, since the timer slack is something the developer has to go out of his way to modify, rather than something in his face like the leeway parameter in Apple's API. I did check how this works on Windows: it supports a similar per-timer tolerance via SetCoalescableTimer(), but the developer has to opt in by using it in place of SetTimer(), and it is not clear there is much incentive to do so. To circle back to Chrome, it uses libevent, which uses the BSD kqueue on macOS. As far as I know, kqueue does not take advantage of timer coalescing on macOS, so the Mavericks changes would not benefit Chrome very much; the improvements that do benefit Chrome are elsewhere. However, I thought the timer coalescing work was worth mentioning given that it applies to many other things on macOS.
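For comparison, here is a rough sketch (mine, using the timer-source form of the GCD API rather than the exact call linked above) of how the leeway parameter is exposed on macOS; the 60 s interval and 5 s leeway are arbitrary values a delay-tolerant background service might pick:

    #include <dispatch/dispatch.h>

    int main(void) {
        dispatch_queue_t q = dispatch_get_global_queue(QOS_CLASS_UTILITY, 0);
        dispatch_source_t timer =
            dispatch_source_create(DISPATCH_SOURCE_TYPE_TIMER, 0, 0, q);

        /* Fire every 60 s, and allow the system to defer each firing by up to
           5 s so it can coalesce the wakeup with other pending timers. */
        dispatch_source_set_timer(timer,
                                  dispatch_time(DISPATCH_TIME_NOW, 60 * NSEC_PER_SEC),
                                  60 * NSEC_PER_SEC,  /* interval */
                                  5 * NSEC_PER_SEC);  /* leeway  */
        dispatch_source_set_event_handler(timer, ^{
            /* delay-tolerant background work goes here */
        });
        dispatch_resume(timer);

        dispatch_main(); /* park the main thread and let the queue run */
    }

(Compiled with clang on macOS, since the handler uses blocks syntax.) The point is simply that the coalescing knob sits right in the call you already have to make, rather than in a separate prctl() you would have to go looking for.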
However, with AMD Strix Halo, aka AMD Ryzen AI Max+ 395 (PRO), there are notebooks like the HP ZBook Ultra G1a and tablets like the Asus ROG Flow Z13 that come close to the MacBook's power/performance ratio[2], because they use high-bandwidth soldered-on memory, which allows for GPUs with shared VRAM, similar to Apple's strategy.
Framework has not managed to put this chip in a notebook yet, but shipped a desktop variant. They also pointed out that there was no way to use LPCAMM2 or any other modular RAM tech with that machine, because it would have slowed it down / increased latencies to an unusable degree.
So I'm pretty sure the main reason for Apple's success is the deeply integrated architecture. I'm hopeful that AMD's next-generation Strix Halo APUs will provide this with higher efficiency, and that Framework adopts these chips in its notebooks. Maybe they just did in the 16?! Let's wait for this announcement: https://www.youtube.com/watch?v=OZRG7Og61mw
Regarding deeply-thought-through integration, there is a story I often tell: Apple used to make iPods. These supported audio playback control with their headphone remotes (e.g. EarPods), which are still available today. The remotes use a proprietary ultrasonic chirp protocol[3] to identify Apple devices and support volume control as well as more complex playback-control actions. You could even navigate through menus via VoiceOver by long-pressing and then using the volume buttons. To this day, via Apple's USB-C-to-headphone-jack adapters, these still work on nearly every Apple device released after 2013, and the wireless earbuds support parts of this too. Android has tried to copy this tiny little engineering wonder, but to this day they have not managed to get it working[4]. They instead focus on their proprietary behaviour where a long press should start "Hey Google", which is ridiculously hard to intercept/override in officially published Android apps... what a shame ;)
1: https://youtu.be/51W0eq7-xrY?t=773
2: https://youtu.be/oyrAur5yYrA
A big thing is storage. Apple uses extremely fast storage attached directly to the SoC and physically very close to it. In contrast, most x86 systems use socketed storage (which lengthens the signal path) that is routed through another chip (the southbridge/chipset). That means that, unlike Macs, which can use storage as swap without much practical impact, x86 devices pay a serious performance penalty for swapping.
Another part of the cooling issue is that Apple is virtually the only laptop manufacturer that makes solid full-aluminium frames, whereas most x86 laptops are made of plastic or, on higher-end models, magnesium alloy. That gives Apple the advantage of being able to use the entire frame to cool the laptop, allowing far more thermal input before saturation occurs and the fans have to spin up.
Is that your metric of performance? If so...
$ sudo cpufreq-set -u 50MHz
done!
Imagine that you made an FPGA do x86 work, and then you wanted to optimize libopenssl, or libgl, or libc. Would you restrict yourself to only modifying the source code of the libraries but not the FPGA, or would you modify the processor to take advantage of new capabilities?
As a made-up example: when the iPhone 27 comes out, it won't support booting iOS 26 or earlier, because the drivers necessary to light it up aren't yet published; and, similarly, it can have 3% less battery weight because they optimized the display controller to DMA more efficiently through changes to its M6 processor and the XNU/Darwin 26 DisplayController dylib.
Neither Linux, Windows, nor Intel have shown any capability to plan and execute such a strategy outside of video codecs and network I/O cards. GPU hardware acceleration is tightly controlled and defended by AMD and Nvidia, who want nothing to do with any shared strategy, and neither Microsoft nor Linux generally have shown any interest whatsoever in hardware-accelerating the core system to date, though one could theorize that the Xbox is exempt from that, especially given the Pluton chip.
I imagine Valve will eventually do this, most likely working with AMD to get custom silicon that implements custom hardware accelerations inside the Linux kernel that are both open source for anyone to use, and utterly useless since their correct operation hinges on custom silicon. I suspect Microsoft, Nintendo, and Sony already do this with their gaming consoles, but I can’t offer any certainty on this paragraph of speculation.
x86 isn’t able to keep up because x86 isn’t updated annually across software and hardware alike. M1 is what x86 could have been if it were versioned and updated, without backwards compatibility, as often as Arm is. It would be like saying “Intel’s 2026 processors all ship with AVX-1024 and hardware-accelerated DMA, and the OS kernel (and apps that want the full performance gains) must be compiled for the new ABI to boot on it”. The wreckage across the x86 ecosystem would be immense, and Microsoft would boycott them outright to try to protect itself from having to work harder to keep up, just like Adobe did with the Apple M1, at least until their userbase started canceling subscriptions en masse.
That’s why there are so many Arm Linux architectures: for Arm, this is just a fact of everyday life, and that’s what gave the M1 such a leg up over x86. Not having to support anything older than your release date means you can focus on the sort of boring incremental optimizations that wouldn’t be permissible in the “must run assembly code written twenty years ago” environment assumed by Linux/Windows today.
+ What everyone else has already said about node size leads, specific benchmarks, etc
Change the TDP, TDC, etc. and the fan curves if you don't like the thermal behavior. Your Ryzen has a low enough power draw that you could even cool it passively. It has a lower power-draw ceiling than your M1 Pro while exceeding it in raw performance.
Also comparing chips based on transistor density is mostly pointless if you don't also mention die size (or cost).
ARM is great. Those M-series machines are the only ARM hardware I could buy used and put Linux on.
I did ask an LLM for some stats about this. According to Claude Sonnet 4 through VS Code (for what that's worth), my MacBook's display can consume as much power as the CPU does for "office work", or even more. Yet my M1 Max 16" seems to last a good while longer than whatever machine I got from work this year. I'd like to know how those stats were produced (or whether they are hallucinated...). There doesn't seem to be a way to read the display's power usage on M-series Macs, so you'd need to devise a testing regime comparing display off against 100% brightness to get some indication of its effect on power use.