What in the world is this disaster of an opening paragraph? It lurches from the weird "AI PC platform" (not sure what that is) to the claim that it "will be the most broadly adopted and globally available AI PC platform" (is that a promise? a prediction? a threat?).
And you just gotta love the processor names "Intel Core Ultra Series 3 Mobile X9/X7"
Oh, the number of times I’ve heard someone assume their five- or ten-year-old machine must be powerful because it’s an i7… no, the i3-14100 (released two years ago) is significantly superior across the board to the i7-9700 (released five years before that), and only falls behind the i9-9900 in multithreaded performance.
Within the same product family and generation, I expect 9 is better than 7, but honestly it wouldn’t surprise me to find counterexamples.
At best, 14700KF-Intel+AMD might yield relevant results.
It’s not really meant for consumers. Who would even visit newsroom.intel.com?
What is an AI PC? ('Look, Ma! No Cloud!')
An AI PC has a CPU, a GPU and an NPU, each with specific AI acceleration capabilities. An NPU, or neural processing unit, is a specialized accelerator that handles artificial intelligence (AI) and machine learning (ML) tasks right on your PC instead of sending data to be processed in the cloud. https://newsroom.intel.com/artificial-intelligence/what-is-a...
> Are ZBooks good or do I want an OmniBook or ProBook? Within ZBook, is Ultra or Fury better? Do I want a G1a or a G1i? Oh you sell ZBook Firefly G11, I liked that TV show, is that one good?
https://geohot.github.io/blog/jekyll/update/2025/11/29/bikes...
It’s an AI PC platform. It can do AI. It has an NPU and integrated GPU. That’s pretty straightforward. Competitors include Apple silicon and AMD Ryzen AI.
They’re predicting it’ll sell well, and they have a huge distribution network with a large number of partner products launching. Basically they’re saying every laptop and similar device manufacturer out there is going to stuff these chips in their systems. I think they just have some well-placed confidence in the laptop segment, because it’s supposed to combine the strong efficiency of the 200 series with the kind of strong performance that can keep up with or exceed competition from AMD’s current laptop product lineup.
Their naming sucks but nobody’s really a saint on that.
Silicon taken up that could've been used for a few more compute units on the GPU, which is often faster at inference anyway and far more useful/flexible/programmable/documented.
It's... the launch vehicle for a new process. Literally the opposite of "cost cutting", they went through the trouble of tooling up a whole fab over multiple years to do this.
Will 18A beat TSMC and save the company? We don't know. But they put down a huge bet that it would, and this is the hand that got dealt. It's important, not something to be dismissed.
But I won't be investing time and money in Intel again while the same anti-engineering beancounter board is still there. For example, they never owned up to the recent serious Raptor Lake hardware issues, and they never showed customers how they'll make sure it never happens again.
https://en.wikipedia.org/wiki/Raptor_Lake#Instability_and_de... "Intel has decided not to halt sales or recall any units"
The only reason INTC isn't in a death spiral is because the US Govt. won't let that happen
A laser focus on five things is either business nonsense or optics nonsense. Who was this written for?
Intel called it a “one-off mistake”; it’s the best mistake Intel ever made.
https://download.intel.com/newsroom/2026/CES2026/Intel-CES20...
I was at CES 2024 and saw the Snapdragon X Elite chip running a local LLM (Llama, I believe). How did it turn out? Users can't use that laptop for much besides running an LLM. They had no plans for a translation layer like Apple's Rosetta. Intel would certainly be different in that regard, but I just don't think it will fly against Ryzen AI chips or Apple silicon.
I agree with losing faith in Intel chips though.
I think maybe what OP meant was that the memory occupied by the model meant you couldn't do anything alongside inferencing, e.g. have a compile job or whatever running (unless you unload the model once you're done asking it questions).
To be honest, we could really do with RAM abundance. Imagine if 128GB RAM became like 8GB RAM is today - now that would normalize local LLM inferencing (or at least make a decent attempt at it).
Of course you'd need the bandwidth too...
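For a rough sense of why capacity matters, here's a back-of-the-envelope sketch (my own numbers: assumes a dense model where weights dominate, plus a made-up ~20% fudge factor for KV cache and runtime overhead):

```python
# Rough LLM memory footprint: parameter count (billions) x bytes per weight,
# with a ~20% fudge factor for KV cache and runtime overhead (assumed).
def model_gb(params_b: float, bits_per_weight: float, overhead: float = 1.2) -> float:
    return params_b * (bits_per_weight / 8) * overhead

for params in (8, 70):
    for bits in (16, 8, 4):
        print(f"{params}B @ {bits}-bit: ~{model_gb(params, bits):.0f} GB")
# 70B @ 4-bit comes out around 42 GB resident.
```

A 70B model at 4-bit is ~42 GB: hopeless next to today's typical 16GB, comfortable if 128GB were the norm.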
The new Intel node seems to be somewhat weaker than TSMC's going by the frequency numbers of the CPUs, but what matters most in a laptop is real battery life anyway.
Lunar Lake is also very slow in ST and MT compared to Apple.
Qualcomm's X Elite 2 SoCs have a much better chance of duplicating the Macbook experience.
What is the AI PC platform? The basic Windows 11 UI, even just the Start menu, leaves a lot to be desired. Is Copilot adoption on Windows really that popular, and does it take advantage of this AI PC platform?
Ryzen AI 400 mobile CPU chips are also releasing soon (though ROCm is still blah, I think)
Nvidia is still playing in the AI space despite all the noise others make about their AI offerings. And despite Intel's hype, Nvidia's margins have been incredible recently (i.e., people are still buying them), so their platform hasn't exactly been killed by Intel's "most broadly adopted" AI platform offering.
>Series 3 will be the most broadly adopted and globally available AI PC platform Intel has ever delivered.
The true competitor is Ryzen AI, Nvidia doesn't produce these integrated CPU/GPU/AI products in the PC segment at all.
What actually makes it an AI platform? Some tight integration of an intel ARC GPU, similar to the Apple M series processors?
They claim 2-5x performance for some AI workloads. But aren't they still limited by memory? The same limitation as always in consumer hardware?
I don't think it matters much whether you're limited by an Nvidia GPU with at most ~16GB or some new Intel processor with similar memory.
Nice to have more options, though. I kinda wish the Intel Arc GPU would be developed into an alternative for self-hosted LLMs. 70B models can be quite good but are still difficult/slow to use self-hosted.
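To put numbers on the memory limitation: batch-1 token generation is roughly memory-bandwidth-bound, since each generated token streams the full weights once. A crude roofline sketch (my own illustrative bandwidth figures; assumes a dense 70B model at 4-bit, batch size 1, no speculative decoding):

```python
# Crude decode-speed roofline: every generated token reads all weights once,
# so tokens/sec is capped at memory bandwidth / weight bytes (batch size 1).
def tokens_per_sec(bandwidth_gbs: float, weights_gb: float) -> float:
    return bandwidth_gbs / weights_gb

weights = 70 * 0.5  # 70B parameters at 4-bit: ~35 GB of weights
for name, bw in [("dual-channel DDR5-5600", 89.6),
                 ("128-bit LPDDR5X-9600", 153.6),
                 ("RTX 4090 GDDR6X", 1008.0)]:
    print(f"{name}: ~{tokens_per_sec(bw, weights):.1f} tok/s")
```

So even with enough RAM, a 128-bit consumer memory bus caps a 70B model at roughly 4 tok/s, which is exactly the "difficult/slow" part.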
The latest Ryzen mobile CPU line didn't improve performance compared to its predecessor (the integrated GPU is actually worse), and I think the NPU is to blame.
Logic density (may be inaccurate, and it's not the only metric for performance): Rapidus 2nm ≈ TSMC N2 > TSMC N3B > TSMC N3E/P > Intel 18A ≈ Samsung 3GAP
But 18A/20A already have PowerVia, while TSMC will only implement backside power delivery in A16 (the successor to N2)
As for comparison between the two: According to TechInsights, Intel's 18A could offer higher performance, whereas TSMC's N2 may provide higher transistor density - [1]
[0] - https://www.tomshardware.com/pc-components/cpus/intel-announ...
[1] - https://www.tomshardware.com/tech-industry/intels-18a-and-ts...
The CPUs are probably fine too!
Intel is so far ahead with consumer multi-chip. AMD has done amazing work splitting into IOD+CCD (I/O die / core complex die) chiplets, basically putting a northbridge on package, but is still trying to figure out how to make a decent mainline multi-chip APU for 2027's Medusa Point; they can't keep pushing monolithic APU dies like they have (excellent as those have been, FWIW). Intel, with its sweet EMIB, has been breaking the work up already, and hopefully is reaping the reward here. Stashing some tiny / very low power cores on the "northbridge" die is a genius move that saves incredible power for light use: a big+little+tiny design that lets the whole CCD shut down while work happens. Some very nice high-core-count configs too. Panther Lake could be super exciting.
18A with backside power delivery / "PowerVia" could really be a great leap for Intel! Nice big solid power delivery wins, that could potentially really help. My fingers are so very crossed. Really hope the excitement for this future arriving pans out, at least somewhat!
Their end-of-year Nova Lake with b(ig)LLC and an even bigger, newer NPU6 (any new features beyond TOPS?) is also exciting. I hope it also includes the incredible Thunderbolt/USB4 connectivity Intel has typically included on mobile chips, but I'm not holding my breath. Every single mobile part is capable of 4x Thunderbolt 5. That is sick. I really hope AMD realizes the ball is in its court on interconnects at some point!! 20-lane PCIe configs are also very nice to have for mobile.
Lunar Lake was quite good for what it was: an amazingly well integrated chip with great characteristics. As a 2+4 big/little part it wasn't enough for developers, but it was a great consumer chip. I think Intel's really going to have a great total system design with Panther Lake. Yes!
https://www.tomshardware.com/pc-components/cpus/intel-double...
Healthy Intel/GF/TSMC competition at the head of the pack is great for the tech industry, and the global economy at large.
Perhaps even more importantly, with armed conflict looming over Taiwan and TSMC... well, enough said.
Yes, you do need to spend more energy sending data between chiplets. Intel has been relentlessly optimizing that and is probably the furthest ahead of the game there, with EMIB and Foveros. AMD just got to a baseline sea-of-wires, where they aren't using power-hungry PHYs to send data, and that is only shipping on Strix Halo at the moment and is slated to be a big change for Zen 6. But Intel's been doing all that and more, IMO. https://chipsandcheese.com/p/amds-strix-halo-under-the-hood https://www.techpowerup.com/341445/amd-d2d-interconnect-in-z...
That also has some bandwidth constraints on your system too.
There's the labor cost of doing package assembly! Very non-trivial, very scary, very intimidating work. Knowing that TSMC's Arizona chips have to be shipped back to Taiwan, assembled/packaged there, then potentially shipped wherever is anecdata, but very real anecdata. This just makes me respect Intel all the more for having such interesting chips, such as Lakefield ~6 years ago, and for their ongoing pursuit of this as a challenge.
So yeah, there are many optimal aspects to a single die. You're making a problem really hard by trying to split it up across chips.
It's not even clear why we want multi-chip. As a consumer, if you had your choice, yes, you're right: we do want one big huge slab of a chip. There aren't many structural advantages for us in getting anything other than what we want, on one big chip.
And yet. The cost savings can potentially be fantastically huge. Yields increase as your square-millimeterage shrinks, at some geometric or similar rate. Being able to push more advanced nodes that don't have the best yields, without it being an epic fail, allows for ongoing innovation and risk acceptance.
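The classic way to see this is a Poisson yield model, Y = exp(-A·D0). A quick sketch (the defect density here is made up for illustration, not any foundry's real number):

```python
import math

# Poisson die-yield model: Y = exp(-area * defect_density).
def poisson_yield(area_mm2: float, defects_per_mm2: float) -> float:
    return math.exp(-area_mm2 * defects_per_mm2)

d0 = 0.002  # defects/mm^2 -- illustrative value for a young node
print(f"one 400 mm^2 monolithic die: {poisson_yield(400, d0):.1%}")  # ~44.9%
print(f"one 100 mm^2 chiplet:       {poisson_yield(100, d0):.1%}")  # ~81.9%
# Four 100 mm^2 chiplets use the same silicon, but a bad chiplet is binned
# individually: ~82% of chiplet wafer area is sellable vs ~45% monolithic.
```

Same wafer, same defects; splitting the die just means one defect kills 100 mm² instead of 400 mm².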
There are modularity dividends too. You can also tune appropriately: just as AMD keeps re-using the IOD across generations, Intel can innovate one piece at a time. This is extremely liberating from a development perspective: you don't have to get everything totally right, and you can absorb faults not in the wafer but at the design level. Maybe the new GPU isn't going to ship in 6 months after all, so you keep using the old one, but you can still get the rest of the upgrades out.
There are maybe some power wins. I don't really know how much difference it makes, but Intel just shutting down the CCD and using the tiny cores on the IOD (to use AMD's terms) is relishably good. It's easy for me to imagine a big NPU or a big GPU doing likewise. I'm expecting similar from AMD with Medusa Point, their 2027 big APU (but still below Medusa Halo, which I cannot frelling wait to see).
I think Intel's been super super smart, has incredible vision about where chipmaking is headed, and has been well ahead of the curve. Alas, their P-core has been around in one form or another for a long time and is a bit of a hog, and shipping new nodes has been a disaster. But I think they're set up well, and, as frustrating and difficult as leaving the convenience of a big-chip APU is, it feels like that time is here, and Intel's top of class at multi-chip in a way few others are. We're seeing AMD have to do the same (Medusa Point).
Optimal is a suboptimal statement. Only the Sith deal in absolutes, Anakin.
P-core max frequency is 5.1 GHz on the highest-end part, and 4.4 GHz on the lowest.
There's no hyperthreading: https://www.pcgamer.com/hardware/processors/now-youve-got-so...
Dunno about AVX and APX. They're not making it easy to find, so... probably not.
https://www.intel.com/content/www/us/en/products/sku/245716/...
Now, unified memory shared freely between the CPU and GPU would be cool, like Apple and AMD Strix Halo have, if that’s what you meant.
Qualcomm's laptop chips thus far have also not had on-package RAM. They have announced that the top model from their upcoming Snapdragon X2 family will have a 192-bit wide memory bus, but the rest will still have a 128-bit memory bus.
Intel Lunar Lake did have on-package RAM, running at 8533 MT/s. This new Panther Lake family from Intel will run at 9600 MT/s for some of the configurations, with off-package RAM. All still with a 128-bit memory bus.
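For comparison, peak bandwidth follows directly from bus width and transfer rate. Sketch below (the 192-bit line reuses 9600 MT/s purely as an assumption; as far as I know Qualcomm hasn't confirmed the X2's memory speed):

```python
# Peak DRAM bandwidth: (bus width in bytes) x (transfers per second).
def bandwidth_gbs(bus_bits: int, mts: int) -> float:
    return (bus_bits / 8) * mts / 1000

print(bandwidth_gbs(128, 8533))  # Lunar Lake on-package:      ~136.5 GB/s
print(bandwidth_gbs(128, 9600))  # Panther Lake top configs:    153.6 GB/s
print(bandwidth_gbs(192, 9600))  # 192-bit at the same assumed rate: 230.4 GB/s
```

So the wider 192-bit bus is worth roughly 1.5x over any of the 128-bit parts at equal transfer rate.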
edit: fix typo
Update: looks like the Trump admin converted billions in unpaid CHIPS Act grants into an equity stake in Intel last year https://techhq.com/news/intel-turnaround-strategy-panther-la...
1) Battery life claims are specific and very impressive, possibly best in class. 2) Performance claims are vague and uninspiring.
Either this is an awful press release or this generation isn't taking back the performance crown.