If the two are entirely unlinked, what's stopping Intel from slapping "Now 3nm!" on their next gen processors? Surely some components must be at the advertised size, even if it's no longer a clear cut all-or-nothing descriptor, right? What's actually being sized down and why is it seemingly posing so many challenges for Intel's supply chain?
Nothing.
It started when Samsung were using feature sizes purely to gain a competitive marketing advantage, and then TSMC had to follow because their customers and shareholders were putting a lot of pressure on them.
While ingredient branding is important, at the end of the day the chip has to perform. Otherwise your ingredient branding would suffer and such a strategy would no longer work. Samsung are already getting a taste of their own medicine.
P.S. That "Now 3nm!" reminds me of "3DNow!" from AMD.
Intel's 5-nanometer process node is expected to ramp around the 2023 timeframe.
Intel's 14nm chips are already competitive with AMD's (TSMC's, really) 7nm chips. The i7-11700, or whatever the newest one coming out soon is called, is going to be pretty much at parity with AMD's Ryzen 5000 series.
So if node shrinkage brings such a dramatic improvement in performance and power usage, then when Intel unfucks themselves and refines their 10nm node, their 7nm node, and whatever node comes after that, they'll clearly be more performant than AMD... and Apple's M1.
Process technology is holding Intel back. They fix that, they get scary again.
Intel's 14-nm process has only one advantage over any other process node, including Intel's own 10 nm: the highest achievable clock frequency, up to 5.3 GHz.
This advantage is very important for games, but not for most other purposes.
Since AMD's first 7-nm chips, their CPUs have consumed much less power at a given clock frequency than Intel's 14-nm parts.
Because of this, whenever more cores are active and the clock frequency is limited by total power consumption, the clock frequency of the AMD CPUs is higher than that of any Intel CPU with the same number of active cores. This led to AMD winning every multi-threaded benchmark even with Zen 2, when they did not yet have the IPC advantage over Intel that they have with Zen 3.
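To make the power-limited argument concrete, here is a very rough back-of-the-envelope sketch. The power model and every constant in it are invented for illustration, not measured figures; the only point is that lower power at a given clock translates directly into a higher all-core clock under a fixed package power budget.

    # Rough illustration of why per-core efficiency sets the all-core clock.
    # All constants are invented for illustration; none are measured values.
    POWER_BUDGET_W = 125.0   # hypothetical package power limit
    CORES = 8

    def max_all_core_clock(watts_per_ghz_cubed):
        # Crude model: per-core dynamic power grows roughly with f^3
        # (frequency itself plus the voltage needed to sustain it),
        # so the sustainable all-core clock is the cube root of the
        # per-core share of the power budget divided by the coefficient.
        per_core_budget = POWER_BUDGET_W / CORES
        return (per_core_budget / watts_per_ghz_cubed) ** (1 / 3)

    efficient_node = 0.18  # W per GHz^3, made-up "7nm-class" efficiency
    older_node = 0.28      # W per GHz^3, made up, ~55% more power at the same clock

    print(f"efficient node: {max_all_core_clock(efficient_node):.2f} GHz all-core")
    print(f"older node:     {max_all_core_clock(older_node):.2f} GHz all-core")

With those made-up numbers the more efficient node sustains roughly 4.4 GHz all-core versus roughly 3.8 GHz for the hungrier one, which is the shape of the Zen 2 vs. 14-nm situation described above.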
With the latest variant of Intel's 10-nm process, Intel has about the same power consumption at a given frequency and the same maximum clock frequency as the TSMC 7-nm process.
So Intel should have been able to compete with AMD by now, except that they still appear to have huge difficulties making larger chips in sufficient quantities. They are therefore forced to use workarounds, like launching the Tiger Lake H35 series of laptop CPUs with smaller dies, to have something to sell until they are able to produce the larger 8-core Tiger Lake H CPUs.
I disagree. The majority of desktop applications are only lightly threaded, e.g. Adobe products, office suites, Electron apps, and anything mostly written before 2008.
The only metric on which Intel's 14nm beats TSMC's 7nm is the clock-speed ceiling. Other than that, there is nothing competitive about an Intel 14nm chip compared to an AMD (TSMC) 7nm chip from a processing perspective.
And that is not a fault of TSMC or AMD. They just decided not to pursue that route.
I think it's more that people attribute too much significance to process node technology when trying to understand why performance & power are what they are.
For single-core performance, the gains from a node shrink are in the low-teen percentages. Power improvements at the same performance are a bit better, but still not as drastic as people tend to assume.
10-20 years ago, just having a better process node was a massive deal. These days it's overwhelmingly CPU design & architecture that dictate things like single-core performance. We've been "stuck" in the 3-5 GHz range for something like half a decade now, and TSMC has worse performance here than Intel's existing 14nm: there still hasn't been a single TSMC 7nm or 5nm part that hits that magical 5 GHz mark reliably enough for marketing, for example. And that's all process node performance is - clock speed. The M1 only runs at 3.2 GHz - you could build that on Intel's 32nm without any issues. Power consumption would be a lot worse, but you could have had "M1-like" single-core performance way back in 2011 - if you had a time machine to take back all the single-core CPU design lessons & improvements, that is.
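Putting rough numbers on the "design vs. node" point: single-thread throughput is approximately IPC times clock, so a wide core at a modest clock can land in the same place as a narrower core at a much higher clock. The IPC figures below are invented placeholders (real IPC is heavily workload-dependent); only the shape of the comparison matters.

    # Single-thread throughput is roughly instructions-per-cycle times clock.
    # The IPC numbers below are invented placeholders, not measurements.
    def relative_perf(ipc, freq_ghz):
        return ipc * freq_ghz  # arbitrary units

    wide_low_clock = relative_perf(ipc=5.0, freq_ghz=3.2)     # "M1-like": wide core, modest clock
    narrow_high_clock = relative_perf(ipc=3.0, freq_ghz=5.3)  # "14nm-like": narrower core, high clock

    print(wide_low_clock, narrow_high_clock)  # 16.0 vs 15.9 -- roughly a wash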
To achieve its very high IPC, the M1 replicates a lot of internal resources and also uses very large caches. All of that requires a huge number of transistors.
Implementing an M1-like design in an earlier technology would have required a very large die area, resulting in a cost so high and a power consumption so high that such a design would have been infeasible.
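As a rough sanity check on the area argument: the M1 is reported to have about 16 billion transistors on a die of roughly 120 mm². The older-node density and reticle limit below are loose assumptions used purely for an order-of-magnitude estimate.

    # Back-of-the-envelope die-area estimate. The M1 transistor count and die
    # size are public figures; the older-node density is a loose assumption.
    M1_TRANSISTORS = 16e9
    M1_DIE_MM2 = 120          # approximate
    OLD_NODE_DENSITY = 8e6    # assumed ~8 MTr/mm^2 for a ~32nm-class node
    RETICLE_LIMIT_MM2 = 850   # approximate single-exposure limit

    area_old = M1_TRANSISTORS / OLD_NODE_DENSITY
    print(f"hypothetical area on an older node: {area_old:.0f} mm^2")
    print(f"times the reticle limit: {area_old / RETICLE_LIMIT_MM2:.1f}x")
    # ~2000 mm^2, i.e. more than twice the reticle limit -- not manufacturable
    # as a single die, even before considering yield or cost.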
However, you are partially right in the sense that Intel clearly was overconfident because of their clock-frequency advantage, and they decided on a roadmap for increasing the IPC of their CPUs across the Skylake => Ice Lake => Alder Lake series that was much less ambitious than it should have been.
While Tiger Lake and Ice Lake have about the same IPC, Alder Lake is expected to bring an increase similar to that from Skylake to Ice Lake.
Maybe that will be competitive with Zen 4, but it is certain that the IPC of Alder Lake will still be lower than the IPC of the Apple M1, so Intel will continue to be able to match Apple's performance only at higher clock frequencies, which causes higher power consumption.
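The cost of "matching at a higher clock" is superlinear, because voltage has to rise with frequency and dynamic power scales roughly with C·V²·f. The voltage/frequency curve and clock figures below are invented placeholders, just to show the shape of the penalty.

    # Why matching performance via clock is expensive: dynamic power scales
    # roughly with C * V^2 * f, and V must rise with f. The voltage curve
    # and clock figures below are illustrative assumptions, not chip data.
    def dynamic_power(freq_ghz, base_v=0.7, v_per_ghz=0.08, cap=1.0):
        v = base_v + v_per_ghz * freq_ghz   # crude linear voltage/frequency curve
        return cap * v ** 2 * freq_ghz

    low_clock = 3.6    # hypothetical wide-core clock
    high_clock = 4.5   # hypothetical clock a lower-IPC core needs to match it (+25%)

    ratio = dynamic_power(high_clock) / dynamic_power(low_clock)
    print(f"power ratio to match performance: {ratio:.2f}x")  # ~1.44x for +25% clock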
It’s closer to two decades, actually. The Pentium 4 (Northwood) reached 3.06 GHz in 2002, using a 130 nm fabrication process.