For me, he has earned trust, so seeing this official denial has left me conflicted.
Which is fine: I realize he writes like that so that he can get paid. But still, it means you have to take the rumors he hands out "for free" with a grain of salt.
Chances are, "10nm is cancelled" is closer to "some version of 10nm has been cancelled: Intel may be making progress on EUV or some other technology which they will THEN call 10nm in the future." But those details are what matter for Intel's Ice Lake release plans, as well as AMD's plans for Zen 2 / EPYC Rome in the coming years.
Intel Split Technology and Manufacturing Into 3 Divisions https://www.game-debate.com/news/25940/intel-split-technolog...
As of this week, Intel has announced it’s splitting its manufacturing group into three distinct segments in a massive shake-up aimed at bolstering its development.
The move is tied into the departure of long-time senior VP Sohail Ahmed, who’s been with Intel for 34 years and is currently the head of technology and manufacturing at Intel. Ahmed will be moving on shortly, and Intel will be using this moment to restructure its business.
That MS has a working x86 compatibility layer makes me think that Apple could well have something like this too. If the new chips have performance comparable with Intel's, even a 5x performance hit for "legacy" apps might not be catastrophic, since developers in the Apple ecosystem are usually fast to update their stuff and cater to the demands of the early adopters with lots of money.
Apple has gone the emulation route with the PPC/x86 transition before. With their tight grip on the ecosystem, including the development tool chain, I think most software will be updated very quickly.
This is quite different from the situation with Windows software, much of which feels like the devs hate it, the platform, and themselves.
https://www.macrumors.com/2012/06/10/a-bit-of-history-behind...
A lot of the guts are shared with iOS, which runs natively on those chips already. I think it's safe to assume they have internal macOS builds running on their A series processors as well. They've probably been testing that for years now.
This would allow Apple to avoid much of the overhead of software emulation, and I'm sure AMD would be happy to play along, since it gets them a (thin) slice of Apple's margins which they would otherwise not have. After a few generations, when x86+ARM fat binaries are the norm in the macOS ecosystem, they could drop the x86 decoder (falling back to software emulation only) and presto.
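For anyone unfamiliar with what a "fat binary" actually is: it's a single file whose header lists one slice per architecture, and the loader picks the matching one. A minimal sketch in Python, packing and parsing a synthetic Mach-O fat header; the magic and CPU-type constants are from Apple's `<mach-o/fat.h>`, but the offsets and sizes below are made-up values purely for illustration.

```python
import struct

FAT_MAGIC = 0xCAFEBABE            # big-endian fat-header magic, <mach-o/fat.h>
CPU_TYPE_X86_64 = 0x01000007
CPU_TYPE_ARM64 = 0x0100000C

def build_fat_header(arches):
    """Pack a synthetic fat header: magic, slice count, then one
    20-byte fat_arch record (cputype, cpusubtype, offset, size, align)
    per architecture. All fields are big-endian."""
    blob = struct.pack(">II", FAT_MAGIC, len(arches))
    for cputype, offset, size in arches:
        blob += struct.pack(">iiIII", cputype, 0, offset, size, 14)
    return blob

def list_arches(blob):
    """Walk the fat_arch records and report (name, offset, size) per slice."""
    magic, count = struct.unpack_from(">II", blob, 0)
    assert magic == FAT_MAGIC, "not a fat binary"
    names = {CPU_TYPE_X86_64: "x86_64", CPU_TYPE_ARM64: "arm64"}
    slices = []
    for i in range(count):
        cputype, _, off, size, _ = struct.unpack_from(">iiIII", blob, 8 + 20 * i)
        slices.append((names.get(cputype, hex(cputype)), off, size))
    return slices

# Two slices with invented offsets/sizes, standing in for the per-arch code.
blob = build_fat_header([(CPU_TYPE_X86_64, 0x4000, 1234),
                         (CPU_TYPE_ARM64, 0x8000, 1234)])
print(list_arches(blob))  # → [('x86_64', 16384, 1234), ('arm64', 32768, 1234)]
```

On a real Mac, `lipo -info <binary>` does this inspection on actual files; the point is that supporting both ISAs is a packaging concern, not a runtime one, which is why dropping one slice later is cheap.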
Most command-line code installed through Homebrew is compiled on the user's machine anyway, which leaves the closed-source and legacy UI applications not distributed through the App Store (but by the time Macs switch to ARM, OSX will probably forbid running those anyway).
Even if they went the full-blown emulation route, A12 is almost an order of magnitude faster than the old designs Windows was running on.
They've missed like at least 3 already. 10nm was supposed to be ready in the second half of 2015. Now their most optimistic schedule is late 2019.
Too late for that, more like missing 5 deadlines at this point. 10nm at Intel is a disaster, I am not surprised they would scrap the whole thing.
The best speculation I've seen is this:
https://wccftech.com/analysis-about-intels-10nm-process/
> Our sources tell us it had to do primarily with Intel overextending too early. SAQP or Self Aligning Quad Patterning is the technique the company used to make its 10nm process and it was the first in the industry to attempt to do so.
Process node names have been detached from the reality of corresponding to specific feature sizes. It's up to the companies to figure out what performance they want to label 10nm versus 7nm, and Intel's processes have generally been more aggressive than the others. What's changed is that Intel has gone from unreachably ahead to merely in the competition.
Though TSMC certainly isn't letting them have it for free.
It's likely that Intel will roll out a node at that size, but being forced to abandon their previous attempt is a crushing blow that has set them back years, and will likely set them back years more to come.
It takes years to develop a node size, and they have had to throw out most of their work.
DARPA has long played a huge role in furthering US semiconductor capabilities.
Maybe DARPA should step up their game then: instead of nationalising Intel, they could nationalise its IP and work on affordable and _fast_ CPUs for the people, since Intel fails to deliver.
We have yet to see a private company land a human on the moon. Imagine what CPUs we could have today if the state really took over.
Such polar implications about the efficacy of private vs. government (whether in R&D or other domains) represent reality poorly, and in fact your example contradicts itself with the enormous amounts of beneficial R&D NACA and NASA did and which industry built upon. This doesn't excuse the extremely wasteful behavior of NASA (e.g., SLS); only to say that you can't paint it all with the same brush of "wasteful, inefficient, government".
It is necessary to have capable people acting with good judgment to do "good and effective" work. Neither government nor industry has a monopoly on people of mediocre effectiveness or judgment. It is true that the government doesn't have market pressure to call it on the carpet for wastefulness. But that's the same attribute that enables it to undertake moonshots or do hard, expensive R&D that benefits society as a whole.