I know for me he has earned trust, so seeing this official denial has left me conflicted.
Which is fine: I realize he writes like that so that he can get paid. But still, it means you have to take the rumors he hands out "for free" with a grain of salt.
Chances are: "10nm is cancelled" is closer to "some version of 10nm has been cancelled: Intel may be making progress on EUV or some other technology which they will THEN call 10nm in the future." But those details are exactly what matter for Intel's Ice Lake release plans, as well as AMD's plans for Zen 2 / EPYC Rome in the coming years.
By the way, even if it means more delays and more money, it could still be an improvement for Intel if their previous path wasn't working well. But they would be motivated to hide that anyway, because they've been hiding all the wasted time and money up to this point and wouldn't want to have to own it now.
Intel Split Technology and Manufacturing Into 3 Divisions https://www.game-debate.com/news/25940/intel-split-technolog...
As of this week, Intel has announced it’s splitting its manufacturing group into three distinct segments in a massive shake-up aimed at bolstering its development.
The move is tied into the departure of long-time senior VP Sohail Ahmed, who’s been with Intel for 34 years and is currently the head of technology and manufacturing at Intel. Ahmed will be moving on shortly, and Intel will be using this moment to restructure its business.
That MS has a working x86 compatibility layer makes me think that Apple could well have something like this too. If the new chips have performance comparable with Intel's, even a 5x performance hit for "legacy" apps might not be catastrophic, since developers in the Apple ecosystem are usually fast to update their stuff and cater to the demands of the early adopters with lots of money.
Apple has gone the emulation route with the PPC/x86 transition before. With their tight grip on the ecosystem, including the development tool chain, I think most software will be updated very quickly.
This is quite different from the situation with Windows software, most of which feels like the devs hate it, the platform, and themselves.
https://www.macrumors.com/2012/06/10/a-bit-of-history-behind...
A lot of the guts are shared with iOS, which runs natively on those chips already. I think it's safe to assume they have internal macOS builds running on their A series processors as well. They've probably been testing that for years now.
NeXTSTEP/OPENSTEP ran on x86 already. The early Apple releases, called Rhapsody, were released for x86 and PowerPC.
It's possible they dropped support for x86 in early Mac OS X Server 1 releases (1999/2000), and readded it around the time of Mac OS X 10.0 to 10.2 (2001/2002), but I expect there was support in the codebase for the whole time.
It may have been practically unmaintained (and untested, and maybe even without ensuring it compiles), but I doubt they actually removed the x86 code that was there.
This would allow Apple to avoid much of the overhead of software emulation, and I'm sure AMD would be happy to play along, since it gets them a (thin) slice of Apple's margins which they would otherwise not have. After a few generations, when x86+ARM fat binaries are the norm in the macOS ecosystem, they could drop the x86 decoder (falling back to software emulation only) and presto.
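For anyone who hasn't dealt with fat (universal) binaries: here's a minimal sketch of how one source file ends up carrying both architectures, assuming Apple's clang/lipo toolchain; the exact flags and slice names are illustrative, not a claim about what Apple would ship. The loader simply picks whichever slice matches the CPU at launch.

    /* fat_demo.c - reports which slice of a universal binary is running.
     *
     * Illustrative build steps (Apple toolchain assumed):
     *   clang -arch x86_64 -arch arm64 fat_demo.c -o fat_demo
     * or build the slices separately and stitch them together:
     *   clang -arch x86_64 fat_demo.c -o fat_demo.x86_64
     *   clang -arch arm64  fat_demo.c -o fat_demo.arm64
     *   lipo -create fat_demo.x86_64 fat_demo.arm64 -output fat_demo
     *   lipo -info fat_demo    # lists the architectures packed into the file
     */
    #include <stdio.h>

    int main(void) {
    #if defined(__x86_64__)
        puts("running the x86_64 slice");
    #elif defined(__aarch64__) || defined(__arm64__)
        puts("running the arm64 slice");
    #else
        puts("running some other architecture");
    #endif
        return 0;
    }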
Last time, the Mac platform basically existed in isolation, so the only problem was that apps for the platform had to be recompiled. This time, the Mac is no longer isolated - millions of developers write client- and server-side applications on Macs that are to be run on mostly x86-based servers, and their toolchain implicitly relies on the architecture being the same on dev and prod machines. That is not to say it's impossible to change the architecture of the dev machines to something else - it's just a huge additional drawback that simply didn't exist back in the PowerPC->x86 transition.
These two facts tend to get downplayed or overlooked pretty frequently when it comes to the "ARM-based MacBooks" discussion, but I consider them fairly substantial and they dampen my enthusiasm for such a transition quite a lot.
Most command-line code installed through Homebrew is compiled on the user's machine anyway, which leaves the closed-source and legacy UI applications not distributed through the app store (but by the time Macs switch to ARM, OSX will probably forbid running those anyway).
LLVM bitcode remains architecture-specific (if not platform-specific); you cannot just recompile x86 bitcode for ARM.
> or upload fat-binaries with ARM and x86 machine code (NextStep aka OSX did this already a quarter century ago)
That doesn't obviate the need for a transition compatibility layer, complex software can take years to port to different architectures.
LLVM bitcode is platform specific. It deliberately isn't designed to be portable.
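To make that concrete, here's a minimal C sketch (mine, not from any Apple or LLVM docs): everything architecture-dependent is resolved before the bitcode is emitted, and the bitcode itself carries a target triple and data layout, so you can't hand x86 bitcode to an ARM backend and expect a correct program.

    /* All of this is decided by the preprocessor and the target ABI
     * before LLVM IR/bitcode is generated; the emitted bitcode also
     * embeds a target triple and data layout string
     * (e.g. something like "x86_64-apple-macosx10.14"). */
    #include <stdio.h>

    struct packet {
        long  length;   /* size and alignment fixed by the target ABI at compile time */
        void *payload;  /* pointer width fixed at compile time */
    };

    int main(void) {
    #if defined(__x86_64__)
        /* in x86_64 bitcode this is the only branch left; the ARM one is gone */
        printf("built for x86_64, sizeof(struct packet) = %zu\n", sizeof(struct packet));
    #elif defined(__aarch64__) || defined(__arm64__)
        printf("built for arm64, sizeof(struct packet) = %zu\n", sizeof(struct packet));
    #else
        printf("built for some other target\n");
    #endif
        return 0;
    }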
Even if they went the full-blown emulation route, A12 is almost an order of magnitude faster than the old designs Windows was running on.
They've missed like at least 3 already. 10nm was supposed to be ready in the second half of 2015. Now their most optimistic schedule is late 2019.
Too late for that, more like missing 5 deadlines at this point. 10nm at Intel is a disaster, I am not surprised they would scrap the whole thing.
The best speculation I've seen is this:
https://wccftech.com/analysis-about-intels-10nm-process/
> Our sources tell us it had to do primarily with Intel overextending too early. SAQP or Self Aligning Quad Patterning is the technique the company used to make its 10nm process and it was the first in the industry to attempt to do so.
Process node names have been detached from the reality of corresponding to specific feature sizes. It's up to the companies to figure out what performance they want to label 10nm versus 7nm, and Intel's processes have generally been more aggressive than the others. What's changed is that Intel has gone from unreachably ahead to merely in the competition.
Note that there isn't a single widely accepted way to say who has the better density, but this table sums up the various metrics pretty well.
https://www.semiwiki.com/forum/content/7544-7nm-5nm-3nm-logi...
And: https://wccftech.com/analysis-about-intels-10nm-process/
tl;dr: Intel 10nm gets 106.1 million transistors per mm^2; TSMC 7FF gets 96.49 million. Intel 10nm has an HD SRAM cell size of 0.0312 µm^2; TSMC 7FF is 0.0270 µm^2.
Intel gets a few more transistors per area, TSMC gets more SRAM per area, but on balance, they're pretty similar (quick ratio check below). From the second article:
"From figure 3 the 4 processes have similar overall process density. GF has the smallest CPP x M2P x Tracks, Intel has the highest MTx/mm2 value and Samsung has the smallest SRAM cell size. The size of a design in each of these processes will therefore be design dependent and I would not judge any of the four processes to be significantly denser than the others. In terms of relative performance, we have no way to judge that currently."
[Note: Updated this post to quote the TSMC numbers instead of the GF numbers, since TSMC is shipping and GF has pulled the plug on 7nm]
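A quick back-of-the-envelope check on those figures (arithmetic mine, numbers copied from the tl;dr above):

    /* ratios between the density figures quoted above */
    #include <stdio.h>

    int main(void) {
        double intel_mtx  = 106.1,  tsmc_mtx  = 96.49;   /* million transistors per mm^2 */
        double intel_sram = 0.0312, tsmc_sram = 0.0270;  /* HD SRAM cell area, um^2 */

        printf("logic density, Intel/TSMC:  %.2f\n", intel_mtx / tsmc_mtx);    /* ~1.10 */
        printf("SRAM cell area, Intel/TSMC: %.2f\n", intel_sram / tsmc_sram);  /* ~1.16 */
        return 0;
    }

So Intel comes out roughly 10% denser on logic while its HD SRAM cell is roughly 16% larger, which is why "pretty similar on balance" is a fair summary.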
But it would be pretty quiet on here if people only talked about what they actually know.
Though TSMC certainly isn't letting them have it for free.
It's likely that Intel will roll out a node at that size, but being forced to abandon their previous attempt is a crushing blow that has set them back years, and will likely set them back years more to come.
It takes years to develop a node size, and they have had to throw out most of their work.
DARPA has long played a huge role in furthering US semiconductor capabilities.
Maybe DARPA should step up their game then: instead of nationalising Intel, they could nationalise its IP and work on an affordable and _fast_ CPU for the people, since Intel fails to deliver.
We have yet to see a private company land a human on the moon. Imagine what CPUs we could have today if the state really took over.
http://mitsloan.mit.edu/shared/ods/documents/?DocumentID=461...
https://www.nature.com/articles/s41928-017-0005-9
and
https://papers.ssrn.com/sol3/papers.cfm?abstract_id=1545155
[Full disclosure, I'm married to one of the authors.]
Such polar implications about the efficacy of private vs. government (whether in R&D or other domains) represent reality poorly, and in fact your example contradicts itself with the enormous amounts of beneficial R&D NACA and NASA did and which industry built upon. This doesn't excuse the extremely wasteful behavior of NASA (e.g., SLS); only to say that you can't paint it all with the same brush of "wasteful, inefficient, government".
It is necessary to have capable people acting with good judgment to do "good and effective" work. Neither government nor industry have a monopoly on people of mediocre effectiveness or judgment. It is true that the government doesn't have market pressure to call it on the carpet for wastefulness. But that's the same attribute that enables it to undertake moonshots or do hard, expensive R&D that benefits society as a whole.
I don't think semiconductors would be better handled by the public sector - even if one could somehow get political support for that, which seems very unlikely in the US.