My standing theory is that the M1 will accelerate it. Obviously all the wholly managed AWS services (Dynamo, Kinesis, S3, etc.) can change over silently, but the issue is EC2. I have an MBP, as do all of my engineers. Within a few years all of these machines will age out and be replaced with M1-powered machines. At that point the idea of developing on ARM and deploying on x86 will be unpleasant, especially since Graviton 2 is already cheaper per compute unit than x86 for some workloads; imagine what Graviton 3 & 4 will offer.
Is it not at least somewhat possible that at least some of those Apple laptops will age out and be replaced with GNU/Linux laptops? Agreed that developing on ARM and deploying on x86 is unpleasant, but so too is developing on macOS and deploying on Linux. Apple’s GNU userland is pretty ancient, and while the BSD parts are at least updated, they are also very austere. Given that friction is already there, is it likelier that folks will try to alleviate it with macOS in the cloud or GNU/Linux locally?
Mac OS X was a godsend in 2001: it put a great Unix underneath a fine UI atop good hardware. It dragged an awful lot of folks three-quarters of the way to a free system. But frankly I believe Apple have lost ground UI-wise over the intervening decades, while free alternatives have gained it (they are still not at parity, granted). Meanwhile, the negatives of using a proprietary OS are worse, not better.
Has Linux desktop share been increasing lately? I'm not sure why a newer Mac with better CPU options is going to result in increasing Linux share. If anything, it's likely to be neutral or to favor the Mac with its newer, faster CPU.
> But frankly I believe Apple have lost ground UI-wise over the intervening decades, while free alternatives have gained it (they are still not at parity, granted).
Maybe? I'm not as sold on Linux gaining a ton of ground here. I'm also not sold on the idea that the Mac as a whole is worse off interface-wise than it was 10 years ago. While there are some issues, there are also places where it has significantly improved, particularly if you have an iPhone and use Apple's other services.
And I personally hope that by then, GNU/Linux will have an M1-like processor available to happily run on. The possibilities demonstrated by this chip (performance+silence+battery) are so compelling that it's inevitable we'll see them in non-Apple designs.
Also, as it usually happens with Apple hardware advancements, Linux experience will be gradually getting better on M1 Macbooks as well.
MacPorts and Homebrew exist. Both support M1 more or less and support is improving.
Big Sur is a Big Disaster, but hopefully this is just the macOS version of iOS 13 and next year's macOS goes back to being mostly functional. I have more faith in that than in a serviceable Linux desktop environment.
I agree generally though. I see macOS as an important Unix OS for the next decade.
Sadly, fewer of my coworkers use Linux now than they did 10 years ago.
Could we do a roll call of experiences so I know which ones work and which ones don't? Here are mine.
Dell Precision M6800: Avoid.

- Supported Ubuntu: so ancient that Firefox and Chrome wouldn't install without source-building dependencies.
- Ubuntu 18.04: installed but resulted in the display backlight flickering on/off at 30Hz.

Dell Precision 7200:

- Supported Ubuntu: didn't even bother.
- Ubuntu 18.04: installer silently chokes on the NVMe drive.
- Ubuntu 20.04: just works.

Some definitely will. Significant enough to assume they're not well situated on other configs? Probably not. Even the most vim- and CLI-oriented devs I know still prefer a familiar GUI for normal day-to-day work. Are they all going Ubuntu? Or Elementary? I mean, I welcome any migration that doesn't fracture the universe. But I don't think it's likely.
I’ve known colleagues who tried to run Linux professionally on well-reviewed Linux laptops, and their experience has been universally awful. Like “I never managed to get the wifi to work, ever” bad. The idea of gambling every developer on that is a non-starter even at my level, let alone across the org.
I wouldn’t be surprised if they subsidize their Graviton offering and take the profits elsewhere. This might make it seem like a good deal for customers, but I don’t think it is, at least not in the long run.
This doesn’t mean Graviton is useless. For services running Amazon’s code as opposed to customers’ code (like those PaaS offerings billed per transaction), the lock-in is already in place; custom processors aren’t going to make it any worse.
Graviton ARM is certainly vendor lock-in to Amazon. But a Graviton ARM is just a bog-standard Neoverse N1 core. Which means the core is going to show similar characteristics as the Ampere Altra (also a bog-standard Neoverse N1 core).
There's more to a chip than its core. But... from a performance-portability and ISA perspective... you'd expect performance-portability between Graviton ARM and Ampere Altra.
Now the Ampere Altra is something like 2x80 cores, while Graviton comes in a bunch of different configurations. So it's still not perfect compatibility. But a single-threaded program probably couldn't tell the difference between the two platforms.
I'd expect that migrating between Graviton and Ampere Altra is going to be easier than Intel Skylake -> AMD Zen.
That is the beauty of having properly defined numeric types and a memory model, instead of the C and derived approach of whatever the CPU gives you, with whatever memory model.
AWS could very well run their platform systems entirely on Graviton. After all, serverless and cloud is in essence someone else's server. AWS might as well run all their PaaS software on in-house architecture.
The main difference for us was lower bills.
Any ARM licensee (IP or architecture) has access to them. They're just Neoverse N1 cores and can be synthesized on Samsung or TSMC processes.
Do you think that vendor lock-in has stopped people in the past (or will in the future)? Thinking about those kinds of things is long-term, and many companies think short-term.
In theory, your software could run faster, or slower, depending upon Amazon's use of their extensions within their C library, or associated libraries in their software stack.
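That effect usually comes from runtime feature dispatch. Here's a toy, Python-flavored sketch of the ifunc-style mechanism a C library can use to pick an extension-accelerated routine per CPU; every function and feature name here is illustrative, not a real glibc symbol.

```python
def checksum_generic(data: bytes) -> int:
    # Portable scalar fallback.
    return sum(data) & 0xFFFFFFFF

def checksum_crc_ext(data: bytes) -> int:
    # Stand-in for a version that would use hardware CRC/vector instructions:
    # same result, hypothetically faster on CPUs exposing the extension.
    return sum(data) & 0xFFFFFFFF

def resolve_checksum(cpu_features: set[str]):
    """Chosen once at load time, like a glibc ifunc resolver."""
    return checksum_crc_ext if "crc32" in cpu_features else checksum_generic

# On a Graviton-class CPU the resolver would pick the accelerated path:
checksum = resolve_checksum({"asimd", "crc32"})
```

Whether your workload speeds up or slows down then depends on which paths Amazon's libraries actually bother to accelerate for their own silicon.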
Maybe the wildest thing that I've heard is Fujitsu not implementing either 32-bit or Thumb on their new supercomputer. Is that a special case?
"But why doesn’t Apple document this and let us use these instructions directly? As mentioned earlier, this is something ARM Ltd. would like to avoid. If custom instructions are widely used it could fragment the ARM ecosystem."
https://medium.com/swlh/apples-m1-secret-coprocessor-6599492...
Amazon already has lock-in. Lambda, SQS, etc. They've already won.
You might be able to steer your org away from this, but Amazon's gravity is strong.
My impression is that we have been living under the cruft of x86 because of inertia, and what are mostly historical reasons, and it's mostly a good thing if we move away from it.
"RISC" and "CISC" distinctions are murky, but modern ARM is really a CISC design these days. ARM is not at all in an "an instruction only does one simple thing, period" mode of operation anymore. It has grown instructions like "FJCVTZS", "AESE", and "SHA256H".
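To make that concrete: FJCVTZS exists purely to accelerate JavaScript's number-to-int32 conversion. A rough Python model of its semantics (a sketch of the documented behavior, not the hardware itself):

```python
import math

def js_toint32(x: float) -> int:
    """Approximate what ARM's FJCVTZS does in one instruction: JavaScript's
    ToInt32 -- truncate toward zero, wrap modulo 2**32, reinterpret signed."""
    if not math.isfinite(x):
        return 0  # NaN and infinities map to 0
    u = int(x) % (1 << 32)  # Python's int() truncates toward zero
    return u - (1 << 32) if u >= (1 << 31) else u
```

On x86 the same conversion takes a short multi-instruction sequence; baking a common high-level pattern into one instruction is exactly what the old RISC orthodoxy argued against.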
If anything, CISC has overwhelmingly and clearly won the debate. RISC is dead and buried, at least in any high-performance product segment (TBD how RISC-V ends up faring here).
It's largely "just" the lack of variable length instructions that helps the M1 fly (M1 under Rosetta 2 runs with the same x86 memory model, after all, and is still quite fast).
I would argue it isn't time for Intel to switch until we see a little more of the future as process nodes may shrink at a slower rate. Will we have hundreds of cores? Field programmable cores? More fixed function hardware on chip, or less? How will high-bandwidth high-latency gddr style memory mix with lower-latency lower-bandwidth ddr memory? Will there be on die memory like hbm for cpus?
Run on the JVM, Ruby, Python, Go, Dlang, Swift, Julia, or Rust and you won't notice a difference. It will be sooner than you think.
What fraction of products deployed to the cloud have ever had their developers do _any_ microbenchmarking?
One more thing to consider: the reason Amazon hugely prioritizes its "services" over bare-metal deployment is likely that it can run those "services" on cheap ARM hardware. Bare-metal boxes and VMs give the impression that a customer's software will perform in an x86-esque manner. For Amazon, the cost of the underlying compute per core is irrelevant, since they've already solved the problem of meshing their hardware together with blazing-fast network links. In this way, the ball is heavily in ARM's court for the future of Amazon data centers, although banking and government clients will likely not move away from x86 any time soon.
>Cloud (Intel) isn’t really challenged yet....
AWS is estimated to be ~50% of the hyperscalers.
Hyperscalers are estimated to be 50% of the server and cloud business.
Hyperscalers are expanding at a faster rate than the rest of the market.
That trend is not projected to slow down anytime soon.
AWS intends to have all of its own workloads and SaaS products running on Graviton / ARM (while still providing x86 services to those who need them).
Google and Microsoft are already gearing up their own ARM offerings, as partly confirmed by Marvell's exit from the ARM server business.
>The problem is single core Arm performance outside of Apple chips isn’t there.
Cloud computing charges per vCPU. On all current x86 instances, that is one hyper-thread. On AWS Graviton, a vCPU is an actual CPU core. There are plenty of workloads where large customers like Twitter and Pinterest have tested AWS Graviton 2 and shown its vCPUs performing better than x86, all while being 30% cheaper. At the end of the day, it is work per dollar that matters in cloud computing. And right now, in lots of applications, Graviton 2 is winning, in some cases by a large margin.
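The work-per-dollar arithmetic behind that is simple. In this sketch the prices are made up; the only figure carried over from above is the roughly 30% discount:

```python
def cost_per_unit_work(price_per_vcpu_hour: float, throughput: float) -> float:
    # Dollars spent per unit of work completed: the metric that matters on cloud.
    return price_per_vcpu_hour / throughput

x86 = cost_per_unit_work(price_per_vcpu_hour=1.00, throughput=1.00)  # baseline
# Equal per-vCPU throughput at ~30% lower price:
graviton = cost_per_unit_work(price_per_vcpu_hour=0.70, throughput=1.00)
# Even 20% lower throughput per vCPU would still win on cost:
graviton_slower = cost_per_unit_work(price_per_vcpu_hour=0.70, throughput=0.80)
```

And because a Graviton vCPU is a whole core rather than a hyper-thread, the throughput term often favors it, not just the price term.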
If AWS sells 50% of their services on ARM in 5 years' time, that is 25% of the cloud business alone. Since it offers a huge competitive advantage, Google and Microsoft have no choice but to join the race. And then there will be enough of a market force for Qualcomm, or maybe Marvell, to fab a commodity ARM server part for the rest of the market.
Which is why I was extremely worried about Intel. (Half of) the lucrative server market is basically gone (and I haven't even factored in AMD yet). Five years in tech hardware is basically 1-2 cycles. And there is nothing on Intel's roadmap showing they have a chance to compete apart from marketing and sales tactics. Those still go a long way, if I'm honest, but they're not sustainable in the long term; they're more of a delaying tactic. Add to that a CEO who, despite trying very hard, had no experience in the market and product side of the business. Luckily that is about to change.
Evaluating an ARM switch takes time, software preparation takes time, and, more importantly, getting wafers from TSMC takes time, as demand from all markets is exceeding expectations. But all of this is already in motion, and if this is the kind of response you get from Graviton 2, imagine Graviton 3.
Right. I suspect in time we'll look back to this time, and realize that it was already too late for Intel to right the ship, despite ARM having a tiny share of PC and server sales.
Their PC business is in grave danger as well. Within a few years, we're going to see ARM-powered Windows PCs that are competitive with Intel's offerings in several metrics, but most critically, in power efficiency.
These ARM PCs will have tiny market share (<5%) for the first few years, because the manufacturing capacity to supplant Intel simply does not exist. But despite their small market share, these ARM PCs will have a devastating impact on Intel's future.
Assuming these ARM PCs can emulate x86 with sufficient performance (as Apple does with Rosetta), consumers and OEMs will realize that ARM PCs work just as well as x86 Intel PCs. At that point, the x86 "moat" will have been broken, and we'll see ARM PCs grow in market share in lockstep with the improvements in ARM manufacturing capacity (TSMC, etc...).
Intel is in a downward spiral, and I've seen no indication that they know how to solve it. Their best "plan" appears to be to just hope that their manufacturing issues get sorted out quickly enough that they can right the ship. But given their track record, nobody would bet on that happening. Intel better pray that Windows x86 emulation is garbage.
Intel does not have the luxury of time to sort out their issues. They need more competitive products to fend off ARM, today. Within a year or two, ARM will have a tiny but critical foothold in the PC and server market that will crack open the x86 moat, and invite ever increasing competition from ARM.
I guess the idea is to run a Linux flavor that supports both the M1 and Graviton on the macs and hope any native work is compatible?
Windows ARM development (in a VM) should be much faster on an M1 Mac than on an x86 computer since no emulation is needed.
Also, even if you're running against a VM, your VM is running on an ISA, so performance differences between them are still relevant to your code's performance.
Intel has fabs. Yes, they may be what's holding the company back at the moment, but they are also a big factor in what maintains its value.
If x86 dies and neither Intel nor AMD pivots in time, Intel can become a fab company. They already offer these services, at nowhere near the scale of, say, TSMC, but they have a massive portfolio of fabs located in the West, and a massive IP portfolio covering everything from IC design to manufacturing.
Not unless they catch up with TSMC in process technology.
Otherwise, they become an uncompetitive foundry.
Until fairly recently, Intel had a clear competitive advantage: Their near monopoly on server and desktop CPUs. Recent events have illustrated that the industry is ready to move away from Intel entirely. Apple's M1 is certainly the most conspicuous example, but Microsoft is pushing that way (a bit slower), Amazon is already pushing their own server architecture and this is only going to accelerate.
Even if Intel can get their 7nm process online this year, Apple is gone, Amazon is gone, and more will follow. If Qualcomm is able to bring new CPUs online from their recent acquisition, that will add another high-performance desktop/server-ready CPU to the market.
Intel has done well so far because they can charge a pretty big premium as the premier x86 vendor. The days when x86 commands a price premium are quickly coming to an end. Even if Intel fixes their process, their ability to charge a premium for chips is fading fast.
Knowing something happened is not the same as knowing "why" it happened. That's the point of my comment. We don't know why they were not able to achieve volume production on 10 nm earlier.
I worked at a site (in an unrelated industry) where there was a lot of collaborative semiconductor stuff going on, and the only logo “missing” was Intel.
If you look at this from an engineering standpoint, I think you'll miss the forest for the trees. From a business and strategy standpoint, this was a classic case of disruption. The dominant player, Intel, was making tons of money on x86 and missed the mobile opportunity. TSMC and Samsung seized the opportunity to manufacture those chips when Intel wouldn't. As a result, they had more money to invest in research and better fabs, funded by the many customers buying mobile chips. Intel, being the only customer of its own fabs, could only fund fab improvements by selling more x86 chips (which were stagnating). By then, it was too late.
(For that matter, I'm astounded that after 2014 the status quo returned on rare earths with very little state-level strategy or subsidy to address the risk there.)
Taiwan's share of the semiconductor industry is 66%, and TSMC is the leader of that industry. Semiconductors help keep Taiwan safe from China's encroachment because they buy it protection from allies like the US and Europe, whose economies rely heavily on them.
To Taiwan, semiconductor leadership is an existential question. To America, semiconductors are just business.
This means Taiwan is also likely to do more politically to keep TSMC competitive, much like Korea with Samsung.
Only ASML currently has that technology.
And it turns out, the photolithography device isn’t really a plug and play device. It’s very fussy. It breaks often. And it requires an army of engineers (as cheap as possible), to man the devices, and to produce the required yield, in order to make the whole operation profitable.
This is the Achilles’ Heel of the whole operation.
I suspect that China is researching and producing their own photolithography devices, independent of American, or western technology. And when they crack it, then they will recapture the entire Chinese market for themselves. And TSMC will become irrelevant to any strategic or tactical plans for them.
Are there any signed agreements that would enforce this? If China one day suddenly decides to take Taiwan, would the US or Europe step in with military forces?
Our political system and over-financialized economy seem to suffer from the same hyper-short-term focus that many corporations chasing quarterly returns run into: no long-term planning or focus, and perpetual "election season" thrashing one way or another while nothing is followed through.
Plus, in 2, 4 or 8 years many of the leaders are gone and making money in lobbying or corporate positions. No possibly short-term-painful but long term beneficial policy gets enacted, etc.
And many still uphold our "values" and our system as the ideal, and question any that would look towards the Chinese model as providing something to learn from. So, I anticipate this trend will continue.
I don't think this will be hard. Anyone with a brain looking at the situation realizes we're setting ourselves up for a bleak future by continuing the present course.
The globalists can focus on elevating our international partners to distribute manufacturing: Vietnam, Mexico, Africa.
The nationalists can focus on domestic jobs programs and factories. Eventually it will become clear that we're going to staff them up with immigrant workers and provide a path to citizenship. We need a larger population of workers anyway.
I'm not too concerned:
- There are still a number of foundries in western countries that produce chips which are good enough for "military equipment".
- Companies like TSMC are reliant on imports of specialized chemicals and tools mostly from Japan/USA/Europe.
- Any move from China against Taiwan would likely be followed by significant emigration/"brain drain".
As for moves against Taiwan, China hasn't given up on that prize. Brain drain would be moot if China simply prevented emigration. I view Hong Kong right now as China testing the waters for future actions of that sort.
Happily though I also view TSMC's pending build of a fab in Arizona as exactly that sort of geographical diversification of industrial and human resources necessary. We just need more of it.
The only comparable data point says that this is a terrible idea. AMD spun out GlobalFoundries after a deep slide in their valuation, and the stock (as well as the company's reputation) remained in the doldrums for several years after that. Chipmaking is a big business and there are many advantages to vertical integration when both sides of the company function appropriately. If you own the fabs and there is a surge in demand (as we see now at the less extreme end of the lithography spectrum), your designs get preferential treatment.
Intel's problem isn't the structure of the company, it's the execution. Swan was not originally intended as the permanent replacement to Krzanich[0], and it's a bit strange to draw conclusions about whether the company can steer away from the rocks when the new captain isn't even going to take the helm until the middle of next month.
People are viewing Intel's suggestion that it may use TSMC's fabs for some products as a negative for Intel, but I just see it as a way to exert pressure on AMD's gross margin by putting some market demand pressure on the extreme end of the lithography spectrum (despite sustained demand in TSMC's HPC segment, TSMC's 7nm+ and 5nm are not the main driver of current semiconductor shortages).
[0] https://www.engadget.com/2019-01-31-intel-gives-interim-ceo-...
Huh, I would say the complete opposite: AMD wouldn't have survived if it had kept trying to improve its own process instead of going to TSMC.
While capitalism will likely be part of the solution, through subsidies for Intel or some other form, it must take a back seat to preventing the scenario described above from becoming reality. We are on the brink of it already, with so many people suggesting such a split and ignoring what happened to AMD and GlobalFoundries.
The geopolitical ramifications of completely centralizing the only leading process node in such a sensitive area between the world's superpowers cannot be overstated.
Full disclosure: I'm a shareholder in Intel, TSMC, and AMD.
Maybe there is a way for Intel to open up its fab business to other customers and make it more independent, without splitting it off into another company. However, it seems like that would require a change in direction that goes against decades of company culture. It might be easier to achieve that by actually splitting the fab business off.
> Maybe there is a way for Intel to open up its fab business to other customers and make it more independent, without splitting it off into another company.
Intel Custom Foundry. They have several years of experience doing exactly what you describe, and that's how their relationship with Altera (which they later acquired) began. I see AMD's subsequent bid for Xilinx as a copycat acquisition that demonstrates one of the competitive advantages of Intel's position as an IDM: information.
It appears to me that Intel stalling at 14nm is what opened the door for TSMC and Samsung to catch up. Does the same thing happen in 2028 and allow China to finally catch up?
If the two are entirely unlinked, what's stopping Intel from slapping "Now 3nm!" on their next gen processors? Surely some components must be at the advertised size, even if it's no longer a clear cut all-or-nothing descriptor, right? What's actually being sized down and why is it seemingly posing so many challenges for Intel's supply chain?
Whatever the marketing people come up with? Moore's law is not a law but an observation. It doesn't really matter, though; we are going to 3D chips, chiplets, advanced packaging, etc.
https://www.extremetech.com/computing/309889-tsmc-starts-dev...
Once that happens the value of the design part of the business will be much, much lower - especially if they have to compete with an on form AMD. Can they innovate their way out of this? Doesn't look entirely promising at the moment.
Instruction set architecture at this point is a bikeshed debate, it's certainly not what is holding Intel back.
https://debugger.medium.com/why-is-apples-m1-chip-so-fast-32...
...a big part of the reason the M1 is so fast is its large reorder buffer, which is enabled by the fact that ARM instructions are all the same size, making parallel instruction decoding far easier. Because x86 instructions are variable-length, the processor has to do some amount of work just to find out where the next instruction starts, and I can see how it would be difficult to do that work in parallel, especially compared to an architecture with a fixed instruction size.
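A toy model of the decode problem (the encodings are invented, nothing like real ARM or x86): with fixed-width instructions every boundary is known up front, so decoders can work on all of them at once; with variable widths, each boundary depends on decoding the previous instruction.

```python
def boundaries_fixed(code: bytes, width: int = 4) -> list[int]:
    # Each start is just i * width: independent of the bytes themselves,
    # so many decoders can begin in parallel.
    return list(range(0, len(code), width))

def boundaries_variable(code: bytes) -> list[int]:
    # Toy encoding: the first byte of each instruction gives its total length.
    # Each boundary depends on the previous one -- an inherently serial scan.
    starts, i = [], 0
    while i < len(code):
        starts.append(i)
        i += code[i]
    return starts
```

Real x86 decoders work around this with predecode bits and speculative length-guessing, but that costs area and power that a fixed-width design simply doesn't pay.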
The x86 atomic operations are fundamentally expensive. ARM’s new LSE extensions are more flexible and can be faster. I don’t know how much this matters in practice, but there are certainly workloads for which it’s a big deal.
x86 cannot context-switch or handle interrupts efficiently. ARM64 can. This completely rules x86 out for some workloads.
ARM64 has TrustZone. x86 has SMM. One can debate the merits of TrustZone. SMM has no merits.
Finally, x86 is more than an ISA - it’s an ecosystem, and the x86 ecosystem is full of legacy baggage. If you want an Intel x86 solution, you basically have to also use Intel’s chipset, Intel’s firmware blobs, Intel’s SMM ecosystem, all of the platform garbage built around SMM, Intel’s legacy-on-top-of-legacy poorly secured SPI flash boot system, etc. This is tolerable if you are building a regular computer and can live with slow boot and with SMM. But for more embedded uses, it’s pretty bad. ARM64 has much less baggage. (Yes, Intel can fix this, but I don’t expect them to.)
In other words, even if Intel switched ISA to ARM, it wouldn't magically fix any of these issues. We've had plenty of ARM vendors trying to do what Apple did for a long time now.
Even if every PC and server chip manufacturer were to eradicate x86 from their product offerings tomorrow, you'd still have over a billion devices in use that run on x86.
Those are different things. We have seen a minuscule movement on the first, but we've been running towards the second since the 90's, and looks like we are close now.
Intel's problems are a lot more structural in nature. They lost mobile, they lost the Mac, and we could very well be in the early stages of them losing the server (to Graviton, etc...) and the mobile PC market (if ARM PC chips take off in response to M1). Intel needs to right the ship expeditiously, before ARM gets a foothold and the x86 moat is irreversibly compromised. Thus far, we've seen no indication that they know how to get out of this downward spiral.
This is a terrible example, for the reasons stated in the article. Microsoft is already treating Windows more and more like a stepchild every day; Office and Azure are the new cool kids.
But it doesn't have to for Intel to feel the ill effects. There just have to be viable alternatives that drive down the price of their x86 offerings.
If X86 finally goes, and Intel and AMD both switched elsewhere we'd be seeing the same battle as usual but in different clothes.
On top of the raw microarchitecture design, there are also the peripherals, the RAM standards, etc.
I suppose Microsoft would be influential here. Native Arm64 MS Office, for example.
I went and dug that shirt out of a box and had a good laugh when Apple dropped the M1 macs.
Back then, the company was confident that they could make the transition to EUV lithography and had marketing roadmaps out to 5nm...
US organization, economic, and financial management at the macro scale is going through a kind of "architecture astronaut" multi-decade phase with financialization propping up abstracted processes of how to lead massive organizations as big blocks on diagrams instead of highly fractal, constantly shifting networks of ideas and stories repeatedly coalescing around people, processes, and resources into focused discrete action in continguous and continuous OODA feedback loops absorbing and learning mistakes along the way. Ideally, the expensive BA and INTC lessons drive home the urgent need for an evolution in organizational management.
I wryly think how similar the national comparative advantage argument looks to much young adult science fiction portrayal of space opera galactic empire settings with entire worlds dedicated solely to one purpose. This world only produces agricultural goods. That world only provides academician services. It is a very human desire to simplify complex fractal realities, and effective modeling is one of our species' advantages, but at certain scales of size, agility and complexity it breaks down. We know this well in the software world; some problems are intrinsically hard and complex, and there is a baseline level of complexity the software must model to successfully assist with the problem space. Simplifying further past that point deteriorates the delivery.
There are so many things that the US would have to reform to become more competitive again, but we are so invested into the FIRE economy that it's not unlike the position of the southern states before the Civil War: they were completely invested into the infrastructure of slavery and could not contemplate an alternative economic system because of that. The US is wedded to an economy based on FIRE and Intellectual Property production, with the rest of the economy just in a support role.
I'm not really a pro-organized-labor person, but I think that as a matter of national security we have to figure out a way to reform and compromise to get to the point to which we develop industry even if it is redundant due to globalization. The left needs to compromise on environmental protection, the rich need to compromise on NIMBYism, and the right needs to compromise on labor relations. Unfortunately none of this is on the table even as a point of discussion. Our politics is almost entirely consumed by insane gibberish babbling.
This became very clear when COVID hit and there was no realistic prospect of spinning up significant industrial capacity to make in-demand goods like masks and filters. In the future, hostile countries will challenge and overtake the US in IP production (which is quite nebulous and based on legal control of markets anyway) and in finance as well. The US will be in a very weak negotiating position at that point.
I had trouble reading this without falling into the cadence of Howl! by Allen Ginsberg.
By the way, I'm not sure the hnchat.com service linked in your profile works any more?
Are processor fabs analogous to auto factories and shipyards in World War II? Is the United States military's plan for a nuclear exchange with China dependent on a steady supply of cutting-edge semiconductors? Even if it is, is that strategy really going to help?
This article is mostly concerned with Intel's stock price. Why bring this into it? Let's say Intel gets its mojo back and is producing cutting-edge silicon at a level to compete with TSMC and supplying the Pentagon with all sorts of goodies... and then China nukes Taiwan? And now we cash in our Intel options just in time to see the flash and be projected as ash particles on a brick wall?
"The U.S. needs cutting-edge fabs on U.S. soil" is true only if you believe the failed assumptions of the blue team during the Millennium Challenge: that electronic superiority is directly related to battlefield superiority. If semiconductors are the key to winning a war, why hasn't the U.S. won one lately?
And what does any of this have to do with Intel? Why are we dreaming up Dr. Strangelove scenarios? Is it just that some people are only comfortable with Keynesian stimulus if it's in the context of war procurement?
The world does need some meaningful fabs outside of Taiwan/South Korea. All sub-10nm and most over-10nm semiconductor fabrication takes place within a circle of 750 km (460 mile) radius today. That is risky.
Israel, Mexico, Germany, Canada, Japan (not that it would grow the circle much...) are all viable places to run a foundry. The fact that Intel is one of the few outside that circle doesn't inspire confidence in the security of the global supply chain.
Samsung seems to be keeping it close.
Semiconductors aren’t going to help change people’s cultures or religion or tribal affiliations without decades of heavy investment in education and infrastructure, or other large scale wealth transfers.
But if “winning a war” means killing the opposing members while minimizing your own losses, surely electronic superiority will help.
Intel hasn't lost to Apple and AMD because they employ idiots, or because of their shitty company culture (in fact, they're doing surprisingly well in spite of their awful company culture). Intel lost because they made the wrong bet on the wrong type of process technology. 10 years ago (or thereabouts), Intel's engineers were certain that they had the correct type of process technology outlined to successfully migrate down from 22nm to 14nm, then down to 10nm and eventually 7, 5, and 3nm. They were betting on future advances in physics, chemistry, and semiconductor processes. Advances that didn't materialize.
EUV lithography turned out to be the best way to pattern wafers at smaller feature sizes.
So now Intel's playing catch up. Their 10nm process is still error-prone and far from stable. There are no high-performance 10nm desktop or server chips.
That's not going to continue forever though. Even on 14nm, Intel chips, while not as fast as Apple's M1 or AMD's Ryzen 5000 series, are still competitive in many areas. Intel's 14nm chips are over 6 years old. The first was Broadwell in October 2014. What do you think will happen when Intel solves the engineering problems on 10nm, and then 7nm? And then 5nm?
It took AMD 5 years to become competitive with Intel, and over 5 to actually surpass them.
If you think the M1 and 5950X are fast, then wait till we have an i9-14900K on 5nm. It'll make these offerings look quaint by comparison.
EDIT: I say this as a total AMD fanboy by the way, who bought a 3900X and RX 5700 XT at MicroCenter on 7/7/2019 and stood in line for almost five hours to get them, and as someone who now has a Threadripper 3990X workstation. I love AMD for what they've done... they took us out of the quad-core paradigm and brought us into the octa-core paradigm of x86 computing.
But I am under no illusions that they're technically superior to Intel. Their process is what allows them to outperform Intel, not their design. I guarantee you that if Intel could mass produce their CPUs on their 7nm process (which is far, far more transistor dense than TSMC's 7nm), AMD would be 15-25% behind on performance.
It isn't so much that AMD is succeeding because they're technically superior... they're succeeding because Zen's design team made the right bet and because Intel's engineering process team made the wrong bet.
Intel certainly has the potential of being technically superior to AMD, but they do not appear to have focused on the right things in their roadmaps for CPU evolution.
Many years before the launches of Ice Lake and Tiger Lake, Intel's enthusiastic presentations about the future claimed that these would bring marvelous improvements in microarchitecture, but the reality has proven to be much more modest.
From Skylake in 2015 to Ice Lake in 2019 there was a decent increase in IPC, but it was still much less than expected after so many years. While they were waiting for the manufacturing process, they should have redesigned their CPU cores to achieve something better than this.
Moreover, the enhancements in Ice Lake and Tiger Lake seem somewhat unbalanced and random; there is no discernible grand plan for how to improve a CPU.
On the other hand, the evolution of the Zen cores was perfect: each time, the AMD team seems to have been able to add precisely the improvements that could give the maximum performance increase with the minimum implementation effort.
Thus they were able to pass from Zen 1 (2017) with an IPC similar to Intel Broadwell (2014), to Zen 2 (2019) with an IPC a little higher than Intel Skylake (2015) and eventually to Zen 3 (2020) with an IPC a little higher than Intel Tiger Lake (2020).
So even if the main advantage of AMD remains the superior CMOS technology they use from TSMC, just due to the competence of their design teams, they have passed from being 3 years behind Intel in IPC in 2017, to being ahead of Intel in IPC in 2020.
If that is not technical superiority, I do not know what is.
Like I have said, I believe that Intel could have done much better than that, but they seem to have done some sort of a random walk, instead of a directed run, like AMD.
I think that everyone will take advantage of the migration to ARM to push more lock in, despite the supposedly open ARM architecture.
A sort of poison pill: "you get more performance and better battery life, but you can't install apps of type A, B and C and those apps can only do X, Y and Z".
Intel will either solve its process issues in its own factories or, worst case, they outsource production to TSMC - either of which eliminates any process/manufacturing advantage held by AMD and Apple.
On a 5-10 year timeline, I don’t see a reason for Intel to continue stumbling on 5nm and 3nm processes though.
At that point, the only sustainable leverage the rest of the world would have in chip technology would be ASML.
Why is it thought-provoking? It's always realpolitik. All wars are.
There's always a pretext but the subtext is what actually causes wars.
Additionally, Intel works with ASML and other similar suppliers. Intel even owns a chunk of ASML.
To go even further than your comment (with which I agree, 5nm isn't the center of AMD's income right now), TSMC isn't even making most of its wafer revenue from 5nm and 7nm. Straight from the horse's mouth (Wendell Huang, CFO):
"Now, let's move on to the revenue by technology. 5-nanometer process technology contributed 20% of wafer revenue in the fourth quarter, while 7-nanometer and 16-nanometer contributed 29% and 13%, respectively. Advanced technologies, which are defined as 16-nanometer and below, accounted for 62% of wafer revenue. On a full-year basis, 5-nanometer revenue contribution came in at 8% of 2020 wafer revenue. 7-nanometer was 33% and 16-nanometer was 17%."
https://www.fool.com/earnings/call-transcripts/2021/01/14/ta...
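The quoted figures are internally consistent, which is worth checking since they span quarterly and full-year periods. A quick sketch of the arithmetic (all numbers copied from the transcript above):

```python
# Wafer-revenue shares by process node, Q4 2020, as quoted from
# TSMC's earnings call.
q4_shares = {"5nm": 20, "7nm": 29, "16nm": 13}

# "Advanced technologies" are defined in the call as 16nm and below;
# the three listed nodes sum exactly to the stated 62%.
advanced = sum(q4_shares.values())
print(advanced)  # 62

# Full-year 2020 shares from the same quote. Note 5nm is only 8% of
# annual wafer revenue, so the leading edge is not yet the bulk of
# TSMC's business.
fy2020_shares = {"5nm": 8, "7nm": 33, "16nm": 17}
print(sum(fy2020_shares.values()))  # 58
```

So even at TSMC, the majority of wafer revenue still comes from nodes at 7nm and above.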
Given that mobile apps are more lightweight and consume far fewer resources than their Electron counterparts, would people prefer to use those instead? Especially if their UIs were updated to support larger desktop screens.
Android phones these days have at least 4GB of RAM, and mobile apps are in general more limited; plus, you run fewer of them in parallel, as they tend to be offloaded from RAM once the limit is reached.
> Solution Two: Subsidies
Solution Three: lower prices/margins (temporarily) to match the value proposition of AMD on Windows PCs and Linux Cloud servers.
Furthermore, AMD is not the biggest threat to Intel. The biggest threat is cloud providers like Amazon designing their own chips, which is already happening. If those succeed, who would build them? Certainly not Intel, if they continue to manufacture only their own designs -- that business, like so much other fab business, will go to TSMC.
Maybe. I didn't suggest becoming the cheap option; I suggested re-evaluating its premium pricing strategy in the short term to reflect current and future customer value. Margin stickiness seems to be a built-in bias similar to the sunk-costs fallacy.
Server-side Neoverse is a threat but a slow-moving one. I'm assuming that "Breakup" (going fabless) will not show benefits for many months if not years. Price seems like an obvious lever; perhaps I'm being naive about pricing but it's not obvious to me why.
I really hope Intel does better than IBM with Power.
This is like the Microsoft pivot into cloud to save itself.
Unfortunately, I doubt that the US government functions well enough at this point to recognize the threat and overcome the influence Intel's money would wield against the effort.
I really like this concept, though I’d advocate for a straight subsidy (sales of American-made chips to a U.S.-registered and based buyer get $ credit, paid directly to the supplier and buyer, on proof of sale and proof of purchase) given the logistical issues of the U.S. government having a stockpile of cutting-edge chips it can’t dump on the market.
Semiconductor manufacturing is just one example where this is happening, electronics is another. Maybe one day Toyota Auto fabs will be making Teslas.
Intel tried to maintain a competitive advantage and introduced several innovative technology design efforts with its next generation DRAM offerings. These products did not provide enough competitive advantage, thus the company lost its strategic position in the DRAM market over time. Intel declined from an 82.9% market share in 1974 to a paltry 1.3% share in 1984.
Intel’s serendipitous and fortuitous entry into microprocessors happened when Busicom, a Japanese calculator company, contacted Intel for the development of a new chipset. Intel developed the microprocessor but the design was owned by Busicom. Legendary Intel employee Ted Hoff had the foresight to lobby top management to buy back the design for uses in non calculator devices. The microprocessor became an important source of sales revenue for Intel, eventually displacing DRAMs as the number one business.
https://anthonysmoak.com/2016/03/27/andy-grove-and-intels-mo...
1. Apple's CPUs will not improve anywhere near as fast as the competition. Computation per watt of (some) competitors' products will outpace Apple's in just a few years.
2. Intel will come roaring back on the back of TSMC, but first will need to wait on growth of manufacturing capacity, as certain competitors can get more money per mm^2.
3. Intel will fail to address its product-quality problem, but it will not end up hurting them.
It's a shame the Mill is so secretive, actually; their design is rather nice.
VLIW works (especially in the way it was done in Itanium, IIRC) only when your workload is very predictable, or perhaps if your compiler manages to be an order of magnitude smarter than compilers are today (even with LLVM, etc.)
It seems even the M1 prefers to reorder scalar operations rather than work with SIMD ops in some cases (on at least one of its cores)
Can someone update us on where AWS offers them, if at all?
That said, the EPYC machine type is available in 12 zones of four different regions in the US, which isn't bad.
2022: share price tanks, CEO booted, they shuffle but don't have a plan; no longer blue chip, so financing is hard to come by. Delisting. Everyone booted. Doors close.
2023/4: AMD only game in town. profits and volumes up. so are the faults and vulnerabilities. They spend most of their effort in fixes and not innovation.
2024: M1 chip available on dells/hps/thinkpads. AWS only use Graviton unless customer specifically buys another chip.
2025: Desktop ARM chip available on Dells/HPs/ThinkPads.
2025: AWS makes a 'compile-to-anything' service: decompiler and recompiler on demand.
2026: AMD still suffering. Hires Jim Keller for the 20th time. makes a new ZEN generation that beats M1 and Arm. AMD goes into mobile CPUs.