> We expect that Apple's new products over the next 12-18 months will adopt processors made with the 5nm process, including the new 2H20 5G iPhone, the new 2H20 iPad equipped with mini LED, and the new 1H21 Mac equipped with Apple's own-design processor. We think that the iPhone's 5G support, the iPad's adoption of innovative mid-size panel technology, and the Mac's first adoption of an own-design processor are all critical product and technology strategies for Apple. Given that the processor is the core component of these new products, we believe that Apple increased 5nm-related investments after the epidemic outbreak. Further, Apple occupying more of the related suppliers' resources will hinder competitors' development.
IMO, an x86_64 chip makes way more sense. The patents are about to expire. Removing nearly all of the legacy-mode-only cruft (which is not as much as you might think, but tends to sit in the critical data path) and making a chip that runs at least x86_64 user-mode code would align with how they removed 32-bit support in Catalina.
I'm curious what you're thinking of here. In fact, almost all user-mode code paths on modern parts run out of the uOp cache, completely decoupled from the legacy stuff. Even in the kernel, locking and mode switching on the normal paths don't hit any major fallbacks. There's a ton of microcode and other legacy handling for odd corner cases, for sure, but really not on performance-critical loads.
Also, the instruction decode cases for 16-bit mode are still in the main instruction decoder and not in ucode, AFAIK. The encodings are almost the same, and there isn't enough ucode space for it all, but removing those cases from the muxes would help power consumption. Yes, you run out of the uOp cache a lot of the time, but not as much as you might think, and AFAIK the instruction decoder is still cranking away in the background, because you want it immediately available as soon as an instruction misses the uOp cache. That means power efficiencies can be gained there.
MMU - the x86 and ARM ones are pretty different
I'd def bet that if they're making an x86 chip, it shares a lot of RTL with their A-series cores, but the distinction is probably more that they have a shared library of primitives and build pretty different uarchs from them.
On the other hand, though, AMD legitimately does have fairly close ties to Apple. Jim Keller has bounced around a lot, but Apple and AMD are where he started new major uarchs. And Hygon Dhyana and the game consoles show that AMD is more than willing to work with high-volume OEMs on semi-custom designs, particularly to empower their security architectures. Yes, Intel includes custom logic for security, but not to the same degree as AMD. I think Intel includes all of its customers' custom logic on most of the masks but fuses off or otherwise hides the functionality; AMD goes hog wild with custom masks.
You've given me a bunch to think about, thanks! I hadn't really considered AMD here.