I hope I’m wrong though.
I agree it would be odd for the Mac line to bifurcate like that, but even in the PPC -> x86 days they supported PPC Macs for quite a long time.
Apple has been getting great returns from its chip investments because it has been able to reuse the same blocks across so many devices. If they do go ARM for the Mac, there are diminishing returns as they move up the product line in how much of the silicon they can reuse. They are right at the point where power-hungry multiple memory controllers, complex core interconnects, and other things that will likely never make it into an iPhone become necessary. I suppose they could use the highest-end Macs as test beds to see if ideas work out, but is Apple really going to have chips fabbed with 20+ big cores and complex inter-core transport for products they will never sell in huge numbers?
I think for the highest end Apple is better off continuing to piggyback on Intel's server investments and focusing its chip team where it has been absolutely dominating: sub-10W, incredible performance per watt.
macOS on two instruction sets would require some extra developer time, but volunteers keep Debian running on 18 architectures. After the initial port there is some care and feeding, but it's manageable. The biggest issues are cross compiling, something Apple is already really good at, and device drivers. Apple has the latter covered with the T2 chip: move more and more of the peripheral connectivity into something like the T2, and then you only have to write and maintain aarch64 drivers.
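The cross-compiling part is mostly a solved problem in Apple's toolchain already. A minimal sketch of what building one source for two slices could look like (the target triples, file names, and the arm64 macOS target are illustrative assumptions, not shipping SDK values):

```shell
# Hypothetical sketch: compile the same source for two architectures
# using clang's -target flag. Triples shown are illustrative.
clang -target x86_64-apple-macos10.14 hello.c -o hello_x86_64
clang -target arm64-apple-macos11     hello.c -o hello_arm64

# Stitch the per-architecture slices into one universal (fat) binary.
lipo -create -output hello hello_x86_64 hello_arm64

# Inspect which architectures a binary contains.
lipo -info hello
```

The same mechanism carried the 68k/PowerPC and PowerPC/Intel transitions, so developers would mostly see an extra checkbox in Xcode rather than a new workflow.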
The more I think about it, the more sense it makes.
And it’s not that they can’t technically, it’s that they are all about focus and unambiguous messages to developers and customers.
But after reading it all, I guess they could fork the Mac into two categories: Air (light, or whatever) and Pro.
I actually like that idea a lot.
MOS/68k, 68k/PowerPC, PowerPC/Intel, Intel/ARM
Now I'm guessing the DWARF format will be x86-64/arm32/arm64, with arm32 being legacy.
Back then we were also switching from GCC to LLVM, which at the time I thought was ludicrous because we'd be losing all of the architectural flexibility GCC gave us. But I guess my worries were unfounded.