I always thought it made more sense to introduce capabilities in higher-performance applications (all the stuff you might use an Arm A-class core for), given they're pretty heavyweight. This is what Arm's Morello (https://www.arm.com/architecture/cpu/morello) offers. However, introducing them at the low end, in the embedded space, may work a lot better. The A-class space has a huge software ecosystem and your software likely comes from multiple vendors, so it's an uphill struggle to inject capabilities there, especially if you want to make full use of them.
With embedded applications you tend to have far tighter control over the whole software stack; there's a lot more vertical integration and it's pretty static. Once you've deployed your product it's doing the same job day in, day out. You need occasional updates, maybe the odd new feature, but it's a very different world from the software stack on a typical phone. So overall it's easier for a single company or group to say 'yes, let's try capabilities' and just get on and do it.
Security is potentially a lot more critical in these applications as well. Everyone knows IoT security is a joke, but regulators are watching, and future legislation will put a lot more liability on the manufacturers of IoT devices. They'll need to demonstrate they've taken security seriously, and using a capability-based system is one way to do that.
Operational technology (industrial IoT) is also a key area of concern for security. Having insecure, internet-enabled operational technology running critical infrastructure and industrial processes is clearly a major issue. The various cyber security agencies across the Western world recognise this and published a guide urging security by design and by default (https://www.cisa.gov/resources-tools/resources/secure-by-des...), and it explicitly mentions CHERI. Again, the initial costs and the work to introduce capabilities become very justifiable against the security benefits (and, critically for companies, the liability reduction).
"First steps in CHERIoT Security Research"
https://msrc.microsoft.com/blog/2023/02/first-steps-in-cheri...
Ironically, the future of secure computing is bringing back memory tagging.
I find it's often the case that exciting new tech turns out to have its fundamental principles described in a paper from the 60s or 70s ;)
Part of it seems to be that technological improvements (such as better silicon and faster compute/storage/communication as well as more input data) can enable formerly impractical ideas to scale up/out and become useful.
Another part is that the performance and power wall has forced CPU designers to think about other ways to improve CPUs, such as improving security and reliability. Maybe the market will finally be willing to trade off some cost and speed for better security and reliability.
Lastly, software which used to be impractical or overly expensive because of resource usage can often run easily on modern hardware.
Burroughs was one of the first systems with it; the Lisp and Ada machines, Xerox workstations, IBM mainframes, and ETH Zürich systems, among others, followed. All of them were rather expensive or niche compared with what became regular consumer hardware.
The failure of Intel's iAPX 432 project probably did not help either.