Do you trust any modern OS not to accidentally include sensitive information when it generates a crash report for an app and sends it off to some remote server in the background?
Isolation is a useful tool. In an ideal world it can be done perfectly at the OS level, but we don't live in that world.
The problem with the DRM and "trusted computing" side of this is that it's under someone else's control, some central authority. From my reading of the docs, that is not the case with pVM. From https://source.android.com/docs/core/virtualization/security:
> Data is tied to instances of a pVM, and secure boot ensures that access to an instance’s data can be controlled
> When a device is unlocked with fastboot oem unlock, user data is wiped.
> Once unlocked, the owner of the device is free to reflash partitions that are usually protected by verified boot, including partitions containing the pKVM implementation. Therefore, pKVM on an unlocked device won't be trusted to uphold the security model.
So my reading of this is that it is under the user's control, as long as they have the ability to unlock the bootloader and reflash the device with their own images.
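As a concrete sketch, the unlock-and-reflash flow those docs describe looks roughly like this (the `pvmfw` partition name is my assumption based on recent Pixel devices; check your device's documentation, and note this requires a device with OEM unlocking enabled):

```shell
# Reboot into the bootloader and unlock it.
# WARNING: as the quoted docs note, unlocking wipes all user data.
adb reboot bootloader
fastboot oem unlock        # 'fastboot flashing unlock' on newer devices

# Once unlocked, partitions normally protected by verified boot can be
# reflashed, including the protected-VM firmware. The partition name
# 'pvmfw' is an assumption; it may differ per device.
fastboot flash pvmfw my_pvmfw.img
fastboot reboot
```

After that, pKVM is whatever image the owner flashed, which is exactly why an unlocked device is no longer trusted to uphold the security model.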
I'd love someone more knowledgeable to weigh in, but to me this tech doesn't seem that close to TPM/DRM-type chips, where there is no possibility of user control.
There are also vendors that are doing smart contract execution in trusted computing devices so you can get the benefits of trusted execution without the overhead of everyone executing the same code.
I think there are use cases like this outside the mobile _phone_ that are interesting. For example, on-device learning for edge devices where the device is not under your control.
This absolutely isn't the case. I know a number of vendors who are deploying edge ML capacity in satellites where the use case is for "agencies" to deploy ML algorithms that they (the vendors) cannot see.