Most importantly - it's the user who needs to know whether their system has been tampered with, not apps.
False analogy. You can’t have your kitchen knife exploited by a hacker team in North Korea that shotgun-attacks half of the public Internet infrastructure and uses the proceeds to fund the national nuclear program, can you? (I exaggerate somewhat, but you get the idea.)
> Systems can be secure and trusted by the user without having to cede control
In an ideal world where users have infinite information and infinite capacity to process and internalize it and become infosec experts, sure. I don’t know about you, but most of us don’t live in that world.
I agree it’s not perfect. Having to use Liquid Glass and being unable to install custom watch faces is ridiculous. There’s probably an opportunity for a hardened OS that interested parties can trust not to be maliciously altered, and that doesn’t force as many constraints onto users as current walled gardens do. But a fully open OS, plus an ordinary user who has no time or willingness to casually become a tptacek on the side, on top of a completely unrelated full-time job that’s getting more competitive due to LLMs and whatnot, seems more like a disaster than a utopia.
Isn’t the status quo that you need to intentionally choose to allow this?
It's also really incredible how people can see "user being in control" and just immediately jump to "user having to be an infosec expert", as if one implied the other. You can't really discuss things in good faith in such a climate :(
Some countries do :) Though I think physical analogies are misleading in a lot of ways here.
> Systems can be secure and trusted by the user without having to cede control, and some risks are just not worth eliminating.
Secure, yes, trustworthy to a random developer looking at your device, no. They're entirely separate concepts.
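To make the distinction concrete: in remote attestation it's the verifier's policy, not the device's actual security, that decides the outcome. A minimal sketch in Rust, with entirely hypothetical names (not any real attestation API, though Play Integrity and similar schemes work in this spirit):

```rust
// Hypothetical server-side attestation check: all names here are
// illustrative, not a real attestation API.

struct AttestationToken {
    signed_by_vendor_root: bool, // chains to the hardware vendor's root key
    boot_state_verified: bool,   // device reports an unmodified, vendor-signed OS
}

// The app developer's policy. Note what's absent: the owner's opinion.
// A user-patched system can be perfectly secure and still fail this
// check, because "secure" and "trusted by this verifier" are judged
// by different parties against different criteria.
fn developer_trusts_device(token: &AttestationToken) -> bool {
    token.signed_by_vendor_root && token.boot_state_verified
}

fn main() {
    // An owner-controlled build: the hardware still signs the token
    // honestly, but it reports a non-vendor OS, so the developer
    // rejects it regardless of how secure the owner's setup is.
    let own_build = AttestationToken {
        signed_by_vendor_root: true,
        boot_state_verified: false,
    };
    assert!(!developer_trusts_device(&own_build));
}
```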
> Most importantly - it's the user who needs to know whether their system has been tampered with, not apps.
Expecting users to know things does a lot of heavy lifting here.
A user being informed means they have to know what a compromised system would even entail. That alone is a huge and, frankly, impossible thing to expect from regular people.
> Most users won't even be bothered to choose and that's fine too, but with remote attestation, it's not the user who decides even if they want to.
> And we don't need random developers looking at our devices to consider them trustworthy, it's none of their business and it's a big mistake to let them.
Then you can't demand those developers trust your device.
The systems used by regular people could just refuse to boot further when they detect a compromise, so I'm not sure where this comes from. We have prior art for that too (ChromeOS and Android verified boot, for example). This is still orthogonal to letting users who want to patch things patch them, and to not letting apps verify what environment they run in. It's all compatible with each other, and with both regular and power users.
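Here's a minimal sketch of that boot policy, with hypothetical names and a toy checksum standing in for a real signature scheme, showing how protecting regular users and letting owners patch coexist:

```rust
// Hypothetical bootloader policy; the names and the toy "signature"
// are illustrative, not a real verified-boot implementation.

#[derive(Clone, Copy)]
struct PublicKey([u8; 32]);

struct Bootloader {
    vendor_key: PublicKey,
    // Set by the physical owner, e.g. from the bootloader menu;
    // None on a stock device.
    user_enrolled_key: Option<PublicKey>,
}

// Toy keyed checksum standing in for a real signature scheme
// (production code would verify e.g. Ed25519 over the image hash).
fn toy_sign(image: &[u8], key: &PublicKey) -> u8 {
    let mut acc: u8 = 0;
    for (i, byte) in image.iter().enumerate() {
        acc = acc.wrapping_add(byte.wrapping_mul(key.0[i % 32]));
    }
    acc
}

// Returns true if the OS image may boot; false means halt and tell
// the user their system was tampered with.
fn may_boot(bl: &Bootloader, image: &[u8], sig: u8) -> bool {
    // Stock image signed by the vendor: boots normally.
    if toy_sign(image, &bl.vendor_key) == sig {
        return true;
    }
    // Image signed by a key the owner deliberately enrolled: also
    // boots. The owner made that choice; no remote party is consulted.
    if let Some(key) = &bl.user_enrolled_key {
        if toy_sign(image, key) == sig {
            return true;
        }
    }
    // Anything else is treated as tampering: refuse to boot further.
    false
}

fn main() {
    let owner_key = PublicKey([9; 32]);
    let bl = Bootloader {
        vendor_key: PublicKey([7; 32]),
        user_enrolled_key: Some(owner_key),
    };
    let image = b"patched-os";
    // The owner's patched build boots; a forged signature does not.
    assert!(may_boot(&bl, image, toy_sign(image, &owner_key)));
    assert!(!may_boot(&bl, image, 0));
}
```

This is roughly the model GrapheneOS uses on Pixels today: the owner enrolls their own verified-boot key and re-locks the bootloader, keeping the refuse-to-boot-on-tampering protection while running a modified OS.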
> Then you can't demand those developers trust your device.
Somehow we managed that for decades. Whether we'll still be able to in the future depends only on how much noise and friction we make about it now.