It's not an easy problem! But Apple should work on solving it: they have already put in some effort in this direction on macOS, although they have their hands tied behind their back there because they're going from unrestrictive → restrictive, and such changes usually break things and make people angry. On iOS they pretty much have a clean slate to start with.
Solutions in this area generally share a few characteristics. The first is that the "secure" case is useful for 95+% of people, to the point that they might not even know there are other, more permissive modes. The second is putting surmountable but significant barriers in the way of disabling these features, to prevent casual or unintentional deactivation: strange key combinations that lead to scary text and a wiped device seem fairly effective at keeping out people who cannot give informed consent. And a third is allowing a user-specified root of trust: for example, one can imagine an iPhone that is every bit as secure as any other iPhone today because I have enabled all the security features, except that it attests against my keys instead of Apple's.

There's a lot of interesting work being done in this area. One I personally like is Chromebooks, which serve the dual purpose of being secure, locked-down devices for general consumer use while also being useful for development work. There we're seeing interesting solutions such as using KVM to run an isolated Linux, developer mode, write-protect screws, …
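To make the "user-specified root of trust" idea concrete, here's a minimal sketch of how a boot verifier might trust either the vendor's root or an owner-enrolled one. All names here are hypothetical, and hashing a public key stands in for real signature verification; an actual implementation would verify a certificate chain, require physical presence to enroll a key, and wipe user data when the root changes.

```python
import hashlib

# Fingerprint of the vendor's key, baked into firmware at the factory.
# (Illustrative placeholder bytes, not a real key.)
VENDOR_ROOT = hashlib.sha256(b"vendor-public-key").hexdigest()

class SecureBoot:
    def __init__(self):
        # By default only the vendor's root of trust is accepted.
        self.trusted_roots = {VENDOR_ROOT}

    def enroll_user_root(self, user_public_key: bytes):
        # On a real device this step is where the "surmountable but
        # significant barrier" lives: physical presence, scary warnings,
        # and a full data wipe before the new root takes effect.
        self.trusted_roots.add(hashlib.sha256(user_public_key).hexdigest())

    def verify(self, image: bytes, signer_public_key: bytes) -> bool:
        # Boot is allowed iff the image's signer matches a trusted root.
        # (A real verifier checks a signature over `image`; here the key
        # fingerprint check stands in for that.)
        return hashlib.sha256(signer_public_key).hexdigest() in self.trusted_roots

boot = SecureBoot()
assert not boot.verify(b"my-os-image", b"my-public-key")  # rejected by default
boot.enroll_user_root(b"my-public-key")
assert boot.verify(b"my-os-image", b"my-public-key")      # now attests against my key
```

The point is that every other security property stays intact: the device still refuses anything that doesn't chain to a trusted root, it's just that the owner decides what that root is.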