As is, even without researching it further, it appears very likely that Lockdown Mode is easy to fingerprint via a browser using the information shared in the linked article. Spoofing functionality that is actually off is not a common thing, and it would be very hard, if not impossible, if the attacker combined it with a challenge-response-style counter-measure to confirm the functionality is actually accessible to the end user.
I think the more realistic threat model here is presented by ad networks and major websites doing typical kinds of browser fingerprinting (canvas, fonts, etc.), as well as possibly some of the techniques mentioned in the article, like WebGL, JIT'd JS, and so on.
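To make the fingerprinting claim concrete: a minimal sketch of how a page might flag a Lockdown-like client by probing features the mode is known to disable. The probe list, the `looksLikeLockdown` name, and the threshold are illustrative assumptions, not Apple's documented behavior or a real tracker's logic.

```javascript
// Hypothetical sketch: probe features Lockdown Mode is known to restrict
// (e.g. WebGL, web fonts) and classify the client. Probe details are
// assumptions for illustration.
function probeFeatures(win) {
  return {
    webgl: (() => {
      try {
        // In a real browser this checks whether a WebGL context is usable.
        const c = win.document.createElement('canvas');
        return !!(c.getContext('webgl') || c.getContext('experimental-webgl'));
      } catch {
        return false;
      }
    })(),
    // A real probe would attempt to load a custom web font and check whether
    // it actually rendered; this stub only checks the Font Loading API.
    customFonts: typeof win.document.fonts !== 'undefined',
  };
}

// Pure classification step, testable without a browser: if several
// normally-ubiquitous features are all missing, the client is a strong
// candidate for Lockdown Mode (or a very unusual browser).
function looksLikeLockdown(features) {
  const disabled = Object.values(features).filter((v) => !v).length;
  return disabled >= 2; // illustrative threshold
}

console.log(looksLikeLockdown({ webgl: false, customFonts: false })); // true
console.log(looksLikeLockdown({ webgl: true, customFonts: true }));   // false
```

The point is that no single probe is needed: the *combination* of missing features is itself the fingerprint, which is why spoofing would have to fake all of them consistently.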
In that case, with a limited number of trusted sites whose compatibility we focus on ensuring, spoofing is easier, because we can pay close attention to making sure our "middleman" fixes the errors introduced by spoofed client-to-server communications.
Some technologies like WebGL will simply never work on a spoofed site, of course. But for the very limited number of sites where users lose important functionality, they can just turn off Lockdown Mode.
If a Lockdown'd phone habitually patronizes malicious websites, the protection will never be enough anyway. So we shouldn't worry about protecting against being fingerprinted by a very malicious website - Lockdown users must simply avoid these, with or without a fingerprinting vulnerability!
If you're suggesting Apple should proxy all internet traffic to devices — that is a horrible idea, incredibly dangerous, and a huge step in the wrong direction. To counter the issues I pointed out, Apple would literally have to be able to decrypt all the traffic and act as if they were the user, which is obviously an insane security issue.
As for avoiding malicious websites, again, I don't believe you understand what advanced attacks look like. Any site can be hacked, and if it is, fingerprinting can be used to attack only a very well-defined list of known targets. For example, a very well-known CEO of a security startup used a limousine service; after this was discovered, the service was hacked and used to launch an attack against them.
I understand you're interested in the topic, and that's great, but try to balance your technical familiarity, your familiarity with the topic, and the very real threat security breaches pose to a very small subset of the world. These features are not intended to counter ad companies, but attackers that, in the worst case, will ultimately kill the target.
iCloud Private Relay already exists.
https://support.apple.com/en-us/HT212614
3rd party description of it:
And again, is it a realistic threat model to imagine that a high volume website, trusted enough to be browsed regularly by Lockdown-paranoid users, will be hacked in such a way as to deliver a fingerprinting attack to browsers, and only that?
I appreciate the sense of superiority that you have, but try to follow along.
The device has the features turned off because they are known to be hard to harden against attacks or, worse, have known vulnerabilities. To spoof them being on, requests touching the disabled functionality would have to be isolated by a proxy and sent to another device that accurately responds as if the functionality were on, including defeating counter-measures specifically designed by an attacker to confirm the end user has real-time control over the proxied system. It just makes no sense to build such a complex system, and in the majority of situations it would require another device that is itself vulnerable to attack and must always be near the target and the secured device.
>> And again, is it a realistic threat model to imagine that a high volume website, trusted enough to be browsed regularly by Lockdown-paranoid users, will be hacked in such a way as to deliver a fingerprinting attack to browsers, and only that?
The simple answer is yes. Also, it doesn't have to be a high-volume website, just one the target trusts enough to visit.