> On devices with iOS 14 and iPadOS 14 or later, Apple modified the C compiler toolchain used to build the iBoot bootloader to improve its security. The modified toolchain implements code designed to prevent memory- and type-safety issues that are typically encountered in C programs. For example, it helps prevent most vulnerabilities in the following classes:
> • Buffer overflows, by ensuring that all pointers carry bounds information that’s verified when accessing memory
> • Heap exploitation, by separating heap data from its metadata and accurately detecting error conditions such as double free errors
> • Type confusion, by ensuring that all pointers carry runtime type information that’s verified during pointer cast operations
> • Type confusion caused by use after free errors, by segregating all dynamic memory allocations by static type
They made a dialect of C with bounds safety, see:
https://james.darpinian.com/blog/apple-imessage-encryption/
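The core idea behind a bounds-safe C dialect is that every pointer carries its bounds, and every dereference is checked against them. A minimal Python sketch of that "fat pointer" idea follows; the class name and API are purely illustrative, not Apple's actual dialect:

```python
class FatPointer:
    """A pointer that carries bounds, checked on every access."""
    def __init__(self, buffer, start, length):
        self.buffer = buffer
        self.start = start
        self.length = length  # number of bytes this pointer may touch

    def load(self, index):
        if not 0 <= index < self.length:
            raise IndexError(f"out-of-bounds read at offset {index}")
        return self.buffer[self.start + index]

    def store(self, index, value):
        if not 0 <= index < self.length:
            raise IndexError(f"out-of-bounds write at offset {index}")
        self.buffer[self.start + index] = value

heap = bytearray(16)
p = FatPointer(heap, start=4, length=4)  # a 4-byte slice of the heap
p.store(0, 0xAB)       # in bounds: fine
try:
    p.store(4, 0xCD)   # one past the end: trapped instead of corrupting memory
except IndexError as e:
    print("trapped:", e)
```

In the real compiler-level version the bounds live alongside the pointer (or are derived from annotations) and the checks are emitted by the compiler, so plain C dereference syntax still works.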
My current understanding of the facts:
1. Google defaults to encrypted backups of messages, as well as e2e encryption of messages.
2. Apple defaults only to e2ee of messages, leaving a massive backdoor.
3. Closing that backdoor is possible for the consumer, by enabling ADP (Advanced Data Protection) on your device. However, this makes no difference, since 99.9% of the people you communicate with will not close the backdoor. Thus, the only way to live is to assume that all the messages you send via iMessage will always be accessible to Apple, no matter what you do.
It's not like overall I think Google is better for privacy than Apple, but this choice by Apple is really at odds with their supposed emphasis on privacy.
I was unable to use Apple Fitness+ on my TV due to it telling me my Watch couldn’t pair with the TV.
The problem went away when turning off ADP.
Turning off ADP required opening a support case with Apple, which took three weeks to resolve; before that, any attempt to turn it off would just fail with no detailed error.
Other things like iCloud on the web were disabled with ADP on.
I just wanted encrypted backups, that was it.
It would be bad PR for Apple if everybody constantly kept losing their messages because they had no way to get back into their account.
How does Google manage this, such that every normie on earth isn’t freaking out?
Unbreakable phones are coming. We’ll have to decide who controls the cockpit: The captain? Or the cabin?

Both promise security, but Apple promises some degree of privacy. Google stores your encryption keys, and so does Apple unless you opt in to ADP.
Is it similar to Facebook Messenger (encrypted in transit and at rest, but Meta can read it) and Telegram (keys held by Telegram unless you start a secret chat)?
There are things Pixels do that iPhones don’t, e.g., you get notified when a local cell tower picks up your IMEI. I mean, it’s meaningless since they all do it, but you can also enable a higher level of security to avoid 2G. Not sure it’s meaningful, but it’s a nice-to-have.
Differences in capabilities, experience and implementation are all downstream from that. In other words, everyone pays lip service to privacy and security, but it's very difficult to believe that parties like Meta or Google are actually being honest with you. The incentives just aren't there.
With Apple, you get to fork over your wallet, but at least you seem to be primarily the user they’ve got to provide services to.
With Google/Meta, you're a sucker to bleed dry.
By default, Apple offers you at no charge: email aliases, Private Relay, the “Ask App Not to Track” barrier. These are just the ones I can think of right now; I am sure there are more. A big thing with Apple is not that they offer different privacy services, but that they make them EASY and SEAMLESS to use. No other company comes close.
Apple also makes it easier to achieve that privacy:
- They put all the privacy controls in one place in Settings so you can audit
- App developers are mandated to publish what they collect when publishing apps to the App Store.

As was demonstrated in LA, it's starting to have significant civil rights consequences.
https://www.apple.com/newsroom/pdfs/fy2024-q4/FY24_Q4_Consol...
https://www.macrumors.com/2025/10/30/apple-4q-2025-earnings/
Would Google or Meta go bankrupt if they stopped selling ads? Yes. Apple wouldn’t.
I know, I'm living in a fantasy world in my head.
Two years ago I was locked out of my MacBook pro.
Then I just booted into some recovery mode and just... reset the password!?
Sure, macOS logged me out of (most) apps and websites, but every single file was there, unencrypted!
I swear, people who keep boasting about that whole Apple privacy thing have absolutely no clue what they are talking about; nothing short of tech-illiterate charlatans. But God, the propaganda works.
And don't get me started on iMessage.
Would you prefer that Apple did not give you the option to disable the security feature you disabled during setup?
Apple has since confirmed in a statement provided to Ars that the US federal government “prohibited” the company “from sharing any information,” but now that Wyden has outed the feds, Apple has updated its transparency reporting and will “detail these kinds of requests” in a separate section on push notifications in its next report.

We know now that it was all marketing talk. Apple didn’t like Meta, so they spun a bunch of obstacles. Apple has and would use your data for ads, models, and anything that keeps the shareholders happy. And we don’t know the half of the story where, as a US corp, they’re technically obliged to share data from the not-E2EE iCloud syncs of every iPhone.
Illegal to do this in (at least) the EU, California and China.
Our choices are either (A) an OS monetized by tracking user interaction and activity, or (B) one monetized by owning the basic act of installing software on the device. Both of these options suck, and I struggle to give up the more open option for one that might be more secure.
I wouldn't say they are sound. First, macOS provides the freedom to install your own applications (ok, they need to be signed and notarized if the quarantine attribute is set) and it's not the case that the Mac has mass malware infestations. Second, the App Store is full of scams, so App Store - safe, external - unsafe is a false dichotomy.
Apple uses these arguments, but of course the real reason is that they want to continue to keep 30% of every transaction made on an iPhone or iPad. This is why they have responded to the DMA with a lot of malicious compliance that makes it nearly impossible to run an alt-store financially.
(Despite my qualms about not being able to install apps outside the app store, I do think they are doing a lot of good work of making the platform more secure.)
Security, privacy, and ownership aren't equally separated in my mind.
This apparently includes retrieving all photos from iCloud in chunks of a specified size, which seems an infinitely better option than attempting to download them through the iCloud web interface, which caps downloads at 1000 photos at a time at less-than-impressive download speeds.
1. Constant popups about "application requesting access" on macOS. That often happens without any user's activity.
2. If you leave the permission popup open for some time (because it's on a different screen), it auto-denies. And then you won't be able to find ANY mention of it in the UI.
3. macOS developers can't be assed to fix mis-features, like the inability to bind low ports on localhost without root access (you can open any listening port on 0.0.0.0, but you can't open 127.0.0.1:80).
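The claim is easy to probe with a few lines of Python; `try_bind` is a throwaway helper I made up, and whether the port-80 bind succeeds depends on the OS version and your privileges:

```python
import socket

def try_bind(host: str, port: int) -> bool:
    """Attempt to bind a TCP listening socket; report success or failure."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    try:
        s.bind((host, port))
        return True
    except OSError:
        return False
    finally:
        s.close()

print("127.0.0.1:80 ->", try_bind("127.0.0.1", 80))  # may need privileges
print("127.0.0.1:0  ->", try_bind("127.0.0.1", 0))   # ephemeral port, always allowed
```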
If you want to see security done well (or at least better), see the GrapheneOS project.
The developers also appear to believe that the apps have a right to inspect the trustworthiness of the user's device, by offering to support apps that would trust their keys [1], locking out users who maintain their freedom by building their own forks.
It's disheartening that a lot of security-minded people seem to be fixated on the "AOSP security model", without realizing or ignoring the fact that a lot of that security is aimed at protecting the apps from the users, not the other way around. App sandboxing is great, but I should still be able to see the app data, even if via an inconvenient method such as the adb shell.
1. https://grapheneos.org/articles/attestation-compatibility-gu...
But if you wish to build it from source, it could probably be a good option.
That is not a bad thing. The alternative is not having apps that do these checks available on the platform at all. It’s ridiculous to expect every fork of it to have that capability, because the average developer is not going to accept the keys of someone’s one-off fork.
If there’s anyone to blame, it should be the app developers choosing to do that (benefits of attestation aside).
Attestation is also a security feature, which is one of the points of GOS. People are free to use any other distribution of Android if they take issue with it.
Obviously I could be wrong here, this is just the general sentiment that I get from reading GOS documentation and its developer’s comments.
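Conceptually, the attestation dispute above comes down to a signature check: the app developer only trusts statements signed by keys they already know, so a self-built fork signed with your own key fails verification even if its boot state is identical. A toy Python sketch of that logic, using HMAC with shared secrets as a stand-in for the real asymmetric certificate chain (all key names and values here are made up):

```python
import hmac, hashlib

# Keys the app developer has chosen to accept (stand-in for vendor roots).
TRUSTED_KEYS = {"official-os-build": b"official-signing-secret"}

def attest(key_name: str, key: bytes, boot_state: str) -> tuple:
    """Device side: sign a claim about the OS it booted."""
    sig = hmac.new(key, boot_state.encode(), hashlib.sha256).hexdigest()
    return (key_name, boot_state, sig)

def verify(claim: tuple) -> bool:
    """App side: accept only claims signed by a key on the allowlist."""
    key_name, boot_state, sig = claim
    key = TRUSTED_KEYS.get(key_name)
    if key is None:
        return False  # unknown signer: a personal fork lands here
    expected = hmac.new(key, boot_state.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(sig, expected)

official = attest("official-os-build", b"official-signing-secret", "verified-boot-ok")
fork = attest("my-own-fork", b"my-own-signing-key", "verified-boot-ok")  # same boot state!
print(verify(official))  # accepted
print(verify(fork))      # rejected: the key, not the boot state, decides
```

Which is the crux of the complaint: the fork can be byte-for-byte as secure, but trust hinges on whose key signed it.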
Apple’s implementation of MTE is relatively limited in scope compared to GrapheneOS (and even stock Android with advanced security enabled) as it’s hardware intensive and degrades performance. I imagine once things get fast enough we could see synchronous MTE enabled everywhere.
It is curious at the moment though that enabling something like Lockdown Mode doesn’t force MTE everywhere, which imo it should. I think the people who are willing to accept the compromises of enabling that would likely also be willing to tolerate the app crashes, worse performance etc that would come with globally enabled MTE.
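For those unfamiliar with MTE: each 16-byte granule of memory carries a small tag, each pointer carries a matching tag in its unused top bits, and the hardware faults when they disagree, which catches use-after-free and overflow bugs. A simplified Python model of that mechanism (the class and method names are mine, and real MTE detection is probabilistic because tags are only 4 bits):

```python
import random

GRANULE = 16   # bytes covered by one tag, as in real MTE
TAG_BITS = 4   # real MTE tags are 4 bits

class TaggedHeap:
    """Toy model: memory granules carry tags, pointers carry matching tags."""
    def __init__(self, size):
        self.mem = bytearray(size)
        self.tags = [0] * (size // GRANULE)

    def alloc(self, addr, length):
        """Retag the allocation's granules; return a (tag, address) 'pointer'."""
        first = addr // GRANULE
        last = (addr + length - 1) // GRANULE
        # Excluding the previous tag mimics allocators that catch immediate reuse.
        tag = random.choice([t for t in range(1, 1 << TAG_BITS)
                             if t != self.tags[first]])
        for g in range(first, last + 1):
            self.tags[g] = tag
        return (tag, addr)

    def load(self, ptr, offset=0):
        tag, addr = ptr
        if self.tags[(addr + offset) // GRANULE] != tag:
            raise MemoryError("tag check fault")  # hardware delivers a fault here
        return self.mem[addr + offset]

heap = TaggedHeap(256)
p = heap.alloc(0, 16)
heap.load(p)           # tags match: access succeeds
heap.alloc(0, 16)      # allocator reuses (and retags) the same memory
try:
    heap.load(p)       # stale pointer: tag mismatch, use-after-free caught
except MemoryError as e:
    print("caught:", e)
```

Synchronous mode faults on the exact offending instruction (precise but slower); asynchronous mode batches the check, which is the performance trade-off the comment above is getting at.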
Macs are PCs now? This coming directly from Apple is hilarious.
I would really like to see a benchmark with and without security measures.
Apple makes available on a highly controlled basis iPhones which permit the user to disable “virtually all” of the security features. They’re available only to vetted security researchers who apply for one, often under some kind of sponsorship, and they’re designed to obviously announce what they are. For example they are engraved on the sides with “Confidential and Proprietary. Property of Apple”.
They’re loaned, not sold or given, remain Apple’s property, and are provided on a 12-month (optionally renewable) basis. You have to apply and be selected by Apple to receive one, and you have to agree to some (understandable but) onerous requirements laid out in a legal agreement.
I expect that if you were to interrogate these iPhones they would report that the CPU fuse state isn’t “Production” like the models that are sold.
They refer to these iPhones as Security Research Devices, or SRDs.
Isn’t whole disk encryption nowadays done in hardware on the storage controller?
The performance story for zeroing allocated memory is complicated, because zeroing also has benefits: pages full of zeros compress almost to nothing, which improves compressed swap.
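The compressed-swap point is easy to demonstrate: a zero-filled page compresses to almost nothing, while a page of random data doesn't compress at all. A quick sketch using zlib as a stand-in for the kernel's compressor (16 KiB is the page size on Apple Silicon):

```python
import os
import zlib

PAGE = 16384  # 16 KiB page, as on Apple Silicon

zero_page = bytes(PAGE)        # freshly zeroed memory
random_page = os.urandom(PAGE) # worst case: incompressible contents

print(len(zlib.compress(zero_page)))    # a few dozen bytes
print(len(zlib.compress(random_page)))  # roughly PAGE bytes
```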
There is no point in creating such a document if the elephant in the room is not addressed.
That doesn't seem like avoiding the elephant in the room to me. It seems like very much acknowledging the issue and speaking on it head-on.
You might as well enumerate all the viruses ever made on Windows, point to them, and then ask why Microsoft isn’t proving they’ve shut them all down yet in their documents.
Microsoft does not sell Windows as a sealed, uncompromisable appliance. It assumes a hostile environment, acknowledges malware exists, and provides users and third parties with inspection, detection, and remediation tools. Compromise is part of the model.
Apple’s model is the opposite. iOS is explicitly marketed as secure because it forbids inspection, sideloading, and user control. The promise is not “we reduce risk”, it’s “this class of risk is structurally eliminated”. That makes omissions meaningful.
So when a document titled Apple Platform Security avoids acknowledging Pegasus-class attacks at all, it isn’t comparable to Microsoft not listing every Windows virus. These are not hypothetical threats. They are documented, deployed, and explicitly designed to bypass the very mechanisms Apple presents as definitive.
If Apple believes this class of attack is no longer viable, that’s worth stating. If it remains viable, that also matters, because users have no independent way to assess compromise. A vague notification that Apple “suspects” something, with no tooling or verification path, is not equivalent to a transparent security model.
The issue is not that Apple failed to enumerate exploits. It’s that the platform’s credibility rests on an absolute security narrative, while quietly excluding the one threat model that contradicts it. In other words, Apple’s model is good old security by obscurity.
> Lockdown Mode is an optional, extreme protection that’s designed for the very few individuals who, because of who they are or what they do, might be personally targeted by some of the most sophisticated digital threats. Most people are never targeted by attacks of this nature. When Lockdown Mode is enabled, your device won’t function like it typically does. To reduce the attack surface that potentially could be exploited by highly targeted mercenary spyware, certain apps, websites, and features are strictly limited for security and some experiences might not be available at all.