This #2, the so-called "Private Cloud Compute", is not the same as iCloud. And it's certainly not the same as sending queries to OpenAI.
Quoting:
“With Private Cloud Compute, Apple Intelligence can flex and scale its computational capacity and draw on larger, server-based models for more complex requests. These models run on servers powered by Apple silicon, providing a foundation that allows Apple to ensure that data is never retained or exposed.”
“Independent experts can inspect the code that runs on Apple silicon servers to verify privacy, and Private Cloud Compute cryptographically ensures that iPhone, iPad, and Mac do not talk to a server unless its software has been publicly logged for inspection.”
“Apple Intelligence with Private Cloud Compute sets a new standard for privacy in AI, unlocking intelligence users can trust.”
iOS won't send any data to a PCC node that isn't running firmware that's been published in their transparency logs, and compute nodes can't be debugged in a way that exposes user data [1].
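The gating rule they describe boils down to a membership check before any user data leaves the device. A minimal Python sketch, assuming a simplified world where the transparency log is just a set of published software digests (names like `published_log` and `may_send_request` are made up for illustration, not Apple's actual API):

```python
import hashlib

# Hypothetical transparency log: digests of PCC software releases
# that have been published for public inspection.
published_log = {
    hashlib.sha256(b"pcc-release-1.0").hexdigest(),
    hashlib.sha256(b"pcc-release-1.1").hexdigest(),
}

def may_send_request(attested_measurement: str) -> bool:
    """Release user data only to a node whose attested software
    digest appears in the public log."""
    return attested_measurement in published_log

# A node attesting to a published build is allowed...
assert may_send_request(hashlib.sha256(b"pcc-release-1.1").hexdigest())
# ...and one running an unpublished (e.g. debug) build is refused.
assert not may_send_request(hashlib.sha256(b"secret-debug-build").hexdigest())
```

The real system works on cryptographic attestations of the running software rather than a bare set lookup, but the refusal logic is the same: unlogged build, no data.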
And at the end of the day, this is going to give the warrant holder a handful of requests from a specific user? Why wouldn't they use that same warrant to get onto the target's device directly and get that same data plus a ton more?
[0]: https://help.apple.com/pdf/security/en_US/apple-platform-sec...
[1]: https://security.apple.com/blog/private-cloud-compute/
Warrants to hack devices are a lot less common and generally harder to obtain. That's why police will send Google warrants like "give us info on every device that was within radius x between times y and z".
I'm sure Apple did their very best to protect their users, but I don't think their very best is good enough to warrant this kind of trust. A "secure cloud" solution will also tempt future projects to use the cloud over local processing more, as cloud processing is now readily available. Apple's local processing is a major advantage over the competition but I doubt that'll stay that way if their cloud solution remains this integrated.
At least with Gmail, chat clients, etc., things are somewhat compartmentalized: one of the services might screw up and do something with your emails, but your Messenger or WhatsApp chats are not affected by that, or vice versa. But when you bake it into the OS (laptop or phone), you're IMHO taking a much bigger risk, no matter what the intentions are.
I have no idea what code is running on a server I can't access. I can't exactly SSH into siri.apple.com and match checksums. Knowing Apple's control-freak attitude, I very much doubt any researcher permitted to look at their servers is going to be very independent either.
Apple is just as privacy-friendly as ChatGPT or Gemini. That's not necessarily a bad thing! AI requires feeding lots of data into the cloud; that's how it works. Trying to sell their service as anything more than that is disingenuous, though.
Signal has the added benefit that it doesn't need to read what's in the messages you send. It needs some very basic routing information and the rest can be encrypted end to end. With AI stuff, the contents need to be decrypted in the cloud, so the end-to-end protections don't apply.
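To make that distinction concrete, here's a toy sketch using the third-party cryptography package (pip install cryptography); the envelope and relay shapes are invented for illustration:

```python
from cryptography.fernet import Fernet  # pip install cryptography

# Shared only between the two endpoints; the relay never sees it.
key = Fernet.generate_key()
alice, bob = Fernet(key), Fernet(key)

# An E2E message: routing metadata in the clear, contents encrypted.
envelope = {"to": "bob", "payload": alice.encrypt(b"meet at noon")}

def relay(msg):
    # A Signal-style server can route this while seeing only
    # ciphertext plus the minimal routing info.
    return msg["payload"]

assert bob.decrypt(relay(envelope)) == b"meet at noon"

# A cloud AI service, by contrast, can't run inference on the
# ciphertext: somewhere server-side, the plaintext must exist.
```

The point isn't the cipher; it's that a relay's job is possible on ciphertext while a model's job isn't, so the E2E guarantee ends where inference begins.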
Apple's thrown stones come back to haunt their glass house.
Once the data is out of your possession it's out of your control.
Drop "nation state is after me" from the threat model and you'll be a lot happier.
- TLA agency deploys scarce zero-days or field ops because you're particularly interesting, vs.
- TLA agency has everything about you in a dragnet, and low level cop in small town searches your data for a laugh because they know you, and leaks it back to your social circle or uses it for commercial advantage
The history of tech is the history of falling costs with mass production. Expensive TLA surveillance tech for nation states can become broadly accessible, e.g. through-wall WiFi radar sensing sold to millions via IEEE 802.11bf WiFi 7 Sensing in NPU/AI PCs [1], or USB implant cables [2] with a few zeros lopped off the TLA price.
Instead of adversary motives, threat models can be based on adversary costs.
As adversary costs fall, threat models need to evolve.
[1] https://www.technologyreview.com/2024/02/27/1088154/wifi-sen...
Actually, once your e2e key that encrypts your data is out of your possession, it's out of your control.
Over the past decade it's become commercially feasible to be NSL-proof.
But in summary:

1. The servers run on Apple Silicon hardware, which has fancier security features.
2. The software is open source.
3. iOS verifies that the server is actually running that open-source software before talking to it.
4. This is insane privacy for AI.
The security features are meant to prevent the server operator (Apple) from being able to access data that's being processed in their farm. The idea is that this, combined with E2E encryption, gets much closer to on-device processing in terms of privacy and security.
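As a hypothetical sketch of that combination (not Apple's actual protocol; the key names and the use of RSA-OAEP here are just for illustration), the client would encrypt its request to a public key whose private half exists only inside a node it has already verified via attestation:

```python
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes

OAEP = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

# Private half lives only inside the attested node's hardware.
node_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

def encrypt_for_attested_node(request: bytes, node_public_key) -> bytes:
    # Called only after the attestation check (point 3 above) passes.
    return node_public_key.encrypt(request, OAEP)

ciphertext = encrypt_for_attested_node(b"summarize my notes",
                                       node_key.public_key())

# The operator relaying this sees only ciphertext; only the attested
# node can recover the request.
assert node_key.decrypt(ciphertext, OAEP) == b"summarize my notes"
```

Whether the hardware actually keeps that private key out of the operator's reach is exactly the part you have to take on trust.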
Here's also a great summary from Matthew Green: https://x.com/matthew_d_green/status/1800291897245835616