iMessage is extremely secure[1], except for the fact that Apple controls the device list for iCloud accounts. The method would simply be for Apple to silently add another device to a target's account which is under law enforcement's control. I say "silently" in that they would need to hide it from the target's iCloud management UI to stay clandestine, but that's it, just a minor UI change. iMessage clients will then graciously encrypt and send a copy of every message to/from the target to the monitoring device.
This would still work even with impossible-to-crack encryption. It wouldn't allow access to old messages, just stuff after the monitoring was enabled. It's the modern wiretap.
It mirrors wiretapping in that sufficiently sophisticated people could discover the "bug" by looking at how many devices iMessage is sending copies to when messaging the target (just inspecting the size of outgoing data with a network monitoring tool would probably suffice), but the scheme would still go a long way and probably be effective in a high percentage of cases.
The main thrust of the article is that encryption is not new, just the extent of it, particularly iMessage. Here's a way around that.
[1] http://images.apple.com/iphone/business/docs/iOS_Security_Fe...
Is iMessage centralised? I'm pretty certain it is, and if that is the case then you couldn't find out if you were tapped or not; one message gets sent to the server (perhaps with a list of the devices you want to send it to) and the server under Apple/LEO control sends a copy to their "device".
http://blog.quarkslab.com/imessage-privacy.html goes into detail as to how the key exchange process works.
My hunch is that Apple doesn't have a way to prevent the popup messages so if they were forced to add a law enforcement iPhone to an iMessage pool then the target would notice an extra device was added.
Whenever an iMessage client wants to message the target, it gets a list of target device public encryption keys from Apple. It then sends a separately-encrypted copy of each message to each of those recipients.
What Apple could do is "bug" the lists that are sent to devices wanting to send something to the target, without modifying the list that the target device sees. So all incoming messages to the target would get cc'd to law enforcement. This could probably be done completely server-side.
As for outgoing messages, Apple could do the same thing in reverse: whenever the target device asks for the list of recipient devices, just add the monitoring device to it. Again, all by simply modifying the device directory server, no client changes.
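To make the idea concrete, here's a sketch of what the bugged directory lookup might look like. Everything here is invented for illustration (Apple's real directory protocol isn't public):

    # Hypothetical key-directory server; names and data model invented.
    ACCOUNTS = {
        "target@example.com": {"target-iphone": "pubkey-T1"},
        "friend@example.com": {"friend-iphone": "pubkey-F1"},
    }
    MONITOR = {"leo-monitor": "pubkey-LEO"}
    TAPPED = {"target@example.com"}

    def device_list(account, requested_by):
        # Which device keys should the requester encrypt to for `account`?
        devices = dict(ACCOUNTS[account])
        # Incoming tap: anyone encrypting *to* the target also gets the
        # monitoring key. Outgoing tap: when the tapped account looks up
        # any recipient, the monitor is added there too. The one place the
        # monitor must never appear is the target's view of its own
        # account -- that's what the iCloud management UI displays.
        target_viewing_own_account = (account in TAPPED and requested_by == account)
        if (account in TAPPED or requested_by in TAPPED) and not target_viewing_own_account:
            devices.update(MONITOR)
        return devices

Clients faithfully encrypt a separate copy to every key they're handed, so all the work happens server-side.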
The bigger point here is that Apple claims their hands are tied due to the design of the encryption. But as long as the directory service is still under their central control, there is still a technical means for complying with law enforcement requests to monitor iMessage communication.
Perhaps law enforcement just needs to get more specific with their demands. Don't demand "decryption" anymore; demand a wiretap.
Speaking as a jailbreaker, this is actually incorrect. At least as of previous revisions, the UID key lives in hardware: you can ask the hardware AES engine to encrypt or decrypt using the key, but not what it is. Thus far, neither any device's UID key nor (what would be useful to jailbreakers) the shared GID key has been publicly extracted; what gets extracted are secondary keys derived from the UID and GID keys, but as the whitepaper says, the passcode lock key derivation is designed so that you actually have to run a decryption with the UID to try a given passcode. Although I haven't looked into the newer devices, most likely this remains true, since there would be no reason to decrease security by handing the keys to software (even running on a supposedly secure coprocessor).
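A toy model of why that matters for brute force (the HMAC and PBKDF2 calls here stand in for the real hardware AES and Apple's actual derivation, which is much more involved):

    import hashlib, hmac

    class HardwareAESEngine:
        # Toy stand-in for the on-die engine: software can ask it to
        # compute with the UID but can never read the UID itself.
        def __init__(self):
            self._uid = b"\x13" * 32  # burned in at fab; never leaves the die

        def tangle(self, data):
            # Real hardware runs AES keyed with the UID; HMAC stands in.
            return hmac.new(self._uid, data, hashlib.sha256).digest()

    engine = HardwareAESEngine()

    def passcode_key(passcode):
        # Every candidate passcode costs a round trip through the engine,
        # so guesses can't be farmed out to GPUs off the device.
        stretched = hashlib.pbkdf2_hmac("sha256", passcode.encode(), b"salt", 10_000)
        return engine.tangle(stretched)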
But it is an absolute certainty that communications technologies built and operated by major American industry are wholly compromised. To believe otherwise is to grossly underestimate the nature of State intelligence actors. The historical record is clear that big telecom and hardware providers have always been in bed with State power, both in America and elsewhere, and the Snowden docs pretty clearly show that's still true today.
Maybe Apple's announcement means that the county sheriff can't read your teenage son's weed-dealing text messages. But if bin Laden had an iPhone, the men in the windowless buildings would beyond a shadow of a doubt be reading his communications, probably via seven or eight independent attack vectors (not counting the compromised public switched telephone network, over-the-air signals, etc.)
If you have secrets, keep them off of communication technologies run by large companies. Especially when those technologies are 100% closed source and the companies in question have openly admitted including backdoors in previous versions of the tech you're currently using.
Also look at the sworn affidavit that EFF obtained from local SF bay area whistleblower Mark Klein -- an AT&T technician who revealed the existence of the NSA's fiber taps at the 2nd & Folsom Street SF facility.
There is no such entity as "major American industry." There are different companies with different incentives and different willingnesses to protect their users. Some companies do the right thing; others don't.
Right now, you can do full disk encryption on an Android device (which seems likely to become hardware-assisted on future devices similar to the solution mentioned in the article). If you pick a sufficiently strong passphrase, that should keep your data secure even on devices without hardware assistance. However, if the device is turned on and locked (the common case), it's trivial to remote-install an arbitrary app, including one that unlocks the phone. (You can do this yourself with your Google account, which means anyone with access to that account can do so as well.)
It would help to be able to disable remote installation of any kind; that wouldn't have to come at the expense of convenience, because the phone could just prompt (behind the lockscreen) to install a requested app.
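To illustrate why passphrase strength is all that protects you without hardware assistance, here's a sketch of a memory-hard disk-key derivation (scrypt with illustrative parameters, not Android's actual scheme):

    import hashlib, os

    def derive_disk_key(passphrase, salt):
        # Memory-hard derivation raises the cost of each offline guess,
        # even on GPU/FPGA rigs; without a hardware-bound key, the
        # passphrase's entropy is all that protects the disk image.
        return hashlib.scrypt(passphrase.encode(), salt=salt,
                              n=2**14, r=8, p=1, dklen=32)

    salt = os.urandom(16)  # stored in the crypto footer; not secret
    key = derive_disk_key("correct horse battery staple", salt)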
For home users, Sophos has a Home Edition of their UTM that you can install on an old PC. The requirements are a bit high, there's an IP limit (which you could always overcome with NAT), and it doesn't allow dual-homed ISPs, but the UI is better than anything else I've tried (not saying there aren't plenty of warts). Once installed, you can set up a VPN and HTTPS proxy within literally two minutes.
Disclaimer, I worked there for a short time.
Once it's on your phone, it can just screenshot things, if nothing else.
However, assuming they have an appropriate warrant, can't they get your iCloud backups and try to brute force those? Maybe I'm being an idiot and overlooking something obvious, but it seems to me the encryption on the backups CANNOT depend on anything in the Secure Enclave.
That's because one of the use cases that iCloud backup has to support is the "my phone got destroyed, and now I want to restore my data to my new phone" case. To support this, it seems to me that the backup encryption can only depend on my iCloud account name and password. They can throw GPUs and FPGAs and all the rest at brute forcing that.
My conclusion then is that when I get a new iPhone, I should view this as a means of protecting my data on the phone only. It lets me be very secure against an attacker who obtains my phone but not my backups, provided I have a good passcode, where "good" can be achieved without being so long as to be annoying to memorize or type. A passcode equivalent to 32 random bits would take on average over 5 years to brute force[1].
To protect against someone who can obtain my backups, I need a good password on iCloud, where "good" means something considerably more than equivalent to 32 bits.
[1] I wonder if they could overclock the phone to make this go faster?
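For what it's worth, here's the arithmetic behind that 5-year figure, plus a couple of other keyspaces for comparison (80ms per attempt, expecting success halfway through the keyspace):

    PER_GUESS = 0.080  # seconds per attempt, per Apple's whitepaper

    for label, keyspace in [("4-digit PIN", 10**4),
                            ("6-digit PIN", 10**6),
                            ("32 random bits", 2**32)]:
        avg_s = keyspace / 2 * PER_GUESS
        print(f"{label}: {avg_s:,.0f} s on average (~{avg_s / 3.156e7:.2f} years)")

    # 4-digit PIN: 400 s on average          -> under 7 minutes
    # 6-digit PIN: 40,000 s on average       -> about 11 hours
    # 32 random bits: 171,798,692 s on average (~5.44 years)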
On the other hand, it is perfectly possible to devise a system that locally encrypts iCloud backups and can still restore them. Look at how iCloud Keychain works in the Apple documents: that data (all your passwords and secrets) is synced through the cloud between your devices, yet Apple can't access it. For iCloud Keychain, in case you lose access to all your devices, you need a master recovery key that's generated when you first activate it; if you don't have it, you lose the data.
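Sketching that shape of design (the `cryptography` package's Fernet recipe stands in for whatever construction Apple actually uses):

    from cryptography.fernet import Fernet

    # Generated once at activation and shown to the user exactly once.
    recovery_key = Fernet.generate_key()
    print("Write this down:", recovery_key.decode())

    # The server only ever stores this opaque blob.
    blob = Fernet(recovery_key).encrypt(b"all your synced passwords")

    # Restoring after losing every device works only with the key the
    # user kept. No key, no data -- and nothing for Apple to hand over.
    restored = Fernet(recovery_key).decrypt(blob)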
This is wrong. iCloud uses AES 128 and 256 encryption:
http://support.apple.com/kb/HT4865?viewlocale=en_US&locale=e...
1. [...]
2. [...]
3. [...]
4. [...]
5. The manufacturer of the A7 chip stores every UID for every chip.
I'm a total layman, but the UID has to be created at some point, so it can be known by someone. Wouldn't the easiest way be to just record it for every chip? Apple wouldn't even have to know about it.

Do not use simple PIN passwords on your phone. In particular, if you use fingerprint access, there is no reason not to have a long, complex password.
> In particular, if you use fingerprint access, there is no reason not to have a long, complex password.
Very, very often my fingerprint isn't recognised properly and I have to type in my password. It has been getting better with the last few updates, but I still need to input my password multiple times per day.
You can in the UK.
https://en.wikipedia.org/wiki/Regulation_of_Investigatory_Po...
Are you sure? In the UK, and I'm pretty sure in my native Australia, they definitely can, under pain of "contempt of court".
I totally agree that, if you are really concerned about this, fingerprint is a bad idea, but the legal ramifications are interesting.
Not quite relatedly, does anybody know if there is a heat sensor? Or can I just cut somebody's finger off to use it? (We are obviously well outside of judicial channels here! :)
"Drug him and hit him with this $5 wrench until he tells us the password."
Now one could make the argument that the fingerprint is more secure than a poor password, for most use cases. I'd tend to agree with that, with the caveat that it is no longer your decision (read: subject to your interrogation resistance) to open your phone if you are captured.
Apple claims they can't decrypt data, but the article suggests that they could simply run the decryption on the local phone with custom firmware. Most people choose a 4-digit PIN, and at 80 milliseconds per guess, that means Apple should be able to crack such a phone in about 13 minutes at worst (10,000 codes at 80ms each), under 7 minutes on average.
If you use a longer passcode, your data is more secure - but I thought that was always the story with Apple.
So what, if anything, has changed (other than more data being encrypted?)
On iOS I believe this is incorrect; "passcode" is just another name for the PIN on the iPhone.
http://support.apple.com/kb/ht4113
You can have complex or simple passcodes, but everybody I've ever seen who bothers to have a passcode (myself excepted) sets it to a 4-digit code (i.e. a PIN). To make it worse, it's usually their ATM PIN.
It could be a total lie, hardwired or masked-ROM per revision (but I doubt that; it would be too easy to discover).
It could be in a one-time-programmable block somewhere that gets provisioned during manufacture: a flash block/EEPROM with write-only access (externally, at least), a series of blowable fuses, or even laser/EB-trimmed traces.
All of those one-time-programmable methods are susceptible to whoever operates the programmer recording the codes generated, although managing and exfiltrating that much data would make it rather tricky.
The method of storage also influences how hard it is to extract through decapping and probing/inspection.
If I had to design something like this (note: not a crypto engineer), I'd have some enclave-internal source of entropy run through some whitening filters, and read from it until the output passes whatever statistical randomness/entropy checks, at which point it's used to seed the UID and store it into internal EEPROM. That way, there's no path in or out for the key material, except when already applied via one of the supported primitives.
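Roughly this shape, as a sketch (toy health check; real silicon would run proper NIST-style tests on the raw noise):

    import hashlib, os

    def raw_entropy(n):
        # Stand-in for the enclave's physical noise source.
        return os.urandom(n)

    def healthy(sample):
        # Toy monobit check: roughly half the bits should be ones.
        ones = sum(bin(b).count("1") for b in sample)
        return abs(ones - len(sample) * 4) < len(sample)

    def provision_uid():
        while True:
            sample = raw_entropy(64)
            if healthy(sample):
                # Whitening/conditioning, then burn into internal EEPROM.
                return hashlib.sha256(sample).digest()

    UID = provision_uid()  # never readable from outside after this point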
Then you need to protect your secrets! A couple of layers of active masks (can they do active resistance/path-length measurements instead of just continuity yet? That would annoy the 'bridge & mill' types :)). Encrypted buses, memory, and round-the-houses routing are also par for the course, but I'm sure they too could be improved on.
IIRC there was someone on HN who was working for a TV smartcard manufacturer who was reasonably confident they'd never been breached. Curious what he'd have to say (without an NDA :) )
My understanding here is that this Enclave is just a specific part of the overall die, so they're somewhat constrained in the crazy-fabtech methods they might otherwise be able to consider.
Technically, this is where it breaks down. As in "Trust me I don't store the keys."
If that hypothesis is true (they don't store these keys), then they'll indeed have a hard time breaking your encryption. But you must trust Apple on that point.
If there was a way to buy an anonymously replaceable chip with this cryptographic key in it and replace it on the phone like a SIM, then we'd be much closer to stating "Apple can't decrypt your phone".
Given that the end user has entered the passcode, it shouldn't be hard to retain the data: after upgrading the Secure Enclave firmware, simply decrypt all data using the old key and re-encrypt it using the new key (derived from the same passphrase but a new UID).
You can also use a "two-stage" approach where the encryption key derived in hardware is only used to protect a secondary key. In that case you just re-encrypt this secondary key, which in turn protects the data.
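A sketch of that two-stage scheme, with invented key names (AES-GCM via the `cryptography` package; not Apple's actual construction):

    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    def wrap(kek, dek):
        nonce = os.urandom(12)
        return nonce + AESGCM(kek).encrypt(nonce, dek, None)

    def unwrap(kek, blob):
        return AESGCM(kek).decrypt(blob[:12], blob[12:], None)

    # Bulk data is encrypted once under a random data-encryption key (DEK);
    # only the small wrapped DEK depends on the UID+passcode-derived key.
    dek = AESGCM.generate_key(bit_length=256)
    old_kek = AESGCM.generate_key(bit_length=256)  # stands in for the old derived key
    wrapped_dek = wrap(old_kek, dek)

    # After a firmware/UID change while unlocked: derive a new KEK and
    # re-wrap the DEK; the bulk data never needs to be re-encrypted.
    new_kek = AESGCM.generate_key(bit_length=256)
    wrapped_dek = wrap(new_kek, unwrap(old_kek, wrapped_dek))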
http://www.arm.com/products/processors/technologies/trustzon...
" The Secure Enclave is a coprocessor fabricated in the Apple A7 or later A-series processor. It utilizes its own secure boot and personalized software update separate from the application processor. It provides all cryptographic operations for Data Protection key management and maintains the integrity of Data Protection even if the kernel has been compromised.
The Secure Enclave uses encrypted memory and includes a hardware random number generator. Its microkernel is based on the L4 family, with modifications by Apple. Communication between the Secure Enclave and the application processor is isolated to an interrupt-driven mailbox and shared memory data buffers. "
That seems ideal. Let's hope Apple actually does that (probably not).
Even those who are super security conscious tend to just use numeric PINs. It's the very, very rare individual who enters an alphanumeric passcode.
* Regarding the fixed 80ms timing: has there been any study of the average time needed (aside from why 80ms instead of 70ms or 90ms)? I also want to ask for clarification: where is the entire PBKDF2-AES computation done? On the AES engine (which I believe is part of the A7 chip)? On a TPM chip (which might be a no, based on an unauthenticated source [1])? (A calibration sketch follows after these bullets.)
* So this UID is created in every device and stored in the Secure Enclave, which has a dedicated path to the AES engine. But could we conduct a side-channel attack? I'm pretty much a noob at hardware security.
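On the 80ms question: the whitepaper implies the iteration count is calibrated per device so one attempt takes about that long. Roughly this shape, with PBKDF2 standing in for the real UID-tangled derivation:

    import hashlib, time

    TARGET = 0.080  # seconds per passcode attempt

    def calibrate(sample_iters=50_000):
        # Time a sample run on this hardware, then scale the iteration
        # count so a single derivation lands near the 80 ms target.
        start = time.perf_counter()
        hashlib.pbkdf2_hmac("sha256", b"0000", b"salt", sample_iters)
        elapsed = time.perf_counter() - start
        return max(1, int(sample_iters * TARGET / elapsed))

    print(f"~{calibrate():,} iterations for an 80 ms guess on this machine")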
It's very probably not within the realms of practicality just yet, however.
The only issue would be making the process 100% reliable, so that you succeed on the first try, because a single mistake or misunderstanding could trash the single copy you have of the key.
I'm curious now if flylogic or chipworks have done any serious teardown of the 'secure enclave' stuff.
What if that is not true? What if the device has a built-in keylogger that simply captures all the key material from user input, be it a passcode or a fingerprint?
Wouldn't it be partly better if this were based on true public-key cryptography, with a random private key generated each time the device is factory reset?
Isn't this something any reasonable PI, police officer, FBI agent, or hacker could easily social-engineer their way around via legal and/or illegal means?
Apple could also require the iPhone passphrase to restore a backup, but obviously those can be "easily" brute-forced (for the restore to work, it must be possible to bypass the old device's UID).
In our post-Snowden world this is just ridiculous and intellectually insulting. The author is either naive beyond belief or he got paid to write this PR shill piece.
cf. https://gigaom.com/2014/09/18/apples-warrant-canary-disappea...
Although the FBI seems not very happy about this (if it's not just "for show", that is)[1]. The FBI is using the age-old "save/protect the children" argument, literally.
[1] http://www.washingtonpost.com/business/technology/2014/09/25...
This is getting into speculation about their role in PRISM, but I'm wondering how the iCloud encryption actually works. They say everything is encrypted while stored [0], but it's not clear (or I haven't found) whether that uses a key derived from the password or something Apple controls. Either way, I'm not entirely sure there's any way to stop Apple getting at it if they're told to, given the lack of transparency.
Add to that the usual closed software problems. Apple says they don't have a specific backdoor anymore (!), and they won't let you audit anything.
Well duh! It's their software. Of course they could backdoor it in future, such as if required to by the government. That's true of any software. Apple are asserting that right now there are no such backdoors and iMessages are secure. I've not seen any credible argument that this is not the case, other than "maybe they're lying". OK. What's the alternative? Run everything through OpenSSL? That didn't work out so well. Maybe we should run everything on Linux using Bash scripts. Oops again!
Maybe Apple are lying. Maybe they will sell us all out. But if they do, these things always have a tendency to come out in the open eventually. So far they've had a pretty good track record of being on the level. In the end it's their reputation, and their appreciation of its value, that is the best and really the only guarantee we have, as with anyone else we rely on.