When I was in middle school, I kept a little blue diary protected with a trusty heart-shaped lock. It didn’t contain any sensitive information – mostly stories about my crush and reports of what went down on the playground – but the information was deeply personal and important enough for me to secure it as best as an 11-year-old could. We are taught from a young age that we have an inalienable right to privacy. These days, though, privacy feels more like a luxury than a right. The irony is that, while technology has provided a medium for democratic discourse, it has also shaved away at individual privacy. We as users are disturbed by how internet companies build a virtual portrait of us based on our every click, post, and email. So why is it that when a major player in the tech world wants to protect our privacy, we freak out?
Unless you’ve been living under a rock (I’m sure it was lovely down there. How’s the food? We should chat!), you’ve heard that last week, a federal magistrate ordered Apple to help the FBI unlock the iPhone belonging to one of the killers in the December San Bernardino shooting. Apple refused. On Wednesday, Tim Cook released a statement about Apple’s decision not to create a “backdoor” to get around its own safeguards. In short, the statement outlines how Apple has assisted the FBI with the San Bernardino case thus far, why creating a backdoor undermines decades of work Apple has done to protect its users, and what detrimental effects this order could have on the future of user privacy.
I understand both Apple’s and the FBI’s stances on the matter, and both provide compelling arguments as to whether or not Apple should develop software to crack open the San Bernardino shooter’s iPhone. But stepping back and looking at the big picture – at the potential damage this software could do and the precedent it sets for American privacy – I am inclined to support Apple’s decision to defy the FBI’s order. Weakening data encryption in products to accommodate the US government also accommodates hackers, oppressive foreign governments, and others seeking to exploit user information. There is no in-between: either build completely unbreakable data systems or create systems with holes which can be accessed by any and all willing to find them.
I could go on and on about this case, but rather than adding more fuel to this heated debate – and because there are plenty of better articles about the issue – I would like to outline exactly what data Apple is protecting and what tools it uses to perform this task, so that others can be completely informed when they choose a side in this matter.
The Hateful iOS 8
But two years ago, the release of iOS 8 changed all this. This update, lauded as the most comprehensive, detailed upgrade Apple has ever offered, made the extraction of data protected by a user passcode impossible without possession of said passcode. After the release of its new security systems, Apple stated: “…it is not technically feasible for us to respond to government warrants for the extraction of this data from devices in their possession running iOS 8.” And so here we are today, with the FBI asking Apple to do just that.
But how is it that Apple made it so difficult to access user data?
Apple’s Secrets Revealed! (sort of)
If you have some time to spare, check out the iOS Security Guide September 2014, which outlines all of the hardware, software, and services that work together to maximize personal security in devices with iOS 8 (and if you are curious and have more time to spare – lucky you – updates for iOS 9 can be found here). But for a quick synopsis, keep reading.
Disk encryption is not new with iOS 8 – as a matter of fact, most of the “new” security features that Apple rolled out in 2014 were already implemented in the iPhone prior to iOS 8, just on a smaller scale – Apple simply decided to protect much more data under the user passcode with iOS 8. Previously, data encrypted with a key not derived from the user passcode could be accessed using a custom boot image that bypassed the passcode entry phase. Basically, this allowed Apple to decrypt data without the need to crack a password. Not so with iOS 8.
Pause: before we move on to more nitty-gritty details on how Apple protects user data, let’s back up a moment and talk about how passcode protection works on a more general scale (I needed to look up and understand this before I dove into Apple’s iOS Security Guide mumbo jumbo so if your knowledge of security systems is limited like mine was, read on! Otherwise, feel free to skip ahead.)
Password-based Encryption for Dummies (myself included)
Normal passcode-based file encryption systems work in the following manner: a user enters a passcode, and the system applies a key derivation function (KDF) to it. This function converts the passcode into an encryption key. This basic passcode encryption does not require any special hardware, so, given that a user chooses a strong passcode, the method can be securely implemented. If at this point you are thinking about the passcodes you have sprinkled across the universe and realize you have a few “Password”s or “123456”s floating around in cyberspace: a) go change them NOW! Don’t worry, I’ll be here when you get back, and b) engineers know this and have tried to account for the fact that no one ever chooses strong passcodes.
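To make this concrete, here is a minimal sketch of passcode-based key derivation using Python’s standard library. The salt and iteration count are illustrative values I chose, not anything specific to iOS:

```python
import hashlib

def derive_key(passcode: str, salt: bytes, iterations: int = 100_000) -> bytes:
    """Convert a user passcode into a 256-bit encryption key via PBKDF2-HMAC-SHA256."""
    return hashlib.pbkdf2_hmac("sha256", passcode.encode(), salt, iterations)

# Same passcode + same salt always yields the same key,
# which is what lets the system later check a login attempt.
key = derive_key("123456", b"per-device-salt")
print(len(key))  # 32 bytes = 256 bits
```

The important property is determinism: the key is never stored anywhere; it is recomputed from the passcode every time it is needed.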
Hackers are able to access data encrypted with a poor password by working through a compilation of common passwords to see if any one of them decrypts the data. This process is performed speedily and with minimal effort through the use of hardware like FPGAs or GPUs. A common defense against this “brute-force” type of hacking is to use a slow key derivation function like PBKDF2, a “password-strengthening algorithm” that makes it difficult for a computer to check that any one password is the correct master password. Algorithms like PBKDF2 are deliberately resource-intensive and slow down sequential login attempts, but a hacker with powerful hardware and the virtuous will to succeed can defeat most KDFs. Womp womp. Apple could have continued to play in this tug-of-war and use memory-hard KDFs like scrypt to further fend off hackers, but it decided to change the game and take another approach.
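The dictionary attack described above can be sketched in a few lines. Everything here is hypothetical demo data; the point is that if the victim’s passcode is on the attacker’s list, a deterministic KDF will happily confirm it, and the only thing standing in the way is how slow each derivation is:

```python
import hashlib

COMMON_PASSWORDS = ["password", "qwerty", "123456", "letmein"]

def derive(passcode: str, salt: bytes, iterations: int) -> bytes:
    return hashlib.pbkdf2_hmac("sha256", passcode.encode(), salt, iterations)

salt = b"demo-salt"
stored_key = derive("123456", salt, 100_000)  # the victim chose a weak passcode

def crack(target_key: bytes, salt: bytes, iterations: int):
    """Try each common password until one reproduces the target key."""
    for guess in COMMON_PASSWORDS:
        if derive(guess, salt, iterations) == target_key:
            return guess
    return None

print(crack(stored_key, salt, 100_000))  # finds "123456"
```

Raising the iteration count makes each call to `derive` proportionally more expensive, which is exactly the tug-of-war the paragraph above describes: the defender pays the cost once per login, the attacker pays it once per guess.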
A is for Apple Encryption
Apple’s encryption strategy was to add a 256-bit device-unique secret key called a UID which is “burned” into the phone’s chip at the factory and “fused into the application processor during manufacturing.” After the iPhone leaves the factory, Apple has no access to the key. Thus, Apple claims “no software or firmware can read them directly.” Devices with an A7 processor have yet another layer of security. In these phones, the key and the encrypting process are protected within a cryptographic co-processor called the Secure Enclave, which is designed to prevent the extraction of the UID key. So yes, the UID is basically a series of 256 0’s and 1’s. This means there are oh so many possible combinations and oh so many possible encryption keys.
Additionally, because only the device itself knows the UID (what a self-aware piece of equipment – know thyself!) and the UID cannot be extracted, all password cracking attempts have to run on the device itself. This means that cracking rigs built on heavy hardware like FPGAs or GPUs cannot be used. Apple could write custom firmware that attempts to crack the keys on the device, but thanks to the slowdown the PBKDF2 algorithm creates, the process could take a very long time – a random 6-character passcode consisting of lowercase letters and numbers could take up to 5 1/2 years. Though, as we will see shortly, this is not the software the FBI would like Apple to write.
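That 5 1/2 years figure checks out with back-of-the-envelope arithmetic, assuming the roughly 80 ms per on-device attempt that Apple’s security guide cites:

```python
# Worst-case exhaustive search of a random 6-character passcode
# drawn from lowercase letters and digits, at ~80 ms per attempt.
alphabet = 26 + 10            # lowercase letters + digits
combinations = alphabet ** 6  # 36^6 ≈ 2.18 billion passcodes
seconds = combinations * 0.080
years = seconds / (365 * 24 * 3600)
print(round(years, 1))  # ≈ 5.5
```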
P is for Password Protection
With the multiple layers of encryption lasagna above – some of which I am still chewing on myself – the way to get at an iPhone’s data is not by decrypting it; it is by somehow figuring out the correct user passcode. While a 256-bit UID could take several lifetimes to guess, a user passcode consisting of 4 or 6 digits has at most 1 million possible values and can be brute-forced in less than 30 minutes. (Phew…good thing you strengthened all your passcodes after my first call to action!) But just as Apple created barriers to decrypting data, it also created two features to discourage a brute-force attack on user passcodes. First, the iOS interface itself enforces escalating delay times – up to an hour – if repeated erroneous passcodes are entered. This alone would not be so difficult to combat, but Apple added a second feature which, if previously enabled, wipes the encryption key needed to access user data after too many failed attempts, making all data on the device unreachable.
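The escalating-delay schedule can be sketched as a simple lookup. The specific thresholds below follow the schedule described in Apple’s iOS Security Guide, though I am paraphrasing it here rather than quoting it:

```python
def delay_minutes(failed_attempts: int) -> int:
    """Enforced delay before the next passcode attempt is allowed."""
    if failed_attempts <= 4:
        return 0   # the first few mistakes are free
    if failed_attempts == 5:
        return 1
    if failed_attempts == 6:
        return 5
    if failed_attempts <= 8:
        return 15
    return 60      # a full hour from the 9th failed attempt onward

print([delay_minutes(n) for n in range(1, 10)])
# [0, 0, 0, 0, 1, 5, 15, 15, 60]
```

Combined with the optional wipe, this turns a half-hour brute-force job into a gamble the attacker loses after a handful of guesses.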
Data Protection and Me
After reading all about iOS encryption and how these keys are derived (and indulging in a much needed snack), I wondered how and when these keys are used. The answer lies in protection classes. Protection classes are used to enforce the access policies of files. Every time a file is created, the app that creates it assigns it to a protection class based on when the data should be accessible. What does this mean? Files assigned to NSFileProtectionComplete, for example, are so important that the operating system can decrypt them only when the device’s user interface is unlocked. These files’ encryption keys are wrapped with a class key derived from the UID and user passcode. When the device locks again, the class key is wiped from memory, making the files unavailable again.
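Here is a toy model of that wrap-and-wipe lifecycle. This is deliberately simplified – real iOS uses AES-based key wrapping inside the Secure Enclave, while this sketch uses a plain XOR just to show the data flow:

```python
import hashlib
import os

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

uid = os.urandom(32)  # stand-in for the device's fused secret

# Unlock: the class key only exists after the passcode is entered.
class_key = hashlib.pbkdf2_hmac("sha256", b"1234", uid, 100_000)

# Each file gets its own random key; only the wrapped form touches disk.
file_key = os.urandom(32)
wrapped = xor(file_key, class_key)

# While unlocked, the OS can unwrap the file key and decrypt the file.
unwrapped = xor(wrapped, class_key)
print(unwrapped == file_key)  # True

# Lock: the class key is wiped from memory; the wrapped
# key on disk is now useless without re-entering the passcode.
class_key = None
```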
For reference, the complete list of iOS data protection classes (per the iOS Security Guide) is:

- Complete Protection (NSFileProtectionComplete)
- Protected Unless Open (NSFileProtectionCompleteUnlessOpen)
- Protected Until First User Authentication (NSFileProtectionCompleteUntilFirstUserAuthentication)
- No Protection (NSFileProtectionNone)

These classes define how the UID and the user passcode work together to protect user data.
Back to Current Events…
Now, just to clarify: what the FBI is asking Apple to do is not to decrypt the data on the San Bernardino shooter’s phone; they are asking Apple to create new firmware that will make a brute-force attack possible without fear of a security wipe. To do so, Apple would have to remove the interface delays generated by incorrect passcode guesses and allow a passcode to be input by means other than physically tapping in combinations. Yes, Apple does indeed have the technical ability to create such firmware, but what it does not have is the ability to create firmware that will unlock only ONE particular iPhone. Additionally, what the FBI asked for in their warrant was firmware that disables these security walls by hardcoding the device’s unique identifier into the firmware. Thus, if in the future the FBI wanted to break into another iPhone, they could reuse the code by changing the identifier that is written into the firmware. They are asking for reusable firmware. While the government may argue that the use of this firmware would be limited to this case, there is no way to guarantee such control…
Bring it on Home, Stephanie!
Thanks for taking this long-winded journey with me. Hopefully, with a greater understanding of how iOS data encryption works, you can make a more informed decision about which side you’re on. Though if you take only one piece of information away from this post, let it be this: MAKE SECURE PASSCODES, PEOPLE!
*Much credit goes to this amazing post by Matthew Green, this equally amazing (albeit slightly longer and more technical) article by David Schuetz, and this article by Cyrus Farivar, which began my obsession. I just hope I did not get too many things wrong!