Operating system security

Understanding iOS Security: Part 1

March 19, 2015 by Rorot

Apple's iOS operating system powers many of its mobile devices, such as the iPhone and iPad. From the beginning, security has been placed at the core of iOS. There are many inherent features that secure the device and its resources at different levels. This article aims to answer questions such as the following:

  • What really happens when an iPhone is powered on?
  • How is data at rest secured by iOS?
  • If the device is lost or stolen, can the attacker view or modify my personal data?
  • How are privacy controls enforced?

For ease of understanding, we will deal with each of these topics in separate sections. Let's begin!

Boot level security mechanism

In the desktop computer world, an attacker can access the data present on the hard disk even without knowing the password of that system. For instance, he can remove the hard disk and plug it into a different system and read the data, or he can boot the system into a different OS using a live CD. But do you think this is possible in the case of an iPhone? That is, can an attacker who has access to an iPhone remove the chip and read its data, or sideload another OS to access the data? Not under normal circumstances!

This is because iOS devices will not load firmware that is not signed by Apple. Taking a look at the boot-level security mechanism will help us understand this better. So what really happens when you power on your iPhone?

  1. When an iOS device is turned on, the processor immediately executes code known as the boot ROM. This code is laid down during chip fabrication and is implicitly trusted. The boot ROM also contains Apple's root certificates, which are used to verify the signature of the next stage before it is loaded.
  2. The Low-Level Bootloader (LLB) is loaded next, once its signature has been verified. When LLB finishes its tasks, it verifies the signature of the next-stage bootloader, iBoot, and loads it.
  3. iBoot, in turn, verifies and runs the iOS kernel.

Thus, as shown in the following figure, at each stage a signature check is performed before the next component is loaded. This is called the "Chain of Trust". Hence, under normal circumstances, this chain of trust ensures that iOS runs only on valid Apple devices and that the phone cannot be booted into another operating system.
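
To make the idea concrete, here is a short Swift sketch that models the chain of trust. It is purely illustrative: the real checks happen in firmware against Apple's own keys and image formats, and the Curve25519 key, type names and function below are stand-ins, not Apple's actual code.

    import CryptoKit
    import Foundation

    // Purely illustrative model of the boot-time chain of trust.
    // The real checks happen in firmware against Apple's root key in the
    // boot ROM; Curve25519 here is only a stand-in signature scheme.
    struct BootStage {
        let name: String      // e.g. "LLB", "iBoot", "kernel"
        let image: Data
        let signature: Data
    }

    func boot(_ stages: [BootStage], appleRootKey: Curve25519.Signing.PublicKey) {
        for stage in stages {
            guard appleRootKey.isValidSignature(stage.signature, for: stage.image) else {
                // A failed check halts the boot; a real device drops into
                // recovery/DFU mode instead of running the image.
                fatalError("\(stage.name) failed signature verification")
            }
            print("\(stage.name) verified, handing over control")
            // The verified stage now executes and repeats the check for
            // whatever it loads next, extending the chain of trust.
        }
    }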

Can this signature check be bypassed so that we can flash our own bootloader? Yes, it can. Several vulnerabilities have been identified in the boot ROM code which can be exploited not only to flash our own bootloader but also to bypass the signature checks of every subsequent stage. Remember that if one link is compromised, all the links that follow are compromised as well. How this can be done will be discussed in a separate post.

Secure Enclave

You must have heard about the fingerprint sensor introduced in the iPhone 5S. Apple says this fingerprint information is encrypted and stored in a 'Secure Enclave' inside the phone and is never backed up to iCloud or any Apple servers. So what is this Secure Enclave and how does it work? The Secure Enclave is a coprocessor built into the Apple A7 processor. All the cryptographic operations required for data protection are handled by it. It has its own secure boot and software update process, separate from the main processor. The Secure Enclave is similar in concept to ARM's TrustZone technology. The following is a sample depiction of the TrustZone hardware architecture.

As shown above, a new mode called 'secure mode' is added to the processor. In simple terms, this creates a two-world architecture on the same device: a normal world that runs ordinary iOS apps (user mode) and a secure world that runs only trusted code (secure mode). Data written to RAM while in secure mode cannot be accessed from user mode. The following steps, compiled from http://blog.fortinet.com/post/iphone5s-inside-the-secure-enclave, explain how the Secure Enclave works while validating a fingerprint on the iPhone 5S:

  • The user enters his fingerprint
  • The locking service calls an API present in the secure world
  • The processor switches to the secure world
  • The bits that characterize the fingerprint move from the sensor to the processor
  • This data cannot be eavesdropped on or modified by any app, because the process runs in secure mode, which is isolated from user mode
  • The necessary cryptographic verifications are done and access is granted.
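
From an app developer's perspective, all of this is hidden behind a small API. Below is a minimal sketch using Apple's LocalAuthentication framework; note that the app only ever receives a success/failure answer, never the fingerprint data itself. The policy and prompt string are just illustrative.

    import LocalAuthentication

    // The app never sees fingerprint data; it only asks the system (and,
    // underneath it, the Secure Enclave) whether the match succeeded.
    let context = LAContext()
    var error: NSError?

    if context.canEvaluatePolicy(.deviceOwnerAuthenticationWithBiometrics, error: &error) {
        context.evaluatePolicy(.deviceOwnerAuthenticationWithBiometrics,
                               localizedReason: "Unlock your private notes") { success, evalError in
            if success {
                print("Fingerprint matched, granting access")
            } else {
                print("Authentication failed: \(evalError?.localizedDescription ?? "unknown error")")
            }
        }
    } else {
        print("Biometrics not available: \(error?.localizedDescription ?? "unknown error")")
    }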

Apple thus argues that even if the kernel is compromised, the integrity of data protection will be maintained. As per Apple's documentation, "Each Secure Enclave is provisioned during fabrication with its own UID (Unique ID) that is not accessible to other parts of the system and is not known to Apple. When the device starts up, an ephemeral key is created, entangled with its UID, and used to encrypt the Secure Enclave's portion of the device's memory space".
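
The 'entanglement' Apple describes happens in dedicated hardware, but the general idea of mixing a device-unique secret with a fresh random value can be sketched in software. The following CryptoKit-based example is only a conceptual analogy; the derivation scheme and names are assumptions, not Apple's actual implementation.

    import CryptoKit
    import Foundation

    // Conceptual analogy only: the real UID is fused into the Secure Enclave
    // hardware and is never readable by software.
    let deviceUID = SymmetricKey(size: .bits256)        // stand-in for the fused UID
    let ephemeralSalt = Data((0..<32).map { _ in UInt8.random(in: .min ... .max) })  // fresh at every boot

    // Derive a per-boot key that depends on both the device UID and the
    // ephemeral value, so it is unique to this device and this boot.
    let memoryKey = HKDF<SHA256>.deriveKey(
        inputKeyMaterial: deviceUID,
        salt: ephemeralSalt,
        info: Data("secure-enclave-memory".utf8),
        outputByteCount: 32
    )
    // memoryKey would then encrypt the Secure Enclave's region of memory.
    print("Derived \(memoryKey.bitCount)-bit memory encryption key")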

Code Signing

Apps have today become critical components of any mobile operating system. Apple believes enforcing strict security at the application level is important to ensure the overall security of the device. Apple has gone to great lengths to make this happen, and code signing is one step in that direction. To put it simply, Apple does not allow any app to run unless it has been approved! To ensure that all apps come from a trusted and approved source and have not been tampered with, iOS requires all apps to be signed by Apple. Default apps like Safari are signed by Apple. Third-party apps must also be verified and signed by Apple. In other words, the chain-of-trust principle discussed above continues from the bootloader to the OS to apps. But how does this actually work? Does this mean I cannot run an app developed by me if it's not signed by Apple?

In order to develop and install apps on iOS devices, developers must register with Apple and join the iOS Developer Program. The real-world identity of each developer, whether an individual or a business, is verified by Apple before their certificate is issued. This certificate enables developers to sign apps and submit them to the App Store for distribution. As a result, all apps in the App Store have been submitted by an identifiable person or organization, serving as a deterrent to the creation of malicious apps. These apps are further reviewed by Apple to ensure they operate as described and don't contain obvious bugs or other problems. Apple believes this process would give customers more confidence in the quality of apps they buy.

If companies want to use in-house apps for their internal purposes, they need to apply for the iOS Developer Enterprise Program (iDEP). Apple approves applicants after verifying their identity and eligibility. Once an organization becomes a member of iDEP, it can register to obtain a Provisioning Profile. This profile is what permits in-house apps to run on the devices it authorizes. Users must have the Provisioning Profile installed in order to run the in-house apps. This ensures that only the organization's intended users are able to load the apps onto their iOS devices.
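
One practical side effect is that an in-house (enterprise or ad-hoc) build carries its provisioning profile inside the app bundle as embedded.mobileprovision, whereas an App Store build does not. The rough sketch below shows how an app might inspect that file; the key it searches for is an assumption about typical enterprise profiles, not an official API.

    import Foundation

    // Rough sketch: look for the provisioning profile that ships inside
    // enterprise/ad-hoc builds. App Store builds normally have no such file.
    if let profileURL = Bundle.main.url(forResource: "embedded",
                                        withExtension: "mobileprovision"),
       let rawProfile = try? Data(contentsOf: profileURL) {
        // The file is a CMS-signed plist, so we simply scan its text for a
        // key that enterprise-distributed profiles typically contain.
        let text = String(decoding: rawProfile, as: UTF8.self)
        let looksEnterprise = text.contains("ProvisionsAllDevices")
        print("Provisioning profile found; enterprise distribution: \(looksEnterprise)")
    } else {
        print("No embedded provisioning profile (typical of an App Store build)")
    }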

In-house apps also check to ensure the signature is valid at runtime. Apps with an expired or revoked certificate will not run. This code signing process is depicted in the following figure.

In this article, we have explored three major iOS security features: the secure boot process, the Secure Enclave, and application code signing. In the next part, we will look at other security features such as data protection, encryption and so on. 'Til then, Happy Hacking!

Rorot

Rorot (@rorot333) is an Information Security Professional with 5.5 years of experience in Penetration testing & Vulnerability assessments of web and mobile applications. He is currently a security researcher at Infosec Institute. Twitter: @rorot333 Email: rorot33@gmail.com