The Android Platform Security Model∗

arXiv:1904.05572v1 [cs.CR] 11 Apr 2019

René Mayrhofer, Google, rmayrhofer@google.com
Jeffrey Vander Stoep, Google, jeffv@google.com
Chad Brubaker, Google, cbrubaker@google.com
Nick Kralevich, Google, nnk@google.com

ABSTRACT

Android is the most widely deployed end-user focused operating system. With its growing set of use cases encompassing communication, navigation, media consumption, entertainment, finance, health, and access to sensors, actuators, cameras, or microphones, its underlying security model needs to address a host of practical threats in a wide variety of scenarios while being useful to non-security experts. The model needs to strike a difficult balance between security, privacy, and usability for end users, assurances for app developers, and system performance under tight hardware constraints. While many of the underlying design principles have implicitly informed the overall system architecture, access control mechanisms, and mitigation techniques, the Android security model has previously not been formally published. This paper aims to both document the abstract model and discuss its implications. Based on a definition of the threat model and Android ecosystem context in which it operates, we analyze how the different security measures in past and current Android implementations work together to mitigate these threats. There are some special cases in applying the security model, and we discuss such deliberate deviations from the abstract model.

KEYWORDS

Android, security, operating system, informal model

1 INTRODUCTION

Android is, at the time of this writing, the most widely deployed end-user operating system. With more than 2 billion monthly active devices [9] and a general trend towards mobile use of Internet services, Android is now the most common interface for global users to interact with digital services. Across different form factors (including e.g. phones, tablets, wearables, TV, Internet-of-Things, automobiles, and more special-use categories) there is a vast – and still growing – range of use cases from communication, media consumption, and entertainment to finance, health, and physical sensors/actuators. Many of these applications are increasingly security and privacy critical, and Android as an OS needs to provide sufficient and appropriate assurances to users as well as developers.

To balance the different (and sometimes conflicting) needs and wishes of users, application developers, content producers, service providers, and employers, Android is fundamentally based on a multi-party consent¹ model: an action should only happen if all involved parties consent to it. If any party does not consent, the action is blocked. This is different to the security models that more traditional operating systems implement, which are focused on user access control and do not explicitly consider other stakeholders. While the multi-party model has implicitly informed architecture and design of the Android platform from the beginning, it has been refined and extended based on experience gathered from past releases. This paper aims to both document the Android security model and systematically analyze its implications in the context of ecosystem constraints and historical developments. Specifically, we make the following contributions:

(1) We motivate and for the first time define the Android security model based on security principles and the wider context in which Android operates. Note that the core three-party consent model described and analyzed in this paper has been implicitly informing Android security mechanisms since the earliest versions, and we therefore systematize knowledge that has, in parts, existed before, but that was not formally published so far.
(2) We define the threat model and how the security model addresses it and discuss implications as well as necessary special case handling.
(3) We explain how AOSP (Android Open Source Project, the reference implementation of the Android platform) enforces the security model based on multiple interacting security measures on different layers.
(4) We identify currently open gaps and potential for future improvement of this implementation.

This paper focuses on security and privacy measures in the Android platform itself, i.e. code running on user devices that is part of AOSP. There are complementary security services in the form of Google Play Protect (GPP) scanning applications submitted to Google Play and on-device (Verify Apps or Safe Browsing as opt-in services) as well as Google Play policy and other legal frameworks. These are out of scope of the current paper, but are covered by related work [6, 28, 50, 93]. However, we explicitly point out one policy change in Google Play with potentially significant positive effects for security: Play now requires that new apps and app updates target a recent Android API level, which will allow Android to deprecate and remove APIs known to be abused or that have had security issues in the past [35].

∗ Last updated in March 2019 based on Android 9.0 as released.
¹ Throughout the paper, the term 'consent' is used to refer to various technical methods of declaring or enforcing a party's intent, rather than the legal requirement or standard found in many privacy legal regimes around the world.
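The multi-party consent rule can be sketched as a small default-deny arbitration function. This is an illustrative model only; Android enforces consent through the OS mechanisms described later in the paper, not through any single function:

```python
# Illustrative sketch of multi-party consent (not actual Android code):
# an action proceeds only if every involved party consents; a missing
# or negative answer from any party vetoes it (default-deny).

def action_allowed(consents: dict) -> bool:
    """consents maps each involved party ('user', 'platform',
    'developer') to True/False; absent parties count as non-consent."""
    parties = ("user", "platform", "developer")
    return all(consents.get(p, False) for p in parties)

# A data-sharing action with all three consents is allowed:
print(action_allowed({"user": True, "platform": True, "developer": True}))  # True
# Any single veto, or simply missing consent, blocks the action:
print(action_allowed({"user": True, "platform": True}))  # False
```

Note the default-deny behavior: the model never has to enumerate forbidden actions, because anything not explicitly agreed to by all parties is blocked.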
In the following, we will first introduce Android security principles, and the ecosystem context and threat analysis that are the basis of the Android security model (Section 2). Then, we define the central security model (Section 3) and its implementation in the form of OS architecture and enforcement mechanisms on different OS layers (Section 4). Note that all implementation specific sections refer to Android Pie (9.0) at the time of its initial release unless mentioned otherwise (cf. [79]). We will refer to earlier Android version numbers instead of their code names: 4.1–4.3 (Jelly Bean), 4.4 (KitKat), 5.x (Lollipop), 6.x (Marshmallow), 7.x (Nougat), 8.x (Oreo). All tables are based on an analysis of security relevant changes to the whole AOSP code base between Android releases 4.x and 9.0 (inclusive), spanning about 7 years of code evolution. Finally, we discuss special cases (Section 5) and related work in terms of other security models (Section 6).

2 ANDROID BACKGROUND

Before introducing the security model, we explain the context in which it needs to operate, both in terms of ecosystem requirements and platform security principles.

2.1 Ecosystem context

Some of the design decisions need to be put in context of the larger ecosystem, which does not exist in isolation. A successful ecosystem is one where all parties benefit when it grows, but it also requires a minimum level of mutual trust. This implies that a platform must create safe-by-default environments where the main parties (end user, application developer, operating system) can define mutually beneficial terms of engagement. If these parties cannot come to an agreement, then the most trust-building operation is to disallow the action (default-deny). The Android platform security model introduced below is based on this notion.

This section is not comprehensive, but briefly summarizes those aspects of the Android ecosystem that have direct implications for the security model:

Android is an end user focused operating system. Although Android strives for flexibility, the main focus is on typical users. The obvious implication is that, as a consumer OS, it must be useful to users and attractive to developers.

The end user focus implies that user interfaces and workflows need to be safe by default and require explicit intent for any actions that could compromise security or privacy. This also means that the OS must not offload technically detailed security or privacy decisions to non-expert users who are not sufficiently skilled or experienced to make them [5].

The Android ecosystem is immense. Different statistics show that in the last few years, the majority of a global, intensely diverse user base already used mobile devices to access Internet resources (i.e. 63% in the US [1], 56% globally [2], with over 68% in Asia and over 80% in India). Additionally, there are hundreds of different OEMs (Original Equipment Manufacturers, i.e. device manufacturers) making tens of thousands of Android devices in different form factors [80] (including, but not limited to, standard smartphones and tablets, watches, glasses, cameras and many other Internet of Things device types, handheld scanners/displays and other special-purpose worker devices, TVs, cars, etc.). Some of these OEMs do not have detailed technical expertise, but rely on ODMs (Original Device Manufacturers) for developing hardware and firmware and then re-package or simply re-label devices with their own brand. Only devices shipping with Google services integration need to get their firmware certified, but devices simply based off AOSP can be made without permission or registration. Therefore, there is no single register listing all OEMs, and the list is constantly changing with new hardware concepts being continuously developed. One implication is that changing APIs and other interfaces can lead to large changes in the device ecosystem and take time to reach most of these use cases.

However, devices using Android as a trademarked name to advertise their compatibility with Android apps need to pass the Compatibility Test Suite (CTS). Developers rely on this compatibility when writing apps for this wide variety of different devices. In contrast to some other platforms, Android explicitly supports installation of apps from arbitrary sources, which led to the development of different app stores and the existence of apps outside of Google Play. Consequently, there is a long tail of apps with a very specific purpose, being installed on only few devices, and/or targeting old Android API releases. Definition of and changes to APIs need to be considerate of the huge number of applications that are part of the Android ecosystem.

Apps can be written in any language. As long as apps interface with the Android framework using the well-defined Java language APIs for process workflow, they can be written in any programming language, with or without runtime support, compiled or interpreted. Android does not currently support non-Java language APIs for the basic process lifecycle control, because they would have to be supported in parallel, making the framework more complex and therefore more error-prone. Note that this restriction is not directly limiting, but apps need to have at least a small Java language wrapper to start their initial process and interface with fundamental OS services. The important implication of this flexibility for security mechanisms is that they cannot rely on compile-time checks or any other assumptions on the build environment. Therefore, Android security needs to be based on runtime protections around the app boundary.

2.2 Android security principles

From the start, Android has assumed a few basic security and privacy principles that can be seen as an implicit contract between many parties in this open ecosystem:

Actors control access to the data they create. Any actor that creates a data item is implicitly granted control over this particular instance of data representation. Note that this refers to the technical act of protecting data, either on the filesystem or in memory — but does not automatically imply ownership over data in the legal sense.

Consent is informed and meaningful. Actors consenting to any action must be empowered to base their decision on information about the action and its implications and must have meaningful ways to grant or deny this consent. This applies to both users and developers, although very different technical means of enforcing
(lack of) consent apply. Consent is not only required from the actor that created a data item, but from all involved actors. Consent decisions should be enforced and not self-policed.

Safe by design/default. Components should be safe by design. That is, the default use of an operating system component or service should always protect security and privacy assumptions, potentially at the cost of blocking some use cases. This principle applies to modules, APIs, communication channels, and generally to interfaces of all kinds. When variants of such interfaces are offered for more flexibility (e.g. a second interface method with more parameters to override default behavior), these should be hard to abuse, either unintentionally or intentionally. Note that this architectural principle targets developers, which includes device manufacturers, but implicitly includes users in how security is designed and presented in user interfaces. Android targets a wide range of developers and intentionally keeps barriers to entry low for app development. Making it hard to abuse APIs not only guards against malicious adversaries, but also mitigates genuine errors resulting e.g. from incomplete knowledge of an interface definition or caused by developers lacking experience in secure system design. As in the defense in depth approach, there is no single solution to making a system safe by design. Instead, this is considered a guiding principle for defining new interfaces and refining – or, when necessary, deprecating and removing – existing ones.

Defense in depth. A robust security system is not sufficient if the acceptable behavior of the operating system allows an attacker to accomplish all of their goals without bypassing the security model (e.g. ransomware encrypting all files it has access to under the access control model). Specifically, violating any of the above principles should require such bypassing of controls on-device (in contrast to relying on off-device verification e.g. at build time). Therefore, the primary goal of any security system is to enforce its model. For Android operating in a multitude of environments (see below for the threat model), this implies an approach that does not immediately fail when a single assumption is violated or a single implementation bug is found, even if the device is not up to date. Defense in depth is characterized by rendering individual vulnerabilities more difficult or impossible to exploit, and increasing the number of vulnerabilities required for an attacker to achieve their goals. We primarily adopt four common security strategies to prevent adversaries from bypassing the security model: isolation and containment, exploit mitigation, integrity, and patching/updates. Their implementation will be discussed in more detail in section 4.

2.3 Threat model

Threat models for mobile devices are different from those commonly used for desktop or server operating systems for two major reasons: by definition, mobile devices are easily lost or stolen, and they connect to untrusted networks as part of their expected usage. At the same time, by being close to users at most times, they are also exposed to even more privacy sensitive data than many other categories of devices. Recent work [73] previously introduced a layered threat model for mobile devices which we adopt for discussing the Android security model within the scope of this paper:

Adversaries can get physical access to Android devices. For all mobile and wearable devices, we have to assume that they will potentially fall under physical control of adversaries at some point. The same is true for other Android form factors such as things, cars, TVs, etc. Therefore, we assume Android devices to be either directly accessible to adversaries or to be in physical proximity to adversaries as an explicit part of the threat model. This includes loss or theft, but also multiple (benign but potentially curious) users sharing a device (such as a TV or tablet). We derive specific threats due to physical or proximal access:

T1 Powered-off devices under complete physical control of an adversary (with potentially high sophistication up to nation state level attackers), e.g. border control or customs checks.
T2 Screen locked devices under complete physical control of an adversary, e.g. thieves trying to exfiltrate data for additional identity theft.
T3 Screen unlocked (shared) devices under control of an authorized but different user, e.g. intimate partner abuse, voluntary submission to a border control or customs check.
T4 (Screen locked or unlocked) devices in physical proximity to an adversary (with the assumed capability to control all available radio communication channels, including cellular, WiFi, Bluetooth, GPS, NFC, and FM), e.g. direct attacks through Bluetooth [30]. Although NFC could be considered to be a separate category to other proximal radio attacks because of the scale of distance, we still include it in the threat class of proximity instead of physical control.

Network communication is untrusted. The standard assumption of network communication under complete control of an adversary certainly also holds for Android devices. This includes the first hop of network communication (e.g. captive WiFi portals breaking TLS connections and malicious fake access points) as well as other points of control (e.g. mobile network operators or national firewalls), summarized in the usual Dolev-Yao model [40] with additional relay threats for short-range radios (e.g. NFC or BLE wormhole attacks). For practical purposes, we mainly consider two network-level threats:

T5 Passive eavesdropping and traffic analysis, including tracking devices within or across networks, e.g. based on MAC address or other device network identifiers.
T6 Active manipulation of network traffic, e.g. MITM on TLS connections.

These two threats are different from [T4] (proximal radio attacks) in terms of scalability of attacks. Controlling a single choke point in a major network can be used to attack a large number of devices, while proximal (last hop) radio attacks require physical proximity to target devices.

Untrusted code is executed on the device. One fundamental difference to other mobile operating systems is that Android intentionally allows (with explicit consent by end users) installation of application code from arbitrary sources, and does not enforce vetting of apps by a central instance. This implies attack vectors on multiple levels (cf. [73]):

T7 Abusing APIs supported by the OS with malicious intent, e.g. spyware.
T8 Exploiting bugs in the OS, e.g. kernel, drivers, or system services [36–38, 90].
T9 Abusing APIs supported by other apps installed on the device [89].
T10 Untrusted code from the web (i.e. JavaScript) is executed without explicit consent.
T11 Mimicking system or other app user interfaces to confuse users (based on the knowledge that standard in-band security indicators are not effective [39, 81]), e.g. to input PIN/password into a malicious app [49].
T12 Reading content from system or other app user interfaces, e.g. to screen-scrape confidential data from another app [58, 63].
T13 Injecting input events into system or other app user interfaces [51].

Untrusted content is processed by the device. In addition to directly executing untrusted code, devices process a wide variety of untrusted data, including rich (in the sense of complex structure) media. This directly leads to threats concerning processing of data and metadata:

T14 Exploiting code that processes untrusted content in the OS or apps, e.g. in media libraries [88]. This can be both a local as well as a remote attack surface, depending on where input data is taken from.
T15 Abusing unique identifiers for targeted attacks (which can happen even on trusted networks), e.g. using a phone number or email address for spamming or correlation with other data sets, including locations.

3 THE ANDROID PLATFORM SECURITY MODEL

The basic security model described in this section has informed the design of Android, and has been refined but not fundamentally changed. Given the ecosystem context and general Android principles explained above, the Android security model balances security and privacy requirements of users with security requirements of applications and the platform itself. The threat model described above includes threats to all stakeholders, and the security model and its enforcement by the Android platform aims to address all of them. The Android platform security model is informally defined by 5 rules:

○1 Three party consent. No action should be executed unless all three main parties agree — i.e. user, platform, and developer (implicitly representing stake holders such as content producers and service providers). Any one party can veto the action. This three-party consent spans the traditional two dimensions of subjects (users and application processes) vs. objects (files, network sockets and IPC interfaces, memory regions, virtual data providers, etc.) that underlie most security models (e.g. [91]). Focusing on (regular and pseudo) files as the main category of objects to protect, the default control over these files depends on their location and which party created them:

• Data in shared storage is controlled by users.
• Data in private app directories and app virtual address space is controlled by apps.
• Data in special system locations is controlled by the platform (e.g. list of granted permissions).

However, it is important to point out that, under three party consent, even if one party primarily controls a data item, it may only act on it if the other two parties consent. Control over data also does not imply ownership (which is a legal concept rather than a technical one and therefore outside the scope of an OS security model).

Note that there are corner cases in which only two parties may need to consent (for actions in which the user only uses platform/OS services without involvement of additional apps) or a fourth party may be introduced (e.g. on devices or profiles controlled by a mobile device management, this policy is also considered for consenting to an action).

○2 Open ecosystem access. Both users and developers are part of an open ecosystem that is not limited to a single application store. Central vetting of developers or registration of users is not required. This aspect has an important implication for the security model: generic app-to-app interaction is explicitly supported. Instead of creating specific platform APIs for every conceivable workflow, app developers are free to define their own APIs they offer to other apps.

○3 Security is a compatibility requirement. The security model is part of the Android specification, which is defined in the Compatibility Definition Document (CDD) [10] and enforced by the Compatibility (CTS), Vendor (VTS), and other test suites. Devices that do not conform to CDD and do not pass CTS are not Android. Within the scope of this paper, we define rooting as modifying the system to allow starting processes that are not subject to sandboxing and isolation. Such rooting, both intentional and malicious, is a specific example of a non-compliant change which violates CDD. As such, only CDD-compliant devices are considered. While many devices support unlocking their bootloader and flashing modified firmware², such modifications may be considered incompatible under CDD if security assurances do not hold. Verified boot and hardware key attestation can be used to validate if currently running firmware is in a known-good state, and in turn may influence consent decisions by users and developers.

○4 Factory reset restores the device to a safe state. In the event of security model bypass leading to a persistent compromise, a factory reset, which wipes/reformats the writable data partitions, returns a device to a state that depends only on integrity protected partitions. In other words, system software does not need to be re-installed, but wiping the data partition(s) will return a device to its default state. Note that the general expectation is that the read-only device software may have been updated since originally taking it out of the box, which is intentionally not downgraded by factory reset. Therefore, more specifically, factory reset returns an Android device to a state that only depends on system code that is covered by Verified Boot, but does not depend on writable data partitions.

² Google Nexus and Pixel devices as well as many others support the standard fastboot oem unlock command to allow flashing any firmware images to actively support developers and power users. However, executing this unlocking workflow will forcibly factory reset the device (wiping all data) to make sure that security guarantees are not retroactively violated for data on the device.
○5 Applications are security principals. The main difference to traditional operating systems that run apps in the context of the logged-in user account is that Android apps are not considered to be fully authorized agents for user actions. In the traditional model typically implemented by server and desktop OS, there is often no need to even exploit the security boundary because running malicious code with the full permissions of the main user is sufficient for abuse. Examples are many, including file encrypting ransomware [59, 84] (which does not violate the OS security model if it simply re-writes all the files the current user account has access to) and private data leakage (e.g. browser login tokens [70], history or other tracking data, cryptocurrency wallet keys, etc.).

Summary. Even though, at first glance, the Android security model grants less power to users compared to traditional operating systems that do not impose a multi-party consent model, there is an immediate benefit to end users: if one app cannot act with full user privileges, the user cannot be tricked into letting it access data controlled by other apps. In other words, requiring application developer consent – enforced by the platform – helps avoid user confusion attacks and therefore better protects private data.

4 IMPLEMENTATION

Android's security measures implement the security model and are designed to address the threats outlined above. In this section we describe security measures and indicate which threats they mitigate, taking into account the architectural security principles of 'defense in depth' and 'safe by design'.

4.1 Consent

Methods of giving meaningful consent vary greatly between actors, as well as potential issues and constraints.

4.1.1 Developer(s). Unlike traditional desktop operating systems, Android ensures that the developer consents to actions on their app or their app's data. This prevents large classes of abusive behavior where unrelated apps inject code into or steal data from other applications on a user's device.

Consent for developers, unlike the user, is enshrined via the code they sign and the system executes. For example, an app can consent to the user sharing its data by providing a respective mechanism, e.g. based on OS sharing methods such as built-in implicit Intent resolution chooser dialogs [11]. Another example is debugging: as assigned virtual memory content is controlled by the app, debugging from an external process is only allowed if an app consents to it (specifically through the debuggable flag in the app manifest). Meaningful consent then is ensuring that APIs and their behaviors are clear and the developer understands how their application is interacting with or providing data to other components. Additionally, we assume that developers of varying skill levels may not have a complete understanding of security nuances, and as a result APIs must also be safe by default and difficult to incorrectly use in order to avoid accidental security regressions.

In order to ensure that it is the app developer and not another party that is consenting, applications are signed by the developer (or, when using key rotation functionality, a key that was previously granted this ability by the app). This prevents third parties — including the app store — from replacing or removing code or resources in order to change the app's intended behavior. However, the app signing key is trusted implicitly upon installation, so replacing or modifying apps in transit (e.g. when side-loading apps) is currently out of scope of the platform security model and may violate developer consent.

4.1.2 The Platform. While the platform, like the developer, consents via code signing, the goals are quite different: the platform acts to ensure that the system functions as intended. This includes enforcing regulatory or contractual requirements as well as taking an opinionated stance on what kinds of behaviors are acceptable. Platform consent is enforced via Verified Boot (see below for details) protecting the system images from modification as well as platform applications using the platform signing key and associated permissions, much like applications.

4.1.3 User(s). Achieving meaningful user consent is by far the most difficult and nuanced of these consent challenges. Some of the guiding principles have always been core to Android, while others were refined based on experiences during the 10 years of development so far:

• Avoid over-prompting. Over-prompting the user leads to prompt fatigue and blindness (cf. [7]). Prompting the user with a yes/no prompt for every action does not lead to meaningful consent as users become blind to the prompts due to their regularity.
• Prompt in a way that is understandable. Users are assumed not to be experts or understand nuanced security questions (cf. [48]). Prompts and disclosures must be phrased in a way that a non-technical user can understand the effects of their decision.
• Prefer pickers and transactional consent over wide granularity. When possible, we limit access to specific items instead of the entire set. For example, the Contacts Picker allows the user to select a specific contact to share with the application instead of using the Contacts permission. These both limit the data exposed as well as present the choice to the user in a clear and intuitive way.
• The OS must not offload a difficult problem onto the user. Android regularly takes an opinionated stance on what behaviors are too risky to be allowed and may avoid adding functionality that may be useful to a power user but dangerous to an average user.
• Provide users a way to undo previously made decisions. Users can make mistakes. Even the most security and privacy-savvy users may simply press the wrong button from time to time, which is even more likely when they are tired or distracted. To mitigate against such mistakes or the user simply changing their mind, it should be easy for the user to undo a previous decision whenever possible. This may vary from denying previously granted permissions to removing an app from the device entirely.
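Returning to developer consent (Section 4.1.1): the signing-key rule can be approximated in a few lines, where an update is only accepted if it is signed with the installed app's certificate or one previously added to its rotation lineage. This is a simplified sketch, not the actual platform implementation; the real checks live in Android's package manager and also verify the signatures themselves:

```python
import hashlib

def cert_id(cert_der: bytes) -> str:
    # Identify a signing certificate by the digest of its encoded form.
    return hashlib.sha256(cert_der).hexdigest()

def update_allowed(installed_lineage: list, update_cert: bytes) -> bool:
    """Accept an update only if its signing certificate is the installed
    app's certificate or one previously granted authority via key
    rotation; anyone else (including the app store) is rejected."""
    return cert_id(update_cert) in {cert_id(c) for c in installed_lineage}

old_key, new_key, attacker_key = b"dev-key-1", b"dev-key-2", b"mallory"
print(update_allowed([old_key], old_key))            # True: same developer key
print(update_allowed([old_key], attacker_key))       # False: third party rejected
print(update_allowed([old_key, new_key], new_key))   # True: rotated key in lineage
```

This also illustrates the caveat from the text: the model can only compare against the key trusted at install time, so an app modified before its first installation carries whatever key the modifier chose.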
Additionally, it is critical to ensure that the user who is consenting is the legitimate user of the device and not another person with physical access to the device ([T1]-[T3]), which directly relies on the next component in the form of the Android lock screen. Implementing model rule ① is cross-cutting on all system layers.
We use two examples to better describe the consent parties:
• Sharing data from one app to another requires:
– user consent through the user selecting a target app in the share dialog;
– developer consent of the source app by initiating the share with the data (e.g. an image) they want to allow out of their app;
– developer consent of the target app by accepting the shared data; and
– platform consent by arbitrating the data access and ensuring that the target app cannot access any other data than the explicitly shared item through the same link, which forms a temporary trust relationship between two apps.
• Changing a mobile network operator (MNO) configuration option requires:
– user consent by selecting the options in a settings dialog;
– (MNO app) developer consent by implementing options to change these configuration items, potentially querying policy on backend systems; and
– platform consent by verifying e.g. policies based on country regulations and ensuring that settings do not impact platform or network stability.

4.2 Authentication
Authentication is a gatekeeper function for ensuring that a system interacts with its owner or legitimate user. On mobile devices the primary means of authentication is via the lockscreen. Note that a lockscreen is an obvious trade-off between security and usability: On the one hand, users unlock phones for short (10-250 seconds) interactions about 50 times per day on average and even up to 200 times in exceptional cases [45, 56], and the lockscreen is obviously an immediate hindrance to frictionless interaction with a device [54, 55]. On the other hand, devices without a lockscreen are immediately open to being abused by unauthorized users ([T1]-[T3]), and the OS cannot reliably enforce user consent without authentication.
In their current form, lockscreens on mobile devices largely enforce a binary model — either the whole phone is accessible, or the majority of functions (especially all security or privacy sensitive ones) are locked. Neither long, semi-random alphanumeric passwords (which would be highly secure but not usable for mobile devices) nor swipe-only lockscreens (usable, but not offering any security) are advisable. Therefore, it is critically important for the lockscreen to strike a reasonable balance between security and usability.
Towards this end, recent Android releases use a tiered authentication model where a secure knowledge-factor based authentication mechanism can be backed by convenience modalities that are functionally constrained based on the level of security they provide. The added convenience afforded by such a model helps drive lockscreen adoption and allows more users to benefit both from the immediate security benefits of a lockscreen and from features such as file-based encryption that rely on the presence of an underlying user-supplied credential. As an example of how this helps drive lockscreen adoption, starting with Android 7.x we see that 77% of devices with fingerprint sensors have a secure lockscreen enabled, while only 50% of devices without fingerprints have a secure lockscreen³.

³ These numbers are from internal analysis that has not yet been formally published.

As of Android 9.0, the tiered authentication model splits modalities into three tiers.
• Primary Authentication modalities are restricted to knowledge factors and by default include PIN, pattern, and password. Primary authentication provides access to all functions on the phone.
• Secondary Authentication modalities are required to be 'strong' biometrics as defined by their spoof and imposter acceptance rates [78]. Accounting for an explicit attacker in the threat model helps reduce the potential for insecure unlock methods [75]. Secondary modalities are also prevented from performing some actions — for example, they do not decrypt file-based or full-disk encrypted user data partitions (such as on first boot) and are required to fall back to primary authentication once every 72 hours.
• Tertiary Authentication modalities are those that are either weak biometrics that do not meet the spoofability bar or alternate modalities such as unlocking when paired with a trusted Bluetooth device, or unlocking at trusted locations. Tertiary modalities are subject to all the constraints of secondary modalities, but are additionally restricted from granting access to Keymaster auth-bound keys (such as those required for payments) and also require a fallback to primary authentication after any 4-hour idle period.
The Android lockscreen is currently implemented by Android system components above the kernel, specifically Keyguard and the respective unlock methods (some of which may be OEM specific). User knowledge factors of secure lockscreens are passed on to Gatekeeper/Weaver (explained below) both for matching them with stored templates and for deriving keys for storage encryption. One implication is that a kernel compromise could lead to bypassing the lockscreen — but only after the user has logged in for the first time after reboot.

4.3 Isolation and Containment
One of the most important parts of enforcing the security model is to enforce it at runtime against potentially malicious code already running on the device. The Linux kernel provides much of the foundation and structure upon which Android's security model is based. Process isolation provides the fundamental security primitive for sandboxing. With very few exceptions, the process boundary is where security decisions are made and enforced — Android intentionally does not rely on in-process compartmentalization such as the Java security model. The security boundary of a process is comprised of the process boundary and its entry points and implements rule ②: an app does not have to be vetted or pre-processed to run within the sandbox. Strengthening this boundary can be achieved by a number of means such as:
• Access control: adding permission checks, increasing the granularity of permission checks, or switching to safer defaults (e.g. default deny).
• Attack surface reduction: reducing the number of entry points, i.e. the principle of least privilege.
• Containment: isolating and de-privileging components, particularly ones that handle untrusted content.
• Architectural decomposition: breaking privileged processes into less privileged components and applying attack surface reduction.
• Separation of concerns: avoiding duplication of functionality.
In this section we describe the various sandboxing and access control mechanisms used on Android on different layers and how they improve the overall security posture.

4.3.1 Permissions
Android uses three distinct permission mechanisms to perform access control:
• Discretionary Access Control (DAC): Processes may grant or deny access to resources that they own by modifying permissions on the object (e.g. granting world read access) or by passing a handle to the object over IPC. On Android this is implemented using UNIX-style permissions that are enforced by the kernel and URI permission grants. Processes running as the root user often have broad authority to override UNIX permissions (subject to MAC permissions – see below). URI permission grants provide the core mechanism for app-to-app interaction, allowing an app to grant selective access to pieces of data it owns.
• Mandatory Access Control (MAC): The system has a security policy that dictates what actions are allowed. Only actions explicitly granted by policy are allowed. On Android this is implemented using SELinux [87] and primarily enforced by the kernel. Android makes extensive use of SELinux to protect system components and assert security model requirements during compatibility testing.
• Android permissions gate access to sensitive data and services. Enforcement is primarily done in userspace by the data/service provider (with notable exceptions such as INTERNET). Permissions are defined statically in an app's AndroidManifest.xml [12], though not all permissions requested may be granted. Android 6.0 brought a major change by no longer guaranteeing that all requested permissions are granted when an application is installed. This was a direct result of the realization that users were not sufficiently equipped to make such a decision at installation time (cf. [47, 48, 82, 101]).
At a high level, Android permissions fall into one of five classes in increasing order of severity:
(1) Audit-only permissions: These are install-time permissions with the 'normal' protection level.
(2) Runtime permissions: These are permissions that the user must approve as part of a runtime prompt dialog. These permissions guard commonly used sensitive user data, and depending on how critical they are for the current functioning of an application, different strategies for requesting them are recommended [24].
(3) Special Access Permissions: For permissions that expose more or are higher risk than runtime permissions, there exists a special class of permissions with much higher granting friction that the application cannot show a runtime prompt for. In order for a user to allow an application to use a special access permission, the user must go to settings and manually grant the permission to the application.
(4) Privileged Permissions: These permissions are for pre-installed privileged applications only and allow privileged actions such as carrier billing.
(5) Signature Permissions: These permissions are only available to components signed with the same key as the component which declares the permission, e.g. the platform signing key. They are intended to guard internal or highly privileged actions, e.g. configuring the network interfaces.
Permission availability is defined by their protectionLevel attribute [13] with two parts (the level itself and a number of optional flags) which may broaden which applications may be granted a permission as well as how they may request it. The protection levels are:
– normal: Normal permissions are those that do not pose much privacy or security risk and are granted automatically at install time. These permissions are primarily used for auditability of app behavior.
– dangerous: Permissions with this protectionLevel are runtime permissions, and apps must both declare them in their manifest as well as request that users grant them during use. These permissions, which are fairly fine-grained to support auditing and enforcement, are grouped into logical permissions using the permissionGroup attribute. When requesting runtime permissions, the group appears as a single permission to avoid over-prompting.
– signature: Applications can only be granted such a permission if they are signed with the same key as the application that defines the permission, which is the platform signing key for platform permissions. These permissions are granted at install time if the application is allowed to use them.
Additionally, there are a number of protection flags that modify the grantability of permissions. For example, the BLUETOOTH_PRIVILEGED permission has a protectionLevel of signature|privileged, with the privileged flag allowing privileged applications to be granted the permission (even if they are not signed with the platform key).
Each of the three permission mechanisms roughly aligns with how one of the three parties grants consent (rule ①). The platform utilizes MAC, apps use DAC, and users consent by granting Android permissions. Note that permissions are not intended to be a mechanism for obtaining consent in the legal sense but a technical measure to enforce auditability and control. It is up to the app developer processing personal user data to meet applicable legal requirements.

4.3.2 Application sandbox
Android's original DAC application sandbox separated apps from each other and the system by providing each application with a
unique UNIX user ID (UID) and a directory owned by the app. This approach was quite different from the traditional desktop approach of running applications using the UID of the physical user. The unique per-app UID simplifies permission checking and eliminates racy per-process ID (PID) checks. Permissions granted to an app are stored in a centralized location (/data/system/packages.xml) to be queried by other services. For example, when an app requests location from the location service, the location service queries the permissions service to see if the requesting UID has been granted the location permission.
The UID sandbox had a number of shortcomings. Processes running as root were essentially unsandboxed and possessed extensive power to manipulate the system, apps, and private app data. Likewise, processes running as the system UID were exempt from Android permission checks and permitted to perform many privileged operations. Use of DAC meant that apps and system processes could override safe defaults and were more susceptible to dangerous behavior, such as symlink following or leaking files/data across security boundaries via IPC or fork/exec. Additionally, DAC mechanisms can only apply to files on file systems that support access control lists (respectively simple UNIX access bits). The main implication is that the FAT family of file systems, which is still commonly used on extended storage such as (micro-) SD cards or media connected through USB, does not directly support applying DAC. On Android, each app has a well-known directory on external storage devices, where the package name of the app is included in the path (e.g. /sdcard/Android/data/com.example). Since the OS already maintains a mapping from package name to UID, it can assign UID ownership to all files in these well-known directories, effectively implementing DAC on a filesystem that does not natively support it. From Android 4.4 to Android 7.x, this mapping was implemented through FUSE, while Android 8.0 and later implement an in-kernel sdcardfs for better performance. Both are equivalent in maintaining the mapping of app UIDs to implement effective DAC.
Despite its deficiencies, the UID sandbox laid the groundwork and is still the primary enforcement mechanism that separates apps from each other. It has proven to be a solid foundation upon which to add additional sandbox restrictions. These shortcomings have been mitigated in a number of ways over subsequent releases, partially through the addition of MAC policies but also through many other mechanisms such as runtime permissions and attack surface reduction (cf. Table 1).
Rooting, as defined above, has the main aim of enabling certain apps and their processes to break out of this application sandbox in the sense of granting "root" user privileges [57], which override the DAC rules (but not automatically MAC policies, which led to extended rooting schemes with processes intentionally exempt from MAC restrictions). Malware may try to apply these rooting approaches through temporary or permanent exploits and therefore bypass the application sandbox.

4.3.3 Sandboxing system processes
In addition to the application sandbox, Android launched with a limited set of UID sandboxes for system processes. Notably, Android's architects recognized the inherent risk of processing untrusted media content and so isolated the media frameworks into UID AID_MEDIA. Other processes that warranted UID isolation include the telephony stack, WiFi, and Bluetooth (cf. Table 2).

4.3.4 Sandboxing the kernel
Security hardening efforts in Android's userspace have increasingly made the kernel a more attractive target for privilege escalation attacks [95]. Hardware drivers provided by System on a Chip (SoC) vendors account for the vast majority of kernel vulnerabilities on Android [98]. Reducing app/system access to these drivers was described above, but sandboxing code inside the kernel itself also improved significantly over the various releases (cf. Table 3).

4.3.5 Sandboxing below the kernel
In addition to the kernel, the trusted computing base (TCB) on Android devices starts with the boot loader (which is typically split into multiple stages) and implicitly includes other components below the kernel, such as the trusted execution environment (TEE), hardware drivers, and the userspace components init, ueventd, and vold [25]. It is clear that the sum of all these creates sufficient complexity that, given the current state of the art, we have to assume bugs in some of them. For highly sensitive use cases, even the mitigations against kernel and system process bugs described above may not provide sufficient assurance against potential vulnerabilities.
Therefore, we explicitly consider the possibility of a kernel compromise (e.g. through directly attacking some kernel interfaces based on physical access in [T2]-[T4] or chaining together multiple bugs from user space code to reach kernel surfaces in [T8]), misconfiguration (e.g. with incorrect or overly permissive SELinux policies [34]), or bypass (e.g. by modifying the boot chain to boot a different kernel with deactivated security policies) as part of the threat model for some select scenarios. To be clear, with a compromised kernel, Android no longer meets the compatibility requirements and many of the security and privacy assurances for users and apps no longer hold. However, we can still defend against some threats even under this assumption:
• Keymaster implements the Android keystore in TEE to guard cryptographic key storage and use in the case of a run-time kernel compromise [14]. That is, even with a fully compromised kernel, an attacker cannot read key material stored in Keymaster. Apps can explicitly request keys to be stored in Keymaster, i.e. to be hardware-bound, to be only accessible after user authentication (which is tied to Gatekeeper/Weaver), and/or request attestation certificates to verify these key properties [15], allowing verification of compatibility in terms of rule ③.
• Strongbox, specified starting with Android 9.0, implements the Android keystore in separate tamper-resistant hardware (TRH) for even better isolation. This mitigates [T1] and [T2] against strong adversaries, e.g. against cold boot memory attacks [53] or hardware bugs such as Spectre/Meltdown [61, 69], Rowhammer [32, 99], or Clkscrew [92] that allow privilege escalation even from kernel to TEE. From a hardware perspective, the main application processor (AP) will always have a significantly larger attack surface than dedicated secure hardware. Adding a separate TRH affords another sandboxing layer of defense in depth.
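The guarantee that key material never leaves Keymaster can be illustrated with a plain-Java analogy (this is not the real Keymaster interface; the class and method names are hypothetical): the key is held inside a module that exposes only operations on the key, never the key bytes themselves. This is also why an attacker who compromises everything outside the module is still limited to using it as an oracle.

```java
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.security.GeneralSecurityException;

// Plain-Java analogy (NOT the real Keymaster API): key material stays
// private to the module; callers can request operations on the key but
// can never extract the key bytes.
public class KeymasterSketch {
    private final SecretKeySpec key; // never exposed to callers

    public KeymasterSketch(byte[] keyBytes) {
        this.key = new SecretKeySpec(keyBytes, "HmacSHA256");
    }

    // The only capability exported: use the key, e.g. to MAC a message.
    public byte[] sign(byte[] message) {
        try {
            Mac mac = Mac.getInstance("HmacSHA256");
            mac.init(key);
            return mac.doFinal(message);
        } catch (GeneralSecurityException e) {
            throw new AssertionError(e); // HmacSHA256 is always available in the JDK
        }
    }

    public static void main(String[] args) {
        KeymasterSketch ks = new KeymasterSketch("device-secret".getBytes());
        byte[] tag = ks.sign("attest this".getBytes());
        System.out.println("MAC length: " + tag.length); // 32 bytes for HMAC-SHA256
    }
}
```

A caller controlling the rest of the process (analogous to a compromised kernel relative to the TEE) can still invoke sign(), which is exactly the oracle problem discussed below and the reason keys can additionally be bound to user authentication or presence.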
Table 1: Application sandboxing improvements in Android releases

Release | Improvement | Threats mitigated
≤ 4.3 | Isolated process: Apps may optionally run services in a process with no Android permissions and access to only two binder services. For example, the Chrome browser runs its renderer in an isolated process for rendering untrusted web content. | [T5] [T8][T9][T10][T12][T13]
5.x | SELinux: SELinux was enabled for all userspace, significantly improving the separation between apps and system processes. Separation between apps is still primarily enforced via the UID sandbox. A major benefit of SELinux is the auditability/testability of policy. The ability to test security requirements during compatibility testing increased dramatically with the introduction of SELinux. | [T8][T14]
5.x | Webview moved to an updatable APK, independent of a full system OTA. | [T10]
6.x | Runtime permissions were introduced, which moved the request for dangerous permissions from install time to first use (cf. above description of permission classes). | [T7]
6.x | Multi-user support: SELinux categories were introduced for a per-physical-user app sandbox. | [T3]
6.x | Safer defaults on private app data: App home directory moved from 0751 UNIX permissions to 0700 (based on targetSdkVersion). | [T9]
6.x | SELinux restrictions on the ioctl system call: 59% of all app-reachable kernel vulnerabilities were through the ioctl() syscall; these restrictions limit reachability of potential kernel vulnerabilities from user space code [95, 96]. | [T8][T14]
6.x | Removal of app access to debugfs (9% of all app-reachable kernel vulnerabilities). | [T8][T14]
7.x | hidepid=2: Remove the /proc/<pid> side channel used to infer when apps were started. | [T11]
7.x | perf-event-hardening (11% of app-reachable kernel vulnerabilities were reached via perf_event_open()). | [T8]
7.x | Safer defaults on /proc filesystem access. | [T7][T11]
7.x | MITM CA certificates are not trusted by default. | [T6]
8.x | Safer defaults on /sys filesystem access. | [T7][T11]
8.x | All apps run with a seccomp filter intended to reduce kernel attack surface. | [T8][T14]
8.x | Webviews for all apps move into the isolated process. | [T10]
8.x | Apps must opt in to use cleartext network traffic. | [T5]
9.0 | Per-app SELinux sandbox (for apps with targetSdkVersion=P or greater). | [T9][T11]

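The grant semantics of the permission classes and protection levels described above can be sketched as a simplified decision function. This is illustrative only (the class, enum, and method names are invented for the sketch); the real logic lives in Android's PackageManager and is considerably more involved.

```java
// Simplified sketch of how the protectionLevel classes described above
// map to grant decisions. Illustrative only — the real checks are
// implemented by Android's PackageManager and are far more involved.
public class PermissionGrantSketch {
    enum Level { NORMAL, DANGEROUS, SIGNATURE }

    static boolean grantAtInstall(Level level, boolean sameSigningKeyAsDefiner) {
        switch (level) {
            case NORMAL:    return true;                    // audit-only, granted automatically
            case SIGNATURE: return sameSigningKeyAsDefiner; // e.g. signed with the platform key
            case DANGEROUS: return false;                   // deferred to a runtime prompt
            default:        return false;
        }
    }

    static boolean grantAtRuntime(Level level, boolean userApproved) {
        // Only dangerous permissions are granted via the runtime prompt.
        return level == Level.DANGEROUS && userApproved;
    }

    public static void main(String[] args) {
        System.out.println(grantAtInstall(Level.NORMAL, false));    // true
        System.out.println(grantAtInstall(Level.DANGEROUS, false)); // false
        System.out.println(grantAtRuntime(Level.DANGEROUS, true));  // true
    }
}
```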
Table 2: System sandboxing improvements in Android releases

Release | Improvement | Threats mitigated
4.4 | SELinux in enforcing mode: MAC for 4 root processes installd, netd, vold, zygote. | [T7][T8][T14]
5.x | SELinux: MAC for all userspace processes. | [T7][T8]
6.x | SELinux: MAC for all processes. |
7.x | Architectural decomposition of mediaserver. | [T7][T8][T14]
7.x | ioctl system call restrictions for system components [96]. | [T7][T8][T14]
8.x | Treble architectural decomposition: Move HALs (Hardware Abstraction Layer components) into separate processes, reduce permissions, restrict access to hardware drivers [33, 97]. | [T7][T8][T14]

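As an illustration of the MAC style enforced in the rows above, SELinux policy whitelists specific accesses and denies everything not explicitly allowed. The following fragment is written in standard SELinux policy syntax but is a generic, hypothetical rule for illustration, not taken verbatim from the AOSP policy:

```
# Hypothetical rule in SELinux policy syntax (not copied from AOSP policy):
# allow app processes to read and write files in app data directories.
# Any access not covered by an explicit allow rule is denied by default.
allow untrusted_app app_data_file:file { open read write getattr };
```

Because policy is declarative, it can be audited and tested during compatibility testing, which is the benefit Table 1 and Table 2 attribute to the introduction of SELinux.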
Table 3: Kernel sandboxing improvements in Android releases

Release | Improvement | Threats mitigated
5.x | Privileged eXecute Never (PXN) [3]: Disallow the kernel from executing userspace. Prevents 'ret2usr' style attacks. | [T8][T14]
6.x | Kernel threads moved into SELinux enforcing mode, limiting kernel access to userspace files. | [T8][T14]
8.x | Privileged Access Never (PAN) and PAN emulation: Prevent the kernel from accessing any userspace memory without going through hardened copy-*-user() functions [94]. | [T8][T14]

Note that only storing and using keys in TEE or TRH does not completely solve the problem of making them unusable under the assumption of a kernel compromise: if an attacker gains access to the low-level interfaces for communicating directly with Keymaster or Strongbox, they can use it as an oracle for cryptographic operations that require the private key. This is the reason why keys can be authentication bound and/or require user presence verification, e.g. by pushing a
hardware button that is detectable by the TRH to assure that keys are not used in the background without user consent.
• Gatekeeper implements verification of user lock screen factors (PIN/password/pattern) in TEE and, upon successful authentication, communicates this to Keymaster for releasing access to authentication-bound keys [16]. Weaver implements the same functionality in TRH and communicates with Strongbox. Specified for Android 9.0 and initially implemented on the Google Pixel 2 and Pixel 3 phones, we also add a property called 'Insider Attack Resistance' (IAR): without knowledge of the user's lock screen factor, an upgrade to the Weaver/Strongbox code running in TRH will wipe the secrets used for on-device encryption [74, 102]. That is, even with access to internal code signing keys, existing data cannot be exfiltrated without the user's cooperation.
• Protected Confirmation, also introduced with Android 9.0 [17], partially mitigates [T11] and [T13]. In its current scope, apps can tie usage of a key stored in Keymaster or Strongbox to the user confirming (by pushing a physical button) that they have seen a message displayed on the screen. Upon confirmation, the app receives a hash of the displayed message, which can be used to remotely verify that a user has confirmed the message. By controlling the screen output through TEE when protected confirmation is requested by an app, even a full kernel compromise (without user cooperation) cannot lead to creating these signed confirmations.

4.4 Encryption of data at rest
A second element of enforcing the security model, particularly rules ① and ③, is required when the main system kernel is not running or is bypassed (e.g. by reading directly from non-volatile storage).
Full Disk Encryption (FDE) uses a credential-protected key to encrypt the entire user data partition. FDE was introduced in Android 5.0, and while effective against [T1], it had a number of shortcomings. Core device functionality (such as the emergency dialer, accessibility services, and alarms) was inaccessible until password entry. Multi-user support introduced in Android 6.0 still required the password of the primary user before disk access.
These shortcomings were mitigated by File Based Encryption (FBE) introduced in Android 7.0. On devices with TEE or TRH, all keys are derived within these secure environments, entangling the user knowledge factor with hardware-bound random numbers that are inaccessible to the Android kernel and components above.⁴ FBE allows individual files to be tied to the credentials of different users, cryptographically protecting per-user data on shared devices [T3]. Devices with FBE also support a feature called Direct Boot which enables access to the emergency dialer, accessibility services, alarms, and receiving calls all before the user inputs their credentials.
Note that encryption of data at rest helps significantly with enforcing rule ④, as effectively wiping user data only requires deleting the master key material, which is much quicker and not subject to the complexities of e.g. flash translation layer interactions.

⁴ A detailed specification and analysis of key entanglement is subject to related work and currently in progress. A reference to this detail will be added to a later version of this paper.

4.5 Encryption of data in transit
Android assumes that all networks are hostile and could be injecting attacks or spying on traffic. In order to ensure that network-level adversaries do not bypass app data protections, Android takes the stance that all network traffic should be end-to-end encrypted. Link-level encryption is insufficient. This primarily protects against [T5] and [T6].
In addition to ensuring that connections use encryption, Android also focuses heavily on ensuring that the encryption is used correctly. While TLS options are secure by default, we have seen that it is easy for developers to incorrectly customize TLS in a way that leaves their traffic vulnerable to MITM attacks [43, 44, 52]. Table 4 lists recent improvements in terms of making network connections safe by default.

4.6 Exploit mitigation
A robust security system should assume that software vulnerabilities exist and actively defend against them. Historically, about 85% of security vulnerabilities on Android result from unsafe memory access (cf. [62, slide 54]). While this section primarily describes mitigations against memory unsafety, we note that the best defense is the memory safety offered by languages such as Java. Much of the Android framework is written in Java, effectively defending large swathes of the OS from entire categories of security bugs.
Android mandates the use of a number of mitigations including ASLR [29, 86], RWX memory restrictions (e.g. W ⊕ X, cf. [85]), and buffer overflow protections (such as stack-protector for the stack and allocator protections for the heap). Similar protections are mandated for Android kernels [94].
In addition to the mitigations listed above, Android is actively rolling out additional mitigations, focusing first on code areas which are remotely reachable (e.g. the media frameworks [26]) or have a history of high-severity security vulnerabilities (e.g. the kernel). Android has pioneered the use of the LLVM undefined behavior sanitizer (UBSAN) in production devices to protect against integer overflow vulnerabilities in the media frameworks and other security-sensitive components. Android is also rolling out LLVM Control Flow Integrity (CFI) in the kernel and security-sensitive userspace components including media, Bluetooth, WiFi, NFC, and parsers [71].
These mitigation methods work in tandem with isolation and containment mechanisms to form many layers of defense; even if one layer fails, other mechanisms aim to prevent a successful exploitation chain. Mitigation mechanisms also help to uphold rules ② and ③ without placing additional assumptions on which languages apps are written in.

4.7 System integrity
Finally, system (sometimes also referred to as device) integrity is an important defense against attackers gaining a persistent foothold. AOSP has supported Verified Boot using the Linux kernel dm-verity support since Android KitKat, providing strong integrity enforcement for Android's Trusted Computing Base (TCB) and system components to implement rule ④. Verified Boot [19] has been mandated since Android Nougat (with an exemption granted to devices which cannot perform AES crypto above 50 MiB/sec.) and makes modifications to the boot chain detectable by verifying the boot,
Table 4: Network sandboxing improvements in Android releases

Release | Improvement | Threats mitigated
6.x | usesCleartextTraffic in manifest to prevent unintentional cleartext connections [31]. | [T5][T6]
7.x | Network security config [18] to declaratively specify TLS and cleartext settings on a per-domain or app-wide basis to customize TLS connections. | [T5][T6]
9.0 | DNS-over-TLS [60] to reduce sensitive data sent over cleartext, and apps made opt-in to using cleartext traffic in their network security config. | [T5][T6]

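The network security config listed in Table 4 is a declarative XML file referenced from the app manifest. A minimal example that blocks cleartext traffic app-wide while carving out one explicitly allowed domain could look as follows (the file name and domain are placeholders for illustration):

```xml
<?xml version="1.0" encoding="utf-8"?>
<!-- res/xml/network_security_config.xml, referenced via
     android:networkSecurityConfig in AndroidManifest.xml.
     "legacy.example.com" is a placeholder domain. -->
<network-security-config>
    <base-config cleartextTrafficPermitted="false" />
    <domain-config cleartextTrafficPermitted="true">
        <domain includeSubdomains="true">legacy.example.com</domain>
    </domain-config>
</network-security-config>
```

Because the policy is declarative rather than spread through app code, the platform can enforce it uniformly and developers are less likely to weaken TLS by accident, which is the failure mode cited above [43, 44, 52].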
TEE, and additional vendor/OEM partitions, as well as performing on-access verification of blocks on the system partition [20]. That is, attackers cannot permanently modify the TCB even after all previous layers of defense have failed, leading to a successful kernel compromise. Note that this assumes the primary boot loader as root of trust to still be intact. As this is typically implemented in a ROM mask in sufficiently simple code, critical bugs at that stage are less likely.
Additionally, rollback protection with hardware support prevents attacks from flashing a properly signed but outdated system image that has known vulnerabilities and could be exploited. Finally, the Verified Boot state is included in key attestation certificates (provided by Keymaster/Strongbox) in the deviceLocked and verifiedBootState fields, which can be verified by apps as well as passed on to backend services to remotely verify boot integrity [21].
Application integrity is enforced via APK signing [22]. Every app is signed, and an update can only be installed if the new APK is signed with the same identity or by an identity that was delegated by the original signer.
With Android 9.0, only updateable apps are not covered by Verified Boot. Integrity of updateable apps is checked by Android's PackageManager during installation/update. Integrity of firmware for other CPUs (including, but not limited to, the various radio chipsets, the GPU, touch screen controllers, etc.) is out of scope of Android Verified Boot at the time of this writing, and is typically handled by OEM-specific boot loaders.

4.8 Patching
Orthogonal to all the previous defense mechanisms, vulnerable code should be fixed to close discovered holes in any of the layers. Regular patching can be seen as another layer of defense. However, shipping updated code to the huge and diverse Android ecosystem is a challenge (which is one of the reasons for applying the defense in depth strategy).
Starting in August 2015, Android has publicly released a monthly security bulletin and patches for security vulnerabilities reported to Google. To address ecosystem diversity, project Treble launched with Android 8.0, with a goal of reducing the time/cost of updating Android devices [72, 76]. In 2018, the Android Enterprise Recommended program as well as general agreements with OEMs added the requirement of 90-day

be a comprehensive list. One goal of defining the Android security model publicly is to enable researchers to discover potential additional gaps by comparing the implementation in AOSP with the model we describe, and to engage in conversation on those special cases.
• Listing packages: App discovery is currently necessary for direct app-to-app interaction, which is derived from the open ecosystem principle (rule ②).
• VPN apps may monitor/block network traffic for other apps: This is generally a deviation from the application sandbox model since one app may see and impact traffic from another app (developer consent). VPN apps are granted an exemption because of the value they offer users, such as improved privacy and data usage controls, and because user consent is clear. For applications which use end-to-end encryption, clear-text traffic is not available to the VPN application, partially restoring the confidentiality of the application sandbox.
• Backup: Data from the private app directory is backed up by default. Apps may opt out by setting fields in their manifest.
• Enterprise: Android allows so-called Device Owner (DO) or Profile Owner (PO) policies to be enforced by a Device Policy Controller (DPC) app. A DO is installed on the primary/main user account, while a PO is installed on a secondary user that acts as a work profile. Work profiles allow separation of personal from enterprise data on a single device and are based on Android multi-user support. This separation is enforced by the same isolation and containment methods that protect apps from each other but implement a significantly stricter divide between the profiles [8].
A DPC introduces a fourth party to the consent model: only if the policy allows an action (e.g. within the work profile controlled by a PO) in addition to consent by all other parties can it be executed. The distinction of personal and work profile is enhanced by the recent support of different user knowledge factors (handled by the lockscreen as explained above in subsection 4.2), which lead to different encryption keys for FBE. Note that on devices with a work profile managed by a PO but no full-device control (i.e. no DO), privacy
guaranteed security updates [23]. guarantees for the personal profile still need to hold under
this security model.
5 SPECIAL CASES • Factory Reset Protection (FRP): is an exception to not
There are some special cases that require intentional deviations storing any persistent data across factory reset (rule ○), 4
from the abstract security model to balance specific needs of various but is a deliberate deviation from this part of the model to
parties. This section describes some of these but is not intended to mitigate the threat of theft and factory reset ([T1][T2]).
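The update-integrity rules from the Verified Boot and APK-signing discussion in subsection 4.7 (an update must come from the same or a delegated signing identity, and hardware-backed rollback protection rejects properly signed but outdated images) can be sketched as follows. All names are illustrative, and reducing a signing identity to a key hash is a simplification; this is not the actual AVB or PackageManager implementation.

```python
import hashlib

def signer_id(public_key: bytes) -> str:
    # Simplification: identify a signer by the hash of its public key.
    return hashlib.sha256(public_key).hexdigest()

def update_allowed(installed_signer: str,
                   delegated_signers: set,
                   new_image_signer: bytes,
                   new_image_version: int,
                   rollback_index: int) -> bool:
    sid = signer_id(new_image_signer)
    # Rule 1: same identity as the installed version, or one delegated by it.
    same_or_delegated = sid == installed_signer or sid in delegated_signers
    # Rule 2: rollback protection rejects versions below the stored index.
    not_rolled_back = new_image_version >= rollback_index
    return same_or_delegated and not_rolled_back

orig_key = b"OEM-key"  # hypothetical key material
installed = signer_id(orig_key)

# Properly signed and at least as new as the rollback index: accepted.
print(update_allowed(installed, set(), orig_key, 5, rollback_index=4))       # True
# Properly signed but older than the stored rollback index: rejected.
print(update_allowed(installed, set(), orig_key, 3, rollback_index=4))       # False
# Signed by an unknown key: rejected even if newer.
print(update_allowed(installed, set(), b"other-key", 6, rollback_index=4))   # False
```

The second rejected case is exactly the attack rollback protection addresses: flashing a correctly signed but vulnerable older image.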
6 RELATED WORK
Classical operating system security models are primarily concerned with defining access control (read/write/execute or more finely granular) by subjects (most often single users, groups, or roles) to objects (typically files and other resources controlled by the OS, in combination with permissions sometimes also called protection domains [91]). The most common data structures for efficiently implementing these relations (which, conceptually, are sparse matrices) are Access Control Lists (ACLs) [83] and capability lists (e.g. [100]). One of the first well-known and well-defined models was the Bell-LaPadula multi-level security model [27], which defined properties for assigning permissions and can be considered the abstract basis for Mandatory Access Control and Type Enforcement schemes like SELinux. Consequently, the Android platform security model implicitly builds upon these general models and their principle of least privilege.

One fundamental difference is that, while classical models assume processes started by a user to be a proxy for their actions and therefore execute directly with user privileges, more contemporary models explicitly acknowledge the threat of malware started by a user and therefore aim to compartmentalize their actions. Many mobile OS (including Symbian as an earlier example) assign permissions to processes (i.e. applications) instead of users, and Android uses a comparable approach. A more detailed comparison to other mobile OS is out of scope for this paper, and we refer to other surveys [41, 64, 77] as well as previous analysis of Android security mechanisms and how malware exploited weaknesses [4, 42, 46, 66–68, 103].

7 CONCLUSION
In this paper, we described the Android platform security model and the complex threat model and ecosystem it needs to operate in. One of the abstract rules is a multi-party consent model that is different to most standard OS security models in the sense that it implicitly considers applications to have equal veto rights over actions in the same sense that the platform implementation and, obviously, users have. While this may seem restricting from a user point of view, it effectively limits the potential abuse a malicious app can do on data controlled by other apps; by avoiding an all-powerful user account with unfiltered access to all data (as is the default with most current desktop/server OS), whole classes of threats such as file-encrypting ransomware or direct data exfiltration become impractical.

AOSP implements the Android platform security model as well as the general security principles of ‘defense in depth’ and ‘safe by default’. Different security mechanisms combine as multiple layers of defense, and an important aspect is that even if security-relevant bugs exist, they should not necessarily lead to exploits reachable from standard user space code. While the current model and its implementation already cover most of the threat model that is currently in scope of Android security and privacy considerations, there are some deliberate special cases to the conceptually simple security model, and there is room for future work:

• Keystore already supports API flags/methods to request hardware- or authentication-bound keys. However, apps need to use these methods explicitly to benefit from improvements like Strongbox. Making encryption of app files or directories more transparent by supporting declarative use similar to network security config for TLS connections would make it easier for app developers to securely use these features.
• It is common for malware to dynamically load its second stage depending on the respective device it is being installed on, to both try to exploit specific detected vulnerabilities and hide its payload from scanning in the app store. One potential mitigation is to require all executable code to: a) be signed by a key that is trusted by the respective Android instance (e.g. with public keys that are pre-shipped in the firmware and/or can be added by end-users), or b) have a special permission to dynamically load/create code during runtime that is not contained in the application bundle itself (the APK file). This could give better control over code integrity, but would still not limit languages or platforms used to create these apps. It is recognized that this mitigation is limited to executable code. Interpreted code or server-based configuration would bypass this mitigation.
• Advanced attackers may gain access to OEM or vendor code signing keys. Even under such circumstances, it is beneficial to still retain some security and privacy assurances to users. One recent example is the specification and implementation of ‘Insider Attack Resistance’ (IAR) for updateable code in TRH [102], and extending similar defenses to higher-level software is desirable [74]. Potential approaches could be reproducible firmware builds or logs of released firmware hashes comparable to e.g. Certificate Transparency [65].
• W ⊕ X memory is already a standard protection mechanism in most current OS, including Android. However, pages that are executable but not writable are typically still readable, and such read access can leak e.g. the location of ROP gadgets. A potential improvement would be to restrict code page access to only execute but not even read (execute-only pages).
• Hardware level attacks are becoming more popular, and therefore additional (software and hardware) defense against e.g. RAM related attacks would add another layer of defense, although most probably with a trade-off in performance overhead.

However, all such future work needs to be done considering its impact on the wider ecosystem and should be kept in line with fundamental Android security principles.

ACKNOWLEDGMENTS
We would like to thank Dianne Hackborn for her influential work over a large part of the Android platform security history and insightful remarks on earlier drafts of this paper. Additionally, we thank Joel Galenson, Ivan Lozano, Paul Crowley, Shawn Willden, Jeff Sharkey, and Xiaowen Xin for input on various parts, and particularly Vishwath Mohan for direct contributions to the Authentication section. We also thank the enormous number of security researchers (https://source.android.com/security/overview/acknowledgements) who have improved Android over the years and anonymous reviewers who have contributed highly helpful feedback to earlier drafts of this paper.
REFERENCES
[1] url: https://www.stonetemple.com/mobile-vs-desktop-usage-study/ (visited on 2018-07-30).
[2] url: http://gs.statcounter.com/platform-market-share/desktop-mobile-tablet (visited on 2018-07-30).
[3] url: https://kernsec.org/wiki/index.php/Exploit_Methods/Userspace_execution (visited on 2018-07-30).
[4] Y. Acar, M. Backes, S. Bugiel, S. Fahl, P. McDaniel, and M. Smith. “SoK: Lessons Learned from Android Security Research for Appified Software Platforms”. In: 2016 IEEE Symposium on Security and Privacy (SP). May 2016, pp. 433–451. doi: 10.1109/SP.2016.33.
[5] A. Adams and M. A. Sasse. “Users Are Not the Enemy”. In: Commun. ACM 42.12 (Dec. 1999), pp. 40–46. issn: 0001-0782. doi: 10.1145/322796.322806.
[6] A. Ahn. How we fought bad apps and malicious developers in 2017. Jan. 2018. url: https://android-developers.googleblog.com/2018/01/how-we-fought-bad-apps-and-malicious.html.
[7] B. B. Anderson, A. Vance, C. B. Kirwan, J. L. Jenkins, and D. Eargle. “From Warning to Wallpaper: Why the Brain Habituates to Security Warnings and What Can Be Done About It”. In: Journal of Management Information Systems 33.3 (2016), pp. 713–743. doi: 10.1080/07421222.2016.1243947.
[8] Android Enterprise Security White Paper. Sept. 2018. url: https://source.android.com/security/reports/Google_Android_Enterprise_Security_Whitepaper_2018.pdf (visited on 2018-11-14).
[9] Android Security 2017 Year In Review. Mar. 2018. url: https://source.android.com/security/reports/Google_Android_Security_2017_Report_Final.pdf.
[10] AOSP. url: https://source.android.com/compatibility/cdd (visited on 2018-11-14).
[11] AOSP. url: https://developer.android.com/guide/components/intents-filters (visited on 2018-11-14).
[12] AOSP. url: https://developer.android.com/guide/topics/manifest/manifest-intro (visited on 2018-11-14).
[13] AOSP. url: https://developer.android.com/guide/topics/manifest/permission-element (visited on 2018-11-14).
[14] AOSP. url: https://source.android.com/security/keystore/ (visited on 2018-11-14).
[15] AOSP. url: https://developer.android.com/reference/android/security/keystore/KeyGenParameterSpec (visited on 2018-11-14).
[16] AOSP. url: https://source.android.com/security/authentication/gatekeeper (visited on 2018-11-14).
[17] AOSP. url: https://developer.android.com/preview/features/security#android-protected-confirmation (visited on 2018-11-14).
[18] AOSP. url: https://developer.android.com/training/articles/security-config (visited on 2018-11-14).
[19] AOSP. url: https://source.android.com/security/verifiedboot/verified-boot (visited on 2018-11-14).
[20] AOSP. url: https://android.googlesource.com/platform/external/avb/+/pie-release/README.md (visited on 2018-11-14).
[21] AOSP. url: https://developer.android.com/training/articles/security-key-attestation (visited on 2018-11-14).
[22] AOSP. url: https://source.android.com/security/apksigning/ (visited on 2018-11-14).
[23] AOSP. url: https://www.android.com/enterprise/recommended/requirements/ (visited on 2018-11-14).
[24] AOSP. Android platform permissions requesting guidance. url: https://material.io/design/platform-guidance/android-permissions.html#request-types (visited on 2019-03-06).
[25] AOSP. Security Updates and Resources - Process Types. url: https://source.android.com/security/overview/updates-resources#process_types.
[26] D. Austin and J. Vander Stoep. Hardening the media stack. May 2016. url: https://android-developers.googleblog.com/2016/05/hardening-media-stack.html.
[27] D. Bell and L. LaPadula. Secure Computer System Unified Exposition and Multics Interpretation. Tech. rep. MTR-2997. MITRE Corp., July 1975.
[28] J. Bender. Google Play security metadata and offline app distribution. June 2018. url: https://android-developers.googleblog.com/2018/06/google-play-security-metadata-and.html.
[29] S. Bhatkar, D. C. DuVarney, and R. Sekar. “Address Obfuscation: An Efficient Approach to Combat a Broad Range of Memory Error Exploits”. In: Proc. USENIX Security Symposium - Volume 12. USENIX Association, 2003, pp. 8–8. url: http://dl.acm.org/citation.cfm?id=1251353.1251361.
[30] BlueBorne. 2017. url: https://go.armis.com/hubfs/BlueBorne%20-%20Android%20Exploit%20(20171130).pdf?t=1529364695784.
[31] C. Brubaker. Introducing nogotofail — a network traffic security testing tool. Nov. 2014. url: https://security.googleblog.com/2014/11/introducing-nogotofaila-network-traffic.html.
[32] P. Carru. Attack TrustZone with Rowhammer. Apr. 2017. url: http://www.eshard.com/wp-content/plugins/email-before-download/download.php?dl=9465aa084ff0f070a3acedb56bcb34f5.
[33] D. Cashman. SELinux in Android O: Separating Policy to Allow for Independent Updates. Linux Security Summit. 2017. url: https://events.static.linuxfound.org/sites/events/files/slides/LSS%20-%20Treble%20%27n%27%20SELinux.pdf.
[34] H. Chen, N. Li, W. Enck, Y. Aafer, and X. Zhang. “Analysis of SEAndroid Policies: Combining MAC and DAC in Android”. In: Proceedings of the 33rd Annual Computer Security Applications Conference. ACM, 2017, pp. 553–565. isbn: 978-1-4503-5345-8. doi: 10.1145/3134600.3134638.
[35] E. Cunningham. Improving app security and performance on Google Play for years to come. Dec. 2017. url: https://android-developers.googleblog.com/2017/12/improving-app-security-and-performance.html.
[36] CVE-2017-13177. Aug. 2017. url: https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2017-13177 (visited on 2018-06-27).
[37] CVE-2017-17558: Remote code execution in media frameworks. June 2018. url: https://source.android.com/security/bulletin/2018-06-01#kernel-components (visited on 2018-06-27).
[38] CVE-2018-9341: Remote code execution in media frameworks. June 2018. url: https://source.android.com/security/bulletin/2018-06-01#media-framework (visited on 2018-06-27).
[39] R. Dhamija, J. D. Tygar, and M. Hearst. “Why Phishing Works”. In: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. ACM, 2006, pp. 581–590. isbn: 1-59593-372-7. doi: 10.1145/1124772.1124861.
[40] D. Dolev and A. C.-c. Yao. “On the security of public key protocols”. In: IEEE Transactions on Information Theory 29 (2 1983), pp. 198–208. doi: 10.1109/TIT.1983.1056650.
[41] A. Egners, B. Marschollek, and U. Meyer. Hackers in Your Pocket: A Survey of Smartphone Security Across Platforms. Tech. rep. 2012,7. RWTH Aachen University, Jan. 2012. url: https://itsec.rwth-aachen.de/publications/ae_hacker_in_your_pocket.pdf.
[42] W. Enck, M. Ongtang, and P. McDaniel. “Understanding Android Security”. In: IEEE Security Privacy 7.1 (Jan. 2009), pp. 50–57. issn: 1540-7993. doi: 10.1109/MSP.2009.26.
[43] S. Fahl, M. Harbach, T. Muders, L. Baumgärtner, B. Freisleben, and M. Smith. “Why Eve and Mallory Love Android: An Analysis of Android SSL (in)Security”. In: Proceedings of the 2012 ACM Conference on Computer and Communications Security. ACM, 2012, pp. 50–61. isbn: 978-1-4503-1651-4. doi: 10.1145/2382196.2382205.
[44] S. Fahl, M. Harbach, H. Perl, M. Koetter, and M. Smith. “Rethinking SSL Development in an Appified World”. In: Proceedings of the 2013 ACM SIGSAC Conference on Computer & Communications Security. ACM, 2013, pp. 49–60. isbn: 978-1-4503-2477-9. doi: 10.1145/2508859.2516655.
[45] H. Falaki, R. Mahajan, S. Kandula, D. Lymberopoulos, R. Govindan, and D. Estrin. “Diversity in Smartphone Usage”. In: Proc. 8th International Conference on Mobile Systems, Applications, and Services. ACM, 2010, pp. 179–194. isbn: 978-1-60558-985-5. doi: 10.1145/1814433.1814453.
[46] P. Faruki, A. Bharmal, V. Laxmi, V. Ganmoor, M. S. Gaur, M. Conti, and M. Rajarajan. “Android Security: A Survey of Issues, Malware Penetration, and Defenses”. In: IEEE Communications Surveys Tutorials 17.2 (2015), pp. 998–1022. issn: 1553-877X. doi: 10.1109/COMST.2014.2386139.
[47] A. P. Felt, S. Egelman, M. Finifter, D. Akhawe, and D. A. Wagner. “How to Ask for Permission”. In: HotSec. 2012.
[48] A. P. Felt, E. Ha, S. Egelman, A. Haney, E. Chin, and D. Wagner. “Android Permissions: User Attention, Comprehension, and Behavior”. In: Proceedings of the Eighth Symposium on Usable Privacy and Security. ACM, 2012, 3:1–3:14. isbn: 978-1-4503-1532-6. doi: 10.1145/2335356.2335360.
[49] E. Fernandes, Q. A. Chen, J. Paupore, G. Essl, J. A. Halderman, Z. M. Mao, and A. Prakash. “Android UI Deception Revisited: Attacks and Defenses”. In: Financial Cryptography and Data Security. Springer, Berlin, Heidelberg, Feb. 2016, pp. 41–59. isbn: 978-3-662-54969-8. doi: 10.1007/978-3-662-54970-4_3.
[50] N. Fischer. Protecting WebView with Safe Browsing. Apr. 2018. url: https://android-developers.googleblog.com/2018/04/protecting-webview-with-safe-browsing.html.
[51] Y. Fratantonio, C. Qian, S. Chung, and W. Lee. “Cloak and Dagger: From Two Permissions to Complete Control of the UI Feedback Loop”. In: Proceedings of the IEEE Symposium on Security and Privacy (Oakland). May 2017.
[52] M. Georgiev, S. Iyengar, S. Jana, R. Anubhai, D. Boneh, and V. Shmatikov. “The most dangerous code in the world: validating SSL certificates in non-browser software”. In: ACM Conference on Computer and Communications Security. 2012, pp. 38–49.
[53] J. A. Halderman, S. D. Schoen, N. Heninger, W. Clarkson, W. Paul, J. A. Calandrino, A. J. Feldman, J. Appelbaum, and E. W. Felten. “Lest We Remember: Cold-boot Attacks on Encryption Keys”. In: Commun. ACM 52.5 (May 2009), pp. 91–98. issn: 0001-0782. doi: 10.1145/1506409.1506429.
[54] D. Hintze, R. D. Findling, M. Muaaz, S. Scholz, and R. Mayrhofer. “Diversity in Locked and Unlocked Mobile Device Usage”. In: Proceedings of the 2014 ACM International Joint Conference on Pervasive and Ubiquitous Computing: Adjunct Publication (UbiComp 2014). ACM Press, 2014, pp. 379–384. isbn: 978-1-4503-3047-3. doi: 10.1145/2638728.2641697.
[55] D. Hintze, R. D. Findling, S. Scholz, and R. Mayrhofer. “Mobile Device Usage Characteristics: The Effect of Context and Form Factor on Locked and Unlocked Usage”. In: Proc. MoMM 2014: 12th International Conference on Advances in Mobile Computing and Multimedia. ACM Press, Dec. 2014, pp. 105–114. isbn: 978-1-4503-3008-4. doi: 10.1145/2684103.2684156.
[56] D. Hintze, P. Hintze, R. D. Findling, and R. Mayrhofer. “A Large-Scale, Long-Term Analysis of Mobile Device Usage Characteristics”. In: Proc. ACM Interact. Mob. Wearable Ubiquitous Technol. 1.2 (June 2017), 13:1–13:21. issn: 2474-9567. doi: 10.1145/3090078.
[57] S. Höbarth and R. Mayrhofer. “A framework for on-device privilege escalation exploit execution on Android”. In: Proc. IWSSI/SPMU 2011: 3rd International Workshop on Security and Privacy in Spontaneous Interaction and Mobile Phone Use, colocated with Pervasive 2011. June 2011.
[58] Y. Jang, C. Song, S. P. Chung, T. Wang, and W. Lee. “A11Y Attacks: Exploiting Accessibility in Operating Systems”. In: Proceedings of the 2014 ACM SIGSAC Conference on Computer and Communications Security. ACM, 2014, pp. 103–115. isbn: 978-1-4503-2957-6. doi: 10.1145/2660267.2660295.
[59] A. Kharraz, W. Robertson, D. Balzarotti, L. Bilge, and E. Kirda. “Cutting the Gordian Knot: A Look Under the Hood of Ransomware Attacks”. In: Detection of Intrusions and Malware, and Vulnerability Assessment. Springer International Publishing, 2015, pp. 3–24. isbn: 978-3-319-20550-2.
[60] E. Kline and B. Schwartz. DNS over TLS support in Android P Developer Preview. Apr. 2018. url: https://android-developers.googleblog.com/2018/04/dns-over-tls-support-in-android-p.html.
[61] P. Kocher, D. Genkin, D. Gruss, W. Haas, M. Hamburg, M. Lipp, S. Mangard, T. Prescher, M. Schwarz, and Y. Yarom. “Spectre Attacks: Exploiting Speculative Execution”. In:
arXiv:1801.01203 [cs] (2018). url: http://arxiv.org/abs/1801.01203.
[62] N. Kralevich. The Art of Defense: How vulnerabilities help shape security features and mitigations in Android. BlackHat. 2016. url: https://www.blackhat.com/docs/us-16/materials/us-16-Kralevich-The-Art-Of-Defense-How-Vulnerabilities-Help-Shape-Security-Features-And-Mitigations-In-Android.pdf.
[63] J. Kraunelis, Y. Chen, Z. Ling, X. Fu, and W. Zhao. “On Malware Leveraging the Android Accessibility Framework”. In: Mobile and Ubiquitous Systems: Computing, Networking, and Services. Springer International Publishing, 2014, pp. 512–523. isbn: 978-3-319-11569-6.
[64] M. La Polla, F. Martinelli, and D. Sgandurra. “A Survey on Security for Mobile Devices”. In: IEEE Communications Surveys Tutorials 15 (Jan. 2013), pp. 446–471.
[65] B. Laurie, A. Langley, and E. Kasper. Certificate Transparency. 2013. url: https://www.rfc-editor.org/info/rfc6962 (visited on 2018-06-29).
[66] L. Li, A. Bartel, J. Klein, Y. L. Traon, S. Arzt, S. Rasthofer, E. Bodden, D. Octeau, and P. McDaniel. “I know what leaked in your pocket: uncovering privacy leaks on Android Apps with Static Taint Analysis”. In: arXiv:1404.7431 [cs] (Apr. 2014). url: http://arxiv.org/abs/1404.7431.
[67] L. Li, T. F. Bissyandé, M. Papadakis, S. Rasthofer, A. Bartel, D. Octeau, J. Klein, and L. Traon. “Static analysis of Android apps: A systematic literature review”. In: Information and Software Technology 88 (2017), pp. 67–95. issn: 0950-5849. doi: 10.1016/j.infsof.2017.04.001.
[68] M. Lindorfer, M. Neugschwandtner, L. Weichselbaum, Y. Fratantonio, V. v. d. Veen, and C. Platzer. “ANDRUBIS – 1,000,000 Apps Later: A View on Current Android Malware Behaviors”. In: 2014 Third International Workshop on Building Analysis Datasets and Gathering Experience Returns for Security (BADGERS). Sept. 2014, pp. 3–17. doi: 10.1109/BADGERS.2014.7.
[69] M. Lipp, M. Schwarz, D. Gruss, T. Prescher, W. Haas, S. Mangard, P. Kocher, D. Genkin, Y. Yarom, and M. Hamburg. “Meltdown”. In: arXiv:1801.01207 [cs] (2018). url: http://arxiv.org/abs/1801.01207.
[70] T. Lodderstedt, M. McGloin, and P. Hunt. OAuth 2.0 Threat Model and Security Considerations. Jan. 2013. url: https://www.rfc-editor.org/info/rfc6819.
[71] I. Lozano. Compiler-based security mitigations in Android P. June 2018. url: https://android-developers.googleblog.com/2018/06/compiler-based-security-mitigations-in.html.
[72] I. Malchev. Here comes Treble: A modular base for Android. May 2017. url: https://android-developers.googleblog.com/2017/05/here-comes-treble-modular-base-for.html.
[73] R. Mayrhofer. “An Architecture for Secure Mobile Devices”. In: Security and Communication Networks (2014). issn: 1939-0122. doi: 10.1002/sec.1028.
[74] R. Mayrhofer. “Insider Attack Resistance in the Android Ecosystem”. In: Enigma 2019. USENIX Association, Jan. 2019.
[75] R. Mayrhofer, S. Sigg, and V. Mohan. “Adversary Models for Mobile Device Authentication”. In: (). submitted for review.
[76] T. McDonnell, B. Ray, and M. Kim. “An Empirical Study of API Stability and Adoption in the Android Ecosystem”. In: 2013 IEEE International Conference on Software Maintenance. Sept. 2013, pp. 70–79. doi: 10.1109/ICSM.2013.18.
[77] I. Mohamed and D. Patel. “Android vs iOS Security: A Comparative Study”. In: 2015 12th International Conference on Information Technology - New Generations. Apr. 2015, pp. 725–730. doi: 10.1109/ITNG.2015.123.
[78] V. Mohan. Better Biometrics in Android P. June 2018. url: https://android-developers.googleblog.com/2018/06/better-biometrics-in-android-p.html.
[79] V. Nanda and R. Mayrhofer. Android Pie à la mode: Security & Privacy. Dec. 2018. url: https://android-developers.googleblog.com/2018/12/android-pie-la-mode-security-privacy.html.
[80] S. Pichai. Android has created more choice, not less. July 2018. url: https://blog.google/around-the-globe/google-europe/android-has-created-more-choice-not-less/.
[81] P. Riedl, R. Mayrhofer, A. Möller, M. Kranz, F. Lettner, C. Holzmann, and M. Koelle. “Only play in your comfort zone: interaction methods for improving security awareness on mobile devices”. In: Personal and Ubiquitous Computing (Mar. 2015), pp. 1–14. issn: 1617-4909. doi: 10.1007/s00779-015-0840-5.
[82] F. Roesner, T. Kohno, E. Moshchuk, B. Parno, H. J. Wang, and C. Cowan. “User-driven access control: Rethinking permission granting in modern operating systems”. In: Proceedings of the 2012 IEEE Symposium on Security and Privacy, ser. SP ’12. 2012, pp. 224–238. doi: 10.1109/SP.2012.24.
[83] R. S. Sandhu and P. Samarati. “Access control: principle and practice”. In: IEEE Communications Magazine 32.9 (Sept. 1994), pp. 40–48. issn: 0163-6804. doi: 10.1109/35.312842.
[84] N. Scaife, H. Carter, P. Traynor, and K. R. B. Butler. “CryptoLock (and Drop It): Stopping Ransomware Attacks on User Data”. In: 2016 IEEE 36th International Conference on Distributed Computing Systems (ICDCS). June 2016, pp. 303–312. doi: 10.1109/ICDCS.2016.46.
[85] A. Seshadri, M. Luk, N. Qu, and A. Perrig. “SecVisor: A Tiny Hypervisor to Provide Lifetime Kernel Code Integrity for Commodity OSes”. In: Proceedings of Twenty-first ACM SIGOPS Symposium on Operating Systems Principles. ACM, 2007, pp. 335–350. isbn: 978-1-59593-591-5. doi: 10.1145/1294261.1294294.
[86] H. Shacham, M. Page, B. Pfaff, E.-J. Goh, N. Modadugu, and D. Boneh. “On the Effectiveness of Address-space Randomization”. In: Proceedings of the 11th ACM Conference on Computer and Communications Security. ACM, 2004, pp. 298–307. isbn: 1-58113-961-6. doi: 10.1145/1030083.1030124.
[87] S. Smalley and R. Craig. “Security Enhanced (SE) Android: Bringing Flexible MAC to Android”. In: Proc. of NDSS 2013. Apr. 2013, p. 18.
[88] Stagefright Vulnerability Report. 2015. url: https://www.kb.cert.org/vuls/id/924951.
[89] SVE-2018-11599: Theft of arbitrary files leading to emails and email accounts takeover. May 2018. url: https://security.samsungmobile.com/securityUpdate.smsb (visited on 2018-06-27).
[90] SVE-2018-11633: Buffer Overflow in Trustlet. May 2018. url: https://security.samsungmobile.com/securityUpdate.smsb (visited on 2018-06-27).
[91] A. S. Tanenbaum and H. Bos. Modern Operating Systems. 4th. Prentice Hall Press, 2014. isbn: 013359162X, 9780133591620.
[92] A. Tang, S. Sethumadhavan, and S. Stolfo. “CLKSCREW: Exposing the Perils of Security-Oblivious Energy Management”. In: 26th USENIX Security Symposium (USENIX Security 17). USENIX Association, 2017, pp. 1057–1074. isbn: 978-1-931971-40-9. url: https://www.usenix.org/conference/usenixsecurity17/technical-sessions/presentation/tang.
[93] S. D. Tetali. Keeping 2 Billion Android devices safe with machine learning. May 2018. url: https://android-developers.googleblog.com/2018/05/keeping-2-billion-android-devices-safe.html.
[94] S. Tolvanen. Hardening the Kernel in Android Oreo. Aug. 2017. url: https://android-developers.googleblog.com/2017/08/hardening-kernel-in-android-oreo.html.
[95] J. Vander Stoep. Android: Protecting the Kernel. Linux Security Summit. 2016. url: https://events.static.linuxfound.org/sites/events/files/slides/Android-%20protecting%20the%20kernel.pdf.
[96] J. Vander Stoep. Ioctl Command Whitelisting in SELinux. Linux Security Summit. 2015. url: http://kernsec.org/files/lss2015/vanderstoep.pdf.
[97] J. Vander Stoep. Shut the HAL up. July 2017. url: https://android-developers.googleblog.com/2017/07/shut-hal-up.html.
[98] J. Vander Stoep and S. Tolvanen. Year in Review: Android Kernel Security. Linux Security Summit. 2018. url: https://events.linuxfoundation.org/wp-content/uploads/2017/11/LSS2018.pdf.
[99] V. van der Veen, Y. Fratantonio, M. Lindorfer, D. Gruss, C. Maurice, G. Vigna, H. Bos, K. Razavi, and C. Giuffrida. “Drammer: Deterministic Rowhammer Attacks on Mobile Platforms”. In: ACM Press, 2016, pp. 1675–1689. isbn: 978-1-4503-4139-4. doi: 10.1145/2976749.2978406.
[100] R. Watson. New approaches to operating system security extensibility. Tech. rep. UCAM-CL-TR-818. Cambridge University, Apr. 2012. url: http://www.cl.cam.ac.uk/techreports/UCAM-CL-TR-818.pdf.
[101] P. Wijesekera, A. Baokar, A. Hosseini, S. Egelman, D. Wagner, and K. Beznosov. “Android Permissions Remystified: A Field Study on Contextual Integrity”. In: 24th USENIX Security Symposium (USENIX Security 15). USENIX Association, 2015, pp. 499–514. isbn: 978-1-931971-232. url: https://www.usenix.org/conference/usenixsecurity15/technical-sessions/presentation/wijesekera.
[102] S. Willden. Insider Attack Resistance. May 2018. url: https://android-developers.googleblog.com/2018/05/insider-attack-resistance.html.
[103] Y. Zhang, M. Yang, B. Xu, Z. Yang, G. Gu, P. Ning, X. S. Wang, and B. Zang. “Vetting Undesirable Behaviors in Android Apps with Permission Use Analysis”. In: Proceedings of the 2013 ACM SIGSAC Conference on Computer & Communications Security. ACM, 2013, pp. 611–622. isbn: 978-1-4503-2477-9. doi: 10.1145/2508859.2516689.