
A Roadmap for Cybersecurity Research

November 2009
Executive Summary.................................................................................................................................................iii
Current Hard Problems in INFOSEC Research
1. Scalable Trustworthy Systems ...................................................................................................................1
2. Enterprise-Level Metrics (ELMs) ...........................................................................................................13
3. System Evaluation Life Cycle....................................................................................................................22
4. Combatting Insider Threats ......................................................................................................................29
5. Combatting Malware and Botnets ..........................................................................................................38
6. Global-Scale Identity Management ........................................................................................................50
7. Survivability of Time-Critical Systems ....................................................................................................57
8. Situational Understanding and Attack Attribution ..............................................................................65
9. Provenance .................................................................................................................................................76
10. Privacy-Aware Security ...........................................................................................................................83
11. Usable Security ..........................................................................................................................................90
Appendix A. Interdependencies among Topics ................................................................................................A1
Appendix B. Technology Transfer ..................................................................................................................... B1
Appendix C. List of Participants in the Roadmap Development..................................................................C1
Appendix D. Acronyms....................................................................................................................................... D1

Executive Summary

The United States is at a significant decision point. We must continue to defend our
current systems and networks and at the same time attempt to “get out in front” of
our adversaries and ensure that future generations of technology will position us to
better protect our critical infrastructures and respond to attacks from our adversaries.
The term “system” is used broadly to encompass systems of systems and networks.

This cybersecurity research roadmap is an attempt to begin to define a national R&D agenda that is required to enable us to get ahead of our adversaries and produce the technologies that will protect our information systems and networks into the future. The research, development, test, evaluation, and other life cycle considerations required are far reaching—from technologies that secure individuals and their information to technologies that will ensure that our critical infrastructures are much more resilient. The R&D investments recommended in this roadmap must tackle the vulnerabilities of today and envision those of the future.

The intent of this document is to provide detailed research and development agendas for the future relating to 11 hard problem areas in cybersecurity, for use by agencies of the U.S. Government and other potential R&D funding sources. The 11 hard problems are:

1. Scalable trustworthy systems (including system architectures and requisite development methodology)
2. Enterprise-level metrics (including measures of overall system trustworthiness)
3. System evaluation life cycle (including approaches for sufficient assurance)
4. Combatting insider threats
5. Combatting malware and botnets
6. Global-scale identity management
7. Survivability of time-critical systems
8. Situational understanding and attack attribution
9. Provenance (relating to information, systems, and hardware)
10. Privacy-aware security
11. Usable security

For each of these hard problems, the roadmap identifies critical needs, gaps in research, and a research agenda appropriate for the near, medium, and long term.

DHS S&T assembled a large team of subject matter experts who provided input
into the development of this research roadmap. The content was developed over
the course of 15 months that included three regional multi-day workshops, two
virtual workshops for each topic, and numerous editing activities by the participants.


Information technology has become pervasive in every way—from our phones and other small devices to our enterprise networks to the infrastructure that runs our economy. Improvements to the security of this information technology are essential for our future. As the critical infrastructures of the United States have become more and more dependent on public and private networks, the potential for widespread national impact resulting from disruption or failure of these networks has also increased. Securing the nation’s critical infrastructures requires protecting not only their physical systems but, just as important, the cyber portions of the systems on which they rely. The most significant cyber threats to the nation are fundamentally different from those posed by the “script kiddies” or virus writers who traditionally have plagued users of the Internet. Today, the Internet has a significant role in enabling the communications, monitoring, operations, and business systems underlying many of the nation’s critical infrastructures. Cyberattacks are increasing in frequency and impact. Adversaries seeking to disrupt the nation’s critical infrastructures are driven by different motives and view cyberspace as a possible means to have much greater impact, such as causing harm to people or widespread economic damage. Although to date no cyberattack has had a significant impact on our nation’s critical infrastructures, previous attacks have demonstrated that extensive vulnerabilities exist in information systems and networks, with the potential for serious damage. The effects of a successful attack might include serious economic consequences through impacts on major economic and industrial sectors, threats to infrastructure elements such as electric power, and disruptions that impede the response and communication capabilities of first responders in crisis situations.

The United States is at a significant decision point. We must continue to defend our
current systems and networks and at the same time attempt to “get out in front”
of our adversaries and ensure that future generations of technology will position
us to better protect our critical infrastructures and respond to attacks from our
adversaries. It is the opinion of those involved in creating this research roadmap that
government-funded research and development (R&D) must play an increasing role
to enable us to accomplish this goal of national and economic security. The research
topics in this roadmap, however, are relevant not only to the federal government
but also to the private sector and others who are interested in securing the future.

This cybersecurity research roadmap is an attempt to begin to define a national R&D agenda that is required to enable us to get ahead of our adversaries and produce the technologies that will protect our information systems and networks into the future. The research, development, test, evaluation, and other life cycle considerations required are far reaching—from technologies that secure individuals and their information to technologies that will ensure that our critical infrastructures are much more resilient. These investments must tackle the vulnerabilities of today and envision those of the future.

“The time is now near at hand...”
— George Washington, July 2, 1776

Historical background

The INFOSEC Research Council (IRC) is an informal organization of government program managers who sponsor information security research within the U.S. Government. Many organizations have representatives as regular members of the IRC: Central Intelligence Agency, Department of Defense (including the Air Force, Army, Defense Advanced Research Projects Agency, National Reconnaissance Office, National Security Agency, Navy, and Office of the Secretary of Defense), Department of Energy, Department of Homeland Security, Federal Aviation Administration, Intelligence Advanced Research Projects Activity, National Aeronautics and Space Administration, National Institutes of Health, National Institute of Standards and Technology, National Science Foundation, and the Technical Support Working Group. In addition, the IRC is regularly attended by partner organizations from Canada and the United Kingdom.

The IRC developed the original Hard Problem List (HPL), which was composed in 1997 and published in draft form in 1999. The HPL defines desirable research topics by identifying a set of key problems from the U.S. Government perspective and in the context of IRC member missions. Solutions to these problems would remove major barriers to effective information security (INFOSEC). The Hard Problem List was intended to help guide the research program planning of the IRC member organizations. It was also hoped that nonmember organizations and industrial partners would consider these problems in the development of their research programs. The original list has proven useful in guiding INFOSEC research, and policy makers and planners may find the document useful in evaluating the contributions of ongoing and proposed INFOSEC research programs. However, the significant evolution of technology and threats between 1999 and 2005 required an update to the list. Therefore, an updated version of the HPL was published in November 2005. This updated document included the following technical hard problems from the information security perspective:

1. Global-Scale Identity Management
2. Insider Threat
3. Availability of Time-Critical Systems
4. Building Scalable Secure Systems
5. Situational Understanding and Attack Attribution
6. Information Provenance
7. Security with Privacy
8. Enterprise-Level Security Metrics

These eight problems were selected as the hardest and most critical challenges that must be addressed by the INFOSEC research community if trustworthy systems envisioned by the U.S. Government are to be built. INFOSEC problems may be characterized as “hard” for several reasons. Some problems are hard because of the fundamental technical challenges of building secure systems, others because of the complexity of information technology (IT) system applications. Contributing to these problems are conflicting regulatory and policy goals, poor understanding of operational needs and user interfaces, rapid changes in technology, large heterogeneous environments (including mixes of legacy systems), and the presence of significant, asymmetric threats.

The area of cybersecurity and the associated research and development activities have been written about frequently over the past decade. In addition to both the original IRC HPL in 1999 and the revision in 2005, the following reports have discussed the need for investment in this critical area:

• Toward a Safer and More Secure Cyberspace
• Federal Plan for Cyber Security and Information Assurance Research and Development
• Cyber Security: A Crisis of Prioritization
• Hardening the Internet
• Information Security Governance: A Call to Action
• The National Strategy to Secure Cyberspace
• Cyber Security Research and Development Agenda

These reports can be found at http://

Current context

On January 8, 2008, the President issued National Security Presidential Directive 54/Homeland Security Presidential Directive 23, which formalized the Comprehensive National Cybersecurity Initiative (CNCI) and a series of continuous efforts designed to establish a frontline defense (reducing current vulnerabilities and preventing intrusions), defending against the full spectrum of threats by using intelligence

and strengthening supply chain security, and shaping the future environment by enhancing our research, development, and education, as well as investing in “leap-ahead” technologies.

The vision of the CNCI research community over the next 10 years is to “transform the cyber-infrastructure so that critical national interests are protected from catastrophic damage and our society can confidently adopt new technological advances.”

Two components of the CNCI deal with cybersecurity research and development—one focused on the coordination of federal R&D and the other on the development of leap-ahead technologies.

No single federal agency “owns” the issue of cybersecurity. In fact, the federal government does not uniquely own cybersecurity. It is a national and global challenge with far-reaching consequences that requires a cooperative, comprehensive effort across the public and private sectors. However, as it has done historically, U.S. Government R&D in key technologies working in close cooperation with private-sector partners can jump-start the necessary fundamental technical transformation.

The leap-ahead strategy aligns with the consensus of the nation’s networking and cybersecurity research communities that the only long-term solution to the vulnerabilities of today’s networking and information technologies is to ensure that future generations of these technologies are designed with security built in from the ground up. The leap-ahead strategy will help extend U.S. leadership at a time of growing influence in networking and IT systems, components, and standards among U.S. competitors. Federal agencies with mission-critical needs for increased cybersecurity, which includes information assurance as well as network and system security, can play a direct role in determining research priorities and assessing emerging technology prototypes. Moreover, through technology transfer efforts, the federal government can encourage rapid adoption of the results of leap-ahead research. Technology breakthroughs that can curb or break the resource-draining cycle of security patching will have a high likelihood of marketplace implementation.

As stated previously, this Cybersecurity Research Roadmap is an attempt to begin to address a national R&D agenda that is required to enable us to get ahead of our adversaries and produce the technologies that will protect our information systems and networks into the future. The topics contained in this roadmap and the research and development that would be accomplished if the roadmap were implemented are, in fact, leap-ahead in nature and address many of the topics that have been identified in the CNCI activities.

Document format

The intent of this document is to provide detailed research and development agendas for the future relating to 11 hard problem areas in cybersecurity, for use by agencies of the U.S. Government and anyone else that is funding or doing R&D. It is expected that each agency will find certain parts of the document resonant with its own needs and will proceed accordingly with some interagency coordination to ensure coverage of all the topics.

Each of the following topic areas is treated in detail in a subsequent section of its own, from Section 1 to Section 11.

1. Scalable trustworthy systems (including system architectures and requisite development methodology)
2. Enterprise-level metrics (including measures of overall system trustworthiness)
3. System evaluation life cycle (including approaches for sufficient assurance)
4. Combatting insider threats
5. Combatting malware and botnets
6. Global-scale identity management
7. Survivability of time-critical systems
8. Situational understanding and attack attribution
9. Provenance (relating to information, systems, and hardware)
10. Privacy-aware security
11. Usable security

Eight of these topics (1, 2, 4, 6, 7, 8, 9, 10) are adopted from the November 2005 IRC Hard Problem List [IRC2005] and are still of vital relevance. The other three topics (3, 5, 11) represent additional areas considered to be of particular importance for the future.

The order in which the 11 topics are presented reflects some structural similarities among subgroups of the topics and exhibits clearly some of their major interdependencies. The order proceeds roughly from overarching system concepts to more detailed issues—except

for the last topic—and has the following structure:

a. Topics 1–3 frame the overarching problems.
b. Topics 4–5 relate to specific major threats and needs.
c. Topics 6–10 relate to some of the “ilities” and to system concepts required for implementing the previous topics.

Topic 11, usable security, is different from the others in its cross-cutting nature. If taken seriously enough, it can influence the success of almost all the other topics. However, some sort of transcendent usability requirements need to be embedded pervasively in all the other topics.

Each of the 11 sections follows a similar format. To get a full picture of the problem, where we are, and where we need to go, we ask the following questions:

Background
• What is the problem being addressed?
• What are the potential threats?
• Who are the potential beneficiaries? What are their respective needs?
• What is the current state of the practice?
• What is the status of current research?

Future Directions
• On what categories can we subdivide the topics?
• What are the major research gaps?
• What are some exemplary problems for R&D on this topic?
• What are the challenges that must be addressed?
• What approaches might be desirable?
• What R&D is evolutionary and what is more basic, higher risk, game changing?
• Resources
• Measures of success
• What needs to be in place for test and evaluation?
• To what extent can we test real systems?

Following the 11 sections are three appendices:

Appendix A: Interdependencies among Topics
Appendix B: Technology Transfer
Appendix C: List of Participants in the Roadmap Development

[IRC2005] INFOSEC Research Council Hard Problem List, November 2005

[USAF-SAB07] United States Air Force Scientific Advisory Board, Report on Implications of Cyber Warfare. Volume 1:
Executive Summary and Annotated Brief; Volume 2: Final Report, August 2007. For Official Use Only.

Additional background documents (including the two most recent National Research Council study reports on cybersecurity) can be found online.


The content of this research roadmap was developed over the course of 15 months that included three workshops, two phone sessions for each topic, and numerous editing activities by the participants. Appendix C lists all the participants. The Cyber Security program of the Department of Homeland Security (DHS) Science and Technology (S&T) Directorate would like to express its appreciation for the considerable amount of time the participants dedicated to this effort.

DHS S&T would also like to acknowledge the support provided by the staff of SRI
International in Menlo Park, CA, and Washington, DC. SRI is under contract with
DHS S&T to provide technical, management, and subject matter expert support for
the DHS S&T Cyber Security program. Those involved in this effort include Gary
Bridges, Steve Dawson, Drew Dean, Jeremy Epstein, Pat Lincoln, Ulf Lindqvist,
Jenny McNeill, Peter Neumann, Robin Roy, Zach Tudor, and Alfonso Valdes.

Of particular note is the work of Jenny McNeill and Peter Neumann. Jenny has been responsible for the organization of each of the workshops and phone sessions and has worked with SRI staff members Klaus Krause, Roxanne Jones, and Ascencion Villanueva to produce the final document. Peter Neumann has been relentless in his efforts to ensure that this research roadmap represents the real needs of the community and has worked with roadmap participants and government sponsors to produce a high-quality product.

Current Hard Problems in INFOSEC Research
1. Scalable Trustworthy Systems

What is the problem being addressed?

Trustworthiness is a multidimensional measure of the extent to which a system is likely to satisfy each of multiple aspects of each stated requirement for some desired combination of system integrity, system availability and survivability, data confidentiality, guaranteed real-time performance, accountability, attribution, usability, and other critical needs. Precise definitions of what trustworthiness means for these requirements and well-defined measures against which trustworthiness can be evaluated are fundamental precursors to developing and operating trustworthy systems. These precursors cut across everything related to scalable trustworthy systems. If what must be depended on does not perform according to its expectations, then whatever must depend on it may itself not be trustworthy. A trusted system is one that must be assumed to satisfy its requirements—whether or not it is actually trustworthy; indeed, it is a system whose failure in any way may compromise those requirements. Unfortunately, today’s systems are typically not well suited for applications with critical trustworthiness requirements.
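The idea of trustworthiness as a multidimensional, per-requirement measure can be made concrete with a small sketch. The function name, dimensions, and threshold values below are invented for illustration and are not taken from the roadmap:

```python
def trust_shortfalls(required: dict[str, float], evaluated: dict[str, float]) -> dict[str, float]:
    """Compare evaluated assurance scores against stated requirements,
    one trustworthiness dimension at a time; any positive shortfall marks
    an aspect in which the system cannot yet be considered trustworthy."""
    return {
        dim: round(need - evaluated.get(dim, 0.0), 3)
        for dim, need in required.items()
        if evaluated.get(dim, 0.0) < need
    }

# Hypothetical requirement and evaluation profiles for one system.
required = {"integrity": 0.99, "availability": 0.999, "confidentiality": 0.95}
evaluated = {"integrity": 0.995, "availability": 0.990, "confidentiality": 0.97}
print(trust_shortfalls(required, evaluated))  # {'availability': 0.009}
```

Even a toy model like this makes the point above: a system can satisfy some stated requirements while falling short on others, so no single scalar "security level" is a well-defined measure of trustworthiness.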

Scalability is the ability to satisfy given requirements as systems, networks, and systems of systems expand in functionality, capacity, complexity, and scope of trustworthiness requirements (security, reliability, survivability, and improved real-time performance). Scalability must typically be addressed from the outset; experience shows that scalability usually cannot be retrofitted into systems for which it was not an original design goal. Scalable trustworthiness will be essential for many national- and world-scale systems, including those supporting critical infrastructures. Current methodologies for creating high-assurance systems do not scale to the size of today’s—let alone tomorrow’s—critical systems.

Composability is the ability to create systems and applications with predictably satisfactory behavior from components, subsystems, and other systems. To enhance scalability in complex distributed applications that must be trustworthy, high-assurance systems should be developed from a set of composable components and subsystems, each of which is itself suitably trustworthy, within a system architecture that inherently supports facile composability. Composition includes the ability to run software compatibly on different hardware, aided considerably by abstraction, operating systems, and suitable programming languages. However, we do not yet have a suitable set of trustworthy building blocks, composition methodologies, and analytic tools that would ensure that trustworthy systems could be developed as systems of other systems. In addition, requirements and evaluations should also compose accordingly. In the future, it will be vital that new systems can be incrementally added to a system of systems with some predictable confidence that the trustworthiness of the resulting systems of systems will not be weakened—or indeed that it may be strengthened.
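The requirement that trustworthiness evaluations compose predictably can be illustrated with a toy model. The weakest-link rule, the `Component` class, and the numeric assurance scale below are assumptions made for this sketch; real composition rules would be property-specific and far subtler:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Component:
    """A building block with an evaluated assurance level (higher is stronger)."""
    name: str
    assurance: int  # e.g., 1 = minimal scrutiny ... 7 = formally verified

def compose(*parts: Component) -> Component:
    """Pessimistic weakest-link composition: the composite is assumed to be
    no more trustworthy than its least-assured part, so adding one weak
    component silently caps the trustworthiness of the whole system."""
    weakest = min(parts, key=lambda p: p.assurance)
    return Component("+".join(p.name for p in parts), weakest.assurance)

system = compose(Component("kernel", 6), Component("driver", 2), Component("app", 4))
print(system.assurance)  # 2: the composite is capped by the low-assurance driver
```

This captures the concern in the text: without composition methodologies that preserve or strengthen guarantees, incrementally adding components tends to weaken a system of systems rather than strengthen it.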

Growing interconnectedness among existing systems results, in effect, in new composite systems at increasingly large scales. Existing hardware, operating system, networking, and application architectures do not adequately account for combined requirements for security, performance, and usability—confounding attempts to build trustworthy systems on them. As a result, today the security of a system of systems may be drastically less than that of most of its components.

In certain cases, it may be possible to build systems that are more trustworthy than some (or even most) of their components—for example, through constructive system design and meticulous attention to good software engineering practices. Techniques for building more trustworthy systems out of less trustworthy components have long been known and used in practice (e.g., summarized in [Neu2004], in the context of composability). For example, error-correcting codes can overcome unreliable communications and storage media, and encryption can be used to increase confidentiality and integrity despite insecure communication channels. These techniques are incomplete by themselves and generally ignore many security threats. They typically depend on the existence of some combination of trustworthy developers, trustworthy systems, trustworthy users, and trustworthy administrators, and their trustworthy embedding in those systems.

The primary focus of this topic area is scalability that preserves and enhances trustworthiness in real systems. The perceived order of importance for research and development in this topic area is as follows: (1) trustworthiness, (2) composability, and (3) scalability. Thus, the challenge addressed here is threefold: (a) to provide a sound basis for composability that can scale to the development of large and complex trustworthy systems; (b) to stimulate the development of the components, analysis tools, and testbeds required for that effort; and (c) to ensure that trustworthiness evaluations themselves can be composed.

What are the potential threats?

Threats to a system in operation include everything that can prevent critical applications from satisfying their intended requirements, including insider and outsider misuse, malware and other system subversions, software flaws, hardware malfunctions, human failures, physical damage, and environmental disruptions. Indeed, systems sometimes fail without any external provocation, as a result of design flaws, implementation bugs, misconfiguration, and system aging. Additional threats arise in the system acquisition and code distribution processes. Serious security problems have also resulted from discarded or stolen systems. For large-scale systems consisting of many independent installations (such as the Domain Name System, DNS), security updates must reach and be installed in all relevant components throughout the entire life cycle of the systems. This scope of updating has proven to be difficult to achieve.

Critical systems and their operating environments must be trustworthy despite a very wide range of adversities and adversaries. Historically, many system uses assumed the existence of a trustworthy computing base that would provide a suitable foundation for such computing. However, this assumption has not been justified. In the future, we must be able to develop scalable trustworthy systems effectively.

Who are the potential beneficiaries? What are their respective needs?

Large organizations in all sectors—for example, government, military, commercial, financial, and energy—suffer the consequences of using large-scale computing systems whose trustworthiness either is not assured or is potentially compromised because of costs that outweigh the perceived benefits. All stakeholders have requirements for confidentiality, integrity, and availability in their computing infrastructures, although the relative importance of these requirements varies by application. Achieving scalability and evolvability of systems without compromising trustworthiness is a major need. Typical customers include the following:

• Large-system developers (e.g., of operating systems, database management systems, national infrastructures such as the power grid)
• Application developers
• Microelectronics developers
• System integrators
• Large- and small-scale users
• Purveyors of potential exemplar applications for scalable trustworthiness
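The text above notes that techniques such as error-correcting codes can yield a more trustworthy whole from less trustworthy parts. A minimal sketch of the same idea is majority voting over independently unreliable channels; the channel model, error rate, and function names here are invented for illustration:

```python
import random
from collections import Counter

def unreliable_channel(bit: int, error_rate: float, rng: random.Random) -> int:
    """A single untrustworthy component: flips the bit with some probability."""
    return bit ^ 1 if rng.random() < error_rate else bit

def send_with_redundancy(bit: int, copies: int, error_rate: float,
                         rng: random.Random) -> int:
    """Send the bit over several independent unreliable channels and take a
    majority vote; the vote fails only if most channels fail at once."""
    votes = [unreliable_channel(bit, error_rate, rng) for _ in range(copies)]
    return Counter(votes).most_common(1)[0][0]

rng = random.Random(1)  # fixed seed so the experiment is repeatable
trials = 10_000
single = sum(unreliable_channel(1, 0.1, rng) != 1 for _ in range(trials))
voted = sum(send_with_redundancy(1, 5, 0.1, rng) != 1 for _ in range(trials))
print(single / trials)  # close to the raw 10% channel error rate
print(voted / trials)   # far lower: a majority of 5 channels must fail together
```

The caveat stated above applies equally to this sketch: redundancy of this kind addresses independent random component failure, not a deliberate adversary who can compromise all replicas in the same way.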


Several types of systems suggest the importance of being able to develop scalable trustworthy systems. Examples include the following:

• Air traffic control systems
• Power grids
• Worldwide funds transfer systems
• Cellphone networks

Such systems need to be robust and capable of satisfying the perceived trustworthiness requirements. Outages in these systems can be extremely costly and dangerous. However, the extent to which the underlying concepts used to build these existing systems can continue to scale and also be responsive to more exacting trustworthiness requirements is unknown—especially in the face of increasing cyberthreats. The R&D must provide convincing arguments that these systems will scale appropriately. Exemplars of potential component systems might include the following:

• Trustworthy handheld multipurpose devices and other end-user devices
• Trustworthy special-purpose servers
• Embedded control systems that can be composed and used effectively
• Trustworthy networks
• Navigation systems, such as the Global Positioning System (GPS)

One or more such systems should be chosen for deeper study to develop better understanding of the approaches to scalable security developed in this program. In turn, the results of ongoing work on scalable trustworthiness should be applied to those and other exemplars.

What is the current state of the practice?

Hardware developers have recently made significant investments in specification, formal methods, configuration control, modeling, and prediction, partly in response to recognized problems, such as the Intel floating point flaw, and partly as a result of increased demonstrations of the effectiveness of those techniques.

The foundation for trustworthy scalable systems is established by the underlying hardware architecture. Adequate hardware protections are essential, and nearly all extant hardware architectures lack needed capabilities. Examples include fine-grain memory protection, inaccessible program control state, unmodifiable executable codes, fully granular access protections, and virtually mapped bus access by I/O and other adapter boards.

Although it might be appealing to try to apply those approaches to software, the issues of scalability suggest that additional approaches may be necessary. Numerous software-related failures have occurred (e.g., see [Neu1995]). In addition, techniques are needed to address how software/hardware interactions affect the overall trust level. Unfortunately, there is no existing mandate for significant investment during software system development to ensure scalable trustworthiness. Consequently, such efforts are generally not adequately addressed.

Diagnostic tools to detect software flaws on today’s hardware architectures may be useful in the short run but are insufficient in the long run. Research is needed to establish the repertoire of architected hardware protections that are essential for system trustworthiness. It is unlikely that software alone can ever compensate fully for the lack of such hardware protections.

A possible implication is that the commercial off-the-shelf (COTS) systems in pervasive use today will never become sufficiently trustworthy. If that is indeed true, testing that implication should be identified as an activity and milestone in the recommended research agenda.

Convincing hardware manufacturers and software developers to provide and support needed hardware capabilities, of course, is a fundamental obstacle. The manufacturers’ main motivations are least change and time to market. Until compelling research findings, legal consequences (e.g., financial liability for customer damages), and economic forces (e.g., purchase policies mandating the needed capabilities) are brought to bear, it seems unlikely that the goals for securing COTS and open source products can be realized.

What is the status of current research?

Over the past decade, significant computer security investments have been made in attempts to create greater assurance for existing applications and computer-based enterprises that are based predominantly on COTS components. Despite some progress, there are severe limits to this approach, and success has been meager at best, particularly with respect to trustworthiness, composability, and scalability.


The assurance attainable by incremental improvements on COTS products is fundamentally inadequate for critical applications.

Various research projects over the past half-century have been aimed at the challenge of designing and evaluating scalable trustworthy systems and networks, with some important research contributions with respect to both hardware and software. Some of these date back to the 1960s and 1970s, such as Multics, PSOS (the Provably Secure Operating System) and its formally based Hierarchical Development Methodology (HDM), the Blacker system as an early example of a virtual private network, the CLInc (Computational Logic, Inc.) stack, Gypsy, InaJo, Euclid, ML and other functional programming languages, and the verifying compiler, to name just a few. However, very few systems available today have taken serious advantage of such potentially far-reaching research efforts, or even the rather minimal guidance of Security Level 4 in FIPS 140-1. Also, the valued but inadequately observed 1975 security principles of Saltzer and Schroeder have recently been updated by Saltzer and Kaashoek [Sal+2009].

Some more recent efforts can also be cited here. For example, architectures exist or are contemplated for robust hardware that would inherently increase system trustworthiness by avoiding common vulnerabilities, including modernized capability-based architectures. In addition, the Trusted Computing Exemplar Project at the Naval Postgraduate School is intended to provide a working example of how trustworthy computing systems can be designed and built. It will make all elements of the constructive security process openly available. Recent advances in cryptography can also help, although some composability issues remain to be resolved as to how to embed those advances securely into marginally secure computer systems. Also, public key infrastructures (PKIs) are becoming more widely used and embedded in applications. However, many gaps remain in reusable requirements for trustworthiness, system architectures, software engineering practices, sound programming languages that avoid many of the characteristic flaws, and analysis tools that scale up to entire systems. Thoroughly worked examples of trustworthy systems are needed that can clearly demonstrate that well-conceived composability can enhance both trustworthiness and scalability. For example, each of the exemplars noted above would benefit greatly from the incorporation of scalable trustworthy systems.

At present, even for small systems, there exist very few examples of requirements, trustworthiness metrics, and operational systems that encompass a broad spectrum of trustworthiness with any generality. Furthermore, such requirements, metrics, and systems need to be composable and scalable into trustworthy systems of systems. However, a few examples exist for dedicated special-purpose systems, such as data diodes enforcing one-way communication paths and the Naval Research Laboratory Pump enabling trustworthy reading of information at lower levels of multilevel security.

In recent years, research has advanced significantly in formal methods applicable to software trustworthiness. That research is generally more applicable to new systems than to retrofitting existing ones. However, it needs to focus on attributes and subsystems for which it can be most effective, and must deal with complexity, scalability, hardware and software, and practical issues such as device drivers and excessive root privileges.

FUTURE DIRECTIONS

On what categories can we subdivide this topic?

For present purposes, different approaches to development of trustworthy scalable systems are associated with the following three roadmap categories. These categories are distinguished from one another roughly based on the extent to which they are able to reuse existing components.

1. Improving trustworthiness in existing systems. This incremental approach could entail augmenting relatively untrustworthy systems with some trustworthy components and enforcing operational constraints in attempts to achieve either trustworthy functions or systems with more clearly understood trust properties. Can we make existing systems significantly more trustworthy without wholesale replacement?

2. Clean-slate approaches. This entails building trustworthy primitives, composing them into trustworthy functions, and then verifying the overall trust level of the composite system. How much


better would this be? Would this enable solutions of problems that cannot be adequately addressed today, and for what requirements? Under what circumstances and for what requirements might this be possible? What new technologies, system architectures, and tools might be needed?

3. Operating successfully for given requirements despite the presence of partially untrusted environments. For example, existing computing systems might be viewed as "enemy territory" because they have been subject to unknown influences within the commercial supply chain and the overall life cycle (design, implementation, operations, maintenance, and decommissioning).

It is inherently impossible to control every aspect of the entire life cycle and the surrounding operational environments. For example, end-to-end cryptography enables communications over untrustworthy media—but does not address denial-of-service attacks en route or insider subversion at the endpoints.

The three categories are not intended to be mutually exclusive. For example, hybrid approaches can combine legacy systems from category 1 with incremental changes and significant advances from category 2. Indeed, hybrids among these three categories are not merely possible but quite likely. For example, approaches that begin with a clean-slate architecture could also incorporate some improvements of existing systems, and even allow some operations to take place in untrusted environments—if suitably encapsulated, confined, or otherwise controlled. A clean-slate approach tolerating an ongoing level of continuous compromise in its system components might also be viewed as a hybrid of categories 2 and 3. Further R&D is clearly required to determine the trade-offs in cost-effectiveness, practicality, performance, usability, and relative trustworthiness attainable for any particular set of requirements. DARPA's IAMANET is a step in that direction.

An urgent need exists for R&D on incremental, clean-slate, and hybrid approaches. Trustworthiness issues may affect the development process and the resulting system performance. Adding functionality and concomitant complexity to achieve trustworthiness may be counterproductive, if not done constructively; it typically merely introduces new vulnerabilities. Trustworthiness must be designed in from the outset, with completely specified requirements. Functionality and trustworthiness are inherently in conflict in the design process, and this conflict must be resolved before any implementation.

What are the major research gaps?

Research relating to composability has addressed some of the fundamental problems and underlying theory. For example, see [Neu2004] for a recent consideration of past work, current practice, and R&D directions that might be useful in the future. It contains numerous references to papers and reports on composability. It also considers a variety of techniques for compositions of subsystems that can increase trustworthiness, as well as system and network architectures and system development practices that can yield greater trustworthiness. See also [Can2001], which represents the beginning of work on the notion of universal composability applied to cryptography.

However, there are gaps in our understanding of composability as it relates to security, and to trustworthiness more generally, primarily because we lack precise specifications of most of the important requirements and desired properties. For example, we are often good at developing specific solutions to specific security problems, but we do not understand how to apply and combine these specific solutions to produce trustworthy systems. We lack methods for analyzing how even small changes to systems affect their trustworthiness. More broadly, we lack a good understanding of how to develop and maintain trustworthy systems comprehensively throughout the entire life cycle. We lack methods and tools for decomposing high-level trustworthiness goals into specific design requirements, capturing and specifying security requirements, analyzing security requirements, mapping higher-layer requirements into lower-layer ones, and verifying system trustworthiness properties. We do not understand how to combine systems in ways that ensure that the combination is more, rather than less, secure and resilient than its weakest components. We lack a detailed case history of past successes and failures in the development of trustworthy systems that could help us to elucidate principles and properties of trustworthy systems, both in an overarching sense and in specific application areas. We lack development tools and languages that could enable separation of functionality and trustworthiness


concerns for developers. For small systems, ad hoc solutions seldom suffice if they do not reflect such fundamental understanding of the problems. For the large-scale, highly complex systems of the future, we cannot expect to achieve adequate trustworthiness without deeper understanding, better tools, and more reliable evaluation methods—as well as composable building blocks and well-documented, worked examples of less complex systems.

The research directions can be partitioned into near-term, medium-term, and long-term opportunities. In general, the near-term approaches fall into the incremental category, and the long-term approaches fall into the clean-slate and hybrid categories. However, the long-term approaches often have staged efforts that begin with near-term efforts. Also, the hybrid efforts tend to require longer-term schedules because some of them rely on near- and medium-term efforts.

Near term
- Development of prototype trustworthy systems in selected application and infrastructure domains
- Exploitation of cloud architectures and web-based applications
- Development of simulation environments for testing approaches to development of scalable trustworthy systems
- Intensive further research in composability
- Development of building blocks for composing trustworthy systems
- Well-defined composable specifications for requirements and components
- Realistic provable security properties for small-scale systems
- Urgent need for detailed worked examples
- Better understanding of the security properties of existing major components.

Medium term
- New hardware with well-understood trustworthiness
- Better operating systems and networking
- Better application architectures for trustworthy systems
- Isolation of legacy systems in trustworthy virtualization environments
- Continued research in composability, including techniques for verifying the security properties of composed systems in terms of their specifications
- Urgent need for detailed realistic and practical worked examples.

Long term
- Tools for verifying trustworthiness of composite systems
- Techniques and tools for developing and maintaining trustworthy systems throughout the life cycle
- More extensive detailed worked examples.

Several threads could run through this timeline—for example, R&D relating to trustworthy isolation, separation, and virtualization in hardware and software; composability of designs and implementations; analyses that could greatly simplify evaluation of trustworthiness before putting applications into operation; robust architectures that provide self-testing, self-diagnosis, self-reconfiguration, compromise resilience, and automated remediation; and architectures that break the current asymmetric advantage for attackers (offense is cheaper than defense, at present). The emphasis needs to be on realistic, practical approaches to developing systems that are scalable, composable, and trustworthy.

The gaps in practice and R&D, approaches, and potential benefits are summarized in Table 1.1. The research directions for scalable trustworthy systems are intended to address these gaps. Table 1.2 also provides a summary of this section.

This topic area interacts strongly with enterprise-level metrics (Section 2) and evaluation methodology (Section 3) to provide assurance of trustworthiness. In the absence of such metrics and suitable evaluation methodologies, security would be difficult to comprehend, and the cost-benefit trade-offs would be difficult to evaluate. In addition, all the other topic areas can benefit from scalable trustworthy systems, as discussed in Appendix A.
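The composability concern that runs through this section, namely that a composed system may be no more trustworthy than its weakest component, can be made concrete with a toy model. This is purely an illustrative sketch and not a method from the roadmap: the numeric "trust levels" and the minimum-based composition rule are assumptions chosen for exposition.

```python
# Toy model: a naive serial composition is only as trustworthy as its
# weakest component. The min() rule is an illustrative assumption.

def compose(components):
    """Trust level of a naive composition: the minimum over components."""
    return min(level for _name, level in components)

def add_component(system, component):
    """Adding functionality can REDUCE overall trustworthiness."""
    return system + [component]

base = [("kernel", 0.9), ("crypto", 0.95)]
extended = add_component(base, ("legacy_driver", 0.4))

print(compose(base))      # 0.9
print(compose(extended))  # 0.4 -- the legacy driver dominates
```

The point of the sketch is the direction of the effect: under a weakest-link rule, bolting a low-assurance component onto a high-assurance system lowers the whole system's level, which is why the text stresses compositions that are more, rather than less, trustworthy than their weakest components.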


TABLE 1.1: Summary of Gaps, Approaches, and Benefits

Requirements
- Gaps in practice: Nonexistent, inconsistent, incomplete, nonscalable requirements
- Gaps in R&D: Orange Book/Common Criteria have inherent limitations
- Approaches: Canonical, composable, scalable trustworthiness requirements
- Potential benefits: Relevant developments; simplified procurement process

System architectures
- Gaps in practice: Inflexibility; constraints of flawed legacy systems
- Gaps in R&D: Evolvable architectures and a scalable theory of composability are needed
- Approaches: Scalably composable components and trustworthy architectures
- Potential benefits: Long-term scalable evolvability maintaining trustworthy operation

Development methodologies and software engineering
- Gaps in practice: Unprincipled systems, unsafe languages, sloppy programming practices
- Gaps in R&D: Principles not experientially demonstrated; good programming language theory widely ignored
- Approaches: Built-in assured scalably composable trustworthiness
- Potential benefits: Fewer flaws and risks; simplified evaluations

Analytic tools
- Gaps in practice: Ad hoc, piecemeal tools with limited usefulness
- Gaps in R&D: Tools need sounder bases
- Approaches: Rigorously based composable tools
- Potential benefits: Eliminating many flaws

Whole-system evaluations
- Gaps in practice: Impossible today for large systems
- Gaps in R&D: Top-to-bottom, end-to-end analyses needed
- Approaches: Formal methods, hierarchical staged evaluations
- Potential benefits: Scalable incremental evaluations

Operational practices
- Gaps in practice: Enormous burdens on administrators
- Gaps in R&D: User and administrator usability are often ignored
- Approaches: Dynamic self-diagnosis and self-healing
- Potential benefits: Simplified, scalable operational management
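The "whole-system evaluations" entry above calls for formal methods and top-to-bottom analyses. The flavor of that kind of analysis can be conveyed with a toy exhaustive state-space check: every reachable state of a small system model is visited and tested against a safety invariant. The two-bit protocol and the invariant below are invented here purely for illustration; a real evaluation would analyze an actual system model at far larger scale.

```python
from collections import deque

# Toy system state: (authenticated, privileged). The transitions are
# invented for illustration only.
def transitions(state):
    authed, priv = state
    succ = [(True, priv)]          # login
    if authed:
        succ.append((True, True))  # privilege grant requires authentication
    succ.append((False, False))    # logout drops privilege
    return succ

def check_invariant(start, invariant):
    """Exhaustively explore all reachable states, checking the invariant."""
    seen, queue = {start}, deque([start])
    while queue:
        s = queue.popleft()
        if not invariant(s):
            return False, s        # counterexample state found
        for t in transitions(s):
            if t not in seen:
                seen.add(t)
                queue.append(t)
    return True, None

# Safety invariant: privilege implies authentication.
ok, bad = check_invariant((False, False), lambda s: s[0] or not s[1])
print(ok)  # True
```

Such exhaustive exploration is tractable only for tiny models; the scalability gap the table points to is exactly the distance between sketches like this and the hierarchical, staged evaluations needed for real systems.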

What are the challenges that must be addressed?

The absence of sound systemwide architectures designed for trustworthiness and the relatively large costs of full verification and validation (V&V) have kept any secure computing base from economically providing the requisite assurance and functionality. (The sole exception is provided by "high-consequence" government applications, in which cost is a secondary concern to national security.) This situation is exacerbated by the scale and complexity often needed to provide required functionality. In addition, the length of the evaluation process can exceed the time available for patches and system upgrades, and has retarded the incorporation of high-assurance information technology. Time-consuming evaluations of trustworthy systems today create long delays when compared with conventional system developments with weaker evaluations. Consequently, development of trustworthy systems can be expected to take longer than is typically planned for COTS systems. In addition, the performance of trustworthy systems typically lags the performance of COTS systems with comparable functions.

One of the most pressing challenges involves designing system architectures that minimize how much of the system must be trustworthy—i.e., minimizing the size and extent of the trusted computing base (TCB). In contrast, for a poorly designed system, any failure could compromise the trustworthiness of the entire system. Designing complex secure systems from the ground up is an exceptionally hard problem, particularly since large systems may have catastrophic flaws in their design and implementation that are not discovered until late in development, or even after deployment. Catastrophic software flaws may occur even in just a few lines of mission-critical code, and are almost inevitable in the tens of millions of lines of code in today's systems. Given the relatively minuscule size of programs and systems that have been extensively verified and the huge size of modern systems and applications, scaling up formal approaches to production and verification of bug-free systems seems like a Herculean task. Yet,


TABLE 1.2: Scalable Trustworthy Systems Overview

Vision: Make the development of trustworthy systems of systems (TSoS) practical; ensure that even very large and complex systems can be built with predictable scalability and demonstrable trustworthiness, using well-understood composable architectures and well-designed, soundly developed, assuredly trustworthy components.

Challenges: Most of today's systems are built out of untrustworthy legacy systems using inadequate architectures, development practices, and tools. We lack appropriate theory, metrics of trustworthiness and scalability, sound composable architectures, synthesis and analysis tools, and trustworthy building blocks.

Goals: Sound foundations and supporting tools that can relate mechanisms to policies, attacks to mechanisms, and systems to requirements, enabling facile development of composable TSoS systematically enhancing trustworthiness (i.e., making them more trustworthy than their weakest components); documented TSoS developments, from specifications to prototypes to deployed systems.

Incremental Systems
- Near-term milestones: Sound analytic tools; secure bootloading; trusted platforms
- Medium-term milestones: Systematic use of tools; more tool development
- Long-term milestones: Extensively evaluated systems

Clean-Slate Systems
- Near-term milestones: Alternative architectures; well-specified requirements; sound kernels/VMMs
- Medium-term milestones: Provably sound prototypes; proven architectures
- Long-term milestones: Top-to-bottom formal evaluations

Hybrid Systems
- Near-term milestones: Mix-and-match systems; integration tools; evaluation strategies
- Medium-term milestones: Use in infrastructures; integration experiments
- Long-term milestones: Seamless integration of COTS/open-source

Test/evaluation: Identify measures of trustworthiness, composability, and scalability, and apply them to real systems.

Tech transfer: Publish composition methodologies for developing TSoS with mix-and-match components. Release open-source tools for creating, configuring, and maintaining TSoS. Release open-source composable, trustworthy components. Publish successful, well-documented TSoS developments. Develop profitable business models for public-private TSoS development partnerships for critical applications, and pursue them in selected areas.
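Table 1.2 lists secure bootloading as a near-term milestone for incremental systems. A minimal sketch of the underlying chain-of-trust idea follows: each boot stage verifies the integrity of the next stage's code before transferring control. The stage names and code bytes are invented, and plain SHA-256 digests stand in for the digital signatures a real bootloader would verify.

```python
import hashlib

# Illustrative boot stages. In a real chain of trust, the expected values
# would be signatures anchored in a hardware root of trust, not bare hashes.
stage2_code = b"load kernel"
stage3_code = b"start services"

chain = [
    ("stage2", stage2_code, hashlib.sha256(stage2_code).hexdigest()),
    ("stage3", stage3_code, hashlib.sha256(stage3_code).hexdigest()),
]

def boot(chain):
    """Run stages only while every measured digest matches its expected value."""
    for name, code, expected in chain:
        if hashlib.sha256(code).hexdigest() != expected:
            return f"halt: integrity failure in {name}"
    return "booted"

print(boot(chain))  # booted

# Tampering with any stage breaks the chain before control is transferred.
tampered = [("stage2", b"load rootkit", chain[0][2])] + chain[1:]
print(boot(tampered))  # halt: integrity failure in stage2
```

This mirrors criterion (a) in the discussion that follows: the integrity of each executable module is checked before it runs. Criterion (b), execute-only memory protection, requires hardware support and is not modeled here.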

formally inspired approaches may be more promising than any of the less formal approaches attempted to date. In addition, considerable progress is being made in analyzing system behavior across multiple layers of abstraction. On the other hand, designing complex trustworthy systems and "compromise-resilient" systems on top of insecure components is almost certainly an even harder problem.

As one example, securing the bootload process would be very valuable, but the underlying general principle is that every module of executable software within a system should be backed by a chain of trust, assuring (a) that the integrity of the executable code has not been compromised and (b) that the code resides in memory in a manner that it can be neither read nor altered, but only executed. Firmware residing in ROM, when ROM updating is cryptographically protected for integrity, meets these criteria. Software that is cryptographically protected for integrity, validated


when loaded, and protected by hardware so it can only be executed, also meets these criteria.

One of the most relevant challenges for this topic area is how to achieve highly principled system development processes, based on detailed and farsighted requirements and sound architectures that can be composed out of demonstrably trustworthy components and subsystems, and subjected to rigorous software, hardware, and system engineering disciplines for their implementation. The tools currently being used do not even ensure that a composed system is at least as trustworthy as its components.

Measuring confidentiality and integrity flaws in trustworthy system construction requires the ability to identify and measure the channels through which information can leak out of a system. Covert channels have been well studied in the constrained, older, local sense of the term. In an increasingly connected world of cross-domain traffic, distributed covert channels become increasingly available. For more distributed forms of covert channels or other out-of-band signaling channels, we lack the science, mathematics, fundamental theory, tools for risk assessment, and the ability to seal off such adverse channels.

Legacy constraints on COTS software, lack of networking support, and serious interoperability constraints have retarded progress. Meaningful security has not been seen as a competitive advantage in the mainstream. Even if trustworthiness were seen in that light, there are no accepted methodologies for design, implementation, operation, and evaluation that adequately characterize the trade-offs among trustworthiness, functionality, cost, and so on.

What approaches might be desirable?

Currently, searching for flaws in microprocessor design makes effective use of formal verification tools to evaluate a chip's logic design, in addition to other forms of testing and simulation. This technology is now becoming very cost-effective. However, it is not likely to scale up by itself to the evaluation of entire hardware/software systems, including their applications. Also, it is unclear whether existing hardware verification tools are robust against nation-state types of adversaries. Formal verification and other analytic tools that can scale will be critical to building systems with significantly higher assurance than today's systems. Better tools are needed for incorporating assurance in the development process and for automating formal verification. These tools may provide the functionality to build a secure computing base to meet many of users' needs for assurance and functionality. They should be available for pervasive use in military systems, as well as to commercial providers of process control systems, real-time operating systems, and application environments. Tools that can scale up to entire systems (such as national-scale infrastructures) will require rethinking how we design, build, analyze, operate, and maintain systems; addressing requirements; system architectures; software engineering; programming and specification languages; and corresponding analysis techniques. System design and analysis, of course, must also anticipate desired operational practice and human usability. It must also encompass the entire system life cycle and consider both environmental adversaries and other adverse influences.

Recent years have seen considerable progress in model checking and theorem proving. In particular, significant progress has been made in the past decade on static and dynamic analysis of source code. This progress needs to be extended, with particular emphasis on realistic scalability that would be applicable to large-scale systems and their applications.

Verification of a poorly built system after the fact has never been accomplished, and is never likely to work. However, because we cannot afford to scrap our existing systems, we must seek an evolutionary strategy that composes new systems out of combinations of old and new subsystems, while minimizing the risks from the old systems. A first step might involve a more formal understanding of the security limitations and deficiencies of important existing components, which would at least allow us to know the risks being taken by using such components in trustworthy composable systems. The ultimate goal is to replace old systems gradually and piecewise over time, to increase trustworthiness for progressively more complex systems.

Verification is expensive. Most COTS systems are built around functionality rather than trustworthiness, and


are optimized on cost of development and time to deployment—generally to the detriment of trustworthiness, and often resulting in undetected vulnerabilities. An alternative approach is to start from a specification and check the soundness of the system as it is being built. The success of such an approach would depend on new languages, environments that enable piecewise formal verification, and more scalable proof-generation technology that requires less user input for proof-carrying code. A computer-automated secure software engineering environment could greatly facilitate the construction of secure systems. Better yet, it should encompass hardware and total system trustworthiness as well.

Another critical element is the creation of comprehensible models of logic and behavior, with comprehensible interfaces, so that developers can maintain an understanding of systems even as they increase in size and scale. Such models and interfaces should help developers avoid situations where catastrophic bugs lurk in the complexity of incomprehensible systems or in the complexity of the interactions among systems. Creation of a language for effectively specifying a policy involving many components is a hard problem. Problems that emerge from interactions between components underscore the need for verifying behavior not only in the lab, but in the field as well.

Finally, efficiently creating provably trustworthy systems will require creation of secure but flexible components, and theories and tools for combining them. Without a secure computing foundation, developers will forever remain stuck in the intractable position of starting from scratch each time. This foundation must include verified and validated hardware, software, compilers, and libraries, with easily composable models that include responses to environmental stimuli, misconfigurations and other human errors, and adversarial influences, as well as means of verifying compositions of those components.

What R&D is evolutionary and what is more basic, higher risk, game changing?

Evolutionary R&D might include incremental improvements of large-scale systems for certain critical national infrastructures and specific application domains, such as DNS and DNSSEC, routing and securing the Border Gateway Protocol (BGP), virtualization and hypervisors, network file systems and other dedicated servers, exploitation of multicore architectures, and web environments (e.g., browsers, web servers, and application servers such as WebSphere and WebLogic). However, approaches such as hardening particularly vulnerable components or starkly subsetting functionality are inherently limited, and belief in their effectiveness is full of risks. Goals of this line of R&D include identifying needs, principles, methodologies, tools, and reusable building blocks for scalable trustworthy systems development.

More basic, higher-risk, game-changing R&D broadly includes various topics under the umbrella of composability, because it is believed that only effective composability for trustworthiness can achieve true scalability (just as composability of function enables scalability of system development today). Fundamental research in writing security specifications that are precise enough to be verified, strict enough to be trusted, and flexible enough to be implemented will be crucial to major advances in this area.

Resources

As noted above, this topic is absolutely fundamental to the other topics. The costs of not being able to develop scalable trustworthy systems have already proven to be enormous and will continue to escalate. Unfortunately, the costs of developing high-assurance systems in the past have been considerable. Thus, we must reduce those costs without compromising the effectiveness of the development and evaluation processes and the trustworthiness of the resulting systems. Although it is difficult to assess the costs of developing trustworthy systems in the absence of soundly conceived building blocks, we are concerned here with the costs of the research and prototype developments that would demonstrate the efficacy and scalability of the desired approaches. This may seem to be a rather open-ended challenge. However, incisive approaches that can increase composability, scalability, and trustworthiness are urgently needed, and even relatively small steps forward can have significant benefits.

To this end, many resources will be essential. The most precious resource is undoubtedly the diverse collection of people who could contribute. Also vital are suitable languages for requirements, specification, programming, and so on,


along with suitable development tools. computer automated secure software could proceed for any systems in the
In particular, theories are needed to engineering environment (including its context of the exemplars noted above,
support analytic tools that can facili- generalization to hardware and systems) initially with respect to prototypes and
tate the prediction of trustworthiness, should be measured in the reduction of potentially scaling upward to enterprises.
inclusion modeling, simulation, and person-hours required to construct and
formal methods. verify systems of comparable assurance To what extent can we test
Measures of success

Overall, the most important measure of success would be the demonstrable avoidance of the characteristic system failures that have been so common in the past (e.g., see [Neu1995]), just a few of which are noted earlier in this section.

Properties that are important to the designers of systems should be measured in terms of the scale of systems that can be shown to have achieved a specified level of trustworthiness. As noted at the beginning of this section, trustworthiness typically encompasses requirements for security, reliability, survivability, and many other system properties. Each system will need to have its own set of metrics for evaluation of trustworthiness, composability, and scalability. Those metrics should mirror generic requirements, as well as any requirements that are specific to the intended applications. The effectiveness of any such approach should be evaluated in terms of the achieved trustworthiness levels and security. The reuse and size of components being reused should be measured, since the most commonly used components in mission-critical systems should be verified components. Evaluation methodologies need to be developed to systematically exploit the metrics. The measures of success for scalable trustworthy systems also themselves need to be composable into enterprise-level measures of success, along with the measures contained in the sections on the other topic areas that follow.

What needs to be in place for test and evaluation?

Significant improvements are necessary in system architectures, development methodologies, evaluation methodologies, composable subsystems, scalability, and carefully documented, successful worked examples of scalable prototypes. Production of a reasonable number of examples will typically require test and evaluation efforts that will not all succeed.

To what extent can we test real systems?

In general, it may be more cost-effective to carry out R&D on components, composability, and scalability in trustworthy environments at the subsystem level than in general system environments. However, composition still requires test and evaluation of the entire system. It is clearly undesirable to experiment with critical systems such as power grids, although owners of these systems have realistic but limited-scale test environments. There is considerable need for better analytic tools and testbeds that closely represent reality. Furthermore, if applicable principles, techniques, and system architectures can be demonstrated for less critical systems, successful system developments would give insights and inspiration that would be applicable to the more critical systems without having to test them initially in more difficult environments.
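The requirement noted above, that measures of success for scalable trustworthy systems be composable into enterprise-level measures of success, can be illustrated with a toy aggregation rule. This is a minimal sketch under invented assumptions (a "weakest link" rule, made-up subsystem names, scores, and criticality weights); it is not a method proposed by this roadmap.

```python
# Illustrative "weakest link" composition of subsystem trustworthiness
# scores (0 = worst, 1 = best) into one enterprise-level score.
# Subsystem names, scores, and criticality weights are invented.

def enterprise_score(subsystems: dict[str, tuple[float, float]]) -> float:
    """subsystems maps name -> (score, criticality in [0, 1]).
    A subsystem drags the whole down only in proportion to its
    criticality: effective = 1 - criticality * (1 - score);
    the minimum effective value wins."""
    effective = [1.0 - crit * (1.0 - score)
                 for score, crit in subsystems.values()]
    return min(effective) if effective else 0.0

posture = {
    "control_network": (0.9, 1.0),  # fully mission-critical
    "billing_system":  (0.6, 1.0),
    "test_lab":        (0.2, 0.3),  # weak, but low criticality
}
```

Under this rule the fully critical billing system (0.6) dominates, even though the test lab has the worst raw score; that is exactly the kind of behavior a real composition model would need to justify or refute.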

[Can2001] Ran Canetti. Universally composable security: A new paradigm for cryptographic protocols, 2005. An extended version of the paper from the 42nd Symposium on Foundations of Computer Science (FOCS'01) began a series of papers applying the notion of universal composability to cryptography. Much can be learned from this work regarding the more general problems of system composability.

[Neu1995] Peter G. Neumann. Computer-Related Risks. Addison-Wesley/ACM Press, New York, 1995. See also an annotated index to online sources for the incidents noted here, as well as many more recent cases.

[Neu2004] Peter G. Neumann. Principled assuredly trustworthy composable architectures. DARPA-CHATS Final Report, SRI International, Menlo Park, California, December 2004. This report characterizes many of the obstacles that must be overcome in achieving composability with predictable results.

[Sal+2009] J.H. Saltzer and F. Kaashoek. Principles of Computer System Design: An Introduction. Morgan Kaufmann, 2009. (Chapters 1-6 are in print; Chapters 7-11 are available online.)


Current Hard Problems in INFOSEC Research
2. Enterprise-Level Metrics (ELMs)


What is the problem being addressed?

Defining effective metrics for information security (and for trustworthiness more
generally) has proven very difficult, even though there is general agreement that such
metrics could allow measurement of progress in security measures and at least rough
comparisons between systems for security. Metrics underlie and quantify progress
in all other roadmap topic areas. We cannot manage what we cannot measure, as
the saying goes. However, general community agreement on meaningful metrics
has been hard to achieve, partly because of the rapid evolution of information
technology (IT), as well as the shifting locus of adversarial action.

Along with the systems- and component-level metrics that are discussed elsewhere
in this document and the technology-specific metrics that are continuing to emerge
with new technologies year after year, it is essential to have a macro-level view of
security within an organization. A successful research program in metrics should
define a security-relevant science of measurement. The goals should be to develop
metrics to allow us to answer questions such as the following:

• How secure is my organization?
• Has our security posture improved over the last year?
• To what degree has security improved in response to changing threats and technology?
• How do we compare with our peers with respect to security?
• How secure is this product or software that we are purchasing or deploying?
• How does that product or software fit into the existing systems and networks?
• What is the marginal change in our security (for better or for worse), given the use of a new tool or practice?
• How should we invest our resources to maximize security and minimize cost?
• What combination of requirement specification, up-front architecture, formal modeling, detailed analysis, tool building, code reviews, programmer training, and so on, would be most effective for a given situation?
• How much security is enough, given the current and projected threats?
• How robust are our systems against cyber threats, misconfiguration, environmental effects, and other problems? This question is especially important for critical infrastructures, national security, and many other large-scale computer-related applications.
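Several of these questions (the marginal change in security, where to invest) are often approached with a return-on-security-investment style calculation. The sketch below is illustrative only: the loss figures are invented, and, as this section argues, trustworthy inputs for such formulas are precisely what good metrics are still missing.

```python
# Illustrative return-on-security-investment (ROSI) sketch.
# ALE = annualized loss expectancy; all dollar figures are invented.

def rosi(ale_before: float, ale_after: float, annual_cost: float) -> float:
    """Return the ROSI ratio: (risk reduction - cost) / cost."""
    if annual_cost <= 0:
        raise ValueError("annual_cost must be positive")
    return ((ale_before - ale_after) - annual_cost) / annual_cost

# A control that cuts expected annual losses from $500k to $200k
# and costs $100k/year:
example = rosi(500_000, 200_000, 100_000)
```

The formula is only as good as the ALE estimates fed into it, which is why the absence of meaningful enterprise-level metrics makes such cost-benefit analyses unreliable.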

Enterprise-level metrics (ELMs) address the security posture of an organization and complement the component-level metrics examined elsewhere in the roadmap topics. “Enterprise” is a term that encompasses a wide range. It could in principle apply to the Internet as a whole, but realistically it is intended here to scale in scope from a large corporation or department of the federal government down to the small office/home office (SOHO). For our purposes, an enterprise has a centralized decision-making authority to ensure the use of ELMs to rationally select among alternatives to improve the security posture of that enterprise. ELMs can support decisions such as whether adoption of one technology or another might improve enterprise security. ELMs also provide the basis for accurate situational awareness of the enterprise’s security posture.

In this discussion, we define metrics relevant to systems and networking within an enterprise, and consider composing host-level and other lower-layer measurements up to an enterprise level. In other words, the goal of ELMs is to understand the security of a large-scale system—enterprise security as a whole—and to use these measurements to guide rational investments in security. If these ELM goals are met, then extensions to other related cases, such as Internet service providers (ISPs) and their customers, should be feasible.

Security itself is typically poorly defined in real systems, or is merely implicit. One view might be to define it as the probability that a system under attack will meet its specified objectives for a specified period of time in a specified environment. Note that this definition incorporates a specification of system objectives and a specification of the system environment, which would include some notion of a threat model. Although this type of probability metric has been computed for system reliability and for certain system risk assessments, the potential accuracy of such assessments with respect to security seems to be extremely questionable, given the rapidly changing threat environment for IT systems. For example, a presumed high probability of meeting security objectives essentially goes to zero at the instant security exploits are announced and immediately perpetrated.

Security metrics are difficult to develop because they typically try to measure the absence of something negative (e.g., lack of any unknown vulnerabilities in systems, and lack of adversary capabilities to exploit both known and unknown vulnerabilities). This task is difficult because there are always unknowns in the system, and the landscape is dynamic and adversarial. We need better definitions of the environment and attacker models to guide risk-based determination. These are difficult areas, but progress is achievable.

The following definition from NIST may provide useful insights.

“IT security metrics provide a practical approach to measuring information security. Evaluating security at the system level, IT security metrics are tools that facilitate decision making and accountability through collection, analysis, and reporting of relevant performance data. Based on IT security performance goals and objectives, IT security metrics are quantifiable, feasible to measure, and repeatable. They provide relevant trends over time and are useful in tracking performance and directing resources to initiate performance improvement actions.” [bulletns/bltnaug03.htm]

Most organizations view the answers to the questions listed above in the short term from a financial mind-set and attempt to make cost-benefit trade-off analyses. However, in the absence of good metrics, it is unclear whether those analyses are addressing the right problems. Decisions resulting from such analyses will frequently be detrimental to making significant security improvements in the long term and thus eventually require costly new developments.

What are the potential threats?

Lack of effective ELMs leaves one in the dark about cyberthreats in general. With respect to enterprises as a whole, cybersecurity has been without meaningful measurements and metrics throughout the history of information technology. (Some success has been achieved with specific attributes at the component level.) This lack seriously impedes the ability to make enterprise-wide informed decisions on how to effectively avoid or control innumerable known and unknown threats and risks at every stage of development and operation.

Who are the potential beneficiaries? What are their respective needs?

In short, everyone who is affected by an

automated IT system has the potential to benefit from better security metrics, especially at the enterprise level. Sponsors of security R&D require such metrics to measure progress. With such metrics, decision makers, acquisition managers, and investors in security technology could make a better business case for such technology, and guide intelligent investment in it. This demand would of course guide the market for development of measurably more secure systems. Metrics can be applied not just to technology, but to practices as well, and can provide management with an incentive structure oriented toward security performance improvement. Robust metrics would enhance the certification and accreditation process, moving toward quantitative rather than qualitative processes. Metrics also can be used to assess the relative security implications of alternative security measures, practices, or policies.

Administrators require metrics to guide the development of optimal network configurations that explicitly consider security, usability, cost, and performance. There seems to be a potential market in insurance and underwriting for predicting and reducing damages caused by cyber attacks, which might be enhanced by the existence of meaningful metrics. However, that market is perhaps undercut not by the lack of suitable metrics, but more by the prevalence of insecure systems and their exploitations and by a historical lack of consistent actuarial data.

Metrics defined relative to a mission threat model are necessary to understand the components of risk, to make risk calculations, and to improve decision making in response to perceived risk. A risk model must incorporate threat information, the value of the enterprise information being protected, potential consequences of system failure, operational practices, and technology. More specifically, risk assessment needs a threat model (encompassing intent and capabilities), a model of actual protective measures, a model of the probability that the adversary will defeat those protective measures, and identification of the consequences of concern or adversary goals. These consequences of concern are typically specific to each enterprise, although many commonalities exist. For critical infrastructures, loss of system availability may be the key concern. For commercial enterprises, loss of proprietary information may be a greater concern than short-term economic losses caused by system outages. Potential beneficiaries, challenges, and needs are summarized in Table 2.1.

What is the current state of the practice?

At present, the practice of measuring security is very ad hoc. Many of the processes for measurement and metric selection are mostly or completely subjective or procedural, as in evaluation of compliance with Sarbanes-Oxley, HIPAA, and so on. New approaches are introduced continually as the old approaches prove to be ineffective. There are measurements such as size and scope of botnets, number of infections in a set of networks, number of break-ins, antivirus detection rates over time, and numbers of warrants served, criminal convictions obtained, and national security letters issued (enforcement). These are not related to fundamental characteristics of systems, but are more about what can be measured about adversaries. Examples include websites that attempt to categorize the current state of the Internet’s health, the current state of virus infections worldwide, or the number and sizes of botnets currently active.

TABLE 2.1: Beneficiaries, Challenges, and Needs

Beneficiaries      Challenges                                   Needs

Developers         Establishing meaningful ELMs                 Specification languages, analysis tools
                   (comprehensive, feasibly implementable,      for feasibility, hierarchical evaluation,
                   realistic)                                   and incremental change

System procurers   Insisting on the use of meaningful ELMs      Certified evaluations

User communities   Having access to the evaluations of          Detailed evaluations spanning all
                   meaningful ELMs                              relevant aspects of trustworthiness
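The risk-model ingredients described above (a threat model, protective measures, the probability that those measures are defeated, and consequences of concern) are often combined multiplicatively in rough risk calculations. The sketch below is purely illustrative; the factor names, the [0, 1] scales, and the numbers are invented, not drawn from this roadmap.

```python
# Illustrative multiplicative risk score built from the usual
# risk-model ingredients. All factor names and values are assumptions.

def risk_score(threat_likelihood: float,
               p_defeat_protections: float,
               consequence: float) -> float:
    """Relative risk: likelihood of attack * probability that the
    protective measures are defeated * consequence (loss units)."""
    for p in (threat_likelihood, p_defeat_protections):
        if not 0.0 <= p <= 1.0:
            raise ValueError("probabilities must lie in [0, 1]")
    return threat_likelihood * p_defeat_protections * consequence

# Same adversary and asset, two protection postures:
baseline = risk_score(0.8, 0.5, 1_000_000)  # weaker protective measures
hardened = risk_score(0.8, 0.1, 1_000_000)  # stronger protective measures
```

As the surrounding text cautions, the hard part is not the arithmetic but obtaining defensible values for each factor in a dynamic, adversarial environment.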

Numerous initiatives and projects are being undertaken to improve or develop metrics for all or a specific portion of the security domain. Included in these are the following:

• Several government documents and efforts (for example, NIST SP 800-55) that describe an approach to defining and implementing IT security metrics. Although some of the measures and metrics are useful, they are not sufficient to answer the security questions identified earlier in this section.

• Methods that assess security based on system complexity (code complexity, number of entry points, etc.). These may give some indication of vulnerability, but in the absence of data on attack success rates or the efficacy of mitigation efforts, these methods prove very little.

• Red Teaming, which provides some measure of adversary work factor and is currently done in security assessments and penetration testing. One can apply penetration testing, using a variety of available tools and/or hiring a number of firms that provide this as a service. For example, this can provide metrics on adversary work factor and residual vulnerabilities before and after implementation of a security plan.

• Heuristic approaches to provide metrics in a number of security-related areas. For example, systems often report a measure of “password strength” (usually on some sort of thermometer). However, password strength is a rather vacuous concept in systems with inherently weak security in other areas.

• Security implementation metrics, which might be used to assess how many systems in an enterprise install a newly announced patch, and how quickly.

• Initiatives in security processes, which might define metrics relating to the adoption of those processes and require extensive documentation. However, such approaches typically are about process and not actual performance improvement with respect to security.

This section focuses on metrics for cybersecurity issues. However, it is also useful to consider existing metrics and design techniques for physical security systems, and the known limitations of those techniques. This information would help advance cybersecurity research. It will also be required as our logical and physical cybersecurity systems become ever more intertwined and interdependent. Similarly, techniques for financial risk management may also be applicable to cybersecurity.

What is the status of current research?

There are initiatives aimed at developing new paradigms for identifying measures and metrics. Some of them attempt to apply tools and techniques from other disciplines; others attempt to approach the problem from new directions. These initiatives include the following:

• Measures of effectiveness. The Institute for Defense Analyses (IDA) developed a methodology for determining the effectiveness of cybersecurity controls based on its well-used and -documented methodology for determining the effectiveness of physical security controls. Using a modified Delphi technique, the measures of effectiveness of various components and configurations were determined, which then allowed for a security “ranking” of the potential effectiveness of various architectures and operating modes against different classes of adversaries [IDA2006].

• Ideal-based metrics. The Idaho National Laboratory (INL) took a vastly different approach to developing metrics. It chose to specify several best-case outcomes of security and then attempt to develop real-world measures of those “ideals.” The resulting set of 10 system measurements covering 7 ideals is being tested in the field to determine how well they can predict actual network or system security performance [McQ2008].

• Goal-oriented metrics. Used primarily in the software development domain, the goal-oriented paradigm seeks to establish explicit measurement goals, define sets of questions that relate to achieving the goals, and identify metrics that help to answer those questions.

• Quality of Protection (QoP). This is a recent approach that is in early stages of maturity. It

has been the subject of several workshops but is still relatively qualitative [QoP2008].

• Adversary-based metrics. MIT Lincoln Laboratory chose to explore the feasibility and effort required for an attacker to break into network components, by examining reachability of those components and vulnerabilities present or hypothesized to be present. It and others have built tools employing attack graphs to model the security of networks.

FUTURE DIRECTIONS

On what categories can we subdivide this topic?

For the purposes of this section, we divide the topic of enterprise-level metrics into five categories: definition, collection, analysis, composition, and adoption.

Definition
Definition identifies and develops the models and measures to create a set of security primitives (e.g., for confidentiality, integrity, availability, and others). NIST SP 800-55 provides a useful framework for metrics definition. This publication proposes development of metrics along the dimensions of implementation (of a security policy), effectiveness/efficiency, and mission impact.

Ideally, metrics would be defined to quantify security, but such definitions have been difficult to achieve in practice. At the basic level, we would like to quantify the security of systems, answering questions such as the degree to which one system is more secure than another or the degree to which adoption of security technology or practice makes a system more secure. However, as noted above, these measurements are relative to assumed models for adversary capabilities and goals, and to our knowledge of our systems’ vulnerabilities—and therefore are potentially limited by shortcomings in the models, requirements, knowledge, assumptions, and other factors.

While this section is focused on enterprise-level metrics (ELMs), we must also consider definitions of metrics for interconnected infrastructure systems, as well as for non-enterprise devices. We must also anticipate the nature of the enterprise of the future; for example, technology trends imply that we should consider smart phones as part of the enterprise. Infrastructure systems may be thought of as a particular class of enterprise-level systems. However, the interrelationships among the different infrastructures also suggest that we must eventually be able to consider meta-enterprises.

Collection
Collection requirements may inspire new research in hardware and software for systems that enable the collection of data through meaningful metrics, ideally in ways that cannot be compromised by adversaries. This includes conditioning the data via normalization, categorization, prioritization, and valuation. It might also include system developments with built-in auditability and embedded forensics support, as well as other topic areas, such as malware defense and situational understanding.

Analysis
Analysis focuses on determining how effectively the metrics describe and predict the performance of the system. The prediction should include both current and postulated adversary capabilities. There has been relatively little work on enterprise-level analyses, because a foundation of credible metrics and foundational approaches for deriving enterprise-level evaluations from more local evaluations have been lacking.

Composition
Since security properties are often best viewed as total-system or enterprise-level emergent properties, research is required in the composability of lower-level metrics (for components and subsystems) to derive higher-level metrics for the entire system. This “composable metrics” issue is a key concern for developing scalable trustworthy systems. In addition, the composability of enterprise-level metrics into meta-enterprise metrics and the composability of the resulting evaluations present challenges for the long-term future.

Adoption
Adoption refers to those activities that transform ELM results into a useful form (such as a measurement paradigm or methodology) that can be broadly used—taking systems, processes, organizational constraints, and human factors into account. Monetary and financial considerations may suggest adoption of metrics such as the number of records in a customer database and a cost per record if those records are disclosed. We may also consider financial metrics retrospectively (the cost of a particular compromise, in terms of direct loss,

reputation, remediation costs, etc.). This retrospection would be useful for system designers and for the insurance underwriting concept mentioned previously.

What are the major research gaps?

In spite of considerable efforts in the past, we do not have any universally agreed-upon methodologies to address the fundamental question of how to quantify system security. At a minimum, an evaluation methodology would support hypothesis testing, benchmarking, and adversary models. Hypothesis testing of various degrees of formality, from simple engagements to formal, well-instrumented experiments, is needed to determine the viability of proposed security measures. Benchmarking is needed to establish a system effectiveness baseline, which permits the progress of the system to be tracked as changes are made and the threat environment evolves. Finally, evaluation must include well-developed adversary models that predict how a specific adversary might act in a given context as systems react to that adversary’s intrusions or other exploits.

What are some exemplary problems for R&D on this topic?

The range of requirements for metrics in security is broad. R&D may be focused in any of the following areas:

• Choosing appropriate metrics
• Methods for validating metrics
• Methods for metric computation and collection
• Composition models of metrics to determine enterprise values from subsystem metrics
• Scalability of sets of metrics
• Developing or identifying metric hierarchies
• Measures and metrics for security primitives
• Appropriate uses of metrics (operations, evaluation, risk management, decision making)
• Ability to measure operational security values
• Measuring human-system interaction (HSI)
• Tools to enhance and automate the above areas in large enterprises

What R&D is evolutionary, and what is more basic, higher risk, game changing?

Composability advances (for multiple metrics) could be game-changing advances. Hierarchical composition of metrics should support frameworks such as argument trees and security cases (analogous to safety cases in complex mechanical systems, such as aircraft).

Identifying comprehensive metrics, or a different set of measurement dimensions, might provide a leap forward. The well-known and well-used confidentiality, integrity, availability (CIA) model is good for discussing security, but may not be easily or directly measured in large enterprises. It is also inherently incomplete. For example, it ignores requirements relating to accountability, auditing, real-time monitoring, and other aspects of trustworthiness, such as system survivability under threats that are not addressed, human safety, and so on.

Adapting approaches to metrics from other disciplines is appropriate, but the result is not complete and often not sufficiently applicable (as in the case of probability metrics for component and system reliability). We should consider connections with other fields, while remaining aware that their techniques may not be directly applicable to cybersecurity because of intelligent adversaries and the fluid nature of the attack space.

Many disciplines (such as financial metrics and risk management practices; balanced scorecard, six-sigma, and insurance models; complexity theory; and data mining) operate in environments of decision making under uncertainty, but most have proven methods to determine risk. For example, the field of finance has various metrics that help decision makers understand what is transpiring in their organizations. Such metrics can provide insight into liquidity, asset management, debt management, profitability, and market value of a firm. Capital budgeting tools determining net present values and internal rates of return allow insights into the returns that can be expected from investments in different projects. In addition, there are decision-making approaches, such as the Capital Asset Pricing Model and options pricing models, that link risk and return to provide a perspective of the entire financial portfolio under a wide range of potential market conditions. These methodologies have demonstrated some usefulness and have been applied across industries to support decision making. A possible analog for

IT security would be sound systems development frameworks that support enterprise-level views of an organization’s security. Research is needed to identify system design elements that enable meaningful metrics definition and data collection. Research is also needed on issues in collection logistics, such as the cost of collection and its impact on the metric being used (e.g., whether the collection compromises security).

Research on metrics related to adversary behaviors and capabilities needs to be conducted in several key areas, such as the following:

• The extent of an adversary’s opportunity to affect hardware and software needs to be studied. This may lead to research into, for example, global supply-chain metrics that account for potential adversarial influence during acquisition, update, and remote management cycles.

• Metrics in the broad area of adversary work factor have been considered for some time. The simple example is the increase in the recommended length of cryptographic keys as computational power has increased. This work should continue, but there is a question as to the repeatability of the obtained metric.

• Research related to an adversary’s propensity to attempt a particular attack, in response to a defensive posture adopted by the enterprise, needs to be conducted.

• Economic or market analysis of adversary actions may provide an indirect metric for security effectiveness. If the cost to exploit a vulnerability on a critical and widely used server system increases significantly, we might surmise that the system is becoming more secure over time or that the system has become more valuable to its adversaries. This approach can be confounded by, for example, the monetary assets accessible to the adversary by compromising the service. (A very secure system not widely used in an attractive target space may discourage a market for high-priced vulnerabilities.) It is also not obvious that this is an enterprise-level metric. Nonetheless, the assembled experts considered market analysis a novel and interesting avenue of research.

• Metrics relating to the impact of cybersecurity recommendations on public- and private-sector enterprise-level systems.

Metrics can guide root-cause analysis in the case of security incidents. Research using existing events should compile a list of metrics that might have avoided the incident if they had been known before the incident.

A stretch objective in the long term is the development of metrics and data collection schemes that can provide actuarial-quality data with respect to security. This is needed for a robust market for insurance against cybersecurity-related risks. Another long-range stretch goal would be to unify the metrics and evaluation methodologies for security of the information domain with the metrics and evaluation methodologies for the physical, cognitive, and social domains.

Resources

Industry trends such as exposure to data breaches are leading to the development of tools to measure the effectiveness of system implementations. Industry mandates and government regulations such as the Federal Information Security Management Act (FISMA) and Sarbanes-Oxley require the government and private-sector firms to become accountable in the area of IT security. These factors will lead industry and government to seek solutions for the improvement of security metrics.

Government investment in R&D is still required to address the foundational questions that have been discussed, such as adversary capabilities and threat models.

Measures of success

The ability to accurately and confidently predict the security performance of a component, network, or enterprise is the ultimate measure of success for metrics R&D. Interim milestones include better inputs for risk calculation and security investment decisions. The extent to which the evaluation of local metrics (e.g., see the other sections) can be combined into enterprise-level metrics would be a significant measure of success.
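The cryptographic key-length example of adversary work factor mentioned above can be made concrete: each added key bit doubles brute-force work, so exhaustive-search time at an assumed trial rate is one simple, repeatable work-factor metric. The trial rate below is a hypothetical assumption for illustration, not a measured adversary capability.

```python
# Illustrative adversary work-factor metric: expected years to search
# half of a keyspace by brute force at a fixed trial rate.
# The trial rate is an invented assumption.

SECONDS_PER_YEAR = 365.25 * 24 * 3600

def years_to_half_keyspace(key_bits: int, trials_per_second: float) -> float:
    """On average, half the keyspace must be searched before success."""
    return (2 ** (key_bits - 1)) / trials_per_second / SECONDS_PER_YEAR

rate = 1e12  # hypothetical trials per second
w128 = years_to_half_keyspace(128, rate)
w129 = years_to_half_keyspace(129, rate)  # one extra bit doubles the work
```

This metric is repeatable because it depends only on stated assumptions; the research question raised above is whether work-factor metrics for less mathematically clean defenses can be made similarly repeatable.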

What needs to be in place for test and evaluation?

Testbeds and tools within the testbeds are needed to evaluate the descriptive and predictive value and effectiveness of proposed measures and models, particularly for potentially destructive events. Repositories of measurement “baselines” to compare new metric methods and models will also be required. Virtualization and honeynet environments permit assessment of “time to compromise” experimental metrics, possibly considering systems that are identical except for some security enhancement.

Evaluation and experimentation are essential to measure something that is relevant to security. Evaluation methodology goes hand in hand with metrics, and tools that accurately measure and do not distort quantities of interest also have direct influence on metrics.

To what extent can we test real systems?

An enterprise is a testbed of sorts to glean insights on usability, organizational behavior, and response to security practices. Much of the initial collection and verification must be done on real systems to ensure applicability of the measurements and derived metrics.
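The "identical except for some security enhancement" comparison of time-to-compromise metrics can be summarized with a simple effect statistic. The red-team trial durations below are fabricated purely for illustration; only the median-ratio summary is the point of the sketch.

```python
# Sketch: comparing "time to compromise" between two configurations
# that are identical except for one security enhancement. The trial
# durations (hours, from hypothetical red-team exercises) are invented.

from statistics import median

baseline_hours = [2.0, 3.5, 1.5, 4.0, 2.5]    # without the enhancement
hardened_hours = [8.0, 12.5, 6.0, 15.0, 9.5]  # with the enhancement

def improvement_factor(before: list[float], after: list[float]) -> float:
    """Ratio of median times to compromise (after / before). The median
    is used because a few lucky or unlucky trials skew the mean."""
    return median(after) / median(before)
```

A factor well above 1.0 would suggest the enhancement measurably slowed the attacker, which is exactly the kind of repeatable, experiment-backed claim this section argues such testbeds should support.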

[And2008] R. Anderson. Security Engineering: A Guide to Building Dependable Distributed Systems. Wiley, Indianapolis, Indiana, 2008.

[Avi+2004] A. Avizienis, J.-C. Laprie, B. Randell, and C. Landwehr. Basic concepts and taxonomy of dependable and secure computing. IEEE Transactions on Dependable and Secure Computing, 1(1):11-33, January-March 2004.

[Che2006] E. Chew, A. Clay, J. Hash, N. Bartol, and A. Brown. Guide for Developing Performance Metrics for Information Security. NIST Special Publication 800-80, National Institute of Standards and Technology, Gaithersburg, Maryland, May 2006.

[CRA2003] Four Grand Challenges in Trustworthy Computing: Second in a Series of Conferences on Grand Research Challenges in Computer Science and Engineering. Computing Research Association, Washington, D.C., 2006.

[Jaq2007] A. Jaquith. Security Metrics. Addison Wesley Professional, Upper Saddle River, New Jersey, 2007.

[IDA2006] Institute for Defense Analyses. National Comparative Risk Assessment Pilot Project. Draft Final, IDA Document D-3309, September 2006.

[McQ2008] M.A. McQueen, W.F. Boyer, S. McBride, M. Farrar, and Z. Tudor. Measurable control system security through ideal driven technical metrics. In Proceedings of the SCADA Scientific Security Symposium, January 2008.

[Met2008] Metricon 3.0, July 29, 2008, with copious URLs.

[NIS2009] Information Security Training Requirements: A Role- and Performance-Based Model. NIST Special Publication 800-16 Revision 1, National Institute of Standards and Technology, Gaithersburg, Maryland, March 20, 2009.

[QoP2008] 4th Workshop on Quality of Protection (workshop co-located with CCS-2008), October 2008.

[Swa+2003] M. Swanson, N. Bartol, J. Sabato, J. Hash, and L. Graffo. Security Metrics Guide for Information Technology Systems. NIST Special Publication 800-55, National Institute of Standards and Technology, Gaithersburg, Maryland, July 2003.

Current Hard Problems in INFOSEC Research
3. System Evaluation Life Cycle


What is the problem being addressed?

The security field lacks methods to systematically and cost-effectively evaluate its
products in a timely fashion. Without realistic, precise evaluations, the field cannot
gauge its progress toward handling security threats, and system procurement is
seriously impeded. Evaluations that take longer than the existence of a particular
system version are of minimal use. A suitable life cycle methodology would allow
us to allocate resources in a more informed manner and enable consistent results
across multiple developments and applications.
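A life cycle methodology of this kind implies tracking exactly which parts of a system changed between versions, so that only those parts need full reevaluation. The sketch below illustrates the idea with content fingerprints; it is illustrative only, and the component names and the `evaluate` callback are hypothetical.

```python
import hashlib

def fingerprint(artifact: bytes) -> str:
    """Content hash used to detect changes between evaluated versions."""
    return hashlib.sha256(artifact).hexdigest()

def reevaluate_incrementally(baseline: dict, components: dict, evaluate) -> dict:
    """Re-run evaluation only for components whose fingerprint changed.

    baseline   -- {name: fingerprint} recorded at the last full evaluation
    components -- {name: bytes} current component contents
    evaluate   -- callback performing the (expensive) evaluation of one component
    Returns the updated {name: fingerprint} baseline.
    """
    updated = {}
    for name, content in components.items():
        fp = fingerprint(content)
        if baseline.get(name) != fp:     # new or modified component
            evaluate(name, content)      # only this component is reevaluated
        updated[name] = fp
    return updated
```

Here a full evaluation establishes the baseline once; each subsequent version triggers evaluation only of components whose contents differ, which is the property that keeps reevaluation faster than the release cycle.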

System evaluation encompasses any testing or evaluation method, including
testing environments and tools, deployed to evaluate the ability of a system or a
security “artifact” to satisfy its specified critical requirements. A security artifact
may be a protocol, device, architecture, or, indeed, an entire system or application
environment. Its security depends on the security of the environments in which the
artifact will be deployed (e.g., an enterprise or the Internet), and must be reflected
throughout the system development life cycle (SDLC). Such a product must meet
its specification with respect to a security policy that it is supposed to enforce, and
not be vulnerable to attack or exploitation that causes it to perform incorrectly or
maliciously. Secondary but also important performance goals can be expressed as “do
no harm.” The proposed artifact should not inflict collateral damage on legitimate
actors or traffic in the Internet, and it should not create additional security problems.
The system evaluation life cycle thus denotes continuous evaluation throughout the
system life cycle (requirements, design, development and implementation, testing,
deployment and operations, and decommissioning and disposal). See [NIS2008].

Security evaluation in the SDLC involves four major areas in addressing
potential threats:

ƒƒ Developing explicit requirements and specifications for systems, including
security features, processes, and performance requirements for each
development phase in sufficient detail.
ƒƒ Understanding whether a product meets its specification with respect to a
security policy that it is supposed to enforce. A part of this is understanding
how well the product meets the specification and ensuring that there
are no exploitable flaws. In the case of systems enforcing mandatory
confidentiality or integrity policies, this includes demonstration of the
limits to adversarial exploitation of covert channels.
ƒƒ Understanding whether a product can be successfully attacked or bypassed
by testing it in each phase of its development life cycle, either in a testbed
or through a mathematical model or simulation.

ƒƒ Developing system evaluation processes whereby incremental changes can be
tracked and rapidly reevaluated without having to repeat the entire process.

In each case, independent assessment of a product could reduce reliance on vendor
claims that might mask serious problems. On the other hand, embedded
self-assurance techniques (such as proof-carrying code) could also be used to
demonstrate that certain properties were satisfied.

Systematic, realistic, easy-to-use, and standardized evaluation methods are needed
to objectively quantify the performance of any security artifacts and the security of
the environments where these artifacts are to be deployed, before and after
deployment, as well as the performance of proposed solutions. The evaluation
techniques should objectively quantify security posture throughout the critical
system life cycle. This evaluation should support research, development, and
operational decisions, and maximize the impact of the investment.

Finally, evaluation must occur in a realistic environment. The research community
lacks data about realistic adversarial behavior, including the tactics and techniques
that adversaries use to disrupt and deny normal operations, as well as normal
system use patterns and business relationships, to create a realistic environment for
evaluation that resembles current environments in which systems are deployed. We
also lack understanding of human behavior as users interact with the system and
with security artifacts. Such understanding is needed to evaluate the likelihood of
human acceptance of proposed security artifacts and to simulate human actions
during evaluation (e.g., browsing patterns during evaluation of a web server
defense).

What are the potential threats?

Threats against information and information systems are at the heart of the need
for robust system evaluation. In addition to the threats to operational systems,
adversaries have the potential to affect the security of artifacts at numerous points
within the development life cycle. The complexity of systems, modifications,
constant changes to supply chains, remote upgrades and patches, and other factors
give rise to numerous new threat vectors.

Who are the potential beneficiaries? What are their respective needs?

With regard to the system life cycle, system architects, engineers, developers, and
evaluators will benefit from enhanced methods of evaluation. Beneficiaries of
improved security evaluations range from large and small enterprises to end users
of systems. Although beneficiaries’ needs are generally the same—to prevent
security incidents and to respond quickly to those that evade prevention and
minimize damage, while protecting privacy—the environments that they seek to
protect may be very different, as are their needs for reliability, correctness of
operation, and confidentiality. Direct beneficiaries of better evaluation methods are
system developers; system users and administrators; the customers of security
products (because they need reliable means to evaluate what they buy); the creators
of these products, such as software and hardware companies; and researchers
(because they need to measure their success). Having effective evaluation methods
opens the door to the possibility of standardization of security and to the formation
of attestation agencies that independently evaluate and rank security products. The
potential beneficiaries, challenges, and needs are summarized in Table 3.1.

What is the current state of the practice?

Evaluation of security artifacts is ad hoc. Current methodologies, such as those
discussed in NIST SP 800-64 (Security Considerations in the System Development
Life Cycle) [NIS2008] and Microsoft’s The Security Development Lifecycle
[How+2006], merely reorder or reemphasize many of the tools and methods that
have been unsuccessful in creating security development paradigms. There are
neither standards nor metrics for security evaluation. Product developers and
vendors evaluate their merchandise in-house, before release, via different tests that
are not disclosed to the public. Often, real evaluation takes place in customer
environments, with product vendors collecting periodic statistics about threats
detected and prevented during live operation. Although this is the ultimate measure
of success—how a product performs in the real world—it does not offer security
guarantees to customers prior to purchase. There have been many incidents in
which known security devices have failed (e.g., the Witty worm infected security
products from a well-known security product vendor).


TABLE 3.1: Beneficiaries, Challenges, and Needs

System developers
  Challenges: Integrate components into systems with predictable and dependable
    security properties; effectively track changes from one version to another.
  Needs: Robust methods to compare components to be used in new systems; tools,
    techniques, and standards of system evaluation to enable certification of the
    security properties of developed products.

System owners and administrators
  Challenges: Understand the risk to their information operations and assets;
    operate and maintain information systems in a secure manner.
  Needs: Suites of tools that can be used throughout the operational phases of the
    system life cycle to evaluate the current system state and the requirements and
    impacts of system upgrades or changes.

End users
  Challenges: Operate confidently in cyberspace.
  Needs: Recognized and implemented life cycle system evaluation methods that
    provide high confidence in the safety and security of using online tools and
    environments.

In addition, past efforts such as evaluations under the Trusted Computer System
Evaluation Criteria and the Common Criteria [ISO1999] suffer from inadequate
incremental methods to rapidly reevaluate new versions of systems.

What is the status of current research?

Relatively little research has been done on system evaluation methods. The
research community still values such topics much less than research on novel
defenses and attacks. The metrics and measures needed to describe security
properties during the evaluation life cycle must be developed (see Section 2). The
lack of metrics results in security products that cannot be compared and in solving
past problems instead of anticipating and preventing future threats. Because the
necessary metrics are likely to depend on the nature of the threat a security artifact
aims to address, it is likely that the set of metrics will be large and complex.

On what categories can we subdivide this topic?

We initially discuss this topic relative to a nominal life cycle model. The SDLC
phases represented in our nominal model are: requirements, design, development
and implementation, testing, deployment and operations, and decommissioning.
System evaluation has to be done throughout the entire life cycle, with continuous
feedback and reevaluation against previous stages.

Potential R&D directions that might be pursued at multiple life cycle phases
include the following:

ƒƒ Develop cost-effective methods to specify security features for succeeding life
cycle phases.
ƒƒ Develop adversarial assessment techniques that identify and test for abnormal
or unintended operating conditions that may cause exploitable vulnerabilities.
ƒƒ Develop realistic traffic, adversary, and environment models that span all four
domains of conflict (physical, information, cognitive, and social).
ƒƒ Develop security test cases, procedures, and models to evaluate the artifact in
each life cycle phase.
ƒƒ More effectively perform speedy reevaluations of successive versions resulting
from changes in requirements, designs, implementation, and experience gained
from uses of system applications.

The following discussion considers the individual phases.

Requirements
ƒƒ Establish a sounder basis for how security requirements get specified, evaluated,
and updated at each phase in the life cycle.

ƒƒ Incorporate relevant (current and anticipated) threat models in the
requirements phase so that the final specification can be evaluated against
those threats.
ƒƒ Specify what constitutes secure operation of systems and environments.
ƒƒ Establish requirement specification languages that express security properties,
so that automated code analysis can be used to extract what the code means to
do and what its assumptions are.

Design
ƒƒ Be able to share data with adequate privacy, including data on attacks, and
with emphasis on the economics of data sharing.
ƒƒ Develop a richer process to develop data used to validate security claims.
ƒƒ Develop frameworks for threat prediction based on data about current attacks
and trends.
ƒƒ Develop simulations of (unusual or unanticipated) system states that are
critical for security, as opposed to simulation of steady states.

Development and Implementation
ƒƒ Pursue evaluation methods able to verify that an implementation follows
requirements precisely and does not introduce anything not intended by the
requirements. If specifications exist, this can be done in two steps: verifying
consistency of specifications with requirements, and then consistency of
software with specifications. Concerns about insider threats inside the
development process also need to be addressed.
ƒƒ Pursue verification that a system is implemented in a way that security claims
can be tested.
ƒƒ Consider new programming languages, constraints on or subsets of existing
languages, and hardware design techniques that express security properties,
enforce mandatory access controls, and specify interfaces, so that automated
code analysis can be used to extract what the code means to do and what its
assumptions are.

Testing
ƒƒ Select and evaluate metrics for evaluation of trustworthiness requirements.
ƒƒ Select and use evaluation methods that are well suited to the anticipated
ranges of threats and operational environments.
ƒƒ Develop automated techniques for identifying all accessible system interfaces
(intentional, unintentional, and adversary-induced) and system dependencies.
For example, exploitation of a buffer overflow might be considered a simple
example of an unintended system interface.
ƒƒ Develop and apply automated tools for testing all system dependencies under
a wide range of conditions. As an example, some adversaries may exploit
hardware-software interactions that are ill-documented, are time-dependent,
and occur only when all of the subsystems have been integrated.
ƒƒ Conduct Red Team exercises in a structured way on testbeds to bring realism.
Expand the Red Team concept to include all phases of the life cycle.
ƒƒ Establish evolvable testbeds that are easily upgradeable as technology, threat,
and adversary models change.
ƒƒ Improve techniques for combined performance, usability, and security testing.
This includes abnormal environments (e.g., extreme temperatures) and
operating conditions (e.g., misuse by insiders) that are relevant for security
testing but may exceed the system’s intended range of operation.

Deployment and Operations
ƒƒ Establish and use evaluation methods that can compare actual operational
measurements with design specifications to provide feedback to all life cycle
phases.
ƒƒ Develop methods to identify system, threat, or environment changes that
require reevaluation to validate compliance with evolving security
requirements.
ƒƒ Define and consistently deploy certification and accreditation methods that
provide realistic values regarding the trustworthiness of a system with respect
to its given requirements.

Decommissioning
ƒƒ Develop end-of-life evaluation methods to verify that security requirements
have been achieved during the entire life cycle. This includes ensuring that an
adversary cannot extract useful information or design knowledge from a
decommissioned or discarded security artifact.
ƒƒ Inform threat models from product or system end-of-life analysis.

What are the major research gaps?

A major gap is lack of the knowledge and understanding of the threat domain that
is needed to develop realistic security requirements. One reason for this gap is the
lack of widely available data on legitimate and attack traffic, for various threats
and at various levels. Another large challenge is the lack of reliable methods to
measure the success of various attacks, and inversely to measure the success of
defensive actions against attacks.

Yet another challenge lies in not understanding how much realism matters for
testing and evaluation. For example, can tests in a 100-node topology with realistic
traffic predict behavior in a 10,000-node topology, and for which threats? Some
large “hybrid” testbeds may need mixtures of real, emulated, and simulated
entities to provide flexible tradeoffs between test accuracy and testbed
cost/scalability. If so, then workload estimation and workload partitioning tools
are needed to design experiments for large testbeds. (A simple example is that a
malware research testbed typically needs real hosts but can emulate or simulate
the network interconnections.) Also relevant here is the DETERlab testbed
(cyber-DEfense Technology Experimental Research laboratory testbed), a
general-purpose experimental infrastructure for use in research.

Understanding of which evaluation methods work for which threats is also lacking.
For example, formal reasoning and model checking may work for software, but
simulation may work better for routing threats. Finally, there is no peer review
mechanism to review and validate evaluation mechanisms or proposals.

What are some exemplary problems for R&D on this topic?

Possible directions to solve current problems in security evaluation are: (a) system
architectures that enhance evaluation throughout the development cycle;
(b) development of security metrics and benchmarks for components, subsystems,
and entire enterprises; (c) development of tools for easy replication of realistic
environments in testbeds and simulations; (d) realistic adversary models, including
how those adversaries might react to changes in the defensive security posture;
and (e) encompassing methodologies that bring these components together.

Projects envisioned in this area include the following:

ƒƒ Develop cost-effective methodologies and supporting tools that can result in
timely evaluations and can rapidly track the effects of incremental changes.
ƒƒ Enable creation of attack data repositories under government management,
similar to the PREDICT repository for legitimate data. Develop approaches to
bring realism into simulations and testbeds.
ƒƒ Develop research about when scalability matters, and in what way; about
when realism (or simulation) matters, and what type of realism; and about
what type of testing works for which threats and environments. Develop
simple metrics for system and network health and for attack success.
ƒƒ Develop detailed metrics for system and network health and for attack
success. Develop benchmarks and standardize testing.

What R&D is evolutionary, and what is more basic, higher risk, game changing?

The development over time of system evaluation tools, methodologies, measures,
and metrics will require iterations and refinements of the successes of short-term
projects, as well as long-term research. There are short- and long-term implications
in many of the projects and challenges noted.

Evolutionary, relatively short-term R&D challenges include the following:

ƒƒ Defining verifiable parametric sets of requirements for trustworthiness, and
improved models for assessing requirements.
ƒƒ Devising methods to recreate realism in testbeds and simulations while
providing flexible trade-offs between cost, scalability, and accuracy. (These
include better methods for designing experiments for large testbeds.)
ƒƒ Developing methods and representations such as abstraction models to
describe threats, so that designers can develop detailed specifications.
ƒƒ Developing user interfaces, tools, and capabilities to allow complex
evaluations to be conducted.
ƒƒ Developing tool sets that can grow with technology (e.g., 64-bit words, IPv6).
ƒƒ Creating better techniques for testing combined performance, usability, and
security.
ƒƒ Developing understanding of how much realism matters and what type of
realism is possible and useful.

Long-term, high-risk R&D challenges include the following:

ƒƒ Developing models of correct operation for various network elements and
networks at and across all levels of protocol models.
ƒƒ Developing metrics for attack success and for security, based on the models of
correct operation.
ƒƒ Developing benchmarks to standardize testing.
ƒƒ Developing understanding about the advantages and limitations of various
evaluation methods (simulation, emulation, pilot deployment, model
checking, etc.) when related to specific threats.
ƒƒ Managing risky test environments (such as those containing malware).
ƒƒ Developing better techniques for security testing across all domains of
conflict.
ƒƒ Developing integrated, cost-effective methodologies and tools that
systemically address all of the above desiderata, including facilitation of
scalable trustworthiness (Section 1), survivability (Section 7), resistance to
tampering and other forms of insider misuse by developers (Section 4), rapid
reevaluation after incremental changes, and suitable uses of formal methods
where most usefully applicable—among other needs. The potential utility of
formal methods has increased significantly in the past four decades and needs
to be considered whenever it can be demonstrably effective.

Resources

Academia and industry should collaborate to share data about traffic, attacks, and
network environments, and to jointly define standards and metrics for evaluation,
including joint design of realism criteria for evaluation environments.

Government should help in mandating, regulating, and promoting this
collaboration, especially with regard to data sharing. Legal barriers to data sharing
must be addressed. Some industry sectors may be reluctant to share vulnerability
data because of legal liability concerns. There may also be privacy and customer
relations concerns. An example would be data sharing by common carriers where
the shared data uniquely identify individual customers. The government should
also provide more complete threat and adversary capability models for use in
developing evaluation and testing criteria.

Other potential government activities include the following:

ƒƒ Propose evaluation methods that are proven correct as national or
international standards for tech transfer. They also should be implemented in
currently popular simulations and testbeds. Industry should be encouraged to
use these methods, perhaps via market incentives.
ƒƒ Form attestation agencies that would evaluate products on the market, using
evaluation methods that are ready for tech transfer, and rank those products
publicly.
ƒƒ Create a National CyberSecurity and Safety Board that would collect attack
reports from organizations and share them in a privacy-safe manner. The
board could also mandate sharing. Another way is establishing a
PREDICT-like repository for attack data sharing. Yet a third way is
developing market incentives for data sharing.
ƒƒ Fund joint academic/industry partnerships in a novel way. Academics have a
hard time finding industry partners that are willing to commit to tech transfer.
A novel way would have government find several partners from various fields:
enterprises, ISPs, government networks, SCADA facilities, security device
manufacturers, etc. These partners would pledge to provide data to
researchers in the evaluation area and to provide small pilot deployments of
technologies developed by projects in other well-established areas (e.g.,
solutions for critical system availability). Thus, various evaluation methods
could be compared with real-deployment evaluations. Without this ground
truth comparison, it is impossible to develop good evaluation methods,
because evaluation must correctly predict ground truth.

Measures of success

One key milestone as a measure of success will be the eventual adoption by
standards bodies such as NIST or ISO of consistent frameworks, methodologies,
and tools for system evaluation. System developers will be able to choose
components from vendors based on results obtained from well-known evaluation
methods. Direct comparisons of vendor products will be possible, based on
measures of performance in standard tests.

What needs to be in place for test and evaluation?

A flexible, scalable, and secure large-scale testbed would enable high-fidelity tests
of products using new development and evaluation methods.

To what extent can we test real systems?

Because system evaluation must occur at all phases of the life cycle, there should
be opportunities to test new tools and methodologies on real systems
unobtrusively.
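The direct product comparisons envisioned under “Measures of success” presuppose a shared, labeled test corpus and agreed measures. A minimal sketch of such scoring follows; the detector interface and corpus format are hypothetical.

```python
def score_product(detector, corpus) -> dict:
    """Run one detector over a labeled benchmark corpus and report standard measures.

    detector -- callable: sample -> bool (True means "flagged as malicious")
    corpus   -- list of (sample, is_malicious) pairs shared across all vendors
    """
    tp = fp = fn = tn = 0
    for sample, is_malicious in corpus:
        flagged = detector(sample)
        if flagged and is_malicious:
            tp += 1          # true positive: attack correctly detected
        elif flagged:
            fp += 1          # false alarm on benign sample
        elif is_malicious:
            fn += 1          # missed attack
        else:
            tn += 1          # benign sample correctly passed
    return {
        "detection_rate": tp / (tp + fn) if tp + fn else 0.0,
        "false_alarm_rate": fp / (fp + tn) if fp + tn else 0.0,
    }
```

Because every vendor is scored against the same corpus and the same two rates, the resulting numbers are directly comparable, which is precisely what in-house, undisclosed testing cannot provide.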

[Ade2008] S. Adee. The hunt for the kill switch. IEEE Spectrum, 45(5):32-37, May 2008.

[DSB2005] Defense Science Board Task Force on High Performance Microchip Supply, February 2005.

[How+2006] M. Howard and S. Lipner. The Security Development Lifecycle. Microsoft Press, Redmond, Washington, 2006.

[ISO1999] International Organization for Standardization / International Electrotechnical Commission (ISO/IEC),
International Standard 15408:1999 (parts 1 through 3), Common Criteria for Information Technology Security
Evaluation, August 1999.

[NIS2008] Security Considerations in the System Development Life Cycle. NIST Special Publication 800-64 Revision 2
(Draft), National Institute of Standards and Technology, Gaithersburg, Maryland, March 2008.


Current Hard Problems in INFOSEC Research
4. Combatting Insider Threats

What is the problem being addressed?

Cybersecurity measures are often focused on threats from outside an organization,
rather than threats posed by untrustworthy individuals inside an organization.
Experience has shown that insiders pose significant threats:

ƒƒ Trusted insiders are among the primary sources of many losses in the
commercial banking industry.
ƒƒ Well-publicized intelligence community moles, such as Aldrich Ames,
Robert Hanssen, and Jonathan Pollard, have caused enormous and
irreparable harm to national interests.
ƒƒ Many insiders involved in misuses were hired as system
administrators, became executives, or held other kinds of privileges
[Cap2008.1, Cap2008.2].

This section focuses on insider threats to cyber systems and presents a roadmap for
high-impact research that could aggressively curtail some aspects of this problem. At
a high level, opportunities exist to mitigate insider threats through aggressive profil-
ing and monitoring of users of critical systems, “fishbowling” suspects, “chaffing”
data and services users who are not entitled to access, and finally “quarantining”
confirmed malevolent actors to contain damage and leaks while collecting action-
able counter-intelligence and legally acceptable evidence.

There are many proposed definitions of the insider threat. For the purposes of this
discussion, an insider threat is one that is attributable to individuals who abuse
granted privileges. The scope of consideration here includes individuals masquerad-
ing as other individuals, traitors abusing their own privileges, and innocents fooled
by malevolent entities into taking adverse actions. Inadvertent and intentional
misuse by privileged users are both within the scope of the definition. Although an
insider can have software and hardware acting on his or her behalf, it is the indi-
vidual’s actions that are of primary concern here. Software proxies and other forms
of malevolent software or hardware—that is, electronic insiders—are considered in
Section 5 on combatting malware and botnets.

The insider threat is context dependent in time and space. It is potentially relevant
at each layer of abstraction. For example, a user may be a physical insider or a
logical insider, or both. The threat model must be policy driven, in that no one
description will fit all situations.

Unlike unauthorized outsiders and insiders who must overcome security controls to
access system resources, authorized insiders have legitimate and (depending on their
positions) minimally constrained access to computing resources. In addition, highly

trusted insiders who design, maintain, or manage critical information systems are
of particular concern because they possess the skills and access necessary to
engage in serious abuse or harm. Typical trusted insiders are system
administrators, system programmers, and security administrators, although
ordinary users may have or acquire those privileges (sometimes as a result of
design flaws and implementation bugs). Thus, there are different categories of
insiders.

What are the potential threats?

The insider threat is often discussed in terms of threats to confidentiality and
privacy (such as data exfiltration). However, other trustworthiness requirements,
such as integrity, availability, and accountability, can also be compromised by
insiders. The threats span the entire system life cycle, including not only design
and development but also operation and decommissioning (e.g., where a new
owner or discoverer can implicitly become a de facto insider).

Who are the potential beneficiaries? What are their respective needs?

The beneficiaries of this research range from the national security bodies operating
the most sensitive classified systems to homeland security officials who need to
share Sensitive But Unclassified (SBU) information/Controlled Unclassified
Information (CUI), and to health care, finance, and many other sectors where
sensitive and valuable information is managed. In many systems, such as those
operating critical infrastructures [Noo+2008], integrity, availability, and total
system survivability are of highest priority and can be compromised by insiders.

Beneficiary needs may include tools and techniques to prevent and detect
malicious insider activity throughout the entire system life cycle, approaches to
minimize the negative impact of malicious insider actions, education and training
for safe computing technology and human peer detection of insider abuses, and
systems that are resilient and can effectively remediate detected insider exploits.
Of particular interest will be the ability to deal with multiple colluding
insiders—including detecting potential abuses and responding to them.

What is the current state of the practice?

The insider threat today is addressed mostly with procedures such as awareness
training, background checks, good labor practices, identity management and user
authentication, limited audits and network monitoring, two-person controls,
application-level profiling and monitoring, and general access controls. However,
these procedures are not consistently and stringently applied, because of high cost,
low motivation, and limited effectiveness. For example, large-scale identity
management can accomplish a degree of nonrepudiation and deterrence but does
not actually prevent an insider from abusing granted privileges.

Technical access controls can be applied to reduce the insider threat but not
eliminate it. The technology traditionally brought forward by the research
community is multilevel security (MLS), an example of mandatory access controls
(MAC) that prevent highly sensitive information from being accessed by less
privileged users. Some work has also been done on multilevel integrity (MLI
[Bib1977]), which prevents less trusted entities from affecting more trusted
entities. However, these are typically too cumbersome to be usable in all but the
most extreme environments; even in such environments, the necessary systems are
not readily available. Access controls used in typical business environments tend to
be discretionary, meaning that the individual or group of individuals designated as
owners of an object can arbitrarily grant or deny others access to the object.
Discretionary access controls (DAC) typically do not prevent anyone with read
access to an object from copying it and sharing the copy outside the reach of that
user’s access control system. They also do not ensure sufficient protection for
system and data integrity. Further background on these and other security-related
issues can be found in [And08, Bis02, Pfl03].

File and disk encryption may have some relevance to the insider threat, to the
extent that privileged insiders might not be able to access the encrypted data of
other privileged users. Also of possible relevance might be secret splitting,
k-out-of-n authorizations, and possibly zero-knowledge proofs. However, these
would need considerable improvement if they were to be effective in commercial
products.

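The secret splitting and k-out-of-n authorizations mentioned above are classically built on threshold secret sharing. The following minimal sketch of Shamir's scheme (the Mersenne prime field and function names are illustrative choices, not drawn from this report) splits an authorization secret so that any k of n officers can jointly reconstruct it, while fewer than k learn nothing:

```python
import random

PRIME = 2**127 - 1  # illustrative prime field; must exceed the secret

def split_secret(secret, k, n):
    """Split `secret` into n shares; any k of them reconstruct it."""
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(k - 1)]
    shares = []
    for x in range(1, n + 1):
        y = sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
        shares.append((x, y))
    return shares

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the secret."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * -xj) % PRIME
                den = (den * (xi - xj)) % PRIME
        secret = (secret + yi * num * pow(den, PRIME - 2, PRIME)) % PRIME
    return secret
```

For example, a signing key split 3-out-of-5 among administrators forces at least three insiders to collude before the key can be abused, which is exactly the property the commercial products discussed above lack.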

What is the status of current research?

Several studies of the insider threat have been produced in the past 10 to 15 years, although some of these rely on research on access controls dating back as many as 40 years. These need to be compiled and serve as input to a taxonomy of the threats and possible violations. Ongoing and emerging research efforts include the following:

- The 2008 Dagstuhl summer seminar on Countering Insider Threats [Dag2008] included position papers that are being considered for publication as a book. It represented a broad assessment of a wide variety of

- Ongoing insider and identity management projects under the aegis of The Institute for Information Infrastructure Protection (I3P), for example, decoy networking and honeypots, correlating host and network indicators of insider threats, and behavior-based access control. Three papers from the I3P identity management projects were presented at IDtrust 2009. See the references in Section 6.

- Two Carnegie Mellon University reports on insider threats in government [Cap2008.1] and in information technology generally, with emphasis on the financial sector [Cap2008.2]; see also [Ran2004] and [FSS2008].

In addition, a DoD Insider Threat to Information Systems report [IAT2008], a study of best practices [HDJ2006], various Columbia University papers and a book on insider threats (e.g., [Sto+2008]), and an NSA/ARDA (IARPA) report on classifications and insider threats [Bra2004] are relevant. Also, the Schonlau data set for user command modeling may be of interest.

On what categories can we subdivide this topic?

Approaches for coping with insider misuse can be categorized as collect and analyze (monitoring), detect (provide incentives and data), deter (prevention should be an important goal), protect (maintain operations and economics), predict (anticipate threats and attacks), and react (reduce opportunity, capability, and motivation and morale for the insider). For present purposes, these six categories are grouped pairwise into three bins: collect and analyze, detect; deter, protect; and predict, react.

What are the major research gaps?

Many gaps relating to insider threats need to be better understood and remediated.

- Checking. Better mechanisms are needed for policy specification and automated checking (e.g., role-based access control [RBAC] and other techniques). However, any such mechanism must have precise and sound semantics if it is to be useful. (Some past work on digital rights management may be of some indirect interest here.)

- Response strategy and privacy protection for falsely accused insider abuses. In particular, privacy-enhanced sharing of behavior models and advanced fishbowling techniques to enable detailed monitoring and limit damage by a suspected inside threat. (See Section 10.)

- Behavior-based access control.

- Decoys, deception, tripwires in the open.

- Beacons in decoy (and real) documents. Adobe and other modern platforms perform a great deal of network activity at startup and during document opening, potentially enabling significant beaconing.

- More pervasive monitoring and profiling, coupled with remediation in the presence of detected potential misuses.

- Controlled watermarking of documents and services to trace sources.

- Useful data. The research community needs much more data and more realistic data sets for experimentation.

- Procedures and technology for emergency overrides are needed in almost every imaginable application, but must typically be specific to each application. They are particularly important in health care, military, and other situations where human lives depend on urgent access. The existing limitations are in part related to lack of motivation for developing and using fine-grained


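The "Checking" gap above, policy specification with automated checking such as RBAC, can be made concrete with a minimal sketch. The users, roles, permissions, and the separation-of-duty rule below are invented purely for illustration:

```python
# Minimal RBAC sketch: users are assigned roles, roles carry permissions,
# and a static separation-of-duty constraint can be checked automatically.
# All names here are hypothetical examples.

USER_ROLES = {"alice": {"auditor"}, "bob": {"operator", "backup"}}
ROLE_PERMS = {
    "auditor": {("logs", "read")},
    "operator": {("plant", "control"), ("logs", "read")},
    "backup": {("tapes", "write")},
}
# No single user may hold all of the roles in any one of these sets.
SEPARATION_OF_DUTY = [{"auditor", "operator"}]

def permitted(user, obj, action):
    """True if any role assigned to `user` grants (obj, action)."""
    return any((obj, action) in ROLE_PERMS.get(role, set())
               for role in USER_ROLES.get(user, set()))

def sod_violations():
    """Users whose assigned roles violate a separation-of-duty rule."""
    return [(user, rule) for user, roles in USER_ROLES.items()
            for rule in SEPARATION_OF_DUTY if rule <= roles]
```

A check such as sod_violations() is only as meaningful as the "precise and sound semantics" called for above: the model must match what the system actually enforces.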
access controls. In addition, emergency overrides can be abused by insiders who feign or exploit crises. Overall, approaches must be closely connected to policy specifications.

- Lessons may be learned from safety systems. For example, in process control applications, separate safety systems are used to ensure that a process is safely shut down when certain parameters are exceeded because of failure of the control system or for any unanticipated reasons. Analogous protection mechanisms for an information system might ensure that certain operations are never allowed, regardless of the privileges of the users attempting them. Similarly, the principles of least common mechanism and least privilege should be applied more consistently. Also relevant would be "safe booting" for self-protected monitors.

- From a user perspective, security and usability must generally be integrally aligned, but especially with respect to insider misuse. For example, users should not feel threatened unless they are actually threats to system integrity and to other users. (Interactions with the usability topic in Section 11 are particularly relevant here.)

- Privacy is an important consideration, although it typically depends on the specific policies of each organization.

- Existing access controls tend to be inadequately fine-grained with respect to preventing insider misuse. In addition, even the existing controls are not used to their full extent. Moreover, better mechanisms are needed for both active monitoring (for detection and response) and passive monitoring (for later analysis and forensics). Note that the prevention/monitoring/recording/archiving mechanisms must themselves be able to withstand threats, especially when the defenders are also the attackers. Also, collection of evidence that will stand up in court is an important part of deterrence. To this end, forensic mechanisms and information must be separated from the systems themselves.

Advanced fine-grained differential access controls; role-based access controls; serious observance of separation of roles, duties, and functionality; and the principle of least privilege also need to be integrated with functional cryptography techniques, such as identity-based and attribute-based encryption, and with fine-grained policies for the use of all the above concepts.

What are some exemplary problems for R&D on this topic?

The categories noted above and some potential approaches are summarized in Table 4.1.

Collect and Analyze

- Data sets relating to insider behavior and insider misuse need to be established. Very few such data sets on insider behavior are available today, in part because victims are reluctant to divulge details and in part because many cases remain unknown beyond local confines. What data should be collected and how it should be made available (openly or otherwise), perhaps via trustworthy third parties, need to be considered. Privacy concerns must be addressed.

- Systems need to be designed to be auditable in ways sufficient to allow collection and analysis of forensic-quality data.

- Models are needed to represent both normal and abnormal insider activity. However, past experience with pitfalls of such models needs to be respected.

- Methodologies are needed for measuring and comparing techniques and tools meant to handle insider threats.

Detect

- Detection of insider abuse and suspected anomalies must be timely and reliable.

- Data mining, modeling, and profiling techniques are needed for detection of malicious insider activity.

- Better techniques are needed to determine user intent from strict observation, as opposed to merely detecting deviations from expected policies.

- Prediction and detection need to be effectively integrated.

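One standard building block for recording mechanisms that must themselves "withstand threats" is a hash-chained, append-only log, which makes after-the-fact modification or deletion of earlier entries evident on verification. A minimal sketch (the record fields are illustrative):

```python
import hashlib
import json

def append_entry(log, record):
    """Append a record chained to the digest of the previous entry."""
    prev = log[-1]["digest"] if log else "0" * 64
    body = json.dumps(record, sort_keys=True)
    digest = hashlib.sha256((prev + body).encode()).hexdigest()
    log.append({"record": record, "prev": prev, "digest": digest})

def verify_chain(log):
    """Recompute every link; any altered or removed entry breaks the chain."""
    prev = "0" * 64
    for entry in log:
        body = json.dumps(entry["record"], sort_keys=True)
        if entry["prev"] != prev or \
           hashlib.sha256((prev + body).encode()).hexdigest() != entry["digest"]:
            return False
        prev = entry["digest"]
    return True
```

By itself this only makes tampering detectable; the separation of the forensic record from the monitored system, as argued above, is what keeps a privileged insider from rewriting the whole chain.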

TABLE 4.1: Potential Approaches to Combatting Insider Threats

Category | Definition | Potential Approaches
Collect and Analyze, Detect | Understanding and identifying threats and potential risks | Broad-based misuse detection oriented to insiders
Deter, Protect | Trustworthy systems with specific policies to hinder insider misuse | Inherently secure systems with differential access controls
Predict, React | Remediation when insider misuse is detected but not prevented | Intelligent interpretation of likely consequences and risks

Deter

- Fine-grained access controls and correspondingly detailed accountability need to have adequate assurance. Audit logs must be reduced to be correctly interpreted, without themselves leaking information.

- Deterrence policies need to be explored and improved. Training should include use of decoys.

- Incentives need to be developed, such as increased risks of being caught, greater consequences if caught, lessened payoffs if successful, and decreased opportunities for user disgruntlement. The role of an ombudsperson should also be considered in this context.

- Increased incentives for anonymous whistle-blowing, engendering an atmosphere of peer-level misuse detection and monitoring.

- Social, ethical, and legal issues, as well as human factors, need to be addressed in a multidisciplinary fashion.

Protect

- Using a life cycle view could be helpful to establish security perimeters for specific purposes and particular policies, and to identify all relevant insiders therein. (See the section on System Evaluation Life Cycle.) Note that in many cases there are no specific boundaries between inside and outside.

- Continuous user authentication and reauthentication may be desirable to address insider threats.

- System architectures need to pervasively enforce the principle of least privilege, which is particularly relevant against insider threats. The principle of least common mechanism could also be useful, restricting functionality and limiting potential damage. Access control mechanisms must move beyond the concept of too-powerful superuser mechanisms, by splitting up the privileges as was done in Trusted Xenix. Mechanisms such as k-out-of-n authorizations might also be useful. New access control mechanisms that permit some of the discipline of multilevel security might also help.

- Deception, diversity, and making certain protection mechanisms more invisible might be useful in addressing the insider threat. Decoys must be conspicuous, believable, differentiable (by good guys), noninterfering, and dynamically changing.

- New research is especially needed in countering multiple colluding insiders. For example, the development of defensive mechanisms that systematically necessitate multiple colluders would be a considerable improvement.

- Anti-tamper technologies are needed for situations where insiders have physical access to systems. Similar technologies may be desirable for logical insiders. Inspiration from nuclear safety controls can illuminate some of the concerns.

- Protections are needed for both system integrity and data integrity, perhaps with finer-grained controls than for outsiders. In addition, operational auditing and rollback mechanisms are needed subsequent to integrity violations. Note that physical means (e.g., write-once media) and logical means (log-structured file systems) are both relevant.


- Mechanisms are needed that exhaustively describe and enforce the privileges that a user is actually granted. In particular, visualization tools are needed for understanding the implications of both requested and granted privileges, relative to each user and each object. This approach needs to include not just logical privileges, but also physical privileges.

- Mechanisms are needed to prevent overescalation of privileges on a systemwide basis (e.g., chained access that allows unintended access to a sensitive piece of data). However, note that neither trust nor delegation is a transitive operation.

Predict

- Various predictive models are needed, for example, for indicators of risks of insider misuse, dynamic precursor indicators for such misuse, and determining what is operationally relevant (such as the potentially likely outcomes).

- Dynamic analysis techniques are needed to predict a system component's susceptibility to a certain insider attack, based on system operations and configuration changes.

- Profiles of expected good behavior and profiles of possible bad behavior are generally useful, but neither approach is sufficient. Additional approaches are needed.

- Better technologies are needed to achieve meaningful prediction, including analysis of communications, user behavior, and content. Prediction must address users and surrogates, as well as their actions and targets.

React

- Automated mechanisms are needed that can intercede when misuse is suspected, without jeopardizing system missions and without interfering with other users. For example, some sort of graceful degradation or system recovery may be needed, either before misuse has been correctly identified or afterwards.

- Mechanisms and policies are needed to react appropriately to the detection of potentially actively colluding insiders.

- Architecturally integrated defense and response strategies might mitigate the effects of insider attacks, for example, an insider who is able to override existing policies. One strategy of considerable interest would be unalterable (e.g., once-writable) and non-bypassable audit trails that cannot be compromised. Another strategy would be mechanisms that cannot be altered without physical access, such as overriding safety interlocks.

- Architecturally integrated response strategies might also be invoked when misuse is detected, gathering forensics-worthy evidence of the potential network of inside threats, and of adversary sources and methods, to enable law-enforcement use of evidence.

- Research is needed on scalable mechanisms for revocable credentials, perfect forward secrecy built into systems, and other approaches that could simplify timely reactions.

- Note that these categories are somewhat interrelated. Any research program related to coping with insider threats needs to keep this in mind. Table 4.2 summarizes some of the research gaps, research initiatives, benefits, and time frames.

What are the near-term, mid-term, and long-term capabilities that need to be developed?

Near Term

- Compile and compare existing studies relating to the insider threat. (Detect)
- Develop data collection mechanisms and collect data. (Detect)
- Evaluate suitability of existing RBAC R&D to address insider threats. (Protect)
- Develop anti-tampering approaches. (Protect)
- Explore the possible relevance of digital rights management (DRM) approaches. (Protect)

Medium Term

- Develop feature extraction and machine learning mechanisms to find outliers. (Detect)
- Develop tools to exhaustively and accurately understand granted privileges as roles and system configurations change. (Detect)
- Develop procedures to evaluate insider threat protection methods in reliable and comparable ways. (Detect)

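The overescalation of privileges through chained access discussed above, together with the caution that neither trust nor delegation is transitive, can be checked mechanically as graph reachability over granted accesses. The subjects, objects, and grants below are hypothetical:

```python
from collections import deque

# Direct grants: subject -> set of objects/subjects it can access.
# In this toy model, access to a surrogate confers use of its grants,
# which is exactly how unintended chains arise.
GRANTS = {
    "intern": {"build-server"},
    "build-server": {"signing-key"},
    "admin": {"signing-key"},
}

def reachable(subject):
    """All targets transitively obtainable through chained access."""
    seen, queue = set(), deque([subject])
    while queue:
        node = queue.popleft()
        for target in GRANTS.get(node, set()):
            if target not in seen:
                seen.add(target)
                queue.append(target)
    return seen

def overescalations(intended):
    """Subjects whose transitive reach exceeds their intended grants."""
    return {s: reachable(s) - intended[s]
            for s in intended if reachable(s) - intended[s]}
```

In this example the intern's direct grant looks harmless, but the transitive closure exposes the signing key, which is the kind of systemwide implication the visualization tools called for above would surface.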

TABLE 4.2: Gaps and Research Initiatives

Identified Gap | Research Initiatives | Benefit | Time Frame
Inadequately fine-grained access controls | Better mechanisms, policies, monitoring | Better detection and prevention of insider misuse | Near to long term
Absence of insider-misuse-aware detection | Better detection tools | More precise detection of insider misuse | Near term
Difficulties in remediation | Mixed strategies for finer-grained, continuous monitoring and action | Flexible response to detected misuses | Longer term

- Develop better methods to combat insiders acting alone. (Protect)
- Pursue the relevance and effectiveness of deception techniques. (Protect)
- Incorporate integrity protection into authorization and system architectures. (Protect)
- Develop behavior-based security, for example, advanced decoy networking. (Protect)
- Develop and apply various risk indicators. (React)

Long Term

- Establish effective methods to apply the principle of least privilege. (Protect)
- Develop methods to address multiple colluding insiders. (Protect)
- Pursue the architecture of insider-resilient systems. (Protect)
- Pursue applications of cryptography that might limit insider threats. (Protect)
- Develop automated decoy generation (may require advances in natural language understanding). (Protect)
- Develop insider prediction techniques for users, agents, and actions. (React)

What R&D is evolutionary and what is more basic, higher risk, game changing?

Intelligent uses of authentication, existing access-control and accountability mechanisms, and behavior monitoring would generally be incremental improvements. However, in the long term, significantly new approaches are desirable. Research, experimental testbeds, and evaluations will be essential.

Measures of success

Various metrics are needed with respect to the ability of systems to cope with insiders. Some will be generic; others will be specific to given applications and given systems. Metrics might consider the extent to which various approaches to authentication and authorization might be able to hinder insider misuse. For example, what might be the relative merits of cryptographically based authentication, biometrics, and so on, with respect to misuse, usability, and effectiveness? To what extent would various approaches to differential access controls hinder insider misuse? Detectability of insider misuse and the inviolability of audit trails would also be amenable to useful metrics.

The extent to which such localized metrics might be composable into enterprise-level metrics is a challenge of particular interest here.

To what extent can we test real systems?

- There is a strong need for realistic data for evaluation of technologies and policies that counter insider threats. This must be done operationally in a relatively noninvasive way. Testbeds are needed, as well as exportable databases of anonymized data (anonymization is generally a complicated problem).

- Red teaming is needed to identify

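Several of the roadmap items above (behavior-based security, profiles of expected good behavior, machine learning mechanisms to find outliers) reduce in their simplest form to scoring new activity against a per-user profile, in the style of work on command-line data sets such as Schonlau's. The detector below is deliberately simple; the commands, floor probability, and threshold are illustrative, not recommended values:

```python
from collections import Counter
import math

def build_profile(history):
    """Relative frequency of each command in a user's training history."""
    counts = Counter(history)
    total = sum(counts.values())
    return {cmd: n / total for cmd, n in counts.items()}

def surprise(profile, session, floor=1e-4):
    """Average negative log-likelihood of a session under the profile.
    Commands never seen in training receive a small floor probability."""
    return sum(-math.log(profile.get(cmd, floor)) for cmd in session) / len(session)

def is_outlier(profile, session, threshold=5.0):
    """Flag sessions far less likely than the user's training behavior."""
    return surprise(profile, session) > threshold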

potential attack vectors available to insiders and to test the relevance of potential solutions.

- Some effort should be devoted to reliably simulating insider attacks and their system consequences.

- Cases of insider misuse may represent statistically rare events. Many cases of insider misuse can be expected to be unique in their motivation and execution, although there will be common modalities. Thus, special care must be devoted to understanding and accommodating the implications of rare events. Alternatively, insider misuse may be common but rarely detected or reported. If budgets are limited, choices may have to be made regarding the relative importance of improving positive and negative detection rates, and for which types of misuse cases.

- Tests involving decoys might be useful in training exercises.

[And2008] R. Anderson. Security Engineering: A Guide to Building Dependable Distributed Systems. Wiley, Indianapolis, Indiana, 2008.

[Bib1977] K.J. Biba. Integrity Considerations for Secure Computer Systems. Technical Report MTR 3153, The MITRE Corporation, Bedford, Massachusetts, June 1975. Also available from USAF Electronic Systems Division, Bedford, Massachusetts, as ESD-TR-76-372, April 1977.

[Bis2002] M. Bishop. Computer Security: Art and Science. Addison-Wesley Professional, Boston, Massachusetts, 2002.

[Bra2004] Richard D. Brackney and Robert H. Anderson. Understanding the Insider Threat: Proceedings of a March 2004 Workshop. RAND Corporation, Santa Monica, California, 2004.

[Cap2008.1] D. Cappelli, T. Conway, S. Keverline, E. Kowalski, A. Moore, B. Willke, and M. Williams. Insider Threat Study: Illicit Cyber Activity in the Government Sector. Carnegie Mellon University, January 2008.

[Cap2008.2] D. Cappelli, E. Kowalski, and A. Moore. Insider Threat Study: Illicit Cyber Activity in the Information Technology and Telecommunications Sector. Carnegie Mellon University, January 2008.

[Dag2008] Dagstuhl Workshop on Insider Threats, July 2008.

[FSS2008] Financial Services Sector Coordinating Council for Critical Infrastructure Protection and Homeland Security, Research and Development Committee. Research Agenda for the Banking and Finance Sector. September 2008. Challenge 4 of this report is Understanding the Human Insider Threat.

[HDJ2006] IT Security: Best Practices for Defending Against Insider Threats to Proprietary Data, National Defense Journal Training Conference, Arlington, Virginia. Homeland Defense Journal, 19 July 2006.

[IAT2008] Information Assurance Technology Analysis Center (IATAC). The Insider Threat to Information Systems: A State-of-the-Art Report. IATAC, Herndon, Virginia, February 18, 2008.

[Kee2005] M. Keeney, D. Cappelli, E. Kowalski, A. Moore, T. Shimeall, and S. Rogers. Insider Threat Study: Computer System Sabotage in Critical Infrastructure Sectors. Carnegie Mellon University, May 2005.

[Moo2008] Andrew P. Moore, Dawn M. Cappelli, and Randall F. Trzeciak. The "Big Picture" of IT Insider Sabotage Across U.S. Critical Infrastructures. Technical Report CMU/SEI-2008-TR-009, Carnegie Mellon University, 2008. This report describes the MERIT model.

[Neu2008] Peter G. Neumann. Combatting insider misuse with relevance to integrity and accountability in elections and other applications. Dagstuhl Workshop on Insider Threats, July 2008. This position paper expands on the fuzziness of trustworthiness perimeters and the context-dependent nature of the concept of insiders.

[Noo+2008] Thomas Noonan and Edmund Archuleta. The Insider Threat to Critical Infrastructures. National Infrastructure Advisory Council, April 2008.

[Pfl2003] Charles P. Pfleeger and Shari L. Pfleeger. Security in Computing, Third Edition. Prentice Hall, Upper Saddle River, New Jersey, 2003.

[Ran2004] M.R. Randazzo, D. Cappelli, M. Keeney, and A. Moore. Insider Threat Study: Illicit Cyber Activity in the Banking and Finance Sector. Carnegie Mellon University, August 2004.

[Sto+2008] Salvatore Stolfo, Steven Bellovin, Shlomo Hershkop, Angelos Keromytis, Sara Sinclair, and Sean Smith (editors). Insider Attack and Cyber Security: Beyond the Hacker. Springer, New York, 2008.


Current Hard Problems in INFOSEC Research
5. Combatting Malware and Botnets

What is the problem being addressed?

Malware refers to a broad class of attack software or hardware that is loaded on
machines, typically without the knowledge of the legitimate owner, that compro-
mises the machine to the benefit of an adversary. Present classes of malware include
viruses, worms, Trojan horses, spyware, and bot executables. Spyware is a class of
malware used to surreptitiously track and/or transmit data to an unauthorized third
party. Bots (short for “robots”) are malware programs that are covertly installed
on a targeted system, allowing an unauthorized user to remotely control the com-
promised computer for a variety of malicious purposes [GAO2007]. Botnets are
networks of machines that have been compromised by bot malware so that they
are under the control of an adversary.

Malware infects systems via many vectors, including propagation from infected
machines, tricking users into opening tainted files, or luring users to visit malware-
propagating websites. Malware may load itself onto a USB drive inserted into
an infected device and then infect every other system into which that device is
subsequently inserted. Malware may propagate from devices and equipment that
contain embedded systems and computational logic. An example would be infected
test equipment at a factory that infects the units under test. In short, malware can
be inserted at any point in the system life cycle. The World Wide Web has become
a major vector for malware propagation. In particular, malware can be remotely
injected into otherwise legitimate websites, where it can subsequently infect visitors
to those supposedly “trusted” sites.

There are numerous examples of malware that is not specific to a particular operat-
ing system or even class of device. Malware has been found on external devices (for
example, digital picture frames and hard drives) and may be deliberately coded into
systems (life cycle attacks). Increasingly intelligent household appliances are vulner-
able, as exemplified by news of a potential attack on a high-end espresso machine
[Thu2008]. Patching of these appliances may be difficult or impossible. Table 5.1
summarizes malware propagation mechanisms.

Potentially victimized systems include end user systems, servers, network infra-
structure devices such as routers and switches, and process control systems such as
Supervisory Control and Data Acquisition (SCADA).

A related policy issue is that reasonable people may disagree on what is legitimate
commercial activity versus malware. In addition, ostensibly legal software utilities
(for example, for digital rights management [DRM]) may have unintended conse-
quences that mimic the effects of malware [Sch2005, Hal2006].

It is likely that miscreants will develop new infection mechanisms in the future, either through discovery of new security gaps in current systems or through new exploits that arise as new communication and computation paradigms emerge.

The technical challenges are, wherever possible, to do the following:

- Avoid allowing malware onto a platform.
- Detect malware that has been installed.
- Limit the damage malware can do once it has installed itself on a platform.
- Operate securely and effectively in the presence of malware.
- Determine the level of risk based on indications of detected malware.
- Remove malware once it has been installed (remediation), and monitor and identify its source (attribution). (Remediation may sometimes be purposefully delayed on carefully monitored systems until attribution can be accomplished. Honeypots can also be useful in this regard.)

The NSA/ODNI Workshop on Computational Cyberdefense in Compromised Environments, Santa Fe, NM, August 2009, was an example of a step in this direction.

What are the potential threats?

Malware has significant impact in many aspects of the information age and underlies many of the topics discussed elsewhere in this document. Impacts can be single-host to networkwide, nuisance to costly to catastrophic. Negative consequences include degraded system performance and data destruction or modification. Spyware permits adversaries to log user actions (to steal user credentials and facilitate identity theft, for example), while bot malware enables an adversary to build large networks of compromised machines and amplify an adversary's digital firepower. Negative consequences of botnets and malware include spam, distributed denials of service (DDoSs), eavesdropping on traffic (sniffing), click fraud, loss of system stability, loss of confidentiality, loss of data integrity, and loss of access to network resources (for example, being identified as a bot node and then blocked by one's ISP or network administrator, effectively a DoS inflicted by one victim on another). An increasing number of websites (such as popular social networking systems, web forums, and mashups) permit user-generated content, which, if not properly checked, can allow attackers to insert rogue content that is then potentially downloaded by many users.

Beyond its nuisance impact, malware can have serious economic and national security consequences. Malware can enable adversary control of critical computing resources, which in turn may lead, for example, to information compromise, disruption and destabilization of infrastructure systems ("denial of control"), and manipulation of financial markets.

TABLE 5.1: Malware Propagation Mechanisms

Malware Propagation Mechanism | Examples
Life cycle | From the developer, either deliberate or through the use of infected development kits.
Scan and exploit | Numerous propagating worms. May propagate without requiring action on the part of the user.
Compromised devices | Infected USB tokens, CDs/DVDs, picture frames, etc.
Tainted file | E-mail attachment.
Web | Rogue website induces user to download tainted files. (Note: Newer malware may infect victims' systems when they merely visit the rogue site, or by redirecting them to an infected site via cross-site scripting, for example.)

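The scan-and-exploit mechanism in Table 5.1 is commonly modeled epidemiologically. The minimal random-scanning worm simulation below (all parameters, including the 2% vulnerable-host density, are illustrative) exhibits the characteristic slow start, exponential growth, and saturation of such outbreaks:

```python
import random

def simulate_worm(address_space=100_000, vulnerable=2_000,
                  scans_per_tick=10, ticks=50, seed=1):
    """Random-scanning worm: each infected host probes random addresses
    every tick; a probe that hits a vulnerable, uninfected host infects it."""
    rng = random.Random(seed)
    vulnerable_hosts = set(range(vulnerable))  # first N addresses vulnerable
    infected = {0}                             # patient zero
    history = [len(infected)]
    for _ in range(ticks):
        newly_infected = set()
        for _host in infected:
            for _ in range(scans_per_tick):
                probe = rng.randrange(address_space)
                if probe in vulnerable_hosts and probe not in infected:
                    newly_infected.add(probe)
        infected |= newly_infected
        history.append(len(infected))
    return history
```

Because growth compounds with each newly infected scanner, defenses that merely slow scanning (rate limiting, quarantine of flagged hosts) can markedly delay saturation, which is one rationale for the detection and quarantine needs discussed in this section.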

Malware can be particularly damaging to elements of the network infrastructure. Attacks against the Domain Name System (DNS), for example, could direct traffic to rogue sites and enable a wide variety of man-in-the-middle and denial-of-service attacks. Successful attacks against DNS allow an adversary to intercept and redirect traffic, for example to rogue or spoofed servers. In addition to redirection to rogue servers, there is also the opportunity for selective or timed denial-of-service attacks; it may be easier to drop a site from DNS than to deny availability by flooding its connection. These concerns underlay the recent mandate to implement DNSSEC for the .gov domain and recommendations to implement DNSSEC for DNS root servers.

Adversaries buy and sell exploits and lease botnets in an active adversary market [Fra2007]. These botnets can be used for massive distributed attacks, spam distribution, and theft of sensitive data, such as security credentials, financial information, and company proprietary information, through sophisticated phishing attacks. The use of botnets makes attribution to the ultimate perpetrator extremely difficult. Botnets provide the adversary with vast resources of digital firepower and the potential to carry out surveillance on sensitive systems, among other threats.

Malware propagation is usually discussed in the context of enterprise and home computing. However, it also has the potential to affect control systems and other infrastructure systems. For example, the alarm systems at the Davis-Besse nuclear plant in Ohio were infected by the Slammer worm in 2003, even though these systems were supposedly immune to such an attack (the plant was not online at the time) [SF2003]. Propagating malware may have exacerbated the impact of the 2003 blackout in the northeastern United States and slowed the recovery from it. It is reasonable to assume that malware authors will target embedded systems and emerging initiatives, such as the Advanced Metering Infrastructure (AMI) for electric power.

There is also the impact associated with remediating compromised machines. From an ISP's point of view, the biggest impacts include dealing with customer support calls, purchasing and distributing antivirus (A/V) software, and minimizing customer churn. For some high-consequence government applications, an infection may even necessitate replacement of system components/hardware.

The potential of malware to compromise confidentiality, integrity, and availability of the Internet and other critical information infrastructures is another serious concern. A real-world example would be the attacks on Estonia's cyber infrastructure via a distributed botnet in the spring of 2007 [IW2007]. That incident raised the issue of whether "cyberwar" is covered under NATO's collective self-defense mission. In the absence of robust attribution, the question remains moot. There were reports of a cyber dimension in the August 2008 conflict in the nation of Georgia, but the cyber attacks were apparently limited to denials of service against Georgian government websites and did not target cyberinfrastructure [Ant2008]. A recent malware-du-jour is Conficker, which spread initially primarily through systems that had not been upgraded with security patches, and has subsequently reappeared periodically in increasingly sophisticated versions.

Who are the potential beneficiaries? What are their respective needs?

Malware potentially affects anyone who uses a computer or other information system. Malware remediation (cleaning infected machines, for example) is difficult in the case of professionally administered systems and beyond the technical capability of many private citizens and small office/home office (SOHO) users. Rapid, scalable, usable, and inexpensive remediation may be the most important near-term need in this topic area. Improved detection and quarantine of infected systems are also needed, as discussed below. Beneficiaries, challenges, and needs are summarized in Table 5.2.

The law enforcement and DoD communities are particularly interested in attribution, which, as noted above, is currently difficult.

What is the current state of the practice?

Deployed solutions by commercial antivirus and intrusion detection system/intrusion prevention system (IDS/IPS) vendors, as well as the open-source community, attempt to detect or prevent an incoming infection via a variety of vectors. A/V removal of detected malware and system reboot are currently the primary cleanup mechanisms. The fundamental challenge to this approach is that miscreants can release repacked and/or modified malware continually,


TABLE 5.2: Beneficiaries, Challenges, and Needs

Beneficiaries Challenges Needs

Users Under attack from multiple malware User-friendly prevention, detection,
vectors; Systems not professionally containment, and remediation of malware
Administrators Protect critical systems, maintain continuity, New detection paradigms, robust
enterprise-scale remediation in face of remediation, robust distribution of
explosive growth in malware variants prevention and patches
Infrastructure Systems Prevent accidental infection [SF 2003], Similar to administrator needs, but often
address the growing challenge of targeted with special constraints of legacy systems
infection and the inability to patch and reboot at
arbitrary times
ISPs Provide continuity of service, deal with Defenses against propagating attacks and
malware on more massive scale than botnets; progress in the malware area has
administrators face potential immediate impact in alleviation of
these consequences
Law Enforcement Counter growing use of malware and Robust attribution, advances in forensics
botnets for criminal fraud and data and
identity theft
Government and DoD Growing infection of defense systems, Share the needs of administrators, ISPs, and
such as the Welchia intrusion into the Navy law enforcement
Marine Corps Intranet (NMCI) [Messmer
2003]. More recently, there have been
reports of malware engineered specifically
to target defense systems [LATimes08]

while new A/V signatures take time to produce, test, and distribute. In addition, it takes time for the user community to develop, test, and deploy patches for the underlying vulnerability that the malware is exploiting. Furthermore, the malware developers can test their software against the latest A/V versions.

Research in malware detection and prevention is ongoing; see, for example, the Cyber-Threat Analytics project [SRI2009]. Also worth noting is the Anti-Phishing Working Group (APWG).

Web-based A/V services have entered the market, some offering a service whereby a security professional can submit a suspicious executable to see whether it is identified as malicious by current tools. This mechanism most likely functions also as a testbed for malware developers (VirusTotal) [Vir].

The U.S. National Institute of Standards and Technology (NIST) Security Content Automation Protocol (SCAP) is a method for using specific standards to enable automated vulnerability management, measurement, and policy compliance evaluation.

Vendors of operating systems and applications have developed mechanisms for online updating and patching software for bugs, including bugs that affect security. Other defenses include antispyware, whitelists of trusted websites and machines, and reputation mechanisms.

Current detection and remediation approaches are losing ground, because it is relatively easy for an adversary (whether sophisticated or not) to alter malware to evade most existing detection approaches. Given trends in malware evolution, existing approaches


(such as A/V software and system patching) are becoming less effective. For example, malware writers have evolved strategies such as polymorphism, packing, and encryption to hide their signature from existing A/V software. There is also a window of vulnerability between the discovery of a new malware variant and subsequent system patches and A/V updates. Further, malware authors also strive to disable or subvert existing A/V software once their malware has a foothold on the target system. (This is the case with a later version of Conficker, for example.) A/V software may itself be vulnerable to life cycle attacks that subvert it prior to installation. Patching is a necessary system defense that also has drawbacks. For example, a patch can be reverse engineered by the adversary to find the original vulnerability, which may allow malware writers to refine their attacks against unpatched systems. Much can be learned from recent experiences with successive versions of Conficker.

Specifically with respect to identity theft, which is one potential consequence of malware but may be perpetrated by other means, there is an emerging commercial market in identity theft insurance and remediation. This implies that some firms believe they have adequate metrics to quantify risk in this case.

What is the status of current research?

There is considerable activity in malware detection, capture, analysis, and defense. Major approaches include virtualization (detect/contain/capture within a virtualized environment on a particular host) [Vra2005] and honeynets (network environments, partially virtual, deployed on unused address space, that interact with malware in such a way as to capture a copy to enable further analysis) [SRI2009]. Malware is increasingly engineered to detect virtual and honeynet environments and change its behavior in response. There is industry research advancing virtual machines to the Trusted Platform Module (TPM) and hypervisor technology in hardware and software, as well as in cleanup/remediation (technically possible to do remotely in some cases, but with unclear legal and policy implications if the system owner has not given prior permission). The Department of Homeland Security has funded ongoing research in cross-domain attack correlation and botnet detection and mitigation [CAT2009]. Analysis techniques include static and dynamic analysis methods from traditional computer science.

There is considerable research into open-source IDS (SNORT and Bro) along the lines of expanding the signature base and defending these systems against adversarial intentions. Recent research has considered automatic signature generation from common byte sequences in suspicious packet payloads [Kim2004] as a countermeasure to polymorphic malware.
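As a simplified illustration of the common-byte-sequence idea (the extraction method and payloads here are invented for the sketch, not drawn from [Kim2004]), a defender can extract the byte substrings shared by a set of suspicious payloads and treat the maximal ones as candidate signatures:

```python
def common_substrings(payloads, min_len=4):
    """Candidate signatures: substrings of the first payload (at least
    min_len bytes) that appear verbatim in every other payload."""
    first = payloads[0]
    found = set()
    for i in range(len(first)):
        for j in range(i + min_len, len(first) + 1):
            sub = first[i:j]
            if all(sub in p for p in payloads[1:]):
                found.add(sub)
    # Keep only maximal candidates (not contained in a longer one).
    return {s for s in found
            if not any(s != t and s in t for t in found)}

# Hypothetical payloads captured from three suspicious flows
flows = [b"xx\x90\x90EXPLOITzz", b"ab\x90\x90EXPLOITcd", b"\x90\x90EXPLOIT!!"]
print(common_substrings(flows))  # {b'\x90\x90EXPLOIT'}
```

Polymorphic encoding of the payload removes exactly these invariant bytes, which is the arms race the surrounding text describes.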
Significant research has been done into analysis of execution traces and similar characteristics of malware on an infected host, but we have a poor understanding of the network dimensions of the malware problem. Certain network behaviors have been observed to be important precursors to, or indicators of, malware infection. For example, DNS zone changes may predict a spam attack. Fast flux of DNS registrations (as in Conficker) may indicate that particular hosts are part of the command and control (C2) network for a large botnet. Encrypted traffic on some network ports may indicate C2 traffic to a botnet client on a given host.
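As a toy sketch of one such network indicator (the thresholds and lookup data are invented for illustration), fast flux can be scored from repeated lookups of the same name: many distinct resolved addresses combined with consistently short TTLs:

```python
def fast_flux_signal(lookups, ttl_limit=300, ip_limit=5):
    """Score repeated DNS answers for one domain.
    lookups: list of (ttl_seconds, [resolved IPs]) observed over time."""
    ips = set()
    short_ttls = 0
    for ttl, answers in lookups:
        ips.update(answers)
        short_ttls += ttl <= ttl_limit
    suspicious = len(ips) > ip_limit and short_ttls == len(lookups)
    return len(ips), suspicious

# Hypothetical lookups of one domain a few minutes apart
lookups = [
    (120, ["203.0.113.5", "198.51.100.7"]),
    (150, ["192.0.2.9", "203.0.113.44"]),
    (90,  ["198.51.100.23", "192.0.2.61"]),
]
print(fast_flux_signal(lookups))  # (6, True): churn typical of a flux network
```

A stable site resolving to the same few addresses with long TTLs would score low; detectors in the literature typically also weigh factors such as the spread of the addresses across networks.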
Virtualization and honeynets still provide much potential in malware detection, analysis, and response, at least for the near and medium terms. For honeynets to continue to be useful, research must address issues such as:

- What features of honeynets do adversaries look for to identify them as honeynets?
- What is the ratio of "enter and retreat" to "enter and attack" in honeynets?
- How does what is actually observed in a honeynet compare with known "script kiddie" attacks and targeted malware activity in the real world?

DARPA's Self-Regenerative Systems (SRS) program developed some technology around these techniques.

Artificial diversity is transparent to correct system use but diverse from the point of view of some exploits. This has been an elusive goal, but some modest progress has been made in the commercial and research sectors. Address space randomization is now included in many operating systems, and there has been some work in the general area of system obfuscation (equivalent functionality with diverse implementation)


[Sha2004], although it has some fundamental limitations.

Emerging approaches such as behavior-based detection and semantic malware descriptions have shown promise and are deployed in commercial A/V software. However, new techniques must be developed to keep pace with the development of malware.

FUTURE DIRECTIONS

On what categories can we subdivide this topic?

For this malware and botnets topic, prevent/protect/detect/analyze/react provides a reasonable framework (see Table 5.3). Protection and detection are supported by instrumented virtualization and sandboxing environments, as well as by inherently secure systems, applications, and protocols. Analysis consists of examination of captured malware (for example, harvested on a honeynet) by IT experts in order to develop effective defenses. Reaction is supported by cost-effective, secure remediation that can be implemented by non-IT professionals.

TABLE 5.3: Potential Approaches

Prevent
  Definition: Prevent the production and propagation of malware
  Potential Approaches: IDS/IPS, A/V, Virtualization, Inherently secure systems

Protect
  Definition: Protect systems from infection when malware is in the system's environment
  Potential Approaches: IPS, A/V, Inherently secure systems

Detect
  Definition: Detect malware as it propagates on networks, detect malware infections on specific systems
  Potential Approaches: IDS/IPS, A/V, Virtualization, Deceptive environments

Analyze
  Definition: Analyze malware's infection, propagation, and destructive mechanisms
  Potential Approaches: Static and dynamic analysis, Experimentation in large-scale secured testbeds

React
  Definition: Remediate a malware infection and identify mechanisms to prevent future outbreaks (links to the prevent category)
  Potential Approaches: Updated IDS/IPS and A/V, Inherently secure systems, Thin client, Secure cloud computing paradigm

What are the major research gaps?

A/V and IDS/IPS approaches are becoming less effective because malware is becoming increasingly sophisticated, and at any rate the user base (particularly consumer systems) does not keep A/V up to date. Malware polymorphism is outpacing signature generation and distribution in A/V and IDS/IPS.

Current research initiatives do not adequately address the increasing sophistication and stealth of malware, including the encryption and packing of the malicious code itself, as well as encrypted command and control channels and fast-flux DNS for botnets [Sha2008, Hol2008]. Broadly speaking, research should better understand the agility and polymorphism of malware. Automatic detection of the command and control structure of a malware sample is a significant challenge.

We do not have an adequate taxonomy of malware and botnets. It has been observed that many examples of malware are derived from earlier examples, but this avenue has not been explored as far as necessary. Progress in this area may enable, for example, defenses against general classes of malware, including as-yet unseen variants of current exemplars. A well-understood taxonomy may also support and improve attribution.

The attacker-defender relation is currently asymmetric. An attacker who develops an exploit for a particular system type will find large numbers of nearly identical exemplars of that type. Thus, it is desirable to force the adversary to handcraft exploits to individual hosts, so that the cost of developing


malware to compromise a large number of machines is raised significantly. Artificial diversity can address the growing asymmetry of the attacker-defender relation.

For hosts, the defenses against malware (e.g., A/V software, Windows Update, and so on) are typically part of or extensions to operating systems (OSs). This fact allows malware to easily target and disable those host-based defenses. A summary of the gaps is outlined in Table 5.4.

TABLE 5.4: Gaps and Research Initiatives

Gap: Inadequate defenses against e-mail and web malware
  Research Initiatives: Human factors analysis to resist social engineering (tools, interfaces, education), Robust …
  Benefit: More secure present and future e-commerce
  Time Frame: Near

Gap: Escape from virtual machines
  Research Initiatives: TPM low in the hardware/software stack
  Benefit: Prolongs usefulness of virtualization as a defensive …
  Time Frame: Near

Gap: Difficulty of remediation
  Research Initiatives: Thin client, Automatic remediation
  Benefit: Fast, cost-effective recovery from attack
  Time Frame: Near

Gap: Inadequate test environments
  Research Initiatives: Internet-scale emulation
  Benefit: Safe observation of malware spread dynamics, better containment strategies
  Time Frame: Near

Gap: Attacker/defender asymmetry
  Research Initiatives: Intentional diversity, Inherently monitorable systems
  Benefit: Attacker must craft attack for a large number of platforms
  Time Frame: Medium/Long

Gap: No attack tolerance
  Research Initiatives: Attack containment, Safe sandboxing, Intentional diversity
  Benefit: Correct operation in the presence of "subclinical" malware infection
  Time Frame: Medium

Gap: Detection approaches losing the battle of scale
  Research Initiatives: Inherently monitorable systems, Robust software whitelisting, Model-based monitoring of correct software behavior
  Benefit: Less space for attacker to conceal activity; detection that is generalized and scalable
  Time Frame: Medium/Long

Gap: Inadequately understood threat
  Research Initiatives: Analysis of adversary markets, Penetration of adversary communities, Containing damage of botnets while …
  Benefit: Strategic view enables defensive community to take the upper hand
  Time Frame: Long

What are some exemplary problems for R&D on this topic?

Robust Security Against OS Exploits: Although binary-exploit malware targeting the OS is still important and worthy of incremental near-term investment, malware increasingly targets browsers and e-mail through social engineering and other mechanisms.

Protect Users from Deceptive Infections: At present, through social engineering, complexity of security controls, and rogue content injection, users can be tricked into interacting with adversary systems while thinking they are performing valid transactions, such as online banking. Research in this area should advance user education and awareness and make security controls more usable, particularly in browsers. Search engine manipulation causes the victim to go to the malware (e.g., at an infected website) rather than the malware's targeting the user (e.g., via phishing e-mail). Server-side attacks in the form of Structured Query Language


(SQL) injection, cross-site scripting, and other methods are increasingly common ways to infect clients accessing compromised websites.

Internet-scale emulation could provide game-changing breakthroughs in malware research. Being able to observe malware (specifically botnets and worms) at Internet scales without placing the real Internet in jeopardy may help identify weaknesses in the malware code and how it spreads or reacts to outside stimuli. Additionally, characteristics observed at the macro level may give us clues as to how to detect and respond to malware at the micro level. High-fidelity large-scale emulation is an important enabling capability for many of the other initiatives discussed below.

The broad area of virtualization and honeynets will provide much value in the near and medium terms, with respect to protection and detection approaches. Malware is becoming more adaptive, in terms of polymorphism and evasion techniques. The latter might be used to a defensive advantage. If malware is designed to be dormant when it detects that it is in a virtual machine or in a honeynet environment, active deception on the part of the defender may prove useful: making production systems look like virtual systems and production networks look like honeynets, and vice versa; changing virtual and real systems very rapidly; or even using an analog to a "screen saver" that toggles a computer from real to honeynet when the user is not actively using it. The general research question is how "deception" can best be leveraged by defenders.

There are concerns about the limitations of these approaches. Even a correctly functioning hypervisor is inadequate in the case of some flaws in the guest OS, for example. Also, highly sophisticated malware is likely to be able to escape current-generation virtual environments. Improved hardware architecture access mechanisms will maintain the effectiveness of these approaches to some degree. However, additional research is needed on techniques that seize the strategic low ground within our computing systems and also separate the security functions from other functionality. The key insight is that our detection methods and instrumentation must reside lower in the hardware/software stack than the malware; otherwise, the malware controls the defenders' situational awareness, and the defenders have no chance. Recent research injecting vulnerabilities into hardware designs suggests disturbing possibilities for the future on this front.

Collaborative detection may involve privacy-preserving security information sharing across independent domains that may not have an established trust relationship. We may share malware samples, metadata of a sample, and experiences. A repository of active malware may accelerate research advances but raises security concerns in its own right, and access must be carefully controlled according to a policy that is difficult to define. Moreover, sharing malware may be illegal, depending on the business of the entity.

Collaborative detection supports an identified need in the situational understanding topic area. In particular, the detection, quarantine, and remediation of botnet assets is a major overlap between the research needs for malware and those of situational understanding (Section 8). Network-level defenses must come online to supplement host-level defenses. For example, we require better identification of bad traffic at the carrier level. This presents challenges in scale and speed.
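A minimal sketch of what a privacy-preserving sharing record might contain (the field names and pseudonymization scheme are invented for illustration): derived features travel between domains, while the raw sample and the reporter's identity stay home:

```python
import hashlib

def shareable_record(sample: bytes, reporting_org: str) -> dict:
    """Derive cross-domain shareable metadata from a malware sample."""
    return {
        "sha256": hashlib.sha256(sample).hexdigest(),        # exact-match identifier
        "size": len(sample),
        "byte_diversity": round(len(set(sample)) / 256, 3),  # crude packing/encryption hint
        "reporter": hashlib.sha256(reporting_org.encode()).hexdigest()[:8],  # pseudonym
    }

rec = shareable_record(b"\x90\x90fake-sample-bytes", "example-isp.net")
print(sorted(rec))  # ['byte_diversity', 'reporter', 'sha256', 'size']
```

Two domains can then match on the hash (or on fuzzier features) without revealing who observed what, one way to sidestep the missing trust relationship noted above.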
Thin-client technology has been proposed in the past. In this model, the user's machine is stateless, and all files and applications are distributed on some network (the terminology "in the cloud" is occasionally used, although there are also parallels with traditional mainframe computing). If we can make the distributed resources secure, and that is itself a big question, the attacker's options against user assets are greatly reduced, and remediation is merely a question of restarting. The long-term research challenges toward this secure cloud computing paradigm are securing the distributed resource base and making this base available to the authenticated and authorized user from any location, supported by a dedicated, integrated infrastructure.

Remediation of infected systems is extremely difficult, and it is arguably


impossible to assert that a previously infected system has in fact been thoroughly cleansed. In particular, systems may be infected with rootkits, which come in many forms, from user-level to kernel-level rootkits. More recently, hardware virtual machine (HVM) rootkits have been proposed, which load themselves into an existing operating system, transforming it into a guest OS controlled by the rootkit [Dai2006]. We require advances in remediation, built-in diagnostic instrumentation, and VM introspection that provides embedded digital forensics to deal with these threats.

Containment technology (which includes TPM approaches mentioned previously) is promising but needs further work. An interesting goal is to tolerate malware (for example, safely doing a trusted transaction from a potentially untrusted system). Another goal is to have a "safe sandbox" for critical transactions (in contrast to current sandboxing environments that typically seek to contain the malware in the sandbox). A final issue is whether large systems can achieve their goal while tolerating a residual level of ongoing compromise within their components and subsystems. Generally, the research agenda should recognize that malware is part of the environment, and secure operation in the presence of malware is essential.

Development of inherently secure, monitorable, and auditable systems has presented a significant challenge. In general, this is a medium- to long-term research area. Short-term work in trusted paths to all devices may reduce the risk of, for example, key-logging software. In the short term, we require advances in authenticated updates, eventually evolving systems that are immune to malware. Advances in this area relate to the scalable trustworthy systems topic in Section 1.

A longer-term research challenge is to develop systems, applications, and protocols that are inherently more secure against malware infection and also easier to monitor in a verifiable way (in effect, to reduce the space in which malware can hide within systems). In particular, hardware-based instrumentation that provides unbiased introspection and unimpeded control of COTS computing devices, while being unobservable by the malware, may help enable embedded forensics and intrinsically auditable systems.

Artificial diversity can take many forms: the code is different at each site, the location of code is different, system calls are randomized, or other data is changed. It may be worth researching (both in terms of practicality and economics) how to randomize instruction sets, operating systems, and libraries that are loaded from different system reboots. A difficult end goal would be to develop systems that function equivalently for correct usage but are unique from an attack standpoint, so an adversary must craft attacks for individual machines. Artificial diversity is just one approach to changing the attacker-defender asymmetry, and novel ideas are required.
sandbox). A final issue is whether large calls are randomized, or other data is In the near term, we are in a defensive
systems can achieve their goal while changed. It may be worth researching struggle, and R&D should continue
tolerating a residual level of ongoing (both in terms of practicality and eco- in the promising areas of virtualization
compromise within their components nomics) how to randomize instruction and honeynets. We require near-term
and subsystems. Generally, the research sets, operating systems, and libraries advances in remediation to address
agenda should recognize that malware that are loaded from different system the serious and increasing difficulty
is part of the environment, and secure reboots. A difficult end goal would be of malware cleanup, particularly on
operation in the presence of malware is to develop systems that function equiva- end-user systems. Research in the area
essential. lently for correct usage but are unique of attack attribution in the near and
from an attack standpoint, so an adver- medium terms can aid the policing that
Development of inherently secure, sary must craft attacks for individual is necessary on the Internet. Mecha-
monitorable, and auditable systems machines. Artificial diversity is just one nisms to share data from various kinds of
has presented a significant challenge. In approach to changing the attacker- malware attacks are currently lacking, as
general, this is a medium- to long-term defender asymmetry, and novel ideas well. The problems faced by researchers

in this domain range from privacy concerns and the legal aspects of data sharing to the sheer volume of data itself. Research in generating adequate metadata and provenance is required to overcome these hurdles.

Techniques to capture and analyze malware and propagate defenses faster are essential in order to contain epidemics. Longer-term research should focus on inherently secure, monitorable, and auditable systems. Threat analysis and economic analysis of adversary markets should be undertaken in pilot form in the near term, and pursued more vigorously if they are shown to be useful.

Measures of success

We require baseline measurements of the fraction of infected machines at any time; success would be a reduction in this fraction over time.

Some researchers currently track the emergence of malware. In this way, they are able to identify trends (for example, the number of new malware samples per month). A reversal of the upward trend in malware emergence would indicate success.

Time between malware capture and propagation of defense (or, perhaps more appropriately, implementation of the defense on formerly vulnerable systems) tracks progress in human and automated response time.

With reference to the repository, we may define a minimal set of exemplars that must be detected in order to claim effectiveness at some level.

We can define measures of success at a high level by answering the following questions and tracking the answers over time:

- How many machines do we know about that serve malware?
- What is the rate of emergence of new malware?
- Since spam is a primary botnet output, what fraction of e-mail is spam?
- What is the industry estimate of hosts serving malware?
- What is the trend in malware severity (on a notional continuum, say from nuisance to adware, spyware, bot capture)?
- What fraction of known attacks is successful, and what fraction is thwarted?

We may also consider cost-based measures (from the defender point of view), such as:

- What is our cost of searching for malware propagators?
- What is the cost to identify botnets and their bot command and control infrastructures?
- What is the cost to increase sharing of malware host lists?

Economic analysis of adversary markets may allow definition of metrics as to the effectiveness of particular defenses.
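The trend questions above reduce to simple time-series bookkeeping; a minimal sketch, with invented monthly figures standing in for real measurements:

```python
def trend(series):
    """Direction of a measure over consecutive periods:
    +1 rising, -1 falling, 0 flat overall."""
    deltas = [b - a for a, b in zip(series, series[1:])]
    total = sum(deltas)
    return (total > 0) - (total < 0)

# Hypothetical monthly counts of new malware samples and spam fraction
new_samples = [12000, 15000, 19000, 24000]
spam_fraction = [0.92, 0.90, 0.89, 0.87]

print(trend(new_samples))    # 1: emergence still rising, no success yet
print(trend(spam_fraction))  # -1: spam share falling, a positive sign
```

Success by the document's own criteria would show up as a sign flip in the first series and a sustained decline in the second.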
It would be beneficial to have reliable metrics that estimate the vulnerability of particular systems to corruption by malware, and how well they are able to withstand other kinds of malware-enabled attacks, such as DDoS attacks. Similarly, metrics that suggest the benefits that will accrue with the use of particular malware prevention or remediation strategies would be helpful.

What needs to be in place for test and evaluation?

Beyond reverse engineering of malware, the most effective studies of malicious code have taken place on network testbeds. These testbeds have included simple virtual machines "networked" on an analyst's computer, testbeds consisting of tens or hundreds of real (nonvirtualized) nodes, such as DETER [DET], and simulated networks created within network simulation tools. The research community has yet to approach studies of malware in Internet-scale emulated environments. The infrastructure and tools do not currently exist to build emulation environments on the order of 10,000,000 nodes or more.

As malware sophistication improves to include detection of virtual environments, the realism of the virtualization environment (for example, virtual machine or honeynet) testbed presents a challenge.

Tools and environments to study malware need to evolve as the malware evolves. In particular, the community


currently does not have testbeds for hardware/firmware-based malware.

The tools and infrastructure required to adequately harden a test environment are research problems in their own right. Testbeds to study malware are specific to this application. The testbed should not be discernible as a test environment, even to sophisticated malware.

The community requires an up-to-date, reliably curated malware repository for research purposes. Limited repositories exist at present, but they are not available to the research community. Another desirable resource would be a shared honeynet, which would allow learning malware behavior. Current honeynets are run mostly on an ad hoc basis by individual groups. Legal and regulatory issues inhibit meaningful sharing, however.

Internet-scale emulation would permit realistic testing of defenses and their dynamic interaction with malware outbreaks. Observation at this level would provide a view of worm and botnet spread and operation never seen before.

To what extent can we test real systems?

It is possible to test defenses for efficacy on real systems. Experiments can be conceived in which real and emulation networks are exposed to public networks, with and without particular defenses. However, rapid automated configuration and propagation of defenses must first be thoroughly demonstrated on emulated systems.


[Ant2008] A.M. Antonopoulos. Georgia cyberwar overblown. Network World, August 19, 2008.

[CAT2009] Conference for Homeland Security 2009 (CATCH ’09), Cybersecurity Applications and Technology, March 3–4, 2009. The IEEE proceedings of this conference include relevant papers on detection and mitigation of botnets, as well as correlation and collaboration in cross-domain attacks, from the University of Michigan and Georgia Tech, as well as Endeavor, HBGary, Milcord, and Sonalyst (among others).

[Dai2006] Dino Dai Zovi. Vitriol: Hardware virtualization rootkits. In Proceedings of the Black Hat USA Conference, 2006.

[DET] Cyber-DEfense Technology Experimental Research laboratory Testbed (DETERlab).

[Fra2007] J. Franklin, V. Paxson, A. Perrig, and S. Savage. An inquiry into the nature and causes of the wealth of Internet miscreants. In Proceedings of the ACM Computer and Communications Security Conference, pp. 375-388, October 2007.

[GAO2007] CYBERCRIME: Public and Private Entities Face Challenges in Addressing Cyber Threats. Report GAO-07-705, U.S. Government Accountability Office, Washington, D.C., July 2007.

[Hal2006] J.A. Halderman and E.W. Felten. Lessons from the Sony CD DRM episode. In Proceedings of the 15th USENIX Security Symposium, August 2006.

[Hol2008] T. Holz, C. Gorecki, K. Rieck, and F. Freiling. Measuring and detecting fast-flux service networks. In Proceedings of the 15th Annual Network & Distributed System Security (NDSS) Symposium, February 2008.

[Kim2004] Hyang-Ah Kim and Brad Karp. Autograph: Toward automated, distributed worm signature detection. In Proceedings of the 13th USENIX Security Symposium, August 2004.

[IW2007] L. Greenemeier. Estonian attacks raise concern over cyber ‘nuclear winter.’ Information Week, May 24, 2007.

[LAT2008] J.E. Barnes. Cyber-attack on Defense Department computers raises concerns. Los Angeles Times, November 28, 2008.

[Mes2003] Ellen Messmer. Welchia worm nails Navy Marine Corps. Network World Fusion, August 19, 2003.

[Pou2003] Kevin Poulsen. Slammer worm crashed Ohio nuke plant network. SecurityFocus, August 19, 2003.

[Sha2004] H. Shacham, M. Page, B. Pfaff, E.-J. Goh, N. Modadugu, and D. Boneh. On the effectiveness of address-space randomization. In Proceedings of the 11th ACM Computer and Communications Security Conference, Washington, D.C., pp. 298-307, 2004.

[Sha2008] M. Sharif, V. Yegneswaran, H. Saidi, P. Porras, and W. Lee. Eureka: A framework for enabling static malware analysis. In Proceedings of the 13th European Symposium on Research in Computer Security (ESORICS), Malaga, Spain, pp. 481-500, October 2008.

[Sch2005] Bruce Schneier. Real story of the rogue rootkit. Wired, November 17, 2005.

[SRI2009] SRI Cyber-Threat Analytics and Malware Threat Center. For example, see analyses of Conficker.

[Thu2008] R. Thurston. Coffee drinkers in peril after espresso overspill attack. SC Magazine, June 20, 2008.

[Vir] VirusTotal.

[Vra2005] M. Vrable, J. Ma, J. Chen, D. Moore, E. Vandekieft, A. Snoeren, G. Voelker, and S. Savage. Scalability, fidelity and containment in the Potemkin virtual honeyfarm. ACM SIGOPS Operating Systems Review, 39(5):148-162, December 2005 (SOSP ’05).


Current Hard Problems in INFOSEC Research
6. Global-Scale Identity Management

What is the problem being addressed?

Global-scale identity management concerns identifying and authenticating entities
such as people, hardware devices, distributed sensors and actuators, and software
applications when accessing critical information technology (IT) systems from
anywhere. The term global-scale is intended to emphasize the pervasive nature
of identities and implies the existence of identities in federated systems that may
be beyond the control of any single organization. This does not imply universal
access or a single identity for all purposes, which would be inherently dangerous.
In this context, global-scale identity management encompasses the establishment
of identities, management of credentials, oversight and accountability, scalable
revocation, establishment and enforcement of relevant policies, and resolution of
potential conflicts. To whatever extent it can be automated, it must be administra-
tively manageable and psychologically acceptable to users. It must, of course, also
be embedded in trustworthy systems and be integrally related to authentication
mechanisms and authorization systems, such as access controls. It also necessarily
involves the trustworthy binding of identities and credentials. It is much broader
than just identifying known individuals. It must scale to enormous numbers of
users, computer systems, hardware platforms and components, computer programs
and processes, and other entities.

Global-scale identity management is aimed specifically at government and commercial organizations with diverse interorganizational relationships that today are
hampered by the lack of trustworthy credentials for accessing shared resources. In
such environments, credentials tend to proliferate in unmanageable ways. Identity
management within single organizations can benefit from—and needs to be com-
patible with—the global-scale problem.

Our concern here is mainly the IT-oriented aspects of the broad problems of
identity and credential management, including authentication, authorization, and
accountability. However, we recognize that there will be many trade-offs and privacy
implications that will affect identity management. In particular, global-scale identity
management may require not only advances in technology, but also open standards,
social norms, legal frameworks, and policies for the creation, use, maintenance,
and audit of identities and privilege information (e.g., rights or authorizations).
Clearly, managing and coordinating people and other entities on a global scale
also raises many issues relating to international laws and regulations that must be
considered. In addition, the question of when identifying information must be
provided is fundamentally a policy question that can and should be considered. In
all likelihood, any acceptable concept of global identity management will need to
incorporate policies governing release of identifying information. Overall, countless
critical systems and services require authenticated authorization for access and use,

and global-scale identity management will be a critical enabler of future IT capabilities. Furthermore, it is essential to be able to authorize on the basis of attributes other than merely supposed identities. Identity management needs to be fully integrated with all the systems into which it is embedded.

Identity management systems must enable a suite of capabilities. These include control and management of credentials used to authenticate one entity to another, and authorization of an entity to adopt a specific role and assert properties, characteristics, or attributes of entities performing in a role. Global-scale identity management must also support nonrepudiation mechanisms and policies; dynamic management of identities, roles, and properties; and revocation of properties, roles, and identity credentials. Identity management systems must provide mechanisms for two-way assertions and authentication handshakes building mutual trust among mutually suspicious parties. All the identities and associated assertions and credentials must be machine and human understandable, so that all parties are aware of the identity interactions and relationships between them (e.g., what these credentials are, who issued them, who has used them, and who has seen them). The lifetimes of credentials may exceed human lifetimes in some cases, which implies that prevention of and recovery from losses are particularly difficult problems.

What are the potential threats?

Identification and authentication (I&A) systems are being attacked on many fronts by a wide range of potential attackers with diverse motivations, within large-scale organizations and across multiple organizations. Insider and outsider misuses are commonplace. Because of the lack of adequate identity management, it is often extremely difficult to identify the misusers. For example, phishing attacks have become a pervasive problem for which identifying the sources and the legitimacy of the phishers and rendering them ineffective where possible are obvious needs.

Identity-related threats exist throughout the development cycle and the global supply chain, but the runtime threats are generally predominant. Misuse of identities by people and misuse of flawed authentication by remote sites and compromised computers (e.g., zombies) are common. The Internet itself is a source of numerous collateral threats, including coordinated, widespread denial-of-service attacks, such as repeated failed logins that result in disabling access by legitimate users. Various threats arise when single-sign-on authentication of identities occurs across boundaries of comparable trustworthiness. This is likely to be a significant concern in highly distributed, widespread system environments. Additional threats arise with respect to the misuse of identities and authentication, especially in the presence of systems that are not adequately trustworthy. Even where systems have the potential for distinguishing among different roles associated with different individuals and where fine-grained access controls can be used, operational considerations and inadequate user awareness can tend to subvert the intended controls. In particular, threats are frequently aimed at violations of integrity, confidentiality, and system survivability, as well as denial-of-service attacks.

Threats described in other topic areas can also affect global-scale identity management, most notably defects in trustworthy scalable systems. In addition, defects in global-scale identity management can have negative impacts on provenance and attack attribution.

Who are the potential beneficiaries? What are their respective needs?

Governmental agencies, corporations, institutions, individuals, and particularly the financial communities [FSSCC 2008] would benefit enormously from the existence of pervasive approaches to global identity management, with greater convenience, reduction of administrative costs, and possibilities for better oversight. Users could benefit from the decreased likelihood of impersonation, identity and credential fraud, and untraceable misuse. Although the needs of different individuals and different organizations might differ somewhat, significant research in this area would have widespread benefits for all of them.

What is the current state of the practice?

There are many current approaches to identity management. Many of these are not yet fully interoperable with other required services, not scalable, only single-use, or limited in other ways. They do, however, collectively exhibit pointwise examples that can lead toward enabling a global-scale identity
management framework. Examples of existing approaches include the following:

• Personal ID and authentication. Shibboleth is a standards-based, open-source software system for single sign-on across multiple websites. (See http://shibboleth. ) Also of interest are Card Space, Liberty Alliance, SAML, and InCommon (all of which are federated approaches, in active use, undergoing further development, and evolving in the face of various problems with security, privacy, and usability).

• The Homeland Security Presidential Directive 12 (HSPD-12) calls for a common identification standard for federal employees and contractors. An example of a solution in compliance with HSPD-12 is the DoD Common Access Card (CAC).

Various other approaches such as the following could play a role but are not by themselves global-scale identity solutions. Nevertheless, they might be usefully considered. Open ID provides transitive authentication, but only minimal identification; however, trust is inherently not transitive, and malicious misuse is not addressed. Medical ID is intended to be HIPAA compliant. Enterprise Physical Access is representative of token-based or identity-based physical access control systems. Stateless identity and authentication approaches include LPWA, the Lucent Personalized Web Assistant. OTP/VeriSign is a symmetric key scheme. Biometrics could potentially be useful as part of the authentication process, but most biometric technologies currently have various potential implementation vulnerabilities, such as fingerprint readers being fooled by fake gelatin fingers. Credit cards, debit cards, smart cards, user-card-system authentication, and chip and PIN have all experienced some vulnerabilities and various misuses. Per-message techniques such as DKIM (DomainKeys Identified Mail), authenticating e-mail messages, PGP, and S/MIME are also worth considering—especially for their limitations and development histories.

It is desirable to learn from the relative shortcomings of all these approaches and any experience that might be gained from their deployment. However, for the most part, these sundry existing identity management concepts do not connect well with each other. Forming appropriate and effective, semantically meaningful connections between disparate identity management systems presents a significant challenge. Given a future with many competing and cooperating identity management systems, we must develop a system of assurance for the exchange of identity credentials across identity management systems, and principled means to combine information from multiple identity management systems as input to policy-driven authorization decisions. The threats noted above are poorly addressed today.

What is the status of current research?

Currently, there are several major initiatives involving large-scale identity management, including a government-wide E-Authentication initiative, the Defense Department's Common Access Card, and public key infrastructure for the Global Information Grid. These are not research directions, but exhibit many problems that can motivate future research. However, none of these can scale to the levels required without substantial problems regarding federation of certification authorities and delays in handling revoked privileges. Moreover, although it is perhaps a minor consideration today, the existing standard and implementations are based on public-key cryptography that could eventually be susceptible to attack by quantum computers.

Considerable research exists in policy languages, trust negotiation, and certificate infrastructures that have not yet been tried in practice. Research strategies to achieve a strong I&A architecture for the future include large-scale symmetric key infrastructures with key distribution a priori, federated systems of brokers to enable such a system to scale, strategies for scaling symmetric creation of one-time pads, schemes of cryptography not reliant on a random oracle, and other schemes of cryptography not susceptible to attack by quantum computers (which seems possible, for example, with lattice-based cryptography). The series of IDtrust symposia at NIST summarize much work over the past 9 years [IDT2009], including three papers from the 2009 symposium from an ongoing collaborative I3P project on identity management. On the other hand, relatively little work has been done on avoiding monolithic trusted roots, apart from systems such as Trusted Xenix. There is also not
enough effort devoted to trustworthy bindings between credentials and users. Biometrics and radio frequency identification (RFID) tags both require such binding. However, by no means should research on potential future approaches be limited to these initial ideas.

Note that merely making SSL client certificates work effectively in a usable way might be a useful initial step forward.

On what categories can we subdivide the topic?

Two categories seem appropriate for this topic area, although some of the suggested research areas may require aspects of both categories:

• Mechanisms (e.g., for lightweight authentication, attribution, accountability, revocation, federation, usable user interfaces, user-accessible conceptual models, presentation, and evaluations thereof).

• Policy-related research (e.g., privacy, administration, revocation policies, international implications, economic, social and cultural mores, and policies relating to the effective use of the above mechanisms).

As is the case for the other topics, the term "research" is used here to encompass the full spectrum of R&D, test, evaluation, and technology transfer. Legal, law enforcement, political, international, and cultural issues are cross-cutting for both of these bins and need to be addressed throughout.

Mechanisms for enhancing global identity management (with some policy implications) include the following:

• Federated bilateral user identity and credential management on a very large scale, to facilitate interoperability among existing systems.

• Efficient support for management of identities of objects, processes, and transactions on a very large scale.

• Flexible management of identities (including granularity, aliases, proxies, groups, and associated attributes).

• Support for multiple privacy and cross-organization information exposure requirements, aliasing, and unlinking.

• Effective presentation of specific attributes: multiple roles, multiple properties, effective access rights, transparency of what has and has not been revealed.

• Enabling rapidly evolving and newly created attributes, such as value associated with identifiers.

• Timely revocation of credentials (altering or withdrawing credentials).

• Avoidance of having to carry too many certificates versus the risks of single-sign-on authentication that must be trustworthy despite traversing untrustworthy systems.

• Long-term implications of cryptographically based approaches, with respect to integrity, spoofability, revocation when compromised, accountability, credential renewals, problems that result from system updates, and so on.

• Identity management for nonhuman entities such as domain names, routers, routes, autonomous systems, networks, and sensors.

Policies for enhancing global identity management (some of which have mechanism implications) include the following.

• Risk management across a spectrum of risks. This is tightly coupled with authorization. Game-theoretical analyses might be useful.

• Trust or confidence in the interactions (untrustworthy third parties; what happens when your credentials get stolen or the third party disappears).

• User acceptance: usability, interoperability, costs; fine-grained attribute release and presentation to users.

• Explicating the structure, meaning, and use of attributes: semantics of identity and attribute assertions.

• Commercial success and acceptance: usability, interoperability, costs, sustainable economic models; presentation to users.

• Accommodating international implications that require special
consideration, such as seemingly fundamental differences in privacy policies among different EU nations, the United States, and the rest of the world.

• Compensating for possible implications of new approaches that enable new types of transactions and secondary uses that were not initially anticipated.

• Understanding the implications of quantum computing and quantum cryptography, and exploring the possibilities of such global identity management without public-key cryptography or with quantum-resistant public-key cryptography.

Table 6.1 provides an oversimplified summary of the two categories.

What are the major research gaps?

A key gap in identity management is the lack of transparent, fine-grained, strongly typed control of identities, roles, attributes, and credentials. Entities must be able to know and control what identity-related information has been provided on their behalf. Entities must be able to present credentials for identities, roles, and attributes—independently but consistently interrelated, relative to specific needs. For example, why should a liquor store clerk be able to view a person's address and other personal details on a driver's license when determining whether that person is at least 21, or, worse yet, to swipe a card with unknown consequences? Services should be able to validate role or property credentials for some situations without requiring explicit identity as well. Entities and services must also be able to select appropriate levels of confidence and assurance to fit their situation. In addition, secondary reuse of credentials by authorizing entities must be effectively prevented. Some sort of mutual authentication should be possible whenever desirable. That is, a bidirectional trusted path between the authenticatee and the authenticator may be needed in some cases.

Major gaps include the following:

• Existing systems tend to authenticate only would-be identities of users, not transactions, applications, systems, communication paths, hardware, individual packets, messages, and so on.

• Containment, detection, and remediation are poorly addressed, particularly following misuse of identities, authentication, and authorization.

• Maintaining consistency of reputations over time across identities is extremely difficult. However, carefully controlled mechanisms to revoke or otherwise express doubts about reputations are also needed.

• Past efforts to impose national standards for identity management have met considerable resistance (as in Australia and the United Kingdom).

• There is a serious lack of economic models that would underscore the importance of global-scale identity management and lead to coherent approaches.

• There is also a serious lack of understanding of cultural and social implications of identity management, authentication, and privacy among most citizens.

TABLE 6.1: Some Illustrative Approaches

Category: Mechanisms
Definition: Identity- and attribute-based systems implementing authentication, authorization, accountability
Potential Approaches: Globally trustworthy identities; cryptographic and biometric authentication; secure bindings to entities; distributed

Category: Policies
Definition: Rules and procedures for enforcing identity-based controls, using relevant mechanisms
Potential Approaches: Broadly based adversary detection systems that integrate misuse detection, network monitoring, distributed management
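The driver's-license scenario above (proving "at least 21" without exposing an address) can be made concrete with a toy selective-disclosure check. This is a hedged sketch only: the attribute names are invented, and an HMAC with a shared issuer key stands in for what would really be a public-key or zero-knowledge credential scheme.

```python
import hmac, hashlib, json

# Hypothetical issuer key for illustration; a real issuer would use a
# public-key signature, not a secret shared with verifiers.
ISSUER_KEY = b"demo-issuer-key"

def issue_credential(attributes: dict) -> dict:
    """Sign each attribute separately, so the holder can later disclose
    individual attributes (e.g., 'age_over_21') without the rest."""
    signed = {}
    for name, value in attributes.items():
        msg = json.dumps({name: value}, sort_keys=True).encode()
        signed[name] = {
            "value": value,
            "sig": hmac.new(ISSUER_KEY, msg, hashlib.sha256).hexdigest(),
        }
    return signed

def present(credential: dict, names: list) -> dict:
    """Holder reveals only the requested attributes."""
    return {n: credential[n] for n in names}

def verify(disclosed: dict) -> bool:
    """Verifier checks the issuer's signature on each disclosed attribute."""
    for name, entry in disclosed.items():
        msg = json.dumps({name: entry["value"]}, sort_keys=True).encode()
        expected = hmac.new(ISSUER_KEY, msg, hashlib.sha256).hexdigest()
        if not hmac.compare_digest(expected, entry["sig"]):
            return False
    return True

# The clerk's terminal validates the age predicate without ever
# receiving the address attribute.
license_cred = issue_credential({"age_over_21": True, "address": "example"})
shown = present(license_cred, ["age_over_21"])
assert verify(shown) and "address" not in shown
```

The point of signing attributes independently is exactly the fine-grained attribute release called for above: the verifier gains no more information than the predicate it needs.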


Achieving the goal of open, globally accepted standards for identifying individuals, system components, and processes is difficult and will take considerable coordination and cooperation between industry and governments. Global-scale identity management is a hard problem for a number of reasons, including standardization, scale, churn, time criticality, mitigation of insider threats, and the prospect of threats such as quantum computing to existing cryptographic underpinnings. Maintaining the anonymity of personal information unless explicitly required is another challenge. In addition, determining how system processes or threads should be identified and privileged is an even more complex and daunting undertaking. Part of the challenge is to distinguish between the user and the subjects executing on his or her behalf. Finally, although sensor networks and radio frequency identification (RFID) have tremendous utility, their current vulnerabilities and the desired scale of future deployment underscore the need to address the hard challenges of identity management on a global scale.

Resources

Short-term gains can be made, particularly in prototypes and in the policy research items noted in the Background section above. In particular, the intelligent use of existing techniques and implementations would help. However, serious effort needs to be devoted to long-term approaches that address inherent scalability, trustworthiness, and resistance to cryptanalytic and systemic attacks, particularly in federated systems in which trustworthiness cannot be assured.

Measures of success

Ideally, any system for identification, authentication, and access control should be able to support hundreds of millions of users with identity-based or role-based authentication. IDs, authentication, and authorization of privileges may sometimes be considered separately, but in any case must be considered compatibly within a common context. An identifier declares who a person is and may have various levels of granularity and specificity. Who that person is (along with the applicable roles and other attributes, such as physical location) will determine the privileges to be granted with respect to any particular system policy. The system should be able to handle millions of privileges and a heavy churn rate of changes in users, devices, roles, and privileges. In addition, each user may have dozens of distinct credentials across multiple organizations, with each credential having its own set of privileges. It should be possible to measure or estimate the extent to which incremental deployment of new mechanisms and new policies could be implemented and enforced. Revocation of privileges should be effective for near-real-time use. Measurable metrics need to encompass all these aspects of global identity management. Overall, it should be extremely difficult for any national-level adversary to spoof a critical infrastructure system into believing that anyone attempting access is anything other than the actual adversary or adversaries.

Some of the possibly relevant metrics might involve the following considerations:

• Interoperability. How many systems might be integrated? What efficiency can result as scopes of scalability increase?

• Bilateral identity management. How many identities might be handled? What are the risks?

• Efficiency of identity transactions at global scale. For example, what is the end-to-end minimum time to process various types of transactions?

• Revocation. What are the time delays for expected propagation as the global scale increases?

• Value metrics. What are the short-term and long-term values that might result from various approaches?

• Privacy metrics. For example, how easily can behavior analysis or pseudonymous profiling be used to link multiple identities?

• Risk management metrics. What are the risks associated with the above items?

What needs to be in place for test and evaluation?

Federated solutions will require realistic testbeds for test and evaluation of global identity management approaches. Universities would provide natural environments for initial experimentation and might, under controlled circumstances, enable larger-scale collaborations.
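The revocation-delay metric above can be given a back-of-the-envelope form. The sketch below is purely illustrative: the broker fanout of 16 and the 50 ms per-hop delay are assumed numbers, not measurements of any real federation; its only purpose is to show why worst-case propagation delay can grow with federation depth rather than with raw population size.

```python
import math

def revocation_propagation_delay(num_nodes: int, fanout: int = 16,
                                 per_hop_seconds: float = 0.05) -> float:
    """Toy model: a revocation floods a balanced hierarchy of brokers,
    so worst-case delay is (tree depth) x (per-hop latency).
    All parameters are illustrative assumptions."""
    if num_nodes <= 1:
        return 0.0
    hops = math.ceil(math.log(num_nodes, fanout))
    return hops * per_hop_seconds

# Delay grows slowly as the federation scales from thousands to
# hundreds of millions of relying parties.
for n in (10**3, 10**6, 10**8):
    print(n, revocation_propagation_delay(n))
```

Even a crude model like this makes the metric testable: one can compare predicted against observed propagation times as a prototype federation is scaled up.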


Numerous opportunities will exist for formal analysis of algorithms and prototypes, especially as they scale up to federated solutions. These should complement any testing.

To what extent can we test real systems?

Today's test and evaluation are rather ad hoc and leave beta testing to user communities. Test criteria, scalability, robustness, and cost need to be considered. Some things can be tested; others require different kinds of analysis, including large-scale simulations and formal methods. Scalability is needed with respect to the number of organizational and multi-organizational requirements, and the number of organizations, not just the number of people. Testing is only part of what is necessary. Federated algorithms need some formal analyses with respect to their consistency, security, and reliability. Experiences with failed or ineffective attempts in the past must be reflected in new directions. As is often the case, sharing of such experiences is difficult. So are multi-institutional testbeds and experiments. Incentives are needed to facilitate sharing of experiences relating to vulnerabilities and exploits. Algorithmic transparency is needed, rather than closely held proprietary solutions.

Approaches to test markets require specific attention to usefulness and usability and to cost-effectiveness. Possible test markets include virtual environments such as World of Warcraft or Second Life and real-world environments such as banking, financial services, eBay, the Department of Energy, Department of Veterans Affairs, federated hospitals, and Las Vegas casinos. Realistic testbeds require realistic incentives such as minimizing losses, ability to cope with large-scale uses, ease of evaluation, and trustworthiness of the resulting systems—including resilience to denials of service and other attacks, overall system survivability, and so on.

[FSSCC 2008] Financial Services Sector Coordinating Council for Critical Infrastructure Protection and Homeland Security, Research and Development Committee. Research Agenda for the Banking and Finance Sector, September 2008.

[IDT2009] 8th Symposium on Identity and Trust on the Internet (IDtrust 2009), NIST, April 14-16, 2009. The symposium website contains proceedings of previous years' conferences. The 2009 proceedings include three papers representing team members from the I3P Identity Management project (which includes MITRE, Cornell, Georgia Tech, Purdue, SRI, and the University of Illinois at Urbana-Champaign).


Current Hard Problems in INFOSEC Research
7. Survivability of Time-Critical Systems

What is the problem being addressed?

Survivability is the capability of a system to fulfill its mission, in a timely manner,
in the presence of attacks, failures, or accidents [Avi+1994, Ell+1999, Neu2000].
It is one of the attributes that must be considered under trustworthiness, and is
meaningful in practice only with respect to well-defined mission requirements
against which the trustworthiness of survivability can be evaluated and measured.

Time-critical systems, generally speaking, are systems that require response on nonhuman timescales to maintain survivability (i.e., continue to operate acceptably) under relevant adversities. In these systems, human response is generally infeasible because of a combination of the complexity of the required analysis, the unavailability and infeasibility of system administrators in real time, and the associated time constraints. This section uses the following definition:

With respect to survivability, a time-critical system is a system for which faster-than-human reaction is required to avoid adverse mission consequences and/or system instability in the presence of attacks, failures, or accidents.
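As a toy illustration of the "faster-than-human" aspect of this definition, consider an automated protective monitor that must trip within tens of milliseconds. All constants below are invented for illustration (real protective relays are engineered to far stricter standards); the sketch only shows the structure of a response loop that acts well below human reaction time.

```python
import time

# Illustrative thresholds and timings; not taken from any real system.
TRIP_THRESHOLD_HZ = 59.5   # e.g., an under-frequency limit on a power feed
CHECK_PERIOD_S = 0.01      # 10 ms polling, versus ~1 s human reaction time

def monitor(read_frequency, shed_load, checks: int) -> bool:
    """Poll a sensor; invoke the protective action automatically the
    moment the reading crosses the threshold. Returns True on a trip."""
    for _ in range(checks):
        if read_frequency() < TRIP_THRESHOLD_HZ:
            shed_load()   # automated reaction, no operator in the loop
            return True
        time.sleep(CHECK_PERIOD_S)
    return False
```

A loop like this reacts within one polling period of the upset, which is the kind of sub-second window the definition has in mind; a human operator could not even perceive the condition in that time.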

Of particular interest here are systems for which impaired survivability would have
large-scale consequences, particularly in terms of the number of people affected.
Examples of such systems include electric power grids and other critical infrastruc-
ture systems, regional transportation systems, large enterprise transaction systems,
and Internet infrastructure such as routing or DNS. Although impaired survivability
for some other types of systems may have severe consequences for small numbers of
users, they are not of primary relevance to this topic. Examples of such systems are
medical devices, individual transportation systems, home desktop computers, and
isolated embedded systems. Such systems are not always designed for an adequate
level of survivability, but the problem is less challenging to address for them than for
large and distributed systems. However, common-mode failures of large numbers
of small systems (for example, a vulnerability in a common type of medical device)
could have large-scale consequences. (Note that personal systems are not actually
ignored here, in that certain major advances in survivability of large-scale time-
critical systems may be applicable to smaller systems.)

Time criticality is a central property to be considered. It connects directly to the "faster-than-human" aspect of the above definition of survivability. In some
systems, failure to fulfill a mission for even fractions of a second could have severe
consequences. In other types of systems, downtime for several minutes could be
acceptable. In some other systems, system stability could be threatened if upsets
are not handled on faster-than-human timescales. See Figure 7.1 for examples

Figure 7.1: Examples of Systems With Different Time-Criticality Requirements and Different User Populations. [Figure: a plot dividing systems into a primary-relevance region (e.g., critical transaction system, ad-hoc emergency response system) and a secondary-relevance region (e.g., home PC, office server, social networking website, batch processing system).]

of systems categorized with respect to relative time criticality and size of the user population they serve. The systems on the right side of the diagonal line are considered in primary scope for this discussion, while systems to the left of the line are of secondary interest, as indirect beneficiaries.

What are the potential threats?

As noted in the definition of survivability, the threats include system attacks, failures, and accidents. Rather than enumerate a long list, we refer throughout to "all relevant adversities" for which survivability is required.

Who are the potential beneficiaries? What are their respective needs?

Beneficiaries include the ultimate end users of critical infrastructure systems (the public), system owners and operators, system developers and vendors, regulators and other government bodies, educators and students, standards bodies, and so on. These categories of beneficiaries have very different needs. End users need to have a working system whenever they need to use it (availability), and they need the system to continue working correctly once they have started using it (reliability). System owners have many additional needs; for example, they need to have situational awareness so that they can be warned about potential problems in the system and manage system load, and they need to be able to react to an
incident and to recover the system and restore operations.

What is the current state of practice?

At present, IT systems attempt to maximize survivability through replication of components, redundancy of information (e.g., error-correcting coding), smart load sharing, journaling and transaction replay, automated recovery to a stable state, deferred committing for configuration changes, and manually maintained filters to block repeated bad requests. Toward the same goal, control systems today are supposedly disconnected from external networks (especially when attacks are suspected), although not consistently. Embedded systems typically have no real protection for survivability from malicious attacks (apart from some physical security), even when external connections exist.

The current metrics for survivability, availability, and reliability of time-critical systems are based on the probabilities of natural and random failures (e.g., MTBF). These metrics typically ignore intentional attacks, cascading failures, and other correlated causes or effects. For example, coordinated attacks and insider attacks are not addressed in most current approaches to survivability. One often-cited reason is that we do not have many real-world examples of intentional well-planned attacks against time-critical systems. However, because of the criticality of the systems considered here and because of many confirmed vulnerabilities in such systems, we cannot afford to wait for such data to be gathered and analyzed.

What is the status of current research?

The current state of research can be partitioned into three areas: understanding the mission and risks; survivability architectures, methods, and tools; and test and evaluation.

Understanding the Mission and Risks. We need to better understand the time-critical nature of our systems and their missions. We also need to better understand the risks to our systems with respect to impaired survivability. The concept of risk typically includes threats, vulnerabilities, and consequences. (Experiences with the design and operation of critical infrastructure systems would be helpful toward these goals.) Some methodologies and tools exist in this area, but many risk analysis methods are imprecise and suffer from limited data for one or several parameters. However, the recent efforts by Haimes et al. and Kertzner et al. are worth noting [Hai+2007, Ker+2008].

Survivability Architectures, Methods, and Tools. Efforts in this area include the large body of work in fault tolerance for systems and networks (e.g., see [Neu2000] for many references). A previous major R&D program in this area was DARPA's OASIS (Organically Assured and Survivable Information Systems), documented in the Third DARPA Information Survivability Conference and Exhibition [DIS2003]. Some work in the area of survivable control systems is also under way in the I3P program ( srpcs.html). However, considerable effort is needed to extend fault tolerance concepts to survivability (including intrusion tolerance) and to pursue automated and coordinated attack response and recovery.

Test and Evaluation. We need to be able to test and evaluate the time-critical elements of systems. Some testbed efforts have made general network testing infrastructures available to researchers (for example, PlanetLab, ORBIT, and DETER). Some other existing testbeds are available only to restricted groups, such as military or other government research laboratories. However, testing of survivability is inherently unsatisfactory, because of the wide variety of adversities and attacks, some of which may arise despite being highly improbable. In addition, testbeds tend to lack realism.

On what categories can we subdivide the topics?

This topic is divided into three categories, as suggested in the preceding section: understanding the mission and risks; survivability architectures, methods, and tools; and test and evaluation.

Survivability architectures, methods, and tools are further divided into protect, detect, and react subcategories. Table 7.1 provides a summary of the potential approaches.


TABLE 7.1: Potential Approaches

Category | Definition | Potential Approaches
Protect | Protect systems from all relevant adversities in the system's environment. | Inherently survivable system architectures with pervasive requirements.
Detect | Detect potential failures and attacks as early as possible. | Broadly based adversity detection systems that integrate misuse detection, network monitoring, etc.
React | Remediate detected adversities and recover as extensively as possible. | Use situational awareness and related diagnostics to assess damage; anticipate potential recovery modes.

What are the major research gaps?

As an attribute of trustworthiness, survivability depends on trustworthy computer systems and communications and trustworthy operations relating to security, reliability, real-time performance where essential, and much more. Thus, it is in essence a meta-requirement. Its dependence on other subrequirements must be made explicit. (For example, see [Neu2000].) The absence of meaningful requirements for survivability is a serious gap in practice and is reflected in various gaps in research—for example, the inability to specify requirements in adequate detail and completeness, and the inability to determine whether specifications and systems actually satisfy those requirements.

Understanding the Mission and Risks

• Rigorous definitions of properties and requirements are needed that can apply in a wide range of application environments. These include concepts such as response time, outage time, and recovery time. Specific sets of requirements will apply to specific systems. We need processes and methods to identify and locate time criticality in systems and to express them in a rigorous manner. Similarly, we need to be able to identify and quantify consequences, which could be life-critical, environmental, or financial. The interaction between physical and digital systems needs to be understood with greater fidelity.

• Interdependencies among systems and infrastructures need to be analyzed. We need to understand the extent to which a survivability failure in one system can cause a failure in another system, and the ways in which survivability properties can compose.

• We need to be able to build models of systems, threats, vulnerabilities, and attack methods. These models should include evolution of attacks and blended threats that combine independent and correlated attack methods.

• There is no one-size-fits-all architecture. Some systems will be embedded and centralized; some will be networked and distributed. However, composable, scalable trustworthy systems (Section 1) are likely to play a major role.

Survivability Architectures, Methods, and Tools

Protect (protection that does not involve human interaction)

• We need families of architectures with scalable and composable components that can satisfy critical trustworthiness requirements for real-time system behavior. We need to understand how to balance confidentiality and integrity against timely availability. Traditional security mechanisms tend to either introduce human timescales or latency on a machine timescale and could thereby impair availability. Techniques for protecting integrity could improve survivability, but not necessarily. Some integrity protection mechanisms, such as checksums, could introduce vulnerabilities if the checksums could be manipulated or made unavailable. Better techniques are needed to ensure self-monitoring and self-healing system capabilities, as well as autonomous operation. Distributed systems must also be considered, not just embedded systems. Trustworthy management (including control, security, and integrity), timely delivery of distributed data, and heterogeneous sensors will be particularly important. Survivability also requires protection against attacks, insider misuse, hardware faults, and other adversities. It may also need to limit dependence on untrustworthy components, such as complex operating systems that need frequent patches. Above all, operational interfaces to human controllers will be vital, especially in emergency situations.

• We need new communication protocols that are designed for survivability. For example, a protocol could make sure that an attacker needs to spend more resources than the system needs to expend to defend itself while preserving its time-critical properties. Frequency hopping and SYN cookies are examples of approaches using this principle. Extending or replacing TCP/IP, Modbus, and other protocols might be considered.

• We need to understand how core functions of systems can be isolated from functions that can be attacked, so that the time-critical properties of the core functions are preserved even when the systems are attacked. Research is needed on predictably trustworthy resource allocation and scheduling applicable to each of a wide range of different system architectures with different types of distributed control.

• We need to explore how we can achieve useful redundancy, with adequate assurance that single points of failure are not present.

• We must be able to identify and prevent the possibilities of cascading failures. In particular, we need mechanisms that detect and stop cascading failures faster than they can propagate. This is a complex problem that needs large testbeds and new simulation.

• Common-mode failures are a challenge in monocultures, whereas system maintenance is problematic in diversified and heterogeneous systems. Techniques are needed to determine appropriate balances between diversity and monoculture to achieve survivability in time-critical systems.

• Considerable effort is being devoted to developing hypervisors and virtualization. Perhaps these approaches could be applied to integrating COTS components into systems that are more survivable.

• We need substantive methods for composable survivability. See Section 1 (Scalable Trustworthy Systems) for a more detailed discussion on composability. We need tools for reasoning about composable survivability, including assurances relating to identity and provenance of components (Sections 6 and 9, respectively) and life cycle evaluations (Section 3). For example, survivability claims for a system composed of components should be derivable from survivability claims for components. Developing and deploying generic building-block platforms for composable survivability would be very useful.

• For networks, we need to explore the trade-offs between in-band and out-of-band control with respect to survivability, time criticality, and economics.

• We must be able to ensure survivability for services on which our time-critical systems depend. For example, all systems depend on some form of power source, and the survivability of the system can never be better than the survivability of its power sources. Other services to consider are cooling, communications, DNS, and GPS.

• We need to investigate functional distribution as a strategy for time-critical survivability and


consider challenges related to that strategy. Issues to investigate include the use of robust group communication schemes—peer-to-peer and multicast for time-critical systems.

• Detection and recovery mechanisms themselves (see below) need to be protected, to make sure they cannot be disabled or tricked into reaction.

Detect

To detect when the survivability of a time-critical system is at risk, we need to have sophisticated and reliable detection methods. This capability requires runtime methods to detect loss of time-critical system properties, such as degradation, and predict potential consequences. The following topics need investigation:

• Self-diagnosis (heartbeats, challenge-response, built-in monitoring of critical functions, detection of process anomalies).
• Intrinsically auditable systems (systems that are by design instrumented for detection).
• Network elements that participate and collaborate on detection.
• Human-machine interfaces that enable better detection and better visualization.
• Protocols that support closed-loop design (confirmation of actions).

React

When we have detected that survivability is at risk, we need to react to make sure that survivability is preserved. The following approaches to reaction need to be investigated:

• Self-healing systems that deploy machine-time methods to restore time-critical system properties.
• Graceful degradation of service (connection with mission understanding requirements).
• Predictable reactions with appropriate timeliness.
• Strategies for course of action when intervention is required (scenario planning before reaction is needed, cyber playbook).
• System change during operation (to break adversarial planning, to make planned attacks irrelevant).
• Coordinating reaction with supporting services (e.g., tell the ISP to reconfigure routing into the user's network, real-time black hole).
• Tarpitting, that is, slowing down an attacker without slowing down critical system functions.
• Bringing undamaged/repaired components back online via autonomous action (no human intervention). This includes reevaluation of component status and communication flows (routing, ad hoc networks).

What are the challenges that must be addressed?

Significant advances in attacks on survivability may require research in new areas. Breadth of service environments can be important, but "depth" of hardening can also be important, as can affordability—an approach that is cost prohibitive will not be very widely adopted.

What R&D is evolutionary and what is more basic, higher risk, game changing?

Near term
• Realistic, comprehensive requirements
• Existing protocols
• Identification of time-critical components

Medium term
• Detection
• Strategies for reaction
• Experimentation with trustworthy protocols for networking and distributed control, out-of-band signaling, robustness, and emergency recovery
• Higher-speed intercommunications and coordination
• Development tools
• System models

Long term
• Evaluatable metrics
• Establishment of trustworthy protocols for networking and distributed control
• Self-diagnosis and self-repair
• Provisioning for automated reaction and recovery

Resources

Making progress on the entire set of in-scope systems requires focused research efforts for each of the underlying technologies and each type of critical system, together with a research-coordinating function that can discern and understand both the common and the disparate types of solutions developed by those working on specific systems. An important role for the coordinating function is to expedite the flow of ideas and understanding among the focused groups.

For a subject this broad and all-encompassing (it depends on security, reliability, situational awareness and attack attribution, metrics, usability, life cycle evaluation, combating malware and insider misuse, and many other aspects), it seems wise to be prepared to launch multiple efforts targeting this topic area.

Measures of success

Success should be measured by the range of environments over which the system is capable of delivering adequate service for top-priority tasks. These environments will vary by topology and spatial distribution: number, type, and location of compromised machines; and a broad range of disruption strategies.

What needs to be in place for test and evaluation?

Many issues are relevant here:

• Metrics for survivability: determining which existing metrics (MTBF, etc.) are applicable, which measures of success are appropriate, and what additional aspects of survivability and time criticality should be measured (not covered by existing metrics). Resilience must be possible in the face of unexpected inputs, when some partial degree of service must still be provided, with appropriate recovery time. Attack efforts in testing need to be appropriately high.

• Measuring the relationships between complexity and time criticality is desired, especially when a system requires faster-than-human reactions.

• High-fidelity simulations, including: how to simulate physical aspects together with control functions, integrate security in testing and simulation, and validate the simulation; appropriate degrees of fidelity, and determining that a simulation is sufficiently realistic.

• Private industry needs to be engaged.

• Analytical models should be developed based on simulations.

• Red teaming to assess structured survivability, with red teams employing domain-specific skills.

• Adversarial modeling that seeks to understand the threat to time-critical systems.

To what extent can we test real systems?

• Testing of large systems: survivability is not easy to test in a very large and complex system, such as an electric power grid. Relevant issues include how to share access to existing testbeds and how to compose results of subsystem tests.

• Research infrastructures that are needed to support research in this area include a "library" of devices: keep a copy of every reasonably sized and priced manufactured device (compare this with seed banks). Also, keep templates or models of devices for use in design and evaluation.

• Access to real-world normal and attack data and system designs for evaluating research results is needed for all types of systems covered in this section, not just for typical data but also for extreme cases. Issues concerning proprietary data and data sanitization need to be addressed, including post-incident data and analysis such as flight data records; and integration of testbeds (wireless, SCADA, general IT), enabling testbed capabilities to be combined.
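The difficulty of composing results of subsystem tests can be illustrated with the simplest possible claim algebra (a hypothetical sketch, not from the roadmap): availability of serial and redundant compositions under an independence assumption. Coordinated attacks and common-mode failures violate exactly that assumption, which is why composable survivability claims need stronger tools than this.

```python
# Illustrative sketch: composing per-component availability claims under an
# ASSUMED independence of failures. Correlated attacks and common-mode
# failures invalidate this assumption, so these formulas give only an
# optimistic upper bound on real survivability.

def series_availability(components):
    """All components must be up, so availabilities multiply."""
    a = 1.0
    for ai in components:
        a *= ai
    return a

def parallel_availability(components):
    """Any one replica suffices, so complement probabilities multiply."""
    u = 1.0
    for ai in components:
        u *= (1.0 - ai)
    return 1.0 - u
```

Even in this toy model, redundancy helps only while replicas fail independently; a single shared power source or patch channel collapses the parallel case back toward the availability of that shared dependency.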


[Avi+2004] A. Avizienis, J.-C. Laprie, B. Randell, and C. Landwehr. Basic concepts
and taxonomy of dependable and secure computing. IEEE Transactions on
Dependable and Secure Computing, 1(1):11-33, January-March 2004.

[DIS2003] 3rd DARPA Information Survivability Conference and Exposition (DISCEX-III 2003), 22-24
April 2003, Washington, DC, USA. IEEE Computer Society 2003, ISBN 0-7695-1897-4.

[Ell+1999] R.J. Ellison, D.A. Fisher, R.C. Linger, H.F. Lipson, T. Longstaff, and N.R.
Mead. Survivable Network Systems: An Emerging Discipline. Technical Report
CMU/SEI-97-TR-013, Carnegie Mellon University, May 1999.

[Hai+2007] Yacov Y. Haimes, Joost R. Santos, Kenneth G. Crowther, Matthew H. Henry, Chenyang Lian, and
Zhenyu Yan. Analysis of Interdependencies and Risk in Oil & Gas Infrastructure Systems. I3P Research
Report No. 11, June 2007 ( ).

[Ker+2008] Peter Kertzner, Jim Watters, Deborah Bodeau, and Adam Hahn. Process Control System Security
Technical Risk Assessment Methodology & Technical Implementation. I3P Research Report No.
13, March 2008 ( ).

[Neu2000] P.G. Neumann. Practical Architectures for Survivable Systems and Networks. SRI International,
Menlo Park, California, June 2000 ( ).


Current Hard Problems in INFOSEC Research
8. Situational Understanding and Attack Attribution

What is the problem being addressed?

Situational understanding is information scaled to one’s level and areas of interest.
It encompasses one’s role, environment, the adversary, mission, resource status, what
is permissible to view, and which authorities are relevant. The challenges lie in the
path from massive data to information to understanding, allowing for appropriate
sharing at each point in the path.

The questions to be answered, in rough order of ascending difficulty, are the following:


• Is there an attack or misuse to be addressed (detection, threat assessments)?
• What is the attack (identification, not just IDS signature)?
• Who is the attacker (accurate attribution)?
• What is the attacker's intent (with respect to the present attack as well as predicting behavior over time)?
• What is the likely impact?
• How do we defend (autonomous enterprises and the community as a whole)?
• What (possibly rogue) infrastructure enables the attack?
• How can we prevent, deter, and/or mitigate future similar occurrences?

Situational understanding includes the state of one’s own system from a defensive
posture irrespective of whether an attack is taking place. It is critical to understand
system performance and behavior during non-attack periods, in that some attack
indicators may be observable only as deviations from “normal behavior.” This
understanding must also include the performance of systems under stresses that are
not caused by attacks, such as a dramatic increase in normal traffic due to the sudden
popularity of a particular resource.
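A deviation-from-baseline indicator of the kind described above can be sketched as a rolling z-score over a per-interval traffic count. The class name, window size, and 3-sigma threshold below are illustrative assumptions, not from the roadmap; note that a benign flash crowd trips the same alarm as an attack, which is why such indicators need corroborating context.

```python
# Hypothetical sketch: flag deviations from a learned "normal behavior"
# baseline. A benign surge in popularity exceeds the threshold just as an
# attack would, so a deviation is an indicator, not attack confirmation.
from collections import deque
import math

class BaselineMonitor:
    def __init__(self, window=60, threshold=3.0):
        self.window = deque(maxlen=window)  # recent per-interval counts
        self.threshold = threshold          # assumed 3-sigma alarm level

    def observe(self, count):
        """Record one interval's count; return True if it deviates."""
        alarm = False
        if len(self.window) >= 10:  # require a minimal history first
            mean = sum(self.window) / len(self.window)
            var = sum((x - mean) ** 2 for x in self.window) / len(self.window)
            std = math.sqrt(var) or 1.0  # guard against flat (zero-variance) traffic
            alarm = abs(count - mean) / std >= self.threshold
        self.window.append(count)
        return alarm
```

The rolling window also lets the baseline drift with legitimate load growth, at the cost of slowly absorbing a "low and slow" attack into the notion of normal.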

Situational understanding also encompasses both the defender and the adversary.
The defender must have adversary models in order to predict adversary courses of
action based on the current defensive posture. The defender’s system-level goals
are to deter unwanted adversary actions (e.g., attacking our information systems)
and induce preferred courses of action (e.g., working on socially useful projects as
opposed to developing crimeware, or redirecting attacks to a honeynet).

Attack attribution is defined as determining the identity or location of an attacker or an attacker's intermediary. Attribution includes the identification of intermediaries, although an intermediary may or may not be a willing participant in an
attack. Accurate attribution supports improved situational understanding and is therefore a key element of research in this area. Appropriate attribution may often be possible only incrementally, as situational understanding becomes clearer through interpretation of available information.

Situational understanding is larger than one user, or possibly even larger than one administrative domain, and addresses what is happening through consideration of a particular area of interest at a granularity that is appropriate to the administrator(s) or analyst(s). In particular, situational understanding of events within infrastructures spanning multiple domains may require significant coordination and collaboration on multiple fronts, such as decisions about when/whether to share data, how to depict the situation as understanding changes over time, and how to interpret or respond to the information. Attribution is a key element of this process, since it is concerned with who is doing what and what should be done in response.

What are the potential threats?

Situational understanding addresses a broad range of cyber attacks, specifically including large-scale and distributed attacks, where it is felt that adversary capabilities are outstripping our ability to defend critical systems. Inability to attribute sophisticated attacks to the original perpetrator leads to a growing asymmetry in cyber conflict.

In this topic area, we are concerned chiefly with the universe of cyber attacks within the information systems domain and how our decision makers interpret, react to, and mitigate those attacks. Of special concern are attacks on information systems with potentially significant strategic impact, such as wide-scale power blackouts or loss of confidence in the banking system. Attacks may come from insiders, from adversaries using false credentials, from botnets, or from other sources or a blend of sources. Understanding the attack is essential for defense, remediation, attribution to the true adversary or instigator, hardening of systems against similar future attacks, and deterring future attacks. Attribution should also encompass shell companies, such as rogue domain resellers whose business model is to provide an enabling infrastructure for malfeasance. There are numerous areas of open research when it comes to these larger questions of attribution. For example, we have not adequately addressed digital fingerprinting of rogue providers of hosting services. (See also Section 9.)

There have been numerous widely publicized large-scale attacks launched for a variety of purposes, but recently there is a consensus that skilled nonstate actors are now primarily going after financial gain [GAO2007, Fra2007]. Click fraud, stock "pump and dump," and other manipulations of real-time markets prove that it is possible to profit from cybercrime without actually taking down the systems that are attacked. In this context, situational understanding should clearly encompass law enforcement threat models and priorities, as well as how financial gains can accrue.

For state actors, the current concern is targeting of our critical infrastructures and key government systems. Adversaries may be able to exfiltrate sensitive data over periods of time, again without actually taking down the targeted systems. Here, situational understanding should clearly include understanding of government threat models and concerns. Sharing such understanding is particularly important—and sensitive in the sense that it is likely to lead to recognition of additional weaknesses and vulnerabilities.

In addition, the more serious attacks now occur at two vastly different timescales. The classic fear is cyber attacks that occur faster than human response times. Those attacks are still of concern. However, another concern is "low and slow" and possibly stealthy attacks that break the attack sequences into a series of small steps spread over a long time period. Achieving situational awareness for these two ends of the continuum is likely to require very different approaches.

Who are the potential beneficiaries? What are their respective needs?

Although all computer users and all consumers of information systems products are potential victims of the broad range of attacks we address, and would benefit from improved situational awareness, we are primarily seeking tools and techniques to help the communities whose challenges and needs are given in Table 8.1—although this is not a comprehensive set.

Because of time criticality for responding to certain cyber attacks, and hence the need to tie these to situational awareness, we consider developers and users


TABLE 8.1: Beneficiaries, Challenges, and Needs

Beneficiaries | Challenges | Needs
System Administrators | Overwhelmed by attacks buried in massive data volumes. Limited visibility beyond own domain. | Timely detection, presentation, sharing with peers across administrative boundaries. Effective remediation.
Service Providers | Service continuity in spite of large-scale attacks. Understanding emerging attacks. Sharing with peers. | Attack attribution. Identify and quarantine compromised systems. Reliable IP mapping to jurisdiction to support effective cooperation with law enforcement.
Law Enforcement | Identify and prosecute perpetrators (individuals and emerging cybercrime syndicates). | Coordination with service providers and administrators. Data collection, presentation, and analysis of forensic quality. Attribution to ultimate perpetrator.
Civil Government | Continuity in spite of large-scale attacks on government and civilian systems. Coordination of national-level response. | Detection of attacks. Early identification of attacks on critical infrastructure sectors. Sharing with private sector as well as state/local agencies. Attribution.
Military | Prevent attacks on defense systems. Maintain system continuity in spite of attacks. Prevent exfiltration of critical data. | Early detection and identification of attacks. Attribution. All of the above.

of autonomic response systems as part of the customer base for advances in this topic area.

What is the current state of the practice?

Situational understanding currently is addressed within administrative domains through intrusion detection/prevention systems and security event correlation systems, with much of the analysis still done through manual perusal of log files. There have been efforts to provide visualizations and other analytical tools to improve the ability to comprehend large amounts of data. These are largely special purpose and found within research laboratories rather than being used widely within the field. Sharing security-relevant information across domains is essential for large-scale situational understanding but is currently accomplished via ad hoc and informal relationships. In a few instances, data is shared across organizations, but normally the kinds of information shared are limited (e.g., only network packet headers).

Intrusion detection/prevention technology is widely deployed, but many question how much longer it will be effective as traffic volumes grow, attacks get more subtle, signature bases grow correspondingly larger and unable to cope with new attacks, and attackers use encryption, which makes packet payload signature analysis difficult. Response to large-scale attacks remains to a large degree informal, via personal trust relationships and telephone communications. This situation makes it difficult or impossible to achieve very rapid response or cooperation between domains where the administrators do not know and trust each other. (For example, how can an administrator in Domain A prove that a customer of Domain B is an attacker, and thereby persuade an administrator in that domain to take corrective action?)

Industry has made significant progress in the area of event/data correlation, with several security information and event management (SIEM) commercial products widely deployed in the market. These offer considerable value in timely data reduction and alarm management. However, with respect to visualization and presentation on a massive data scale, these systems are inadequate and do not have scope well beyond organizational boundaries.

We need to consider the viewpoint of the defender (end host, infrastructure component, enterprise, Internet). An


ISP wants an "inward" view of enterprise customers since cooperative security benefits from each domain's filtering outbound attack traffic from that domain (egress filtering). A defender at an edge router is also looking outward at its peers to monitor the inbound flows for attack traffic (ingress filtering). This ingress filtering is essential to the cooperative awareness and response mentioned above.

Lack of trust between providers, issues of scalability, and issues of partial deployment of defenses make attribution difficult in many cases. Privacy regulations and the very real concern that data sanitization techniques are ineffective also present barriers. The differing legal regimes in different countries, or within different areas of governments within the same country, inhibit attribution as well. There is a need for international dialogue on how to handle cybersecurity incidents, so that attackers can either be identified and prosecuted or otherwise deterred from future wrongdoing.

Progress is being made in many areas important to situational understanding, including attribution. Protocols such as IPsec and IPv6's extension headers for authentication may improve the situation in the sense that spoofing the attack source is more difficult than in current IPv4 networks. However, these message authentication techniques do not solve the underlying problem of compromised machines being used to attack third parties. Thus, there is an important linkage between this topic and the topic addressing malware (see Section 5).

There are several forums for security event information sharing, such as the SANS Internet Storm Center's DShield [ISC], which describes itself as a cooperative network security community, and PhishTank [Phi], which allows the defender community to contribute known or suspected instances of phishing attacks. Phishing refers to a broad class of fraudulent attempts to get a user to provide personal information that the phisher can subsequently use for identity theft, identity fraud, unlawful financial transactions, and other criminal activity.

For reasons ranging from customer privacy and concerns about revealing defensive posture to legal liability issues, only limited meaningful progress has been made in the area of interdomain security information sharing and in determining attacker location and intent.

What is the status of current research?

Research in attack detection has continued along the path of faster signature development and propagation, seeking to reduce the time window in which zero-day attacks have an impact.

Egress filtering is increasingly used to identify internal assets that may be currently compromised. This egress filtering (or, more generally, "unbiased introspection") also applies to ISPs, enterprises, and home computers.

Scalable information processing (e.g., data reduction), data mining, statistical analysis, and other similar techniques are applied to situational understanding. There are significant challenges and opportunities as link speeds become faster and data storage becomes cheaper.

In the area of attribution, there is active research in traceback techniques. However, most methods depend on cooperative defense and do not function well with less than universal deployment. Skilled attackers easily evade most currently deployed traceback systems.

There has been some research in attacker intent modeling, with an objective to predict the attacker's next steps, but this has had only limited success. In addition, most academic research in the cybersecurity field uses inadequate adversary models that do not capture how high-level adversaries actually attack complex systems. As mentioned previously, the short-term goal is modeling adversary behavior to generate better attack indicators. The long-term goal is to deter unwanted behaviors and to promote appropriate behaviors (e.g., working for a legitimate organization as opposed to organized crime) via improved attribution. Most research in this area is emphasizing the short-term goal rather than the longer-term goal.

Sharing actionable data while respecting privacy, authenticating the shared information in the absence of interdomain trust, the economy of sharing (sharing marketplace), and sharing with privacy and anonymity are important research issues (see Section 10). Policy and legal barriers to sharing also need to be addressed, in addition to the difficult


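One form of the egress filtering described above, flagging internal assets whose outbound traffic suggests compromise, can be sketched as a simple filter over flow records. The record format, ports, and blocklist below are illustrative assumptions, not part of the roadmap.

```python
# Sketch of egress filtering: flag internal hosts whose outbound traffic
# suggests compromise. Flow-record fields, ports, and the blocklist are
# assumptions for illustration only.

SUSPECT_PORTS = {25, 6667}        # e.g., bulk SMTP or IRC from a desktop
KNOWN_BAD = {"203.0.113.7"}       # hypothetical blocklisted C2 address

def suspicious_egress(flows):
    """Return internal source IPs worth investigating, with reasons."""
    findings = {}
    for src, dst, dport, _ in flows:
        reasons = findings.setdefault(src, set())
        if dst in KNOWN_BAD:
            reasons.add("contacted known-bad address")
        if dport in SUSPECT_PORTS:
            reasons.add(f"unusual outbound port {dport}")
    return {src: r for src, r in findings.items() if r}

flows = [
    ("10.0.0.5", "203.0.113.7", 443, 1200),   # internal host -> blocklisted IP
    ("10.0.0.9", "198.51.100.2", 25, 90000),  # desktop sending bulk SMTP
    ("10.0.0.7", "198.51.100.9", 443, 4000),  # ordinary traffic
]
print(suspicious_egress(flows))
```

The same loop structure applies whether the vantage point is a home router, an enterprise border, or an ISP, which is why the roadmap frames egress filtering as a general form of introspection.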
Sharing lets one know if he or she is part of an attack and needs to take action, and also lets one see the global picture. Some of the legal framework from the PREDICT data repository may be applicable. There are also examples in international scientific collaborations involving information systems that could be considered in ways to collectively identify threats. Another model for sharing is seen in the international honeynet community.

There are different variants of attribution. In closed user communities, users often consent to monitoring as a condition for system access, so it is easier to assert who did what. A consent-to-monitoring policy is not likely to be implemented globally, so attribution of attacks that come in from the Internet will remain difficult. This second type of attribution should be balanced against the need for anonymity and free-speech concerns arising from requiring that all traffic be subject to attribution.

FUTURE DIRECTIONS

On what categories can we subdivide this topic?

We frame this topic area around the following categories:

- Collection. Identify what data to collect; develop methods for data collection, preservation of chain of custody (see Section 9), validation, and organization.

- Storage. Decide how to protect data in situ, efficiently access stored data, establish reporting responsibilities, assure integrity, and determine how long to store data and in what form.

- Analysis. Analyze the collected data to abstract out meaning, potentially seek additional information for consolidation, identify security incidents, and compute relevant metadata.

- Presentation. Distill security incidents and related contextual information to form enterprise-level situational awareness; enable responses while maintaining forensic quality for attribution. Presentation may involve data sanitization or modification to comply with privacy or classification-level requirements on who is allowed to view what.

- Sharing. Develop sharing awareness across independent domains and mechanisms to present relevant data to appropriate communities, such as network operators and law enforcement, and preserve privacy of users, sensitive corporate and national-security data, and system defensive posture.

- Reaction. Determine local and cross-domain courses of action to mitigate events. This includes measures to stop further damage, fix damage that has occurred, proactively change security configurations, and collect forensics to enable attribution and prosecution.

This framework may be considered an adaptation of John Boyd’s OODA loop (Observe, Orient, Decide, Act).

By analogy to physical security systems, “reaction” might be further broken out into delay, response, and mitigation steps. Some courses of action by the defender might delay the adversary from achieving the ultimate objective of the attack. This buys time for the defender to mount an effective response that thwarts the adversary’s goal. Another response might be to seek out additional information that will improve situational awareness. If an effective response is not possible, then mitigation of the consequences of the adversary’s action is also a valuable course of action. Many responses may require coordination across organizational boundaries, and shared situational awareness will be important in supporting such activities.


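The six categories above can be read as stages of a processing pipeline. The sketch below wires placeholder stages together in the order listed; the internals of each stage are precisely the research problems this section enumerates, so each function is only a stand-in.

```python
# Minimal sketch of the Collection -> Storage -> Analysis -> Presentation ->
# Sharing -> Reaction framework. Every stage body is a placeholder; real
# implementations are the open research questions the roadmap describes.

def collect(sources):
    """Collection: gather raw events, preserving chain of custody."""
    return [{"event": e, "custody": [src]} for src, e in sources]

def store(events):
    """Storage: protect data in situ and decide retention (here: just keep)."""
    return list(events)

def analyze(events):
    """Analysis: abstract meaning and flag likely security incidents."""
    return [e for e in events if "attack" in e["event"]]

def present(incidents, viewer="operator"):
    """Presentation: distill incidents, sanitizing per the viewer's privileges."""
    return [f"[{viewer}] incident: {i['event']}" for i in incidents]

def share(summaries, community="network-operators"):
    """Sharing: release sanitized summaries to an appropriate community."""
    return {community: summaries}

def react(incidents):
    """Reaction: choose a course of action (placeholder policy)."""
    return ["quarantine source" for _ in incidents]

raw = [("ids-1", "port-scan then attack on host A"), ("ids-2", "routine login")]
incidents = analyze(store(collect(raw)))
print(present(incidents))
print(react(incidents))
```

The point of the sketch is the staging, not the stage bodies: each arrow in the pipeline is a boundary at which privacy, sanitization, and custody requirements apply.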
What are the major gaps?

Attack signature generation and propagation are falling short, as many “legacy attacks” are still active on the Internet years after they were launched. Legacy attacks persist for many reasons, such as poor system administration practices or lack of support for system administration, a proliferation of consumer systems not under professional system administration but with high-bandwidth connections, reemergence of older machines after being in storage without appropriate attention (e.g., travel laptops put back into service), or use of legacy code or hardware in new applications or devices. This persistence indicates that research is needed into better tools for system administration, as well as for survivability of well-administered systems in an environment where many other systems are poorly maintained. Also, the ability to quickly scrutinize new applications and devices to see whether legacy flaws have been reintroduced would be beneficial.

There remain significant gaps in the intrusion detection field, and currently deployed intrusion detection systems (IDS) fall short of needs, especially with respect to enabling distributed correlation. In particular, approaches that include ever-growing signature sets in attempting to identify the increasing variety of attacks may be approaching the end of their usefulness; alternative approaches are clearly needed.

Detection of attacks within encrypted payloads will present an increasingly serious challenge. Many botnets now use encrypted command and control channels. Researchers are investigating techniques that take advantage of this, such as using the presence of ciphertext on certain communications channels as an attack indicator. However, it is likely that the fraction of encrypted traffic will increase under legitimate applications, and thus alternative approaches are once again needed.

Attribution remains a hard problem. In most modern situations, it is useful to get as close as possible to the ultimate origin (node, process, or human actor). However, doing so touches on privacy, legal, and forensic issues. For example, public safety argues for full attribution of all actions on the Internet, while free-speech rights in a civil society are likely to require some forms of socially approved anonymity. We also need to define the granularity of attack attribution. In this respect, attribution could apply within a single computer or local network, but it could also be sufficient to provide attribution within a domain, or even a country. Moreover, adversaries are getting better at hiding the true origin of their attacks behind networks of compromised machines (e.g., botnets), and throwaway computers may become as common as throwaway cell phones as prices drop. Adversaries increasingly use techniques such as fast flux, where the DNS is rapidly manipulated to make identification and takedown of adversary networks difficult [Hol2008].

What are some exemplary problems for R&D on this topic?

Collect and Store Relevant Data. Understand how to identify, collect, and ultimately store data appropriate to the form of situational awareness desired. This might involve network-centric data such as connectivity with peers over time, archives of name resolution, and route changes. In addition, data may need to be combined and/or sanitized to make it suitable for sharing or downstream retrieval, such as with lower-layer alerts, local as well as external views, system- and application-level alerts, packet contents supporting deep packet inspection on demand without violating privacy or organizational security, archives to support snapshots and history, and externally deployed monitoring infrastructure such as honeynets. Finally, data outside networks and hosts are also relevant, such as “people layer” knowledge, as in tracking the so-called Russian Business Network (RBN) over time.

In addition to the database hurdles (such as scale and organization) that must be overcome in the collection of these diverse sources, it is in the interest of the adversary to poison these data sources. Research is needed so that data provenance can be maintained. (See Section 9.)

Analysis on Massive Data Scales. The analysis or evaluation process must consider the massive scale and heterogeneity of data sources and the fact that most of the data arriving from the above sources is uninteresting chaff. The data and analysis should support a variety of granularities, such as Border Gateway Protocol (BGP) routes, DNS queries, domains in country-code top-level domains (TLDs), repeated patterns of interaction that arise over the course of months or years, and unexpected connections between companies and individuals. These derived quantities should themselves be archived or, alternatively, be easy to reconstruct. The availability of these data sources plays an important role in enabling attack attribution and also contributes to an incremental building of situational awareness.

Novel Approaches to Presentation in Large-Scale Data. The massive scale of the data poses challenges to timely, compact, and informative presentation. Scalable visualization, visualization with accurate geolocation, and zoomable visualization at varying levels of detail are just some of the difficult problems. Maintaining the ability to delve into the original data as well as broaden out to a high-level, people-aware view is an area for future research.


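The fast-flux technique cited above [Hol2008] leaves a footprint in exactly the kind of DNS data discussed here: many distinct A records, low TTLs, and high churn across repeated lookups. A much-simplified indicator might look like the following; the thresholds are illustrative guesses, not values taken from the roadmap or from [Hol2008].

```python
# Simplified fast-flux indicator over repeated DNS lookups of one domain.
# Each observation is (ttl_seconds, set_of_A_record_IPs). The thresholds
# below are illustrative assumptions only.

def looks_fast_flux(observations, max_ttl=300, min_unique_ips=10):
    """Flag domains with low TTLs and a large, churning set of A records."""
    all_ips = set()
    low_ttl = all(ttl <= max_ttl for ttl, _ in observations)
    for _, ips in observations:
        all_ips |= ips
    return low_ttl and len(all_ips) >= min_unique_ips

# A benign site: stable records, generous TTL.
benign = [(3600, {"192.0.2.1", "192.0.2.2"})] * 5
# A fluxing domain: 60-second TTL, new compromised hosts on every lookup.
flux = [(60, {f"198.51.100.{i}", f"198.51.100.{i + 1}"}) for i in range(0, 20, 2)]

print(looks_fast_flux(benign))  # False
print(looks_fast_flux(flux))    # True
```

Production detectors (including the one evaluated in [Hol2008]) also weigh the diversity of autonomous systems behind the A records, which a single-network sketch like this omits.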
Collaborative Collection, Vetting, and Archiving. Collaborative collection of non-open data and the subsequent vetting, archiving, correlation (for example, inferring routes collaboratively), and generation of useful metadata are important research issues. Numerous database issues arise, including processing of huge repositories, definition and derivation of meaningful metadata such as provenance, validation of inputs, and multilevel security (MLS). Such an archive would support both research and operations. There are serious questions as to what to share and to what degree, and these questions may occur at multiple levels. Examples include what one controls, what one shares in a trusted community, and what we can observe about an uncooperative and possibly adversarial entity.

Cross-Boundary Sharing of Situational Understanding. Crossing organizational boundaries may require reputation systems and other ways of quickly determining when it might be safe to share information; such mechanisms must not themselves be open to gaming. It may be possible to leverage research on reputation in peer-to-peer (P2P) systems. Multiple issues arise with modern approaches. Sparse reports may be misleading, because voting mechanisms may not allow determining truth. Proving that only one organization is under attack may be difficult (likely to require submitting traffic samples that may reveal defensive posture, and subject to the possibility of spoofing). We require research in enabling technologies to promote sharing across organizational boundaries.

Situational Understanding at Multiple Timescales. We must be aware that there are multiple timescales at which situational understanding must be inferred and presented. For low and slow attacks, such as those involved in insider-threat investigations, the attack traffic may occur over long time spans (years or decades) and encompass multiple ingress points. In contrast, autonomic response requires millisecond situational understanding. For the human consumer, the timescale is intermediate.

Some exemplary approaches are summarized in Table 8.2.

What R&D is evolutionary and what is more basic, higher risk, game changing?

Along the collection dimension, near- and medium-term areas include identification of data types, sources, collection methods, and categorization; directed data selection; and instrumentation of software and hardware components and subsystems. Long-term research may consider systems that are intrinsically enabled for monitoring and auditing. Challenges include the rapid growth of data and data rates, changing ideas about what can potentially be monitored, and privacy issues. (See Section 10.)

With respect to analysis, there is consensus that the current signature-based approaches will not keep up with the problem much longer, because of issues of scale as well as system and attack complexity. Attack triage methods should be examined in the short term. Traffic encryption and IPv6 may render many attack vectors harder but may also make analysis more difficult. In the long term, conceptual breakthroughs are required to stay even with or ahead of the threat.

TABLE 8.2: Exemplary Approaches

Category | Definition | Sample Solutions
Collect and Analyze Data | Understanding threats to overall trustworthiness and potential risks | Broad-based threat and misuse detection, integrating misuses and survivability threats
Massive-Scale Analysis | New approaches to distributed system and enterprise attack analysis | Trustworthy systems with integrated analysis tools
Situational Understanding across Boundaries and Multiple Timescales | Interpretation of multiple analyses over space and time | Intelligent correlated interpretation of likely consequences and risks

For example, some botnet command and control (C2) traffic is already on encrypted channels. Ideally, intrinsically monitorable systems would permit an adversary little or no space to operate without detection, or at least permit observation that could be turned to detection with additional analysis. Such systems detect attacks without a signature base that grows essentially without bound. They also permit one to reliably assert that a system is operating in an acceptably safe mode from a security standpoint. Additional approaches are needed that address monitoring and analysis in system design.

The state of the art tends to rely on detection. Some limited progress has been made to date on predicting attackers’ next steps or inferring attacker intent. Advances in target analysis will better identify what is public and thus presumed known to the adversary. This work may lead to solutions whereby defenders manipulate the exposed “attack surface” to elicit or thwart attacker intent, or use cost-effective defenses that increase protections when it is predicted they are needed. Correlated attack modeling advances are appropriate to pursue as a medium-term area. Game-theoretic and threat-model approaches have made limited headway in this field but should be considered as long-term research. Threat and adversary modeling may also support advances toward attribution and the ultimate goal of deterring future cyber attacks. This is suitable for medium- to long-term research.

Information presentation will require continued advancements in data reduction, alarm management, and drill-down capability. In the near term, the emerging field of visual analytics may provide useful insights, with new visualization devices presenting opportunities for new ways of viewing items. An emerging challenge in displaying situational awareness is the increase in reliance both on very large (wall-size) viewing screens and on very small handheld screens (e.g., BlackBerries). A suggested long-term effort is to consider alternative metaphors suited to the various extremes available, including such options as the scrollable, zoomable map. Inference and forecasting are also appropriate for long-term efforts. We should build on the research in information presentation for human understanding and response. Another hard problem is visualization of low and slow attacks. Near- and medium-term research is needed in how to assess the way different situational awareness presentation approaches affect an analyst’s or administrator’s ability to perform.

Presentation approaches need awareness as to whether the consumer is a human or an autonomous agent; reliance on intelligent agents or other forms of automated response means that these elements will also need “situational awareness” to provide context for their programmed behaviors. We require research to enable agent-based defenses in instances where action is needed at faster than human response times. This is a presentation issue that ought to be addressed in the medium term, and a sharing issue when agent-to-agent cooperation is required in the long term. It is important to keep in mind that autonomous response may be an attack vector for the adversary, and the ability to change the situational awareness information presented to agents or other autonomous response vehicles is a potential vulnerability.

Sharing relevant information spans the gamut of levels from security alerts to sharing situational understanding obtained from analysis. Sharing can enable global situational understanding and awareness, support reliable attribution, and guide local response appropriate to the global picture. Research is needed to determine how to achieve sharing with adequate privacy protections and within regulatory boundaries, what to share across autonomous systems, and possible market mechanisms for sharing. The issue of liability for misuse or for fraudulent or erroneous shared data will need to be addressed.

Research in appropriate reaction has both local (within an enterprise or an autonomous system) and global (across enterprises and autonomous systems) components. Ideally, the output of current and previous research results should support an effective course of action. When this is shared between entities, the shared information should support effective local reaction, while preserving privacy along with other information sanitization needs. Research is required, for example, in authenticating the authors of actionable information and in proving that a recommended course of action is appropriate. Research is also required in alternatives to malfeasor blocking (it may be preferable to divert and observe), remediation of compromised assets (a need also present in the malware research topic), and exoneration in the case of false positives.


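The idea raised earlier, that the mere presence of ciphertext on a normally plaintext channel can itself serve as an attack indicator, is commonly approximated with a byte-entropy test, since ciphertext is statistically close to random. The threshold below is an illustrative assumption, and, as the text notes, this heuristic weakens as the share of legitimate encrypted traffic grows.

```python
import math
import os
from collections import Counter

# Byte-entropy test: ciphertext and compressed data approach 8 bits/byte,
# while plaintext protocols sit much lower. The 7.5-bit threshold is an
# illustrative assumption, not a value from the roadmap.

def byte_entropy(data: bytes) -> float:
    """Shannon entropy of the byte distribution, in bits per byte."""
    counts = Counter(data)
    n = len(data)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def maybe_ciphertext(payload: bytes, threshold: float = 7.5) -> bool:
    return len(payload) > 0 and byte_entropy(payload) >= threshold

plaintext = b"GET /index.html HTTP/1.1\r\nHost: example.com\r\n\r\n" * 20
randomish = os.urandom(4096)   # stand-in for an encrypted C2 payload

print(maybe_ciphertext(plaintext))  # False
print(maybe_ciphertext(randomish))  # True (with overwhelming probability)
```

The test says nothing about intent; it only distinguishes channels where encryption is expected from channels where it is not, which is why the roadmap treats it as one indicator among many rather than a detector.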
Although response and reaction are not directly a part of situational understanding, situational understanding is needed to enable response and reaction, and situational understanding may drive certain kinds of responses (e.g., changing information collection to improve attribution). Thus, advances in reaction and response techniques directly affect the kind of situational awareness that is required.

Resources

Situational understanding requires collection or derivation of relevant data on a diverse set of attributes. Some of the attributes that support global situational understanding and attack attribution are discussed above, relating to the kinds of data to collect. A legal and policy framework, including international coordination, is necessary to enable the collection and permit the exchange of much of this information, since it often requires crossing international boundaries. In addition, coordination across sectors may be needed in terms of what information can be shared and how to gather it in a timely way. Consider an attack that involves patient data information systems within a hospital in the United States, a military base in Germany, and an educational institution in France. All three institutions have different requirements for what can and cannot be shared or recorded.

Modifications to U.S. law and policy may be needed to facilitate data sharing and attack attribution research. As an example, institutional review boards (IRBs) play an important role in protecting individuals and organizations from the side effects of experimentation that involves human subjects. In many cases, the IRBs are inadequately equipped to handle cybersecurity experiments, which are crucial to understanding attackers’ intent and next steps. Government could play a role in ensuring that IRBs are better equipped to expedite attack attribution research. A set of best practices would be beneficial in this area.

Government roles also include developing policy, funding research (complementing industry), and exerting market leverage through its acquisition processes. There is government-sponsored research in intrusion detection, software engineering for security, malware analysis, traceback, information sharing, scalable visualization, and other areas that affect this topic. Government has also implemented fusion centers, common databases for experimentation, and testbeds, supporting collaboration. Continuing these investments is crucial, particularly in the long-term range for areas that are not conducive to short-term industry investment.

This topic is particularly dependent on public-private partnerships, and the definition of the nature of these partnerships is essential. To a degree, this depends on competing visions of success. One may consider a centralized network operations center (NOC) staffed by government, industry, and researchers, with a policy and procedural framework designed to allow seamless cooperation. An alternative view is a distributed capability in which different network operators share situational understanding but different parts of the picture are relevant to different system missions.

This section focuses on protection against cyber attack in the information domain. However, adversaries may choose to interleave their cyber-attack steps with attack steps in the other three domains of conflict, namely the physical, cognitive, and social domains. Research on situational understanding and attribution tools that integrate attack indicators from all four domains of conflict is also needed.

Measures of success

We will measure progress in numerous ways, such as decreased personnel hours required to obtain effective situational understanding; increased coverage of the attack space; improved ability, based on mission impact, to triage the serious attacks from the less important, and those where immediate reaction is needed from those where an alternative approach is acceptable; improved response and remediation time; and timely attribution with sound forensics. These all require reliable collection of data on the diverse set of attributes listed previously.

On the basis of these attributes, we could define measures of success at a high level within a given organization’s stated security goals. For example, an organization aimed primarily at maintaining customer access to a particular service might measure success by observing and tracking over time such variables as the estimated number of hosts capable of serving information over some service, and the estimated near-steady-state number or growth trend of these machines.


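The availability measure just described, tracking the estimated number of serving hosts and its near-steady-state trend, can be reduced to a small monitoring routine. The sliding-window least-squares slope used below is one illustrative estimator; the roadmap does not prescribe one.

```python
# Track the estimated number of serving hosts over time and report a simple
# trend: the least-squares slope over a sliding window of samples. The
# window size is an illustrative assumption.

def trend(counts, window=5):
    """Least-squares slope of the last `window` host-count samples."""
    ys = counts[-window:]
    n = len(ys)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    den = sum((x - mean_x) ** 2 for x in xs)
    return num / den if den else 0.0

healthy = [100, 101, 99, 100, 102, 101]   # near steady state
attacked = [100, 98, 90, 75, 52, 30]      # hosts dropping off under attack

print(round(trend(healthy), 2))    # small slope, near steady state
print(round(trend(attacked), 2))   # strongly negative slope
```

A slope near zero corresponds to the near-steady-state condition the text describes; a sustained negative slope is the kind of mission-impact signal that should feed the triage and reaction processes discussed earlier.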
Success depends on timely identification of adversaries, propagation of defenses, and remediation of affected systems. Another measure of success is tied to a variation of the false-positive/true-positive discussion, in that effective situational understanding should allow us to accurately categorize the potential impact of a detected attack. For either actual attacks or emulated attacks on a realistic testbed, we would hope to be able to answer the following questions:

- Can we differentiate between nuisance and serious strategic attacks, for example, by identifying a targeted attack against a critical sector?

- Can we share information across informational boundaries to enable cooperative response?

- Can we quickly quarantine intermediate attack platforms?

- Can we maintain or quickly restore critical functions, perhaps according to some contingency policy of acceptable degradation?

- Can we collect actionable data for ultimate attribution?

We require a methodology to quantify mission impact. Many stakeholders have a primary need to maintain continuity of operations in spite of a large-scale attack.

What needs to be in place for test and evaluation?

Several research testbeds are online or planned (e.g., the existing DETER lab testbed); research in situational understanding would be advanced via federation of these and other testbeds to emulate scale and cross-domain issues. Large-scale simulation may provide initial rough estimates of the efficacy of particular approaches. In terms of Internet-scale situational understanding, these testbeds can support advances in the malware and botnets topic area as well.

To what extent can we test real systems?

There are test environments that allow deployment of prototype cybersecurity modules. We should consider developing an open-source framework with defined standards and interfaces, and developing relationships with entities that could deploy it. Many results from this topic require distributed deployment for meaningful test and evaluation. The honeynet community may be a good deployment platform, with less resistance than commercial systems and less concern about privacy issues. Significant barriers exist in both the technical and organizational/policy domains, associated with the difficulty of protecting the privacy and security of the real systems being scrutinized.

Technologies resulting from research in this topic area range from individual-host-level components (for example, inherently monitorable systems) to global components (mechanisms for reliable geolocation). In the former category, R&D should be conducted from the start with system developers to ensure adoptability of resulting solutions. Success in the latter category may require some new frameworks in law, policy, and Internet governance.

[Fra2007] J. Franklin, V. Paxson, A. Perrig, and S. Savage. An inquiry into the nature and causes of the wealth of Internet miscreants. In Proceedings of the ACM Conference on Computer and Communications Security, pp. 375-388, October 2007.

[GAO2007] CYBERCRIME: Public and Private Entities Face Challenges in Addressing Cyber Threats. Report GAO-07-705, U.S. Government Accountability Office, Washington, D.C., July 2007.

[Hol2008] T. Holz, C. Gorecki, K. Rieck, and F. Freiling. Measuring and detecting fast-flux service networks. In Proceedings of the 15th Annual Network & Distributed System Security (NDSS) Symposium, February 2008.

[ICA2008] Draft Initial Report of the GNSO Fast Flux Hosting Working Group. ICANN, December 8, 2008.

[ISC] Internet Storm Center.

[Phi] PhishTank.


Current Hard Problems in INFOSEC Research
9. Provenance


What is the problem being addressed?

Individuals and organizations routinely work with, and make decisions based on,
data that may have originated from many different sources and also may have
been processed, transformed, interpreted, and aggregated by numerous entities
between the original sources and the consumers. Without good knowledge about
the sources and intermediate processors of the data, it can be difficult to assess the
data’s trustworthiness and reliability, and hence its real value to the decision-making
processes in which it is used.

Provenance refers to the chain of successive custody—including sources and operations—of computer-related resources such as hardware, software, documents,
databases, data, and other entities. Provenance includes pedigree, which relates
to the total directed graph of historical dependencies. It also includes tracking,
which refers to the maintenance of distribution and usage information that enables
determination of where resources went and how they may have been used.

Provenance is also concerned with the original sources of any subsequent changes
or other treatment of information and resources throughout the life cycle of data.
That information may be in any form, including software, text, spreadsheets, images,
audio, video, proprietary document formats, databases, and others, as well as meta-
level information about information and information transformations, including
editing, other forms of markup, summarization, analysis, transformations from one
medium to another, formatting, and provenance markings. Provenance is generally
concerned with the integrity and reliability of the information and meta-information
rather than just the information content of the document.

Provenance can also be used to follow modifications of information—for example, providing a record of how a document was derived from other sources or providing
the pervasive history through successive versions (as in the Concurrent Versions
System [CVS]), transformations of content (such as natural language translation
and file compression), and changes of format (such as Word to PDF).

The granularity of provenance ranges from whole systems through multi-level security, file, paragraph, and line, and even to bit. For certain applications (such as
access control) the provenance of a single bit may be very important. Provenance
itself may require meta-provenance, that is, provenance markings on the provenance
information. The level of assurance provided by information provenance systems
may be graded and lead to graded responses. Note that in some cases provenance
information may be more sensitive, or more highly classified, than the underlying
data. The policies for handling provenance information are complex and differ for
different applications and granularities.
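A common building block for the integrity concerns discussed in this section is to record each operation on a resource as a hash-chained provenance entry, so that later falsification of the history is detectable. The sketch below is illustrative only; it is not a design from the roadmap, and a deployed system would also have to protect the chain's storage and enforce the access policies described above.

```python
import hashlib
import json

# Sketch of tamper-evident provenance: each record notes an operation, its
# actor, and the hash of the previous record, forming a verifiable chain.

def add_record(chain, actor, operation):
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    body = {"actor": actor, "operation": operation, "prev": prev_hash}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    chain.append({**body, "hash": digest})

def verify(chain):
    """Recompute every hash; any retroactive edit breaks the chain."""
    prev = "0" * 64
    for rec in chain:
        body = {"actor": rec["actor"], "operation": rec["operation"],
                "prev": rec["prev"]}
        expected = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        if rec["prev"] != prev or rec["hash"] != expected:
            return False
        prev = rec["hash"]
    return True

chain = []
add_record(chain, "alice", "created report.doc")
add_record(chain, "bob", "translated to French")
add_record(chain, "carol", "converted Word to PDF")
print(verify(chain))            # True
chain[1]["actor"] = "mallory"   # retroactively falsify the history...
print(verify(chain))            # ...and the chain no longer verifies
```

Note that such a chain captures pedigree (the dependency history) but not tracking (where copies went), and, as the text observes, the chain itself may need its own protection and classification handling.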

To determine provenance accurately, we scientific fields are examples where prov- What is the current state of
must have trustworthy systems that reli- enance markings are beginning to be practice?
ably track both usage and modification used. Other fields that can benefit from
of information and other resources. As provenance maintenance systems include Physical provenance markings in jewelry
with all computer systems, security of critical infrastructure providers (e.g., in (e.g., claiming your diamond is from a
provenance tracking cannot be absolute, SCADA and other control systems), blood-free mining operation, your silver
and trustworthiness of provenance track- emergency responders, military person- or gold is pure, and the style is not a
ing systems will be relative to the value nel, and other decision makers. Users in knockoff copy of a designer’s), explo-
of the provenance to the users of the all these areas need reliable information sive components (e.g., nitrates), and
information and resources. For example, obtained from many sources, commu- clothing have historically added value
a simple change-tracking mechanism nicated, aggregated, analyzed, stored, and enabled tracing of origin. Docu-
in a document preparation system may and presented by complex information ment markings such as wax seals and
provide adequate provenance tracking from the point of view of a small group of authors collaborating in the publication of an article, even though the document change history might not be protected from unauthorized modification. On the other hand, the same mechanism may be inadequate in the context of legal discovery, precisely because the change-tracking mechanism does not guarantee the authenticity of the change history.

What are the potential threats?

Without trustworthy provenance tracking systems, there are threats to the data and to processes that rely on the data, including, for example, unattributed sources of software and hardware; unauthorized modification of data and provenance; unauthorized exposure of provenance, where presumably protected; and misattribution of provenance (intentional or otherwise).

Who are the potential beneficiaries? What are their respective needs?

The legal, accounting, medical, and […] processing systems. Information sources must be identified, maintained, and tracked to help users make appropriate decisions based on reliable understanding of the provenance of the data used as input to critical decisions.

In addition, new techniques are needed that will allow management of provenance for voluminous data. Part of what has made provenance easier to manage up to now is its small volume. Now, geospatial information-gathering systems are being planned that will have the capability of handling gigabytes of data per second, and the challenges of these data volumes will be exacerbated by collection via countless other sensor networks. Within 20 years, the government will hold an exabyte of potentially sensitive data. The systems for handling and establishing provenance of such volumes of information must function autonomously and efficiently with information sources at these scales.

Note that situations are likely to arise where absence of provenance is important—for example, where information that needs to be made public must not be attributable.

What is the current state of practice?

[…] signatures have been used to increase assurance of authenticity of high-value documents for centuries. More recently, the legal, auditing, and medical fields have begun to employ first-level authenticated provenance markings.

The current practice is rather rudimentary compared with what is needed to be able to routinely depend on provenance collection and maintenance. The financial sector (in part driven by Sarbanes-Oxley requirements) has developed techniques to enable tracking of origins, aggregations, and edits of data sets. Users of document production software may be familiar with change-tracking features that provide a form of provenance, although one that cannot necessarily be considered trustworthy.

As an example of provenance in which security of the provenance has not been a direct concern, software development teams have relied for decades on version control systems to track the history of changes to code and allow for historical versions of code to be examined and used. Similar kinds of systems are used in the scientific computing community.
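The distinction drawn above between ordinary change tracking and a trustworthy change history can be made concrete with a hash-chained log, in which each record commits to its predecessor so that later tampering is detectable. The following is an illustrative sketch only; the record fields and functions are invented, not taken from the roadmap.

```python
import hashlib
import json

# Illustrative sketch: a hash-chained change log. Each entry stores the hash
# of the previous entry, so modifying any earlier record invalidates the chain.

GENESIS = "0" * 64

def entry_hash(body):
    # Canonical JSON so the hash is independent of dict ordering.
    canonical = json.dumps(body, sort_keys=True).encode()
    return hashlib.sha256(canonical).hexdigest()

def append(log, actor, action):
    prev = log[-1]["hash"] if log else GENESIS
    body = {"actor": actor, "action": action, "prev": prev}
    entry = dict(body, hash=entry_hash(body))
    log.append(entry)

def verify(log):
    prev = GENESIS
    for entry in log:
        body = {k: entry[k] for k in ("actor", "action", "prev")}
        if entry["prev"] != prev or entry["hash"] != entry_hash(body):
            return False
        prev = entry["hash"]
    return True
```

If an attacker rewrites an early entry, its stored hash no longer matches; if the attacker also recomputes that hash, the next entry's `prev` field no longer matches, so the forgery is still detected unless the entire suffix is rewritten.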

What is the status of current research?

Current research appears to be driven largely by application- and domain-specific needs. Undoubtedly, these research efforts are seen as vital in their respective communities of interest.

Examples of active, ongoing research areas related to information and resource provenance include the following:

• Data provenance and annotation in scientific computing. Chimera [Fos2002] allows a user to define a workflow, consisting of data sets and transformation scripts. The system then tracks invocations, annotating the output with information about the runtime environment. The myGrid system [Zha2004], designed to aid biologists in performing computer-based experiments, allows users to model their workflows in a Grid environment. CMCS [Pan2003] is a toolkit for chemists to manage experimental data derived from fields such as combustion research. ESSW [Fre2005] is a data storage system for earth scientists; the system can track data lineage so that errors can be traced, helping to maintain the quality of large data sets. Trio [Wid2005] is a data warehouse system that uses data lineage to automatically compute the accuracy of the data. Additional examples can be found in the survey by Bose and Frew [Bos2005].

• Provenance-aware storage systems. A provenance-aware storage system supports automatic collection and maintenance of provenance metadata. The system creates provenance metadata as new objects are created in the system and maintains the provenance just as it maintains ordinary file-system metadata. See [PAS]. The Lineage File System [LFS] records the input files, command-line options, and output files when a program is executed; the records are stored in an SQL database that can be queried to reconstruct the lineage of a file.

• Chain of custody in computer forensics and evidence, and change control in software development. The Vesta [Hey2001] approach uses provenance to make software builds incremental and repeatable.

• Open Provenance Model. The Open Provenance Model is a recently proposed abstract data model for capturing provenance. The model aims to make it easier for provenance to be exchanged between systems, to support development of provenance tools, to define a core set of inference rules that support queries on provenance, and to support a technology-neutral digital representation of provenance for any object, regardless of whether or not it is produced by a computer system. See [OPM2007].

• Pedigree management. The Pedigree Management and Assessment Framework (PMAF) [SPI2007] enables a publisher of information in a network-centric intelligence gathering and assessment environment to record standard provenance metadata about the source, the manner of collection, and the chain of modification of information as it is passed through processing and assessment.

For further background, see the proceedings of the first USENIX workshop on the theory and practice of provenance [TAP2009].

FUTURE DIRECTIONS

On what categories can we subdivide the topic?

Provenance may be usefully subdivided along three main categories, each of which may be further subdivided, as follows:

• Representation: data models and representation structures for provenance (granularity and access control).

• Management (creation; access; annotation [mark original documents/resources with provenance metadata]; editing [provenance-mark specific fine-grained changes through the life cycle]; pruning [delete provenance metadata for performance, security, and privacy reasons]; assurance; and revocation)

• Presentation (query [request provenance information]; present [display provenance markings]; alert [notify when provenance absence, compromise, or fraud is detected])

Other useful dimensions to consider that are cross-cutting with respect to the three main categories include the following:

• System engineering (human-computer interfaces; workflow implications; and semantic webs)

• Legal, policy, and economic issues (regulation; standards; enforcement; market incentives)

These are summarized in Table 9.1.

What are the major research gaps?

Numerous gaps in provenance and tracking research remain to be filled, requiring a much broader view of the problem space and cross-disciplinary efforts to capture unifying themes and advance the state of the art for the benefit of all communities interested in provenance.

In the following itemization of gaps, the letters R, M, P annotating each point refer to the main categories—representation, management, and presentation, respectively—where uppercase denotes high relevance (R, M, P), and lowercase denotes some relevance (r, m, and p).

• Appropriate definitions and means for manipulating meaningful granularity of information provenance markings. Taxonomy of provenance. (R)

• Given trends in markup languages, the metadata and the underlying data are often intermixed (as in XML), thus presenting challenges in appropriate separation of concerns with data integrity and integrity of the provenance. (R)

• Confidential provenance and anonymous or partially anonymous provenance, to protect sources of information. (R)

• Representing the trustworthiness of provenance. (R)

• Pruning provenance, deleting and sanitizing extraneous items for privacy and for performance. (RMP)

• Efficiently representing provenance. An extreme goal would be to efficiently represent provenance for every bit, enabling bit-grained data transformations, while requiring a minimum of overhead in time and space. (RMp)

• Scale: the need for solutions that scale up and down efficiently. (R)

• Dealing with heterogeneous data types and data sensors, domain specificity, and dependency tracking. (Rm)

• Partial or probabilistic provenance (when the chain of custody cannot be stated with absolute certainty). (RMp)

• Coping with legacy systems. (RM)

• Intrinsic vs. extrinsic provenance and the consistency between them when both are available. (RMp)

TABLE 9.1: Potential Approaches

Category                       | Definition                                                   | Potential Approaches
Representation                 | Data models and structures for provenance                    | Varied granularities, integration with access controls
Management                     | Creation and revocation of indelible distributed provenance  | Trustworthy distributed embedding with integrated analysis tools
Presentation                   | Queries, displays, alerts                                    | Usable human interfaces
System engineering             | Secure implementation                                        | Integration into trustworthy systems
Legal, policy, economic issues | Social implications                                          | Regulation, standards, enforcement, market incentives
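The record-and-query mechanism attributed to the Lineage File System [LFS], which logs each program run's input files, command line, and output files in an SQL database and then queries that database to reconstruct a file's lineage, can be sketched roughly as follows. The schema and function names here are invented for illustration and are not taken from [LFS] itself.

```python
import sqlite3

# Sketch (invented schema) of LFS-style lineage recording: each program run is
# logged with its command line, inputs, and outputs; lineage is reconstructed
# by walking backward from outputs to inputs.

def make_db():
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE runs (run_id INTEGER PRIMARY KEY, command TEXT)")
    db.execute("""CREATE TABLE files (
        run_id INTEGER,
        role TEXT CHECK (role IN ('input', 'output')),
        path TEXT)""")
    return db

def record_run(db, command, inputs, outputs):
    cur = db.execute("INSERT INTO runs (command) VALUES (?)", (command,))
    run_id = cur.lastrowid
    rows = [(run_id, "input", p) for p in inputs] + \
           [(run_id, "output", p) for p in outputs]
    db.executemany("INSERT INTO files VALUES (?, ?, ?)", rows)

def lineage(db, path):
    """Return the set of files that transitively contributed to `path`."""
    ancestors, frontier = set(), {path}
    while frontier:
        out = frontier.pop()
        for (run_id,) in db.execute(
                "SELECT run_id FROM files WHERE role='output' AND path=?", (out,)):
            for (inp,) in db.execute(
                    "SELECT path FROM files WHERE role='input' AND run_id=?", (run_id,)):
                if inp not in ancestors:
                    ancestors.add(inp)
                    frontier.add(inp)
    return ancestors
```

For example, after recording a run that produces `clean.csv` from `raw.csv` and a second run that plots `clean.csv`, querying the plot's lineage returns both ancestor files.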

• Developing and adopting tools based on existing research results. (RMP)

• Centralized versus distributed provenance. (M)

• Ensuring the trustworthiness of provenance (integrity through the chain of custody). (M)

• Tracking: where did the information/resources go; how were they used? (M)

• Usable provenance respecting security and privacy concerns. (Mp)

• Information provenance systems should be connected to chain of custody, audit, and data forensic approaches. Provenance should connect and support, not repeat, functionality of these related services. (MP)

• User interfaces. When dealing with massive amounts of data from many sources with massive communication processes, how is the end user informed, and about what aspects of the information integrity? (P)

• Users of aggregated information need to be able to determine when less reliable information is interspersed with accurate information. It is of critical importance to identify and propagate the source and derivation (or aggregation) of the chain of custody with the information itself. (P)

What are some exemplary problem domains for R&D in this area?

• Computer Emergency Response Teams (CERTs) need to be able to prove from where they got information about vulnerabilities and fixes; when they publish alerts, they should be able to reliably show that the information came from an appropriate, credible source—for example, to avoid publishing an alert based on incorrect information submitted by a competitor. They also need their customers to believe that the information being sent is not from an imposter (although certificates are supposed to take care of this problem).

• Law enforcement forensics for computer-based evidence, surveillance data, and other computer artifacts, of sufficient integrity and oversight to withstand expert counter-testimony.

• Crime statistics and analyses from which patterns of misuse can be deduced.

• Medical and health care information, particularly with respect to data access and data modification.

• Identity-theft and identity-fraud detection and prevention.

• Financial sector—for example, with respect to insider information, funds transfers, and partially anonymous transactions.

• Provenance embedded within digital rights management.

In many of the above examples, some of the provenance may have to be encrypted or anonymized—to protect the identity of sources.

What R&D is evolutionary, and what is more basic, higher risk, game changing?

Information provenance presents a large set of challenges, but significant impact may be made with relatively modest technical progress. For example, it may be possible to develop a coarse-grain information provenance appliance that marks documents traversing an intranet or resting in a data center and makes those markings available to decision makers. Although this imagined appliance may not have visibility into all the inputs used to create a document, it could provide relatively strong assurances about certain aspects of the provenance of the information in question. It is important to find methods to enable incremental rollout of provenance tools and tags in order to maintain compliance with existing practices and standards. Another incremental view is to consider provenance as a static type system for data. Static type systems exist for many programming languages and frameworks that help prevent runtime errors. By analogy, we could create an information provenance system that is able to prevent certain types of misuse of data by comparing the provenance information with policies or requirements.

Resources

With respect to the extensive list of research gaps noted above, resources will be needed for research efforts, experimental testbeds, test and evaluation, and technology transition.

Measures of success

One indicator of success will be the ability to track the provenance of information in large systems that process and transform many different, heterogeneous
types of data. The sheer number of different kinds of sensors and information systems involved and, in particular, the number of legacy systems developed without any attention to maintenance of provenance present major challenges in this domain.

Red Teaming can give added analysis—for example, assessing the difficulty of planting false content and subverting provenance mechanisms.

Also, confidence-level indicators are desirable—for example, assessing the estimated accuracy of the information or the probability that information achieves a certain accuracy level.

More generally, analytic tools can evaluate (measure) metrics for provenance. Cross-checking provenance with archived file modifications in environments that log changes in detail could also provide measures of success. Efficiency of representations might also be a worthwhile indicator, as would be measures of overhead attributable to maintaining and processing provenance. Metrics that consider human usability of provenance would be very appropriate—especially if they can discern how well people actually are able to distinguish authentic and bogus information based on provenance.

What needs to be in place for test and evaluation?

Testing and evaluating the effectiveness of new provenance systems is challenging because some of the earliest adopters of the technology are likely to be in domains where critical decisions depend on provenance data. Thus, the impact of mistaken provenance could be large. Potential testbed applications should be considered, such as the following:

• In medical systems, personally identifiable information connected with embarrassing or insurance-relevant information may be used to make life-critical health care decisions.

• An emergency responder system might be considered that could provide more reliable provenance information to decision makers (e.g., who must be evacuated, who has been successfully evacuated from a building).

• A provenance system for the legal profession.

• Credit history and scoring—for example, provenance on credit history data might help reduce delays involved in getting a mortgage despite errors in credit reports.

• Depository services; title history; personnel clearance systems.
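The "static type system for data" analogy suggested under the question of evolutionary versus game-changing R&D can be illustrated in miniature: each datum carries provenance labels, and an operation is permitted only if policy allows every label it would consume. The labels, operations, and policy rules below are invented examples, not part of the roadmap.

```python
# Sketch of provenance-as-type-checking: an operation may consume a datum only
# if every provenance label on that datum is permitted for the operation.
# All labels and operation names here are hypothetical.

POLICY = {
    # operation -> provenance labels it is allowed to consume
    "publish_external": {"public", "reviewed"},
    "internal_analysis": {"public", "reviewed", "raw_sensor"},
}

def check(operation, provenance_labels):
    """Return True iff `operation` may consume data with these labels."""
    allowed = POLICY.get(operation, set())
    return set(provenance_labels) <= allowed
```

Like a static type checker, such a check can reject a misuse (say, externally publishing unreviewed sensor data) before the operation runs, rather than auditing the damage afterward.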

[Bos2005] R. Bose and J. Frew. Lineage retrieval for scientific data processing: a survey. ACM Computing Surveys,
37(1):1-28, 2005.

[Fos2002] I.T. Foster, J.-S. Voeckler, M. Wilde, and Y. Zhao. Chimera: A virtual data system for representing, querying,
and automating data derivation. In Proceedings of the 14th Conference on Scientific and Statistical Database
Management, pp. 37-46, 2002.

[Fre2005] J. Frew and R. Bose. Earth System Science Workbench: A data management infrastructure for earth science
products. In Proceedings of the 13th Conference on Scientific and Statistical Database Management, p. 180.

[Hey2001] A. Heydon, R. Levin, T. Mann, and Y. Yu. The Vesta Approach to Software Configuration Management.
Technical Report 168, Compaq Systems Research Center, Palo Alto, California, March 2001.

[LFS] Lineage File System.

[OPM2007] L. Moreau, J. Freire, J. Futrelle, R.E. McGrath, J. Myers, and P. Paulson. The Open Provenance Model.
Technical report, ECS, University of Southampton, 2007.

[Pan2003] C. Pancerella et al. Metadata in the collaboratory for multi-scale chemical science. In Proceedings of the
2003 International Conference on Dublin Core and Metadata Applications, 2003.

[PAS] PASS: Provenance-Aware Storage Systems.

[SPI2007] M.M. Gioioso, S.D. McCullough, J.P. Cormier, C. Marceau, and R.A. Joyce. Pedigree management and
assessment in a net-centric environment. In Defense Transformation and Net-Centric Systems 2007. Proceedings
of the SPIE, 6578:65780H1-H10, 2007.

[TAP2009] First Workshop on the Theory and Practice of Provenance, San Francisco, February 23, 2009.

[Wid2005] J. Widom. Trio: A system for integrated management of data, accuracy, and lineage. In Proceedings of the
Second Biennial Conference on Innovative Data Systems Research, Pacific Grove, California, January 2005.

[Zha2004] J. Zhao, C.A. Goble, R. Stevens, and S. Bechhofer. Semantically linking and browsing provenance logs for
e-science. In Proceedings of the 1st International Conference on Semantics of a Networked World, Paris, 2004.

Current Hard Problems in INFOSEC Research
10. Privacy-Aware Security


What is the problem being addressed?

The goal of privacy-aware security is to enable users and organizations to better express, protect, and control the confidentiality of their private information, even when they choose to—or are required to—share it with others. Privacy-aware security encompasses several distinct but closely related topics, including anonymity, pseudo-anonymity, confidentiality, protection of queries, monitoring, and appropriate accessibility. It is also concerned with protecting the privacy of entities (such as individuals, corporations, government agencies) that need to access private information. This document does not attempt to address the question of what information should be protected or revealed under various circumstances, but it does highlight challenges and approaches to providing technological means for safely controlling access to, and use of, private information. The following are examples of situations that may require limited sharing of private information:

• The need to prove things about oneself (for example, proof of residence)

• Various degrees of anonymity (protection of children online, victims of crime and disease, cash transactions, elections)

• Enabling limited information disclosure sufficient to guarantee security, without divulging more information than necessary

• Identity escrow and management

• Multiparty access controls

• Privacy-protected sharing of security and threat information, as well as audit logs

• Control of secondary reuse

• Remediation of incorrect information that is disclosed, especially if done without any required user approval

• Effective, appropriate access to information for law enforcement and national security

• Medical emergencies (for example, requiring information about allergic reactions to certain medications)

What are the potential threats?

Threats to private information may be intrinsic or extrinsic to computer systems. Intrinsic computer security threats attributable to insiders include mistakes, accidental breach, misconfiguration, and misuse of authorized privileges, as well as insider exploitations of internal security flaws. Intrinsic threats attributable to outsiders (e.g., intruders) include potential exploitations of a wide variety of intrusion techniques. Extrinsic threats arise once information has been viewed by users or made
available to external media (via printers, e-mail, wireless emanations, and so on), and has come primarily outside the purview of authentication, computer access controls, audit trails, and other monitoring on the originating systems.

The central problem in privacy-aware security is the tension between competing goals in the disclosure and use of private information. This document takes no position on what goals should be considered legitimate or how the tension should be resolved. Rather, the goal of research in privacy-aware security is to provide the tools necessary to express and implement trade-offs between competing legitimate goals in the protection and use of private information.

Who are the potential beneficiaries? What are their respective needs?

The beneficiaries for this topic are many and widely varied. They often have directly competing interests. An exhaustive list would be nearly impossible to produce, but some illustrative examples include the following:

• Individuals do not generally want to reveal any more private information than absolutely necessary to accomplish a specific goal (transaction, medical treatment, etc.) and want guarantees that the information disclosed will be used only for required and authorized purposes. The ability to detect and correct erroneous data maintained by other organizations (such as credit information bureaus) is also needed.

• Organizations do not want proprietary information disclosed for other than specific agreed purposes.

• Research communities (e.g., in medical research and social sciences) need access to accurate, specific, and complete data for such purposes as analysis, testing hypotheses, and developing potential treatments/solutions.

• Law enforcement requires access to personal information to conduct thorough investigations.

• National security/intelligence needs to detect and prevent terrorism and hostile activity by nation-states and nonstate actors while maintaining the privacy of U.S. persons and coalition partners.

• Financial sector organizations need access to data to analyze for indicators of potential fraud.

• Health care industries need access to private patient information for treatment purposes, billing, insurance, and reporting requirements.

• Product development and marketing uses data mining to determine trends, identify potential customers, and tune product offerings to customer needs.

• Business development, partnerships, and collaborations need to selectively reveal proprietary data to a limited audience for purposes of bidding on a job, engaging in a collaborative venture, pursuing mergers, and the like.

• Social networks need means to share personal information within a community while protecting that information from abuse (such as spear-phishing).

• Governments need to collect and selectively share information for such purposes as census, disease control, taxation, import/export control, and regulation of commerce.

What is the current state of practice?

Privacy-aware security involves a complex mix of legal, policy, and technological considerations. Work along all these dimensions has struggled to keep up with the pervasive information sharing that cyberspace has enabled. Although the challenges have long been recognized, progress on solutions has been slow, especially on the technology side. At present, there are no widely adopted, uniform frameworks for expressing and enforcing protection requirements for private information while still enabling sharing for legitimate purposes. On the technology side, progress has been made in certain application areas related to privacy. Examples of privacy-enhancing technologies in use today include the following:

• Access controls (e.g., discretionary and mandatory, role-based, capability-based, and database management system authorizations) attempt to limit who can access what information,
but they are difficult to configure to achieve desired effects, are often too coarse-grained, and may not map well to actual privacy and data use policies.

• Encrypted storage and communications can prevent wholesale loss or exposure of sensitive data but do very little to prevent misuse of data accessed within allowed privileges or within flawed system security.

• Anonymous credential systems may enable authorization without necessarily revealing identity (for example, Shibboleth [Shib]).

• Anonymization techniques, such as mix networks, onion routing, anonymizing proxy servers, and censorship-resistant access technology, attempt to mask associations between identities and information content.

• One-time-use technologies, such as one-time authenticators and smart cards, can also contribute.

At the same time, there are known best practices that, if consistently adopted, would also advance the state of the practice in privacy-preserving information sharing. These include

• Use of trustworthy systems and sound system administration, with strong authentication, differential access controls, and extensive monitoring

• Adherence to the principle of least privilege

• Minimizing data retention time appropriately

• Protecting data in transmission and storage (e.g., with encryption)

• Conducting sensible risk analyses

• Auditing of access audit logs (actually examining them, not just keeping them)

• Privacy policy negotiation and management

What is the status of current research?

Security with privacy appears to require establishment of fundamental trust structures to reflect demands of privacy. It also requires means for reducing the risks of privacy breaches that can occur (accidentally or intentionally) through the use of technologies such as data mining. Ideas for reconciling such technologies in this context include privacy-aware, distributed association-rule mining algorithms that preserve privacy of the individual sites, queries on encrypted data without decrypting, and a new formulation to address the impact of privacy breaches that makes it possible to limit breaches without knowledge of original data distribution.

Digital rights management (DRM) techniques, while not currently applied for privacy protection, could be used to protect information in such diverse settings as health care records and corporate proprietary data, allowing the originator of the information to retain some degree of access control even after the information has been given to third parties, or providing the ability later to identify the misusers. A significant challenge to the DRM approach is the development of an indisputable definition of who controls the distribution. For example, should medical information be controlled by the patient, by doctors, by nurses, by hospitals, or by insurance companies, or by some combination thereof? Each of them may be the originator of different portions of the medical information. Information provenance (Section 9) interacts with privacy in defining the trail of who did what with the medical information, and both interact with system and information integrity.

Many examples of ongoing or planned privacy-related research are of interest here. For example, the following are worth considering. NSF Trustworthy Computing programs have explicitly included privacy in recent solicitations. Some research projects funded by the National Research Council Canada are also relevant ([…]/r-d/security-securite_e.html), as are British studies of privacy and surveillance, including a technology roadmap ([…]/reports/pdf/dilemmas_of_privacy_and_surveillance_report.pdf).

Other privacy-related research includes the following:

• Microsoft Research database privacy (http://www.research.[…] and […]mscorp/twc/iappandrsa/research.mspx)

• Project Presidio: collaborative policies and assured information […]

• Stanford University Web Security Research: private information retrieval (http://crypto.stanford.[…])

• Security with Privacy ISAT briefing (http://www.cs.berkeley.edu/~tygar/papers/ISAT-final-briefing.pdf)

• Naval Research Lab: Reputation in Privacy Enhancing Technologies (http://chacs.[…]cfp02.pdf)

• ITU efforts related to security, privacy, and legislation (http://[…]publications/2006/research-legislation.pdf)

• DHS report on the ADVISE program ([…]xlibrary/assets/privacy/privacy_rpt_advise.pdf)

• UMBC Assured privacy-preserving data mining, recipient of DoD's MURI award (http://[…]muri/)

• Anonymous communication […]

• Statistics research community, as in the Knowledge Discovery and Data Mining (KDD) conferences […]

• Framework for privacy metrics [Pfi+2001]

• See also ([…]infosec/faith.pdf) and (http://[…]archives/2007/03/security_[…])

FUTURE DIRECTIONS

On what categories can we subdivide the topic?

For purposes of a research and development roadmap, privacy-aware information sharing can be usefully divided along the following categories, directly mirroring the gaps noted above. See Table 10.1.

• Selective disclosure and privacy-aware access to data: theoretical underpinnings and system engineering.

• Specification frameworks for providing privacy guarantees: languages for specifying privacy policies, particularly if directly implementable; specifications for violations of privacy; and detecting violations of privacy.

• Policy issues: establishing privacy policies, data correction, propagation of updates, privacy implications of data integrity. This also includes legal issues (aspects of current law that constrain technology development; aspects of future law that could enable technology development; questions of jurisdiction), standards (best practices; privacy standards analogous to Sarbanes-Oxley; HIPAA), and economics and security (e.g., http://www.[…]html).

What are the major research gaps?

Following are some of the gaps in privacy-aware security that need to be addressed.

Selective disclosure and privacy-aware access

• Sound bases are needed for selective disclosure through techniques such as attribute-based encryption, identity-based encryption, collusion-resistant broadcast encryption, private information retrieval (PIR), and oblivious transfer.

• How do we share data sets while reducing the likelihood that arbitrary users can infer individual identification? (The U.S. Census Bureau has long been concerned about this problem.)

• Data sanitization techniques are needed that are nonsubvertible and that at the same time do not render analysis useless.

• More generally, data quality must be maintained for research purposes while protecting privacy, avoiding profiling or temporal analysis to deanonymize source data.

• Irreversible transformations of content are needed that exhibit statistical characteristics
consistent with the original data without revealing the original content.

• Privacy and security for very large data sets does not scale easily—for example, maintaining privacy of individual data elements is difficult.

• Associations of location with users and information may require privacy protection, particularly in mobile devices.

• Low-latency mix networks can provide anonymization but need further research.

• Mechanisms to enforce retention limits are lacking.

• Sharing of security information such as network trace data needs privacy controls.

Specification frameworks

• Specification frameworks for expressing privacy guarantees are weak or missing. In particular, specification and enforcement of context-dependent policies for data sharing and use are needed.

Policy issues

• Distinctions between individual and group privacy are unclear.

• Release of bogus information about individuals is poorly handled today. However, with stronger protection it becomes more difficult to check validity of information.

• Information gathered from some persons can allow probabilistic inference of information about others.

• Policies for data collection and sharing with regard to privacy are needed, especially relating to what can be done with the private data. For example, who are the stakeholders in genetic information? What policies are needed for retention limits?

• Communications create further privacy problems relating to identification of communication sources, destinations, and patterns that can reveal information, even when other data protections are in place.

• Policies are needed for dealing with privacy violations, detection of violations, consequences of violations, and remediation of damage.

What are some exemplary problems for R&D on this topic?

Several problem domains seem particularly relevant, namely, data mining for medical research, health care records, data mining of search queries, census records, and student records at universities.

What R&D is evolutionary and what is more basic, higher risk, game changing?

Near term

• Deriving requirements for automating privacy policies: learning from P3P

• Policy language development

• Implement best practices

• Research into legal issues in communications privacy
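The near-term items on automating privacy policies and policy language development suggest how small a directly implementable policy vocabulary can start: rules that name a data category, its permitted purposes, and a retention limit. The categories, purposes, and limits below are invented for illustration; a real language would need standardization along the lines of P3P.

```python
# Hypothetical miniature of a directly implementable privacy-policy language:
# each rule gives a data category, the purposes for which it may be used,
# and a retention limit in days. The vocabulary is invented, not standard.

RULES = {
    "contact_info": {"purposes": {"billing", "delivery"}, "retention_days": 90},
    "purchase_history": {"purposes": {"billing"}, "retention_days": 365},
}

def permitted(category, purpose, age_days):
    """Return True iff data of `category`, `age_days` old, may be used for `purpose`."""
    rule = RULES.get(category)
    return (rule is not None
            and purpose in rule["purposes"]
            and age_days <= rule["retention_days"])
```

Because such rules are machine-checkable, the same policy text can drive both enforcement at access time and after-the-fact detection of violations, two of the gaps noted above.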

TABLE 10.1: Potential Approaches

Categories                                       | Definition                                  | Potential Approaches
Selective disclosure and privacy-preserving access to data | Technology to support privacy policies      | Varied granularities, integration with access controls and encryption
Specification frameworks                         | Creation and revocation in distributed provenance | Implementable policy languages, analysis tools
Other privacy issues                             | Policies and procedures to support privacy  | Canonical policies, laws, standards, economic models underlying privacy
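One research idea noted earlier under the status of current research, queries on encrypted data without decrypting, can be illustrated in toy form with a keyed-hash index: the server stores only HMAC tokens of field values, and only a key holder can form the token needed to ask for equality matches. This sketch leaks equality patterns (identical values produce identical tokens) and is not a real searchable-encryption scheme; all names here are invented.

```python
import hashlib
import hmac

# Toy equality search over data the server never sees in plaintext:
# the index maps HMAC(key, value) -> record ids, so the server can answer
# equality queries for a key holder without learning the values themselves.

def token(key, value):
    return hmac.new(key, value.encode(), hashlib.sha256).hexdigest()

def build_index(key, records):
    """records: iterable of (record_id, field_value) pairs."""
    index = {}
    for rid, value in records:
        index.setdefault(token(key, value), []).append(rid)
    return index

def query(index, key, value):
    return index.get(token(key, value), [])
```

The design choice is deliberate: deterministic tokens make lookups trivial but reveal which records share a value, which is exactly the kind of leakage that the stronger medium- and long-term techniques listed nearby (searching encrypted data without revealing the query, private information retrieval) aim to eliminate.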

Medium term

- Anonymous credentials
- Role-based Access Control (RBAC)
- Attribute-based encryption
- Distributed RBAC: no central enforcement mechanism required
- Protection against excess disclosure during inference and aggregation
- Application of DRM techniques for privacy
- Searching encrypted data without revealing the query; more generally, computation on encrypted data

Long term

- Private information retrieval
- Multiparty communication
- Use of scale for privacy
- Resistance to active attacks for deanonymizing data
- Developing measures of privacy

Game changing

- Limited data retention
- Any two databases should be capable of being federated without loss of privacy (privacy composability)
- Low-latency private communications resistant to timing attacks

Resources

This topic is research-intensive, with considerable needs for testbeds demonstrating effectiveness and for subsequent technology transfer to demonstrate the feasibility of the research. It will require considerable commitment from government funding agencies, corporations, and application communities such as health care to ensure that the research is relevant and that it has adequate testbeds for practical applications. It will also engender considerable scrutiny from the privacy community to ensure that the approaches are adequately privacy preserving.

Measures of success

A goal for addressing concerns regarding both data mining and identity theft is to quantify users' ability to retain control of sensitive information and its dissemination even after it has left their hands. For data mining, quantitative measures of privacy have been proposed only recently and are still fairly primitive. For example, it is difficult to quantify the effect of a release of personal information without knowing the full context with which it may be fused and within which inferences may be drawn. Evaluation and refinement of such metrics are certainly in order.

Useful realistic measures are needed for evaluating privacy and for assessing the relative values of information. Possible measures of progress/success include the following:

- Rate of publication of privacy-breach stories in the media.
- Database measures: Can we simulate a database without real data? How effective would approaches be that cleanse data by randomization? Can we use such approaches to derive metrics? (Statistical communities have worked on this, as in determining statistical similarity of purposely fuzzed data sets.) How many queries are needed to get to specific data items for individuals in databases that purport to hide such information?
- Adversary work factors to violate privacy.
- Risk analysis: This has been applied to security (albeit somewhat haphazardly). Can risk analysis be effectively applied to privacy?
- Costs for identity-fraud insurance.
- Black market price of stolen identity.

What needs to be in place for test and evaluation?

Access to usable data sets is important, for example,

- Census data
- Google Trends
- PREDICT (e.g., network traffic data)
- Medical research data
- E-mail data (e.g., for developing spam filters)

Possible experimental testbeds include the following:

- Isolated networks and their users
- Virtual societies

In addition, privacy Red Teams could be helpful.
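The question of how many queries suffice to extract an individual's data from a database that "purports to hide such information" has a classic minimal answer: two. The sketch below (records and names are invented for the example) shows a differencing attack against an aggregate-only query interface.

```python
# A database that answers only aggregate (SUM) queries still leaks individual
# values: two sums whose underlying sets differ by exactly one person reveal
# that person's value. Records are invented for illustration.
RECORDS = [
    ("alice", 41, 72000),
    ("bob",   52, 58000),
    ("carol", 37, 93000),
]

def sum_salary(predicate):
    """Aggregate-only interface: returns the sum of salaries of matching rows."""
    return sum(salary for (_, age, salary) in RECORDS if predicate(age))

# Query 1: everyone. Query 2: everyone except the one person aged over 50.
q1 = sum_salary(lambda age: True)
q2 = sum_salary(lambda age: age <= 50)
print(q1 - q2)  # 58000 -- Bob's "hidden" salary, recovered in two queries
```

Defenses such as query-set-size restrictions or noise addition raise the number of queries needed, which is exactly why "adversary work factor" is a plausible metric here.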


Many additional references can be found by browsing the URLs noted above in the text of this section.

Current Hard Problems in INFOSEC Research
11. Usable Security


What is the problem being addressed?

Security policy making tends to be reactive in nature, developed in response to an
immediate problem rather than planned in advance based on clearly elucidated
goals and requirements, as well as thoughtful understanding and analysis of the
risks. This reactive approach gives rise to security practices that compromise system
usability, which in turn can compromise security—even to the point where intended
improvements in a system’s security posture are negated. Typically, as the security
of systems increases, the usability of those systems tends to decrease, because secu-
rity enhancements are commonly introduced in ways that are difficult for users to
comprehend and that increase the complexity of users’ interactions with systems.
Any regular and frequent user of the Internet will readily appreciate the challenge
of keeping track of dozens of different passwords for dozens of different sites, or
keeping up with frequent patches for security vulnerabilities in myriad applications.
Many users also are confused by security pop-up dialogs that offer no intuitive
explanation of the apparent problem and, moreover, appear completely unable
to distinguish normal, legitimate activity, such as reading e-mail from a friend,
from a phishing attempt. Such pop-ups are typically ignored, or else blindly
accepted [Sun+09].

People use systems to perform various tasks toward achieving some goal. Unless the
tasks at hand are themselves security related, having to think about security inter-
feres with accomplishing the user’s main goal. Security as it is typically practiced in
today’s systems increases complexity of system use, which often causes confusion and
frustration for users. When the relationship between security controls and security
risks is not clear, users may simply not understand how best to interact with the
system to accomplish their main goals while minimizing risk. Even when there is
some appreciation of the risks, frustration can lead users to disregard, evade, and
disable security controls, thus negating the potential gains of security enhancements.

Security must be usable by persons ranging from nontechnical users to experts and
system administrators. Furthermore, systems must be usable while maintaining
security. In the absence of usable security, there is ultimately no effective security.
The need for usable security and the difficulties inherent in realizing adequate
solutions are increasingly being recognized. In attempting to address the chal-
lenges of usability and security, several guiding principles are worth considering.
Furthermore, when we refer here to usable security, we are really concerned with
trustworthy systems whose usability has been designed into them through proactive
requirements, constructive architectures, sound system and software development
practices, and sensible operation. As observed in previous sections, almost every
system component and every step in the development process has the potential to
compromise trustworthiness. Poor usability is a huge potential offender.

Security issues must be made as transparent as possible. For example, security mechanisms, policies, and controls must be intuitively clear and perspicuous to all users and appropriate for each user. In particular, the relationships among security controls and security risks must be presented to users in ways that can be understood in the context of system use.

Users must be considered as fundamental components of systems during all phases of the system life cycle. Different assumptions and requirements pertaining to users' interactions with systems must be made explicit for each type of user—novices, intermittent users, experts, and system administrators, to name a few. In general, one-size-fits-all approaches are unlikely to succeed.

Relevant education about security principles and operational constraints must be pervasive. Security issues can never be completely hidden or transparent. There will always be the possibility of conflict between what users might want to accomplish most easily and the security risks involved in doing so. Helping users to understand these trade-offs must be a key component of usable security.

Security metrics must take usability into account. Although one might argue that a system with a certain security control is in principle more secure than an otherwise equivalent system without that control—for example, a web browser that supports client/server authentication vs. one that does not—the real security may in fact be no greater (and possibly even less) in a system that implements that security control, if its introduction compromises usability to the point that users are driven to disable it or switch to an alternative system that is more user friendly but less secure.

What are the potential threats?

The threats from the absence of usable security are pervasive and mostly noted in the above discussion. However, these threats are somewhat different from those in most of the other 10 topics, in that they are typically more likely to arise from inactions, inadvertence, and mistakes by legitimate users. On the other hand, threats of misuse by outsiders and insiders similar to those in the other topics can certainly arise as a result of the lack of usability.

Who are the potential beneficiaries? What are their respective needs?

Although the problem of achieving usable security is universal—it affects everyone, and everyone stands to benefit enormously if we successfully address usability as a core aspect of security—it affects different users in different ways, depending on applications, settings, policies, and user roles. The guiding principles may indeed be universal, but as suggested above there is certainly no general one-size-fits-all solution. Examples of different categories of users and ways in which they are affected by problems in usable security are shown in Table 11.1.

What is the current state of practice?

Although the importance of security technology is widely recognized, it is often viewed as a hindrance to productivity. Security is poorly understood by nonexperts, and the consequences of disabled or weakened security controls are often indirect and not immediately felt; the worst effects may be felt by those not directly involved (e.g., credit card fraud), leading users to question the value of having security technology at all.

At the same time, consciousness of security issues is becoming more widespread, and technology developers are paying increasing attention to security in their products and systems. However, usability in general appears not to be much better understood by software practitioners than security is. This situation makes the problem of usable security even more challenging, since it combines two problems that are difficult to solve individually.

Usability of systems tends to decrease as attempts are made to increase security and, more broadly, trustworthiness. Many current security systems rely on humans performing actions (such as typing passwords) or making decisions (such as whether or not to accept an SSL certificate). For example, one e-mail system requires that users reauthenticate every 8 hours to assure that they are actually the authorized person—a requirement that runs directly counter to system usability. As another example, some web browsers warn users before any script is run, but users may still browse onto a web server that has scripts on every page, causing pop-up alerts to appear on each page.

Many of the potential impacts of security that is not usable involve increased susceptibility to social-engineering attacks.
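The 8-hour reauthentication policy mentioned above is easy to make concrete. The sketch below (interval, class, and method names are invented for illustration) shows the policy's mechanical core, and why it interrupts users at arbitrary clock-driven moments rather than at task boundaries:

```python
from datetime import datetime, timedelta

REAUTH_INTERVAL = timedelta(hours=8)  # the policy from the example above

class Session:
    """Toy session that demands reauthentication on a fixed clock,
    regardless of what the user is in the middle of doing."""

    def __init__(self, authenticated_at):
        self.authenticated_at = authenticated_at

    def needs_reauth(self, now):
        return now - self.authenticated_at >= REAUTH_INTERVAL

session = Session(authenticated_at=datetime(2009, 11, 1, 9, 0))
print(session.needs_reauth(datetime(2009, 11, 1, 16, 59)))  # False: 7h59m in
print(session.needs_reauth(datetime(2009, 11, 1, 17, 0)))   # True: fires mid-task or not
```

A more usability-aware variant might defer the prompt until the current task completes or the user is idle; nothing in the timer itself knows about either.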

TABLE 11.1: Beneficiaries, Challenges, and Needs

Nontechnical users
  Challenges: Unfamiliar technology and terminology; security risks unclear
  Needs: Safe default settings; automated assistance with simple, intuitive explanations when user involvement is required

Occasional users
  Challenges: Changing security landscape; deferred security maintenance (e.g., antivirus updates, software patches) inhibits on-demand system use
  Needs: Automated, offline system maintenance; automated adaptation of evolving security controls to learned usage patterns

Frequent and expert users
  Challenges: Hidden or inflexible security controls intended for nontechnical users; obtrusive security pop-up dialogs
  Needs: Security controls that adapt to usage patterns; security control interfaces that remain inconspicuous and unobtrusive, yet readily accessible when needed

Users with special needs (e.g., visual, auditory, motor control challenges)
  Challenges: From a security standpoint, similar to other users, but with added challenges arising from special interface needs
  Needs: Adaptations of security controls (such as biometrics) that accommodate special needs; for example, fingerprint readers may be unsuitable for users with motor control challenges

System administrators
  Challenges: Configuration and maintenance of systems across different user categories; evolving security threats and policies
  Needs: Better tools that help automatically configure systems according to organizational policies and user requirements; better tools for monitoring security posture and responding to security incidents

System designers
  Challenges: Lack of security and/or usability emphasis in education and training
  Needs: Design standards and documented best practices for usable security

System developers
  Challenges: Complexity of adding security and usability requirements into development processes
  Needs: Integrated development environments (IDEs) that incorporate security and usability

Policy makers
  Challenges: Difficulty in capturing and expressing security requirements and relating them to organizational workflows
  Needs: Tools for expressing and evaluating security policies, especially with respect to trade-offs between usability (productivity) and security

This might be an adversary sending e-mail ranging from "this configuration change makes your system more usable" to "this patch must be manually installed". But it also involves attackers who gain the trust of users by helping those users cope with difficult-to-use systems. Thus, resistance to social engineering must be built into systems, and suitable requirements and metrics included from the outset of any system development.

A few illustrative examples from the current state of the practice may help illuminate challenges in usable security and identify some promising directions from which broader lessons may be drawn.

A somewhat positive example of usable security is transparent file-system encryption. When first introduced, file encryption technology was cumbersome to configure, even for experts, and imposed significant system overhead. Key management was typically either cumbersome or reduced to one key or perhaps just a few. Many newer operating systems now offer ready-to-use full-disk encryption out of the box, requiring little more than a password from the user, while imposing no noticeable performance penalty.
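The shift described above, from cumbersome encryption tools toward encryption hidden behind an ordinary read/write interface, can be sketched in miniature. The wrapper below is strictly a toy (an unauthenticated SHA-256 counter keystream, one keystream per store; nothing like a production full-disk encryptor), meant only to show the "transparent" shape: the caller never touches ciphertext or keys.

```python
import hashlib

class TransparentStore:
    """Toy transparent encryption: callers read and write plaintext; the
    simulated disk holds only ciphertext. Uses a SHA-256-based counter
    keystream purely for illustration -- NOT real disk encryption."""

    def __init__(self, password):
        self._key = hashlib.sha256(password.encode()).digest()
        self._blobs = {}  # simulated disk: name -> ciphertext bytes

    def _keystream(self, n):
        out = b""
        counter = 0
        while len(out) < n:
            out += hashlib.sha256(self._key + counter.to_bytes(8, "big")).digest()
            counter += 1
        return out[:n]

    def write(self, name, plaintext):
        ks = self._keystream(len(plaintext))
        self._blobs[name] = bytes(a ^ b for a, b in zip(plaintext, ks))

    def read(self, name):
        ct = self._blobs[name]
        ks = self._keystream(len(ct))
        return bytes(a ^ b for a, b in zip(ct, ks))

store = TransparentStore("correct horse battery staple")
store.write("notes.txt", b"meeting at noon")
print(store.read("notes.txt"))  # b'meeting at noon'
print(store._blobs["notes.txt"] == b"meeting at noon")  # False: at-rest copy is ciphertext
```

The usability lesson is the interface, not the cryptography: `write`/`read` look exactly like unencrypted storage, which is what lets real full-disk encryption succeed where earlier per-file tools failed.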

Other, more mixed examples illustrate how security technology still falls short in terms of usability:

- Passwords. Security pitfalls of poorly implemented password schemes have been extensively documented over the years. When users must resort to writing them on slips of paper or storing them unencrypted on handheld devices, the risk of password exposure may outweigh the increased security of strong passwords. Nevertheless, passwords are often simplistically believed to be a usable security mechanism, and elaborate procedures are promulgated purporting to define sensible password practices (with respect to frequency of changing, not using dictionary words, including nonalphabetic characters, etc.). Tools that help users select good passwords and manage their passwords have been touted to enhance both usability and security. However, to make passwords more effective for stronger security, they must be so long and so complex that users cannot remember them, which seriously compromises usability.

- Security pop-up dialogs. No matter how much effort is put into making security controls automated and transparent, there are inevitably situations that require users to make security-related decisions. Today, unfortunately, user involvement appears to be required too often and usually in terms that nontechnical users have difficulty understanding, leading to the frustration effects noted earlier.

- Mail authentication. There are mechanisms to authenticate senders of valid e-mails, such as SPF (Sender Permitted From). DomainKeys Identified Mail (DKIM) is an e-mail authentication technology that allows e-mail recipients to verify whether messages that claim to have been sent from a particular domain actually originated there. It operates transparently for end users and makes it easier to detect possible spam and phishing attacks, both of which often rely on domain spoofing. Some large e-mail service providers now support DKIM.

- Client-side certificates. Most web browsers and e-mail applications in widespread use today support user authentication via certificates based on public-key cryptography. However, the technology is not well understood by nonexpert users, and typically the integration of client-side certificate authentication into applications makes the use and management of these certificates opaque and cumbersome for users.

- The SSL lock icon. This approach gives the appearance of security, but its limitations are not generally understood. For example, it may be totally spoofed. Its presence or absence may also be ignored.

- "Web of trust"-like approaches to certificate trust (e.g., Google, Net Trust). Although these seem to enhance usability, many users may not adequately understand the implications of accepting trust information from systems that may be unknown to those users. They are also unlikely to understand fully what factors might be helpful, harmful, or some of each.

- CAPTCHA systems. A CAPTCHA (Completely Automated Public Turing test to tell Computers and Humans Apart) is a challenge-response mechanism intended to ensure that the respondent is a human and not a computer. CAPTCHAs are familiar to most web users as distorted images of words or other character sequences that must be input correctly to gain access to some service (such as a free e-mail account). To make a CAPTCHA effective for distinguishing humans from computers, solving it must be difficult for computers but relatively easy for humans. This balance has proven difficult to achieve, resulting in CAPTCHAs that are either breakable by computers or too difficult for humans. Another challenge is to produce CAPTCHAs that accommodate users with special needs.

- Not accounting for cultural differences and personal disabilities. For example, people of one ethnic group tend to have difficulty recognizing different faces of people in other ethnic groups, which could cause usability differences in authentication. Similarly, CAPTCHAs could be culture dependent. In addition, people with a prosopagnosia disorder have difficulty distinguishing between different people by sight, which would seriously impair their ability to distinguish among different pictorial authenticators and CAPTCHAs.

- Policies and centralized administration. Lack of user flexibility is common. On the other hand, it is generally unwise to expect users to make security/usability trade-off evaluations.

- Federated identity management. Cross-domain access is complex. Simplistic approaches such as single sign-on can lead to trust violations. Conversely, managing too many passwords is unworkable. More work is needed on access cards such as DoD's Common Access Card (CAC) system (which combines authentication, encryption of files and e-mail, and key escrow) and other such systems to identify security vulnerabilities. In all such systems, usability is critical.

- PGP, S/MIME, and other approaches to secure e-mail. Many past attempts to encapsulate encryption into mail environments have been hindered by the lack of seamless usability.

- Links. Phishing, cross-site scripting, and related problems with bogus URLs are laden with risks. URLs may seem to increase usability, but malicious misuse of them can seriously diminish security.

- Overloading of security attributions in the context of domain-validation certificates. People tend to trust certificates too much or else are overwhelmed by their presence.

- Revocation. Dealing with change is typically difficult, but usability may be impaired when revocation is required. If not carefully designed into systems in advance with usability and understandability in mind, mechanisms for revocation are likely to have unintended consequences.

What is the status of current research?

Following is a brief summary of some current research, along with gaps. For background, see [SOU2008].

- Usable authentication. For example, visual passwords and various other authentication approaches exist but need much further work to determine whether they can be used effectively. At present, they are often very difficult to use and seem unlikely to scale well to large numbers of passwords.

- User security. Currently funded security-related usability research includes the CMU CyLab Usable Privacy and Security Laboratory (CUPS) and Stanford University work on Web integrity. A list of CUPS projects with descriptions and papers can be found at the laboratory's website.

- Ease of administration. Relatively little research exists in this area. An example of a new direction might be making Tor more usable for administration.

- Highlighting important changes to systems (e.g., operating systems, middleware, and applications) that could improve security and usability (rather than just one).

- Reevaluating decisions/trade-offs made in past systems. A sense of history in cybersecurity is vital but is too often weak.

- The One Laptop Per Child Bitfrost security model.

- Integration of biometrics with laptops (e.g., fingerprint, facial recognition); this is in practice today, for better or worse. It may be good for administration, but perhaps not so good from the point of view of user understanding.

FUTURE DIRECTIONS

On what categories can we subdivide the topic?

We consider the following three categories as a useful subdivision for formulating a research roadmap for usability and security:

- Interface design (I)
- Science of evaluation for usable security (E)
- Tool development (T)

The following are second-level bins, with descriptors defining their relevance to I, E, and T:

ƒƒ Principles of usable security; a designing for and evaluating usability security of novel approaches and out-
taxonomy of usable security (E) of computer systems. However, only of-the-box thinking in usable security.
ƒƒ Understanding users and their a small fraction of this research has
interactions with security controls focused on usability specifically as it There is a need to increase knowledge of
(IET) relates to security. At the same time, usability among security practitioners.
security research tends to focus on spe- A common lament in industry is that
ƒƒ Usable authentication and
cific solutions to specific problems, with programmers are too rarely taught how
authorization technology (IT) little or no regard for how those solu- to create secure programs, but even
ƒƒ Design of usable interfaces for tions can be made practical and, most those who do receive such training are
security, with resistance to social importantly, transparent to users and unlikely to be taught how to provide
engineering (I) system administrators. To the extent that both security and usability simultane-
ƒƒ Development tools that assist in security practitioners do consider the ously. Just as with security, usability is
the production of systems that practical implications of their proposed not a property that can easily be added
are both more secure and more solutions, the result is often a new or to existing systems, and it is not a prop-
usable (T) modified user interface component for erty that one member of a large team can
configuring and controlling the security provide for everyone else. The implica-
ƒƒ Adapting legacy systems
technology, which does little to address tion is that a large body of designers,
ƒƒ Building new systems the fundamental problem that most programmers, and testers needs to have a
ƒƒ Usable security for embedded users cannot and do not want to be much deeper understanding of usability.
and mobile devices (IET) responsible for understanding and man- Adding usability to existing curricula
aging security technology; they simply would be a good start but could not
ƒƒ Evaluation approaches and
want it to do the right thing and stay be expected to pay dividends for years
metrics for usability and
out of the way. to come. Methods to increase under-
security (E)
standing of usability among software
ƒƒ User education and In short, usable security is not funda- developers already working in industry
familiarization with security mentally about better user interfaces to are equally necessary.
issues and technology (IE) manage security technology; rather, it is
ƒƒ User feedback, experience (e.g., about evaluating security in the context We need to identify a useful framework
usability bug reports) (E) of tasks and features and of the user, and for discussing usability as it relates to
ƒƒ Security policies (especially, rearchitecting it to fit into that context. security, such as the following:
implementation of them) that
It is important to note the inherently ƒƒ Research on usable security
increase both usability and
interdisciplinary nature of usability “out of the box” (security
security (ET)
and security. Security researchers and transparency).
ƒƒ Tools for evaluating security
practitioners cannot simply expect that ƒƒ Identification of the most useful
the HCI experts will fix the usabil- points in the R&D pipeline at
ƒƒ Market creation for usable ity problem for trustworthy systems.
security technology which to involve users in the
Addressing the problem adequately
development of trustworthy
will require close collaboration between
members of the security and usabil-
What are the major research
ity research communities. One goal ƒƒ Research into the question of
is to develop the science of usability how to evaluate usability as it
Human-computer interaction (HCI) as applied to security. For example, relates to security. Here we would
research has made strides in both we need to have ways to evaluate the expect significant contributions

from HCI research that has ƒƒ Lessons from the automotive effects they want to achieve but are not
already developed methodologies industry experts in system administration. In
for evaluating usability. addition, if a user decides to modify the
What are some exemplary access configuration, how could that be
ƒƒ System architectures that starkly
problems for R&D on this done in a usable way, while achieving
reduce the size and complexity only the desired modifications (e.g., not
of user interfaces, perhaps by making access to sensitive data either
simplifying the interface, hiding One exemplary problem is protecting more or less restrictive than intended)?
the complexity within the users against those who pose as someone
interface, providing compatible else on the Internet. Techniques like What R&D is evolutionary and
interfaces for different types of certificates have not worked. Alerts from
what is more basic, higher
users (such as administrators), or browsers and toolbars and other add-ins
risk, game changing?
various other strategies, without about suspicious identities of websites or
losing the ability to do what must e-mail addresses do not work, because In the short term, the situation can be
be done especially in times of users either do not understand the alerts significantly improved by R&D that
system or component failures. or do not bother using the tools. Note focuses on making security technology
that, if used properly, these techniques work sensibly “out of the box”—ideally
ƒƒ The ability to reflect physical- could be effective. The failure is in their with no direct user intervention. More
world security cues in computer lack of easy usability. The goal here basic, higher-risk, game-changing
systems. should be not just to find any alternative research would be to identify funda-
ƒƒ Consideration of usability approach, but rather to find approaches mental system design principles for
from a data perspective; for that can work well for ordinary users. trustworthy systems that minimize
example, usability needs can direct user responsibility for trustwor-
drive collection of data that can Another exemplary problem is the secure thy operation.
lead to security problems (PII as handling of e-mail between an arbitrary
authenticators, for example) sender and an arbitrary receiver in a Near term
usable way. Judging from the limited ƒƒ Informing the security research
Hard problems use of encrypted e-mail today, existing community on the results
ƒƒ Usable security on mobile devices approaches are not sufficiently usable. obtained in the usable security
Yet, users are regularly fooled into believ-
ƒƒ Usable mutual authentication community on the design and
ing that forged e-mail is actually from
execution of usability studies
ƒƒ Reusable “clean” abstractions for the claimed sender. It is only a matter
usable security of time before serious problems are
encountered because of e-mail traveling ƒƒ Developing a bibliography of
ƒƒ Usable management of access
across its entire path unencrypted and best practices and developing
controls a community expectation that
unauthenticated. For a general discus-
ƒƒ Usable secure certificate services sion on why cryptography is typically security researchers will use them
ƒƒ Resistance to social engineering not very easily used, see [Whi+1999]. in their work
ƒƒ Identifying the common
Other areas we might draw on Another possibility is configuring an characteristics of “good” usable
ƒƒ Usability in avionics: reducing office environment so that only the
security (and also common
people who should have access to sensi-
the cognitive load on pilots characteristics of usability done
tive data can actually access it—so that
ƒƒ Lessons from safety in general, badly)
such a configuration can be accom-
especially warnings science plished by users who understand the ƒƒ Developing a useful framework

for discussing usability (in the context of security)
• Developing interdisciplinary connections between the security and HCI communities (relates to the first bullet above)
• Identifying ways of involving users in the security technology R&D process

Medium term
• Usable access control mechanisms (such as a usable form of RBAC)
• Usable authentication
• Developing a common framework for evaluating usability and security

Long term
• Composability of usable components: can we put together good usable components for particular functions and get something usable in the total system?
• Tools, frameworks, and standards for usable security

Resources
Designing and implementing systems with usable security is an enormously challenging problem. It will necessitate embedding requirements for usability in considerable detail throughout the development cycle, reinforced by extensive evaluation of whether it was done adequately. If those requirements are incomplete, they could seriously impair the resulting usability. Thus, significant resources—people, processes, and software development—need to be devoted to this challenge.

Measures of success
Meaningful metrics for usable security must be established, along with generic principles of metrics. These must then be instantiated for specific systems and interfaces. We need to measure whether and to what extent increased usability leads to increased security, and to be able to find “sweet spots” on the usability and security curves. Usable security is not a black-and-white issue. It must also consider returns on investment.

We do not have metrics that allow direct comparison of the usability of two systems (e.g., we cannot say definitively that system A is twice as usable as system B), but we do perhaps have some well-established criteria for what constitutes a good usability evaluation. One possible approach would be to develop a usable solution for one of the exemplar problems and demonstrate both that users understand it and that its adoption reduces the incidence or severity of the associated attack. For example, demonstrate that a better anti-phishing scheme reduces the frequency with which users follow bogus links. Admittedly, this would demonstrate success on only a single problem, but it could be used to show that progress is both possible and demonstrable, something that many people might not otherwise believe is true about usable security.

What needs to be in place for test and evaluation?
Several approaches could help:
• Test and evaluation for usability as part of all applicable research in other areas.
• Guidelines/How-Tos for usability studies. (See Garfinkel & Cranor [Cra+2005].)
• A “Usable Security 101” course, including how to develop and evaluate usable systems.
• A standardized testbed for conducting usability studies (perhaps learning from DETER and PlanetLab).
• An anonymous reporting system within a repository for usability problems (perhaps learning from the avionics field).

To what extent can we test real systems?
Usability studies need to be based on real systems. They need not be live systems used to conduct actual business, but they need to be real in the sense that they offer the same interfaces and operate in the same environments as such systems.

Usability competitions might be considered (e.g., who can come up with the most usable system for application/function X that satisfies security requirements Y). A possible analogy would be to the challenge of creating a more usable shopping cart. Building test and evaluation into the entire research and development process is essential.
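The anti-phishing example in the Measures of success discussion reduces to a measurable comparison: the rate at which users follow bogus links with and without the improved scheme. A minimal sketch of that computation follows; all study numbers are hypothetical, invented purely for illustration.

```python
# Hypothetical study data (invented for illustration): how often users
# followed a bogus link under the baseline UI vs. an improved warning.
control   = {"shown": 200, "followed": 58}   # baseline anti-phishing UI
treatment = {"shown": 200, "followed": 17}   # improved warning scheme

def follow_rate(group):
    """Fraction of presented bogus links that users actually followed."""
    return group["followed"] / group["shown"]

baseline = follow_rate(control)      # 0.29
improved = follow_rate(treatment)    # 0.085
reduction = 1 - improved / baseline  # relative reduction in follows

print(f"baseline follow rate: {baseline:.1%}")   # 29.0%
print(f"improved follow rate: {improved:.1%}")   # 8.5%
print(f"relative reduction:   {reduction:.1%}")  # 70.7%
```

Success on this single exemplar problem would not establish usable security in general, but, as noted above, it would show that progress is both possible and demonstrable.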

[Cra+2005] L.F. Cranor and S. Garfinkel, editors. Security and Usability: Designing Secure Systems That People Can Use. O’Reilly Media, Inc., Sebastopol, California, 2005.

[Joh2009] Linda Johansson. Trade-offs between Usability and Security. Master’s thesis in computer science, Linkoping Institute of Technology, Department of Electrical Engineering, LiTH-ISY-EX-3165, 2001 (...pdf/Trade-offs%20Between%20Usiability%20and%20Security.pdf).

[SOU2008] Symposium on Usable Privacy and Security (SOUPS). The fourth conference was held in July 2008.

[Sun+09] J. Sunshine, S. Egelman, H. Almuhimedi, N. Atri, and L.F. Cranor. Crying Wolf: An empirical study of SSL warning effectiveness. In Proceedings of the 18th USENIX Security Symposium, 2009.

[Whi+1999] Alma Whitten and J.D. Tygar. Why Johnny can’t encrypt: A usability evaluation of PGP 5.0. In Proceedings of the 8th USENIX Security Symposium, Washington, D.C., August 23–26, 1999, pp. 169–184.

In addition, several other websites might be worth considering.

Appendix A. Interdependencies Among Topics

This appendix considers the interdependencies among the 11 topic areas—namely, which topics can benefit from advances in the other topic areas and which topics are most vital to other topics. Although it is in general highly desirable to separate different topic areas in a modular sense with regard to R&D efforts, it is also desirable to explicitly recognize their interdependencies and take advantage of them synergistically wherever possible.

These interdependencies are summarized in Table A.1.

Table A.1: Table of Interdependencies

X: Topic                     1  2  3  4  5  6  7  8  9 10 11 |  H  M  L
 1: Scalable                 -  H  H  H  H  H  H  H  H  H  H | 10  0  0
 2: Enterprise               M  -  H  H  H  H  H  H  H  H  H |  9  1  0
 3: Evaluation Life Cycle    H  M  -  H  H  H  H  H  H  M  H |  8  2  0
 4: Combatting Insiders      H  M  M  -  H  M  M  H  M  M  H |  4  6  0
 5: Combatting Malware       H  M  M  M  -  M  H  H  M  M  H |  4  6  0
 6: Global ID                H  M  M  H  H  -  M  H  H  H  H |  7  3  0
 7: System Survivability     H  M  M  H  M  M  -  M  M  L  H |  3  6  1
 8: Situational              M  M  M  H  H  M  H  -  M  M  H |  4  6  0
 9: Provenance               M  M  M  M  H  M  M  H  -  H  H |  4  6  0
10: Privacy-Aware Security   M  M  L  H  L  H  M  H  M  -  H |  4  4  2
11: Usable                   M  M  M  M  M  M  M  M  M  M  - |  0 10  0
 H                           5  1  2  7  7  4  5  8  4  4 10 | *57
 M                           5  9  7  3  2  6  5  2  6  5  0 | *50
 L                           0  0  1  0  1  0  0  0  0  1  0 |  *3

* Totals for H, M, and L, for both X and Y.

Note: H = high, M = medium, L = low. These are suggestive of the extent to which:
X can contribute to the success of Y.
Y can benefit from progress in X.
Y may in some way depend on the trustworthiness of X.
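The H/M/L tallies in Table A.1 can be recomputed mechanically from the matrix entries. A minimal sketch in Python, with the rows transcribed from the table:

```python
from collections import Counter

# Rows of Table A.1, transcribed verbatim; "-" marks the diagonal.
ROWS = {
    1:  "- H H H H H H H H H H",
    2:  "M - H H H H H H H H H",
    3:  "H M - H H H H H H M H",
    4:  "H M M - H M M H M M H",
    5:  "H M M M - M H H M M H",
    6:  "H M M H H - M H H H H",
    7:  "H M M H M M - M M L H",
    8:  "M M M H H M H - M M H",
    9:  "M M M M H M M H - H H",
    10: "M M L H L H M H M - H",
    11: "M M M M M M M M M M -",
}
matrix = {x: row.split() for x, row in ROWS.items()}

def row_tally(x):
    """H/M/L counts for topic x as a contributor (its row)."""
    c = Counter(matrix[x])
    return (c["H"], c["M"], c["L"])

def col_tally(y):
    """H/M/L counts for topic y as a beneficiary (its column)."""
    c = Counter(matrix[x][y - 1] for x in matrix)
    return (c["H"], c["M"], c["L"])

# Topic 1 contributes H to all ten other topics; topic 11's column is all H.
print(row_tally(1))   # (10, 0, 0)
print(col_tally(11))  # (10, 0, 0)
print(sum(row_tally(x)[0] for x in matrix))  # 57 H entries in total
```

This confirms the marginal totals printed in the table (57 H, 50 M, 3 L) and makes it easy to query, say, which topics are primary beneficiaries versus primary contributors.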

Almost every topic area has some potential influence and/or dependence on the success of the other topics, as summarized in the table. The extent to which topic X can contribute to topic Y is represented by the letter H, M, or L, which indicates that topic X can make a high, medium, or low contribution to the success of Y. These ratings, of course, are very coarse and purely qualitative. On the other hand, any finer-grained ratings are not likely to be useful in this context.

The purpose of the table is merely to illustrate the pervasive nature of some relatively strong interdependencies.

A preponderance of H in a row indicates that the corresponding row topic is of fundamental importance to other topics. That is, it can contribute strongly to the success of most other topics.

Examples: rows 1 (SCAL: all H), 2 (METR: 9 H), 3 (EVAL: 8 H).

A preponderance of H in a column indicates that the corresponding column topic is a primary beneficiary of the other topics.

Examples: columns 11 (USAB: 10 H), 8 (SITU: 8 H), 4 (INSI: 7 H), 5 (MALW: 7 H).

Not surprisingly, the table is not symmetric. However, there are numerous potential synergies here, such as the following:

• Scalable Trustworthy Systems (topic 1) is the one topic that can highly enhance all the other topics. However, its success could also derive significant benefits from advances in some of the other topic areas, most obviously including enterprise-level metrics and the system evaluation life cycle (which together could drive the definitions and assessments of trustworthiness), global-scale identity management, system survivability, and usable security, but also including work on combatting insider misuse and combatting malware.

• Enterprise-Level Metrics (ELMs) (topic 2) is particularly interesting. It is one topic to which all other topic areas must contribute to some extent, because each other topic area must explicitly include metrics specific to that area. In the other direction of dependence, the mere existence of thorough and well-conceived enterprise-level metrics would drive R&D in the individual topic areas to help them contribute to the satisfaction of the enterprise-level metrics. This can also inspire the composability of the evaluation of topic metrics into the evaluation of the enterprise-level metrics, which is a major research need. The enterprise-level metrics topic area thus interacts bidirectionally with all the other topics, as exhibited by the H entries in that row and the M entries in that column.

• The System Evaluation Life Cycle (topic 3) is similar to Enterprise-Level Metrics (topic 2) in this context. It is fundamental to trustworthiness in almost all the other topic areas, but its evolution must also be driven by feedback from those other topics.

• Combatting Insider Threats (topic 4) will share some common benefits with Combatting Malware and Botnets (topic 5), particularly with respect to the development and systematic use of fine-grained access controls and audit trails. However, note that combatting insider threats can contribute highly (H) to combatting malware, although the reverse contributions may be somewhat less (M). Both of these topics have significant benefits for the other topics. Also, Situational Understanding (topic 8) is fundamental to both, and clearly is relevant to both insider threats and malware. Thus, the potential synergies here will be very important.

• Global-Scale Identity Management (topic 6) and Provenance (topic 9) can be mutually beneficial: the former can significantly enhance the latter (H), whereas the latter can enhance the former somewhat less (M), although it can increase the assurance of the former.

• Survivability of Time-Critical Systems (topic 7) is strongly linked with Scalable Trustworthy Systems (topic 1), because survivability is one of the fundamental aspects of trustworthiness. In addition, it is particularly relevant to combatting insider threats and malware.

• Situational Understanding and Attack Attribution (topic 8) is important throughout.

• Privacy-Aware Security (topic 10) is somewhat of an outlier with respect to strong dependence in both directions. It is only moderately dependent on other topics, and most other topics are only moderately dependent on it. Nevertheless, it is a very important and often neglected broad topic area—one that is becoming increasingly important as more applications become heavily dependent on the need for trustworthy computer systems.

• Usable Security (topic 11) is fundamental throughout. It can strongly influence the success of almost all the other topics but is also a critical requirement of each of those topics. Generic gains in achieving usability will have enormous impact throughout, in both directions. This is one of many examples of an iterative symbiotic feedback loop, where advances in usability will help other topics, and advances in other topics will help usability.

The low incidence of low-order interdependencies in Table A.1 may at first seem odd. However, it may actually be a testament to the relative importance of each of the 11 topic areas and the mutual synergies among the topics, as well as the inherently holistic nature of trustworthiness [Neu2006], which ultimately requires serious attention to all the critical requirements throughout system architecture, system development, and operation. Failure to satisfy any of these requirements can potentially undermine the trustworthiness of entire systems and indeed entire enterprises.

To illustrate the pervasiveness of the interdependencies summarized in Table A.1, we consider the 11 topics in greater detail. For each topic, we consider first how success in the other topic areas might contribute to that particular topic (that is, represented by the corresponding column of the table), and then consider how success in that particular topic might benefit the other 10 topics (represented by the corresponding rows of the table). These more detailed descriptions are intended to be beneficial for readers who are interested in a particular column or row. They also amplify some of the concepts raised in the 11 sections of this report.

Topic 1: Scalable Trustworthy Systems
We consider first how success in the other topic areas could contribute to scalable trustworthy systems, and then how success in scalable trustworthy systems might benefit the other topic areas.

What capabilities from other topic areas are required or would be particularly desirable for effective progress in this topic area?

Research on the theory and practice of scalable trustworthiness is essential. Although some of that research must result from the pursuit of scalable trustworthy systems per se, research and development experience from the following topics can also contribute to advances in this topic area.

• Enterprise-level metrics (that is, measures of trustworthiness that apply to systems and systems of systems as a whole): Evaluation methodologies must allow composability of lower-layer metrics and the resulting evaluations. Formalization of the ways in which metrics and evaluations can compose should contribute to the composability of scalable systems and their ensuing trustworthiness.

• System evaluation life cycle: Methodologies for evaluating security should be readily applicable to trustworthy system developments; evaluations must themselves be composable and scalable. Similar to the enterprise-level metrics topic, advances in evaluation methodologies can contribute to the composability of trustworthy systems of systems.

• Combatting insider threats: Various advances here could benefit scalable trustworthy systems, including policy development, access control mechanisms and policies, containment and other forms of isolation, compromise-resistant and compromise-resilient operation, and composable metrics and evaluations applicable to insider threats.

• Combatting malware: Advances such as those in the previous topic relating to malware detection and prevention can


also contribute, including the existence of contained and confined execution environments (e.g., sandboxing), along with vulnerability analysis tools and composable metrics.

• Identity management: Tools for large-scale trust management would enhance scalability and trustworthiness of systems and of systems of systems.

• System survivability: Availability models and techniques, self-healing trusted computing bases (TCBs) and subsystems, robustness analysis, and composable metrics and evaluations would all be beneficial to scalable trustworthy systems.

• Situational understanding and attack attribution: Of considerable interest would be scalable analysis tools. Such tools must scale in several dimensions, including number of system components, types of system components, and attack time scales.

• Provenance: The ability of provenance mechanisms and policies to scale cumulatively and iteratively to entire enterprises and federated applications, and to be maintained under large-scale compositions of components, would enhance scalable trustworthiness overall. Such mechanisms must be tamper resistant, providing abilities for both protection and detection.

• Privacy-aware security: Of considerable interest are cryptographic techniques (for example, functional encryption such as attribute-based encryption that is strongly linked with access controls), authentication, and authorization mechanisms that can scale easily into distributed systems, networks, and enterprises, especially if they transcend centralized controls and management.

• Usable security: Techniques are needed for building trustworthy systems that are also usable. Thus, any advances in usability can contribute to the development and maintenance of trustworthiness operationally, especially if they can help with scalability.

With respect to prototype systems, systems of systems, and enterprises, testbeds and test environments are needed that can be cost-effective and enable timely evaluations, integrated attention to interface design for internal (developer) and external (user) interfaces, and composability with respect to usability metrics. Methods for accurately evaluating large-scale systems in testbeds of limited size would be useful, especially if the methods themselves can scale to larger systems.

How does progress in this area support advances in others?

Overall, this topic area has significant impact on each of the other areas. Scalable composability would contribute directly or indirectly to almost all areas, particularly global identity management, time-critical system survivability, provenance, privacy-aware security, and usability. Usability is an example of two-way interdependence: a system that is not scalable and not trustworthy is likely to be difficult to use; a system that is not readily usable by users and administrators is not likely to be operationally trustworthy. In addition, usability would be mutually reinforcing with evaluation methodologies and global metrics. Other topic areas can benefit with respect to composability and scalability. Metrics must themselves be composable and scalable in order to be extended into enterprise-level metrics. Time-critical systems must compose predictably with other systems. Global-scale identity management, of course, must scale. Usability must compose smoothly.

More detailed technological issues relating to scalable trustworthy systems might address questions such as the following. What fundamental building blocks might be useful for other topic areas, such as insider threats, identity management, and provenance? Can any of these areas, such as usability, use these building blocks composably? Clearly, detailed metrics are needed for trustworthiness, composability, and scalability. Thoroughly documented examples are needed that cut across different topic areas. For example, trustworthy separation kernels, virtual machine monitors, and secure routing represent areas of considerable interest for the future.

Topic 2: Enterprise-Level Metrics (ELMs)
What capabilities from other topic areas are required for effective progress in this topic area?

Each of the other topic areas is expected to define local metrics relevant to its own area. Those local metrics are likely to influence the enterprise-level metrics.

How does progress in this area support advances in others?

Proactive establishment of sensible enterprise-level metrics would naturally tend to drive refinements of the local metrics.

Topic 3: System Evaluation Life Cycle
What capabilities from other topic areas are required for effective progress in this topic area?

Advances in scalability, composability, and overall system trustworthiness are likely to contribute to the development of scalable, composable evaluation methodologies, and suggest some synergistic evolution. Metrics that facilitate evaluation will also contribute significantly.

How does progress in this area support advances in others?

Effective evaluation methodologies can provide major benefits to all the other topics. Otherwise, the absence of such methodologies leaves significant doubts.

Topic 4: Combatting Insider Threats
What capabilities from other topic areas are required for effective progress in this topic area?

Several dependencies on other topic areas are particularly relevant:

• Scalable trustworthy systems would help address remote access by logical insiders as well as local access by physical insiders, by virtue of distributed authentication, authorization, and accountability.

• Survivability of systems can be aided by knowledge of the presence of potential malware or of insiders who may have been detected in potential misuse.

• Identity management relates to the accountability aspects of the insider threat, as well as to remote access by insiders.

• Malware can be used by insiders or could act as an insider on behalf of an outside actor. Thus, malware prevention can help combat insider threats.

• Provenance can also help combat insider threats. For example, strong information provenance can help detect instances where insiders improperly altered critical data.

• Privacy-aware security requires knowledge of insiders who were detected in misuse, as well as mechanisms for privacy.

How does progress in this area support advances in others?

• Progress in combatting insider threats will support advances in privacy and survivability for time-critical systems, as well as conventional systems. Controls over insider misuse can also help prevent or at least limit the deleterious effects of malware. The prevention aspects are closely related.

• Life cycle protection must account for the insider threat.

• Situational understanding and attack attribution must apply to insiders as well as other attackers. This dependency implies that synergy is required between misuse detection systems and the access controls used to minimize insider misuse.

Topic 5: Combatting Malware and Botnets
What capabilities from other topic areas are required for effective progress in this topic area?

Malware is a principal mechanism whereby machines are taken over for botnets. Significant progress in the malware area will go far toward enabling effective botnet mitigation. Economic analysis of adversary markets supports this area, as well as botnet defense, and may provide background intelligence in support of situational understanding.

How does progress in this area support advances in others?

Progress in the area of inherently secure systems that can be thoroughly monitored and audited will benefit other topics, especially situational understanding. Attribution also links this topic to situational understanding. Advances in detection enable malware repositories, which can be mined to identify families and histories of malware, which in turn may make attribution possible.

Collaborative detection may depend on progress in global-scale identity


management, to prevent adversaries from thwarting such an approach through spoofed information.

Progress in security metrics is likely to make it easier to evaluate the effectiveness of proposed solutions to malware problems.

Topic 6: Global-Scale Identity Management
What capabilities from other topic areas are required for effective progress in this topic area?

Scalable trustworthy systems are essential to provide a sound basis for global identity management. Privacy-aware security could be highly beneficial. For example, assurance that remote credentials are in fact what they purport to be would help. In addition, analyses, simulations, and data aggregation using real data require strong privacy preservation and some anonymization or sanitization. Provenance will be important for increasing the trustworthiness and reputations of remote identities. Usability is fundamental, of course, for users as well as administrators. Survivability of identity management systems will be critical, especially in real-time control and transactional systems.

How does progress in this topic area support advances in others?

Identity management would contribute to the trustworthiness of large-scale networked systems and certainly help in reducing insider misuse, particularly by privileged insiders who are accessing systems remotely. It would also enhance privacy-preserving security—for example, because assurances are required whenever there is sharing of identity-laden information. It could simplify security evaluations. It could also reduce the proliferation of malware if identities, credentials, authentication, authorization, and accountability were systematically enforced on objects and other computational entities.

Topic 7: Survivability of Time-Critical Systems
What capabilities from other topic areas are required for effective progress in this topic area?

Advances in the development of scalable trustworthy systems would have immediate benefits for system survivability. Basic advances in usability could help enormously in reducing the burdens on system operators and system administrators of survivable systems. Advances in situational understanding would also be beneficial in remediating survivability failures and compromises.

How does progress in this topic area support advances in others?

Concise and complete requirements for survivability would greatly enhance enterprise-level metrics and contribute to the effectiveness of evaluation methodologies. They would also improve the development of scalable trustworthy systems overall, because of the many commonalities between survivability, security, and reliability.

Topic 8: Situational Understanding and Attack Attribution
What capabilities from other topic areas are required for effective progress in this topic area?

Effective authentication and authorization would make it significantly harder for an attacker to avoid attribution. This depends on progress in global-scale identity management.

Subsystems for detecting and combatting malware must be designed to enhance situational understanding and attack attribution. Local malware, of course, is a serious problem. However, botnets and the malware that can compromise unsuspecting systems to make them part of botnets are adversarial enablers supporting important classes of attacks for which situational understanding is critical. Attribution in the case of botnets is difficult because the launch points for attacks are themselves victimized machines, and the adversaries are becoming more adept at concealing their control channels and “motherships” (e.g., via encryption, environmental sensing, and fast-flux techniques [ICANN 2008, Holz 2008]).

Advances in privacy-aware security (particularly with respect to privacy-aware sharing of security-relevant information) would address many of the hurdles to sharing as considered in this topic area.

The measures of success enumerated below require fundamental advances in metrics definition, collection, and evaluation.

• Synthetic attacks (emulating the best current understanding of adversary tactics) provide some metrics for attribution. Possible metrics include time to detect, how close to the true origin of the attack (adversary and location), and the rate of fast flux

that can be tolerated while still being able to follow the adversary assets.

• We should examine metrics related to human factors to assess effectiveness of presentation approaches.

• We should explore metrics for information sharing—for example, the tradeoff between how much the sharer reveals versus how actionable the community perceives the shared data to be. This issue may touch on sharing marketplaces and reputation systems.

• The current state of metrics with respect to adversary nets and fast flux is not adequately known. We should examine how SANS and similar organizations collect measurement data.

How does progress in this area support advances in others?

For many attack situations of interest, advances in analysis and attack taxonomy would also support malware defense and therefore mitigate botnets. Systems that are intrinsically monitorable and auditable would presumably be easier to defend and less prone to malware.

Advances in attribution to the ultimate attack source would support advances in defense against botnets and other attacks where the immediate launch point of the attack is itself a victimized machine.

This topic and the survivability area are mutually reinforcing. Reaction and mitigation draw on advances in survivability, for example.

Topic 9: Provenance
What capabilities from other topic areas would facilitate progress in this topic area?

Provenance is dependent on most of the other topics, and most of the other topics are dependent on provenance, but a few topics have more direct connections. Global-scale identity management is required to track authorship as well as chain-of-custody through information processing systems. Privacy-aware security is highly relevant to the dissemination of provenance information. Scalable trustworthiness is essential to trustworthy provenance. Usability would be important as well.

How does progress in this area support advances in others?

Trustworthy provenance would contribute significantly to combatting malware and to situational understanding. It could also contribute to privacy-aware security. It would provide considerable improvements in system usability overall.

Topic 10: Privacy-Aware Security
What capabilities from other topic areas are required for effective progress in this topic area?

Information provenance is needed for many different privacy mechanisms applied to data. Scalable trustworthy systems are needed to ensure the integrity of the privacy mechanisms and policies. Combatting insider threats is essential, because otherwise insiders can completely undermine would-be solutions. Global-scale identity management is essential for enterprise-wide privacy. Usability is essential, because otherwise mechanisms tend to be misused or bypassed and policies tend to be flouted. Situational understanding and attack attribution, as well as the ability to combat malware, may be somewhat less important but still can contribute to the detection of privacy violations.

How does progress in this area support advances in others?

Global-scale identity management can benefit—for example, by being shown how to build identity management systems that protect privacy. The system evaluation life cycle can benefit from provenance. To some extent, this topic can influence requirements for how scalable trustworthy systems are designed and developed.

Topic 11: Usable Security
What capabilities from other topic areas are required for effective progress in this topic area?

• Identity management: Large-scale identity management systems could solve one of the most vexing security problems users face today—namely, how to establish trust between and among users and systems, particularly within systems and networks that are easy to use by ordinary users and by administrators.

• Survivability of time-critical systems: Advances in availability directly enhance usability, especially whenever


manageability of configurations and remediation of potentially dangerous system configurations are included in the design and operation of those systems.

• Scalable trustworthy systems: Large-scale systems that are trustworthy must, by the definition of the usability problem, be usable, or they will not be trustworthy, either architecturally or operationally.

• Provenance: Automated tools for tracking provenance could enhance usability by reducing the need for users to consider explicitly the source of the information they are dealing with.

• Privacy-aware security: As with the other topics, this topic must address usability as a core requirement.

• Malware: Technology that neutralizes the threat posed by malware would be of great benefit to usability, since it could eliminate any need for users to think about malware at all.

• Metrics and evaluation: The ability to know how well we are doing in making secure systems usable (and usable systems that maintain security) would be useful; a usable system lets you know whether you got things right or wrong.

How does progress in this area support advances in others?

Usability goes hand in hand with the other topic areas; without success in usability, the benefits of progress in the other areas may be diminished. This applies directly to each of the other topic areas, more or less bilaterally. Usability considerations must be addressed pervasively.

[Neu2006] Peter G. Neumann. Holistic systems. ACM SIGSOFT Software Engineering Notes 31(6):4-5, November 2006.

Appendix B. Technology Transfer

This appendix considers approaches for transitioning the results of R&D on the 11 topic areas into deployable systems and into the mainstream of readily available trustworthy systems.

B.1 Introduction
R&D programs, including cyber security R&D, consistently have difficulty in taking the research through a path of development, testing, evaluation, and transition into operational environments. Past experience shows that transition plans developed and applied early in the life cycle of the research program, with probable transition paths for the research products, are effective in achieving successful transfer from research to application and use. It is equally important, however, to acknowledge that these plans are subject to change and must be reviewed often. It is also important to note that different technologies are better suited for different technology transition paths; in some instances, the choice of the transition path will mean success or failure for the ultimate product. Guiding principles for transitioning research products involve lessons learned about the effects of time/schedule, budgets, customer or end-user participation, demonstrations, testing and evaluation, product partnerships, and other factors.

A July 2007 Department of Defense Report to Congress on Technology Transition noted evidence that a chasm exists between the DoD S&T communities focused on demonstration of a component and/or breadboard validation in a relevant environment and acquisition of a system prototype demonstration in an operational environment. DoD is not the only government agency that struggles with technology transition. That chasm, commonly referred to as the valley of death, can be bridged only through cooperative efforts and investments by research and development communities as well as acquisition communities.

In order to achieve the full potential of R&D, technology transfer needs to be a

key consideration for all R&D investments. This requires the federal government
to move past working models in which most R&D programs support only limited
operational evaluations/experiments, most R&D program managers consider their
job done with final reports, and most research performers consider their job done
with publications. Government-funded R&D activities need to focus on the real end goal, namely technology transfer, which follows transition. Current R&D Principal Investigators (PIs) and Program Managers (PMs) are not rewarded for technology transfer; academic PIs, for example, are rewarded for publications rather than for transfer. The government R&D community needs to reward government PMs and PIs for transition progress.

There are at least five canonical transition paths for research funded by the federal government. These transition paths are affected by the nature of the technology, the intended end user, participants in the research program, and other external circumstances. Success in research product transition is often accomplished through the dedication of the program manager, via opportunistic channels of demonstration, partnering, and occasional good fortune. However, no single approach is more effective than a proactive technology champion who is given the freedom to seek potential utilization of the research product. The five canonical transition paths can be identified simply, as follows:

• Department/Agency direct to Acquisition (Direct)
• Department/Agency to Government Lab (Lab)
• Department/Agency to Industry (Industry)
• Department/Agency to Academia to Industry (Start-up)
• Department/Agency to Open Source Community (Open Source)

Many government agencies and commercial companies use a measure known as a Technology Readiness Level (TRL). A TRL provides a common scale for assessing the maturity of evolving technologies (materials, components, devices, etc.) prior to incorporating them into a system or subsystem. Although this mechanism is used primarily within the DoD, it can serve as a reasonable guideline for new technologies in almost any department or agency. Table B.1 lists the technology readiness levels and their descriptions from a systems approach, for both hardware and software.

B.2 Fundamental Issues for Technology Transition

What are likely effective ways to transfer the technology? There is no one-size-fits-all approach to technology transfer. Each of the 11 topic areas will have its own special considerations for effective transitioning. For example, effective transitioning will depend to some extent on the relevant customer bases and the specific applications. However, this section considers what might be common to most of the 11 topics. A few issues that are specific to each topic are discussed subsequently.

It will be particularly important that the results (such as new systems, mechanisms, policies, and other approaches) be deployable incrementally, wherever appropriate.

Technologies that are to be deployed on a global scale will require some innovative approaches to licensing and sharing of intellectual property, and serious planning for test, evaluation, and incremental deployment. They will also require extensive commitments to sound system architectures, disciplined software engineering, and adequate assurance.

Carefully documented worked examples would be enormously helpful, especially if they are scalable. Clearly, the concepts addressed in this document need to become a pervasive part of education and training. To this end, relevant R&D must be explicitly oriented toward real applicability. Furthermore, bringing the concepts discussed in this topic area into the mainstream of education, training, experience, and practice will be essential.

B.3 Topic-Specific Considerations

In this section, certain issues that are specific to each of the 11 topics are considered briefly.

Topic 1: Scalable Trustworthy Systems

Easy scalability, pervasive trustworthiness, and predictable composability all require significant and fundamental changes in how systems are developed, maintained, and operated. Therefore, this topic clearly will require considerable public-private collaboration among government, industry, and academia, with some extraordinary economic, social, and technological forcing functions (see Section B.4). The marketplace has generally failed to adapt to needs for trustworthiness in critical applications.

Topic 2: Enterprise-Level Metrics (ELMs)

This is perhaps a better-mousetrap analogy: if enterprise-level metrics were well developed and readily evaluable (topic 3), we might presume the world would beat a path to their door. Such metrics need to be experimentally evaluated and their practical benefits clearly demonstrated, initially in prototype system environments and ultimately in realistic large-scale applications.

Table B.1: Typical Technology Readiness Levels

1. Basic principles observed and reported.
   Lowest level of technology readiness. Scientific research begins to be translated into applied research and development. Examples might include paper studies of a technology's basic properties.

2. Technology concept and/or application formulated.
   Invention begins. Once basic principles are observed, practical applications can be invented. Applications are speculative, and there may be no proof or detailed analysis to support the assumptions. Examples are limited to analytic studies.

3. Analytical and experimental critical function and/or characteristic proof of concept.
   Active research and development is initiated. This includes analytical studies and laboratory studies to physically validate analytical predictions of separate elements of the technology. Examples include components that are not yet integrated or representative.

4. Component and/or breadboard validation in laboratory environment.
   Basic technological components are integrated to establish that they will work together. This is relatively "low fidelity" compared to the eventual system. Examples include integration of "ad hoc" hardware in the laboratory.

5. Component and/or breadboard validation in relevant environment.
   Fidelity of breadboard technology increases significantly. The basic technological components are integrated with reasonably realistic supporting elements so the technology can be tested in a simulated environment. Examples include "high fidelity" laboratory integration of components.

6. System/subsystem model or prototype demonstration in a relevant environment.
   A representative model or prototype system, well beyond that of TRL 5, is tested in a relevant environment. This represents a major step up in a technology's demonstrated readiness. Examples include testing a prototype in a high-fidelity laboratory environment or in a simulated operational environment.

7. System prototype demonstration in an operational environment.
   Prototype near, or at, the planned operational system. This represents a major step up from TRL 6, requiring demonstration of an actual system prototype in an operational environment such as an aircraft, vehicle, or space. Examples include testing the prototype in a test-bed aircraft.

8. Actual system completed and qualified through test and demonstration.
   The technology has been proven to work in its final form and under expected conditions. In almost all cases, this TRL represents the end of true system development. Examples include developmental test and evaluation of the system in its intended weapon system to determine whether it meets design specifications.

9. Actual system proven through successful mission operations.
   Actual application of the technology in its final form and under mission conditions, such as those encountered in operational test and evaluation. Examples include using the system under operational mission conditions.
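Purely as an illustrative sketch (not part of the roadmap itself), the TRL scale in Table B.1 can be treated as a simple ordinal data structure when a program office tracks candidate technologies. The class and function names below are hypothetical; the "valley of death" range reflects the chasm, noted earlier from the 2007 DoD report, between breadboard validation in a relevant environment (TRL 5) and system prototype demonstration in an operational environment (TRL 7).

```python
from dataclasses import dataclass

# Short labels for Table B.1; the full descriptions appear in the table above.
TRL_LABELS = {
    1: "Basic principles observed and reported",
    2: "Technology concept and/or application formulated",
    3: "Analytical and experimental proof of concept",
    4: "Component/breadboard validation in laboratory environment",
    5: "Component/breadboard validation in relevant environment",
    6: "System/subsystem prototype demonstration in relevant environment",
    7: "System prototype demonstration in operational environment",
    8: "Actual system completed and qualified through test and demonstration",
    9: "Actual system proven through successful mission operations",
}

@dataclass
class Technology:
    """A candidate research product being tracked toward transition."""
    name: str
    trl: int  # current Technology Readiness Level, 1..9

    def label(self) -> str:
        return TRL_LABELS[self.trl]

    def in_valley_of_death(self) -> bool:
        # Past relevant-environment validation (TRL 5) but not yet
        # demonstrated as a prototype in an operational environment (TRL 7).
        return 5 <= self.trl < 7

detector = Technology("example-anomaly-detector", trl=5)
print(detector.label())               # the TRL 5 description
print(detector.in_valley_of_death())  # True
```

A tracking tool built on such a structure could flag every funded effort sitting in the TRL 5-6 band as needing an identified transition path and acquisition partner.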

Topic 3: System Evaluation Life Cycle

Similarly, if effective evaluation methodologies could be developed, their usefulness would need to be clearly demonstrated on real systems, as in topic 2. Thoroughly specified and relatively complete requirements would also be required. Given a few well-documented demonstrations of effectiveness, the incentives for technology transfer would be greatly increased.

Topic 4: Combatting Insider Threats

Once again, the proof is in the pudding. Demonstrations of the effectiveness of approaches that combat insider misuse would encourage adoption of the techniques.

Topic 5: Combatting Malware and Botnets

As noted in Appendix A, the commonalities between insider threats and malware suggest that approaches with demonstrated effectiveness against malware are likely to be rapidly and widely adopted in practice.

Topic 6: Global-Scale Identity Management

It will be important to design mechanisms and policies that can be incrementally deployed. Technologies that are to be deployed on a global scale will require some innovative approaches to licensing and sharing intellectual property, and serious planning for test, evaluation, and incremental deployment.

Topic 7: Survivability of Time-Critical Systems

R&D communities have long understood how to take advantage of fault-tolerance mechanisms. However, system survivability requires an overarching commitment to system trustworthiness that must transcend what has been done in the past.

Topic 8: Situational Understanding and Attack Attribution

R&D in this area has been slow to find its way into commercial products. Recognition of the pervasive needs for monitoring and accountability would be of great value.

Topic 9: Provenance

Provenance would be very useful in finance, government, health care, and many other application areas, and would facilitate forensics.

Topic 10: Privacy-Aware Security

Advances in this topic could be particularly useful in many application areas, such as health care, financial records, communication logs, and so on.

Topic 11: Usable Security

Almost anything that significantly increased the usability of security and helped manage its inherent complexity would be likely to find its way into practice fairly readily.

B.4 Forcing Functions (Some Illustrative Examples)

For several of the 11 topics, this section addresses the question: What are the appropriate roles for government, academia, industry, and markets? Many of the suggested forcing functions are applicable to other topics as well.

Topic 1: Scalable Trustworthy Systems

The federal government needs to encourage and fund research and development relating to all of the topics considered here, with particular emphasis on trustworthy systems, composability, scalability, and evolutionary system architectures. It also needs to encourage the incorporation of source-available and nonproprietary systems that can demonstrably contribute to trustworthiness.

Academic research needs to pursue theories and supporting tools that enable systematic development of composable and scalable trustworthy systems, and to address all the other topics discussed here.

Commercial developers need to instill a more proactive discipline of principled system development that allows interoperability among different systems and subsystems, that employs much better software engineering practices, that results in trustworthy systems that are more composable and scalable, and that provides cost-effective approaches for all the topics discussed here.

Topic 4: Combatting Insider Threats

Governments need to establish baselines and standards. Legal issues relating to trap-based defensive strategies and entrapment law should be addressed; applying the results to the many real situations in government activity where insider behavior is a genuine threat would be beneficial. Current government efforts to standardize authentication and authorization (e.g., the Common Access Card) are worthwhile despite their potential limitations, particularly in helping combat insider misuse. Academia needs to pursue R&D that is realistically relevant to the insider threat. Industry research needs to be more closely allied with the needs of practical systems with fine-grained access controls and monitoring facilities. Industry is also the most likely source of data sets that contain instances of insider misbehavior, or at least of more detailed knowledge of how real insider misbehavior tends to manifest itself. The marketplace needs to be responsive to customers demanding better system solutions. Note also the possible relevance of HSPD-12 PIV-I and PIV-II.

Various incentive structures might be considered:

• Business cases as incentives (investment vs. potential cost)
• Insurance as financial protection against insiders
• Major players in the bonding markets, who might provide data for research in exchange for better loss-reduction approaches
• Nonfinancial incentives, as in FAA near-miss reporting, granting some sort of immunity (but being careful not to shoot the whistle-blowers)
• International efforts, which might include bilateral and multilateral quid-pro-quo cooperation

Topic 6: Global-Scale Identity Management

Governments need to unify some of the conflicting requirements relating to identity management, credentials, and privacy. The U.S. government needs to eat its own dog food, establishing sound identity management mechanisms and policies and adhering to them.

Academia needs to recognize more widely the realistic problems of global identity management and to embed more holistic and realistic approaches into research.

Industry needs to recognize the enormous need for interoperability within multivendor and multinational federated systems.

The marketplace needs to anticipate long-term needs and somehow inspire governments, academia, and industry to realize the importance of realistic approaches.

Topic 11: Usable Security

Government

• Remove impediments to usability research. For example, federal law currently requires review before data can be used in an experiment or study; simply having the data in your possession does not give you the right to use it (e.g., e-mail you have received and wish to use to test a new spam-filtering algorithm). Minimize administrative burdens, make sure Institutional Review Boards (IRBs) are familiar with the unique aspects of usable security research (especially as contrasted with, for example, medical research), and create mechanisms for expediting approval of usability research.
• Avoid inappropriate restrictions that prevent government entities from participating in research.
• Provide suitable funding for basic research in usable security.
• Encourage interdisciplinary research in usable security.
• Adopt usability reviews for security research.
• Establish appropriate standards, criteria, and best practices.
• Pervasively embed usability requirements into the procurement process.
• Reconsider security policies from a usability perspective.
• Ensure that usable security is a criterion for evaluating NSA centers of academic excellence. (This will provide an incentive to get usability into the curriculum.)

Academia

• Incorporate usability pervasively into computer system curricula.
• Lead by example by making their own systems more usably secure.
• Incorporate usability into the research culture by demanding that system security research papers and proposals always address issues of usability.

Industry

• Develop standards for usable security.
• Develop consistent terminology.
• Identify best practices.
• Contribute deployment experience. (Provide feedback to the research community: what works and what does not.)

Appendix C
Appendix C. List of Participants in the Roadmap Development
We are very grateful to many people who contributed to the development of this roadmap for cybersecurity research,
development, test, and evaluation. Everyone who participated in at least one of the five workshops is listed here.

Deb Agarwal, Tom Anderson, Paul Barford, Steven M. Bellovin, Terry Benzel, Gary Bridges, KC Claffy, Ben Cook, Lorrie Cranor, Rob Cunningham, David Dagon, Claudiu Danilov, Steve Dawson, Drew Dean, Jeremy Epstein, Sonia Fahmy, Rich Feiertag, Stefano Foresti, Deb Frincke, Simson Garfinkel, Mark Graff, Josh Grosh, Minaxi Gupta, Tom Haigh, Carl Hauser, Jeri Hessman, James Horning, James Hughes, Bob Hutchinson, Cynthia Irvine, Markus Jakobsson, David Jevans, Richard Kemmerer, Carl Landwehr, Karl Levitt, Jun Li, Pat Lincoln, Ulf Lindqvist, Teresa Lunt, Doug Maughan, Jenny McNeill, Miles McQueen, Wayne Meitzler, Jennifer Mekis, Jelena Mirkovic, Ilya Mironov, John Mitchell, John Muir, Deirdre Mulligan, Clifford Neuman, Peter Neumann, David Nicol, Chris Papadopoulos, Vern Paxson, Peter Reiher, Robin Roy, William H. Sanders, Mark Schertler, Fred Schneider, Kent Seamons, John Sebes, Frederick T. Sheldon, Ben Shneiderman, Pete Sholander, Robert Simson, Dawn Song, Joe St Sauver, Sal Stolfo, Paul Syverson, Kevin Thompson, Gene Tsudik, Zach Tudor, Al Valdes, Jamie Van Randwyk, Jim Waldo, Nick Weaver, Rick Wesson, Greg Wigton, Bill Woodcock, Bill Worley, Stephen Yau, Mary Ellen Zurko

Appendix D
Appendix D. Acronyms

A/V antivirus
AMI Advanced Metering Infrastructure
BGP Border Gateway Protocol
C2 command and control
CAC Common Access Card
CAPTCHA Completely Automated Public Turing test to tell Computers and Humans Apart
CASSEE computer automated secure software engineering environment
CERTs Computer Emergency Response Teams
CMCS Collaboratory for Multi-scale Chemical Science
COTS commercial off-the-shelf
CUI Controlled Unclassified Information
CVS Concurrent Versions System
DAC discretionary access controls
DARPA Defense Advanced Research Projects Agency
DDoS distributed denial of service
DETER cyber-DEfense Technology Experimental Research
DHS Department of Homeland Security
DKIM DomainKeys Identified Mail
DNS Domain Name System
DNSSEC DNS Security Extensions
DoS denial of service
DRM digital rights management
ESSW Earth System Science Workbench
EU European Union
FIPS Federal Information Processing Standards
FISMA Federal Information Security Management Act
GPS Global Positioning System
HDM Hierarchical Development Methodology
HIPAA Health Insurance Portability and Accountability Act
HSI human-system interaction
HVM hardware virtual machine
I&A identification and authentication
I3P Institute for Information Infrastructure Protection
IDA Institute for Defense Analyses
IDE integrated development environment
IDS intrusion detection system
INL Idaho National Laboratory
IPS intrusion prevention system

IPsec Internet Protocol Security
IPv4 Internet Protocol Version 4
IPv6 Internet Protocol Version 6
IRB institutional review board
ISP Internet service provider
IT information technology
LPWA Lucent Personalized Web Assistant
MAC mandatory access controls
MIT Massachusetts Institute of Technology
MLS multilevel security
MTBF mean time between failures
NIST National Institute of Standards and Technology
NOC network operations center
OODA Observe, Orient, Decide, Act
OS operating system
OTP one-time password
P2P peer-to-peer
P3P Platform for Privacy Preferences
PDA personal digital assistant
PGP Pretty Good Privacy
PII personally identifiable information
PIR private information retrieval
PKI public key infrastructure
PL programming language
PMAF Pedigree Management and Assessment Framework
PREDICT Protected Repository for the Defense of Infrastructure against Cyber Threats
PSOS Provably Secure Operating System
QoP Quality of Protection
RBAC role-based access control
RBN Russian Business Network
RFID radio frequency identification
ROM read-only memory
SBU Sensitive But Unclassified
SCADA Supervisory Control and Data Acquisition
SCAP Security Content Automation Protocol
SIEM security information and event management
SOHO small office/home office
SPF Sender Policy Framework (originally "sender permitted from")
SQL Structured Query Language
SRS Self-Regenerative Systems
SSL Secure Sockets Layer
T&E test and evaluation
TCB trusted computing base
TCP/IP Transmission Control Protocol/Internet Protocol
TLD top-level domain
TPM Trusted Platform Module
TSoS trustworthy systems of systems
UI user interface
UIUC University of Illinois at Urbana-Champaign
USB universal serial bus
US-CERT United States Computer Emergency Readiness Team
VM virtual machine
VMM virtual machine monitor