Computer Security Strength & Risk:
A Quantitative Approach
A thesis presented
by
Stuart Edward Schechter
to
The Division of Engineering and Applied Sciences
in partial fulfillment of the requirements
for the degree of
Doctor of Philosophy
in the subject of
Computer Science
Harvard University
Cambridge, Massachusetts
May 2004
© 2004 Stuart Edward Schechter
All rights reserved.
Thesis advisor: Michael D. Smith
Author: Stuart Edward Schechter

Computer Security Strength & Risk: A Quantitative Approach
Abstract
When attacking a software system is only as difficult as it is to obtain a vulner-
ability to exploit, the security strength of that system is equivalent to the market
price of such a vulnerability. In this dissertation I show how security strength can be
measured using market means, how these strength measures can be applied to create
models that forecast the security risk facing a system, and how the power of markets
can also be unleashed to increase security strength throughout the software develop-
ment process. In short, I provide the building blocks required for a comprehensive,
quantitative approach to increasing security strength and reducing security risk.
The importance of quantifying security strength and risk continues to grow as indi-
viduals, businesses, and governments become increasingly reliant on software systems.
The security of software deployed to date has suffered because these systems are de-
veloped and released without any meaningful measures of security, causing consumers
to be unable to differentiate stronger software products from weaker ones. Even if we
knew that we could make systems measurably stronger, the lack of accurate security
risk models has blurred our ability to forecast the value to be gained by strengthening
these systems. Without the tools introduced in this dissertation, those of us tasked
with making security decisions have been forced to rely on expert opinion, anecdotal
evidence, and other unproven heuristics.
Contents
Title Page . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . i
Abstract . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . iii
Table of Contents . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . iv
Acknowledgments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . vii
Dedication . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ix
1 Introduction 1
1.1 Economic approaches to security . . . . . . . . . . . . . . . . . . . . . 3
1.2 A new approach . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
2 What is security? 9
2.1 Threat scenarios . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
2.2 Safeguards . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
2.3 Expanding and organizing threat scenarios . . . . . . . . . . . . . . . 15
2.3.1 Trees and graphs . . . . . . . . . . . . . . . . . . . . . . . . . 16
2.3.2 Limitations of threat modelling . . . . . . . . . . . . . . . . . 23
2.4 Do threats and safeguards encompass all security models? . . . . . . 24
2.5 Chapter summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
3 Why measuring security is hard 27
3.1 Security risk . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
3.1.1 Annual Loss Expected (ALE) . . . . . . . . . . . . . . . . . . 29
3.1.2 Security savings (S) and benefit (B) . . . . . . . . . . . . . . 31
3.1.3 Investment return: ROI and IRR . . . . . . . . . . . . . . . . 33
3.1.4 The elusiveness of quantitative models . . . . . . . . . . . . . 35
3.2 Security strength . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
3.3 Chapter summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
4 Measuring the security strength of software 47
4.1 Security strength . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
4.2 Why measure software systems? . . . . . . . . . . . . . . . . . . . . . 51
4.3 Pricing vulnerabilities . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
4.4 Precedent for vulnerability discovery rewards . . . . . . . . . . . . . . 56
4.5 Chapter summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60
5 Differentiating software products by security strength 62
5.1 Chapter summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66
6 Developing strong software 68
6.1 Desirable properties of markets for defects . . . . . . . . . . . . . . . 69
6.2 Market requirements . . . . . . . . . . . . . . . . . . . . . . . . . . . 71
6.3 Simplifying Assumptions . . . . . . . . . . . . . . . . . . . . . . . . . 73
6.4 Approaching Reality . . . . . . . . . . . . . . . . . . . . . . . . . . . 76
6.4.1 The presence of multiple defects . . . . . . . . . . . . . . . . . 76
6.4.2 Knowledge about others’ costs (part one) . . . . . . . . . . . . 77
6.4.3 The time and cost of searching . . . . . . . . . . . . . . . . . 79
6.4.4 Knowledge about others’ costs (part two) . . . . . . . . . . . . 80
6.4.5 Defect dependencies and learning about the skills of others . . 81
6.4.6 Some defects are more important than others . . . . . . . . . 85
6.5 Adversaries and the one-buyer assumption . . . . . . . . . . . . . . . 86
6.6 Delayed publication of reported defects . . . . . . . . . . . . . . . . . 88
6.7 Applying strength metrics throughout
product development . . . . . . . . . . . . . . . . . . . . . . . . . . . 89
6.8 Chapter summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 91
7 Modelling security risk 92
7.1 An introduction to regression models . . . . . . . . . . . . . . . . . . 93
7.2 Modelling security risk . . . . . . . . . . . . . . . . . . . . . . . . . . 94
7.3 The scenario of home burglary . . . . . . . . . . . . . . . . . . . . . . 96
7.4 Regression models in computer security . . . . . . . . . . . . . . . . . 98
7.4.1 Prior work . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 98
7.4.2 A problem of data . . . . . . . . . . . . . . . . . . . . . . . . 100
7.5 Insider threats vs. network attacks . . . . . . . . . . . . . . . . . . . 102
7.6 The growing significance of security strength . . . . . . . . . . . . . . 106
7.7 Chapter summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 109
8 Anticipating new threats 111
8.1 The threat of outside theft . . . . . . . . . . . . . . . . . . . . . . . . 113
8.1.1 Serial thieves . . . . . . . . . . . . . . . . . . . . . . . . . . . 114
8.1.2 Parallel thieves . . . . . . . . . . . . . . . . . . . . . . . . . . 114
8.2 Serial Theft . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 115
8.2.1 Homogeneous Targets . . . . . . . . . . . . . . . . . . . . . . 116
8.2.2 Unique Targets . . . . . . . . . . . . . . . . . . . . . . . . . . 119
8.3 Parallel Theft . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 124
8.4 Further reading . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 129
8.5 Chapter summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 130
9 Conclusion 131
A A brief history of fault, threat, and attack trees 135
Bibliography 138
Acknowledgments
The first paragraph of this section is a mad-lib. Each number in the list below
describes a term of your choice which should be filled into the paragraph below it.
Please fill in a term for:
(1) an inanimate object or a synonym for idiot.
(2) the name of a third-world country.
(3) the name of an academic field not closely related to Computer Science.
(4) the name of a highly renowned graduate school.
(5) the name of a profession that pays minimum wage.
(6) the name of the place you would least like to live in.
(7) the name of a software company.
Michael D. Smith could successfully advise a (1) with a pre-school
level education from (2) to complete a doctoral degree in the study
of (3) at (4). If it had not been for his support, flex-
ibility, sense of humor, and relaxed attitude I suspect that I would now be employed
as a (5) in (6), or worse might be working in the
standards compliance division of (7).
Glenn Holloway has been like a second advisor to those of us in Mike’s research
group. He has the patience to read every paper we write, the ability to quickly
understand what the paper is about, a knack for figuring out how to best improve
the paper in the time available, and the attention to detail to find the typos. His
endless knowledge of the tools of the trade, from LaTeX to Visual Studio, has proved
invaluable. I suspect that having Glenn in our group reduces the time-to-graduate
for Mike’s students by at least a semester.
Michael Rabin and H. T. Kung provided invaluable advice from early in my grad-
uate career through the final draft of this document. I especially appreciate their
encouragement to attack problems I found interesting even when these problems were
not connected to existing research projects within DEAS.
Marty Loeb was kind enough to read some of the earlier, less polished, thesis
drafts and fly to Boston for the defense. Without the seminal papers he wrote with
Larry Gordon, and those of Ross Anderson, Jean Camp (my unofficial fifth com-
mittee member), Hal Varian, and the other founding organizers of the Workshop on
Economics and Information Security, I would likely still be looking for a dissertation
topic. I might not have discovered this research area if Ron Rivest had not taken the
time to point me in their direction.
I cannot count the number of times Susan Wieczorek, the fairy godmother of
DEAS graduate students, has waved her wand to make bureaucratic tasks and pa-
perwork disappear.
Much of the background research in risk management was performed over a sum-
mer visitation at the University of California at Berkeley that was kindly arranged
by Doug Tygar and Hal Varian.
For my grandparents and parents,
whose examples I can only aspire to follow,
and for the students of Leverett House,
who I implore not to follow mine.
Chapter 1
Introduction
How secure is a software system? How secure does a system need to be? By how
much can security be improved by putting safeguards into place?
Those of us who work to secure systems ask these questions in order to evaluate the
efficacy of our security efforts. We seek answers that provide measures of how effective
our security efforts have been in reducing risk, or that forecast the reduction in risk
that we expect from further security efforts. Often this means estimating past values,
and forecasting future values, of such security risk metrics as the frequency of security
incidents or the annual cost of these incidents. To make security decisions we must
use these metrics to gauge how the choices we make will influence the effectiveness of
our security strategies in reducing our security risk.
A general methodology for modelling security risks has proved elusive because the
security of a system is affected not only by our actions, but by the strategic choices
of our adversaries. What’s more, security questions are approached differently when
the answers are to be presented in terms meaningful to these adversaries.
When an adversary asks how secure a system is, his primary concern is most likely
to be either the personal risk to his safety or freedom from attacking the system, or
the difficulty he will face in attempting to subvert or bypass the system’s safeguards.
An adversary will perceive a system with an additional safeguard to be more resilient
to attack only if that safeguard interferes with the plan of attack that would be used
by the adversary in the safeguard’s absence. A system’s security becomes stronger as
more time, effort, or other resources are required to subvert it. From an adversary’s
perspective, this security strength and the personal risk of the attack to the
adversary's reputation, safety, or freedom are the metrics of interest when
evaluating the security of a prospective target. For example, when using a cost-benefit
analysis to evaluate the attractiveness of a potential target system, that system’s
security strength and the personal risk incurred in attacking that system represent
deterrent costs that must be weighed against the benefits of targeting the system.
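As a concrete illustration of this cost-benefit framing, the adversary's decision can be sketched in a few lines of code. This is my own hypothetical model, not one from the dissertation; the class name and dollar figures are invented.

```python
# Hypothetical sketch of the adversary's cost-benefit analysis described
# above: an attack is attractive only when the expected benefit exceeds
# the deterrent costs (security strength plus personal risk).
from dataclasses import dataclass

@dataclass
class Target:
    benefit: float            # value to the adversary of a successful attack ($)
    security_strength: float  # resource cost to breach the system ($)
    personal_risk: float      # expected cost to reputation, safety, or freedom ($)

    def attack_is_attractive(self) -> bool:
        return self.benefit > self.security_strength + self.personal_risk

weak = Target(benefit=10_000, security_strength=2_000, personal_risk=500)
strong = Target(benefit=10_000, security_strength=15_000, personal_risk=500)

assert weak.attack_is_attractive()
assert not strong.attack_is_attractive()
```

In this toy model, raising security strength past the point where deterrent costs exceed the attack's benefit is what makes the stronger target unattractive.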
Because security strength measures the resource cost of breaching a system’s secu-
rity, it is fundamentally an economic measure. Formal methods of Computer Science,
such as information theory, have been used to address questions of security strength
when these questions can be translated into assertions of what can and cannot be
calculated. For example, the one-time pad cipher [100] and Shamir’s secret shar-
ing algorithm [99] have been proven to be secure against certain classes of attack,
regardless of the resources available to the attacker. However, the applicability of
computational approaches to security strength is severely limited. Computer systems
and networks, which make up the environment in which security mechanisms are de-
ployed and in which adversaries attack, have become too complex to be addressed
using formal proofs of computability.
In this dissertation I overcome these limitations by providing an economic ap-
proach for measuring the security strength of software in units of dollars. Because
this methodology does not rely on computational assumptions, it can be applied in
situations where purely computational methods are not applicable.
1.1 Economic approaches to security
Economic approaches have only recently been introduced into the study of com-
puter security. In his seminal paper, Ross Anderson [9] asserts that perverse incentives
to create insecure systems have caused as much harm as technological shortcomings.
Both Anderson [7] and Hal Varian [113] describe how security failings result when
those parties who are charged with protecting systems are not the parties who must
pay the consequences when security is breached. Examples include ATMs deployed
in Europe, which banks were tasked with securing but for which customers paid the
costs of ATM fraud, and distributed denial of service attacks, in which computers
secured by home users were compromised by hackers and then used to attack others'
networks.
Other researchers have gone beyond using economics to describe problems of in-
formation security and have used economic approaches in formulating solutions. Jean
Camp and Catherine Wolfram [23] proposed addressing the problem of denial of ser-
vice attacks through governmental issue of vulnerability credits (similar to pollution
credits). In their proposal, owners of systems left vulnerable to network attack are
made to pay for the negative externality that results when these systems are compro-
mised and used to attack other systems.
Researchers have also applied economic models to security decision making and
measurement. Lawrence Gordon and Martin Loeb [52] have used economic models
to bound the amount that a firm should spend on security as a function of the size of
the potential loss. Kevin Soo Hoo [59] proposed measuring changes in security risk
by extending metrics used to gauge the absolute level of security risk. By doing so,
these metrics could be used to choose the most effective bundle of safeguards possible
within one’s budget constraint.
At the first Workshop on Economics and Information Security, many researchers
argued that the software industry should borrow from the quantitative models of
safety and reliability used in the insurance industry [16, 41, 96]. They reasoned that
if insurance companies can model the safety of a car and driver, or gauge the risk of
the failure of a safety system at a factory, then similar models could be developed to
assess software security risks. They proposed that insurers could then drive security
investment by discounting policies to firms that implemented safeguards that reduce
risk. While insurers could play an important role in managing security risks if accurate
risk models were available to them, in the past these firms have relied on actuarial
tables to gauge risk. This approach is acceptable for safety risks for which historical
data can be used to forecast future event likelihoods. However, the actions that cause
security breaches in computing systems are strategic, and the strategies available to
both the adversaries and those who secure systems are in a constant state of flux.
As such, no actuarial table or risk model available today can accurately forecast the
effect of choosing one security strategy over another.
1.2 A new approach
Past research has left open the problem of quantitatively measuring the security
strength of software and quantitatively forecasting security risk metrics. If we are to
make good security decisions today we need to be able to estimate the effect each
of these choices will have in reducing our security risk in the future. Because the
security strength of a system deters attack, the influence of security strength should
be accounted for when forecasting security risks.
Most existing security risk models already take into account the deterrent effect of
the personal risk incurred by an adversary who chooses to attack a system. However,
for networked software, security strength often has a greater effect in deterring attack
than the presence of risks that an adversary must incur to stage an attack. After all,
there is little risk to searching software for vulnerabilities when that software can be
copied to one’s own computer and tested in private. When vulnerabilities are found,
attackers can minimize their personal risk by staging attacks remotely and rerouting
their communications in order to hide their identities.
Measures of security strength have additional uses beyond forecasting security
risk. They can be used to improve processes for making systems stronger, and they
can help consumers differentiate more secure systems from less secure ones.
This dissertation is structured to attack these problems in a bottom-up manner.
Before we can address problems of measuring and forecasting security metrics we
must first formalize how these questions should be posed. The study of security
lacks a common language, and so I begin in Chapter 2 by introducing a lexicon and
conceptual framework for use in addressing questions of security. The terms and
approach are not new, but are rather the distillation of language and techniques from
a diverse body of security and risk management literature.
I then detail barriers that have limited the measurement and forecasting of security
risk and strength in Chapter 3. This background material includes relevant history of
security strength from century-old ciphers to public key cryptography. It also covers
risk management, a field developed to model reliability and safety risks posed by
natural failures that has recently struggled to adapt to the security risks posed by
adversaries.
In Chapter 4, I introduce a methodology for measuring the security strength of
software systems. I argue that the strength of systems with known vulnerabilities is
negligible, and that the security strength of systems with no known vulnerabilities is
bounded by the market price to obtain a new vulnerability. The metric I propose is
the market price to discover the next new vulnerability, rather than the actual costs
expended to discover a vulnerability, because the value must be measured under
circumstances in which the next new vulnerability has yet to be found.
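The pricing rule argued for here can be stated compactly. This is my own restatement of the bound as code, not an implementation from the dissertation, and the function and parameter names are illustrative.

```python
# Sketch of the strength metric described above: a system with a known,
# unexploited-but-unpatched vulnerability has negligible strength;
# otherwise strength is bounded by the market price of discovering the
# next new vulnerability.

def security_strength(known_unpatched_vulns: int,
                      market_price_next_vuln: float) -> float:
    """Security strength in dollars, per the bound described above."""
    if known_unpatched_vulns > 0:
        return 0.0  # attack is only as hard as obtaining the known vulnerability
    return market_price_next_vuln

assert security_strength(2, market_price_next_vuln=50_000.0) == 0.0
assert security_strength(0, market_price_next_vuln=50_000.0) == 50_000.0
```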
One immediate application of security strength metrics, their use in differenti-
ating products, is introduced in Chapter 5. This new approach, which applies the
framework of the previous chapter, is significant because the lack of security metrics
has left consumers unable to distinguish more secure products from less secure ones.
When consumers cannot make this distinction, software developers cannot justify
investments in making their products more secure.
In Chapter 6, I integrate market methods for security measurement into the soft-
ware development process so that programs can be built to be more secure from
the start. The continuous measurement of security strength during the development
process enables software development firms to create incentives for developers and
testers to increase the security strength of the firm’s products. These new techniques
have the potential to transform the software development process so that trade-offs
between security, cost, and time to market can be better understood and managed.
With tools in hand to measure and improve security strength, I show in Chapter 7
how to better model and forecast security risks by employing measures of security
strength. As evidence for the utility of security risk forecasting models, I explain why
these models have been successful outside the realm of computer security, against
threat scenarios where the effect of security strength in deterring adversaries has
been negligible.
What remains is to forecast how the introduction of new threat scenarios, for
which historical data is not available, will affect risk. Anticipating new threats is es-
sential because no analysis of past security incidents can prepare one for the potential
risks posed by future threat scenarios that can result from radical changes in an
adversary's strategy or technology, and thus fall outside of existing risk models. New
and evolving threats have the potential to affect a variety of systems and a potentially
catastrophic number of installations of these systems. The best we can do to prepare
is to anticipate new threats, evaluate their viability, and do our best to understand
their impact on our security risks without using historical data, as none is available.
Chapter 8 shows one approach to modelling a specific class of threat scenario: attacks
on large numbers of systems with the intent of financial gain.
When combined, the techniques and methods presented in this dissertation can
be used to answer the questions with which it began. We can measure how secure
a software system is by determining the market price of a vulnerability. We can
forecast how secure a system needs to be by applying models of security risk that
employ security strength. Similarly, I show that these models can be used to gauge
how much security can be improved by putting safeguards into place.
Thus, the tools I introduce for the measurement and improvement of security strength
and the models that I refine to better forecast security risk can be used to create a
comprehensive quantitative approach to security. These quantitative techniques can
be immediately applied to measure and improve the strength of existing software. As
we acquire security strength statistics, we can begin to employ security risk models
to forecast risk even in newly released software.
Chapter 2
What is security?
Security
1. The process of identifying events that have the potential to
cause harm (or threat scenarios) and implementing safe-
guards to reduce or eliminate this potential.
2. The safeguards, or countermeasures, created and main-
tained by the security process.
At its simplest, security is the process of protecting against injury or harm. Secu-
rity also describes the safeguards, or countermeasures, put in place by that process.
In computer security, harm implies a loss of desired system properties such as confi-
dentiality, integrity, or availability. The goals of security may be distinguished from
those of reliability in that they focus on preventing injury or harm resulting not only
from random acts of nature, but also from the intentional strategic actions of those
with goals counter to your own.
Used to describe a diverse and ever growing set of processes, the word ‘security’
appears to be condemned to imprecision by the weight of its own generality. If
security is, in the words of Bruce Schneier, ‘meaningless out of context’ [95, page
13], can a useful definition remain relevant in the face of ever changing technological
contexts? To address these matters, I have presented at the top of this chapter
a general definition of security as the combination of two sub-processes that are
themselves free of references to any specific technology or application.
In this chapter I will further refine the concepts of threat scenarios and safeguards
on which this definition of security is built. I will describe existing tools for modelling
threats and discuss their limitations. Finally, I will argue that this definition is
sufficiently general for modelling questions of security as they have changed over
time.
2.1 Threat scenarios
Threat Scenario
1. A series of events through which a natural or intelligent
adversary (or set of adversaries) could cause harm.
2. (In Computer Security)
A series of events through which a natural or intelligent
adversary (or set of adversaries) could use the system in an
unauthorized way to cause harm, such as by compromising
the confidentiality, integrity, or availability of the system’s
information.
Like security, safeguards must be understood within a context. When presented
with a new safeguard the first question one is likely to ask is what is it intended
to guard against? This is why the security process, as defined above, begins by
describing chains of events with undesirable consequences that we wish to avert. We
call these chains of events threat scenarios.
In the security and risk management literature, the word threat does not refer
to the adversary who may cause harm, but to the chain of events that begin with
the adversary (who may also be known as a ‘threat source’) and end in damage or
harm [42, page 37] [79, page 21]. The word scenario is also used to describe such
chains of events [3, 6, 42, 67]. Threat scenarios, as I will call them,[1] provide the
context for thinking about security. By establishing a common understanding of the
events that lead to harm in a threat scenario, parties can better agree on what is at
stake and what safeguards may reduce the risk. Exploring threat scenarios by adding
detail not only helps us understand what can go wrong to cause security failures, but
also helps us ask how and why these events may occur and who may be likely to cause
them to occur.
The information security literature divides threat scenarios into three data-centric
categories, based on what desired property of the data is lost: confidentiality, integrity,
or availability. These basic threat scenarios might be written as follows.
Information is exposed to someone who should not have access to it.
Information is modified in a manner contrary to policy.
Authorized users are prevented from accessing information or resources in a
timely manner.
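The three basic threat scenarios above can be rendered as a small data model, which also captures the definition of a threat scenario as a chain of events, begun by a threat source, that ends in the loss of a desired property. The class and field names below are my own illustrative choices, not terminology from the text.

```python
# Illustrative sketch: a threat scenario as a chain of events that ends
# in the loss of confidentiality, integrity, or availability.
from dataclasses import dataclass
from enum import Enum

class LostProperty(Enum):
    CONFIDENTIALITY = "information exposed to someone without access rights"
    INTEGRITY = "information modified contrary to policy"
    AVAILABILITY = "authorized users denied timely access"

@dataclass
class ThreatScenario:
    adversary: str        # the threat source
    events: list[str]     # the chain of events leading to harm
    harms: LostProperty   # which desired property of the data is lost

eavesdropping = ThreatScenario(
    adversary="employee with a packet sniffer",
    events=[
        "connects laptop to office LAN",
        "captures managers' emails in transit",
        "sells confidential attachments to a competitor",
    ],
    harms=LostProperty.CONFIDENTIALITY,
)

assert eavesdropping.harms is LostProperty.CONFIDENTIALITY
```

Adding detail to a scenario corresponds to appending or subdividing entries in the event chain, which is exactly the refinement process discussed below.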
As we dig deeper into the who, how, and why harm may occur, the descriptions of
these threat scenarios can become much more detailed. Taking the first basic threat
scenario, described above, and detailing how a violation of confidentiality could come
about might result in the description of the following chain of events.

An employee uses a laptop running a packet sniffer in order to eavesdrop
on data sent over the local area network connected to his office. He
or she reads confidential management documents attached to emails as
they are in transit between managers, and then sells these documents to
a competitor.

If the events described above occur, then we say that security has been breached,
or that the threat scenario has been realized. The more detailed the description of
the threat scenario, the better we can understand the likelihood of a breach, the
potential harm a breach may cause, and what we can do to prevent this scenario from
being realized.

[1] I will occasionally use 'threat' alone to describe the most basic threat scenarios
in which no means of attack is specified.
For example, a firm with little need for confidentiality might examine the above
scenario and determine that the consequences do not justify additional security mea-
sures. Another organization may surmise that this specific scenario is already well
guarded against. This may be true if network jacks lead directly to switches, rather
than hubs, such that network cables only carry information that originates or
terminates at the machine connected to the data jack.[2] Another organization might
conclude that the risk posed by this threat scenario can be reduced if managers are
encouraged to share files using a secure file system, rather than an insecure email net-
work. All of these conclusions can be made because the threat scenario is described
in adequate detail.
On the other hand, dividing general threat scenarios into many unnecessarily
detailed sub-scenarios may make any form of analysis intractable. We might ask if the
threat scenario above need only apply to situations in which an employee eavesdrops
using a laptop and his own network connection, or whether a single threat scenario
should be used to describe all network eavesdropping attacks. Similarly, we may ask
whether we need separate threat scenarios for each possible motive for the theft of
information, or whether we should simply assume the most compelling motivation we
can conceive of.

[2] Note that while the location of the network switch may counter this specific threat
scenario, it will not safeguard against all other eavesdropping scenarios.
Containing the potentially exponential explosion of increasingly detailed threat
scenarios is of particular concern as past approaches to security and risk management
have failed due to the multitude of scenarios generated [59, page 7]. As we start with
more general threat descriptions and divide them into more detailed ones, we must
be sure only to make these divisions when the value of the insights gained outweighs
the cost in added complexity. Fortunately, procedures for generating and detailing
threats are well known, and are described in Section 2.3.
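Section 2.3.1 and Appendix A discuss trees as one such procedure for organizing scenarios. A minimal sketch of an attack tree, in which OR nodes offer alternative attacks and AND nodes require every sub-goal, might look like the following; the tree structure and the dollar costs are my own hypothetical illustration.

```python
# Minimal attack-tree sketch: an OR node is satisfied by its cheapest
# sub-attack, while an AND node requires every sub-goal. The cost of the
# cheapest path through the tree bounds the attack's resource cost.

def cheapest_attack(node):
    kind, payload = node
    if kind == "leaf":
        return payload  # cost of this atomic attack step ($)
    costs = [cheapest_attack(child) for child in payload]
    return min(costs) if kind == "or" else sum(costs)

steal_documents = (
    "or",
    [
        ("leaf", 500),                       # bribe an employee
        ("and", [("leaf", 200),              # sniff the office LAN
                 ("leaf", 400)]),            # decrypt the captured email
    ],
)

assert cheapest_attack(steal_documents) == 500
```

Refining a scenario corresponds to replacing a leaf with a subtree, which is precisely where the explosion in detail must be weighed against the insight gained.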
2.2 Safeguards
Safeguard
A policy, process, algorithm, or other measure used to prevent
or limit the damage from one or more threat scenarios.
Synonyms: countermeasure, control, security measure
Safeguards act to prevent or reduce the damage caused by the realization of one
or more threat scenarios. Safeguards are also often called countermeasures, controls,
or security measures, though the last of these terms will be used sparingly to avoid
confusion with the process of measuring (gauging the level of) security. Safeguards
may take any number of forms from physical barriers, to sensors, to software
algorithms.[3] Safeguards needn't even be objects unto themselves, but may encompass
investments in improving existing policies, procedures, or processes such as design,
development, or quality assurance. Safeguards can be complements, working more
effectively in combination than when apart, or substitutes.
When adding a safeguard to a system, one must consider that an adversary might
break through the protections that the safeguard offers, or find a way to bypass the
safeguard entirely. Thus each safeguard introduced may lead to the introduction of
additional, more detailed, threat scenarios that describe events in which safeguards
are circumvented or penetrated. These new detailed scenarios may in turn lead to
the introduction of new safeguards. Security is often referred to as an ‘arms race’
because of this cycle in which new safeguards and new plans of attack are introduced
to counter each other. The process of adding safeguards, and responding to new threat
scenarios targeting those safeguards, terminates when all remaining unimplemented
safeguards are impractical or uneconomical.
Those charged with securing systems may disregard threat scenarios when the
responsibility for implementing safeguards lies outside the system’s boundaries. For
example, printers are not expected to control access to documents released into their
output trays. Rather than requiring authentication to take place at a printer before
documents are released, the onus to safeguard against printed documents reaching
the wrong hands has been placed upon organizational procedures for locating print-
³ Using a single term to encompass organizational procedures and algorithms is not revolutionary.
The use of the word ‘software’ to refer to organizational procedures appears in risk management
texts as early as 1980 [105, page 20].
ers and distributing the documents that they print. Defining clear boundaries for
the components of a system and their inputs and outputs is known as a systems ap-
proach [105, page 11] and its necessity is widely accepted. Well-defined boundaries are
essential for determining which threats a system component (itself a system) should
safeguard against, and which threats can be disregarded and countered elsewhere
within the larger system in which the component will reside. The documentation of
these delineations of responsibility is essential.
When using a systems approach one sees that security is not a single feature
one can buy in a product. Rather, the product development process and products
themselves contain safeguards which, if properly combined with the safeguards of the
systems and organizations that the product is integrated into, may help to counter
threats. Careful specification of system boundaries and responsibilities is necessary
to prevent common security failures.
2.3 Expanding and organizing threat scenarios
To choose the right safeguards to protect a system, it is necessary to understand
the threat scenarios faced by that system. The first steps in this process are to find
all plausible threat scenarios, add detail as necessary, and organize them so that we
can best determine the effect of safeguards.
We begin the threat scenario discovery process from the simplest point possible:
enumerating the most basic threat scenarios faced by the system. These simple
threat scenarios are sometimes just called threats, as they are devoid of most scenario
specifics such as the means of attack, the motive of the adversary, or the asset targeted.
For example, armed robbery and check fraud would be two basic, yet distinct, threats
faced by a bank. These acts are worth distinguishing as separate basic threats as
breaking into a bank requires a very different type of adversary, resources, and skills
than passing a false check.
The choice of basic threats is subjective and is specific to the type of system
being modelled. The occupants of a castle might differentiate the basic threat of
castle wall penetration from external sieges intended to starve the occupants and
avoid a fight. For an operating system, privilege escalation and denial of service
are examples of basic threats. Basic threats need not be entirely independent. For
example, in operating system security, the scenario of privilege escalation may lead
to the distinctly different scenario of denial of service.
2.3.1 Trees and graphs
Once you have identified basic threat scenarios, it is time to add specifics. This is
done by taking individual threat scenarios and expanding them into multiple threat
scenarios, distinguished by either the means of attack, motive, type of adversary,
or asset targeted. For example, a castle penetration scenario may be divided into
additional threat scenarios based on means of attack. Castle walls may be penetrated
by digging tunnels underneath them, using ladders or platforms to scale them, through
an open gate, or by destroying the walls and going straight through them. Each time
a threat scenario is augmented with more specifics a new threat scenario is formed.
A tree may be formed by placing the basic threat scenario at the root, placing the
augmented threat scenarios into its child nodes, and then repeating the process by
Figure 2.1: A tree detailing how a castle’s walls may be penetrated by an invading
army.
expanding each of the child nodes. When nodes no longer benefit from the addition
of further detail, the leaves that remain represent detailed, unique, threat scenarios.
Figure 2.1 is an example of such a tree applied to a castle and the threat of
penetration by an enemy army. The basic threat scenario appears as the root node
at the top level of our tree. The four ways we can envision the walls being penetrated
are represented by the second level of the tree, which contains the children of the root
node. If any of these child threat scenarios should be realized then the parent threat
scenario is also said to have been realized.
While we could expand each of these child nodes, I have only expanded the scenario
in which the enemy passes through the castle gate. There are a variety of ways that
the adversary could carry out the gate-penetration scenario. To illustrate this we
expand this node as we expanded the root node. Its children may represent options
available to the enemy army such as (but are not limited to) bribing the gatekeeper,
forcing the gate open, or even emulating the ancient Greeks by hiding soldiers in a
statue of a horse.
The relationship between the “hide in gift horse” node and its child scenarios is
different than the other parent/child relationships in the tree. In order for the gift
horse scenario to be realized, the enemy must have the means to perform not just
one, but all of the tasks described by the child nodes. If the horse is not brought
inside the walls, or if the soldiers do not escape and open the gate, then all of the
attacker’s other steps will be for naught. Many different representations have been
used to indicate this all or nothing requirement, from AND gates (borrowed from the
field of computer architecture) to semicircle connectors that link the edges between
the parent and those child nodes that must be realized together. The latter is used
in Figure 2.1, with the addition of an arrow at the end of the semicircle. The arrow
represents a further requirement, that the events in each of the child nodes occur in
the sequential order given by the direction of the arrow. After all, the attack will fail
if the attackers convince the occupants to take the horse and then realize they have
yet to hide soldiers inside.⁴
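The tree semantics just described can be sketched in a few lines of code. This is only an illustration: the node labels are my own shorthand for the castle example, and the sequential-order arrow is not modelled, only the all-or-nothing AND requirement.

```python
# A minimal sketch of threat-tree evaluation. Node labels are illustrative.
# An OR node's scenario is realized if any child scenario is realized; an AND
# node's only if all of its children are realized.

def realized(node, events):
    kind, label, children = node
    if not children:
        return label in events                # leaf: a detailed threat scenario
    results = [realized(child, events) for child in children]
    return all(results) if kind == "AND" else any(results)

gift_horse = ("AND", "hide in gift horse", [
    ("OR", "build gift horse", []),
    ("OR", "horse brought inside walls", []),
    ("OR", "soldiers escape and open gate", []),
])
through_gate = ("OR", "enter through castle gate", [
    ("OR", "bribe gatekeeper", []),
    ("OR", "force gate open", []),
    gift_horse,
])

print(realized(through_gate, {"bribe gatekeeper"}))   # True: one OR child suffices
print(realized(through_gate, {"build gift horse"}))   # False: AND requires all steps
```

Realizing any single child of the OR root realizes the basic threat, while the gift-horse branch demands that every step succeed.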
A more contemporary example of a partially completed threat tree is shown in
Figure 2.2, which shows how an outsider (or nonuser) can obtain root access to an
operating system. For each feature described in the castle example, an analogous
feature is present in this example.
Originally called fault trees [18], these tree-based threat enumeration techniques
were first used to model natural threats for safety analyses. The first application of
fault trees, in 1961, was to ensure the safety of missile launch control systems [117].
Fault trees were quickly adopted by the Nuclear Regulatory Commission [115] and
⁴ Fans of the movie “Monty Python and the Holy Grail” [24] may recognize this scenario from
King Arthur’s failed castle siege.
Figure 2.2: A tree detailing how an outsider (nonuser) might gain access to a net-
worked operating system.
NASA [64]. They have been applied not only in preventing failures, but also to
analyze the cause of accidents after they have occurred. They were used after the
Apollo 1 disaster, Three Mile Island, and the losses of the space shuttles Challenger
and Columbia [64, 12]. As fault tree analysis has been adapted to include adversarial
threat sources for use in security analysis, the resulting trees have also been called threat trees [4, 62]
or attack trees [93, 94]. In addition to the graphical representation of trees, complex
trees are often represented textually as one would represent the structure of an outline
or a table of contents. Those interested in how these approaches evolved and where
they differ are encouraged to read Appendix A.
Because fault trees evolved from the study of safety where the adversary (nature)
has no sense of motive, the expansion of nodes focused exclusively on the events of
the scenario and not the actors. Events were modelled as resulting from stochastic
processes. Because these systems all faced the same, consistent, natural adversary
it was possible to move and reuse tree branches from system to system. Safeguards
could be added at the very end of the process because their efficacy, in the context of
safety, could be reliably estimated using historical data.
One point of contention when using trees to model those threat scenarios that are
posed by intelligent adversaries is where to place safeguards. Some assume the system
being analyzed is static and so all countermeasures are assumed to be existing parts
of the system [93]. Microsoft’s Howard and LeBlanc propose mitigation circles [62]
be placed below the leaves of the tree, with each circle addressing the threat scenario
described by the leaf above it. They even admonish the reader not to focus on
safeguards during the threat modeling process.
Note that you should not add the mitigation circles during the threat-modeling process. If you do you’re wasting time coming up with mitigations; Rather, you should add this extra detail later. [62, page 91]
The problem with placing safeguards below the trees is that many threat scenar-
ios involve attacks on the safeguards themselves. When Howard and LeBlanc place
encryption protocols in mitigation circles at the bottom of their trees, they leave no
room in which to expand the attacks on the encryption algorithm or protocols they
have chosen. As Anderson and Kwok have pointed out, safeguards do not eliminate
threats, but rather transform them into less plausible threats [5, page 246].
Leaving safeguards out of the trees entirely has consequences as well. In Schneier’s
example attack tree [94, Figure 21.1], in which the basic threat scenario is the opening
of a safe, both “pick lock” and “learn combo” appear as potential attacks. The
relevance of each of these nodes, and the threat scenarios they represent, depends on
the choice of a countermeasure (lock) with which to safeguard the door: a key lock
may be picked and a combination may be guessed. If all possible countermeasures are
assumed to be present, the tree will grow unwieldy. If any analysis is to be performed
it would first be necessary to remove those scenarios that target countermeasures that
are not present in the system.
Instead of trees, Caelli, Longley, and Tickle [22] use directed graphs that inte-
grate safeguards by representing them as nodes, placed as needed, throughout the
diagram. Safeguard nodes are placed below threat nodes to prevent the threat sce-
narios represented by those nodes from being realized. Additional threat nodes can
then be placed below a safeguard node to represent attacks on that safeguard, and
so the process iterates. The graph terminates at those threat scenario nodes that
do not pose a great enough risk to justify further countermeasures. Safeguards may
be similarly integrated into a tree based approach, though Directed Acyclic Graph
(DAG) representations are more compact. Whereas a single safeguard may counter
a number of threats in a DAG, the safeguard node and all of its children must be
replicated in a tree.⁵
Regardless of whether scenario diagrams are trees or DAGs, integrating safeguards
throughout the representation can be beneficial for understanding their effect. Placing
safeguards into the diagram makes explicit the modeler’s assumptions about where
safeguards are deployed and what scenarios they are intended to counter. Adding
safeguards into the threat modelling process also increases the chance that threat
scenarios in which the safeguards are attacked or bypassed will be included in the
model.
Figure 2.3 is a threat scenario diagram representing the opening of a safe, adapted
from one of Schneier’s attack trees [94, Figure 21.1], in which safeguards have been
added in the form of transparent boxes with rounded corners. The safe being modelled
⁵ DAGs may be converted to a tree by making copies of each child node, and all of its children,
for each of its parents.
Figure 2.3: A Directed Acyclic Graph (DAG) representing the threat scenarios that
result in the opening of a safe and the countermeasures used to safeguard against
them.
contains both a combination lock and a key lock, configured so that both must be
unlocked if the door is to be opened. In addition, the safe door’s hinge is placed
on the inside of the safe to make it harder to remove or destroy. The diagram is a
Directed Acyclic Graph (DAG) and not a tree, as tree nodes cannot have multiple
parents. For example, the two different lock safeguards share a single child node that
represents that a key, be it metallic or a combination code, could be obtained from
an insider.
Safeguard nodes may also have multiple parents. For example, audits that safe-
guard a system by simulating attempts to obtain keys both discourage employees
from giving out their keys and from leaving their keys unguarded. Similarly, a pro-
hibition against granting both the combination and the key to a single employee will
help to counter two threat nodes, as it reduces the likelihood that the carelessness or
corruption of a single employee can be exploited to open both locks.
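In code, the difference between a tree and a DAG is simply that a shared node is stored once and referenced by each parent rather than copied. The sketch below loosely follows Figure 2.3; the node labels are assumed for illustration, not taken verbatim from the figure.

```python
# Sketch of threat and safeguard nodes in a DAG; labels are assumed for illustration.
# Because DAG nodes may have multiple parents, the shared "insider" threat is
# created once and referenced by both lock safeguards, instead of being copied
# under each parent as a tree representation would require.

insider = {"kind": "threat", "label": "obtain key or combination from insider",
           "children": []}

key_lock = {"kind": "safeguard", "label": "key lock",
            "children": [{"kind": "threat", "label": "pick lock", "children": []},
                         insider]}
combination_lock = {"kind": "safeguard", "label": "combination lock",
                    "children": [{"kind": "threat", "label": "guess combination",
                                  "children": []},
                                 insider]}

open_safe = {"kind": "threat", "label": "open safe",
             "children": [key_lock, combination_lock]}

# The two locks share one child object, not two copies:
print(key_lock["children"][1] is combination_lock["children"][1])  # True
```

A safeguard that counters the shared insider threat (such as the audits described above) would likewise appear once, with both lock-related threat nodes as parents.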
2.3.2 Limitations of threat modelling
No amount of historical research or brainstorming can ensure that all basic threats
will be discovered. New technologies or attack strategies can result in new threat
scenarios, or may uncover threats that had long been present but not discovered. For
example, before the link between disease and germs was established, the threat posed
by poor sanitation could neither be understood nor countered. Before the invention of
the skateboard, noise pollution was not considered a threat when architects designed
concrete steps and ramps. When settling on a set of basic threat scenarios to analyze,
it is important to also look at the rate at which new basic threats have been discovered.
Even when all the basic threats are known, there is no way to ensure that all of
a node’s child scenarios have been discovered. If the scenario of interest has been
modelled with existing threat diagrams you may find assurance in knowing these
diagrams have stood the test of time. However, there is always the possibility that
new attacks will appear.
Finally, countermeasures are not always implemented correctly and even those
that are act only to reduce threats, not eliminate them. Keys may be guessed, well
screened employees may be bribed, and components that functioned during a million
consecutive tests may fail the next time they are used.
Despite the admitted imperfections of the security process, the better threats
and safeguards can be understood the better the effectiveness of the process can be
measured.
2.4 Do threats and safeguards encompass all security models?
There are many reasons why one might be tempted to reject a definition of a se-
curity process consisting of discovery of threat scenarios and placement of safeguards.
One might wonder if such a model is general enough to reflect existing and future
security theory and practice.
One motivation for rejecting the definition of security in this thesis is the impli-
cation that one cannot reach a state of perfect and complete security. Even if perfect
countermeasures existed for each threat scenario, there is no way to know if threats
haven’t been envisioned and thus left unaddressed. The security process, as I have
described it, also bears an unfortunate resemblance to the oft-criticized ‘discover and
patch’ approach to security, in which the security process encroaches into the period
after a product’s release. One might also ask where decades of progress creating
formal security models, such as those used to prove statements about cryptographic
primitives or network protocols, fit into this framework.
In fact, such formal models have always been based on scenarios. The threat
models used by cryptographers act to separate those scenarios that their algorithms
and protocols protect against from those circumstances that they cannot control.
The guarantees of formal models have always been limited to a set of known threat
scenarios used to construct the models. If new methods of attack are found outside of
those threat scenarios, the system may not be secure despite the assurances of formal
methods. In the words of Dorothy Denning [33], “Security models and formal methods
do not establish security. Systems are hacked outside the models’ assumptions.”
Given that security, especially computer security, is a constantly changing field,
one might also ask if the definition and approach will stand the test of time. While
one cannot anticipate all future events, we can apply this security process to historical
examples to see whether it remains timely.
One might consider the security offered by the safeguards of a castle in protecting
the kings and nobles of medieval times from their enemies.⁶ One can imagine the
threat posed by having to fight an advancing line of enemy knights, and how gates
and strong lower walls would initially limit the number of forces that the defense
would need to face at a time. High upper walls might be added to counter the threat
posed by the archers’ flying arrows. Still more countermeasures, such as moats, were
required as an adversary might tunnel under the castle walls, scale them on a ladder
or belfry (a mobile siege tower), or attempt to destroy the walls by slamming them
with a battering ram or a large projectile. A scenario-based approach would also
remind the king that his castle could not protect him from all possible threats, such
as poisoning by his kitchen staff. He’d need a food taster for that.
Castles even provide an example of how new technologies can render insecure
those systems that had seemed impenetrable when they were designed. Towards the
end of the Middle Ages, the range and size of projectiles fired from trébuchets and
cannons increased. Walls could no longer substitute for the combined safeguards of
a strong army and powerful weapons. Those that believe that we could completely
eliminate the problems of updating (patching) systems if only we improved the design
and testing process may also learn from this example.
⁶ For a historical description of medieval siege tactics and countermeasures, see The Medieval
Fortress by Kaufmann and Jurga [66].
2.5 Chapter summary
In this Chapter, I defined security in terms of threat scenarios and the safeguards
that are put in place to counter them. I showed how graphs, most commonly in the
form of trees, can be used to model the interaction between these threat scenarios and
safeguards. I also addressed the limitations of these approaches to threat modelling.
Finally, I argued that the definition of security provided in this chapter is generally
applicable, and will be as useful for discussing the security questions of tomorrow as
it is for understanding the problems faced by our ancestors.
Chapter 3
Why measuring security is hard
3.1 Security risk
From home users, to firms, to governmental agencies, those of us who rely on our
systems to be secure would like to be able to forecast the risk to these systems and
the effectiveness of different security strategies in reducing these risks.
From the perspective of a business, security is an investment to be measured in
dollars saved as a result of reduced losses from security breaches, or in profits from
new ventures that would be too risky to undertake without investments in security. As
a result, security modelling often falls under the control of a firm’s risk management
function.
Government also considers information security, and the effectiveness models that
guide decisions, to be a matter best addressed through risk management. The Com-
puter Security Act of 1987 [112] mandates that government agencies should have
security plans “commensurate with the risk and magnitude of . . . harm.” The guide-
lines for such a plan take the form of the Risk Management Guide for Information
Technology Systems [109] from the National Institute of Standards and Technology
(NIST) which defines risk management as “the process of identifying risk, assessing
risk, and taking steps to reduce risk to an acceptable level.” However, these guidelines
lack a quantitative method for modelling risk in order to assess it.
The risk management literature includes safeguards among the responses to the
risk posed by threat scenarios. These responses are avoidance, assumption, limita-
tion, and transference [69, 80, 109]. One avoids an optional risk by choosing not
to participate in the risky activity, sacrificing the opportunity to benefit from that
activity but safeguarding against the risk that the activity could result in a breach.
When risk is unavoidable or one chooses to accept the potential losses rather than
implement additional safeguards, one is said to assume a risk. By buying insurance
one may transfer the risk of loss to another party. If entering a contract presents a
risk, that risk may also be transferred through clauses in the terms of the contract
that assign liability to each party. Finally, one may limit risk by introducing safe-
guards that reduce the likelihood of harmful events or that limit the damage caused
when such events occur.
While the study of computer security has centered around those safeguards that
limit risk, avoidance is now a more commonly accepted security practice. For example,
firewalls can be seen as techniques for risk avoidance, as they require an organization
to forgo the benefits of open networks, such as ease of access, in order to avoid risk.
The choice not to adopt electronic ballots for public elections is also the result of risk
avoidance, as is the current push to ship software with lesser used features turned off
by default.
From the risk management literature a number of metrics have evolved to measure
security risks. The remainder of this section will cover a progression of these metrics,
which are summarized in Figure 3.1, and why procedures for quantitatively measuring
them have remained elusive.
3.1.1 Annual Loss Expected (ALE)
The most common measure for the risk of a harmful event is Annual Loss Ex-
pected, or ALE, which is the product of the expected yearly rate of occurrence of the
event times the expected loss resulting from each occurrence.
ALE = expected rate of loss × value of loss
The annual loss expected from all of an organization’s operations would be the
sum of the expected yearly losses that could result from each threat scenario. Unfor-
tunately, determining accurate inputs to the ALE equation is significantly harder for
security threats than natural ones [6, 59].
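In code, with hypothetical scenario names and dollar figures of my own choosing, the ALE calculation and its sum over an organization's threat scenarios amount to:

```python
# Annual Loss Expected for a single threat scenario, and the sum over all
# scenarios. The scenario names and figures below are hypothetical.

def ale(rate_per_year, loss_per_event):
    """ALE = expected yearly rate of occurrence x expected loss per occurrence."""
    return rate_per_year * loss_per_event

scenarios = {
    "network intrusion": (0.5, 200_000),    # 0.5 events/year, $200k per event
    "insider theft":     (0.25, 400_000),   # 0.25 events/year, $400k per event
}
total_ale = sum(ale(rate, loss) for rate, loss in scenarios.values())
print(total_ale)  # 200000.0
```

The arithmetic is trivial; as the text explains, the difficulty lies entirely in obtaining trustworthy inputs for the rates and losses.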
If a risk manager is calculating the ALE of losses due to a natural threat source,
such as an earthquake or degradation (wear and tear) of components, she can fore-
cast the expected rate of future occurrences by using the simplest possible model,
Annual Loss Expected       ALE = (rate of loss) × (value of loss)
Savings (reduction in ALE) S = ALE_baseline − ALE_with new safeguards
Benefit                    B = S + (profit from new ventures)
Return On Investment       ROI = B / (cost of safeguards)
Internal Rate of Return    IRR solves C_0 = Σ_{t=1}^{n} (B_t − C_t) / (1 + IRR)^t

Figure 3.1: Common metrics used by security risk managers
substituting in the historical rate. This forecast must then be adjusted to account
for recent trends. Models can accurately forecast future events using historical data
only when the statistical relationships on which they are built remain stationary over
time. If the influence of the independent variables on the dependent variable changes,
the model’s forecast and error estimates will be unreliable. This stationarity require-
ment is difficult to achieve when modelling events caused by strategic adversaries, as
acting counter to known models is often the dominant strategy of these adversaries.
Unlike nature, strategic adversaries learn to attack a system at its weakest point,
improve their skills over time, and thwart attempts to measure their behavior. Even
if adversaries did cooperate, historical data for human threats is lacking [76, page 63].
Between 1988 and 1991, the National Institute of Standards and Technology
(NIST) held four workshops in hopes of improving risk models for information tech-
nology. The dearth of progress is reflected in today’s state of the art. In October of
2001, NIST published its Risk Management Guide for Information Technology Sys-
tems [109], in which it recommended that event likelihoods be estimated into only
three qualitative categories: low, medium, and high. The same three categories are
recommended by the OCTAVE approach [2, 3] from Carnegie Mellon’s Software
Engineering Institute (SEI) and CERT Coordination Center. Unlike quantitative
likelihood estimates, these qualitative categories cannot be used to estimate Annual
Loss Expected (ALE) or other quantitative cost metrics. Upon reading the NIST
guide, the reader is left to wonder how this process can meet its requirement of
enabling senior managers to “use the least-cost approach and implement the most
appropriate controls to decrease mission risk.” [109, page 27]
Gordon and Loeb [52] use a version of ALE that is modified for situations in
which at most one loss will occur. Thus the dollar cost of a loss is multiplied by
the likelihood of a loss, rather than the expected frequency of loss used to calculate
ALE. They model the probability that a breach will occur as a function of the dollars
invested in security. In this theoretical work, they then assume that this security
breach probability function is continuously twice differentiable (no discrete invest-
ments), that the first derivative is negative (breach becomes less likely as investment
increases), and that the second derivative is positive (diseconomies of scale¹). Their
results show upper bounds for optimal levels of security investment against a loss,
informing the risk manager of the maximum that should be spent on safeguards if all
the assumptions of the model hold. Beyond this result, the technique is not intended
to be applied to quantitative risk management decisions, such as which safeguards
to choose.
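For intuition, one simple functional form satisfying all three of these assumptions (twice continuously differentiable, decreasing first derivative, positive second derivative) is p(z) = v/(αz + 1). This is an illustrative choice of mine, with arbitrary parameter values, not a formula taken from Gordon and Loeb's paper.

```python
# An illustrative breach-probability function of the kind Gordon and Loeb assume:
# p(z) = v / (alpha*z + 1), where v is the breach probability with zero security
# investment and z is dollars invested. p is smooth, decreasing (p' < 0), and
# convex (p'' > 0), i.e., each dollar buys a smaller reduction than the last.

def breach_probability(z, v=0.5, alpha=1e-5):
    return v / (alpha * z + 1.0)

p0, p1, p2 = (breach_probability(z) for z in (0, 100_000, 200_000))
print(p0 > p1 > p2)       # True: more investment, lower breach probability
print(p0 - p1 > p1 - p2)  # True: diminishing returns (diseconomies of scale)
```

Under such a function, expected loss from a single potential breach is simply (loss) × p(z), and the optimal investment trades this off against z itself.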
3.1.2 Security savings (S) and benefit (B)
In his doctoral dissertation [59], Kevin Soo Hoo puts aside the problem of estimat-
ing how secure any system is and focuses on measuring the benefits of investments in
additional safeguards. He contends that the benefits of an investment in safeguards
goes beyond the reductions in expected cost of security breaches (decreased losses).
In addition, he adds the expected profits from new activities that could not have been
profitably undertaken without the added security measures. We call the savings S,
and we call the sum of the savings and this new revenue the total benefit, B, of a
¹ Diseconomies of scale means that each dollar invested in security provides a smaller fractional
reduction in security breaches than the previous dollar did.
security investment.
Soo Hoo uses an ALE-based methodology to calculate security savings. The
amount that can be saved by reducing the rate of successful attacks and damage
per successful attack is calculated as the decrease in annual losses that results.
S = ALE_baseline − ALE_with safeguards

Soo Hoo models the effect of a safeguard as causing a fractional reduction, s_i, in ALE.

ALE_with safeguard i = s_i · ALE_baseline, where 0 ≤ s_i ≤ 1

To keep things simple, Soo Hoo assumes that a safeguard has the same fractional
reduction on ALE regardless of the other safeguards implemented. That is:

ALE_with safeguards i,j = s_i · s_j · ALE_baseline for all i, j
There are obvious limitations to the applicability of this model. For one, it over-
states the reduction in risk resulting from the use of safeguards that act as substitutes
for each other. For example, turning off unused local network services when these
services are already inaccessible due to a machine-level (“personal”) firewall is likely
to have a significantly lower impact on risk than if no firewall were present. However,
in Soo Hoo’s model, removing services will result in equal fractional risk reductions
regardless of whether a firewall is already in place. The model also fails to capture
the effects of complementary safeguards. For example, an investment in an intrusion
detection system, which alerts administrators to suspicious network activity, reduces
risk only when paired with a procedure for acting on these alerts.
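Soo Hoo's multiplicative model, and the substitution problem just described, can be illustrated numerically. The baseline ALE and the fractional multipliers below are hypothetical values chosen for the example.

```python
# Soo Hoo's model: each safeguard i scales ALE by its fraction s_i, independently
# of which other safeguards are present. All figures here are hypothetical.

def ale_with_safeguards(ale_baseline, fractions):
    for s in fractions:
        ale_baseline *= s
    return ale_baseline

baseline = 1_000_000
s_firewall, s_disable_services = 0.5, 0.5   # assumed fractional multipliers

savings = baseline - ale_with_safeguards(baseline, [s_firewall, s_disable_services])
print(savings)  # 750000.0

# The model credits disabling local services with the same 50% reduction whether
# or not a firewall already blocks access to them -- overstating the combined
# effect of substitute safeguards, exactly as discussed above.
```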
As with previous ALE based approaches, Soo Hoo’s leaves open the question of
how to forecast the rate at which loss events will occur and how to forecast the reduc-
tions in these rates that will result from adding safeguards. Instead, his methodology
requires as its input the fractional reduction in security breaches that can be ex-
pected from implementing each of the safeguards under consideration. While Soo
Hoo cites data from the fourth and fifth Computer Security Institute/FBI Computer
Crime and Security Surveys [27, 28], and believes that safeguard effectiveness could
be derived from past incident data, he provides no procedure for producing these
forecasts. In lieu of a methodology and data source, his analyses rely on safeguard
efficacy estimates based on “expert judgements” [59, page 52].
3.1.3 Investment return: ROI and IRR
A metric that is quickly gaining in popularity is return on security investment,
also known as security ROI [15, 21, 65, 48, 60]. Assuming that the annual benefit of
a security investment will be received not only in the first year, but in all subsequent
years, security ROI is defined by Blakley [15] as the amount of this annual benefit
over its cost. The benefit is calculated as it was earlier by Soo Hoo [59], by adding the
expected cost savings (reduced expected loss) to the new profit expected from ventures
that could not have been profitably undertaken without the additional safeguards.
The cost is assumed to be incurred immediately.
ROI = (benefit of safeguards) / (cost of safeguards)
    = ((savings from safeguards) + (profit from new ventures)) / (cost of safeguards)
    = (ALE_baseline − ALE_with safeguards + (profit from new ventures)) / (cost of safeguards)
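The security ROI calculation translates directly into code. The dollar figures below are hypothetical inputs chosen for illustration.

```python
# Blakley-style security ROI: annual benefit of safeguards over their cost.
# All inputs below are hypothetical.

def security_roi(ale_baseline, ale_with_safeguards, new_venture_profit, cost):
    benefit = (ale_baseline - ale_with_safeguards) + new_venture_profit
    return benefit / cost

# $300k reduction in expected losses plus $50k of newly enabled profit,
# for safeguards costing $250k:
print(security_roi(1_000_000, 700_000, 50_000, 250_000))  # 1.4
```

As with ALE, the formula itself is simple; the ALE inputs are where the forecasting difficulty lives.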
Gordon and Loeb [53] advocate that firms should discard the above ROI formula and instead use the Internal Rate of Return (IRR, also known as the economic rate of return) because IRR incorporates discounted cash flows for investments that have different costs and benefits in different years. If $C_0$ is the initial cost of an investment, and $C_t$ and $B_t$ are the respective costs and benefits in year $t$, one can solve for the IRR using the following equation:

$$C_0 = \sum_{t=1}^{n} \frac{B_t - C_t}{(1 + \text{IRR})^t}$$
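This equation is typically solved numerically. The sketch below (hypothetical helper names, not from the dissertation) finds the IRR by bisection, relying on the fact that for positive net cash flows the discounted sum falls monotonically as the rate rises:

```python
def irr(c0, net_benefits, lo=-0.99, hi=10.0, tol=1e-9):
    """Solve c0 = sum((B_t - C_t) / (1 + IRR)**t) for IRR by bisection.

    net_benefits[t-1] holds the net cash flow B_t - C_t for year t.
    Assumes positive net flows, so the discounted sum decreases as the rate rises.
    """
    def npv_gap(rate):
        return sum(cf / (1 + rate) ** t
                   for t, cf in enumerate(net_benefits, start=1)) - c0

    while hi - lo > tol:
        mid = (lo + hi) / 2
        if npv_gap(mid) > 0:   # rate too low: discounted flows still exceed c0
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2
```

For instance, a $100 safeguard returning a net $60 in each of two years has an IRR of roughly 13.1%.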
Though they contend that IRR is superior to ROI, Gordon and Loeb also warn
that even the correct rate of return can be used inappropriately. They caution that
rates of return not be used when comparing two investments, as an investment can
have a greater net benefit but lesser rate of return. If enough cash is available to
invest in either of the options, but a firm can invest in only one (the options are
substitutes or mutually exclusive for other reasons), it would be less profitable to
choose the investment with the higher rate of return over that with the greater net
benefit.
Rate of return is useful for determining whether a security investment is justified
given the investor’s cost of capital. Rates of return have been growing in popularity
because the metric is familiar to CFOs and others who control corporate budgets and
approve expenditures.
Unfortunately, calculating either of these rates of return requires that one first calculate security benefit ($B$ or $B_t$), which in turn requires one to calculate ALE. Like our other metrics, security ROI and IRR forecasts are only as accurate as the forecasts of loss event frequencies on which they rely, and today these forecasts use best guesses rather than quantitative models.
3.1.4 The elusiveness of quantitative models
The lack of quantitative studies in computer security is attributed, in part, to a
lack of data on variables that can be shown to influence security risk.
Information collected by CERT provides data about security incidents, but not
about aspects of the larger environment, such as properties of systems that were not
victims of security incidents. Thus, these data are similar to what criminologists
would call victim studies. Broader information is obtained by the CSI/FBI survey,
but the survey’s questions were not crafted to fit the needs of regression studies [82],
especially not the kind that seek to model the effects of different system and safeguard
choices. Both the CERT and CSI/FBI data sources suffer from reporting omissions,
as many firms decide not to report security breaches in fear that they might lose
customers if breach reports were to be leaked to the public.
Surprisingly, studies using regression models have been successful in forecasting
security risk outside of software security in domains such as home security, as we
will see in Chapter 7. These studies are successful because they measure the risk to
systems with a homogeneous architecture (homes), with fairly homogeneous safeguards (deadbolts, alarms), from a threat that remains stationary over time (burglary). Most
importantly, we will see that the adversary at the source of the home burglary threat
is one that is deterred by the risks to his person of capture or harm during the attack,
and that the factors that indicate this personal risk are measurable. Many of the
adversaries faced by computer systems attack from a great distance, and are thus
deterred more by the strength of the system than the personal risk resulting from the
attack. Unlike indicators of personal risk, factors that affect the difficulty of attacking
a system (or the strength of a system to make attack difficult) have been difficult to
measure.
Thus, regression studies in network security that excluded measures of security
strength would suffer from omitted variable bias. Because network attackers are often
deterred by the difficulty of attacking systems, and not risk, one or more indicators
of security strength must be included as independent variables if a model’s forecasts
are to be accurate enough to be useful.
In lieu of a measure of security strength, the best we can do is to substitute independent variables that are believed to be correlated with it. For example, in software, code complexity is often considered to be negatively correlated with security [95], though complexity is itself difficult to measure.² If one found enough independent variables correlated with security strength, such as complexity, security budget, testing period, the recent rate of vulnerabilities reported, and the version number, then one might be able to estimate security strength well enough to then forecast security risks. Even if these results were at first accurate, the moment such a security strength forecasting methodology was published its accuracy would begin to quickly degrade. Software vendors would use the least-cost approach available to increase their software’s security strength scores regardless of whether these actions actually increased the strength of the software. Reports of testing budgets and testing periods would be inflated to make testing appear more rigorous, and code would be condensed to make it appear less complex, all without any change to the strength of the actual product.

² Complexity is often measured in terms of code size. This may not be the right measure, as an increase in the size of source code due to comments may reduce, and not increase, the complexity. An increase in the size of compiled code may be the result of an inclusion of bounds checks, which would also be expected to make the software more secure.
One might attempt a regression analysis on a single system configuration in order
to isolate security strength as a constant that need not be measured. If the systems
studied do not change, one might hope that their security strength would also remain
constant. However, security strength changes the moment a new vulnerability in one
of these systems is discovered. If that system were updated to repair the vulnerability
then the strength would change yet again. The approach of using a single set of
systems and versions is doomed because the systems themselves are not stationary.
Thus, historical data becomes unable to predict future security risks should a new
vulnerability be found or should the system be updated. On the other hand, if
we know the strength of a system and the incentive that would lead adversaries to
look for vulnerabilities, we could look to historical data to tell us how often new
vulnerabilities have been found and attacks staged in similar situations in the past.
In the end, there’s no substitute for a direct means to measure security strength.
An alternative approach to calculating the likelihood of a security breach might appear to be available in the tools of fault tree analysis, using the structures introduced in Chapter 2. The goal of this analysis is to determine the likelihood of the
basic threat event at the root of the tree. The technique is based on the observation
that for any parent node, the probability that the event will occur is a function of the
probabilities that the child nodes will occur. Assume that the event represented by a
parent has two children, $c_1$ and $c_2$, which occur with probabilities $P(c_1)$ and $P(c_2)$ respectively, either of which trigger the parent event. The laws of probability tell us that the probability that the parent event will occur is

$$P(c_1 \cup c_2) = P(c_1) + P(\overline{c_1})\,P(c_2 \mid \overline{c_1}),$$

which will simplify to $P(c_1) + (1 - P(c_1))\,P(c_2)$ if the two child events are independent. If both child events must occur to trigger the parent event then the likelihood of the parent event is

$$P(c_1 \cap c_2) = P(c_1) \times P(c_2 \mid c_1),$$

which simplifies to $P(c_1) \times P(c_2)$ if the child events are independent.
If the parent event has more than two children, the analysis can be performed
iteratively over all the children of the parent so long as the laws of precedence are
respected. In other words:
$$\begin{aligned}
P(c_1 \cup (c_2 \cup c_3)) &= P(c_1) + P(\overline{c_1})\,P(c_2 \cup c_3 \mid \overline{c_1}) \\
                           &= P(c_1) + P(\overline{c_1})\,P(c_2 \mid \overline{c_1}) + P(\overline{c_1})\,P(\overline{c_2} \mid \overline{c_1})\,P(c_3 \mid \overline{c_2} \cap \overline{c_1})
\end{aligned}$$
One need also account for the possibility that one has not anticipated a threat that would result in an additional child node. The longer a given portion of a tree has been in use, the less likely it is that new child nodes will be discovered. One can use Bayesian stochastic models, based on the previous rate of discovery of child nodes for the node in question, to estimate the likelihood that a new child node will be discovered over a given time period. In calculating the likelihood of a security breach at a given node, one can then combine the likelihood of a breach at known child nodes with the likelihood of a breach at a previously undiscovered child node.

Unfortunately, this tree-based analysis does not solve the problem of estimating probabilities, but rather pushes the problem down to the leaf nodes. The likelihood of a breach at any given node cannot be calculated until all the leaf nodes below it are calculated. Thus, the analysis does not eliminate the need for regression analysis or other means of obtaining security risk statistics, but instead enables us to move the task of calculating event probabilities down to the level of those events represented by the leaf nodes.
3.2 Security strength
Safeguards have long been measured by how difficult they are to circumvent.
Metrics of security strength attempt to quantify the time, effort, and other resources
that must be consumed in order to bypass a system’s safeguards. Whereas metrics
of security risk are useful from the perspective of those the safeguards are intended
to defend, strength metrics are intended to be viewed from the perspective of
the adversary. For example, strength metrics might tell us how a castle’s safeguards
may increase the number of soldiers, equipment, time, or motivation required to lay
a siege with a reasonable expectation of success.
The history of security strength metrics is perhaps best understood through the development of ciphers and cryptography. A cipher is a set of rules for encoding and
decoding messages so as to ensure their confidentiality and integrity. A good cipher
should require little effort to encode or decode a message with a key. The security
of a cipher rests on how difficult it is to correctly decode a message if one does not
possess the decryption key. Threat scenarios against ciphers are often differentiated
by the amount and type of information available to the adversary. For example, a
known-text attack is a threat scenario against a symmetric cipher in which the adversary
analyzes both the plain-text and enciphered copies of one or more messages in order
to derive the decryption key.
Many have made the mistake of assuming that a complex cipher, that appears
impenetrable to those who use it, will confound all adversaries who try to decode
it. One early example of such a mistake took place in 1586 when Mary Queen of
Scots, imprisoned by her cousin Queen Elizabeth of England, received an offer of
rescue enciphered in a letter.³ Rather than accept indefinite imprisonment, Mary
chose to trust the strength of her cipher and risked a reply in the affirmative. This
was an act of treason that could lead to almost certain death if the letter fell into
the wrong hands and could be deciphered. The risk undertaken by Mary was very
much a function of the strength of the cipher in resisting cryptanalysis by Queen
Elizabeth’s agents. Those agents, empowered with the resources of a ruling monarch,
could dedicate more time and effort to breaking the cipher than Mary’s agents had
been able to. As Mary was a threat to Elizabeth’s throne, Elizabeth also had the
motivation to expend these resources. In the end, Elizabeth’s codebreakers were able
to decipher the message using a laborious technique known as frequency analysis.
Mary was sent to the executioner’s block.
³ The cryptographic history is from “The Code Book” [101] by Simon Singh, who in turn cites [45, 102, 107].
Since the mechanisms of a cipher must be known if the cipher is to be fully
examined, a widely tested cipher cannot be a very well kept secret. Because of the
clear danger in relying on poorly tested ciphers, Auguste Kerckhoffs proposed in his
1883 book, La cryptographie militaire, that the strength of a cipher should rest only
on keeping the key secret, and not on the secrecy of the design of the cipher or cipher
machine. This became known as Kerckhoffs’ law. Even with this advance, ciphers
still came in two strengths: those that had been broken and those that had not.
The only way to test the strength of a cipher was to publish encoded messages and
challenge others to decode them.
In the first half of the twentieth century, it was the Germans who failed to learn
from Mary’s example and failed to heed Kerckhoffs’ law. In 1926 they introduced
the first version of their Enigma cipher and the Enigma machine used to encipher
and decipher messages. The machine and its cipher were complex, and their designs
were a German secret. However, by the 1930s, devices known as bombes, invented
by Marian Rejewski of Poland, were being used to decipher messages and determine
the key used for transmission [101]. Like Queen Elizabeth, the Poles had ample
motivation to break the cipher and were willing to dedicate more resources than the
Germans who relied on the code to protect their messages. Why, after all, would the
Germans waste resources testing such a seemingly impenetrable machine?
While the Germans increased the complexity of the Enigma machine and its cipher
for World War II, the tremendous resources of the Allied forces at England’s Bletchley Park were able to decipher the German messages for much of the war [58, 101]. The intelligence obtained from German messages allowed the Allies to avoid submarine
attack, land at Normandy with a minimum of casualties, and frustrate Rommel’s
forces in Africa [11]. While Rommel suspected that his messages were being read
by the allies, his superiors refused to believe that such a complex code could be
broken [10]. The lesson was finally learned that security is never certain, especially
if your adversary is willing to expend more resources to test the security of your
safeguards than you are.
There remain no unbreakable ciphers outside of the one-time pad, a symmetric cipher proven by Shannon to be information-theoretically secure [100], but that requires impractically large keys.⁴ Rather, the developers of fixed-length key ciphers test strength by challenging themselves and others to try to find a means of cryptanalysis that would break the cipher. The problem faced by those who employ ciphers is how to determine when enough effort has been expended in attempting to break the cipher before it can be deemed suitable for use. No matter how many resources are expended in attempts to break a cipher, there is always the possibility that the enemy can exert greater effort or have better fortune in picking the right strategy to attack the cipher.

⁴ One-time pads require one bit of randomly generated key material for each bit of data that might be sent between the communicating parties before they next meet and can exchange new key material. Thus, while one-time pads are information-theoretically secure against the scenario in which an adversary attempts to derive the encryption key, they are vulnerable to attacks that target key generation, key distribution, or the secure storage of keys.

In the 1970s, a loophole was discovered through which the problem of dedicating testing resources to each new cipher could be bypassed. The loophole emerged with the advent of public key cryptography, in which the key used to encrypt a message is public knowledge and a matching private key is used to decrypt messages. As
first publicly described by Whitfield Diffie and Martin Hellman [34]⁵, such schemes required the use of a trapdoor function, $f(x)$, for which it was prohibitively time consuming or expensive to compute its inverse $f^{-1}(x)$. However, with knowledge of the secret key (the trapdoor), a fast inverse function $f^{-1}(x)$ can be constructed.
The first such trapdoor function and resulting public key encryption scheme was
the RSA scheme invented by Ron Rivest, Adi Shamir, and Leonard Adleman [83] in
1978.⁶ The public key contained a composite number that was the product of two
large prime numbers (factors). The private key could be constructed if one knew
the prime factors. If the adversary could factor the composite number in the key, he
could recreate the private key and the cipher would be broken. The inventors of the
RSA scheme could not rule out the possibility that it would be possible to decrypt
the message without knowing the private key and without the ability to factor the
composite in the public key. Still, RSA remains the most commonly used public key
cipher.
⁵ Public key cryptography was first conceived in 1970 as ‘non-secret encryption’ by James Ellis [38, 39] as part of his classified research within the British Communications-Electronics Security Group (CESG).

⁶ Once again, this discovery was preceded by classified work at the British CESG, this time by Clifford Cocks [26, 39].

In 1979, Michael Rabin introduced his public key encryption scheme, which also used a composite of two large primes as the public key, and for which the factorization of the composite served as the private key [81]. Rabin went further and proved that if one could build a machine to decrypt randomly chosen messages, one could use the same machine to factor composites. Thus, if used within these constraints, breaking Rabin’s cryptosystem was proven to be as hard as factoring the composite that composed the public key. The problem of efficiently factoring composites is one that has intrigued mathematicians, and later computer scientists, for ages. Thus, the security strength of Rabin’s encryption scheme had already been subject to decades of testing by some of the world’s most talented, highly motivated individuals. Shafi Goldwasser and Silvio Micali [51] would later devise an encryption algorithm that reduced the security of all encrypted messages (not just randomly generated ones) to the strength of long-studied computational problems. In addition to the factoring problem, they added the assumption that it is difficult to distinguish quadratic residuosity modulo a composite with unknown factors [50].
Security measurement benefited from the advent of cryptosystems that were es-
tablished to be as difficult to break as a known computational problem. One could
estimate the amount of time required to break the system with current technology
and knowledge by using computational complexity analysis, or even by simulating a
portion of the algorithm. Thus, one could answer the question of how much money
your adversary would need to spend on equipment and how long he would have to
work in order to break your cryptosystem using existing technology and algorithms.
One could also crudely estimate the effect of advancing technology in reducing the
cost of future attacks on the cryptosystem. Moore’s law, which predicts exponential
decline in computing costs over time, could be used to model the decline in the cost of
computation. Crude models were also created to predict advances in factoring algo-
rithms (critical to many cryptosystems) based on past progress in the search for faster
algorithms. The necessity of these models demonstrates that even strength results
in cryptography, a field known for formal and precise results, depend on uncertain
factors such as human innovation in overcoming unsolved problems. Unless scientists
prove that there exist problems that are significantly more expensive to solve than to
verify (such as by proving $P \neq NP$)⁷, the strength of cryptosystems will continue to
depend, in part, on how well the computational problems on which they are based
have been tested. Regardless, cryptographic models will remain subject to new forms
of attack outside the models on which they were built, as was discovered with the
advent of timing and other side channel attacks.
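As a rough illustration of such crude models, one might assume (per Moore’s law) that the cost of computation halves every 18 months and project a fixed attack workload forward. The helper below is a hypothetical sketch of this assumption, not a formula from the dissertation:

```python
def future_attack_cost(cost_today, years, halving_period_years=1.5):
    """Crude Moore's-law projection: assume computation cost halves every
    halving_period_years, so a fixed computational attack gets cheaper
    exponentially over time."""
    return cost_today * 0.5 ** (years / halving_period_years)
```

Under this assumption, an attack costing $1,000,000 today would cost roughly $250,000 three years from now; note the model ignores algorithmic advances, which the text treats separately.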
While cryptography has advanced greatly through the reduction of cryptographic
problems to known computational problems, few other security safeguards can benefit
from this approach. As a result, the progress in the theory of cryptography has far
outpaced that of most other security research. Public key cryptography has also far
outpaced the development of the public key infrastructure required to support it.
Almost a quarter of a century after the invention of RSA, almost all the systems
that use it rely on infrastructure for certifying the identity of public key holders that
experts consider unacceptably weak [40, 55].
Outside of cryptography, estimating the strength of a safeguard still requires ex-
tensive examination. It is a common lament that testing cannot prove the absence of
any security flaws, and that at best it can probabilistically show the expected time
or effort required to find the next flaw. However, given that any measure of secu-
rity strength in real-world systems will contain uncertainty, testing can still play an
important role in reducing and measuring this uncertainty.
If the strength of a system depends on how well the system has been tested, how does one measure it? Some security experts track the security strength of software by watching the rate at which flaws are discovered and reported, looking at which features are included, or examining the reputation and size of the manufacturer. However, little evidence has been published to support any of these metrics. The problem is even more complex for systems with human components (organizations), as people introduce a level of nondeterminism into the system that makes security strength impossible to measure with the traditional computer science tools of logic and computational complexity alone.

⁷ That is to say, proving that there exist problems which take exponentially more time to solve than it takes to verify that their solution is correct.
3.3 Chapter summary
In this chapter we reviewed a number of security risk metrics: Annual Loss Ex-
pected (ALE), security savings (S), security benefit (B), and Return On Investment
(ROI and IRR). While the values of these metrics may be calculated after the fact
(ex-post) from historical data, forecasting future values has proved problematic. The
crux of the problem is that the frequency and severity of incidents have been changing over time, and are not likely to remain stationary given the flux inherent to an
environment driven by advancing technology.
We also reviewed the problems inherent in measuring security strength. In essence,
problems of security strength come down to measuring how hard it is for the adversary
to violate your security requirements. Measuring time, effort, and other resources is
the domain of economics. In the following chapter, I’ll explain how, using economic principles, markets can be used to measure the security strength of software systems.
Chapter 4
Measuring the security strength of software
4.1 Security strength
For any given threat scenario, a system is only as strong as it is difficult for
an adversary to cause the scenario to be realized. Difficulty not only implies the
adversary’s effort but also his or her need to obtain other resources, such as time and
equipment. The sum of the resource costs incurred in breaking through safeguards
to realize a threat scenario is called the cost-to-break [90].
We saw in Chapter 3 that, when the security of a system rests on a well stud-
ied computational problem, cost-to-break is often measured in units of computation.
The amount of computation required to accomplish that adversary’s goals is rarely
known ahead of time, but is instead probabilistic. For example, when guessing a
password or using a randomized factoring algorithm against a public key cryptosys-
tem, there is an extremely small, but non-zero, probability that the adversary will
succeed immediately based on lucky guesswork. Because the amount of computation
required to breach security is probabilistic, it is the expected cost-to-break that must
be measured.
Solving computational problems is only one of the many ways to breach the secu-
rity of real, more complicated, systems, especially those systems composed of people
with motives and flaws of their own. While the cost of equipment, effort, and time
may not be measurable with the same level of formal precision as the cost of compu-
tation, we need not abandon measurement. Measuring the current cost of equipment
may be as simple as shopping around to find its market price. Estimating the future
cost of equipment may be nearly as straightforward. The cost of computation has
declined at a rate very close to that predicted by Moore’s law. Like equipment, time
and effort can also be measured in units of dollars or other currency. If the amount of
time and skill required to breach security is known, we can use labor market statistics
to estimate its dollar cost.
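A minimal sketch of such an estimate, combining a labor-market wage with an equipment price, might look as follows. The helper name and all figures are hypothetical analyst inputs:

```python
def estimated_cost_to_break(hours_of_effort, hourly_wage, equipment_cost):
    """Dollar estimate of an adversary's cost-to-break: skilled labor priced
    at a market wage, plus equipment at its market price. All three inputs
    are assumptions the analyst must supply."""
    return hours_of_effort * hourly_wage + equipment_cost
```

For example, 200 hours of effort at $75/hour plus $5,000 of equipment yields an estimate of $20,000.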
One of the difficulties in measuring cost-to-break is that it is a function of your
adversaries’ costs, not your own. It’s impossible to research the skill of every potential
adversary, the value they place on their time, and the other resource costs they require
to realize a threat scenario. What’s more, the adversaries themselves are not likely to
know how much time and resources they will need to expend to breach security. This
is especially true if breaching the security of a system requires that the adversary find
a vulnerability to exploit.
To understand why even the adversary may not be able to measure his own costs,
assume that finding a vulnerability is an essential step in breaching the security of
a system. There are a series of tasks the adversary can perform to look for vulnerabilities, from inspecting code to writing and executing tests. To maximize his
[Figure 4.1 plot: x axis = cost as fraction of reward for reporting vulnerability; y axis = probability vulnerability found]

Figure 4.1: A cumulative probability distribution that represents an individual’s beliefs of the likelihood that he or she can find a vulnerability in a system (the y axis) for the cost represented by the x axis.
productivity, the adversary will start with the tasks that have the greatest chance of
success in finding a vulnerability per unit cost. Diminishing expected returns result
because the tasks with the highest expected profitability are executed first. We can
see in Figure 4.1 that individuals may perceive cost-to-break not as a single value,
but as a cumulative probability distribution. The chance of success in finding a vulnerability increases with total investment (the first derivative is positive), but the chance
of success for each additional dollar invested is smaller than for the previous dollar
(the second derivative is negative).
Economically rational individuals will only perform tasks so long as their expected return is greater than their expected cost. If an individual believes a vulnerability is worth $r$ dollars, the cost of task $i$ will be $c_i$, and the probability that task $i$ will result in the discovery of a vulnerability is $p_i$, then he will perform the task only when $c_i \leq p_i \cdot r$, or equivalently when $\frac{c_i}{p_i} \leq r$. In fact, while a risk-neutral individual will continue to perform tasks when $\frac{c_i}{p_i} < r$ and be indifferent to performing tasks when $\frac{c_i}{p_i} = r$, a risk-averse individual will not search for vulnerabilities when $\frac{c_i}{p_i} > r - \epsilon$ for some positive measure of risk aversion $\epsilon$.
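This decision rule can be sketched as a filter over candidate tasks. The helper name and example figures are hypothetical; the cost, probability, and reward values are inputs the analyst must supply:

```python
def tasks_worth_performing(tasks, reward, risk_aversion=0.0):
    """Filter vulnerability-finding tasks by the rationality condition
    c_i / p_i <= r (shifted down by an epsilon of risk aversion).

    tasks: iterable of (cost, success_probability) pairs.
    """
    return [(c, p) for (c, p) in tasks
            if p > 0 and c / p <= reward - risk_aversion]
```

For a reward of 100, a task costing 10 with success probability 0.5 (cost per unit of probability 20) is worth performing, while one costing 50 with probability 0.1 (cost per unit 500) is not.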
There is no reason to believe an individual will expend the same amount of resources to find a vuln