Enforcing Non-safety Security Policies with

Program Monitors

Jay Ligatti1, Lujo Bauer2, and David Walker1

1Princeton University

2Carnegie Mellon University

Princeton University

Department of Computer Science

Technical Report TR-720-05

January 31, 2005

Abstract. We consider the enforcement powers of program monitors,

which intercept security-sensitive actions of a target application at run

time and take remedial steps whenever the target attempts to execute a

potentially dangerous action. A common belief in the security commu-

nity is that program monitors, regardless of the remedial steps available

to them when detecting violations, can only enforce safety properties. We

formally analyze the properties enforceable by various program monitors

and find that although this belief is correct when considering monitors

with simple remedial options, it is incorrect for more powerful monitors

that can be modeled by edit automata. We define an interesting set of

properties called infinite renewal properties and demonstrate how, when

given any reasonable infinite renewal property, to construct an edit au-

tomaton that provably enforces that property. We analyze the set of

infinite renewal properties and show that it includes every safety prop-

erty, some liveness properties, and some properties that are neither safety

nor liveness.

1 Introduction

A ubiquitous technique for enforcing software security involves dynamically mon-

itoring the behavior of programs and taking remedial actions when the programs

behave in a way that violates a security policy. Firewalls, virtual machines, and

operating systems all act as program monitors to enforce security policies in this

way. We can even think of any application containing security code that dy-

namically checks input values, queries network configurations, raises exceptions,

warns the user of potential consequences of opening a file, etc. as containing a

program monitor inlined into the application.

Because program monitors, which react to the potential security violations

of target programs, enjoy such ubiquity, it is important to understand their capa-

bilities as policy enforcers. Having well-defined boundaries on the enforcement

powers of security mechanisms allows security architects to determine exactly

when certain mechanisms are needed and saves the architects from attempting

to enforce policies with insufficiently strong mechanisms.

Schneider discovered one particularly useful boundary on the power of certain

program monitors [Sch00]. He defined a class of monitors that respond to po-

tential security violations by halting the target application, and he showed that

these monitors can only enforce safety properties—security policies that specify

that “nothing bad ever happens” in a valid run of the target [Lam77]. When

a monitor in this class detects a potential security violation (i.e., “something

bad”), it must halt the target.

Although Schneider’s result applies only to a particular class of program

monitors, it is generally believed that all program monitors, even ones that

have greater abilities than just to halt the target, are able to enforce only safety

properties. The main result of the present paper is to prove that certain program

monitors can enforce non-safety properties. These monitors are modeled by edit

automata, which have the power to insert actions on behalf of and suppress

actions attempted by the target application. We prove an interesting lower bound

on the properties enforceable by such monitors: a lower bound that encompasses

strictly more than safety properties.

1.1 Related Work

A rich variety of security monitoring systems has been implemented [JZTB98,

EAC98,ES99,ET99,KVBA+99,BLW03,Erl04,BLW05]. In general, these systems

allow arbitrary code to be executed in response to potential security violations,

so they cannot be modeled as monitors that simply halt upon detecting a vio-

lation. In most cases, the languages provided by these systems for specifying

policies can be considered domain-specific aspect-oriented programming lan-

guages [KHH+01].

Theoretical efforts to describe security monitoring have lagged behind the

implementation work, making it difficult to know exactly which sorts of security

policies to expect the implemented systems to be able to enforce. After Schneider

made substantial progress by showing that safety properties are an upper bound

on the set of policies enforceable by simple monitors [Sch00], Viswanathan, Kim,

and others tightened this bound by placing explicit computability constraints on

the safety properties being enforced [Vis00,KKL+02]. Viswanathan also demon-

strated that these computable safety properties are equivalent to CoRE proper-

ties [Vis00]. Fong then formally showed that placing limits on a monitor’s state

space induces limits on the properties enforceable by the monitor [Fon04]. Re-

cently, Hamlen, Schneider, and Morrisett compared the enforcement power of

static analysis, monitoring, and program rewriting [HMS03]. They showed that

the set of statically enforceable properties equals the set of recursively decidable

properties of programs, that monitors with access to source program text can

enforce strictly more properties than can be enforced through static analysis,

and that program rewriters do not correspond to any complexity class in the

arithmetic hierarchy.

In earlier theoretical work, we took a first step toward understanding the en-

forcement power of monitors that have greater abilities than simply to halt the

target when detecting a potential security violation [LBW05]. We introduced edit

automata, a new model that captured the ability of program monitors to insert

actions on behalf of the target and to suppress potentially dangerous actions.

Edit automata are semantically similar to deterministic I/O automata [LT87]

but have very different correctness requirements. The primary contribution of

our earlier work was to set up a framework for reasoning about program monitors

by providing a formal definition of what it even means for a monitor to enforce a

property. Although we also proved the enforcement boundaries of several types of

monitors, we did so in a model that assumed that all target programs eventually

terminate. Hence, from a practical perspective, our model did not accurately

capture the capabilities of real systems. From a theoretical perspective, only

modeling terminating targets made it impossible to compare the properties en-

forceable by edit automata to well-established sets of properties such as safety

and liveness properties.

1.2 Contributions

This paper presents the nontrivial generalization of earlier work on edit au-

tomata [LBW05] to potentially nonterminating targets. This generalization al-

lows us to reason about the true enforcement powers of an interesting and real-

istic class of program monitors, and makes it possible to formally and precisely

compare this class to previously studied classes.

More specifically, we extend previous work in the following ways.

– We refine and introduce formal definitions needed to understand exactly

what it means for program monitors to enforce policies on potentially non-

terminating target applications (Section 2). A new notion of enforcement

(called effective∞ enforcement) enables the derivation of elegant lower bounds

on the sets of policies monitors can enforce.

– We show why it is commonly believed that run-time monitors enforce only

computable safety properties (Section 3). We show this by revisiting and

extending earlier theorems that describe the enforcement powers of simple

monitors. The earlier theorems are extended by considering nonterminating

targets and by proving that exactly one computable safety property—that

which considers everything a security violation—cannot be enforced by pro-

gram monitors.

– We define an interesting set of properties called infinite renewal properties

and demonstrate how, when given any reasonable infinite renewal property,

to construct an edit automaton that provably enforces that property (Sec-

tion 4).

– We prove that program monitors modeled by edit automata can enforce

strictly more than safety properties. We demonstrate this by analyzing the

set of infinite renewal properties and showing that it includes every safety

property, some liveness properties, and some properties that are neither

safety nor liveness (Section 5).

2 Technical Apparatus

This section provides the formal framework necessary to reason precisely about

the scope of policies program monitors can enforce.

2.1 Notation

We specify a system at a high level of abstraction as a nonempty, possibly

countably infinite set of program actions A (also referred to as program events).

An execution is simply a finite or infinite sequence of actions. The set of all finite

executions on a system with action set A is notated as A⋆. Similarly, the set of

infinite executions is Aω, and the set of all executions (finite and infinite) is A∞.

We let the metavariable a range over actions, σ and τ over executions, and Σ

over sets of executions (i.e., subsets of A∞).

The symbol · denotes the empty sequence, that is, an execution with no

actions. We use the notation τ;σ to denote the concatenation of two finite se-

quences. When τ is a (finite) prefix of (possibly infinite) σ, we write τ ≼ σ or,

equivalently, σ ≽ τ. If σ has been previously quantified, we often use ∀τ ≼ σ as an

abbreviation for ∀τ ∈ A⋆ : τ ≼ σ; similarly, if τ has already been quantified, we

abbreviate ∀σ ∈ A∞ : σ ≽ τ simply as ∀σ ≽ τ.
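The executions and the prefix relation above can be sketched concretely. In this illustrative model (names and the action strings are our own, not from the paper), a finite execution is a tuple of action names:

```python
# Minimal sketch: finite executions as tuples of action names, with the
# prefix relation (tau is a prefix of sigma) from the notation section.

def is_prefix(tau, sigma):
    """Return True iff finite execution tau is a prefix of sigma."""
    return len(tau) <= len(sigma) and tuple(sigma[:len(tau)]) == tuple(tau)

empty = ()                                # the empty sequence
sigma = ("open", "read", "close")
tau = ("open", "read")

assert is_prefix(empty, sigma)            # the empty sequence prefixes everything
assert is_prefix(tau, sigma)
assert not is_prefix(("read",), sigma)
```

Infinite executions have no direct finite representation here; a real model would treat them, for instance, as functions from indices to actions.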

2.2 Policies and Properties

A security policy is a computable predicate P on sets of executions; a set of

executions Σ ⊆ A∞ satisfies a policy P if and only if P(Σ). For example, a

set of executions satisfies a nontermination policy if and only if every execution

in the set is an infinite sequence of actions. A key uniformity policy might be

satisfied only by sets of executions where the cryptographic keys used in all the

executions form a uniform distribution over the universe of key values.

Following Schneider [Sch00], we distinguish between properties and more gen-

eral policies as follows. A security policy P is a property if and only if there exists

a decidable characteristic predicate P̂ over A∞ such that for all Σ ⊆ A∞, the

following is true.

P(Σ) ⇐⇒ ∀σ ∈ Σ : P̂(σ)    (Property)

Hence, a property is defined exclusively in terms of individual executions

and may not specify a relationship between different executions of the program.

The nontermination policy mentioned above is therefore a property, while the

key uniformity policy is not. The distinction between properties and policies is

an important one to make when reasoning about program monitors because a

monitor sees individual executions and thus can only enforce security properties

rather than more general policies.
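The distinction between a property and a more general policy can be made concrete. In this sketch (all names and actions are illustrative, not from the paper), a property lifts a per-execution predicate to sets of executions, whereas a policy may constrain the relationship between executions in the set:

```python
# Sketch: a property is induced by a per-execution predicate P_hat; a general
# policy over a *set* of executions need not decompose this way.

def P_hat(sigma):
    """Example per-execution predicate: no 'send' action ever occurs."""
    return "send" not in sigma

def property_P(executions):
    """The property induced by P_hat: every execution must satisfy P_hat."""
    return all(P_hat(s) for s in executions)

def key_uniformity_policy(executions):
    """A toy policy that is NOT a property: it relates executions to each
    other -- here, requiring that no two executions pick the same key."""
    keys = [s[0] for s in executions]     # pretend action 0 names the key
    return len(keys) == len(set(keys))

runs = {("k1", "read"), ("k2", "read")}
assert property_P(runs)                   # checkable one execution at a time
assert key_uniformity_policy(runs)        # needs the whole set at once
```

A monitor sees one execution at a time, which is exactly why it can enforce `property_P`-style predicates but not `key_uniformity_policy`-style relations.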

There is a one-to-one correspondence between a property P and its charac-

teristic predicate P̂, so we use the notation P̂ unambiguously to refer both to a

characteristic predicate and the property it induces. When P̂(σ), we say that σ

satisfies or obeys the property, or that σ is valid or legal. Likewise, when ¬P̂(τ),

we say that τ violates or disobeys the property, or that τ is invalid or illegal.

Properties that specify that “nothing bad ever happens” are called safety

properties [Lam77]. No finite prefix of a valid execution can violate a safety

property; stated equivalently: once some finite execution violates the property,

all extensions of that execution violate the property. Formally, P̂ is a safety

property on a system with action set A if and only if the following is true.

∀σ ∈ A∞ : (¬P̂(σ) ⇒ ∃σ′ ≼ σ : ∀τ ≽ σ′ : ¬P̂(τ))    (Safety)

Many interesting security policies, such as access-control policies, are safety prop-

erties where security violations cannot be “undone” by extending a violating

execution.

Dually to safety properties, liveness properties [AS85] state that nothing ex-

ceptionally bad can happen in any finite amount of time. Any finite sequence of

actions can always be extended so that it satisfies the property. Formally, P̂ is

a liveness property on a system with action set A if and only if the following is

true.

∀σ ∈ A⋆ : ∃τ ≽ σ : P̂(τ)    (Liveness)

The nontermination policy is a liveness property because any finite execution

can be made to satisfy the policy simply by extending it to an infinite execution.

General properties may allow executions to alternate freely between satisfying

and violating the property. Such properties are neither safety nor liveness but

instead a combination of a single safety and a single liveness property [AS87].

We show in Section 4 that edit automata effectively enforce an interesting new

sort of property that is neither safety nor liveness.

2.3 Security Automata

Program monitors operate by transforming execution sequences of an untrusted

target application at run time to ensure that all observable executions satisfy

some property [LBW05]. We model a program monitor formally by a security

automaton S, which is a deterministic finite or countably infinite state machine

(Q,q0,δ) that is defined with respect to some system with action set A. The set

Q specifies the possible automaton states, and q0 is the initial state. Different

automata have slightly different sorts of transition functions (δ), which accounts

for the variations in their expressive power. The exact specification of a transition

function δ is part of the definition of each kind of security automaton; we only

require that δ be complete, deterministic, and Turing Machine computable. We

limit our analysis in this work to automata whose transition functions take the

current state and input action (the next action the target wants to execute) and

return a new state and at most one action to output (make observable). The

current input action may or may not be consumed while making a transition.
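The automaton shape described above can be sketched in code. The version below is a minimal illustration in the simple truncation style (pass the action through or halt the target); the states, the policy, and all names are our own assumptions, not the paper's definitions, and richer automata would let `step` also insert or suppress actions.

```python
# Minimal sketch of a security automaton (Q, q0, delta) in truncation style:
# each step either makes the input action observable or halts the target.

class TruncationAutomaton:
    def __init__(self, q0, delta):
        self.state = q0        # current state, initially q0
        self.delta = delta     # delta(q, a) -> next state, or None to halt
        self.halted = False

    def step(self, action):
        """Process one input action; return it if allowed, None if halted."""
        if self.halted:
            return None
        next_state = self.delta(self.state, action)
        if next_state is None:            # potential violation: halt target
            self.halted = True
            return None
        self.state = next_state
        return action                     # action is made observable

# Example policy (hypothetical): halt on any 'send' after a 'read'.
def delta(q, a):
    if q == "clean":
        return "tainted" if a == "read" else "clean"
    if q == "tainted" and a == "send":
        return None
    return "tainted"

m = TruncationAutomaton("clean", delta)
out = [m.step(a) for a in ("open", "read", "send", "close")]
assert out == ["open", "read", None, None]
```

The transition function here is complete, deterministic, and trivially computable, matching the requirements placed on δ above.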