Edit Automata: Enforcement Mechanisms for
Run-time Security Policies∗
Jay Ligatti    Lujo Bauer    David Walker
Department of Computer Science
Princeton University
Princeton, NJ 08544
Technical Report TR-681-03
May 30, 2003

∗This is a revised and extended version of “More Enforceable Security Policies,” a paper
that first appeared in the Workshop on Foundations of Computer Security, June 2002 [BLW02].
Abstract

We analyze the space of security policies that can be enforced by moni-
toring and modifying programs at run time. Our program monitors, called
edit automata, are abstract machines that examine the sequence of appli-
cation program actions and transform the sequence when it deviates from
a specified policy. Edit automata have a rich set of transformational pow-
ers: They may terminate the application, thereby truncating the program
action stream; they may suppress undesired or dangerous actions without
necessarily terminating the program; and they may also insert additional
actions into the event stream.
After providing a formal definition of edit automata, we develop a rig-
orous framework for reasoning about them and their cousins: truncation
automata (which can only terminate applications), suppression automata
(which can terminate applications and suppress individual actions), and
insertion automata (which can terminate and insert).
We give a set-theoretic characterization of the policies each sort of
automaton can enforce, and we provide examples of policies that can be
enforced by one sort of automaton but not another.
1 Introduction

When designing a secure, extensible system such as an operating system that
allows applications to download code into the kernel or a database that allows
users to submit their own optimized queries, we must ask two important
questions:
1. What sorts of security policies can we expect our system to enforce?
2. What sorts of mechanisms do we need to enforce these policies?
Neither of these questions can be answered effectively without understanding
the space of enforceable security policies and the power of various enforcement
mechanisms.
The first significant effort to define the class of enforceable security policies
is due to Schneider [Sch00]. He investigated the security properties that can be
enforced by a specific type of program monitor. One of Schneider’s monitors
can interpose itself between an untrusted program and the machine on which
the program runs. The monitor can examine the sequence of security-relevant
program actions one at a time, and if it recognizes an action that would violate
its policy, the monitor terminates the program. This mechanism is very general
since decisions about whether or not to terminate the program can depend upon
the entire history of the program’s execution. However, since these monitors
can only terminate programs and cannot otherwise modify their behavior, it is
possible to define still more powerful enforcement mechanisms.
In this paper, we re-examine the question of which security policies can be
enforced at run time by monitoring untrusted programs. Our overall approach
differs from that of Schneider, who also used automata to model program monitors, in
that we view program monitors as transformers that edit the stream of actions
produced by an untrusted application. This new viewpoint leads us to define
a hierarchy of enforcement mechanisms, each with different transformational
capabilities:
• A truncation automaton can recognize bad sequences of actions and halt
program execution before the security policy is violated, but cannot other-
wise modify program behavior. These automata are similar to Schneider’s
original security monitors.
• A suppression automaton, in addition to being able to halt program ex-
ecution, has the ability to suppress individual program actions without
terminating the program outright.
• An insertion automaton is able to insert a sequence of actions into the
program action stream as well as terminate the program.
• An edit automaton combines the powers of suppression and insertion au-
tomata. It is able to truncate action sequences and insert or suppress
security-relevant actions at will, as the sketch following this list illustrates.
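To fix intuitions before the formal definitions in Sections 3 through 6, the
following minimal sketch models a monitor as a stream transformer. It is our
own illustration under simplified assumptions (a single step function that
returns output actions and a halt flag), not the operational semantics defined
later in the paper.

    from typing import Callable, Iterable, Iterator, List, Tuple

    # One transition: (state, action) -> (new state, actions to emit, halt?).
    # Emitting nothing models suppression; emitting extra actions models
    # insertion; halt = True models truncation of the action stream.
    Step = Callable[[object, str], Tuple[object, List[str], bool]]

    def run_monitor(actions: Iterable[str], start: object, step: Step) -> Iterator[str]:
        """Run an edit-automaton-style monitor over an untrusted action stream."""
        state = start
        for a in actions:
            state, out, halt = step(state, a)
            yield from out
            if halt:
                return  # truncation: terminate the untrusted program

    # Example policy: suppress "send" actions until a "commit" action is seen.
    def step(seen_commit: bool, a: str) -> Tuple[bool, List[str], bool]:
        if a == "send" and not seen_commit:
            return seen_commit, [], False  # suppression
        return seen_commit or a == "commit", [a], False

    print(list(run_monitor(["send", "commit", "send"], False, step)))
    # ['commit', 'send']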
We use the general term security automaton to refer to any automaton that is
used to model a program monitor, including the automata mentioned above.1
The main contribution of this article is the development of a robust theory
for reasoning about these machines under a variety of different conditions. We
use our theory to characterize the class of security policies that can be enforced
by each sort of automaton, and we provide examples of security policies that lie
in one class but not another.

1Previous authors [Sch00] have used the term to refer specifically to automata with powers
similar to our truncation automata, which we discuss in Section 3.
More important than any particular result is our methodology, which gives
rise to straightforward, rigorous proofs concerning the power of security mech-
anisms and the range of enforceable security policies. This overall methodology
can be broken down into four main parts.
Step 1. Define the underlying computational framework and the range of secu-
rity policies that will be considered. In this paper, we define the software
systems and sorts of policies under consideration in Sections 2.1 through 2.3.
Step 2. Specify what it means to enforce a security policy. As we will see in Sec-
tion 2.4, there are several choices to be made in this definition. One must
be sure that the enforcement model accurately reflects the desires of the
system implementer and the environment in which the monitor operates.
Section 2.5 explains some of the limitations induced by our decisions in
steps 1 and 2.
Step 3. Formally specify the operational behavior of the enforcement mechanism
in question. Sections 3, 4, 5 and 6 define the operational semantics of four
different sorts of monitors and provide examples of the policies that they
can enforce.
Step 4. Prove from the previous definitions that the security mechanism in ques-
tion is able to enforce the desired properties. Sections 3, 4, 5 and 6 state
theorems concerning the security policies that each type of monitor can
enforce. The formal proofs can be found in Appendix A.
After completing our analysis of edit automata and related machines, we
discuss related work (Section 7). Finally, Section 8 concludes the paper with
a taxonomy of security policies and a discussion of some unanswered questions
and our continuing research.
2 Security Policies and Enforcement Mechanisms
In this section, we define the overarching structure of the secure systems we
intend to explore. We also define what it means to be a security policy, and
what it means to enforce a security policy. Finally, we give a generic definition
of a security automaton as an action sequence transformer.
2.1 Systems, Executions, and Policies
We specify software systems at a high level of abstraction. A system S =
(A, Σ) is specified via a set of program actions A (also referred to as program
events) and a set of possible executions Σ. An execution σ is simply a finite
sequence of actions a1, a2, ..., an. Previous authors have considered infinite
executions as well as finite ones [Sch00]. Some of the applications on which we
might want to enforce policies (such as web servers or operating systems) are
often considered to run infinitely, but in practice their executions will always
eventually terminate. Although we allow A and Σ to be countably infinite,
in this paper we restrict ourselves to finite but arbitrarily long executions to
simplify our analysis. We use the metavariables σ and τ to range over finite
sequences of actions.
The symbol · denotes the empty sequence. We use the notation σ[i] to
denote the ith action in the sequence (beginning the count at 0). The notation
σ[..i] denotes the subsequence of σ involving the actions σ[0] through σ[i], and
σ[i + 1..] denotes the subsequence of σ involving all other actions. We use the
notation τ;σ to denote the concatenation of two sequences. When τ is a prefix
of σ we write τ ⪯ σ. Given a set of executions Σ, pre(Σ) is the set of all prefixes
of all executions in Σ.
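In code form, the prefix relation and pre(Σ) are straightforward; this is a
small sketch of our own, with executions represented as Python tuples of
actions.

    def is_prefix(tau, sigma):
        """tau ⪯ sigma: tau is a (not necessarily proper) prefix of sigma."""
        return sigma[:len(tau)] == tau

    def pre(Sigma):
        """pre(Sigma): the set of all prefixes of all executions in Sigma."""
        return {sigma[:i] for sigma in Sigma for i in range(len(sigma) + 1)}

    # Example: pre({("a", "b")}) == {(), ("a",), ("a", "b")}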
In this work, it will be important to distinguish between uniform systems
and nonuniform systems. (A, Σ) is a uniform system if Σ = A⋆, where A⋆ is the
set of all finite sequences of symbols from A. Conversely, (A, Σ) is a nonuni-
form system if Σ ⊂ A⋆. Uniform systems arise naturally when a program is
completely unconstrained; unconstrained programs may execute operations in
any order. However, an effective security system will often combine static pro-
gram analysis and preprocessing with run-time security monitoring. Such is the
case in Java virtual machines, for example, which combine type checking with
stack inspection. Program analysis, preprocessing, model checking, control- or
data-flow analysis, program instrumentation, type checking, and proof-carrying
code can also give rise to nonuniform systems.
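As a small illustration (our own, with hypothetical actions), a static check
shrinks the set of possible executions from A⋆ to a proper subset, yielding a
nonuniform system:

    A = {"read", "write"}

    def in_uniform(sigma):
        # Uniform system: Sigma = A*, so any finite sequence over A may occur.
        return all(a in A for a in sigma)

    def in_nonuniform(sigma):
        # Nonuniform system: suppose (hypothetically) that static analysis
        # has already ruled out any execution whose first action is a
        # "write". The resulting Sigma is a proper subset of A*.
        return in_uniform(sigma) and not (sigma and sigma[0] == "write")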
A security policy is a predicate P on sets of executions. A set of executions
Σ satisfies a policy P if and only if P(Σ). Most common extensional program
properties fall under this definition of security policy, including the following.
• Access Control policies specify that no execution may operate on certain
resources such as files or sockets, or invoke certain system operations.
• Availability policies specify that if a program acquires a resource during
an execution, then it must release that resource at some (arbitrary) later
point in the execution.
• Bounded Availability policies specify that if a program acquires a resource
during an execution, then it must release that resource by some fixed point
later in the execution. For example, the resource must be released in at
most ten steps or after some system invariant holds. We call the condition
that demands release of the resource the bound for the policy.
• An Information Flow policy concerning inputs s1 and outputs s2 might
specify that if s2 = f(s1) in one execution (for some function f), then there
must exist another execution in which s2 ≠ f(s1).
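To make the Bounded Availability example concrete, here is a minimal sketch
of a computable predicate over finite executions. It is our own illustration,
with hypothetical action names of the form "acq r" and "rel r" and a ten-step
bound.

    from typing import Sequence

    BOUND = 10  # hypothetical bound: release within ten actions of acquisition

    def p_hat(sigma: Sequence[str]) -> bool:
        """True iff every acquired resource is released within BOUND steps."""
        pending = {}  # resource -> index of the action that acquired it
        for i, action in enumerate(sigma):
            op, _, res = action.partition(" ")
            if op == "acq":
                pending[res] = i
            elif op == "rel":
                pending.pop(res, None)
            # a resource still held BOUND steps after acquisition breaks the bound
            if any(i - j >= BOUND for j in pending.values()):
                return False
        return not pending  # availability: everything released by the end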
Alpern and Schneider [AS87] distinguish between properties and more general
policies as follows. A security policy P is deemed to be a (computable) property
when it has the following form.
P(Σ) = ∀σ ∈ Σ. P̂(σ)

where P̂ is a computable predicate on A⋆.
Hence, a property is defined exclusively in terms of individual executions.
It may not specify a relationship between possible executions of the program.
Information flow, for example, which can only be specified as a condition on the
set of possible executions of a program, is not a property. The other example
policies provided in the previous section are all security properties.
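The distinction is easy to see in code (a sketch of our own, with each
execution simplified to an (input, output) pair): a property is checked
pointwise on executions, whereas the information-flow condition quantifies
over the whole set and so cannot be put in pointwise form.

    from collections import defaultdict

    def property_from_predicate(p_hat):
        """Lift a predicate on executions to a policy: P(Sigma) = ∀σ ∈ Σ. p_hat(σ)."""
        return lambda Sigma: all(p_hat(sigma) for sigma in Sigma)

    def information_flow(Sigma):
        """Not a property: it relates executions to one another. Here, no
        input value may determine a unique output value."""
        outputs = defaultdict(set)
        for (s1, s2) in Sigma:
            outputs[s1].add(s2)
        return all(len(outs) > 1 for outs in outputs.values())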
We assume that the empty sequence is contained in any property. This
captures the intuition that an untrusted program that has not started executing is
not yet in violation of any property. From a technical perspective, this decision
allows us to avoid repeatedly considering the empty sequence as a special case
of an execution sequence in future definitions of enforceable properties.
Given some set of actions A, a predicate P̂ over A⋆ induces the security
property P(Σ) = ∀σ ∈ Σ. P̂(σ). We often use the symbol P̂ interchangeably as
a predicate over execution sequences and as the induced property. Normally,
the context will make clear which meaning we intend.
Properties that specify that “nothing bad ever happens” are called safety
properties [Lam77]. We can make this definition precise as follows. Predicate
P̂ induces a safety property if and only if

∀σ ∈ pre(Σ). ¬P̂(σ) ⇒ ∀σ′ ∈ Σ. (σ ⪯ σ′ ⇒ ¬P̂(σ′))
Informally, this definition states that once a bad action has taken place, thereby
excluding the initial segment of an execution from the property, there is no
extension of that segment that can remedy the situation. For example, access-
control policies are safety properties since once a restricted resource has been
accessed, the policy is broken. There is no way to “un-access” the resource and
fix the situation afterwards.
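The access-control example can be rendered as a quick sanity check (our own
sketch, with a hypothetical forbidden action): once a prefix violates the
predicate, every extension of it does too.

    FORBIDDEN = {"open_secret_file"}  # hypothetical restricted operation

    def p_hat_access(sigma):
        """Access control as a predicate: no forbidden action ever occurs."""
        return all(a not in FORBIDDEN for a in sigma)

    bad_prefix = ["read_input", "open_secret_file"]
    assert not p_hat_access(bad_prefix)
    # No extension remedies the violation: there is no way to "un-access".
    assert not p_hat_access(bad_prefix + ["close_file", "exit"])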
Our definition of safety differs slightly from that of previous authors. Since
we wish to consider nonuniform systems, σ ranges over pre(Σ) rather than Σ. On
uniform systems Σ = A⋆ and therefore pre(Σ) = Σ; consequently, the definition
we give corresponds exactly to previous work. On nonuniform systems pre(Σ)
is a superset of Σ. In our definition of safety, this implies that a sequence may
become irremediably bad at a point that does not correspond to a full execution.