Edit Automata: Enforcement Mechanisms for
Run-time Security Policies∗
Jay Ligatti    Lujo Bauer    David Walker
Department of Computer Science
Princeton University
Princeton, NJ 08544
Technical Report TR-681-03
May 30, 2003
We analyze the space of security policies that can be enforced by moni-
toring and modifying programs at run time. Our program monitors, called
edit automata, are abstract machines that examine the sequence of appli-
cation program actions and transform the sequence when it deviates from
a specified policy. Edit automata have a rich set of transformational pow-
ers: They may terminate the application, thereby truncating the program
action stream; they may suppress undesired or dangerous actions without
necessarily terminating the program; and they may also insert additional
actions into the event stream.
After providing a formal definition of edit automata, we develop a rig-
orous framework for reasoning about them and their cousins: truncation
automata (which can only terminate applications), suppression automata
(which can terminate applications and suppress individual actions), and
insertion automata (which can terminate and insert). We give a set-theoretic characterization of the policies each sort of automaton can enforce, and we provide examples of policies that can be enforced by one sort of automaton but not another.
∗This is a revised and extended version of “More Enforceable Security Policies,” a paper that first appeared in the Workshop on Foundations of Computer Security, June 2002 [BLW02].

1 Introduction

When designing a secure, extensible system such as an operating system that allows applications to download code into the kernel or a database that allows users to submit their own optimized queries, we must ask two important questions:

1. What sorts of security policies can we expect our system to enforce?
2. What sorts of mechanisms do we need to enforce these policies?

Neither of these questions can be answered effectively without understanding the space of enforceable security policies and the power of various enforcement mechanisms.
The first significant effort to define the class of enforceable security policies
is due to Schneider [Sch00]. He investigated the security properties that can be
enforced by a specific type of program monitor. One of Schneider’s monitors
can interpose itself between an untrusted program and the machine on which
the program runs. The monitor can examine the sequence of security-relevant
program actions one at a time and if it recognizes an action that will violate
its policy, the monitor terminates the program. This mechanism is very general
since decisions about whether or not to terminate the program can depend upon
the entire history of the program’s execution. However, since these monitors
can only terminate programs and cannot otherwise modify their behavior, it is
possible to define still more powerful enforcement mechanisms.
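To make Schneider's mechanism concrete, the following is a minimal sketch (ours, not taken from [Sch00]) of such a truncation-style monitor. The policy predicate `ok`, which judges finite prefixes of the action stream, and the example action names are placeholders we introduce for illustration.

```python
def truncation_monitor(actions, ok):
    """Pass actions through one at a time; halt (truncate the stream)
    at the first action that would violate the policy predicate `ok`."""
    prefix = []
    for a in actions:
        if not ok(prefix + [a]):
            return  # terminate the program: this and all later actions are lost
        prefix.append(a)
        yield a

# Hypothetical example policy: at most one "open" may ever occur.
at_most_one_open = lambda s: s.count("open") <= 1

print(list(truncation_monitor(["open", "read", "open", "close"],
                              at_most_one_open)))
# -> ['open', 'read']
```

Note that the decision at each step may depend on the entire history seen so far (the `prefix`), matching the generality described above, but the only available remedy is termination.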
In this paper, we re-examine the question of which security policies can be
enforced at run time by monitoring untrusted programs. Our overall approach
differs from Schneider’s, who also used automata to model program monitors, in
that we view program monitors as transformers that edit the stream of actions
produced by an untrusted application. This new viewpoint leads us to define
a hierarchy of enforcement mechanisms, each with different transformational powers:
• A truncation automaton can recognize bad sequences of actions and halt
program execution before the security policy is violated, but cannot other-
wise modify program behavior. These automata are similar to Schneider’s
original security monitors.
• A suppression automaton, in addition to being able to halt program ex-
ecution, has the ability to suppress individual program actions without
terminating the program outright.
• An insertion automaton is able to insert a sequence of actions into the
program action stream as well as terminate the program.
• An edit automaton combines the powers of suppression and insertion au-
tomata. It is able to truncate action sequences and insert or suppress
security-relevant actions at will.
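The hierarchy can be pictured operationally as increasingly powerful stream editors. The following is a rough sketch of ours, not the paper's formal semantics; the `step` callback and the example action names are hypothetical. Given the output emitted so far and the next input action, `step` returns one of `("halt",)`, `("suppress",)`, `("insert", [...])`, or `("emit",)`.

```python
def edit_monitor(actions, step):
    """Run a stream-editing monitor: at each step the policy decides to
    halt, suppress the next action, insert extra actions, or pass it on."""
    out = []
    pending = list(actions)
    while pending:
        a = pending[0]
        decision = step(out, a)
        if decision[0] == "halt":         # truncation power
            break
        elif decision[0] == "suppress":   # suppression power: drop the action
            pending.pop(0)
        elif decision[0] == "insert":     # insertion power: emit extra actions
            out.extend(decision[1])
        else:                             # "emit": pass the action through
            out.append(pending.pop(0))
    return out

# Hypothetical edit policy: suppress "pay" until a "login" has been emitted,
# and insert a "warn" action immediately before any "delete".
def step(out, a):
    if a == "pay" and "login" not in out:
        return ("suppress",)
    if a == "delete" and (not out or out[-1] != "warn"):
        return ("insert", ["warn"])
    return ("emit",)

print(edit_monitor(["pay", "login", "pay", "delete"], step))
# -> ['login', 'pay', 'warn', 'delete']
```

In this picture, a truncation automaton is the special case whose `step` only halts or emits; a suppression automaton never inserts; an insertion automaton never suppresses; an edit automaton may do all four.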
We use the general term security automaton to refer to any automaton that is
used to model a program monitor, including the automata mentioned above.1
The main contribution of this article is the development of a robust theory for reasoning about these machines under a variety of different conditions. We use our theory to characterize the class of security policies that can be enforced by each sort of automaton, and we provide examples of security policies that lie in one class but not another.

1Previous authors [Sch00] have used the term to refer specifically to automata with powers similar to our truncation automata, which we discuss in Section 3.
More important than any particular result is our methodology, which gives
rise to straightforward, rigorous proofs concerning the power of security mech-
anisms and the range of enforceable security policies. This overall methodology
can be broken down into four main parts.
Step 1. Define the underlying computational framework and the range of security policies that will be considered. In this paper, we define the software systems and sorts of policies under consideration in Sections 2.1 through 2.3.
Step 2. Specify what it means to enforce a security policy. As we will see in Sec-
tion 2.4, there are several choices to be made in this definition. One must
be sure that the enforcement model accurately reflects the desires of the
system implementer and the environment in which the monitor operates.
Section 2.5 explains some of the limitations induced by our decisions in
steps 1 and 2.
Step 3. Formally specify the operational behavior of the enforcement mechanism in question. Sections 3, 4, 5 and 6 define the operational semantics of four different sorts of monitors and provide examples of the policies that they can enforce.
tion is able to enforce the desired properties. Sections 3, 4, 5 and 6 state
theorems concerning the security policies that each type of monitor can
enforce. The formal proofs can be found in Appendix A.
After completing our analysis of edit automata and related machines, we
discuss related work (Section 7). Finally, Section 8 concludes the paper with
a taxonomy of security policies and a discussion of some unanswered questions
and our continuing research.
2 Security Policies and Enforcement Mechanisms
In this section, we define the overarching structure of the secure systems we
intend to explore. We also define what it means to be a security policy, and
what it means to enforce a security policy. Finally, we give a generic definition
of a security automaton as an action sequence transformer.
2.1 Systems, Executions, and Policies
We specify software systems at a high level of abstraction. A system S = (A, Σ) is specified via a set of program actions A (also referred to as program events) and a set of possible executions Σ. An execution σ is simply a finite sequence of actions a1, a2, ..., an. Previous authors have considered infinite
executions as well as finite ones [Sch00]. Some of the applications on which we
might want to enforce policies (such as web servers or operating systems) are
often considered to run infinitely, but in practice their executions will always
eventually terminate. Although we allow A and Σ to be countably infinite,
in this paper we restrict ourselves to finite but arbitrarily long executions to
simplify our analysis. We use the metavariables σ and τ to range over finite
sequences of actions.
The symbol · denotes the empty sequence. We use the notation σ[i] to denote the ith action in the sequence (beginning the count at 0). The notation σ[..i] denotes the subsequence of σ involving the actions σ[0] through σ[i], and σ[i+1..] denotes the subsequence of σ involving all other actions. We use the notation τ;σ to denote the concatenation of two sequences. When τ is a prefix of σ we write τ ≤ σ. Given a set of executions Σ, pre(Σ) is the set of all prefixes of all executions in Σ.
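The notation above is straightforward to model directly. These are hedged helpers of ours, encoding executions as Python lists of actions (and, for `pre`, as tuples so sequences can live in a set):

```python
def upto(sigma, i):
    """sigma[..i]: the actions sigma[0] through sigma[i], inclusive."""
    return sigma[:i + 1]

def after(sigma, i):
    """sigma[i+1..]: all remaining actions."""
    return sigma[i + 1:]

def is_prefix(tau, sigma):
    """tau <= sigma: tau is a prefix of sigma."""
    return sigma[:len(tau)] == tau

def pre(executions):
    """pre(Sigma): the set of all prefixes of all executions in Sigma."""
    return {tuple(s[:i]) for s in executions for i in range(len(s) + 1)}

sigma = ["a1", "a2", "a3"]
assert upto(sigma, 1) == ["a1", "a2"]
assert after(sigma, 1) == ["a3"]
assert is_prefix(["a1"], sigma) and not is_prefix(["a2"], sigma)
assert tuple() in pre({("a1", "a2")})  # the empty sequence prefixes everything
```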
In this work, it will be important to distinguish between uniform systems and nonuniform systems. (A, Σ) is a uniform system if Σ = A∗, where A∗ is the set of all finite sequences of symbols from A. Conversely, (A, Σ) is a nonuniform system if Σ ⊂ A∗. Uniform systems arise naturally when a program is
completely unconstrained; unconstrained programs may execute operations in
any order. However, an effective security system will often combine static pro-
gram analysis and preprocessing with run-time security monitoring. Such is the
case in Java virtual machines, for example, which combine type checking with
stack inspection. Program analysis, preprocessing, model checking, control- or
data-flow analysis, program instrumentation, type checking, and proof-carrying
code can also give rise to nonuniform systems.
A security policy is a predicate P on sets of executions. A set of executions
Σ satisfies a policy P if and only if P(Σ). Most common extensional program
properties fall under this definition of security policy, including the following.
• Access Control policies specify that no execution may operate on certain
resources such as files or sockets, or invoke certain system operations.
• Availability policies specify that if a program acquires a resource during
an execution, then it must release that resource at some (arbitrary) later
point in the execution.
• Bounded Availability policies specify that if a program acquires a resource
during an execution, then it must release that resource by some fixed point
later in the execution. For example, the resource must be released in at
most ten steps or after some system invariant holds. We call the condition
that demands release of the resource the bound for the policy.
• An Information Flow policy concerning inputs s1 and outputs s2 might specify that if s2 = f(s1) in one execution (for some function f) then there must exist another execution in which s2 ≠ f(s1).
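Two of these policies can be written down directly as predicates on a single finite execution. The sketches below are ours, encoding an execution as a list of (action, resource) pairs; the action names "acq" and "rel" are illustrative placeholders.

```python
def access_control(sigma, forbidden):
    """Access Control: no step of the execution may touch a forbidden resource."""
    return all(res not in forbidden for (_, res) in sigma)

def bounded_availability(sigma, bound=10):
    """Bounded Availability: every acquired resource must be released
    within `bound` subsequent steps of the execution."""
    for i, (act, res) in enumerate(sigma):
        if act == "acq":
            window = sigma[i + 1 : i + 1 + bound]
            if ("rel", res) not in window:
                return False
    return True

run = [("acq", "f"), ("read", "f"), ("rel", "f")]
assert access_control(run, forbidden={"passwd"})
assert bounded_availability(run, bound=2)
assert not bounded_availability([("acq", "f")], bound=2)
```

By contrast, the information-flow policy quantifies over the *set* of possible executions, so it cannot be phrased as a predicate on one execution in this style, a distinction the next paragraphs make precise.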
Alpern and Schneider [AS87] distinguish between properties and more general policies as follows. A security policy P is deemed to be a (computable) property when it has the following form,

P(Σ) = ∀σ ∈ Σ. P̂(σ)

where P̂ is a computable predicate on A∗.
Hence, a property is defined exclusively in terms of individual executions.
It may not specify a relationship between possible executions of the program.
Information flow, for example, which can only be specified as a condition on the
set of possible executions of a program, is not a property. The other example
policies provided in the previous section are all security properties.
We assume that the empty sequence is contained in any property. This
describes the idea that an untrusted program that has not started executing is
not yet in violation of any property. From a technical perspective, this decision
allows us to avoid repeatedly considering the empty sequence as a special case
of an execution sequence in future definitions of enforceable properties.
Given some set of actions A, a predicate P̂ over A∗ induces the security property P(Σ) = ∀σ ∈ Σ. P̂(σ). We often use the symbol P̂ interchangeably as
a predicate over execution sequences and as the induced property. Normally,
the context will make clear which meaning we intend.
Properties that specify that “nothing bad ever happens” are called safety properties [Lam77]. We can make this definition precise as follows. Predicate P̂ induces a safety property if and only if

∀σ ∈ pre(Σ). ¬P̂(σ) ⇒ ∀σ′ ∈ Σ. (σ ≤ σ′ ⇒ ¬P̂(σ′))
Informally, this definition states that once a bad action has taken place, thereby
excluding the initial segment of an execution from the property, there is no
extension of that segment that can remedy the situation. For example, access-
control policies are safety properties since once a restricted resource has been
accessed, the policy is broken. There is no way to “un-access” the resource and
fix the situation afterwards.
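The access-control example can be checked mechanically. This is a small illustration of ours, not the paper's: a "nothing bad ever happens" predicate over (action, resource) pairs, together with a check that every extension of a bad prefix remains bad.

```python
def p_hat(sigma, forbidden=frozenset({"passwd"})):
    """Safety-style predicate: no forbidden resource is ever accessed.
    (The resource name "passwd" is a hypothetical example.)"""
    return all(res not in forbidden for (_, res) in sigma)

bad_prefix = [("read", "passwd")]
assert not p_hat(bad_prefix)

# No extension can "un-access" the resource: every continuation of a
# bad prefix still violates the predicate.
extensions = [bad_prefix + suffix
              for suffix in ([], [("close", "passwd")], [("read", "log")])]
assert all(not p_hat(ext) for ext in extensions)
```

Exhaustively checking all extensions is of course impossible in general; the point is only that violation of a safety predicate is monotone under extension, which is exactly what the implication above states.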
Our definition of safety differs slightly from that of previous authors. Since
we wish to consider nonuniform systems, σ ranges over pre(Σ) rather than Σ. On
uniform systems Σ = A∗ and therefore pre(Σ) = Σ; consequently, the definition
we give corresponds exactly to previous work. On nonuniform systems pre(Σ)
is a superset of Σ. In our definition of safety, this implies that a sequence may
become irremediably bad at a point that does not correspond to a full execution