Authentication for Mobile Agents*
S. Berkovits**, J. D. Guttman, and V. Swarup
The MITRE Corporation
202 Burlington Road
Bedford, MA 01730-1420
* This work was supported by the MITRE-Sponsored Research Program. Appeared in Mobile Agents and Security, G. Vigna (Ed.), LNCS 1419, Springer Verlag, 1998.
** S. Berkovits is also affiliated with the Department of Mathematical Sciences, University of Massachusetts–Lowell.
Abstract. In mobile agent systems, program code together with some
process state can autonomously migrate to new hosts. Despite its many
practical benefits, mobile agent technology results in significant new se-
curity threats from malicious agents and hosts. In this paper, we propose
a security architecture to achieve three goals: certification that a server
has the authority to execute an agent on behalf of its sender; flexible
selection of privileges, so that an agent arriving at a server may be given
the privileges necessary to carry out the task for which it has come to
the server; and state appraisal, to ensure that an agent has not become
malicious as a consequence of alterations to its state. The architecture
models the trust relations between the principals of mobile agent systems
and includes authentication and authorization mechanisms.
1 Introduction
Currently, distributed systems employ models in which processes are statically
attached to hosts and communicate by asynchronous messages or synchronous
remote procedure calls. Mobile agent technology extends this model by including
mobile processes, i.e., processes which can autonomously migrate to new hosts.
Numerous benefits are expected; they include dynamic customization both at
servers and at clients, as well as robust remote interaction over unreliable net-
works and intermittent connections [7,15,25].
Despite its many practical benefits, mobile agent technology results in sig-
nificant new security threats from malicious agents and hosts. In fact, several
previous uses of mobile agents have been malicious, e.g., the Internet worm. Se-
curity issues are recognized as critical to the acceptability of distributed systems
based on mobile agents. An important added complication is that, as an agent
traverses multiple machines that are trusted to different degrees, its state can
change in ways that adversely impact its functionality.
Threats, vulnerabilities, and countermeasures for the currently predominat-
ing static distributed systems have been studied extensively; sophisticated dis-
tributed system security architectures have been designed and implemented [13, 21].
These architectures use the access control model, which provides a basis
for secrecy and integrity security policies. In this model, objects are resources
such as files, devices, processes, and the like; principals are entities that make
requests to perform operations on objects. A reference monitor is a guard that
decides whether or not to grant each request based on the principal making the
request, the operation requested, and the access rules for the object.
The process of deducing which principal made a request is called authenti-
cation. In a distributed system, authentication is complicated by the fact that
a request may originate on a distant host and may traverse multiple machines
and network channels that are secured in different ways and are not equally
trusted. Because of the complexity of distributed authentication, a formal
theory is desirable: The formal theory shows how authentication decisions may
be made safely and uniformly using a small number of basic principles.
The process of deciding whether or not to grant a request—once its principal
has been authenticated—is called authorization. The authentication mechanism
underlies the authorization mechanism in the sense that authorization can only
perform its function based on the information provided by authentication, while
conversely authentication requires no information from the authorization mechanism.
In this paper, we examine a few different ways of using mobile agents, with
the aim of identifying many of the threats and security issues which a meaningful
mobile agent security infrastructure must handle. We identify three security goals
for mobile agent systems and propose an abstract architecture to achieve those
goals. This architecture is based on four distinct trust relationships between the
principals of mobile agent systems. We present and prove conditions necessary
to establish each trust relation and then create an architecture that establishes
the conditions. We use existing theory—the distributed authentication theory
of Lampson et al.—to clarify the architecture and to show that it meets its
objectives. Finally, we describe a set of practical mechanisms that implement
the abstract architecture.
This paper draws heavily from two papers that we have published [6,5]. For
related work on mobile agent security, see [3,4,16,22–24,11].
2 Mobile Agents
A mobile agent is a program that can migrate from one networked computer
to another while executing. This contrasts with the client/server model where
non-executable messages traverse the network, but the executable code remains
permanently on the computer it was installed on. Mobile agents have numerous
potential benefits. For instance, if one needs to perform a specialized search of
a large free-text database, it may be more efficient to move the program to the
database server rather than move large amounts of data to the client program.
In recent years, several programming languages for mobile agents have been
designed. These languages make different design choices as to which compo-
nents of a program’s state can migrate from machine to machine. For instance,
Java permits objects to migrate. In Obliq, first-class function values
(closures) can migrate; closures consist of program code together with an envi-
ronment that binds variables to values or memory locations. In Kali Scheme,
again, closures can migrate; however, since continuations [10,8] are first-class
values, Kali Scheme permits threads to migrate autonomously to new hosts. In
Telescript, functions are not first-class values; however, Telescript provides
special operations that permit processes to migrate autonomously.
The languages also differ in their approach to transporting objects other than
agents. When a closure or process migrates, it can either carry along all the
objects (mutable data) that it references or leave the objects behind and carry
along network references to the objects. Java lets the programmer control object
marshalling. Object migration uses copy semantics which results in multiple
copies of the same object; data consistency needs to be programmed explicitly if
it is desired. In Obliq, objects remain on the node on which they were created and
mobile closures contain network references to these objects; if object migration
is desired, it needs to be programmed explicitly by cloning objects remotely and
then deleting the originals. In Kali Scheme, objects are copied upon migration as
in Java. In Telescript, objects can either migrate or stay behind when an agent
that owns them migrates. However, if other agents hold references to an object
that migrates, those references become invalid.
In this paper, we adopt a fairly general model of mobile agents. Agent servers
are abstract processors, e.g., individual networked computers, interpreters that
run on computers, etc. Agent servers communicate among themselves using host-
to-host communication services. An agent consists of code together with execu-
tion state. The state includes a program counter, registers, local environment,
control stack, and store.
Agents execute on agent servers within the context of global environments
(called places) provided by the servers. The places provide agents with (re-
stricted) access to services such as communication services or access to compu-
tational or data resources of the underlying server. Agents communicate among
themselves by message passing. In addition, agents can invoke a special asyn-
chronous “remote apply” operation that applies a closure to arguments on a
specified remote server. Remote procedure calls can be implemented with this
primitive operation and message passing. Agent migration and cloning can also
be implemented with this primitive operation, using first-class continuation values.
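As an illustrative sketch (ours, not from the paper), the following toy Python models "remote apply" as the sole primitive and builds agent migration on top of it. The Server class, its methods, and the server names are all hypothetical, and everything runs in a single process.

# Toy sketch: "remote apply" as the sole migration primitive (hypothetical API).
from collections import deque

class Server:
    """An agent server: applies closures to arguments at its place."""
    def __init__(self, name):
        self.name = name
        self.queue = deque()

    def remote_apply(self, closure, *args):
        """Asynchronously apply `closure` to `args` at this server."""
        self.queue.append((closure, args))

    def run_pending(self):
        while self.queue:
            closure, args = self.queue.popleft()
            closure(self, *args)

def migrate(agent_step, state, next_server):
    """Agent migration: ship the rest of the computation plus its state."""
    next_server.remote_apply(agent_step, state)

# An agent step: query locally, then migrate with the accumulated state.
def search_fares(server, state):
    state = state + [f"fares@{server.name}"]
    if server.name == "united":
        migrate(search_fares, state, AMERICAN)
    else:
        print("collected:", state)

UNITED, AMERICAN = Server("united"), Server("american")
migrate(search_fares, [], UNITED)
UNITED.run_pending()
AMERICAN.run_pending()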
3 Example: Travel Agents
In this section, we will study an example that is typical of many—though not of
all—of the ways that mobile agents can be used effectively. We will try to draw
out the most important security issues that they raise, as a concrete illustration
of the problems of secure mobile agents.
Consider a mobile agent that visits the Web sites of several airlines searching
for a flight plan that meets a customer’s requirements. We focus on four servers:
a customer server, a travel agency server, and two servers owned by competing
airlines, for instance United Airlines and American Airlines, which we assume for
the sake of this example do not share a common reservation system. The mobile
agent is programmed by a travel agency. A customer dispatches the agent to
the United Airlines server where the agent queries the flight database. With
the results stored in its environment, the agent then migrates to the American
Airline server where again it queries the flight database. The agent compares
flight and fare information, decides on a flight plan, migrates to the appropriate
airline server, and reserves the desired flights. Finally, the agent returns to the
customer with the results.
The customer can expect that the individual airlines will provide true infor-
mation on flight schedules and fares in an attempt to win her business, just as
we assume nowadays that the reservation information the airlines provide over
the telephone is accurate, although it is not always complete.
However, the airline servers are in a competitive relation with each other.
The airline servers illustrate a crucial principle: For many of the most natural
and important applications of mobile agents, we cannot expect the participants
to trust one another.
There are a number of attacks they may attempt. For instance, the second
airline server may be able to corrupt the flight schedule information of the first
airline, as stored in the environment of the agent. It could surreptitiously raise
its competitor’s fares, or it could advance the agent’s program counter into the
preferred branch of conditional code. Current cryptographic techniques can pro-
tect against some but not all such attacks. Thus, the mobile agent cannot decide
its flight plan on an airline server since the server has the ability to manipulate
the decision. Instead, the agent would have to migrate to a neutral server such as
the customer’s server or a travel agency server, make its flight plan decision on
that server, and then migrate to the selected airline to complete the transaction.
This attack illustrates a principle: An agent’s critical decisions should be made
on neutral (trusted) servers.
A second kind of attack is also possible: the first airline may hoodwink the
second airline, for instance when the second airline has a cheaper fare available.
The first airline’s server surreptitiously increases the number of reservations to
be requested, say from 2 to 100. The agent will then proceed to reserve 100
seats at the second airline’s cheap fare. Later, legitimate customers will have to
book their tickets on the first airline, as the second believes that its flight is full.
This attack suggests two additional principles: A migrating agent can become
malicious by virtue of its state getting corrupted; and unchanging components of
the state should be sealed cryptographically.
4 Security Goals
Security is a fundamental concern for a mobile agent system. Harrison et al.
identified security as a “severe concern” and regarded it as the primary obstacle
to adopting mobile agent systems.
The operation of a mobile agent system will normally be subject to vari-
ous agreements, whether declared or tacit. These agreements may be violated,
accidentally or intentionally, by the parties they are intended to serve. A mo-
bile agent system can also be threatened by parties outside of the agreements,
who may create rogue agents or hijack existing agents.
There are a variety of desirable security goals for a mobile agent system. Most
of these concern the interaction between agents and servers. The user on behalf of
whom an agent operates wants it to be protected—to the extent possible—from
malicious or inept servers and from the intermediate hosts which are involved in
its transmission. Conversely, a server, and the site at which it operates, needs to
be protected from malicious or harmful behavior by an agent.
Not all attractive goals can be achieved, however, except in special circum-
stances. In the case of mobile agents, one of the primary motivations is that
they allow a broad range of users access to a broad range of services offered
by different—frequently competing—organizations. Thus, in many of the most
natural applications, many of the parties do not trust each other. In our opin-
ion, some previous work is vitiated by this fact: It assumes a
degree of trust among the participants which will not exist in many applications
of primary interest.
Nevertheless, the special cases may be of interest to some organizations.
A large organization like the United States Department of Defense might set
up a mobile agent system for inter-service use; administrative and technical
constraints might ensure that the different parties can trust each other in ways
that commercial organizations do not. In this paper, however, we will focus on
the more generic case, in which there will be mistrust and attempts to cheat.
We assume that different parties will have different degrees of trust for each
other, and in fact some parties may be in a competitive or even hostile relation
to one another. As a consequence, we may infer that one party cannot be certain
that another party is running an untampered server. An agent that reaches that
party may not be allowed to run correctly, or it may be discarded. The server
may forge messages purporting to be from the agent. Moreover, the server may
inspect the state of the agent to ferret out its secrets. For this reason, we assume
that agents do not carry keys.
Existing approaches for distributed security allow us to achieve several
basic goals. These include authenticating an agent’s endorser and its sender,
checking the integrity of its code, and offering it privacy during transmission, at
least between servers willing to engage in symmetric encryption.
However, at least three crucial security goals remain:
(1) Certification that a server has the authority to execute an agent on behalf
of its sender. If executing an agent involves contacting other servers, then
a server may have to authenticate that it is a legitimate representative of
the agent. The sender of an agent may want to control which servers will be
allowed to authenticate themselves in this role.
(2) Flexible selection of privileges, so that an agent arriving at a server may be
given the privileges necessary to carry out the task for which it has come to
the server. There are some applications in which a sender wants his agent to
run with restricted authority most of the time, but with greater authority in
certain situations. For instance, in the travel agent example of Section 3, a
data-collection agent collecting flight information on an airline server needs
only ordinary privilege. However, when it returns to its home server or a
travel agency server, the agent must request privilege so that it can select
a flight plan and purchase a ticket. Thus, there must be a mechanism to
allow an agent to request different levels of privilege depending on its state
(including its program counter).
(3) State appraisal, to ensure that an agent has not become malicious as a con-
sequence of alterations to its state. Because a migrating agent can become
malicious if its state is corrupted, as in the case of the travel agent of Sec-
tion 3, a server may want to execute a procedure to test whether an agent
is in a harmful state. However, the test must be application-specific, which
suggests that reputable manufacturers of mobile agents may want to pro-
vide each one with an appropriate state appraisal function to be used each
time a server starts an agent. The code to check the agent’s state may be
shipped under the same cryptographic signature that protects the rest of the
agent’s code, so that a malicious intermediary cannot surreptitiously modify
the state appraisal function.
In the remainder of this paper, we will focus our attention on achieving these
three goals.
5 Security for Mobile Agents: Theory
In this section, we will describe a security architecture for mobile agent systems
that is designed to achieve the security goals listed in Section 4. The architec-
ture consists of two levels. The first is the authentication level. The mechanisms
at this level combine to meet the first of the above security goals. The other
two goals are achieved via a pair of state appraisal functions together with the
mechanisms of the authorization layer of the architecture which determine with
what authorizations the agent is to run.
5.1 Authentication
Authentication is the process of deducing which principal has made a specific re-
quest. In a distributed system, authentication is complicated by the fact that a
request may originate on a distant host and may traverse multiple machines and
network channels that are secured in different ways and are not equally trusted.
For this reason, Lampson and his colleagues developed a logic of authenti-
cation that can be used to derive one or more principals who are responsible for
a request.
Elements of a Theory of Authentication The theory—which is too rich
to summarize here—involves three primary ingredients. The first is the notion
of principal. Atomic principals include persons, machines, and keys; groups of
principals may also be introduced as principals; and in addition principals may
be constructed from simpler principals by operators. The resulting compound
principals have distinctive trust relationships with their component principals.
Second, principals make statements, which include assertions, requests, and per-
formatives.1 Third, principals may stand in the “speaks for” relation; one prin-
cipal P1 speaks for a second principal P2 if, when P1 says s, it follows that
P2 says s. This does not mean that P1 is prevented from uttering phrases not
already uttered by P2; on the contrary, it means that if P1 makes a statement,
P2 will be committed to it also. For instance, granting a power of attorney cre-
ates this sort of relation (usually for a clearly delimited class of statements) in
current legal practice. When P1 speaks for P2, we write P1 ⇒ P2. One of the
axioms of the theory allows one principal to pass the authority to speak for him
to a second principal, simply by saying that it is so:
(P2 says P1 ⇒ P2) ⊃ (P1 ⇒ P2)
This is called the handoff axiom; it says that a principal can hand his authority
off to a second principal. It requires a high degree of trust.
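A minimal sketch (ours) of how the handoff axiom can be applied mechanically; the tuple encoding of statements and facts is an assumption made for illustration only.

# Sketch: deriving "speaks for" facts via the handoff axiom (illustrative encoding).
# A statement is a pair (speaker, fact); a fact ("speaks_for", P1, P2) encodes P1 => P2.

def apply_handoff(statements):
    """Handoff axiom: from P2 says (P1 => P2), derive P1 => P2."""
    derived = set()
    for speaker, fact in statements:
        if fact[0] == "speaks_for" and fact[2] == speaker:
            derived.add(fact)  # the target principal itself uttered the handoff
    return derived

stmts = {("P2", ("speaks_for", "P1", "P2"))}
assert ("speaks_for", "P1", "P2") in apply_handoff(stmts)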
Three operators will be needed for building compound principals, namely
the as, for, and quoting operators. If P1 and P2 are principals, then P1 as P2
is a compound principal whose authority is more limited than that of P1. P2
is in effect a role that P1 adopts. In our case, the programs (or rather, their
names or digests) will be regarded as roles. Quoting, written P | Q, is defined
straightforwardly: (P | Q) says s abbreviates P says Q says s.
The for operator expresses delegation. P1 for P2 expresses that P1 is acting
on behalf of P2. In this case P2 must delegate some authority to P1; however, P1
may also draw on his own authority. For instance, to take a traditional example,
if a database management system makes a request on behalf of some user, the
request may be granted based on two ingredients, namely the user’s identity
supplemented by the knowledge that the database system is enforcing some
constraints on the request. Because P1 is combining his authority with P2’s, to
authenticate a statement as coming from P1 for P2, we need evidence that P1
has consented to this arrangement, as well as P2.
Mobile agents require no additions to the theory of Lampson et al.; the theory
as it exists is an adequate tool for characterizing the different sorts of trust
relationships that mobile agents may require.
1 A statement is a performative if the speaker performs an action by means of uttering
it, at least in the right circumstances. The words “I do” in the marriage ceremony are
a familiar example of a performative. Similarly, “I hereby authorize my attorneys,
Dewey, Cheatham and Howe, jointly or severally, to execute bills of sale on my
behalf.” Semantically it is important that requests and performatives should have
truth values, although it is not particularly important how those truth values are
assigned.
Atomic Principals for Mobile Agents Five categories of basic principals
are specifically relevant to reasoning about mobile agents:
– The authors (whether people or organizations) that write programs to exe-
cute as agents. Authors are denoted by C, C′, etc.
– The programs they create, which, together with supplemental information,
are signed by the author. Programs and digests of programs are denoted by
D, D′, etc.
– The senders (whether people or other entities) that send agents to act on
their behalf. A sender may need a trusted device to sign and transmit agents.
Senders are denoted by S, S′, etc.
– The agents themselves, consisting of a program together with data added
by the sender on whose behalf it executes, signed by the sender. Agents and
digests of agents are denoted by A, A′, etc.
– The places where agents are executed. Each place consists of an execution
environment on some server. Places may transfer agents to other places, and
may eventually return results to the sender. Places are denoted by I, I′, etc.
Each author, sender, and place is assumed to have its own public/private key
pair. Programs and agents are not allowed to have keys since they are handled
by places that may be untrustworthy.
In addition to these atomic principals, the theory also requires:
– Public keys; and
– Compound principals built from keys and the five kinds of atomic principals
given above, using the operators of the theory of authentication.
Three functions associate other principals with any agent A:
– The agent’s program, denoted program(A).
– The agent’s author (i.e., the author of the agent’s program), denoted
author(A).
– The agent’s sender, denoted sender(A).
Naturally, an implementation also requires certification authorities; the (stan-
dard) role they play is described in Section 7.
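These principals and operators can be rendered as a small datatype; the sketch below is our illustrative encoding, not a representation prescribed by the paper.

# Sketch: principals as a small algebraic datatype (our encoding).
from dataclasses import dataclass

@dataclass(frozen=True)
class Atomic:
    kind: str    # "author" | "program" | "sender" | "agent" | "place" | "key"
    name: str

@dataclass(frozen=True)
class As:        # P1 as P2: P1 acting in the role P2 (e.g., a program digest)
    p1: object
    p2: object

@dataclass(frozen=True)
class For:       # P1 for P2: P1 acting on behalf of P2 (delegation)
    p1: object
    p2: object

@dataclass(frozen=True)
class Quoting:   # P | Q: (P | Q) says s abbreviates P says Q says s
    p: object
    q: object

@dataclass(frozen=True)
class Agent:     # ties an agent to its associated principals
    digest: str          # A, the digest naming the agent
    program: Atomic      # program(A)
    author: Atomic       # author(A)
    sender: Atomic       # sender(A)

a = Agent("A1", Atomic("program", "D"), Atomic("author", "C"), Atomic("sender", "S"))
principal = For(a, a.sender)   # the compound principal (A for S)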
The Natural History of an Agent There are three crucial types of events
in the life history of an agent. They are the creation of the underlying program;
the creation of the agent; and migration of the agent from one execution site
to another. These events introduce compound principals built from the atomic
principals given above.
Program Creation. The author of a program prepares source code and a state
appraisal function (denoted by max) for the program. The function max will
calculate, as a function of the agent’s current state, the maximum set of permis-
sions to be accorded an agent running the program. Should max detect that the
agent state has been corrupted, it will set the maximum set of permissions at a
reduced level, possibly allowing no permissions at all.
In addition, a sender permission list (SPL) may be included for determining
which users are permitted to send the resulting agent. In the event that the
entire SPL is not known at the time the program is created, another mechanism
such as a sender permission certificate (SPC) can be used.
After compiling the source code for the program and its state appraisal func-
tion, the author C then combines these compiled pieces of code with the SPL
and her name, constructs a message digest D for the result, and signs that with
her private key. D is regarded as a name of the program of which it is a digest.
C’s signature on D certifies that C is the one who created the program named
by D. With this certification, any entity can later verify that C did indeed
create the program and that the program’s code, state appraisal function, and
SPL have not changed, either accidentally or maliciously. Should C wish to add a
sender to the permission list, she creates and signs an SPC certificate containing
the program name D and the sender’s name S.
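The following sketch (ours) illustrates program creation under stated assumptions: SHA-256 for the digest D and Ed25519 signatures via the third-party cryptography package, neither of which the paper prescribes; the field layout is invented for illustration.

# Sketch of program creation: digest plus author signature (assumed primitives).
import hashlib, json
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

author_key = Ed25519PrivateKey.generate()            # C's key pair

program = {
    "code": "...compiled agent code...",
    "max": "...compiled state appraisal function...",
    "spl": ["S"],                                    # sender permission list
    "author": "C",
}
blob = json.dumps(program, sort_keys=True).encode()
D = hashlib.sha256(blob).digest()                    # D: the digest naming the program
signature = author_key.sign(D)                       # C's signature on D

# Any entity holding C's public key can verify that the program, its state
# appraisal function, and its SPL are unchanged:
author_key.public_key().verify(signature, D)         # raises if tampered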
By signing D, the author is effectively making a statement about agents A
whose programs are D and about senders S who appear on the SPL of D: The
author C is declaring that the sender S of a signed agent (A for S) speaks for
the agent. Formally, this is the statement
C|A|S says [S|(A for S) ⇒ (A for S)]
for all C, A, and S such that C = author(A), S = sender(A), and S is on the
SPL of program(A). The author’s signature on an SPC makes a similar statement
about the sender named in the certificate.
We assume as an axiomatic principle that the author of an agent speaks for
the agent. Formally, this is the statement
C|A ⇒ A
for all C and A such that C = author(A).
Agent Creation. To prepare a program for sending, the sender attaches a second
state appraisal function (denoted by req), called the request function. req will
calculate the set of permissions the sender wants an agent running the program to
have, as a function of the agent’s current state. For some states Σ, req(Σ) may be
a proper subset of max(Σ); for instance, the sender may not be certain how D will
behave, and she may want to ensure she is not liable for some actions. The sender
may also include a place permission list (PPL) for determining which places
are allowed to run the resulting agent on the sender’s behalf, either via agent
delegation or agent handoff (see below under Agent Migration). One can also
consider place permission certificates (PPCs) whereby the sender can essentially
add such acceptable places to the PPL even after the agent has been launched.
The sender S computes a message digest A for the following items: the pro-
gram, its digest D, the function req, the PPL, S’s name, and a counter S incre-
ments for each agent she sends. A is regarded as a name of the agent of which
it is a digest. She then signs the message digest A with her private key. S’s sig-
nature on A certifies that S created the agent named by A to act on her behalf.
The signed agent is identified with principal A for S.
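Agent creation follows the same pattern; this companion sketch (same assumptions as above, with invented field names) adds the sender's req function, PPL, name, and counter before signing.

# Sketch of agent creation by sender S (same illustrative assumptions as above).
import hashlib, json
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

sender_key = Ed25519PrivateKey.generate()              # S's key pair
D = hashlib.sha256(b"...signed program...").digest()   # stands in for the program digest

agent_fields = {
    "program_digest": D.hex(),                         # D, naming the program
    "req": "...compiled request function...",          # sender's appraisal function
    "ppl": [["I1", "handoff"], ["I2", "delegation"]],  # place permission list
    "sender": "S",
    "counter": 7,                                      # incremented per agent S sends
}
A = hashlib.sha256(json.dumps(agent_fields, sort_keys=True).encode()).digest()
agent_signature = sender_key.sign(A)                   # S's signature names the agent A
# The signed agent is then identified with the compound principal (A for S).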
By signing A, the sender S is effectively saying that it speaks for the signed
agent (A for S). Formally, this is the statement
S says [S|(A for S) ⇒ (A for S)]
for all A and S such that S = sender(A).
By signing the PPL within A, the sender S is saying that places I that appear
on the PPL with an Agent Handoff tag can execute A as the principal (A for S),
while places I that appear on the PPL with an Agent Delegation tag can execute
A as the principal (I for A for S). Formally, these are the statements:
S | (A for S) says [I | (A for S) ⇒ (A for S)] (Agent Handoff)
S | (A for S) says [I | (I for A for S) ⇒ (I for A for S)] (Agent Delegation)
for all A, S, and I such that S = sender(A) and I is on the PPL of A. The
sender’s signature on a PPC makes a similar statement about the place named
in the certificate.
The act of creating the agent establishes the trust relationship embodied in
the following theorem.
Theorem 1. Let A be an agent such that C = author(A), S = sender(A), and
S is on the SPL of program(A) or S holds an SPC for program(A). Then:
S|(A for S) ⇒ (A for S)
Proof. The following assumptions hold:
(a) C|A ⇒ A (axiom).
(b) C |A |S says [S |(A for S) ⇒ (A for S)] (derived from C’s signature on
program(A) and the SPL or SPC of program(A)).
(c) S says [S|(A for S) ⇒ (A for S)] (derived from S’s signature on A).
Applying (a) to (b) yields A | S says [S | (A for S) ⇒ (A for S)] (d). The
delegation axiom X ∧ (Y | X) ⇒ (Y for X) applied to (c) and (d) yields
(A for S) says [S |(A for S) ⇒ (A for S)] (e). The result of the theorem then
follows from (e) using the handoff axiom.
Before the sender dispatches A, she also attaches a list of parameters, which
are in effect the initial state Σ0 for the agent. The state is not included un-
der any cryptographic seal, because it must change as the agent carries out its
computation. However, S’s request function req may impose invariants on the
state.
Agent Migration. When an agent is ready to migrate from one place to the next,
the current place must construct a request containing the agent A, its current
state Σ, the current place I1, the principal P1 on behalf of whom I1 is executing
the agent, and a description of the principal P2 on behalf of whom the next place
I2 should execute the agent starting in state Σ.
The statement I2 | P2 ⇒ P2 asserts the expected trust relationship between I2
and P2, namely, that, whenever I2 says P2 makes a statement s, P2 is committed
to s. The authentication machinery can be construed as providing a proof of this
statement. Depending on whether I2 is trusted by I1 or by the agent A, four
different values of P2 are possible, expressing four different trust relationships.
(1) Place Handoff. I1 can hand the agent off to I2. I2 will then execute the agent
on behalf of P1. In this case, P2 is P1, and the migration request by I1 is
assumed to say I1 | P1 says I2 | P2 ⇒ P2.
(2) Place Delegation. I1 can delegate the agent to I2. I2 will combine its authority
with that of P1 while executing the agent.2 In this case, P2 is (I2 for P1),
and the migration request by I1 is assumed to say I1 | P1 says I2 | P2 ⇒ P2.
The response by I2 to accept the delegation is assumed to say I2 | P1 says
I2 | P2 ⇒ P2.
(3) Agent Handoff. The agent can directly hand itself off to I2. I2 will execute
A on behalf of the agent. In this case, P2 is (A for S), and A’s PPL or a
PPC must imply S | (A for S) says (I2 | P2 ⇒ P2). The migration request
by I1 does not assert anything and can be unsigned.
(4) Agent Delegation. The agent can delegate itself to I2. I2 will combine its
authority with that of the agent while executing A. In this case, P2 is
(I2 for A for S), and A’s PPL or a PPC must imply S | (A for S) says (I2 |
P2 ⇒ P2). The response by I2 to accept the delegation is assumed to say
I2 | (A for S) says (I2 | P2 ⇒ P2). The migration request by I1 does not
assert anything and can be unsigned.
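A table-driven sketch (ours) of how P2 is formed in each of the four cases just listed; the string encoding of principals is illustrative only.

# Sketch: the principal P2 for each migration case (illustrative encoding).
def next_principal(case, P1, I2, A, S):
    """Return P2, on behalf of whom I2 should execute the agent."""
    if case == "place_handoff":      # I1 trusts I2; P2 = P1
        return P1
    if case == "place_delegation":   # I2 combines its authority with P1's
        return ("for", I2, P1)
    if case == "agent_handoff":      # the agent's PPL/PPC trusts I2; P2 = A for S
        return ("for", A, S)
    if case == "agent_delegation":   # P2 = I2 for (A for S)
        return ("for", I2, ("for", A, S))
    raise ValueError(case)

assert next_principal("agent_delegation", None, "I2", "A", "S") == \
       ("for", "I2", ("for", "A", "S"))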
In the first case, I2 does not appear in the resulting compound principal. This
requires I1 to trust I2 not to do anything I1 would not be willing to do. In the
third and fourth cases, because the agent itself is explicitly expressing trust in
I2, the resulting compound principal does not involve I1. The agent trusts I2 to
appraise the state before execution. Assuming that the result of the appraisal is
accepted, I1 has discharged its responsibility. Place handoff and place delegation
will usually be initiated directly by the server of the place, while agent handoff
and agent delegation will usually be initiated by the agent’s code.
Agent launch may be regarded as place handoff where the sender’s home
place plays the role of I2 and the sender herself acts as I1.
Each time an agent migrates to a new place, the authentication machinery
must verify that the statement I2 | P2 ⇒ P2 is true. How this is done depends on
which of the four cases of migration is involved. In each case, however, the veri-
fication is performed simply by checking to see if a small number of statements
are true. The following four theorems show what these statements are in each of
the four respective cases.
2 After such delegation, where the agent travels after I2 and what privileges it will be
given thereafter may depend on input from I2 or trust in I2.
Let A be an agent such that S = sender(A), and assume that A migrates
from place I1 as principal P1 to place I2 as principal P2.
Theorem 2 (Place Handoff). Let P2 = P1. Then I2 | P2 ⇒ P2 follows from
the following assumptions:
(a) I1 | P1 ⇒ P1 (derived from A’s certificates).
(b) I1 | P1 says I2 | P2 ⇒ P2 (derived from I1’s request).
Proof. Applying (a) to (b) yields P1 says I2 | P2 ⇒ P2 (c). The result of the
theorem follows from (c) using P1 = P2 and the handoff axiom.
Theorem 3 (Place Delegation). Let P2 = I2 for P1. Then I2 | P2 ⇒ P2
follows from the following assumptions:
(a) I1 | P1 ⇒ P1 (derived from A’s certificates).
(b) I1 | P1 says I2 | P2 ⇒ P2 (derived from I1’s request).
(c) I2 | P1 says I2 | P2 ⇒ P2 (derived from I2’s response).
Proof. Applying (a) to (b) yields P1 says I2 | P2 ⇒ P2 (d). The delegation axiom
X ∧ (Y | X) ⇒ Y for X applied to (d) and (c) yields P2 says I2 | P2 ⇒ P2 (e).
The result of the theorem then follows from (e) using the handoff axiom.
Theorem 4 (Agent Handoff). Let P2 = A for S. Then I2 | P2 ⇒ P2 follows
from the following assumptions:
(a) S | (A for S) ⇒ (A for S) (derived by Theorem 1).
(b) S | (A for S) says [I2 | P2 ⇒ P2] (derived from A’s PPL or accompanying
PPC).
Proof. The result of the theorem follows from (a) and (b) using P2 = (A for S)
and the handoff axiom.
Theorem 5 (Agent Delegation). Let P2 = I2 for A for S. Then I2 | P2 ⇒
P2 follows from the following assumptions:
(a) S | (A for S) ⇒ (A for S) (derived by Theorem 1).
(b) S | (A for S) says [I2 | P2 ⇒ P2] (derived from A’s PPL or accompanying
PPC).
(c) I2 | (A for S) says [I2 | P2 ⇒ P2] (derived from I2’s response).
Proof. Applying (a) to (b) yields (A for S) says [I2 | P2 ⇒ P2] (d). The
delegation axiom X ∧ (Y | X) ⇒ Y for X applied to (c) and (d) yields
P2 says [I2 | P2 ⇒ P2]. The result of the theorem then follows using the handoff
axiom.
We can now describe what happens when a place I2 receives a request to
execute an agent A with a state Σ on behalf of a principal P2. First, I2 will check
the author’s signature on the program of A and the sender’s signature on A itself.
This would be done using standard, well-understood, public key certification
mechanisms. Second, I2 will authenticate P2 by verifying that I2 | P2 ⇒ P2
is true. This would be done by checking to see that the assumptions (given by
the theorems above) which imply I2 | P2 ⇒ P2 follow from A’s PPL and PPCs,
the certificates carried by A, and the certificates held by certification authorities.
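These two checks can be sketched as follows (ours; signature verification and statement checking are stubbed out as parameters, and the helper names are invented).

# Sketch: what place I2 does on receiving (agent A, state, principal P2).
def accept_agent(A, state, P2, verify_signature, statements_hold):
    """Accept the agent only if both authentication checks succeed."""
    # Step 1: check the author's signature on program(A) and the sender's on A.
    if not (verify_signature(A["program"]) and verify_signature(A)):
        return False
    # Step 2: authenticate P2: the assumptions of the relevant theorem (2-5)
    # must imply I2 | P2 => P2, given A's PPL/PPCs and certificates.
    return statements_hold(P2)

# Toy call with stubbed checks:
assert accept_agent({"program": "..."}, state={}, P2="A for S",
                    verify_signature=lambda x: True, statements_hold=lambda p: True)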
We have now met the first of the security goals proposed in Section 4, namely
certification that a place has the authority to execute an agent, ultimately on
behalf of its sender.
Admissible Agent Principals. Let an admissible agent principal be defined in-
ductively:
(1) A for S is an admissible agent principal if A is an agent and S is a sender.
(2) I for P is an admissible agent principal if I is a place and P is an admissible
agent principal.
If we assume that an agent can be created and can migrate only in the ways
described above, then an agent can only be executed on behalf of an admissible
agent principal.
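The inductive definition translates directly into a recursive check; the encoding below is ours.

# Sketch: the inductive definition of admissible agent principals (our encoding:
# ("for", X, Y) represents X for Y; principals are tagged by predicates).
def admissible(p, is_agent, is_sender, is_place):
    if p[0] == "for" and is_agent(p[1]) and is_sender(p[2]):
        return True                                   # base case: A for S
    if p[0] == "for" and is_place(p[1]):
        return admissible(p[2], is_agent, is_sender, is_place)   # I for P
    return False

# Example: I2 for (I1 for (A for S)) is admissible.
is_a = lambda x: x == "A"
is_s = lambda x: x == "S"
is_i = lambda x: x in ("I1", "I2")
assert admissible(("for", "I2", ("for", "I1", ("for", "A", "S"))), is_a, is_s, is_i)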
5.2 Authorization
The result of the authentication layer is a principal P2 on behalf of whom I2
has been asked to execute the agent. The purpose of the authorization layer
is to determine what level of privilege to provide to the agent for its work.
The authorization layer has two ingredients. First, the agent’s state appraisal
functions max and req are executed; their result is to determine what privileges
(“permits”) the agent would like to request given its current state. Second, the
server has access control lists associated with these permits; the access control
lists determine which of the requested permits it is willing to grant.
We will assume that the request is for a set α of permits; thus, a request is a
statement of the form please grant α. In our approach, agents are programmed
to make this request when they arrive at a site of execution; the permits are
then treated as capabilities during execution: no further checking is required.
We distinguish one special permit run. By convention, a server will run an agent
only if it grants the permit run as a member of α.
The request is made by means of the two state appraisal functions. The
author-supplied function max is applied to Σ returning a maximum safe set
of permits. The sender-supplied appraisal function req specifies a desired set of
permits; this may be a proper subset of the maximum judged safe by the author.
However, it should not contain any other, unsafe permits. Thus, we consider P2
to be making the conditional statement:
if req(Σ) ⊆ max(Σ) then please grant req(Σ) else please grant ∅
I2 evaluates req(Σ) and max(Σ). If either req or max detects dangerous tamper-
ing to Σ, then that function will request ∅. Likewise, if req makes an excessive
request, then the conditional ensures that the result will be ∅. Since run ∉ ∅,
the agent will then not be run by I2. Otherwise, P2 has requested some set α′
of permits.
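A sketch (ours) of this conditional request; the toy appraisal functions and the state dictionary are invented for illustration.

# Sketch of the permit request made via the two appraisal functions
# (max from the author, req from the sender); the encoding is ours.
RUN = "run"   # the special permit a server must grant before executing the agent

def requested_permits(max_fn, req_fn, state):
    """P2's conditional request: req(S) if it lies within max(S), else the empty set."""
    maximum, desired = max_fn(state), req_fn(state)
    return desired if desired <= maximum else set()

# Toy appraisal functions over a state dictionary:
max_fn = lambda s: {RUN, "read"} if s.get("seats", 0) <= 4 else set()  # tamper check
req_fn = lambda s: {RUN, "read"}
assert requested_permits(max_fn, req_fn, {"seats": 2}) == {RUN, "read"}
assert requested_permits(max_fn, req_fn, {"seats": 100}) == set()      # corrupted state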
In the logic of authentication of Lampson and his colleagues, authorization—the
granting of permits—is carried out using access control lists. Logically, an access
control list is a set of formulas of the form (Q says s) ⊃ s, where the statements s
are requests for access to resources and Q is some (possibly compound) principal. If
a principal P says s′, then I2 tries to match P and s′ against the access control
list. For any entry (Q says s) ⊃ s, if P ⇒ Q and s′ ⊃ s, then I2 may infer s,
thus effectively granting the request. This matching may be made efficient if P,
Q, and s take certain restricted syntactic forms.
Since we are concerned with requests for sets of permits, if α ⊆ α′ then
please grant α′ ⊃ please grant α. Hence, a particular access control list entry
may allow only a subset of the permits requested. The permits granted will be
the union of those allowed by each individual access control list entry
(Q says please grant α) ⊃ please grant α
that matches in the sense that P2 ⇒ Q and α ⊆ α′.
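A sketch (ours) of the resulting granting rule; the ACL entries, the toy speaks-for relation, and the permit names are illustrative.

# Sketch: granting permits from access control list entries (our encoding).
# Each ACL entry is (Q, alpha): principal Q may be granted the permit set alpha.
def grant(P2, requested, acl, speaks_for):
    """Union over entries (Q, alpha) with P2 => Q and alpha a subset of the request."""
    granted = set()
    for Q, alpha in acl:
        if speaks_for(P2, Q) and alpha <= requested:   # P2 => Q and alpha within alpha'
            granted |= alpha
    return granted

acl = [("travel-agents", {"run", "read"}), ("preferred", {"write"})]
sf = lambda p, q: q == "travel-agents"                 # toy speaks-for relation
assert grant("A for S", {"run", "read", "write"}, acl, sf) == {"run", "read"}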
6 Example Revisited: Secure Travel Agents
We now return to our travel agents example (Section 3) and describe how the
various trust relationships of that example can be expressed in our security
architecture, and how state appraisal functions may be used to achieve their
security goals.
In the example, a travel agency purchases a travel reservation program containing
a state appraisal function from a software house. The state appraisal function
determines when and how the agent will have write privileges to enter actual
reservations in the databases of an airline, a hotel, or a car rental firm. Otherwise,
it requests only read privileges to obtain pricing and availability information from
those databases.
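A hypothetical state appraisal function for such a travel reservation agent might look as follows (ours; the thresholds and field names are invented).

# Hypothetical max function for the travel reservation agent: write access
# only once the itinerary is decided, and refusal on suspicious requests.
def max_permits(state):
    if state.get("seats_requested", 0) > 4:        # corrupted or excessive request
        return set()                               # refuse even to run
    if state.get("itinerary_decided"):
        return {"run", "read", "write"}            # may enter actual reservations
    return {"run", "read"}                         # pricing/availability only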
When a customer submits a tentative itinerary for a business trip or a vaca-
tion (via an HTML form, for example), the travel agency prepares to launch the
travel reservation agent. It adds a permit request function. The agency has spe-
cial relationships with certain airlines, hotels, car rental companies, and other
travel agencies. The agency provides a PPL or PPCs to hand off or delegate
authority to servers. For instance, the travel agency may be willing to hand off
authority to its own server and to a neutral, trusted travel agency server, but it
may wish only to delegate authority to Airline 1 and Airline 2 (since they have
vested interests). Alternatively, the agency may get special commissions from
Airline 2 and may be eager to accept anything that airline suggests. As a result,
it may be willing to hand off to Airline 2. The travel agency launches the agent
at its server, with an initial state containing the customer’s desired travel plans.
As its first task, the agent migrates to the Airline 1 server I1. The migration
request is for agent delegation to Airline 1, giving I1 the authority to speak
on the agent’s behalf. Airline 1 accepts this delegation and runs the agent as
I1 for A for S. This ensures that Airline 1 takes responsibility while speaking
for the agent, for instance, while deciding that it is to the customer’s advantage
to visit a hotel that Airline 1 owns before moving to Airline 2. This is an example
of the agent delegating its authority to Airline 1 (Theorem 5).
Airline 1 owns a hotel chain and has strong trust in its hotels such as Hotel 1.
It sends the agent to the Hotel 1 server I2 and gives Hotel 1 whatever authority
it has over the agent. Hotel 1 runs the agent as I1 for A for S, which is the prin-
cipal that I1 hands it. This kind of trust relationship is an example of Airline 1’s
server handing off its authority to Hotel 1 (Theorem 2). As a consequence of
this trust, I2 may grant the agent access to a database of preferred room rates.
Next, the agent migrates to Airline 1’s preferred car rental agency Car
Rental 1, whose server is I3. Since Airline 1 does not own Car Rental 1,
it delegates its authority to Car Rental 1. Car Rental 1 runs the agent as
I3 for I1 for A for S. This causes Car Rental 1 to take responsibility while
speaking on Airline 1’s behalf. It also gives the agent combined authority from
I1 and I3; for instance, the agent can obtain access to rental rates negotiated
for travelers on Airline 1. Airline 1’s server has delegated its authority to Car
Rental 1 (Theorem 3).
The agent now migrates to the Airline 2 server I4. The agent’s PPL includes
Airline 2, or the agent holds a PPC that directly delegates to Airline 2 the
authority to speak on the agent’s behalf. Airline 2 accepts this delegation and
runs the agent as I4 for A for S, again agent delegation (Theorem 5). Airline 1’s
server I1 has now discharged its responsibility; it is no longer an ingredient in the
compound principal. Except that the agent is carrying the results of its inquiries
at Airline 1, Hotel 1 and Car Rental 1, it is as if the travel agency had just
delegated the agent to Airline 2.
Once the agent has collected all the information it needs, it migrates to
the customer’s trusted travel agency (Travel Agency 1) server I5 to compare
information and decide on an itinerary. The agent’s PPL or a PPC permits
directly handing Travel Agency 1 the authority to speak on its behalf. Travel
Agency 1 can thus run the agent as A for S. This permits Travel Agency 1
to make critical decisions for the agent, for instance, to make reservations or
purchase a ticket. This kind of trust relationship is an example of the agent
handing off its authority to Travel Agency 1 (Theorem 4).
We next illustrate how state appraisal functions may be used to achieve their
security goals. In particular, we will stress goals 2 and 3 of Section 4, namely