Distributed Algorithms for Cooperating
Agents∗
Eric Werner and Alexander Reinefeld†
University of Hamburg
Distributed Systems Research Group
Bodenstedtstr. 16, D-2000 Hamburg 50
E-Mail: werner@rz.informatik.uni-hamburg.dbp.de‡
Abstract
Distributed algorithms are presented that describe various degrees of
cooperation between autonomous agents. The algorithms generate
styles of cooperation that cover a whole spectrum, from total
cooperation, to complete self-interest, to absolute antagonism, to
complete self-destruction and every mixture of these. A classification
of cooperation styles is described. The algorithms act on the
intentional states of the agents. To resolve goal conflicts, three
compromise methods are considered: rank-based compromise using a
simple ranking of goals, value-optimal compromise using costs and
values with the branch-and-bound procedure, and a more efficient
value-optimal compromise using a zero-one integer programming
method that exploits goal dependencies.
∗Cite as: Werner, E. and Reinefeld, A., ’Distributed Algorithms for Cooperating
Agents’, DAI-90, Proceedings of the 10th International Workshop on Distributed Artificial
Intelligence, Bandera, Texas, 1990.
†© Werner and Reinefeld 1990. All rights reserved.
‡Address and email no longer valid. Use: eric.werner@oarf.org See also: https://www.
ericwerner.com
Key Words: Distributed algorithms, social algorithms, cooperation, spectrum
of cooperation styles, cooperation protocols, uncooperative, selfishness,
sacrifice, antagonism, robot social psychology, web bot social psychology,
human social psychology, complete indifference, degree of self-interest,
complete self-interest, primary self-interest, secondary self-interest, degree
of self-destruction, completely self-destructive, secondary self-destructive,
degree of cooperation, total cooperation, secondary cooperation, degree of
antagonism, absolute antagonism, primary antagonism, secondary antagonism.
Historical Significance and Impact¹. While written in 1990, these algorithms
for generating the full spectrum of cooperation styles are all the more
relevant today (2018). The fundamental nature and full range of cooperation
styles, from total cooperation through uncooperative behavior to total
selfishness and antagonism, is described. The paper includes the algorithms
that generate this behavior in robots, internet bots, animals and humans.
These algorithms have direct application to cooperative and noncooperative
artificial intelligent agents, whether they control physical robots or software.
The spectrum of cooperation styles (see Fig. 1) is also relevant for classifying
cooperative and non-cooperative behavior in animal and human communities.
Furthermore, the spectrum of cooperation styles and the underlying interaction
protocols (algorithms) provide a new direction for the analysis of human
psychology in social contexts.
Contents

1 Introduction
2 Agents
3 The Structure of Cooperation Algorithms
   3.1 Cooperation Procedures
   3.2 Rank-Based Compromise
   3.3 Value-Optimal Compromise
   3.4 Constraints on Goal Combinations
   3.5 Rank and Value
4 A Spectrum of Cooperation Styles
   4.1 Dimensions of Social Interaction
   4.2 Indifference
5 Cooperation and Self-Interest
   5.1 Total Cooperation
   5.2 Primary Cooperation with Secondary Self-Interest
   5.3 Primary Self-Interest with Secondary Cooperation
   5.4 Primary Self-Interest
6 Antagonism and Self-Interest
   6.1 Primary Self-Interest with Secondary Antagonism
   6.2 Primary Antagonism with Secondary Self-Interest
7 Self-Destructive Agents
   7.1 Self-Destruction and Antagonism
   7.2 Self-Destruction and Cooperation
   7.3 Mixtures of Social Interaction Attitudes
8 Relaxation and Alternatives to Resolve Conflicts
   8.1 A Relaxation Hierarchy
   8.2 Selfish Relaxation
9 Interactions Among Cooperation Styles
10 Limitations and Extensions
   10.1 Knowledge of Cooperation Styles
   10.2 Multi-Rankings
   10.3 Metaknowledge About Intentions
   10.4 Communication, Roles and Social Context
11 Conclusion

¹ This paragraph, the key words and the table of contents were not in the original
article. The rest, except for reformatting, is the original article.
1 Introduction
In distributed artificial intelligence a key issue is the generation of cooperative
interactions between agents so that they can accomplish social goals [Durfee
& Lesser 87, Gasser & Rouquette 89, Rosenschein 86, Werner 89]. In this
paper a class of abstract distributed algorithms is explored that generates
and describes a whole range of social styles including cooperative, antago-
nistic, self-interested, and self-destructive styles. An agent may have any
mixture of these interaction styles. We will refer to these interaction types
as cooperation styles even though they include very uncooperative attitudes.
Furthermore, the styles are agent and context dependent. The algorithms
generate a social interaction style by affecting the intentional state of the
agent they govern. As we will see, a key feature of a cooperation style is the
method of compromise used by the agent. The compromise style depends on
the ranking and evaluations the agent makes of his and other agents’ goals.
By looking at abstract algorithms that generate various kinds of coopera-
tion we hope to deepen our understanding of cooperation and other forms of
social interaction between agents. The paper extends the previous work on
cooperation styles [Werner 90a] by finding a cost-optimal compromise.
To develop the algorithms we rely on a formal theory of the cognitive
structure of the agents developed in [Werner 88, 89]. The reader is referred
to that work for a more detailed account of agent architecture including its
relation to communication and cooperation. After some brief preliminaries,
we describe the abstract algorithms. Then, we look at some typical styles of
cooperation generated by the algorithms. This work falls on the boundary
of the fields of computer algorithms [Horowitz & Sahni 78] and distributed
artificial intelligence. We hope to show that both fields can contribute to one
another.
2 Agents
Our overall goal is to understand the cooperative and noncooperative so-
cial interactions in which agents may participate. Cooperation we view as
a process of mutual social action that leads to individual and social goals.
Cooperation emerges out of the mutual adjustment of intentions of the par-
ticipating agents [Werner 88, 89]. This mutual adjustment of intentions
results from communication, from the proclivity to engage in a cooperation
style, and from a cooperation style. The cooperation style emerges from the
cooperation algorithm that governs the agent’s adjustment of his intentions
[Werner 90a].
We take the perspective of two or more agents acting together in some
environment Ω. Each agent A has a capacity to partially represent information
I_A about his world and to represent his intentions S_A. The agent A may also
have the capacity to partially represent the information I^B_A and intentions
S^B_A of other agents B. The agent’s information state, intentional state,
goal-information, and evaluations (values and costs) form his representational
state R_A = <I_A, S_A, V_A, G_A, ...>. The agents and their world are dynamic,
with the external state σ changing through agent actions and world processes.
The agents’ internal representational state R_A is also dynamic, being
transformed by perception, communication, and cooperation algorithms.
An intentional state S_A of an agent A is defined to be a class of strategies
that govern the actions of the agent. The uncertainty of one agent A about
the intentions of another agent B can also be defined as a class of strategies.
S^B_A represents agent A’s knowledge and uncertainty about B’s intentions. A
goal combination gc is an ordered set of goals; gc is ordered by a goal ranking
relation <_rank. Let G be the unordered set of goals in gc.

The potential of a class of strategies S is the set of all histories that are
compatible with at least one of the strategies in S. It is defined as follows:
S* =_df ∪_{π ∈ S} π*, where π is a strategy and π* is the set of worlds (past,
present and future) that are consistent with the strategy π. Given a goal g,
the potential g* is the set of worlds compatible with g.
A class of strategies S_B satisfies a goal combination gc relative to agent
B’s knowledge of A’s intentions S^A_B if those two strategy classes together
bring about the goals in the goal combination. More formally,
Satisfies(S_B, gc, S^A_B) holds if and only if S_B* ∩ S^A_B* ⊆ ∩_{g ∈ Goals(gc)} g*.

A class of strategies Ŝ_B is maximal for gc relative to S^A_B if every π that
satisfies gc is in Ŝ_B, i.e., Ŝ_B = {π : Satisfies(π, gc, S^A_B)}. Thus a class of
strategies Ŝ_B is maximal for a goal combination gc if the class contains all
strategies that satisfy the goal combination, relative to what B knows about
A’s intentions.
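The set-theoretic definitions above can be made concrete with a small sketch
in which potentials are finite sets of world histories. This is only an
illustration under simplifying assumptions (finite, explicitly enumerated
histories; the function and variable names are ours, not part of the formal
theory in [Werner 88, 89]):

def potential(strategy_class):
    # S* = union of the potentials pi* of all strategies pi in S;
    # here each strategy is already given as its set of histories pi*.
    result = set()
    for pi_star in strategy_class:
        result |= pi_star
    return result

def satisfies(S_B, goal_potentials, S_A_of_B):
    # Satisfies(S_B, gc, S^A_B): the joint potential of the two strategy
    # classes lies inside the potential g* of every goal in gc.
    joint = potential(S_B) & potential(S_A_of_B)
    return all(joint <= g_star for g_star in goal_potentials)

# Tiny example with histories named by strings:
S_B = [{"h1", "h2"}, {"h2", "h3"}]          # two strategies of agent B
S_A_of_B = [{"h1", "h2", "h3"}]             # what B knows about A's intentions
g_star = {"h1", "h2", "h3", "h4"}           # worlds compatible with a goal g
print(satisfies(S_B, [g_star], S_A_of_B))   # True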
3 The Structure of Cooperation Algorithms
A cooperation algorithm describes the mental process that is performed by an
agent in order to come up with a suitable cooperation strategy. The algorithm
takes the perspective of the agent it governs. This agent is called the active
or cooperating agent. When an active agent A is controlled by an algorithm,
we say the algorithm takes A’s perspective. We assume that there are two
agents A and B who are each controlled by their own cooperation algorithm.
In this paper we restrict ourselves to two-agent interactions. However, the
work of [Werner 90b] on the formalization of group intentions and group
abilities makes it possible to extend these algorithms to cooperation styles
of agents in groups.
If the active agent is A, i.e., the cooperation algorithm takes A’s perspec-
tive, then the input to the algorithm is a set of active and not yet active
goals in the goal sets G_A and G_B of agents A and B. Additional input is
what the cooperating agent A knows about the intentions of the other agent
B. The output is a maximal partial intentional state Ŝ that achieves some,
possibly compromised, subset of the goals of A and B. When this maximal
partial intentional state is composed with the agent’s given intentional state
S_A, a new intentional state S′_A is formed that describes the new intentions
of the agent A.
3.1 Cooperation Procedures
Let G_A and G_B be the goal sets of two agents A and B. We assume that
the algorithm takes A’s perspective. It is also assumed that A has some
knowledge of B’s intentions due to prior communication or due to a model
that A has of B. Note that, as in real life, A’s information about
B’s strategies and plans need not necessarily be perfect.
We now describe the general structure of the cooperation algorithm by
looking at the key procedures that constitute such an algorithm. The algo-
rithm is given in pseudo-Pascal notation, extended by a return statement.
In the procedures below, we assume the following variable type declarations
with the interpretations specified above: A, B: agents; G_A, G_B: goal-sets;
Ŝ_A, S^B_A: strategy-classes; gc: goal-combination.

Procedure Cooperation(A, B, G_A, G_B, Ŝ_A, S^B_A);
{Returns a maximal class of strategies Ŝ_A satisfying an optimal goal combination}
begin
  gc := Initialize(G_A, G_B);
  Ŝ_A := Cooperate(A, B, gc, S^B_A);
end;
The procedure Cooperation consists of two core functions: an initializa-
tion function which ranks the goals of the agents and a function Cooperate
that attempts to come up with an intentional state that satisfies as many
goals as possible given the priorities inherent in the ranking of goals. A key
feature is the way goals are compromised when all the given goals are not
simultaneously satisfiable.
Function Initialize(G_A, G_B) : goal combination;
{Returns the ranked set of goals gc}
begin
  gc := Rankgoals(G_A ∪ G_B, <_rank);
  return gc;
end;
The function Initialize first ranks the combined set of goals G_A ∪ G_B by
means of a ranking function Rankgoals and then returns the resulting goal
combination gc to the calling Cooperation procedure. Rankgoals ranks the
goals of A and B according to the priorities of agent A that form his unique
cooperation style. Formally, the ranking is given by the relation <_rank. We
deliberately do not specify the function Rankgoals here because, as explained
above, the actual ranking is influenced by the various parameters of agent
A that are mostly not deterministic. It is this agent-relative ranking relation
<_rank that ultimately affects the whole cooperation process.
Function Cooperate(A, B, gc, S^B_A) : set of strategies;
{Returns maximal satisfying set of strategies Ŝ_A}
begin
  if gc is empty
    then return fail;
  find a maximal Ŝ_A that Satisfies(Ŝ_A, gc, S^B_A);
  if Ŝ_A not found
    then Ŝ_A := Cooperate(A, B, Compromise(G_A ∪ G_B, gc), S^B_A);
  return Ŝ_A;
end;
The function Cooperate tries to find a set of strategies Ŝ_A that satisfies
as many of the goals of A and B as possible. If there is no such set of
strategies to be found, then it compromises and tries to cooperate with the
compromised set of goals.

The resulting set of strategies Ŝ_A is maximal. In other words, the procedure
finds a whole set of strategies each of which satisfies the goals in gc.
Thereby, minimal commitment is made as to which strategy will actually be
followed. The set of strategies Ŝ_A is, in effect, a minimally constrained
partial or general plan. Thus overcommitment is avoided. As a result, the
generated partially specified strategy can be combined with other constraints
on intentions to form new intentional states in the agent with a minimal
chance of conflict.

The agent A cooperates using his knowledge of B’s intentions, since the
general plan Ŝ_A satisfies the goal combination gc relative to A’s information
S^B_A about B’s intentions.
Function Compromise(G, gc) : goal combination;
{Returns the next best goal combination gc′}
begin
  Generate the next goal combination gc′ from G;
  gc := gc′;
  return gc;
end;
The compromise algorithm generates the next goal combination gc′ and
returns it. Given a linearly ordered compromise graph <gc_1, ..., gc_n> of
all goal combinations gc, then Compromise(G, gc_i) = gc_{i+1}. Thus, given an
ordered compromise graph of goal combinations, we can define a compromise
function, Compromise(G, gc), and, vice versa, given a compromise function
we can generate a compromise graph of goal combinations.
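The control flow of Cooperation, Initialize, Cooperate and Compromise can be
sketched in Python. The sketch is ours and assumes that a ranking key, a
compromise-graph generator and a routine for finding a maximal satisfying
strategy class are supplied from outside; it is a minimal sketch, not a
definitive implementation of the pseudo-Pascal above:

def cooperation(G_A, G_B, S_B_of_A, rank_key, compromise_graph, find_maximal):
    # Initialize: rank the combined goal set of both agents.
    gc = sorted(G_A | G_B, key=rank_key)
    return cooperate(gc, S_B_of_A, compromise_graph, find_maximal)

def cooperate(gc, S_B_of_A, compromise_graph, find_maximal):
    # Try gc itself first, then successively weaker goal combinations.
    for candidate in compromise_graph(gc):
        S_hat = find_maximal(candidate, S_B_of_A)   # maximal strategy class or None
        if S_hat is not None:
            return S_hat
    return None   # corresponds to 'fail' in the pseudo-Pascal

The recursion of the pseudo-Pascal Cooperate appears here as iteration over
the compromise graph, which is equivalent for a linearly ordered graph.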
3.2 Rank-Based Compromise
In our previous approach [Werner 90a], the cooperation procedures used a
compromise function based on the ranking of goals. The active agent A
initially tries to satisfy his maximal goal combination gc = {G_A ∪ G_B, <_rank}.
If this is not consistent with S^B_A, he then takes out the lowest ranked goal
(an individual goal at the outset) and sees if he gets a conflict-free goal set.
If not, the next lowest ranking goal is taken out, etc., until a conflict-free
goal set is found. In effect, all combinations are considered, the largest
first, and the procedure systematically tries to find the minimal reduction
of that set.
The linearly ordered compromise graph below illustrates the rank-based
compromise function for a completely self-interested agent who is secondarily
cooperative. Let G_A = {g^1_A, g^2_A} and G_B = {g^1_B, g^2_B}. Then the procedure
Cooperation(A, B, G_A, G_B, Ŝ_A, S^B_A) starts by ranking the goals with
the function Rankgoals(G_A ∪ G_B, <_rank). For the self-interested agent this
initial ranking will be gc = gc_1 = <g^1_A, g^2_A, g^1_B, g^2_B>. If the function
Cooperate(A, B, gc, S^B_A) does not find a maximal Ŝ_A that Satisfies(Ŝ_A, gc, S^B_A),
then Cooperate(A, B, Compromise(G, gc), S^B_A) is recursively called on the
compromised set of goals.

1. Initially gc = gc_1 = <g^1_A, g^2_A, g^1_B, g^2_B>
2. Compromise(G, gc_1) = <g^1_A, g^2_A, g^1_B>
3. Compromise(G, gc_2) = <g^1_A, g^2_A, g^2_B>
4. Compromise(G, gc_3) = <g^1_A, g^1_B, g^2_B>
5. Compromise(G, gc_4) = <g^2_A, g^1_B, g^2_B>
6. Compromise(G, gc_5) = <g^1_A, g^2_A>
7. Compromise(G, gc_6) = <g^1_A, g^1_B>
8. Compromise(G, gc_7) = <g^1_A, g^2_B>
9. Compromise(G, gc_8) = <g^2_A, g^1_B>
10. Compromise(G, gc_9) = <g^2_A, g^2_B>
11. Compromise(G, gc_10) = <g^1_B, g^2_B>
12. Compromise(G, gc_11) = <g^1_A>
13. Compromise(G, gc_12) = <g^2_A>
14. Compromise(G, gc_13) = <g^1_B>
15. Compromise(G, gc_14) = <g^2_B>
This simple rank-based compromise approach ignores lower ranked con-
flicting goals even if the sum of the ranks of those goals is greater than the
rank of the higher conflicting goal. A more subtle approach would take into
consideration both the rank and the value of the goals.
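For a small number of goals, the rank-based compromise graph shown above can
be enumerated directly: subsets of the ranked goal list are produced in
decreasing size, and within each size in lexicographic order of the ranks. The
following sketch is our own illustration and reproduces the fifteen
combinations listed in this section:

from itertools import combinations

def rank_based_compromise_graph(ranked_goals):
    # Yield goal combinations from the full ranked list down to single goals.
    n = len(ranked_goals)
    for size in range(n, 0, -1):                  # largest combinations first
        for idx in combinations(range(n), size):  # lexicographic on ranks
            yield [ranked_goals[i] for i in idx]

ranked = ["g1_A", "g2_A", "g1_B", "g2_B"]         # self-interested, secondarily cooperative A
for k, gc in enumerate(rank_based_compromise_graph(ranked), 1):
    print(k, gc)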
3.3 Value-Optimal Compromise
As seen above, a simple binary ranking relation is not sufficient to ensure
that a value-optimal compromise will be found. And, as we will show, rank-
based compromise is a special case of value-based compromise. Therefore,
we extend our model to include a value v^i_A that describes the value of goal
g^i_A in terms of priority, utility, desirability, or some kind of merit. Of
course, the v^i_A-values are totally relative to the cooperating agent A and
usually do not hold for B.
Few things in life are free. Instead, a certain amount of effort must be
invested to reach a goal. The effort can be measured in terms of energy (e.g.,
physical energy, money, or other resources) and/or in terms of time (e.g.,
patience). Hence, we introduce a notion of costs c^i_A associated with reaching
goal g^i_A. Typically, the resources available to an agent A are restricted to
some upper limit c_max that models the boundaries on A’s abilities.
With this notation at hand, we are now able to formulate the process
of compromising as performed by the Compromise function in terms of the
classical zero-one knapsack problem. This problem might be stated as follows:
“Suppose that a knapsack has to be filled with different objects of profits v_i
and weights c_i without exceeding a prescribed total weight c_max. The problem
is to find a feasible assignment of the objects so that the total value of the
objects in the knapsack is maximized.”

In our case, the goal combination gc corresponds to the final contents
of the knapsack, and each single goal corresponds to an object for which it
must be decided whether or not to put it in the knapsack. Every single goal
(= object) has an associated value v_i and a cost c_i (the weight). The task is
to find a goal combination gc of maximum total value that does not exceed
the given cost restriction c_max:

    maximize    ∑_{j=1}^{r} v^j_A x_j + ∑_{k=1}^{s} v^k_B x_{r+k}

    subject to  ∑_{j=1}^{r} c^j_A x_j + ∑_{k=1}^{s} c^k_B x_{r+k} ≤ c_max

    with x_i ∈ {0,1}, i = 1, 2, ..., r+s.
The resulting vector x_i, which is a concatenation of both agents’ goals
x^j_A and x^k_B, represents the optimal goal combination gc. It contains a 1 for
those goals that are to be included in the optimal goal combination and a 0
for the goals that are not feasible due to the cost restriction c_max. The goal
combination gc gives the best payoff, i.e., the maximal overall value subject
to the cost restriction c_max.
Let n = r + s, i.e., the total number of goals of both agents is n. We can
then drop the agent index from the above optimization equations and form
a single objective function

    ∑_{i=1}^{n} v_i x_i

which is subject to the constraint

    ∑_{i=1}^{n} c_i x_i ≤ c_max

with x_i ∈ {0,1}, i = 1, 2, ..., n.
Despite its simplicity, the knapsack problem is a member of the so-called
NP-hard problems. The solution space of an n-object knapsack problem consists
of 2^n vectors of 0’s and 1’s, which correspond to the 2^n different ways
of assigning 0’s and 1’s to the problem variables. A variety of techniques
has been proposed for solving the knapsack problem. They include enumeration
techniques, the network approach and dynamic programming (see, e.g.,
[Horowitz & Sahni 78]).
The following branch-and-bound procedure is based on simple backtrack-
ing. It solves the assignment problem by first trying to include the goal in the
goal combination and then by excluding it. The inclusion/exclusion of goals
is systematically tried out for all g_i with i = 1, 2, ..., n. In the algorithm,
the current state of the search process is given by two variables: the cost
variable represents the total cost of the currently investigated partial goal
combination and the value variable represents the maximum value that can
still be achieved. At the beginning, cost is initialized to 0 and value is set to
the sum over all goal values ∑_{i=1}^{n} v_i.
Procedure Branch-and-Bound(i, cost, value);
{Returns the cost-optimal goal combination in the global variable best_gc}
begin
  {Try to include goal g_i in the current goal combination gc}
  if cost + c_i ≤ c_max then begin
    gc := gc + {g_i};
    if i < n then
      Branch-and-Bound(i + 1, cost + c_i, value)   {Try next goal}
    else
      if value > best_value then begin
        best_value := value;
        best_gc := gc;   {Save new goal combination}
      end;
    gc := gc − {g_i};   {Restore gc before trying the exclusion of g_i}
  end;
  {Now try without goal g_i}
  if value − v_i > best_value then begin
    if i < n then
      Branch-and-Bound(i + 1, cost, value − v_i)   {Try next goal}
    else begin
      best_value := value − v_i;
      best_gc := gc;   {Save new goal combination}
    end;
  end;
end;
Branch-and-Bound returns the optimal goal combination in the global
variable best_gc. The total value of the selected goals is given in the global
variable best_value. Before invoking Branch-and-Bound, best_value must be set
to 0, and the two variables gc and best_gc, which contain the temporary goal
combination and the best goal combination, respectively, must be initialized
to the empty set.
Function Compromise(G, gc) : goal combination;
begin
  gc := {};  best_gc := {};   {Initialize goal combination sets}
  best_value := 0;
  Branch-and-Bound(1, 0, ∑_{i=1}^{n} v_i);
  gc := best_gc;
  c_max := ∑_{i=1}^{n} c_i x_i − 1;   {Cost restriction of the next iteration is the current cost minus 1}
  return gc;
end;
Compromise calls the Branch-and-Bound procedure with the parameters
i = 1, cost = 0 and value = ∑_{i=1}^{n} v_i. The returned goal combination gc
will be checked as to whether it Satisfies(Ŝ_A, gc, S^B_A). If not, the least
valued goal must either be relaxed or dropped so that Compromise produces a
new (weaker) goal combination in the next iteration. However, rather than
“manually” dropping the least valued goal from G, we simply reduce the cost
restriction c_max to the cost of the goal combination found in the last
iteration minus 1, i.e., to ∑_{i=1}^{n} c_i x_i − 1. The cost restriction is
systematically lowered until a conflict-free goal set is found. In the end a
set of strategies Ŝ_A will be determined that is both minimally constrained
and value-optimal.
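As a cross-check of the procedure above, the same value-optimal compromise
can be sketched in Python. The recursion mirrors Branch-and-Bound, and
lowering c_max between iterations plays the role of the Compromise function;
the variable names and the satisfiable test are assumptions of this sketch,
not the paper’s notation:

def branch_and_bound(values, costs, c_max):
    # Return (best_value, best_selection) for the 0/1 knapsack over the goals.
    n = len(values)
    best = {"value": 0, "sel": []}

    def search(i, cost, value, sel):
        if i == n:
            if value > best["value"]:
                best["value"], best["sel"] = value, sel[:]
            return
        if cost + costs[i] <= c_max:              # try to include goal i
            sel.append(i)
            search(i + 1, cost + costs[i], value, sel)
            sel.pop()                             # undo before trying the exclusion
        if value - values[i] > best["value"]:     # bound test: exclude goal i
            search(i + 1, cost, value - values[i], sel)

    search(0, 0, sum(values), [])
    return best["value"], best["sel"]

def value_optimal_compromise(values, costs, c_max, satisfiable):
    # Lower c_max until the selected goal combination passes the Satisfies test.
    while c_max >= 0:
        _, sel = branch_and_bound(values, costs, c_max)
        if satisfiable(sel):
            return sel
        c_max = sum(costs[i] for i in sel) - 1    # next iteration: current cost minus 1
    return []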
3.4 Constraints on Goal Combinations
Cooperation has been described as an iterative process of searching and test-
ing: First, Branch-and-Bound searches for a value-optimal goal combination
gc. Then, Satisfies(Ŝ_A, gc, S^B_A) is called to determine a maximal set of
strategies Ŝ_A that satisfies as many goals as possible. If no such strategy
could be found, a compromise will be considered. In effect, Branch-and-Bound
will be invoked again to search for the next weaker goal combination. The
whole process iterates until a maximal set of strategies Ŝ_A is known.
In practice, various constraints and dependencies among goal combina-
tions must be taken into consideration. As a consequence, a feasible solution
will only be found after a number of search-and-test iterations. In many
cases, some of the constraints are known beforehand. This a priori knowl-
edge can be incorporated into the Compromise function to speed up the
search process. We distinguish two types of constraints:
Goal Dependencies: A goal g_1 depends on another goal g_2 if g_1 can only
    be satisfied in combination with g_2. In Boolean algebra, such a goal
    dependency is denoted by an implication g_1 → g_2. This proposition is
    equivalent to ¬g_1 ∨ g_2. Note that for multiple-staged dependencies (i.e.,
    chains of dependencies) the dependency relation is transitive.

Mutual Exclusion: A mutual exclusion of two goals g_1 and g_2 is given
    when only one of the goals can be satisfied, but not both of them.
    The mutual exclusion is denoted by the exclusive or g_1 ⊕ g_2, which is
    equivalent to (g_1 ∧ ¬g_2) ∨ (¬g_1 ∧ g_2). Mutual exclusion of an arbitrary
    number of goals can be expressed similarly.
For our purpose, it is necessary to derive quantitative equations that can
be used in the compromising function. This is accomplished by using Boolean
variables together with algebraic operators [Dantzig 63]. For example, a
complete dependency of two goals g_1 → g_2 can be expressed by the restriction

    (1 − x_1) + x_2 ≥ 1

where x denotes the vector of the optimal goal combination. It contains
x_i = 1 for those goals g_i that are to be included in gc and a 0 for the goals
that are not in gc.

Similarly, the mutual exclusion g_1 ⊕ g_2 of two goals g_1 and g_2 is transformed
to the expression

    x_1 + x_2 ≤ 1.

Known constraints on feasible goal combinations can now be modeled in
terms of a zero-one integer programming formulation. By substituting one of
the standard solution methods for zero-one integer programming problems
(e.g., the Balas zero-one additive algorithm [Balas 65]) for the Branch-and-
Bound procedure in the above Cooperate routine, we can avoid generating a
large number of infeasible goal combinations.
As an example, consider a scenario with two cooperating agents A and B
having the goal sets G_A = {g^1_A, g^2_A, g^3_A} and G_B = {g^1_B, g^2_B, g^3_B}. Let’s
assume the existence of the following constraints:

1. Agent A can satisfy at most two of his three goals (e.g., due to time
   restrictions). Expressed in Boolean algebra we get the proposition
   ¬(g^1_A ∧ g^2_A ∧ g^3_A).

2. Agent A knows that he can satisfy his own goal g^3_A only if the opponent’s
   goal g^3_B is satisfied, i.e., g^3_A → g^3_B. Hence, some cooperation is
   required to satisfy g^3_A.

3. Only one of the two goals g^1_A and g^2_B can be satisfied. In other words,
   there is a mutual exclusion g^1_A ⊕ g^2_B.

4. Agent A knows that his opponent B can only satisfy either g^1_B or g^2_B,
   that is, g^1_B ⊕ g^2_B.

Let the vector x_i with i = 1, ..., 6 be a linearly ordered list of both agents’
goal sets G_A ∪ G_B = {g^1_A, g^2_A, g^3_A, g^1_B, g^2_B, g^3_B}. We can then convert
the above constraints into quantitative equations:

    x_1 + x_2 + x_3 ≤ 2
    (1 − x_3) + x_6 ≥ 1
    x_1 + x_5 ≤ 1
    x_4 + x_5 ≤ 1                    (1)
We are interested in obtaining a goal combination gc with a maximal overall
value. Hence, the objective function might be stated as follows:

    maximize    ∑_{i=1}^{6} v_i x_i

subject to the goal-constraints of equation (1) and the cost restriction

    ∑_{i=1}^{6} c_i x_i ≤ c_max

with x_i ∈ {0,1}, i = 1, 2, ..., 6.

This zero-one integer programming model can be solved by one of the
standard library packages (e.g., XMP [Marsten 87]). As in the branch-and-
bound formulation, the unique cooperation style is formed by the values
v_i, the costs c_i and the maximal cost restriction c_max. The main advantage
of integer programming lies in the fact that only feasible goal combinations
are generated, which greatly reduces the number of iterations required for
generating and testing goal combinations.
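For the six-goal example, the feasible assignments can be checked by brute
force over the 2^6 = 64 candidate vectors; a production system would instead
call a zero-one integer programming solver such as the Balas algorithm or a
package like XMP mentioned above. The values, costs and c_max below are
made-up numbers for illustration only:

from itertools import product

values = [5, 4, 3, 4, 2, 1]   # assumed values for g1_A, g2_A, g3_A, g1_B, g2_B, g3_B
costs  = [2, 2, 3, 1, 1, 2]   # assumed costs
c_max  = 7                    # assumed cost restriction

def feasible(x):
    # The goal constraints of equation (1) plus the cost restriction.
    return (x[0] + x[1] + x[2] <= 2 and
            (1 - x[2]) + x[5] >= 1 and
            x[0] + x[4] <= 1 and
            x[3] + x[4] <= 1 and
            sum(c * xi for c, xi in zip(costs, x)) <= c_max)

best = max((x for x in product((0, 1), repeat=6) if feasible(x)),
           key=lambda x: sum(v * xi for v, xi in zip(values, x)))
print(best, sum(v * xi for v, xi in zip(values, best)))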
3.5 Rank and Value
In distinguishing rank and value we free the algorithm to exhibit a wider
range of cooperative behavior. In the rank-based compromise function the
ranking of goals was the key in determining the cooperation style of an agent.
In contrast, the cost-optimal compromise function depends on the values
and costs assigned to the goals. The compromise function returns the next
optimal goal subset. The evaluations the agent Aassigns to his goals GAand
to the goals of others GBgenerate a ranking of those goals. It is this value
structure, with its implicit value-based ranking of goals, that determines the
cooperation style of an agent. Given a set of goals G=GA∩GBand an
evaluation Vof those goals, Vinduces an ranking of those goals. We can
Cite as: Werner, E. and Reinefeld, A., ’Distributed Algorithms for Cooperating Agents’, DAI-90,
Proceedings of the 10th International Workshop on Distributed Artificial Intelligence, Bandera, Texas,
1990.
Werner & Reinefeld, Distributed Algorithms for Cooperating Agents 16
make this a linear ranking by simply taking one of two equally valued goals
before the other.
Given any ranking of goals, we can always model the resulting rank-based
compromise function by giving the goals appropriate values. In particular, let
g(i) be the goal with rank i; then, if we let V(g(i)) > ∑_{k=1}^{i−1} V(g(k)), the
value-based compromise function will generate the same compromise graph
as the rank-based compromise function. Therefore, rank-based compromise
is a special case of value-based compromise.
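One way to see this reduction concretely is a dominant value encoding: give
the most important goal the largest value and let every goal be worth more
than all lower-ranked goals together (powers of two do this). Comparing goal
combinations by total value then agrees with comparing them by the
highest-ranked goal on which they differ. The encoding below is our own
illustration of this observation:

def dominant_values(ranked_goals):
    # ranked_goals[0] is the most important goal; each value exceeds the
    # sum of the values of all lower-ranked goals.
    n = len(ranked_goals)
    return {g: 2 ** (n - 1 - i) for i, g in enumerate(ranked_goals)}

V = dominant_values(["g1_A", "g2_A", "g1_B", "g2_B"])   # 8, 4, 2, 1
total = lambda gc: sum(V[g] for g in gc)

# A combination containing the top-ranked goal beats any combination without it:
print(total(["g1_A"]) > total(["g2_A", "g1_B", "g2_B"]))   # True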
4 A Spectrum of Cooperation Styles
Various styles of cooperation come about through the unique characteristics
of the compromise function. How much an agent is willing to give up and
how much an agent is willing to take on in terms of other agents’ goals gives
that agent an individual style of interaction. This degree of cooperativity
depends on the values the agent assigns to his and other agents’ goals. The
cooperation styles and their relationships are schematically illustrated in the
picture (Fig. 1 below).
4.1 Dimensions of Social Interaction
The fundamental social dichotomy of self and other corresponds to two basic
dimensions of social interaction. These dimensions form the latitudinal and
longitudinal axes depicted in Fig. 1. Given an active agent A who is inter-
acting with another agent B, A can further his own goals or he can further
the goals of the other agent B. Each major dimension is composed of two
opposing axes representing two polar opposite social interaction styles.
Thus, one of the dimensions focuses on the other agent, and consists of
the cooperation ⇌ antagonism polarity. One axis is the degree of cooperation
the agent A has for achieving the other agent’s goals g_B. This degree of
cooperation shades off into indifference and then into its opposing pole, the
degree of antagonism. The antagonistic agent A tries to achieve the opposite
of what the other agent B wants, namely, the negation ¬g_B of B’s goals.

The other fundamental dimension is concerned with the self, and consists
of the self-interest ⇌ self-destruction polarity. The degree of self-interest of
the agent A indicates the extent to which the agent focuses on achieving
his own goals g_A over the goals of other agents. This degree of self-interest
shades into indifference and then into its polar opposite, the agent’s degree of
self-destruction. The degree of self-destruction of an agent A describes the
extent to which the agent works against himself.

Figure 1: The Spectrum of Cooperation Styles. All possible styles of cooperative
and uncooperative interaction between agents.
Each of the four major axes in Fig. 1 represents a basic attitude to social
interaction. They generate four quadrants of complementary social inter-
action attitudes. These quadrants are again divided by the broken lines of
relative indifference to produce two octants per quadrant. For example, in
the quadrant between the complementary attitudes of self-interest and co-
operation are the two octants of primary self-interest/secondary cooperation
and primary cooperation/secondary self-interest. Each of these octants to-
gether with each of the main axes represents a basic type of social interaction
style.
Summing up, an agent can have four fundamental social attitudes towards
himself and others. He can further his own goal g_A or another’s goal g_B or the
negation ¬g_A of his own goals or the negation ¬g_B of another agent’s goals.
These attitudes can merge in all combinations and degrees of strength. If,
for example, as the result of the evaluation function g_A is ranked higher than
g_B, then this is shown as g_A ≻ g_B. We have included only two goals in the
picture, but the algorithms are completely general and apply to any finite
set of goals.
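Fig. 1 can also be read operationally: an agent’s style is classified by the
relative weight he gives to the four basic attitudes, that is, to furthering
g_A, g_B, ¬g_A and ¬g_B. The labels and thresholds in the following sketch
are our own illustrative choices and are not definitions from the paper:

def cooperation_style(w_self, w_other, w_neg_self, w_neg_other):
    # Classify a style from four non-negative weights the agent places on
    # furthering g_A, g_B, not-g_A and not-g_B (illustrative heuristic only;
    # equal weights are resolved arbitrarily).
    axes = {"self-interest": w_self, "cooperation": w_other,
            "self-destruction": w_neg_self, "antagonism": w_neg_other}
    if all(w == 0 for w in axes.values()):
        return "complete indifference"
    primary, secondary = sorted(axes, key=axes.get, reverse=True)[:2]
    if axes[secondary] == 0:
        return "pure " + primary
    return "primary " + primary + " with secondary " + secondary

print(cooperation_style(3, 1, 0, 0))   # primary self-interest with secondary cooperation
print(cooperation_style(0, 2, 0, 0))   # pure cooperation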
4.2 Indifference
Notice that at the center of the axes, the origin, is complete indifference to
all goal combinations. Here the agent views all goals, his own and others’,
positive or negative, as having no value or distinction. An agent at the center
of the axes does nothing and, in effect, just lets things happen. The dotted
diagonal lines represent relative indifference. For example, an agent on the
dotted line between primary cooperation and primary self-interest values
both cooperation and self-interest equally and thus is torn between his own
interests and those of others. His response would not be predictable from
the evaluations and would be based on an accidental arrangement of goals.
We now describe some of the major types of cooperative and non-
cooperative styles of agents and their interactions.
5 Cooperation and Self-Interest
We have seen how evaluations can generate goal rankings. While value-based
compromise guarantees a value-optimal goal combination as a compromise,
rank-based compromise is perhaps intuitively easier to grasp. We will, there-
fore, emphasize rank-based compromise when we illustrate the different social
interaction styles that agents can participate in.
5.1 Total Cooperation
The totally cooperative agent A considers only the goals of the other agent.
In terms of value, this means that his own goals are given no value at all. He
is thus only interested in achieving the goals of the other agent disregarding
his own interests. In terms of ranking this means that he will not even rank
his own goals. This agent is thus totally selfless.
5.2 Primary Cooperation with Secondary Self-Interest
Unlike the totally cooperative agent, an agent with this cooperation style
will consider his own goals, but only secondarily. He will always value the
goals of the other agent higher than his own goals. In terms of ranking, he
will rank the goals of the other agent before his own goals. He will thus
compromise and even give up his own goals before those of the other agent.
The compromise graph produced by a rank-based compromise function was
shown in Section 3.2 on rank-based compromise above.

Looking at the way this agent always gives up his goals in favor of the
others, we might expect that this agent will suffer from “the depressed
housewife syndrome”.
5.3 Primary Self-Interest with Secondary Cooperation
An agent with this social interaction style will always put his own goals first.
He is cooperative, but only secondarily when this does not conflict with the
achievement of his own goals. Interestingly, his compromise graph is identical
to that of the totally cooperative agent with secondary self-interest. Both
try to achieve A’s goals first. Thus, these two types of agents complement
each other. We will say they form a complementary social dyad.
A domineering husband and a submissive housewife illustrate such a func-
tioning social dyad. Another example is a private soldier and a sergeant in
the army. A master-slave relationship results when a primary self-interested
and a primary cooperative agent interact. As the sergeant-private example
shows, however, the master-slave social dyad may be relative. Relative
to their relationship, the sergeant’s goals outweigh the private’s, yet the
sergeant’s goals may themselves be slaved to a higher authority. Thus the
term “self-interested” is relative to a dyadic social relationship. In triadic
or multi-agent social structures an agent may be both very cooperative and
very domineering in different social dyads.
5.4 Primary Self-Interest
Whereas the selfless agent is obsessed with the other, the unqualified primary
self-interested agent is obsessed with himself. He totally disregards the goals
of the other agent even when he could help that agent without jeopardizing his
own interests.
Note, however, that this does not mean the agent disregards the inten-
tions of other agents. The agent may utilize his knowledge of the intentions
of another agent to enhance his own abilities and thereby further his own
goals. In [Werner 90b] we present a logic and semantics of group ability
that describes a single agent’s, as well as a group’s, ability relative to the
intentions of another group of agents.
6 Antagonism and Self-Interest
In an antagonistic relationship each agent tries to prevent the other agent
from achieving his goals. Such a relationship is one extreme of the many
possible antagonistic relationships. It may be, for example, that only one of
the agents is antagonistic and the other is cooperative.
6.1 Primary Self-Interest with Secondary Antagonism
The primary self-interested agent who is secondarily antagonistic to another
agent will first attempt to achieve his own goals, and only if it is possible
will he try to prevent the other agent from achieving his goals. A realistic
variant is the mixed case, where a primarily self-interested agent will be
cooperative or indifferent with regard to most of the other agent’s goals, but
be antagonistic to only some of the other agent’s goals.
An agent can be competitive as a result of following his self-interest with-
out being antagonistic. A self-interested agent will be accidentally antagonis-
tic to a goal of the other agent if that goal conflicts with his interests. Here,
however, we have a different case, that of essential antagonism, where the
antagonistic agent will oppose a goal even if this goal does not hurt his own
goals. While on the face of it this may appear to be irrational, it may make
sense in a larger social context where, for example, we distinguish personal
goals and goals induced by a hierarchical relationship. Then an agent A may
be antagonistic to one of B’s goals not because this conflicts with A’s own
goals, but because this is required by a goal induced by a social relationship
to a third agent C.
6.2 Primary Antagonism with Secondary Self-Interest
We now enter a realm that from the perspective of a dyadic social relationship
is irrational. Here the agent A attempts to prevent B from achieving his goals
even if this hurts all of his own goals. His own goals are considered, but
only secondarily. An extreme, but unfortunately all too common, example
is murder. A jealous husband murders his wife knowing he will face life
imprisonment. Less extreme is a wife who seeks revenge against her husband
by making a divorce especially difficult, even if this means extra costs in terms
of time, stress and money for her. Another example is the soldier who offers
his life to do harm to the enemy’s goals. The latter is an example that
from the broader social perspective may seem rational, but is irrational in
a dyadic perspective, for then it is akin to the case of a husband murdering
his wife; personally the soldier gains nothing for his harming of the enemy.
Clearly, one can begin to see that rationality is relative to the perspective of
social relationships the agent has in view.
7 Self-Destructive Agents
The cooperative and antagonistic agents are focused on the other, either
positively or negatively. In contrast, the self-destructive agent, like the self-
interested agent, is focused on himself. The focus is negative. While an
extremely cooperative agent even with secondary self-interest may harm his
own interests if that is the only way to help the other agent achieve his goals,
the self-destructive agent has the primary or secondary interest of destroying
his own goals. It is thus hard to find any sort of sense in the attitudes of
self-destructive agents. Unfortunately, examples of self-destruction abound
in humankind, with suicide being the extreme example.
Again, the only sort of rationality that may account for self-destructive
behavior is a larger social context of agent relationships that demand such
sacrifice for social goals. An example might be the Japanese pilots or the
culture of suicide depicted in Yukio Mishima’s novel ’Runaway Horses’. But
here the agents are extreme examples of a cooperating agent who sacrifices
his own goals to achieve the goals of the other agents. The logically possible
concept of self-destructivity generated by our algorithms would have the
agent’s goal be to not achieve his goals. The primarily self-destructive agent
attempts to destroy his goals irrespective of other agents.
7.1 Self-Destruction and Antagonism
The primarily antagonistic agent who is secondarily destructive will attempt
to thwart the other agent above his own self-destructive tendencies. An exam-
ple might be a person whose main motive is revenge, but who is secondarily
suicidal.
The primarily self-destructive agent with secondary antagonism will have
the main interest of preventing his own goals but will try to prevent those
of the other agent if this does not prevent him from hindering his own goals.
An example of such an agent may be an agent who is primarily suicidal but
will try to take someone with him if this does not prevent him from killing
himself.
7.2 Self-Destruction and Cooperation
An agent that is primarily self-destructive, but secondarily cooperative we
might call “the saint”. The saint has the primary motivation to negate his
worldly goals, but has the secondary goal of helping others.
The primarily cooperative agent who is secondarily self-destructive will
attempt to help the other agent even if this means not hindering his own
goals. We can imagine the man on the 10th floor about to jump, but decides
not to jump in order to help a woman who is also about to jump.
7.3 Mixtures of Social Interaction Attitudes
The agent’s evaluation of goals can result in any mixtures of the basic at-
titudes. In real situations this is generally the case. An agent may be co-
operative for some goals, indifferent to others, self-interested or even self-
destructive with other goals. A realistic agent will have a core of basic needs
that he will not compromise in most social situations. For some goal types,
the agent’s attitude may be agent-specific and varying relative to the other
agent.
Thus, between the extremes of pure cooperation and complete selfishness
there is every possible mixture of cooperativity and self-interest. The de-
gree of cooperativity depends on the ranking function and the compromise
function. How the goals are ranked and how the goal combinations are then
generated determines the extent to which the given agent is cooperative or
not. For more details on mixtures with various rankings see [Werner 90a].
8 Relaxation and Alternatives to Resolve
Conflicts
Beyond simply dropping a goal when it conflicts with higher priority goals,
one might relax the goal or one might replace the goal with an alternative.
Relaxation involves the abstraction of some of the features of the goal making
it less specific and, thereby, more easily satisfiable. For example, the goal
g_2 of going down Oak Street may conflict with the higher priority goal g_1
of seeing Mary. If we relax the goal of walking down Oak Street to that of
going to the store, relax(g_2), then the relaxed goal may be satisfiable if, for
instance, Mary lives on Elm Street, which is also on the way to the store.
8.1 A Relaxation Hierarchy
One approach for the generation of alternatives is to postulate a relaxation
hierarchy where more abstract goals are higher up on the relaxation tree.
Going back down the tree by a different branch yields an alternative goal
of the same type. Another approach to generating alternative goals is to
associate a set of alternatives directly with a given goal instead of indirectly
through a relaxation hierarchy.
Clearly, we could extend the above compromise functions to include re-
laxation and the generation of alternatives. The compromise function would
first go through various relaxation steps and possible alternative generation
steps before finally dropping a goal. We now get more subtle forms of coop-
eration and selfishness depending on the willingness of an agent to relax his
goals or come up with alternative goals.
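A relaxation hierarchy can be represented as a simple parent map from goals
to more abstract goals; going up one level relaxes a goal, and the other
children of that relaxed goal serve as alternatives of the same type. The
goals and the hierarchy below are illustrative assumptions, not taken from
the paper:

parent = {
    "walk down Oak Street": "go to the store",
    "walk down Elm Street": "go to the store",
    "go to the store": "run errands",
}
children = {}
for child, par in parent.items():
    children.setdefault(par, []).append(child)

def relax(goal):
    # One relaxation step: replace a goal by its more abstract parent, if any.
    return parent.get(goal)

def alternatives(goal):
    # Alternative goals of the same type: other children of the relaxed goal.
    par = relax(goal)
    return [g for g in children.get(par, []) if g != goal]

print(relax("walk down Oak Street"))          # go to the store
print(alternatives("walk down Oak Street"))   # ['walk down Elm Street']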
8.2 Selfish Relaxation
Another style of cooperation results from agents that use relaxation and
alternatives for themselves more than for others, or vice versa. For example,
consider the following partial compromise graph:
1. Initially gc = gc_1 = <g^1_A, g^2_A, g^1_B, g^2_B>
2. Compromise(gc_1) = <g^1_A, g^2_A, g^1_B>
3. Compromise(gc_2) = <g^1_A, g^2_A, g^2_B>
4. Compromise(gc_3) = <g^1_A, g^2_A>
5. Compromise(gc_4) = <g^1_A, relax(g^2_A), g^1_B, g^2_B>
6. Compromise(gc_5) = <g^1_A, relax(g^2_A), g^1_B>
7. Compromise(gc_6) = <g^1_A, relax(g^2_A), g^2_B>
8. Compromise(gc_7) = <g^1_A, relax(g^2_A)>
9. Compromise(gc_8) = <relax(g^1_A), g^2_A, g^1_B, g^2_B>
...
In this case the selfish agent A relaxes his own goals, but not those of B.
If B is the active agent, then B is even more cooperative than before. He
not only ranks the other agent’s goals ahead of his own, he further attempts
to satisfy those goals by relaxation while not relaxing his own goals. This is
a truly accommodating agent.
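The partial compromise graph above can be produced by a compromise generator
that, for each version of A’s goals (the original goals first, then versions
with single goals of A relaxed), drops B’s goals from the largest subset down
to none. The sketch below is our own reconstruction of this behavior, and
the continuation of the graph beyond the entries shown is only a guess:

from itertools import combinations

def selfish_relaxation_graph(goals_A, goals_B, relax):
    # Versions of A's goals: original first, then with A's lowest-ranked
    # goal relaxed, then the next, and so on.
    versions = [list(goals_A)]
    for i in range(len(goals_A) - 1, -1, -1):
        v = list(goals_A)
        v[i] = relax(v[i])
        versions.append(v)
    for version in versions:
        for size in range(len(goals_B), -1, -1):       # B's goals: largest subset first
            for subset in combinations(goals_B, size):
                yield version + list(subset)

relax = lambda g: "relax(" + g + ")"
for k, gc in enumerate(selfish_relaxation_graph(["g1_A", "g2_A"], ["g1_B", "g2_B"], relax), 1):
    print(k, gc)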
9 Interactions Among Cooperation Styles
When two agents A and B with cooperation styles C_A and C_B interact,
the goals that are achieved will depend on those styles. A selfish agent
A interacting with a totally cooperative agent B will result in each agent
satisfying the same goal combination as the other agent.
Two selfish agents will each attempt to satisfy their own goals first. If
their goals are not in conflict, then both will satisfy their goals. If their goal
sets conflict, then one or the other or both will not be able to satisfy all of
their goals.
Of special interest is the fact that two totally cooperative agents will
have an interaction result that is equivalent to the interaction result of two
completely selfish agents. For the cooperative agents will attempt to satisfy
the other agents’ goals as if they were their own goals. In effect, the agents
will be competing as noncooperative agents except that they have switched
roles. One way to resolve this sort of overcooperativity is to have the agents
recognize their cooperation styles and then form a new compromise on that
basis.
Consider a typical example: Two men are standing in a doorway and one
says to the other “Please, after you”. The other responds with “No, please
after you”. The first says “Oh no, no, after you”, and so on.
10 Limitations and Extensions
10.1 Knowledge of Cooperation Styles
When persons get to know one another they also get to know the cooperation
styles of the other. They come to know in what contexts and with whom a
given person is cooperative, noncooperative, indifferent or competitive. Such
knowledge of cooperation style may be placed in the representation of that
agent’s intentional state. The intentional state guides the agent’s actions and
in this case is decomposable into distinct cooperation styles. Which subin-
tentional state (cooperation style) is active will depend on complex factors
that include the social relations and other goals of the given agent. Looked
at dynamically, given a new set of goals G_A and G_B, the cooperation style
generates a new intentional state S_B in the cooperating agent B. Whether we
place the cooperation algorithms within the intentional state of the agent or
view them as meta-intentions that operate on simpler intentional states is a
methodological and theoretical issue which may depend on other theoretical
or practical system design constraints.
Whichever approach we choose, the agent must have a way of representing
partial knowledge of cooperation styles. Given that that is the case, an agent
B will be better off knowing that agent A is noncooperative than not knowing
the cooperation style of the agent at all. We can then also investigate the
dynamics of uncertainty and information about cooperation styles.
10.2 Multi-Rankings
In this paper we considered only the ranking that the active agent gives to
the goals. We assumed that B’s ranking of A’s goals corresponds to the
ranking made by the other agent A. A possible extension is to allow an
agent a partial representation of another agent’s ranking function. A further
measure of cooperativity would then include the extent to which the active
agent is willing to assume the same ordering of goals.
10.3 Metaknowledge About Intentions
The knowledge that one agent has about the intentions of another can sig-
nificantly affect the intentions of both agents to the degree that there is
metaknowledge about that knowledge. In our algorithms we have implicitly
assumed that during the construction of the intentions Ŝ_B the intentions of
the other agent are not affected either by metaknowledge or knowledge about
intentions. Thus, we assumed that the process of finding a partial strategy
Ŝ which forms new intentions in the active agent B does not affect the inten-
tions of the other agent A. For the agent B considers only his knowledge S^A_B
and not what happens to the intentions S_A (and, hence, to S^A_B) when agent
A attains knowledge about B’s newly formed intentions Ŝ_B.
10.4 Communication, Roles and Social Context
Closely related to and a way out of the problem about metaknowledge and
the resulting unwanted interactions between the intentional states of agents
is communication. Not considered in this paper is the communication nec-
essary to avoid duplication of effort, to avoid the undoing of actions and
to ensure the necessary coordination. For more details see [Werner 88a,b;
89b]. Implicitly, communication is part of the strategies in Ŝ generated by
the procedure Cooperate. However, the cooperation algorithm as it stands
assumes the intentions of the other are constant. Therefore, communication
and social roles (contractual agreements as to who does what) are necessary
to avoid overcooperation between agents. Communication and cooperation
algorithms need to be integrated.
Also not considered in this paper is the social context within which the
agents find themselves. The assumption of a cooperative style by an agent
may well depend on the power relations in which the agent finds himself.
Thus the preconditions to cooperation may well need to consider the social
roles of agents [Werner 88c, 89b].
11 Conclusion
We have described a whole range of types of cooperative and noncooper-
ative behavior by way of describing the algorithms that govern the coop-
erative style of an agent. These algorithms transform the cognitive states
of agents, in particular, the intentions of such agents, to generate cooper-
ative or non-cooperative social interactions. The paper extends rank-based
compromise [Werner 90a], by considering two alternative compromise meth-
ods: value-optimal compromise using the Branch-and-Bound procedure, and
a more efficient value-optimal compromise using a zero-one integer program-
ming method which exploits known goal dependencies to prevent the genera-
tion of infeasible compromises. Although we have not considered multi-agent
communication here, we expect that cooperation algorithms are influential
in the cooperative or non-cooperative communication styles of an agent as
well. We expect the spectrum of cooperation styles generated by cooperation
algorithms will be of interest to areas beyond computer science.
References
[Balas 65] Balas, E., “An additive algorithm for solving linear programs with zero-
one variables”, Operations Research 13 (1965), 517-545.
[Barwise & Perry 83] Barwise, J., and Perry, J., Situations and Attitudes, Brad-
ford Books/MIT Press, 1983.
[Dantzig 63] Dantzig, G., Linear Programming and Extensions, Princeton Univ.
Press, 1963.
[Durfee & Lesser 87] Durfee, E.H., and Lesser, V.R., ”Using Partial Global Plans
to Coordinate Distributed Problem Solvers”, Proc. of the Tenth International
Joint Conference on Artificial Intelligence, pp. 875-883, 1987.
[Gasser & Rouquette 89] Gasser, L., and Rouquette, N., ”Representing and Using
Organizational Knowledge in Distributed AI Systems”, Distributed Artificial
Intelligence, Vol. 2, M. Huhns & L. Gasser (eds.), Morgan Kaufmann and
Pitman Publishers, London, pp. 55-78, 1989.
[Horowitz & Sahni 78] Horowitz, E., Sahni, S., Fundamentals of Computer Algo-
rithms, Computer Science Press, Maryland 1978.
[Marsten 87] Marsten, R., ZOOM/XMP User’s Manual. Release 4.0, XMP Opti-
mization Software Company, Tucson, Ariz. 1987.
[Milner 80] Milner, R., A Calculus of Communicating Systems, Springer-Verlag,
New York 1980.
[Reinefeld 89] Reinefeld, A., Spielbaum-Suchverfahren, Springer-Verlag, Berlin
1989.
[Rosenschein 86] Rosenschein, Jeffrey S., ”Rational Interaction: Cooperation
Among Intelligent Agents,” Ph.D. Thesis, Stanford University, 1986.
[Werner 88] Werner, E., ”Toward a Theory of Communication and Cooperation
for Multiagent Planning”, Theoretical Aspects of Reasoning About Knowledge:
Proceedings of the 2nd Conference, Morgan Kaufman Publishers, pp. 129-142,
1988.
[Werner 89] Werner, E., ”Cooperating Agents: A Unified Theory of Communi-
cation and Social Structure”, Distributed Artificial Intelligence, Vol. 2, M.
Huhns & L. Gasser (eds.), Morgan Kaufmann and Pitman Publishers, Lon-
don, pp. 3-36, 1989.
[Werner 90a] Werner, E., ”Distributed Cooperation Algorithms”, Decentralized
AI, Y. Demazeau & J-P. Muller (eds.), Elsevier Science Publishers (North
Holland), pp. 17-31, 1990.
[Werner 90b] Werner, E., ”What Can Agents Do Together? A Semantics of Co-
operative Ability”, ECAI-90, Proceedings of the 9th European Conference on
Artificial Intelligence, Stockholm, Sweden, Pitman Publishers, pp. 694-701,
1990.