Ontology Negotiation: Goals, Requirements
and Implementation
Jurriaan van Diggelen, Robbert-Jan Beun, Frank Dignum, Rogier M. van Eijk, John-Jules Meyer
Institute of Information and Computing Sciences
Utrecht University, the Netherlands
{jurriaan, rj, dignum, rogier, jj}@cs.uu.nl
Abstract
Communication in heterogeneous multi agent systems is hampered by the lack of
shared ontologies. Ontology negotiation offers an integrated approach that enables
agents to gradually build towards a semantically integrated system by sharing parts
of their ontologies. This solution involves a combination of a normal agent com-
munication protocol with an ontology alignment protocol. For such a combination
to be successful, it must satisfy several criteria. This paper discusses the goals
and requirements that are important for any ontology negotiation protocol. Fur-
thermore, we will propose some implementations that are constructed according to
these criteria.
1 Introduction
Most protocols which are studied in the agent communication community build
on the assumption that the agents share a common ontology (we refer to these as
normal communication protocols). However, normal communication protocols are
difficult to apply in open multi agent systems, such as those on the internet, in
which common ontologies are typically not available. In these systems, it is diffi-
cult to realize consensus between all involved system developers on which ontology
to use [15]. This has motivated researchers to develop tools that assist people in
creating generally shared ontologies. Chimaera [20] and FCA-Merge [33] are examples of tools that assist people in merging ontologies. Other approaches aim at
developing one all-purpose ontology which is to be used as a “golden standard”
by everyone; examples of such large scale ontologies are Cyc [18] and Sensus [34].
However, the common ontology paradigm forces every agent to use the same ontology and thereby to give up its own way of viewing the world. Because an ontology is task-dependent [8], this may be disadvantageous for the problem solving capacity of an agent.
Ontology alignment [24] has been proposed as a technique that enables agents
to keep their individual ontologies by making use of mappings between the differ-
ent ontologies. To communicate, the agents exploit the mapping and translate to
and from each other’s ontology. Most techniques for reconciling heterogeneous on-
tologies in the semantic web (e.g. [6],[31]) also adhere to the alignment approach,
i.e. the original ontologies are linked by semantic mappings. Although ontology
alignment is a step in the right direction to achieve a semantically integrated multi
agent system, it assumes that the mappings are pre-defined before the agents start
interacting. In an open system, agents may enter and leave the system at any mo-
ment. Therefore, it should also be possible to align ontologies at agent interaction
time.
One way to solve this problem may be to use mediation services [37] or ontol-
ogy agents [1]. An ontology agent provides a central point which can be consulted
by agents with communication problems. This approach opens the possibility to
reconcile heterogeneous ontologies at agent interaction time, thereby being more
flexible than ontology alignment. However, this argument only holds when every
agent trusts and knows how to find the ontology agent. Furthermore, the ontology
agent should be capable of finding the correct mappings between the agent’s ontolo-
gies. This problem was not addressed by the FIPA Ontology Service Specification
[1] which was only intended to be a specification of an ontology agent. Most im-
plementations that make use of ontology agents (e.g. KRAFT [25]), or mediation
services (e.g. OBSERVER [21]) assume that the mappings between ontologies are
established manually. For large open MAS’s, this is not an option because inter
ontology mappings have to be established on such a large scale that human in-
volvement in this task is not feasible. Performing this task automatically would be very resource-consuming for an ontology agent, as it would have to be capable of generating mappings between every pair of ontologies in the system.
These problems have led agent researchers to investigate even more flexible ap-
proaches to the semantic integration problem. Recently, a few approaches have appeared in the literature which tackle the problem in a fully decentralized way. There is no
central coordinating ontology agent; the agents solve their communication problems
at interaction time by exchanging parts of their ontologies. W. Truszkowski and
S. Bailin have coined the term Ontology Negotiation to refer to such approaches
[3]. In their paper, the authors present a communication mechanism which enables
agents to exchange parts of their ontology in a pattern of successive clarifications.
The DOGGIE approach [38] makes similar assumptions, but focuses mainly on the
machine learning aspects of ontology exchange, namely the problem of teaching the
meaning of a concept to another agent. Another approach is proposed by Soh and
Chen [28], where agents exchange ontological knowledge when they believe it would
improve operational efficiency.
Whereas ontology negotiation is a promising approach, it is also regarded as
the most ambitious approach [35]. This is mainly because it requires fully auto-
matic ontology matching and because the agents should be able to detect when
their ontologies are insufficiently aligned for successful communication to proceed.
Related work on fully automatic ontology matching has been conducted for over ten
years, starting with automatic database schema matching techniques [26]. Increased
interest in ontologies has given rise to matching techniques that are specialized in
the rich representations of ontologies [23]. Research on the detection of ontology
mismatches in agent systems is reported in [4]. Ontology negotiation follows an
integrated approach which combines agent communication protocols, automatic
ontology matching techniques and automatic detection of ontology mismatches.
This combination raises many questions which are not satisfactorily answered by
the individual areas of research. The purpose of this paper is to clarify the goals
and requirements of ontology negotiation and to propose some implementations
according to these requirements.
Figure 1 Overview of the ontology negotiation protocol
As depicted in Figure 1, communication involving ontology negotiation is es-
tablished by two protocols: the normal communication protocol and the ontology
alignment protocol. The goal of the normal communication protocol is to convey
assertional knowledge, i.e. knowledge relevant to a particular problem or task.
Normal communication proceeds by making use of an intermediate shared ontology which indirectly aligns the agents' local ontologies (cf. interlingua [36]). Because the intermediate ontology is used only for communication purposes, we refer to it as the communication vocabulary (cv). The agents participate in normal communication by translating to and from the communication vocabulary. When the cv insufficiently aligns the agents' local ontologies to enable normal communication,
the agents make a transition to the ontology alignment protocol. This protocol
aims at enabling normal communication by adding concepts to the communication
vocabulary. This is done using ontology exchange: one agent teaches ontological
information to the other agent. After the agents have solved the problem, they
return to the normal communication protocol.
There are different ways to realize a communication mechanism of the type de-
scribed above. To be able to judge the quality of an ontology negotiation protocol,
we will characterize the requirements for normal communication, ontology align-
ment and the transition between them. We will do so by drawing on the general
framework of negotiation, of which, as its name suggests, ontology negotiation is a
special kind.
Negotiation protocols are well studied interaction mechanisms that enable agents
with different interests to cooperate [27]. For example, they may be used to make
a buyer and a seller agree on prices for certain goods, or in air traffic control, to
decide which airplanes are allowed to land first. What ontology negotiation pro-
tocols have in common with these protocols is their distributed nature. As this
is one of the most striking differences with other semantic integration techniques,
ontology negotiation is an appropriate name to characterize this approach. There is
no central coordinating entity which manages the interactions between the agents,
but the agents reach an agreement among themselves. In ontology negotiation the
agreement is about a (piece of) shared ontology. Similarly to other negotiation pro-
tocols, the agents’ interests may be conflicting. It is easiest for an agent to make
other agents adapt to its own ontology, thereby saving the costs of learning foreign
ontologies. Of course, not every agent can maintain that policy. The negotiation
protocol serves to resolve that issue. Negotiation protocols also have other char-
acteristics, such as efficiency and simplicity [27]. These characteristics are usually
ignored by ontology negotiation protocols (such as [3] and [38]).
Efficiency states that the agents should not waste resources in coming to an
agreement. This is an important issue in ontology negotiation, as agents should
efficiently establish their communication vocabulary. As agents negotiate the cv
they want it to be sufficiently large such that they are capable of conveying what
they want to convey. Agents may easily achieve this by adding every concept in
their local ontology to the communication vocabulary. However, this would require
agents to learn more concepts than necessary which would be a waste of resources.
In an open system, it would lead to a forever growing ontology, which would burden
the agents with large and slowly processable ontologies, and which would be diffi-
cult to learn for newcomers. Therefore, in the ontology alignment protocol, agents
should come to agree on a cv that is an acceptable solution to the communication
problem and which is somehow minimal in size. We call this requirement minimal
cv construction (see Ontology Alignment Protocol in Figure 1).
This requires us to define what an acceptable solution is. Negotiation protocols
often use formal game theoretic criteria to qualify an agreement as acceptable. For
ontology negotiation, we will define a formal criterion of sound and lossless commu-
nication (see Normal Communication Protocol in Figure 1). Sound communication
concerns the quality of information exchange, i.e. the receiver’s interpretation of
the message should follow from what the sender intended to convey. Communica-
tion is subjectively lossless (or lossless for short), if, from the perspective of the
receiver, no information is lost in the process of translating to and from the cv [9].
The lossless criterion concerns the quantity of information exchange.
Another characteristic of negotiation protocols is simplicity, stating that a protocol should impose low computational and bandwidth demands on the agents. This
is also a relevant issue for ontology negotiation as concept learning is a computa-
tionally expensive process. We therefore enable the agents to learn concepts from
each other on an as-need basis. This way, they find local solutions to communica-
tion problems at the time they arise. Each time they teach concepts to each other,
they incrementally generate a solution for their semantic integration problem. We
call this lazy ontology alignment (see the transition in Figure 1). This issue regards
the transition between the normal communication protocol and the ontology align-
ment protocol.
This paper adopts a formal perspective on ontology negotiation to precisely
define goals and requirements and to give solid proofs that the proposed implemen-
tations actually possess the desirable properties. Furthermore, a formal treatment
of ontology negotiation is needed to clarify the relations with research on formal
ontologies (such as description logics). Readers interested in a practical evaluation
of our work are referred to [10] which presents an application that is based on the
framework introduced in this paper.
The next section is about the requirements of ontologies and communication. It presents the formal underpinning that is needed to precisely define the requirements of ontology negotiation. Section 3 is about the implementation of ontologies. It shows how ontologies can be implemented using description logics and concept classifiers such that their requirements are fulfilled. Section 4 is about the implementation of communication. It presents three ontology negotiation protocols. We evaluate these protocols according to the criteria of sound and lossless communication, laziness and minimal cv construction. We conclude and give directions for future research in section 5.
2 Conceptual framework
In this paper, we restrict ourselves to dialogues between two agents. As a
running example, we use a travel-agent which assists a customer in planning a
holiday trip to the United States (a scenario envisioned in [19]). The travel agent
performs services such as finding a cheap flight, investigating prices for car rental,
suggesting other transport possibilities, and finding out which licenses are required
for campsites on the way. We consider communication between a travel agent α_1 with ontology O_1 and a car rental service α_2 with ontology O_2. O_1 shows the expertise of α_1 on different sorts of accommodation, and O_2 shows the expertise of α_2 on cars. Besides O_1 and O_2, other ontologies can also be distinguished in this system: the communication vocabulary (O_cv), the mapping between α_1's ontology and the cv (O_1·cv), the mapping between α_2's ontology and the cv (O_2·cv), and the ontology that would arise if we combined O_1 and O_2 (O_1·2). Figure 2 shows these six ontologies in the initial situation, when the cv is empty. The diagrams represent different conceptualizations (represented by circles) of the same domain. A circle that is included in another circle represents a subconcept relation (or, conversely, a superconcept relation). An arrow from ontology O_x to O_y represents that O_y is included in ontology O_x. The situation after α_2 has added the concept roadvehicle to the cv is shown in Figure 3. This figure shows that roadvehicle has been added to O_cv, which enables this concept to be used in communication. The figure also shows that α_1 has learned the meaning of roadvehicle by representing the relations between roadvehicle and the concepts in its local ontology in O_1·cv.
The next section explains the ontologies in the system in further depth. Section 2.2 explains the agents' knowledge of these ontologies as well as their dynamics. Section 2.3 explains how normal communication is established in this framework and formalizes sound and lossless communication.
Figure 2 Ontologies of agents in the initial situation
2.1 Ontologies
The objects that exist in the world that the agent inhabits are given by the
universe of discourse (∆). The names of the elements in ∆ are given by the set IND.
For an agent to be able to store knowledge about ∆, it needs a conceptualization
(ρ). We will focus on conceptualizations that consist of sets of individuals, i.e. ρ ⊆ 2^∆. Furthermore, as is commonly done in AI systems [29], we assume that the elements of ρ form a bounded lattice structure by considering the partially ordered set (ρ, ⊆). This means that ∆ ∈ ρ (a maximal element, or top concept) and ∅ ∈ ρ (a minimal element, or bottom concept), and that for every two elements x, y ∈ ρ, x ∩ y ∈ ρ and x ∪ y ∈ ρ. Note that, at this level, the elements in
the conceptualization are not yet named. Rather, the conceptualization contains
the meanings which the agent uses to represent knowledge about its environment.
In the example figures, the elements in ρ are represented by circles. There is no
general way to decide what constitutes a good conceptualization as this depends on
the agent’s task. In our framework, different agents are allowed to adopt different
conceptualizations which best suit their needs.
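To make these closure conditions concrete, the following sketch (a toy illustration in Python, not part of the formal framework; the domain and the sets in ρ are invented for the example) checks whether a candidate conceptualization satisfies them:

from itertools import combinations

# A toy universe of discourse Delta and a conceptualization rho ⊆ 2^Delta.
delta = frozenset({"a", "b", "c"})
rho = {delta, frozenset(), frozenset({"a"}), frozenset({"a", "b"})}

def is_bounded_lattice(rho, delta):
    # Delta and the empty set must belong to rho (top and bottom concept),
    # and rho must be closed under intersection and union.
    if delta not in rho or frozenset() not in rho:
        return False
    return all(x & y in rho and x | y in rho for x, y in combinations(rho, 2))

print(is_bounded_lattice(rho, delta))  # prints True for this toy rho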
To be able to formalize knowledge about the domain of discourse, the meanings
in the conceptualization must carry a name. This is done by the ontology, which
specifies the conceptualization [14]. The ontology introduces a set of symbols C
which, when interpreted under their intended interpretation, refer to the elements
in the conceptualization (conforming to the treatment by Genesereth and Nilsson
in [12]). We will refer to the intended interpretation function as I_INT : C → ρ.

Figure 3 Ontologies of agents that share one concept

I_INT is a surjective function. This means that for every element y ∈ ρ, an element x ∈ C exists for which I_INT(x) = y, i.e. every element in the conceptualization is named. In the example, I_INT is represented by the gray horizontal lines that connect concept names to circles (their meanings).
For an ontology to fully specify the conceptualization, it would have to spell out the exact value of I_INT, which would be unfeasible. Therefore, the ontology only specifies the aspect of I_INT that is most relevant for the agent, namely the subset ordering in ρ. An ontology is thus defined as O = ⟨C, ≤⟩, where ≤ ⊆ C × C is a preorder for which ∀x, y ∈ C: x ≤ y ⇔ I_INT(x) ⊆ I_INT(y). This states that an ontology specifies a conceptualization as a reflexive, transitive relation that conforms to the subset ordering on the intended interpretations of the concepts. Note that, although the conceptualization has the anti-symmetry property, this property does not necessarily hold for the ontology. An ontology may specify multiple ways to refer to the same element in the conceptualization (e.g. synonyms). If two elements x, y ∈ C have the same intended interpretation, it is not necessarily the case that x = y, as x may be syntactically different from y. We will write x ≡ y as a shorthand for x ≤ y ∧ y ≤ x, and x < y as a shorthand for x ≤ y ∧ ¬(y ≤ x).
Definition 1 Given the following ontologies:
• O_i = ⟨C_i, ≤_i⟩ (for i ∈ {1, 2}): The local ontology of α_i.
• O_cv = ⟨C_cv, ≤_cv⟩: The communication vocabulary of the agents, where C_cv ⊆ C_1 ∪ C_2.
We define the ontologies:
• O_1·2 = ⟨C_1·2, ≤_1·2⟩: A god's eye view over the ontologies in the system, where C_1·2 = C_1 ∪ C_2. ≤_1·2 conforms to the subset ordering on the intended interpretations of the elements in C_1·2.
• O_i·cv = ⟨C_i·cv, ≤_i·cv⟩ (for i ∈ {1, 2}): The local ontology of α_i together with the communication vocabulary, where C_i·cv = C_i ∪ C_cv. ≤_i·cv conforms to the subset ordering on the intended interpretations of the elements in C_i·cv.
We use a subscript notation whenever we need to stress that something belongs to O_i, O_cv, etc. For example, a concept with the name d_cv is assumed to be a member of C_cv. The subscripts are omitted when no confusion arises. If a concept occurs in multiple ontologies, the intended interpretation of this concept is assumed to be unvarying in each of these ontologies. The following definition introduces some useful terminology:
Definition 2 Given two concepts c, d ∈ S, a preorder ≤ ⊆ S × S and a set S′ ⊆ S:
• c is a subconcept of d in S′ iff c ≤ d and c ∈ S′
• c is a superconcept of d in S′ iff d ≤ c and c ∈ S′
• c is a strict subconcept of d in S′ iff c < d and c ∈ S′
• c is a strict superconcept of d in S′ iff d < c and c ∈ S′
• c is most specific in the set S′ if no strict subconcept of c in S′ exists
• c is most general in the set S′ if no strict superconcept of c in S′ exists
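The terminology of Definition 2 maps directly onto executable checks. The sketch below (illustrative Python; encoding a preorder extensionally as a set of pairs is our own choice for the example) implements the six notions:

def subconcepts(d, s_prime, leq):
    # All subconcepts of d in S' (c ≤ d and c ∈ S').
    return {c for c in s_prime if (c, d) in leq}

def superconcepts(d, s_prime, leq):
    # All superconcepts of d in S' (d ≤ c and c ∈ S').
    return {c for c in s_prime if (d, c) in leq}

def strict_subconcepts(d, s_prime, leq):
    # c < d, i.e. c ≤ d but not d ≤ c.
    return {c for c in s_prime if (c, d) in leq and (d, c) not in leq}

def strict_superconcepts(d, s_prime, leq):
    return {c for c in s_prime if (d, c) in leq and (c, d) not in leq}

def most_specific(s_prime, leq):
    # Elements of S' with no strict subconcept in S'.
    return {c for c in s_prime if not strict_subconcepts(c, s_prime, leq)}

def most_general(s_prime, leq):
    # Elements of S' with no strict superconcept in S'.
    return {c for c in s_prime if not strict_superconcepts(c, s_prime, leq)}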
We will now discuss the ontologies introduced in Definition 1 in further depth.
Local ontologies
The local ontology introduces a vocabulary which the agent uses to represent and
reason with assertional knowledge. The local ontology itself does not represent
knowledge that is of practical use to the agent. Rather, it enables the agent to
store its assertional knowledge in an efficient and useful way. An agent α_i stores this knowledge in its assertional knowledge base A_i, which consists of a set of membership statements that state which individuals belong to which concepts. A membership statement in A_i is of the form c(a), where c ∈ C_i and a ∈ IND. We avoid naming conflicts between the two local ontologies by assuming that the sets C_1 and C_2 are disjoint. This can easily be achieved by prefixing the concept names with namespaces.
God’s eye view ontology
O_1·2 is the ontology that would arise if the local ontologies of the agents were combined. O_1·2 is a virtual ontology, i.e. it is not materialized in its totality anywhere in the system. For us, it is convenient to adopt this god's eye view over the ontologies to discuss the issues involved in ontology negotiation. From Definition 1, it follows that every other ontology in the system is included in this ontology, i.e. C_i, C_cv, C_i·cv ⊆ C_1·2. Note that ≤_1·2 is not equal to ≤_1 ∪ ≤_2, but is a superset of it (except in the hypothetical case where C_1 or C_2 is empty). This is because, as argued before, ≤_1·2 conforms to the subset ordering on the intended interpretations of the concepts. It therefore also contains the relations between the elements of C_1 and C_2 which are not present in ≤_1 ∪ ≤_2. For example, ≤_1·2 contains the relation roadvehicle ≤ vehicle, which is present in neither ≤_1 nor ≤_2. In the following, when we speak of a sub- or superconcept of another concept, we do so with regard to O_1·2.
Ontologies for alignment
The communication vocabulary O_cv indirectly aligns the agents' local ontologies. α_1 and α_2 maintain a mapping from their local ontologies to the communication vocabulary in O_1·cv and O_2·cv, respectively. This mapping states the relation between concepts in the communication vocabulary and concepts in the local ontology. It can be represented that a concept in the cv is equivalent to a concept in the local ontology, or that it is a subconcept or a superconcept. From Definition 1, it follows that C_cv = C_1·cv ∩ C_2·cv. By adopting the ontology O_i·cv to define mappings between the communication vocabulary and the local ontology of α_i, we avoid the introduction of special mapping operators (as proposed in [32], [5]).
2.2 Knowledge and dynamics
Knowledge distribution
Not every ontology is known by the agents. For example, O_2 is unknown to α_1 (the agents do not have access to each other's local ontologies). O_cv, on the other hand, is known by both agents, whereas O_1·2 is known by neither α_1 nor α_2.
We distinguish between local knowledge, common knowledge and implicit group
knowledge [22]. Local knowledge refers to the knowledge of an individual agent
which is not accessible to other agents. Something is common knowledge if it is
known by every agent and every agent knows that every agent knows it, which
is again known by every agent, etc. Something is implicit group knowledge if someone in the group knows it, or if the knowledge is distributed over the members of the group. This includes the knowledge that would become derivable if the knowledge sources were joined. By means of communication, the agents can
only acquire knowledge that was already implicit in the group.
Assumption 1
1. O_i is local knowledge of α_i
2. O_cv is common knowledge of α_1 and α_2
3. O_i·cv is local knowledge of α_i
4. O_1·2 is implicit group knowledge of α_1 and α_2
In the graphical representation, the different types of knowledge are indicated by the dashed boxes.
The assumption that O_cv is common knowledge makes this ontology appropriate for communication. The assumption that O_1·2 is implicit group knowledge opens up the possibility of automatic ontology alignment. This is a necessary condition for any system where the agents must learn to share meaning: two agents cannot learn something from each other which was not already implicitly present beforehand.
Dynamics
Another aspect of ontologies is whether they are static or dynamic [16]. Static
ontologies do not change over time, whereas dynamic ontologies may change over
time.
Assumption 2
• O_i and O_1·2 are static ontologies.
• O_cv and O_i·cv are dynamic ontologies.
In the graphical representation, dynamic ontologies are boxed by a rounded rectangle.
Changing an agent's local ontology cannot be straightforwardly established, as other components of the agent depend on it. For example, the agent's assertional knowledge base and the agent's deliberation process (the process in which the agent reasons about what to do next) are defined in terms of the agent's local ontology. When the local ontology is changed, these other components must be adjusted as well. These issues are beyond the scope of this paper, and we therefore assume that the local ontologies O_1 and O_2 are static. As a consequence, O_1·2 is also a static ontology. O_cv, on the other hand, is a dynamic ontology. This causes no side effects, as no other component of the agent depends on O_cv. In fact, it makes O_cv suitable as an alignment ontology, as it enables agents to add concepts to it at runtime. As a consequence, the ontologies O_1·cv and O_2·cv are also dynamic. Remember that C_cv, C_i·cv ⊆ C_1·2. This imposes a restriction on the kind of changes that may occur in O_cv and O_i·cv: only concepts from C_1·2 may be added to or removed from them.
Concept Learning
We can now specify, from a conceptual viewpoint, how ontology exchange affects the ontologies in the system. As is apparent from Figures 2 and 3, when α_2 adds the concept roadvehicle to the communication vocabulary, the concept is added to C_cv and becomes common knowledge. Definition 1 states that, as a consequence of this change of O_cv, roadvehicle also becomes part of C_1·cv. Consequently, the relation ≤_1·cv is extended with the information that motorhome ≤ roadvehicle ≤ vehicle. As has been argued before, the static ontologies O_1, O_2 and O_1·2 remain unaffected.
2.3 Communication
In this framework, the sender makes itself understandable to the receiver by
translating the message stated in terms of its local ontology to a message stated
in terms of the communication vocabulary. The receiver interprets this message
by translating this message from the communication vocabulary to its own local
ontology.
For example, consider the ontologies in Figure 3. Suppose that α_1 intends to convey the message that individual a is a motorhome. It translates the message motorhome(a) (stated in terms of O_1) to roadvehicle(a) (stated in terms of O_cv). α_2 receives this message and translates it to its local ontology O_2, in this case also yielding roadvehicle(a). Generally, the following three concepts can be identified in the communication process.
Definition 3
• The transferendum (c_i ∈ C_i): what is to be conveyed. α_i (the speaker) intends to convey this concept to α_j.
• The transferens (d_cv ∈ C_cv): what conveys. This concept functions as a vehicle to convey the transferendum to α_j.
• The translatum (e_j ∈ C_j): what has been conveyed. α_j (the hearer) interprets the received message as this concept.
Requirements for normal communication
Using the three concepts in the definition above, we state the following require-
ments for normal communication. The first requirement concerns the quality of
information exchange, i.e. soundness. Soundness means that the interpretation
of the message by the hearer (the translatum) must follow from what the speaker
intended to convey in the message (the transferendum). In ontological reasoning,
when a is a member of a concept c, it follows that a is also a member of any superconcept of c. This is stated in the following definition:
Definition 4 Sound communication
Let c_i be the transferendum and e_j the translatum. Communication is sound iff e_j is a superconcept of c_i in C_j.
An example of sound communication from α_1 to α_2: α_1 translates the transferendum motorhome to the transferens roadvehicle, which α_2 "translates" to the translatum roadvehicle. An example of non-sound communication from α_1 to α_2: transferendum vehicle, transferens roadvehicle, translatum roadvehicle.
It is not difficult to satisfy only the soundness requirement of communication.
In the extreme case, the translatum is the top concept to which all individuals in
∆ belong. This is guaranteed to be sound as this concept is a superconcept of all
other concepts. However, an assertion stating that an individual belongs to the
top concept, does not contain any information about the individual; it is a trivial
fact. To prevent overgeneralization, a second requirement is needed which takes
the quantity of information exchange into account.
The lossless requirement states that the translatum should not only be a su-
perconcept of the transferendum, but that it should also be the most specific one.
From the perspective of the receiver, no information is lost in the process of trans-
lating to and translating from the communication vocabulary. From an objective
viewpoint, however, information may get lost. Because this information-loss is not
representable in the receiver’s ontology, this loss is not present from a subjective
viewpoint. For this reason, this requirement is properly called subjectively lossless
communication. From now on, we shall simply refer to it as lossless communication.
The definition of lossless communication is stated as follows:
Definition 5 Lossless communication
Let c_i be the transferendum and e_j the translatum. Communication is lossless iff e_j is most specific in the set of superconcepts of c_i in C_j.
Note that in Definitions 4 and 5 no mention is made of the transferens. This is
because the concepts in the communication vocabulary only serve as vehicles to
convey the speaker’s information to the hearer. To enable sound and lossless com-
munication, there must be sufficient vehicles available. Note that this definition
defines lossless communication from the god’s eye view. Section 4 describes how
the agents can assess lossless communication using their local knowledge.
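From the god's eye view, Definitions 4 and 5 can be checked mechanically once ≤_1·2 is given. The following sketch (Python; the pair-set encoding and the concrete ordering facts are our own rendering of the running example) tests a transferendum/translatum pair:

def sound(c_i, e_j, C_j, leq):
    # Definition 4: e_j is a superconcept of c_i in C_j.
    return e_j in C_j and (c_i, e_j) in leq

def lossless(c_i, e_j, C_j, leq):
    # Definition 5: e_j is most specific among the superconcepts of c_i in C_j.
    supers = {e for e in C_j if (c_i, e) in leq}
    if e_j not in supers:
        return False
    return not any((e, e_j) in leq and (e_j, e) not in leq for e in supers)

# A fragment of the running example: RV-XL ≤ motorhome ≤ roadvehicle ≤ vehicle.
names = {"RV-XL", "motorhome", "roadvehicle", "vehicle"}
leq = {(c, c) for c in names}
leq |= {("motorhome", "roadvehicle"), ("roadvehicle", "vehicle"),
        ("motorhome", "vehicle"), ("RV-XL", "motorhome"),
        ("RV-XL", "roadvehicle"), ("RV-XL", "vehicle")}
C_1 = {"motorhome", "vehicle"}
print(lossless("RV-XL", "vehicle", C_1, leq))    # False: motorhome is more specific
print(lossless("RV-XL", "motorhome", C_1, leq))  # True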
Example:
The empty communication vocabulary in the initial situation (Figure 2) does not enable the agents to losslessly communicate any local concept (except the top concept).
The communication vocabulary in Figure 3 sufficiently aligns O_1 and O_2 for α_1 to losslessly communicate motorhome to α_2, viz. α_1 translates the transferendum motorhome to the transferens roadvehicle, which α_2 "translates" to the translatum roadvehicle. This is lossless communication because the translatum roadvehicle is most specific in the set of superconcepts of the transferendum motorhome in C_2.
O_cv does not sufficiently align O_1 and O_2 for α_2 to losslessly communicate RV-XL to α_1. Suppose α_2 translates RV-XL to roadvehicle, which α_1 translates to vehicle. This communication process has not been lossless, because motorhome would have been a more specific translation of RV-XL into α_1's ontology.
3 Operational framework
This section describes the data-structures and actions that can be used to im-
plement the ontologies in the system. One of the most widely used ontology im-
plementation languages is description logic [2], which we will explain next. Af-
ter introducing the logic, we show how this language can be used to implement
the ontologies described in definition 1 and how the requirements regarding their
knowledge distribution (assumption 1) can be met. Note that the description logic
implementation we will introduce here is in some respects more powerful than the
conceptual framework we introduced in section 2. For example, it allows specifica-
tion of disjointness relations which is not strictly required for our communication
mechanisms to work.
3.1 Description Logic
A description logic knowledge base is represented as a tuple ⟨T, A⟩, containing a TBox and an ABox [2]. The TBox T is described by a set of terminological axioms which specify the inclusion relations between the concepts; it represents the agent's ontology. The ABox A contains a set of membership statements which specify which individuals belong to which concepts; it implements the agent's assertional knowledge. We use the description logic ALC without roles as the concept language that is used in the TBox and the ABox.
Syntax
The syntax of description logic serves to implement the set of concept names C that we discussed in section 2. Given a set of atomic concepts C^a, we define the language L(C^a) as follows:
• ⊤ ∈ L(C^a) (top concept)
• ⊥ ∈ L(C^a) (bottom concept)
• c ∈ C^a → c ∈ L(C^a) (atomic concept)
• c ∈ L(C^a) → ¬c ∈ L(C^a) (negation)
• c, d ∈ L(C^a) → c ⊓ d ∈ L(C^a) (intersection)
• c, d ∈ L(C^a) → c ⊔ d ∈ L(C^a) (union)
The set of concepts in the ontology is given by C = L(C^a). Given two concepts c, d ∈ C, a terminological axiom takes the form c ⊑ d. Given a concept c ∈ C and an individual a ∈ IND, a membership statement is of the form c(a).
Semantics
The semantics of the concept language is defined by an interpretation function I which maps every individual a ∈ IND to an element of ∆ and every atomic concept c ∈ C^a to a subset of ∆. The interpretation function is extended to non-atomic concepts as follows:
• I(⊤) = ∆
• I(⊥) = ∅
• I(¬c) = ∆ \ I(c)
• I(c ⊓ d) = I(c) ∩ I(d)
• I(c ⊔ d) = I(c) ∪ I(d)
An interpretation I satisfies a terminological axiom c ⊑ d, written ⊨_I c ⊑ d, iff I(c) ⊆ I(d). For a set of statements Γ, we write ⊨_I Γ iff for every γ ∈ Γ it holds that ⊨_I γ. We write Γ ⊨ Γ′ iff for all I: ⊨_I Γ implies ⊨_I Γ′. Given a TBox, the relation ⊑ can be computed efficiently using standard DL reasoning techniques.
The semantics of membership statements is defined as: ⊨_I c(a) iff I(a) ∈ I(c). We assume that the ABox is sound w.r.t. the intended interpretation, i.e. ⊨_{I_INT} A. Note that we do not assume that the ABox completely specifies the intended interpretation. This would make communication unnecessary, as the agents would already know everything. Moreover, the assumption of a complete ABox is unrealistic, as the domain of discourse will typically be of such size that it is unfeasible to enumerate all membership statements.
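The extension of I to non-atomic concepts is compositional, which the following sketch makes explicit (Python; the nested-tuple encoding of concepts is our own choice, and the names "TOP"/"BOT" are reserved markers of this encoding, not atomic concept names):

def interpret(concept, I, delta):
    # Extend an interpretation of atomic concepts to ALC concepts without
    # roles, following the clauses above. Concepts are "TOP", "BOT", an
    # atomic name, or tuples ("not", c), ("and", c, d), ("or", c, d).
    if concept == "TOP":
        return set(delta)
    if concept == "BOT":
        return set()
    if isinstance(concept, str):
        return set(I[concept])      # atomic concept
    op = concept[0]
    if op == "not":
        return set(delta) - interpret(concept[1], I, delta)
    if op == "and":
        return interpret(concept[1], I, delta) & interpret(concept[2], I, delta)
    if op == "or":
        return interpret(concept[1], I, delta) | interpret(concept[2], I, delta)
    raise ValueError(op)

def satisfies_axiom(c, d, I, delta):
    # |=_I c ⊑ d iff I(c) ⊆ I(d).
    return interpret(c, I, delta) <= interpret(d, I, delta)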
3.2 Implementing local and common knowledge
Local knowledge of α_i over O_i and O_i·cv (Assumptions 1.1 and 1.3) can be straightforwardly established using two TBoxes: T_i and T_i·cv. Common knowledge over O_cv (Assumption 1.2) is established using the TBox T_cv, of which both agents maintain a version. Because both agents have the same version of T_cv, and they know this of each other, O_cv becomes common knowledge. We do not index T_cv with the agent name. The following property states that these TBoxes fully implement the agents' knowledge over the ontologies.
Property 1
1. For i ∈ {1, 2}, for all c, d ∈ C_i: T_i ⊨ c ⊑ d iff c ≤ d.
2. For i ∈ {1, 2}, for all c, d ∈ C_i·cv: T_i·cv ⊨ c ⊑ d iff c ≤ d.
3. For all c, d ∈ C_cv: T_cv ⊨ c ⊑ d iff c ≤ d.
The first item of the property should be established at design time: the system developer should specify enough terminological axioms to completely specify the agent's local ontology. The second and third items concern dynamic ontologies and should be fulfilled by the ontology alignment protocol.
Example
Consider the ontologies introduced in Figure 3. We show how these ontologies can be implemented such that Property 1 is fulfilled.
The TBoxes possessed by α_1 are:

T_1: motorhome ⊑ vehicle, hotel ⊑ ¬vehicle
T_cv: roadvehicle ⊑ ⊤
T_1·cv: roadvehicle ⊑ vehicle, motorhome ⊑ roadvehicle, hotel ⊑ ¬vehicle

The TBoxes possessed by α_2 are:

T_2: RV-large ⊑ roadvehicle, RV-XL ⊑ roadvehicle, RV-XL ⊑ ¬RV-large
T_cv: roadvehicle ⊑ ⊤
T_2·cv: RV-large ⊑ roadvehicle, RV-XL ⊑ roadvehicle, RV-XL ⊑ ¬RV-large
3.3 Implementing implicit group knowledge
Until now, we have described how the first three items of Assumption 1 are implemented using common techniques available from description logic research. The fourth item of the assumption is not yet met, i.e. O_1·2 is not yet even implicit group knowledge. The data structures described so far do not give rise to implicit group knowledge of the relations between two different agents' local concepts. For example, T_1·cv ∪ T_2·cv does not specify the relation between α_1's concept motorhome and α_2's concept RV-large. This relation must be (at least) implicit group knowledge, otherwise the agents are not capable of retrieving it. Therefore, we assume that the agents know more about their local ontologies than just the ordering between concepts, namely that they have access to the intended interpretations of their local concepts. This is done using the action Classify.
Action Classify(c, a)
Output specification:
if a ∈ I_INT(c) then add c(a) to A
else add ¬c(a) to A
For example, Classify can be thought of as a subsystem of a robot which recog-
nizes and classifies objects in the real world (cf. Luc Steels’ approach to language
creation [30]). In a scenario where the domain of discourse consists of text corpora,
the action Classify can be implemented using text classification techniques [17].
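A direct reading of Classify might look as follows (a Python sketch; the oracle for I_INT is modeled as a plain function, whereas in practice it would be a perceptual or text classifier as just noted):

def classify(abox, c, a, i_int):
    # Action Classify(c, a): consult the intended interpretation I_INT and
    # record a (possibly negated) membership statement in the ABox.
    if a in i_int(c):
        abox.add((c, a))            # add c(a) to A
    else:
        abox.add((("not", c), a))   # add ¬c(a) to A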
4 Communication
Before we propose protocols for ontology negotiation in section 4.2, we discuss how ontology exchange can be implemented in our framework (the lower layer in Figure 1). After that, we propose some communication protocols that implement normal communication, ontology alignment and a transition between them. We evaluate these protocols using the criteria of minimal cv construction, laziness, and soundness and losslessness.
The communicative abilities of the agents are specified as actions. During the execution of actions, the instruction send(α_j, ⟨topic, p_1, .., p_n⟩) may be used to send a message, where α_j is the addressee of the message, the topic specifies what the message is about, and p_1, .., p_n are the parameters of the message. The effect of this instruction is that α_j is able to perform a Receive(α_i, ⟨topic, x_1, .., x_n⟩) action, where α_i is the sender of the message and x_1, .., x_n are instantiated to p_1, .., p_n. For clarity, we omit Receive actions from the protocols. In the specification of actions and protocols we adopt α_i as the sender and α_j as the receiver of messages.
4.1 Implementation of concept learning
Concept learning, or automatic ontology matching, is a widely studied issue in
computer science [7, 38, 23]. It is not our intention to contribute to this type of
research, as the focus of this paper is on the combination of such techniques with
normal agent communication protocols. Therefore, we will adopt a simple but ad-
equate concept learning technique that is correct with respect to the theoretical
framework introduced in this paper. In particular, it matches well with the seman-
tics of description logics. Despite its simplicity, we have successfully applied this
technique in the domain of internet-news in the anemone system [10]. Neverthe-
less, we stress that this implementation of concept learning should be viewed as
one possibility and that it can be replaced by other approaches, depending on the
chosen domain (see [11] for an extensive survey on other possibilities).
For this approach to work, we require that the agents have access to the same elements in the universe of discourse (∆) and use the same signs to refer to these individuals (given by the set IND). These requirements are readily met in the anemone system, where every agent has access to IP addresses (i.e. ∆ is the set of IP addresses), and every agent uses URLs to refer to these addresses (i.e. IND is the set of URLs). The ontology of news topics that is used to classify news articles differs from agent to agent. This is where ontology negotiation fulfills its task in anemone.
Ontology exchange is implemented using the action AddConcept, which enables an agent to add a concept to the communication vocabulary. The effects of adding a concept were described in section 2.2. To realize these effects, α_j's TBoxes T_j·cv and T_cv must be updated such that Properties 1.2 and 1.3 hold. To realize the effects regarding T_cv, α_i must communicate to α_j the relations of the newly added concept c with the other concepts in the communication vocabulary. It does this in the SendBoundaries action:

Action SendBoundaries(α_j, c)
Let mss be most specific in the set of superconcepts of c in C_cv, and mgs be most general in the set of subconcepts of c in C_cv
- add c ⊑ mss and mgs ⊑ c to T_cv
- send(α_j, ⟨boundaries, c, mss, mgs⟩)

Action Receive(⟨boundaries, c, mss, mgs⟩)
- add c ⊑ mss and mgs ⊑ c to T_j·cv
Realizing the effects of concept learning on O_j·cv is more difficult to establish, because neither α_i nor α_j has explicitly represented these relations in a TBox. For example, consider the ontologies in Figures 2 and 3. In the initial situation, neither of the agents has local knowledge that motorhome ≤ roadvehicle ≤ vehicle. Hence this information must be conveyed differently. α_i conveys this information to α_j by sending a set of positive and negative examples of concept c. Upon receiving these examples, α_j uses inductive inference to derive the relations of c with the concepts in its local ontology. This is done by the Explicate action. Remember that the agents have access to the intended interpretation of concepts using the Classify action described earlier.

Action Explicate(α_j, c)
- send(α_j, ⟨explication, c, {p | I(p) ∈ I_INT(c)}, {n | I(n) ∉ I_INT(c)}⟩)

Action Receive(⟨explication, c, P, N⟩)
- add c ⊑ d_j to T_j·cv, where d_j is most specific in the set {d′_j | ∀p ∈ P. I(p) ∈ I_INT(d′_j)}
- add d_j ⊑ c to T_j·cv, where d_j is most general in the set {d′_j | ∀n ∈ N. I(n) ∉ I_INT(d′_j)}

We assume that the numbers of examples in the sets P and N are sufficiently large to enable α_j to derive every relation of c with the concepts in C_j.
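The receiver's side of Explicate thus reduces to two extremal searches over the local concepts. Below is a sketch of this inductive step (Python; i_int is again an oracle function for I_INT, leq encodes the ordering the receiver knows, and all names are our own):

def receive_explication(c, P, N, local_concepts, i_int, leq, tbox):
    # Receive(⟨explication, c, P, N⟩): induce the boundaries of the foreign
    # concept c among the local concepts of the receiving agent.
    def most_specific(s):
        return {x for x in s
                if not any((y, x) in leq and (x, y) not in leq for y in s)}
    def most_general(s):
        return {x for x in s
                if not any((x, y) in leq and (y, x) not in leq for y in s)}
    # Most specific local concepts whose extension contains every positive example:
    uppers = {d for d in local_concepts if all(p in i_int(d) for p in P)}
    for d in most_specific(uppers):
        tbox.add((c, d))      # add c ⊑ d to T_j·cv
    # Most general local concepts excluding every negative example:
    lowers = {d for d in local_concepts if all(n not in i_int(d) for n in N)}
    for d in most_general(lowers):
        tbox.add((d, c))      # add d ⊑ c to T_j·cv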
Given that the agents' classifiers are free of errors, as stated in the specification of Classify, the SendBoundaries and Explicate actions are sufficient to convey the meaning of a concept to another agent.

        Sound and lossless   Lazy   Minimal cv
P1              +             -         -
P2              +             +         -
P3              +             +         +
Figure 4 Evaluation of the protocols

An agent that adds an atomic concept c to C^a_cv may introduce more than one concept in C_cv, namely the concept c itself and the concepts that can be composed using that concept and other concepts in the cv. The set of new concepts that are introduced in C_cv after an atomic concept c is added is given by L(C^a_cv ∪ {c}) \ L(C^a_cv \ {c}). The sending agent α_i conveys the meanings of those of these concepts that are also in C_i. We can now define the action AddConcept as follows:

Action AddConcept(α_j, c)
- For all d ∈ (L(C^a_cv ∪ {c}) \ L(C^a_cv \ {c})) ∩ C_i:
  - SendBoundaries(α_j, d)
  - Explicate(α_j, d)
4.2 Ontology negotiation protocols
In this section, we will propose three ontology negotiation protocols of the type
depicted in Figure 1. The protocols differ in the way they implement normal com-
munication, how they recognize when normal communication cannot proceed, and
the communication vocabularies they give rise to. We will evaluate these protocols
according to the criteria of soundness and losslessness, laziness and minimal cv
construction (Figure 4).
In section 2.3 we defined successful communication as being sound and lossless. Whereas these properties are defined using a god's eye view over the agents' ontologies, the agents can only use their local knowledge to assess them. This plays a central role in our discussion.
Protocol 1
We begin with a very simple protocol. In Protocol 1, normal communication is implemented by translating the transferendum (in the sender's local ontology) to an equivalent transferens (in the communication vocabulary). The receiver translates the transferens to the most specific superconcept in its local ontology, the translatum. This is done by the InformExact action. If no transferens is available in the communication vocabulary that is equivalent to the transferendum, the speaker decides that normal communication cannot proceed and adds the transferendum to the communication vocabulary.

Action InformExact(α_j, c_i(a))
if ∃d_cv. d_cv ≡ c_i then send(α_j, ⟨InformExact, d_cv(a)⟩)

Action Receive(α_i, ⟨InformExact, d_cv(a)⟩)
Add e_j(a) to A_j, where e_j is most specific in the set of superconcepts of d_cv in C_j
Figure 5 Protocol P1

When the condition in the if statement of InformExact is not met, the agent must perform an AddConcept action. It is not difficult to prove that in Protocol 1 communication proceeds in a lossless fashion as defined in Definition 5. The event that is triggered upon receiving an InformExact message produces a translatum e_j which is most specific in the set of superconcepts of d_cv in C_j. Because the action that produces an InformExact message requires the transferendum c_i to be equivalent to d_cv, it follows that e_j is also most specific in the set of superconcepts of c_i in C_j, thereby meeting the lossless requirement.
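For concreteness, P1's sender and receiver logic can be sketched as follows (Python; the agent objects with cv, leq, local_concepts, abox and add_concept members are hypothetical scaffolding rather than the paper's specification):

def inform_exact(speaker, hearer, c_i, a):
    # Protocol 1 sender: use a cv concept equivalent to the transferendum;
    # if none exists, teach the transferendum first (AddConcept).
    equivalents = [d for d in speaker.cv
                   if (c_i, d) in speaker.leq and (d, c_i) in speaker.leq]
    if not equivalents:
        speaker.add_concept(hearer, c_i)
        equivalents = [c_i]
    receive_inform_exact(hearer, equivalents[0], a)

def receive_inform_exact(hearer, d_cv, a):
    # Receiver: add e_j(a), with e_j most specific among the local
    # superconcepts of the transferens d_cv.
    supers = {e for e in hearer.local_concepts if (d_cv, e) in hearer.leq}
    for e_j in (e for e in supers
                if not any((x, e) in hearer.leq and (e, x) not in hearer.leq
                           for x in supers)):
        hearer.abox.add((e_j, a))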
Example:
Consider the initial situation in Figure 2, where the agents have not yet taught concepts to each other. Suppose α_2 intends to convey the assertion roadvehicle(a) to α_1. The actions performed by the agents are described below; some of the instructions executed within an action are also shown, preceded by x.

α_2: AddConcept(α_1, roadvehicle)
  x α_2: add roadvehicle ⊑ ⊤ to T_cv
  x α_1: add motorhome ⊑ roadvehicle ⊑ vehicle to T_1·cv
α_2: InformExact(α_1, roadvehicle(a))
  x α_2: send(α_1, ⟨InformExact, roadvehicle(a)⟩)
α_1: Receive(α_2, ⟨InformExact, roadvehicle(a)⟩)
  x α_1: add vehicle(a) to A_1

This conversation has given rise to a communication vocabulary as in Figure 3.
Now, suppose that α_1 intends to convey the message hotel(a) and that the cv contains the concept roadvehicle (as in Figure 3). The agents perform the following actions:

α_1: AddConcept(α_2, hotel)
  x α_1: add hotel ⊑ ¬roadvehicle to T_cv
  x α_2: add hotel ⊑ ¬roadvehicle to T_2·cv
α_1: InformExact(α_2, hotel(a))
  x α_1: send(α_2, ⟨InformExact, hotel(a)⟩)
α_2: Receive(α_1, ⟨InformExact, hotel(a)⟩)
  x α_2: add ¬roadvehicle(a) to A_2

After this conversation has finished, the communication vocabulary contains the concepts roadvehicle and hotel.
Although P1 ensures sound and lossless communication, it is not lazy and does not give rise to a minimal cv. In the second dialogue of the example, it was not necessary to add the concept hotel to the cv, as lossless communication was already enabled by the concept ¬roadvehicle. If α_1 had translated hotel to the superconcept ¬roadvehicle, then α_2 could have interpreted this as ¬roadvehicle, and this would have been sound and lossless communication. However, this dialogue is not allowed by P1. Using P1, the sender sometimes adds concepts to the cv that do not contribute to successful communication. In fact, after the agents have exchanged a number of messages, the communication vocabulary will simply consist of every transferendum that was conveyed by one of those messages. Therefore Protocol 1 is not satisfactory w.r.t. minimal cv construction and laziness (cf. Figure 4). The following protocol attempts to overcome these problems.
Protocol 2
In Protocol 2, the sender uses the InformExact action when allowed. When this is not allowed, i.e. the sender is not able to express itself exactly in shared concepts, it does not immediately add the concept to the communication vocabulary. Instead, it conveys the message as specifically as possible using a superconcept of the transferendum. This is done using an Inform action. It is up to the receiver to decide whether the transferens in an Inform message is specific enough to meet the lossless criterion.

Action Inform(α_j, c_i(a))
send(α_j, ⟨Inform, d_cv(a)⟩), where d_cv is most specific in the set of superconcepts of c_i in C_cv
The Receive action that is triggered by an Inform message is equal to the Receive action that is triggered when an InformExact message is received. We now turn our attention to the issue of how the receiver can recognize when communication has been lossless and when not.

Because the receiver does not know the transferendum, it cannot directly check Definition 5 for lossless communication. However, the receiver knows that the sender has obeyed the rules of the Inform action, and therefore that the transferens is most specific in the set of superconcepts of the transferendum. This enables the receiver, in some cases, to check the lossless condition nonetheless. In the philosophy of language, such a derivation is known as a conversational implicature [13]. In Protocol 2, it works as follows: consider the ontologies O_1 and O_2 from Figure 2, and suppose that C^a_cv = {vehicle, motorhome}. Suppose that the transferendum is α_2's concept roadvehicle and that α_2 uses vehicle as a transferens. Upon receiving this message, α_1 knows that α_2 did not intend to convey the following subconcepts in C_cv: motorhome and vehicle ⊓ ¬motorhome. This is because otherwise α_2 would have used these more specific concepts in the message. Knowing that the transferendum is more general than these concepts, α_1 knows that communication has been lossless.
In Protocol 2, the receiver α_j responds OK when it believes that communication has been lossless. The condition of OK first identifies a set D that contains all concepts which are most general in C_cv among the set of strict subconcepts of the transferens. It knows that the sender did not intend to convey any information that is as specific as or more specific than any concept in D (otherwise it would have been obliged to use one of those more specific concepts). Then, it checks whether any concepts exist in C_j that are more specific than the translatum but not more specific than any concept in D. If there are no such concepts, it regards communication as lossless and the conversation terminates. Otherwise, it responds with ReqSpec (Request Specification) to start the ontology alignment protocol, in which α_i adds the transferendum to the cv. The OK action can only be done if the receiver can assess that communication was lossless.

Figure 6 Protocol P2
Action OK(α_i)
Responding to ⟨Inform, d_cv(a)⟩
Let D be the set of concepts that are most general among the set of strict subconcepts of d_cv in C_cv
Let e_j be most specific in the set of superconcepts of d_cv in C_j (e_j is the translatum)
if every strict subconcept of e_j in C_j is a subconcept of any of the concepts in D
then send(α_i, ⟨OK⟩)
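The OK condition is the subtle part of P2. The sketch below (Python, with the same hypothetical agent scaffolding as before; hearer.leq stands for the ordering the hearer knows from O_j·cv) checks it exactly as specified: compute D, compute the translatum, and verify that every strict local subconcept of the translatum falls under some concept in D:

def ok_condition(hearer, d_cv):
    # Return True iff the hearer may respond OK to ⟨Inform, d_cv(a)⟩.
    leq = hearer.leq
    def strict_below(c, s):
        return {x for x in s if (x, c) in leq and (c, x) not in leq}
    def most_general(s):
        return {x for x in s
                if not any((x, y) in leq and (y, x) not in leq for y in s)}
    def most_specific(s):
        return {x for x in s
                if not any((y, x) in leq and (x, y) not in leq for y in s)}
    D = most_general(strict_below(d_cv, hearer.cv))
    translata = most_specific({e for e in hearer.local_concepts
                               if (d_cv, e) in leq})
    return any(
        all(any((e, d) in leq for d in D)
            for e in strict_below(e_j, hearer.local_concepts))
        for e_j in translata)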
Example: Consider the ontologies in Figure 3 and suppose that α_1 wishes to communicate hotel(a) (as in the last example of P1). The dialogue proceeds as follows:

α_1: Inform(α_2, hotel(a))
  x α_1: send(α_2, ⟨Inform, ¬roadvehicle(a)⟩)
α_2: Receive(α_1, ⟨Inform, ¬roadvehicle(a)⟩)
  x α_2: add ¬roadvehicle(a) to A_2
α_2: OK

In this example, α_2 responded with OK because in O_2 the information provided by ¬roadvehicle is as specific as possible.
Now consider the ontologies in Figure 3 with a different cv. Suppose that α_2 wishes to communicate RV-large(a), and that C^a_cv = {motorhome}. The dialogue proceeds as follows:

α_2: Inform(α_1, RV-large(a))
  x α_2: send(α_1, ⟨Inform, motorhome(a)⟩)
α_1: Receive(α_2, ⟨Inform, motorhome(a)⟩)
  x α_1: add motorhome(a) to A_1
α_1: OK

In this example, α_1 responded with OK because in O_1 the information provided by motorhome is as specific as possible.
Now, suppose that α_2 wishes to communicate RV-large(a), and that C^a_cv = {vehicle}.

α_2: Inform(α_1, RV-large(a))
  x α_2: send(α_1, ⟨Inform, vehicle(a)⟩)
α_1: ReqSpec
α_2: AddConcept(α_1, RV-large)
α_2: InformExact(α_1, RV-large(a))
α_1: Receive(α_2, ⟨InformExact, RV-large(a)⟩)
  x α_1: add motorhome(a) to A_1

In this example, α_1 did not respond OK at first, because motorhome caused the OK action to fail. Hereby, α_1 correctly recognized non-lossless communication.
Now, suppose that α_2 wishes to communicate roadvehicle(a), and that C^a_cv = {vehicle, motorhome}.

α_2: Inform(α_1, roadvehicle(a))
  x α_2: send(α_1, ⟨Inform, vehicle(a)⟩)
α_1: Receive(α_2, ⟨Inform, vehicle(a)⟩)
  x α_1: add vehicle(a) to A_1
α_1: OK

In this example, α_1 responded OK because it knew that if α_2 had had more specific information available about individual a, e.g. that a is a motorhome, it would have used a more specific term. Hereby, α_1 correctly recognized lossless communication.
Theorem 1 If the receiver responds OK, then communication has been lossless.

Proof: Suppose c_i is the transferendum, d_cv the transferens and e_j the translatum. We prove the theorem by showing that the situation where the receiver responds OK while communication was not lossless leads to a contradiction. Non-lossless communication means that e_j is not a most specific concept in the set {e′_j | c_i ≤_1·2 e′_j ∧ e′_j ∈ C_j} (Definition 5 does not hold). This means that either e_j is not in the set {e′_j | c_i ≤_1·2 e′_j ∧ e′_j ∈ C_j} (option (a)), or e_j is not a most specific element in that set (option (b)). We show that both options lead to a contradiction. The conditions for sending and receiving an Inform speech act ensure that c_i ≤ d_cv ≤ e_j, and therefore c_i ≤ e_j; this contradicts option (a). If e_j is not most specific in the set {e′_j | c_i ≤_1·2 e′_j ∧ e′_j ∈ C_j}, some concept e″_j exists in this set for which c_i ≤ e″_j < e_j. According to the condition in the if-statement of OK, some concept d′_cv ∈ D then exists for which e″_j ≤ d′_cv < d_cv. Because c_i ≤ d′_cv and d′_cv < d_cv, it follows that d_cv is not most specific in the set {d″_cv | c_i ≤_1·2 d″_cv ∧ d″_cv ∈ C_cv}. Therefore, option (b) is in contradiction with the condition of Inform. □
Because P2 enables the agents to communicate without learning every concept in each other's local ontologies, this protocol scores better than P1 w.r.t. laziness (cf. Figure 4). However, the protocol may still give rise to a communication vocabulary which is unnecessarily large, as shown by the following example:

Example: Consider the initial situation in Figure 2. Suppose that α_1 intends to convey motorhome.

α_1: Inform(α_2, motorhome(a))
  x α_1: send(α_2, ⟨Inform, ⊤(a)⟩)
α_2: ReqSpec
α_1: AddConcept(α_2, motorhome)
α_1: InformExact(α_2, motorhome(a))
α_2: Receive(α_1, ⟨InformExact, motorhome(a)⟩)
  x α_2: add roadvehicle(a) to A_2

After this dialogue, the cv is {motorhome}. In the next dialogue, α_2 intends to convey roadvehicle. A similar dialogue follows; afterwards, the cv has become {motorhome, roadvehicle}. In the next dialogue, α_1 intends to convey vehicle. After this dialogue has finished, the cv has become {motorhome, roadvehicle, vehicle}.

The last communication vocabulary is unnecessarily large, because {motorhome, vehicle} enables the agents to losslessly communicate the same concepts as {motorhome, roadvehicle, vehicle}. For this reason, P2 is not satisfactory w.r.t. minimal cv construction. The next protocol aims to overcome this problem by allowing the agents to remove superfluous concepts from their communication vocabulary.
Protocol 3 Concepts can be removed from the vocabulary if they are mutually
redundant, i.e. redundant for both agents. Mutually redundant concepts have the
property that their removal does not affect what the agents can losslessly commu-
nicate to each other during normal communication. This is stated in the following
definition.
Definition 6 d ∈ C^a_cv is mutually redundant if L(C^a_cv \ {d}) allows α_i and α_j to losslessly communicate the same concepts to each other as L(C^a_cv).
The above definition does not state how agents can recognize redundant concepts.
An agent may consider a concept redundant if it determines that another concept
in the cv could serve as a substitute for sending messages and that another concept
in the cv could serve as a substitute for receiving messages. This is expressed in
the following definition.
Definition 7 α_i considers a concept d_cv redundant iff both of the following hold:

• d'_cv is a superconcept of c_i, where
  – d'_cv is most general in the set of subconcepts of d_cv in L(C^a_cv \ {d_cv})
  – c_i is most general in the set of subconcepts of d_cv in C_i.

• d''_cv is a subconcept of c'_i, where
  – d''_cv is most specific in the set of superconcepts of d_cv in L(C^a_cv \ {d_cv})
  – c'_i is most specific in the set of superconcepts of d_cv in C_i.
In this definition, the formula L(C^a_cv \ {d_cv}) denotes the communication vocabulary that remains after d_cv is removed. The concept d'_cv is the substitute for d_cv for sending messages: the most general transferendum c_i that can be conveyed using d_cv is a subconcept of d'_cv, and can therefore also be conveyed using d'_cv. The concept d''_cv is the substitute for d_cv for receiving messages: d''_cv yields the same translatum c'_i as d_cv. Because d''_cv is more general than d_cv, the other agent can convey its messages using d''_cv instead of d_cv.
For example, suppose that C^a_cv = {motorhome, roadvehicle, vehicle}. α_1 believes the concept roadvehicle to be redundant because motorhome satisfies the first item in the condition of definition 7, and vehicle satisfies the second item.
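To make the test of definition 7 concrete, the following sketch gives our rendering of it, reusing leq, most_specific and order from the earlier sketch. It checks atomic concepts only, ignoring the compound concepts (negations, conjunctions, disjunctions) that L(·) adds, and the function name considers_redundant is ours.

```python
def most_general(candidates, subsumption):
    """Members of 'candidates' with no strict superconcept in 'candidates'."""
    return [c for c in candidates
            if not any(leq(c, d, subsumption) and d != c for d in candidates)]

def considers_redundant(d_cv, cv, local_concepts, subsumption):
    remaining = cv - {d_cv}   # stands in for L(C^a_cv \ {d_cv}), atomic part only

    # First bullet: a sending substitute d'_cv below d_cv exists that still
    # subsumes every most general local subconcept c_i of d_cv.
    subs_remaining = [d for d in remaining if leq(d, d_cv, subsumption)]
    subs_local = most_general(
        [c for c in local_concepts if leq(c, d_cv, subsumption)], subsumption)
    bullet1 = all(any(leq(c, d, subsumption) for d in subs_remaining)
                  for c in subs_local)

    # Second bullet: a receiving substitute d''_cv above d_cv exists that is
    # still below every most specific local superconcept c'_i of d_cv.
    supers_remaining = [d for d in remaining if leq(d_cv, d, subsumption)]
    supers_local = most_specific(
        [c for c in local_concepts if leq(d_cv, c, subsumption)], subsumption)
    bullet2 = all(any(leq(d, c, subsumption) for d in supers_remaining)
                  for c in supers_local)

    return bullet1 and bullet2

# With alpha_1's (assumed) local concepts {motorhome, vehicle}, roadvehicle
# is indeed reported redundant, matching the example above:
print(considers_redundant("roadvehicle",
                          {"motorhome", "roadvehicle", "vehicle"},
                          {"motorhome", "vehicle"}, order))   # True
```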
Theorem 2 If α_i considers a concept d_cv redundant (according to definition 7), then d_cv is mutually redundant (according to definition 6).
Proof: We will prove the theorem for communication from α_i to α_j and from α_j to α_i.

From α_i to α_j: Observe that α_i never requires transferens d_cv to communicate a transferendum c_i. Suppose that c_i ≤ d_cv, which is a necessary condition for d_cv to qualify as a transferens. According to definition 7 (first bullet), a concept d'_cv exists for which c_i ≤ d'_cv ≤ d_cv. Hence, α_i uses d'_cv as a transferens instead of d_cv. The same argument holds for the transferens ¬d_cv: the second bullet of definition 7 ensures that every local concept c'_i that is a subconcept of ¬d_cv is also a subconcept of a subconcept of ¬d_cv, namely ¬d''_cv. Because conjunction and disjunction are compositionally defined, α_i would never use d_cv or ¬d_cv as a conjunct or disjunct either.

From α_j to α_i: For every concept c_j which α_j communicates using d_cv, α_j may also use d''_cv. The second bullet of definition 7 ensures that d''_cv yields the same translatum as d_cv, namely c'_i. Therefore, every concept c_j that is losslessly communicated using d_cv can also be losslessly communicated using d''_cv. Furthermore, α_i responds "OK" to messages with d''_cv (and thereby recognizes lossless communication), because the first bullet of definition 7 ensures that all subconcepts of d_cv in C_i are also subconcepts of d''_cv. A similar argument can be made for α_j communicating using ¬d_cv. □
Action RemoveConcept(α_j, d_cv)
if α_i considers d_cv redundant then
  • Remove d_cv from C^a_cv
  • send(α_j, ⟨RemoveConcept, d⟩)
else fail

Action Receive(⟨RemoveConcept, d⟩)
Remove d_cv from C^a_cv
An agent performs a RemoveConcept action on a concept d_cv when it considers the concept redundant according to the criteria of definition 7. Concepts may become redundant after a new term is added to the communication vocabulary. Because the two agents have different perspectives on the redundancy of terms, both get a chance to perform RemoveConcept. Sometimes, after one concept is added, two concepts can be removed from the communication vocabulary. To exploit this, the receiver α_j is also allowed to add concepts to the communication vocabulary (state 4). The second example below illustrates this.
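Before turning to the examples, the following sketch shows one way the RemoveConcept exchange could be wired into the agents. The Agent class, its fields, and the ontologies assigned to the two agents are our own scaffolding around definition 7 and the action definitions above (reusing considers_redundant and order from the earlier sketches); the protocol state machine of Figure 7 is not modeled.

```python
class Agent:
    def __init__(self, name, local_concepts, cv, subsumption):
        self.name = name
        self.local = local_concepts
        self.cv = cv                      # this agent's copy of C^a_cv
        self.subsumption = subsumption

    def try_remove_concepts(self, other):
        """Perform RemoveConcept for every cv concept this agent considers
        redundant (definition 7); the peer mirrors each removal on receipt."""
        for d in sorted(self.cv):
            if considers_redundant(d, self.cv, self.local, self.subsumption):
                self.cv.discard(d)        # Remove d_cv from C^a_cv
                other.receive_remove(d)   # send(alpha_j, <RemoveConcept, d>)

    def receive_remove(self, d):
        """Action Receive(<RemoveConcept, d>)."""
        self.cv.discard(d)

# Both agents get a turn, since each judges redundancy from its own ontology
# (the two local ontologies below are our assumed reading of Figure 2):
a1 = Agent("alpha_1", {"motorhome", "vehicle"},
           {"motorhome", "roadvehicle", "vehicle"}, order)
a2 = Agent("alpha_2", {"roadvehicle"}, set(a1.cv), order)
a1.try_remove_concepts(a2)
a2.try_remove_concepts(a1)
print(a1.cv, a2.cv)   # both end with {'motorhome', 'vehicle'}
```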
Example: Consider the ontologies in Figure 2, and suppose that C^a_cv = {motorhome, roadvehicle} and that α_1 wishes to communicate vehicle(a).

α_1: Inform(α_2, vehicle(a))
    α_1: send(α_2, ⟨Inform, ⊤(a)⟩)
α_2: Reqspec
α_1: AddConcept(α_2, vehicle)
Figure 7: Protocol P3
α_1: RemoveConcept(α_2, roadvehicle)
α_1: Exit
α_2: Exit
α_1: InformExact(α_2, vehicle(a))
...
In this example α_1 considers the concept roadvehicle redundant after vehicle has been added to the communication vocabulary. As a sender, α_1 would never use roadvehicle, and as a receiver α_1 finds vehicle equally informative as roadvehicle.
Another example: Consider the ontologies in Figure 3. Suppose that C^a_cv = {RV-large} and that α_2 intends to communicate RV-XL(a).

α_2: Inform(α_1, RV-XL(a))
    α_2: send(α_1, ⟨Inform, ¬RV-large(a)⟩)
α_1: Reqspec
α_2: AddConcept(α_1, RV-XL)
α_2: Exit
α_1: AddConcept(α_2, Motorhome)
α_1: RemoveConcept(α_2, RV-XL)
α_1: RemoveConcept(α_2, RV-large)
α_1: Exit
α_2: Inform(α_1, RV-XL(a))
    α_2: send(α_1, ⟨Inform, Motorhome(a)⟩)
α_1: OK
After this dialogue, the communication vocabulary has shrunk to the singleton {Motorhome}, which nevertheless suffices for α_2 to losslessly convey RV-XL. Because P3 enables the agents to remove redundant concepts from the communication vocabulary, P3 scores better w.r.t. minimal cv construction than P2. As Figure 4 shows, this makes P3 the best protocol for ontology negotiation we have proposed in this paper.
5 Conclusion
In this paper, we have discussed the use of ontology negotiation protocols to overcome communication problems between agents with heterogeneous ontologies. We formulated the goals and requirements of an ontology negotiation protocol. Such a protocol should enable sound and lossless communication between the agents. The agents should build up their solution on an as-needed basis, dealing with communication problems as they arise: the protocol should be lazy. Furthermore, the agents should build up a relatively small communication vocabulary, so that it remains easy to learn and to process. We have proposed three protocol implementations that all give rise to sound and lossless communication. However, they were shown to differ in quality w.r.t. laziness and minimal cv construction.

We will continue this line of research by considering situations with more than two agents. This introduces additional complexity, as not every agent knows who has taught which concepts to whom. However, we believe that the principles we have described in this paper will remain valid in these situations as well.
References and Notes
1 FIPA Ontology Service Specification. http://www.fipa.org/specs/fipa00086/.
2 F. Baader, D.L. McGuinness, and P.F. Patel-Schneider. The description logic handbook: Theory, implementation and applications. Cambridge University Press, 2003.
3 S.C. Bailin and W. Truszkowski. Ontology negotiation between intelligent information
agents. Knowledge Engineering Review, 17(1):7–19, 2002.
4 R.J. Beun, R.M. van Eijk, and H. Prüst. Ontological feedback in multiagent systems. In Proceedings of the Third International Conference on Autonomous Agents and Multiagent Systems, pages 110–117, New York, 2004. ACM Press.
5 Alex Borgida and Luciano Serafini. Distributed description logics: Directed domain correspondences in federated information sources. Proceedings of the International Description Logics Workshop DL'2002, 2002.
6 P. Bouquet, G. Kuper, M. Scoz, and S. Zanobini. Asking and answering semantic queries. In Paolo Bouquet and Luciano Serafini, editors, Proceedings of the ISWC-04 workshop on Meaning Coordination and Negotiation (MCN-04), 2004.
7 M. Burnstein, D. McDermott, D.R. Smith, and S.J. Westfold. Derivation of glue code
for agent interoperation. Autonomous Agents and Multi-Agent Systems, 6(3):265–286,
2003.
8 T. Bylander and B. Chandrasekaran. Generic tasks for knowledge-based reasoning:
the ”right” level of abstraction for knowledge acquisition. International Journal of
Man-Machine Studies, 26(2):231–243, 1987.
9 J. van Diggelen, R.J. Beun, F. Dignum, R.M. van Eijk, and J.-J.Ch. Meyer. Optimal
communication vocabularies and heterogeneous ontologies. In Developments in Agent
Communication, LNAI 3396. Springer Verlag, 2004.
10 J. van Diggelen, R.J. Beun, F. Dignum, R.M. van Eijk, and J.-J.Ch. Meyer.
ANEMONE: An effective minimal ontology negotiation environment. Proceedings of
the Fifth International Conference on Autonomous Agents and Multi-agent Systems
(AAMAS), 2006.
11 Diana Maynard et al. D2.2.3: State of the art on current alignment techniques. http://knowledgeweb.semanticweb.org/semanticportal/servlet/download?ontology=Documentation+Ontology&concept=Deliverable&instanceSet=kweb&instance=D2.2.3+techniques&attribute=On-line+PDF+Version&value=kweb-223.pdf, 2004.
12 Michael R. Genesereth and Nils J. Nilsson. Logical foundations of artificial intelligence.
Morgan Kaufmann Publishers Inc., 1987.
13 H. Paul Grice. Logic and conversation. In P. Cole and J.L. Morgan, editors, Speech Acts, pages 41–58. Academic Press, New York, 1975.
14 T.R. Gruber. A translation approach to portable ontology specifications. Knowledge
Acquisition, 5:199–220, 1993.
15 Adil Hameed, Alun Preece, and Derek Sleeman. Ontology reconciliation. In Steffen
Staab and Rudi Studer, editors, Handbook of ontologies, International handbooks on
information systems, chapter 12, pages 231–250. Springer Verlag, Berlin (DE), 2004.
16 J. Heflin and J. Hendler. Dynamic ontologies on the web. In Proceedings of American
Association for Artificial Intelligence Conference (AAAI-2000), pages 443–449, Menlo
Park, CA, USA, 2000. AAAI Press.
17 Peter Jackson and Isabelle Moulinier. Natural Language Processing for Online Applications: Text retrieval, extraction, and categorization. John Benjamins Publishing, 2002.
18 Douglas B. Lenat and R. V. Guha. Building Large Knowledge-Based Systems: Repre-
sentation and Inference in the Cyc Project. Addison-Wesley Publishers, 1990.
19 M. Luck, P. McBurney, and C. Preist. Agent technology: Enabling next generation computing. AgentLink community, 2003.
20 D. L. McGuinness, R. Fikes, J. Rice, and S. Wilder. The Chimaera ontology environment. In Proceedings of the Seventeenth National Conference on Artificial Intelligence (AAAI 2000), Austin, Texas, 2000.
21 E. Mena, A. Illarramendi, V. Kashyap, and A. Sheth. OBSERVER: An approach
for query processing in global information systems based on interoperation across
pre-existing ontologies. International journal on Distributed And Parallel Databases
(DAPD), 8(2):223–272, April 2000.
22 J-J. Ch. Meyer and W. van der Hoek. Epistemic Logic for AI and Computer Science. Cambridge University Press, 1995.
23 Prasenjit Mitra, Natalya F. Noy, and Anuj R. Jaiswal. OMEN: a probabilistic ontology
mapping tool. In Paolo Bouquet and Luciano Serafini, editors, Proceedings of the
ISWC-04 workshop on Meaning Coordination and Negotiation (MCN-04), 2004.
24 N. F. Noy and M. A. Musen. PROMPT: Algorithm and tool for automated ontology merging and alignment. In Proceedings of the National Conference on Artificial Intelligence (AAAI), 2000.
25 Alun D. Preece, Kit-ying Hui, W. A. Gray, P. Marti, Trevor J. M. Bench-Capon, D. M. Jones, and Zhan Cui. The KRAFT architecture for knowledge fusion and transformation. Knowledge Based Systems, 13(2-3):113–120, 2000.
26 Erhard Rahm and Philip A. Bernstein. A survey of approaches to automatic schema
matching. The VLDB Journal, 10(4):334–350, 2001.
27 Jeffrey S. Rosenschein and Gilad Zlotkin. Rules of Encounter: Designing Conventions
for Automated Negotiation Among Computers. MIT Press, Cambridge, Massachusetts,
1994.
28 L-K Soh and C. Chen. Balancing ontological and operational factors in refining multiagent neighborhoods. Proceedings of the Fourth International Conference on Autonomous Agents and Multi-Agent Systems, 2005.
29 John F. Sowa. Knowledge Representation: Logical, Philosophical and Computational Foundations. Brooks/Cole Publishing Co., Pacific Grove, CA, USA, 2000.
30 L. Steels. Synthesising the Origins of Language and Meaning Using Co-evolution,
Self-organisation and Level formation. Edinburgh University Press, 1998.
31 H. Stuckenschmidt, F. van Harmelen, L. Serafini, P. Bouquet, and F. Giunchiglia. Using C-OWL for the alignment and merging of medical ontologies. In Udo Hahn, editor, Proceedings of the First International Workshop on Formal Biomedical Knowledge Representation (KRMed'04), pages 8–101, Whistler, Colorado, June 2004.
32 Heiner Stuckenschmidt and Ingo J. Timm. Adaption of communication vocabularies using shared ontologies. Proceedings of the Second International Workshop on Ontologies in Agent Systems (OAS), July 2002.
33 Gerd Stumme and Alexander Maedche. FCA-MERGE: Bottom-up merging of ontolo-
gies. In IJCAI, pages 225–234, 2001.
34 B. Swartout, R. Patil, K. Knight, and T. Russ. Toward distributed use of large-
scale ontologies. Proceedings of the Tenth Knowledge Acquisition for Knowledge-based
Systems Workshop, 1996.
35 M. Uschold and M. Gruninger. Creating semantically integrated communities on the
world wide web. Semantic Web Workshop Co-located with WWW 2002 Honolulu,
2002.
36 Mike Uschold and Michael Grüninger. Ontologies: principles, methods, and applications. Knowledge Engineering Review, 11(2):93–155, 1996.
37 Gio Wiederhold and Michael Genesereth. The conceptual basis for mediation services.
IEEE Expert: Intelligent Systems and Their Applications, 12(5):38–47, 1997.
38 A.B. Williams. Learning to share meaning in a multi-agent system. Autonomous
Agents and Multi-Agent Systems, 8(2):165–193, 2004.