Towards a Logical Model of Induction
from Examples and Communication
Santiago ONTAÑÓN ᵃ,¹, Pilar DELLUNDE ᵇ,ᵃ, Lluís GODO ᵃ, and Enric PLAZA ᵃ
ᵃ Artificial Intelligence Research Institute, IIIA-CSIC
ᵇ Universitat Autònoma de Barcelona, UAB
Abstract. This paper focuses on a logical model of induction, and specifically of the common machine learning task of inductive concept learning (ICL). We define an inductive derivation relation, which characterizes which hypotheses can be induced from sets of examples, and show its properties. Moreover, we will also consider the problem of communicating inductive inferences between two agents, which corresponds to the multi-agent ICL problem. Thanks to the introduced logical model of induction, we will show that this communication can be modeled using computational argumentation.
Keywords. Induction, Logic, Argumentation, Machine Learning
Introduction
Inductive inference is the basis for all machine learning methods which learn general hypotheses or models from examples. However, there has been little effort in finding a logical characterization of inductive inference, except for a few proposals such as [6]. This paper focuses on a logical model of inductive inference, and specifically of the common machine learning task of inductive concept learning (ICL).
The lack of a formal logical model of induction has hindered the development of approaches that combine induction with other forms of reasoning, such as the defeasible reasoning used in computational argumentation. In this paper, we define an inductive derivation relation (denoted by |∼), which characterizes which hypotheses can be induced from sets of examples, and show the properties of this inductive derivation relation. We will focus both on the single-agent inductive concept learning process and on a multi-agent setting. For multi-agent settings, we will show that the problem of communicating inductive inferences can be modeled as an argumentation framework. Since inductive inference is a form of defeasible inference, we will see that our inductive derivation relation can easily be combined with an argumentation framework, constituting a coherent model of multi-agent inductive concept learning.
The remainder of this paper is organized as follows. Section 1 introduces the problem of inductive concept learning as typically framed in the machine learning literature. Then, Section 2 introduces a logical model of induction and proposes an inductive derivation relation. Section 3 then focuses on the multi-agent induction problem, framing it as an argumentation process. Finally, the paper closes with related work and conclusions.

¹ Corresponding Author: IIIA (Artificial Intelligence Research Institute), CSIC (Spanish Council for Scientific Research), Campus UAB, 08193 Bellaterra, Catalonia (Spain), santi@iiia.csic.es.
1. Inductive Concept Learning
Concept learning [10] using inductive techniques is not defined formally; rather, it is usually defined as a task, as follows:
Given:
1. A set of instances X, expressed in a language L_I
2. A space of hypotheses or generalizations H (expressions in a language L_H)
3. A target concept c, defined as a function c : X → {0, 1}
4. A set D of training examples, where a training example is a pair ⟨xᵢ, c(xᵢ)⟩
Find: a hypothesis h ∈ H such that ∀x ∈ X : h(x) = c(x)
This strictly Boolean definition is usually weakened so that the equality h(x) = c(x) need not hold for all examples in X but only for a percentage of them; the difference is called the error of the learnt hypothesis. This definition, although widespread, is unsatisfactory and leaves several issues without a precise characterization. For example, the space of hypotheses H is usually expressed only by conjunctive formulas, yet most concepts need more than one conjunctive formula (more than one generalization); this is "left outside" the definition and is explained as part of the strategy of an inductive algorithm. An instance is the set-covering strategy: one definition h₁ is found that covers only part of the positive examples in D; the covered examples are then eliminated, yielding a new set D′ that is used in the next step.
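As an illustration, here is a minimal Python sketch of that set-covering loop. The `learn_one` routine, which finds a single conjunctive definition covering some positives and no negatives, is a hypothetical placeholder: any rule learner could fill that role.

```python
from typing import Callable, List, Tuple

Instance = dict                      # e.g. {"P1": True, "P2": False}
Hypothesis = Callable[[Instance], bool]

def set_covering(D: List[Tuple[Instance, int]],
                 learn_one: Callable[[List[Instance], List[Instance]], Hypothesis]
                 ) -> List[Hypothesis]:
    """Greedy set-covering: learn one conjunctive definition at a time,
    drop the positive examples it covers, and repeat on the remainder."""
    positives = [x for x, label in D if label == 1]
    negatives = [x for x, label in D if label == 0]
    hypotheses: List[Hypothesis] = []
    while positives:
        h = learn_one(positives, negatives)   # one conjunctive generalization
        covered = [x for x in positives if h(x)]
        if not covered:                       # no progress: stop with a partial cover
            break
        hypotheses.append(h)
        positives = [x for x in positives if not h(x)]   # the new D' of the text
    return hypotheses   # the learnt concept is the disjunction of these definitions
```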
Another definition of inductive concept learning (ICL) is the one used in Inductive Logic Programming (ILP) [9], where background knowledge, in addition to the examples, has to be taken into account. Nevertheless, ILP also defines ICL as a task to be achieved by an algorithm, as follows:
Given:
1. A set of positive examples E⁺ and negative examples E⁻ of a predicate p
2. A set of Horn rules (background knowledge) B
3. A hypothesis language L_H (a sublanguage of the language of Horn logic)
Find: a hypothesis H ∈ L_H such that
– ∀e ∈ E⁺ : B ∧ H ⊨ e (H is complete)
– ∀e ∈ E⁻ : B ∧ H ⊭ e (H is consistent)
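To make the two conditions concrete, here is a small hedged sketch: `entails` is an assumed black-box prover standing in for ⊨ (for Horn clauses it could be, e.g., forward chaining); everything else is ordinary bookkeeping.

```python
from typing import Callable, Iterable, Set

def complete_and_consistent(H: Set, B: Set, E_pos: Iterable, E_neg: Iterable,
                            entails: Callable[[Set, object], bool]) -> bool:
    """Check the two ILP conditions for hypothesis H with background B."""
    theory = set(B) | set(H)
    complete = all(entails(theory, e) for e in E_pos)        # for all e in E+: B ∧ H ⊨ e
    consistent = not any(entails(theory, e) for e in E_neg)  # for all e in E-: B ∧ H ⊭ e
    return complete and consistent
```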
In this paper our goal is to provide a logical model of inductive inference in ICL that covers the commonly held but informally defined task of learning concept descriptions by induction in machine learning.
2. Inductive Inference for Concept Learning
In order to present our model of induction, let us start by describing the language we will use, which corresponds to a small fragment of first-order logic and is built as follows. For the sake of simplicity we assume to work with two disjoint finite sets of unary predicates: a set of predicates to describe attributes, Pred_At = {P₁, ..., Pₙ}, and a set of predicates to denote the concepts to be learnt, Pred_Con = {C₁, ..., Cₘ}. To simplify notation, for each C ∈ Pred_Con we will write Ĉ(·) to denote either C(·) or ¬C(·); moreover, we will write ¬Ĉ(·) to denote ¬C(·) if Ĉ(·) = C(·), and to denote C(·) if Ĉ(·) = ¬C(·). Moreover, we assume a finite domain of constants D = {a₁, ..., aₘ}, which will be used as example identifiers. For instance, if P ∈ Pred_At, C ∈ Pred_Con and a ∈ D, then P(a) will denote that example a has the attribute P, and C(a) will denote that the concept C applies to a. Our formulas will be of two kinds:
– Examples will be conjunctions of the form φ(a) ∧ Ĉ(a), where φ(a) = Q₁(a) ∧ ... ∧ Qₖ(a), with Qᵢ(a) being of the form Pᵢ(a) or ¬Pᵢ(a). A positive example of C will be of the form φ(a) ∧ C(a); a negative example of C will be of the form φ(a) ∧ ¬C(a).
– Rules will be universally quantified formulas of the form ∀x (φ(x) → Ĉ(x)), where φ(x) = Q₁(x) ∧ ... ∧ Qₗ(x), with Qᵢ(x) being of the form Pᵢ(x) or ¬Pᵢ(x).
The set of examples will be denoted by L_e and the set of rules by L_r, and the set of all formulas of our language is L = L_e ∪ L_r. In what follows, we will use the symbol ⊢ to denote derivation in classical first-order logic. By background knowledge we will refer to a finite set of formulas K ⊆ L_r, although sometimes we will consider K as the conjunction of its formulas.
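Throughout this section, the following toy Python encoding may help ground the definitions. It is only an illustrative sketch under our own naming choices: literals are strings (with a leading "-" for negation), an example or rule is a pair of a body (a frozenset of attribute literals) and a concept literal, and `closure` approximates ⊢ by forward chaining over a Horn-like background knowledge K.

```python
from typing import FrozenSet, Set, Tuple

Literal = str                                  # "P1", or "-P1" for ¬P1
Example = Tuple[FrozenSet[Literal], Literal]   # φ(a) and Ĉ(a)
Rule = Tuple[FrozenSet[Literal], Literal]      # α and Ĉ in ∀x (α(x) → Ĉ(x))
KB = Set[Tuple[FrozenSet[Literal], Literal]]   # background rules: body → head

def neg(l: Literal) -> Literal:
    """Complement of a literal."""
    return l[1:] if l.startswith("-") else "-" + l

def closure(facts: FrozenSet[Literal], K: KB) -> FrozenSet[Literal]:
    """Forward-chaining closure of a description under K (stands in for ⊢)."""
    out = set(facts)
    changed = True
    while changed:
        changed = False
        for body, head in K:
            if body <= out and head not in out:
                out.add(head)
                changed = True
    return frozenset(out)

# a positive example of C: a1 has attributes P1 and ¬P2
e1: Example = (frozenset({"P1", "-P2"}), "C")
```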
Definition 1 (Covering) Given background knowledge K, we say that a rule r := ∀x (α(x) → Ĉ(x)) covers an example e = φ(a) ∧ Ĉ(a) when φ(a) ∧ K ⊢ α(a).
These notions allow us to define inductive inference of rules from examples.
Definition 2 (Inductive Derivation) Given background knowledge K, a set of examples Δ ⊆ L_e and a rule r = ∀x (α(x) → Ĉ(x)), the inductive derivation Δ |∼_K ∀x (α(x) → Ĉ(x)) holds iff:
1) (Explanation) r covers at least one positive example of Ĉ in Δ (i.e. one of the form φ(a) ∧ Ĉ(a)), and
2) (Consistency) r does not cover any negative example of Ĉ in Δ (i.e. any of the form φ(a) ∧ ¬Ĉ(a)).
Notice that if we have two conflicting formulas in Δ of the form φ(a) ∧ C(a) and ψ(b) ∧ ¬C(b), where the example a has more (or fewer) description attributes than example b, then no rule ∀x (α(x) → Ĉ(x)) covering either example can be inductively derived from Δ. The next definition identifies when a set of examples is free of this kind of conflict.
Definition 3 (Consistency) A set of examples Δ is said to be consistent with respect to a concept C and background knowledge K when: if φ(a) ∧ C(a) and ψ(b) ∧ ¬C(b) belong to Δ, then both K ⊬ ∀x (φ(x) → ψ(x)) and K ⊬ ∀x (ψ(x) → φ(x)).
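In the toy encoding above, Definitions 1–3 become a few set comparisons. This is again only a sketch, reusing `neg` and `closure`; under this Horn-like encoding, K ⊢ ∀x (φ(x) → ψ(x)) is approximated by ψ ⊆ closure(φ, K).

```python
def covers(r: Rule, e: Example, K: KB) -> bool:
    """Def. 1: r covers e when φ(a) ∧ K ⊢ α(a)."""
    alpha, _ = r
    phi, _ = e
    return alpha <= closure(phi, K)

def inductively_derives(delta, r: Rule, K: KB) -> bool:
    """Def. 2: Δ |~_K r iff r covers a positive example of its head
    (Explanation) and covers no negative one (Consistency)."""
    _, head = r
    explains = any(covers(r, e, K) for e in delta if e[1] == head)
    clashes = any(covers(r, e, K) for e in delta if e[1] == neg(head))
    return explains and not clashes

def consistent_examples(delta, K: KB) -> bool:
    """Def. 3: no conflicting pair where one description entails the other."""
    return not any(
        c2 == neg(c1) and (psi <= closure(phi, K) or phi <= closure(psi, K))
        for phi, c1 in delta for psi, c2 in delta)
```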
Definition 4 (Inducible Rules) Given a set of examples Δ and background knowledge K, we call IR_K(Δ) = {∀x (φ(x) → Ĉ(x)) | Δ |∼_K ∀x (φ(x) → Ĉ(x))} the set of all rules that can be induced from Δ and K.
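In this finite setting, IR_K(Δ) can be computed by brute-force enumeration of candidate rule bodies filtered by |∼_K. The sketch below (continuing the running example) is exponential in the number of attribute predicates; it is meant only to make the definition executable, not to be practical.

```python
from itertools import product

def inducible_rules(delta, K: KB, preds, head: Literal = "C") -> Set[Rule]:
    """Def. 4: all rules ∀x (φ(x) → head(x)) such that Δ |~_K the rule."""
    IR: Set[Rule] = set()
    for signs in product((None, True, False), repeat=len(preds)):
        body = frozenset(p if s else neg(p)
                         for p, s in zip(preds, signs) if s is not None)
        if body and inductively_derives(delta, (body, head), K):
            IR.add((body, head))
    return IR
```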
We will assume in the rest of the paper that IR_K(Δ) is finite. Next we show some interesting properties of the inductive inference |∼_K.
Lemma 1 The inductive inference |∼_K satisfies the following properties:
1. Reflexivity: if Δ is consistent w.r.t. C and K, then if φ(a) ∧ C(a) ∈ Δ then Δ |∼_K ∀x (φ(x) → C(x)).
2. Positive monotonicity: Δ |∼_K ∀x (α(x) → C(x)) implies Δ ∪ {φ(a) ∧ C(a)} |∼_K ∀x (α(x) → C(x)).
3. Negative non-monotonicity: Δ |∼_K ∀x (α(x) → C(x)) does not imply Δ ∪ {φ(a) ∧ ¬C(a)} |∼_K ∀x (α(x) → C(x)).
4. If K ⊢ ∀x (φ(x) → α(x)) then Δ |∼_K ∀x (α(x) → C(x)) does not imply Δ |∼_K ∀x (φ(x) → C(x)).
5. If Δ |∼_K ∀x (α(x) → C(x)) and ⊢ ∀x (α(x) → φ(x)) then Δ ̸|∼_K ∀x (φ(x) → ¬C(x)).
6. If Δ |∼_K ∀x (α(x) → C(x)) and ⊢ ∀x (φ(x) → α(x)) then Δ ̸|∼_K ∀x (φ(x) → ¬C(x)).
7. Falsity preserving: let r = ∀x (α(x) → C(x)) be such that it covers a negative example from Δ, hence r ∉ IR_K(Δ); then r ∉ IR_K(Δ ∪ Δ′) for any further set of examples Δ′.
8. IR_K(Δ₁ ∪ Δ₂) ⊆ IR_K(Δ₁) ∪ IR_K(Δ₂)
Proof: 1. Since φ(a) ∧ C(a) ∈ Δ and we obviously have φ(a) ∧ K ⊢ φ(a), explanation trivially holds. Now assume ψ(a) ∧ ¬C(a) ∈ Δ. Then, since Δ is consistent w.r.t. C and K, ψ(a) ∧ K ⊬ φ(a), hence consistency also holds.
2. Trivial.
3. The reason is that nothing prevents φ(a) ∧ K ⊢ α(a) from holding.
4. The reason is that, since φ is more specific than α, it may not cover any example.
5. Let us assume that ⊢ ∀x (α(x) → φ(x)) and Δ |∼_K ∀x (φ(x) → ¬C(x)). Then, by consistency, for all ψ(a) ∧ C(a) ∈ Δ we have ψ(a) ∧ K ⊬ φ(a), and hence ψ(a) ∧ K ⊬ α(a) as well. Then clearly, Δ ̸|∼_K ∀x (α(x) → C(x)).
6. Let us assume now that ⊢ ∀x (φ(x) → α(x)) and Δ |∼_K ∀x (φ(x) → ¬C(x)). Then, by explanation, there exists ψ(a) ∧ ¬C(a) ∈ Δ such that ψ(a) ∧ K ⊢ φ(a). But then we have ψ(a) ∧ K ⊢ α(a) as well, so again Δ ̸|∼_K ∀x (α(x) → C(x)).
7. Notice that if r covers a negative example of Δ, that particular example will remain in Δ ∪ Δ′.
8. Let R ∈ IR_K(Δ₁ ∪ Δ₂). This means that R covers at least one positive example e⁺ ∈ Δ₁ ∪ Δ₂ and covers no negative example of Δ₁ ∪ Δ₂, so it covers no negative example of either Δ₁ or Δ₂. Now, if e⁺ ∈ Δ₁ then clearly R ∈ IR_K(Δ₁); otherwise, if e⁺ ∈ Δ₂, then R ∈ IR_K(Δ₂). Hence, in any case, R ∈ IR_K(Δ₁) ∪ IR_K(Δ₂).
Let us now examine the intuitive interpretation of the properties in Lemma 1 from the point of view of ICL; for this purpose we will reformulate some notions into the vocabulary commonly used in ICL. The first property, Reflexivity, transforms (or lifts) every example e ∈ Δ into a rule r_e where constants have been substituted by variables. In the ICL literature this lifting is usually called the "single representation trick," by which an example in the language of instances is transformed into an expression in the language of generalizations.
Property 2 states that adding a positive example e⁺ does not invalidate any existing induced rule, i.e. IR_K(Δ) does not decrease; notice that it can increase, since there may be induced rules that explain e⁺ that were not in IR_K(Δ) but are in IR_K(Δ ∪ {e⁺}). Property 3 states that adding a negative example e⁻ might invalidate existing induced rules in IR_K(Δ), i.e. IR_K(Δ ∪ {e⁻}) ⊆ IR_K(Δ). Property 4 states that a specialization of an induced rule is not necessarily itself in IR_K(Δ), since it may not explain any example in Δ. Properties 5 and 6 state that generalizing (resp. specializing) an induced rule will never yield a rule concluding the negation of the target concept.
Property 7 states the well-known fact that inductive inference is falsity preserving, i.e. once we know some induced rule is not valid, it will never be valid again. This is related to Property 3: once a negative example defeats an induced rule r, we know r will never be valid regardless of how many examples are added to Δ, i.e. it will never be in IR_K(Δ ∪ Δ′). Property 8 states that the rules that can be induced from the union of two sets of examples are a subset of the union of the rules that can be induced from each set.
The notions of inductive derivation and inducible rules allow us to next define an inductive theory for a concept as a set of inducible rules which, together with the background knowledge, explain all positive examples.
Definition 5 (Inductive Theory) An inductive theory T for a concept C, w.r.t. Δ and K, is a subset T ⊆ IR_K(Δ) such that for all φ(a) ∧ C(a) ∈ Δ it holds that T ∪ K ∪ {φ(a)} ⊢ C(a). T is minimal if there is no T′ ⊊ T that is an inductive theory for C.
Since rules in IR_K(Δ) do not cover any negative example, notice that if T is an inductive theory for C w.r.t. Δ and K, and ψ(a) ∧ ¬C(a) ∈ Δ for some constant a, then it holds that T ∪ K ∪ {ψ(a)} ⊬ C(a). In the remainder of this paper we will assume agents have an algorithm capable of generating inductive theories, e.g. [11].
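One simple such algorithm is a greedy cover over the inducible rules, sketched below in the running encoding. Greedy selection keeps the theory small, although it does not guarantee minimality in the sense of Definition 5.

```python
def inductive_theory(delta, K: KB, rules: Set[Rule], head: Literal = "C"):
    """Def. 5 (sketch): greedily pick rules from `rules` (a subset of IR_K(Δ))
    until every positive example of `head` in Δ is covered; None if impossible."""
    uncovered = [e for e in delta if e[1] == head]
    theory: Set[Rule] = set()
    while uncovered:
        best = max(rules - theory,
                   key=lambda r: sum(covers(r, e, K) for e in uncovered),
                   default=None)
        if best is None or not any(covers(best, e, K) for e in uncovered):
            return None               # no inductive theory for Δ exists
        theory.add(best)
        uncovered = [e for e in uncovered if not covers(best, e, K)]
    return theory
```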
3. Multi-agent Induction through Argumentation
We will consider a multi-agent system scenario with two agents Ag₁ and Ag₂ under the following assumptions: (1) both agents share the same background knowledge K,² and (2) each agent Agᵢ has a set of examples Δᵢ ⊆ L_e such that Δ₁ ∪ Δ₂ is consistent. The goal of each agent Agᵢ is to induce an inductive theory Tᵢ of a concept C such that Tᵢ ⊆ IR(Δ₁ ∪ Δ₂) and Tᵢ constitutes an inductive theory w.r.t. Δ₁ ∪ Δ₂. We will call this problem multi-agent ICL.
A naïve approach is for both agents to share their sets of examples, but that might not be feasible for a number of reasons, such as cost or privacy. In this section we will show that by communicating their inductive inferences two agents can also solve the multi-agent ICL problem. Let us present an argumentation-based framework that models this process of sharing and comparing inductive inferences in order to address the multi-agent ICL problem.
3.1. Computational Argumentation
Let us introduce the necessary notions of computational argumentation we will use in the rest of this paper. In our setting, an argumentation framework will be a pair A = (Γ, ↝), where arguments are rules, i.e. Γ ⊆ L_r.
² For simplicity, since both agents share K, in the rest of this paper we will drop the K from the notation.
Definition 6 Given two rules R, R′ ∈ Γ, the attack relation R ↝ R′ holds when R = ∀x (α(x) → C(x)), R′ = ∀x (β(x) → ¬C(x)), and K ⊢ ∀x (α(x) → β(x)); otherwise, R ̸↝ R′. If R ↝ R′ and R′ ̸↝ R we say that R defeats R′; otherwise, if both R ↝ R′ and R′ ↝ R (i.e. if K ⊢ ∀x (α(x) ↔ β(x))), we say that R blocks R′.
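In the running encoding, the attack, defeat and block relations are direct translations of Definition 6 (again a sketch; as before, K ⊢ ∀x (α(x) → β(x)) is approximated by β ⊆ closure(α, K)):

```python
def attacks(R: Rule, Rp: Rule, K: KB) -> bool:
    """Def. 6: R attacks R' iff the heads are complementary
    and K ⊢ ∀x (α(x) → β(x))."""
    (alpha, c), (beta, cp) = R, Rp
    return cp == neg(c) and beta <= closure(alpha, K)

def defeats(R: Rule, Rp: Rule, K: KB) -> bool:
    return attacks(R, Rp, K) and not attacks(Rp, R, K)

def blocks(R: Rule, Rp: Rule, K: KB) -> bool:
    return attacks(R, Rp, K) and attacks(Rp, R, K)
```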
As in any argumentation system, the goal is to determine whether a given argument is acceptable (or warranted) according to a given semantics. In our case we will adopt the semantics based on dialectical trees [3,13].
Definition 7 Given an argumentation framework A= ,)and R0Γ, an argu-
mentation line rooted in R0in Ais a sequence: λ=hR0, R1, R2, . . . , Rkisuch that:
1. Ri+1 Ri(for i= 0,1,2, . . . k),
2. if Ri+1 Riand Riblocks Ri1then Ri6Ri+1.
Notice that, given Def. 6, an argumentation line has no circularities and is always finite.
We will be interested in the set Λ(R0)of maximal argumentation lines rooted in R0,
i.e. those that are not subsequences of other argumentation lines3rooted in R0. It is clear
that Λ(R0)can be arranged in the form of a tree, where all paths from the root to the
leaf nodes exactly correspond to all the possible maximal argumentation lines rooted in
R0. In order to decide whether R0is accepted in A, the nodes of this tree are marked U
(undefeated) or D (defeated) according to the following (cautious) rules:
1. every leaf node is marked U
2. each inner node is marked U iff all of its children are marked D, otherwise it is
marked D
Then the status of a rule R0in the argumentation framework Ais defined as follows:
R0will be accepted if R0is marked U in the tree Λ(R0)
R0will be rejected if R0is marked D in the tree Λ(R0)
In this way, we decide the status of each argument and define two sets:
Accepted(A) = {RΓ|Ris accepted}Rejected(A) = Γ \Accepted(A)
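The marking procedure can be sketched recursively without materializing the tree: a rule is U exactly when every attacker that may legally extend the argumentation line is D. The sketch below builds on the previous helpers; the `Rp in line` test is a safeguard mirroring the no-circularity remark above.

```python
def undefeated(R, Gamma, K: KB, line) -> bool:
    """Mark the dialectical tree below R along argumentation line `line`."""
    for Rp in Gamma:
        if not attacks(Rp, R, K) or Rp in line:  # repeated arguments never extend a line
            continue
        # condition 2 of Def. 7: after a blocking move, only strict defeaters may answer
        if len(line) >= 2 and blocks(line[-1], line[-2], K) and blocks(Rp, R, K):
            continue
        if undefeated(Rp, Gamma, K, line + [Rp]):
            return False        # an undefeated child marks R as D
    return True                 # leaves, and nodes with all children D, are U

def accepted_rules(Gamma, K: KB):
    """Accepted(A) for A = (Γ, ↝); Rejected(A) is its complement in Γ."""
    return {R for R in Gamma if undefeated(R, Gamma, K, [R])}
```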
3.2. Argumentation-based Induction
Given a set of examples Δ and an argumentation framework A = (Γ, ↝) such that IR(Δ) ⊆ Γ, we can define the set AIR(Δ, A) of argumentation-consistent induced rules as those induced from Δ which are accepted by A, i.e. AIR(Δ, A) = IR(Δ) ∩ Accepted(A). This allows us to define argumentation-consistent inductive theories.
Definition 8 An argumentation-consistent inductive theory T for a concept C, with respect to Δ and an argumentation framework A = (Γ, ↝) such that IR(Δ) ⊆ Γ, is an inductive theory of Δ such that T ⊆ AIR(Δ, A).
In other words, an argumentation-consistent inductive theory is an inductive theory composed of rules which have not been defeated by the arguments known to an agent.
³ An argumentation line λ₁ is a subsequence of another one λ₂ if the set of arguments in λ₁ is a subset of the set of arguments in λ₂.
3.3. Argumentation-based Induction in Multi-agent Systems
Let us see now how argumentation and induction can be combined in order to model the multi-agent ICL problem for two agents. The main idea is that agents induce rules from the examples they know and then share them with the other agent. Rules are then contrasted using an argumentation framework, and only those rules which are consistent are accepted in order to find a joint inductive theory.
Thus, in addition to K and the set of examples Δᵢ, each agent has a different argumentation framework Aᵢ, corresponding to its individual point of view. Let us analyze the situation where each agent Agᵢ communicates all its inducible rules IR(Δᵢ) to the other agent. As a result, each agent will have the same argumentation framework A = (IR(Δ₁) ∪ IR(Δ₂), ↝). Given a rule R ∈ Accepted(A), clearly there are no counterexamples of R in either Δ₁ or Δ₂ (otherwise, given the reflexivity property, the arguments corresponding to those examples would defeat R). Thus, if T₁* and T₂* are argumentation-consistent inductive theories of Δ₁ and Δ₂ respectively with respect to A, then T₁* ∪ T₂* is clearly a (joint) inductive theory w.r.t. Δ₁ ∪ Δ₂.
Therefore, two agents can reach their goal of finding a joint inductive theory w.r.t. Δ₁ ∪ Δ₂ by sharing all of their inductive inferences IR(Δ₁) and IR(Δ₂), then individually computing argumentation-consistent inductive theories T₁* and T₂*, and finally computing the union T₁* ∪ T₂*. In other words, by sharing all the inductive inferences and using argumentation, agents can reach their goal in the same way as by sharing all the examples. However, sharing the complete IR(Δᵢ) is not a practical solution, since it can be very large. Nevertheless, not all arguments in IR(Δᵢ) need to be exchanged. We will present a process that finds a joint inductive theory w.r.t. Δ₁ ∪ Δ₂ without forcing the agents to exchange their complete IR(Δᵢ).
During this process, agents communicate rules to each other. Let us call Sⱼᵗ the set of rules that agent Agⱼ has communicated to Agᵢ by a given time t during this process. Moreover, we assume that Sⱼᵗ ⊆ IR(Δⱼ), i.e. that the rules communicated by agent Agⱼ are rules that Agⱼ has been able to induce from its collection of examples. Thus, for two agents, A₁ = (IR(Δ₁) ∪ S₂, ↝) (i.e. Ag₁ has as arguments all its own inducible rules plus the rules shared by the other agent Ag₂), and analogously A₂ = (IR(Δ₂) ∪ S₁, ↝).
For each argument R ∈ Rejected(Aᵢ), let us denote by Defeatersᵢ(R) the set of undefeated children of R in the argument tree Λ(R) in Aᵢ (which will be non-empty by definition). Two agents can find a joint inductive theory w.r.t. Δ₁ ∪ Δ₂ as follows (a code sketch follows the protocol):
1. Before the first round, t = 0: S₁⁰ = ∅, S₂⁰ = ∅, T₁⁰ = ∅, T₂⁰ = ∅.
2. At each new round t, starting at t = 1, each agent Agᵢ performs two actions:
(a) Given Agᵢ's argumentation framework Aᵢᵗ = (IR(Δᵢ) ∪ Sⱼᵗ⁻¹, ↝), Agᵢ generates an argumentation-consistent inductive theory Tᵢᵗ w.r.t. its examples Δᵢ such that (Tᵢᵗ⁻¹ ∩ Accepted(Aᵢᵗ⁻¹)) ⊆ Tᵢᵗ and Tᵢᵗ ∩ Rejected(Aᵢᵗ⁻¹) = ∅, i.e. the new theory Tᵢᵗ contains all the accepted rules from Tᵢᵗ⁻¹ and replaces the rules of Tᵢᵗ⁻¹ that were defeated by new rules.
(b) Agᵢ creates a set of attacks Rᵢᵗ in the following way. Let D = {R ∈ Rejected(Aᵢᵗ) ∩ Sⱼᵗ⁻¹ | Defeatersᵢ(R) ∩ Sᵢᵗ⁻¹ = ∅}. D basically contains all the arguments sent by the other agent which are, according to Agᵢ, defeated, but of which Agⱼ might not be aware (since Agᵢ has not shared with Agⱼ any of the arguments which defeat them). Rᵢᵗ is created by selecting a single (arbitrary) argument R′ ∈ Defeatersᵢ(R) for each R ∈ D. That is, Rᵢᵗ contains one attack for each argument that Agᵢ considers defeated but Agⱼ is not aware of.
3. Then, a new round starts with Sᵢᵗ = Sᵢᵗ⁻¹ ∪ Tᵢᵗ ∪ Rᵢᵗ. When S₁ᵗ = S₁ᵗ⁻¹ and S₂ᵗ = S₂ᵗ⁻¹, the process terminates, i.e. the process ends in the first round in which no agent sends any further rule or attack.
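The whole protocol can now be sketched end-to-end with the helpers introduced earlier. This is a deliberately compressed, non-authoritative rendering: step (a) rebuilds each theory from scratch instead of minimally revising it, and Defeatersᵢ(R) is approximated by the accepted rules that defeat R.

```python
def multi_agent_icl(delta1, delta2, K: KB, preds, head: Literal = "C"):
    """Two-agent joint induction by exchanging rules and attacks (sketch)."""
    delta = (delta1, delta2)
    IR = [inducible_rules(d, K, preds, head) for d in delta]
    S = [set(), set()]                    # S_i: everything Ag_i has sent so far
    T = [set(), set()]
    while True:
        S_prev = [set(S[0]), set(S[1])]
        for i in (0, 1):
            j = 1 - i
            Gamma = IR[i] | S_prev[j]     # A_i^t = (IR(Δ_i) ∪ S_j^{t-1}, ↝)
            acc = accepted_rules(Gamma, K)
            # (a) revise: a theory built from the agent's accepted rules only
            T[i] = inductive_theory(delta[i], K, IR[i] & acc, head) or set()
            # (b) one defeater for each rejected foreign rule not yet answered
            new_attacks = set()
            for R in S_prev[j] - acc:
                defeaters = {Rp for Rp in acc if defeats(Rp, R, K)}
                if defeaters and not (defeaters & S_prev[i]):
                    new_attacks.add(next(iter(defeaters)))   # any single defeater
            S[i] = S_prev[i] | T[i] | new_attacks
        if S[0] == S_prev[0] and S[1] == S_prev[1]:   # nothing new was sent
            return T[0] | T[1]            # joint inductive theory w.r.t. Δ1 ∪ Δ2
```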
If the set Δ₁ ∪ Δ₂ is consistent, when the process terminates each agent Agᵢ has an argumentation-consistent inductive theory Tᵢᵗ w.r.t. Δᵢ that is also consistent with the examples Δⱼ of the other agent Agⱼ (but it might not be an argumentation-consistent inductive theory w.r.t. Δⱼ). However, their union T₁ᵗ ∪ T₂ᵗ is an inductive theory w.r.t. the examples in Δ₁ ∪ Δ₂, and since both agents know T₁ᵗ and T₂ᵗ, both agents can have an argumentation-consistent inductive theory w.r.t. Δ₁ ∪ Δ₂. Notice that Ag₁ can obtain from T₁ᵗ ∪ T₂ᵗ a minimal inductive theory T′ ∪ T₂ᵗ, where T′ ⊆ T₁ᵗ is the minimum set of rules that cover those examples in Δ₁ not covered by T₂ᵗ (and analogously for Ag₂).
Lemma 2 If the set Δ₁ ∪ Δ₂ is consistent, the previous process always ends in a finite number of rounds t, and when it ends T₁ᵗ ∪ T₂ᵗ is an inductive theory w.r.t. Δ₁ ∪ Δ₂.
Proof: First, let us prove that the final theories T₁ᵗ and T₂ᵗ are consistent with Δ₁ ∪ Δ₂. For this purpose we will show that the termination condition (S₁ᵗ = S₁ᵗ⁻¹ and S₂ᵗ = S₂ᵗ⁻¹) implies that the argumentation-consistent inductive theory Tᵢᵗ found by an agent Agᵢ at the final round t has no counterexamples in either Δ₁ or Δ₂.
Let us assume that there is an example aₖ ∈ Δ₁ which is a counterexample of a rule R ∈ T₂ᵗ. Because of the reflexivity property, there is a rule Rₖ ∈ IR(Δ₁) which corresponds to that example. Since Δ₁ ∪ Δ₂ is consistent, there is no counterexample of Rₖ, and thus Rₖ is undefeated. Since, by assumption, Rₖ ↝ R, the termination condition implies that Rₖ must have been in S₁ᵗ⁻¹; but then R would have been defeated, and therefore the rule R could not be part of any argumentation-consistent inductive theory generated by Ag₂, a contradiction. The analogous argument proves that there are no counterexamples of T₁ᵗ in Δ₁ ∪ Δ₂.
Given that Tᵢᵗ is an inductive theory w.r.t. Δᵢ, T₁ᵗ ∪ T₂ᵗ is an inductive theory w.r.t. Δ₁ ∪ Δ₂, because it has no counterexamples in Δ₁ ∪ Δ₂ and every example in Δ₁ ∪ Δ₂ is explained by at least one rule in T₁ᵗ or T₂ᵗ.
Finally, the process has to terminate in a finite number of steps since, by assumption, IR(Δ₁) and IR(Δ₂) are finite sets, and at each round the sets S₁ᵗ and S₂ᵗ grow by at least one new argument; but since Sᵢᵗ ⊆ IR(Δᵢ), only a finite number of new arguments can be added to S₁ᵗ and S₂ᵗ before the termination condition holds.
The process to find a joint inductive theory can be seen as composed of three mechanisms: induction, argumentation and belief revision. Agents use induction to generate general rules from concrete examples, argumentation to decide which of the rules sent by the other agent can be accepted, and belief revision to revise their inductive theories in light of the arguments received. The belief revision process is embodied in how the set of accepted rules Accepted(Aᵢᵗ) changes from round to round, which also determines how an agent's inductive theory changes in light of the arguments shared by the other agent.⁴
⁴ For reasons of space an example of the execution is not included in this paper, but it can be found at http://www.iiia.csic.es/~santi/papers/IL2010_extended.pdf
4. Related Work
Peter Flach [6] introduced a logical analysis of induction focusing on hypothesis generation. In Flach's analysis induction is studied at the meta-level of consequence relations, examining different properties that may be desirable for different kinds of induction, while we focus on a limited form of induction, namely inductive concept learning, extensively studied in machine learning.
Computational argumentation is often modeled using Dung's abstract approach [4], which considers arguments as atomic nodes linked through a binary relation called "attack." On the other hand, there are argumentation systems [12,7,8,2] which take as their basis a logical language and an associated consequence relation used to define arguments. Some of these systems, like [7], use a logic programming language defined over a set of literals, and an acceptability semantics based on dialectical trees is applied in order to determine the "acceptable arguments." In our argumentation approach, we argue about the acceptability of rules induced from examples, with a well-defined notion of attack relation and a semantics based on dialectical trees.
Finally, regarding the use of argumentation for concept learning, let us mention two related works. Ontañón and Plaza [11] study an argumentation-based framework (A-MAIL) that allows agents to achieve a shared, agreed-upon meaning for concepts. Concept descriptions are created by agents using inductive learning and revised during argumentation until a convergent concept description is found and agreed upon. A-MAIL integrates inductive machine learning and multi-agent argumentation in a coherent approach where the belief revision mechanism that enables concept convergence is sound w.r.t. the induction and argumentation models.
Amgoud and Serrurier [1] propose an argumentation framework for the inductive concept learning problem. In their framework, both examples and hypotheses are considered arguments, and an attack relation among them is defined following Dung's framework. However, they do not model the inductive process of generating hypotheses from examples, but assume that a set of candidate hypotheses exists.
5. Conclusions and Future Work
This paper has two main contributions. First, we have presented a logical characterization of the inductive inference used in inductive concept learning, a common problem in machine learning. Second, we have proposed an argumentation-based approach to model the process of communicating inductive inferences that arises in multi-agent inductive concept learning. This combination of induction with argumentation in a common model is useful in itself, as we have shown elsewhere [11], for communication in multi-agent systems and for learning from communication. But, more importantly, this combination shows the usefulness of developing a logical characterization of induction: without a formal framework to model induction, there would be no way to combine it with other forms of inference and reasoning, such as the defeasible form of reasoning that is argumentation.
Our future work will focus on moving from a Boolean approach to a graded (or weighted) one. ICL techniques usually accept generalizations that are not 100% consistent with the set of examples. We intend to investigate a logical model of induction where generalizations have an associated confidence measure. Integrating induction with argumentation can then make use of such a confidence measure, specifically by considering weighted argumentation frameworks [5], where attacks may have different weights. We intend to investigate how weighted attacks and confidence-based induction could be modeled using many-valued or graded logics.
Acknowledgements
We are grateful to Prof. Francesc Esteva for his insights during discussions on earlier drafts of this paper, and to the anonymous reviewers for their valuable comments. Research partially funded by the projects Agreement Technologies (CONSOLIDER CSD2007-0022), ARINF (TIN2009-14704-C03-03), Next-CBR (TIN2009-13692-C03-01), LoMoReVI (FFI2008-03126-E/FILO), and by the grants 2009-SGR-1433 and 2009-SGR-1434 of the Generalitat de Catalunya.
References
[1] Leila Amgoud and Mathieu Serrurier. Arguing and explaining classifications. In Proc. AAMAS-07,
pages 1–7, New York, NY, USA, 2007. ACM.
[2] Philippe Besnard and Anthony Hunter. Elements of Argumentation. The MIT Press, 2008.
[3] Carlos Chesñevar and Guillermo Simari. A lattice-based approach to computing warranted beliefs in
skeptical argumentation frameworks. In Proc. of IJCAI-07, pages 280–285, 2007.
[4] Phan Minh Dung. On the acceptability of arguments and its fundamental role in nonmonotonic reason-
ing, logic programming and n-person games. Artificial Intelligence, 77(2):321–357, 1995.
[5] Paul E. Dunne, Anthony Hunter, Peter McBurney, Simon Parsons, and Michael Wooldridge. Inconsis-
tency tolerance in weighted argument systems. In Proc. of the AAMAS’09, pages 851–858, 2009.
[6] Peter A. Flach. Logical characterisations of inductive learning. In Handbook of defeasible reasoning and
uncertainty management systems: Volume 4 Abductive reasoning and learning, pages 155–196. Kluwer
Academic Publishers, Norwell, MA, USA, 2000.
[7] Alejandro J. García and Guillermo R. Simari. Defeasible logic programming: an argumentative approach. In Theory and Practice of Logic Programming, pages 95–138. Cambridge University Press, 2004.
[8] Guido Governatori, Michael J. Maher, Grigoris Antoniou, and David Billington. Argumentation seman-
tics for defeasible logic. J. Log. and Comput., 14(5):675–702, 2004.
[9] N. Lavrač and S. Džeroski. Inductive Logic Programming: Techniques and Applications. Ellis Horwood, 1994.
[10] Tom Mitchell. Machine Learning. McGraw-Hill, 1997.
[11] Santiago Ontañón and Enric Plaza. Multiagent inductive learning: an argumentation-based approach.
In Proc. ICML-2010, 27th International Conference on Machine Learning, pages 839–846. Omnipress,
2010.
[12] Henry Prakken and Giovanni Sartor. Argument-based extended logic programming with defeasible
priorities. Journal of Applied Non-Classical Logics, 7(1), 1997.
[13] Nicolás Rotstein, Martín Moguillansky, and Guillermo Simari. Dialectical abstract argumentation: a
characterization of the marking criterion. In Proc. of IJCAI-09, pages 898–903, 2009.
The purpose of this paper is to study the fundamental mechanism, humans use in argumentation, and to explore ways to implement this mechanism on computers.We do so by first developing a theory for argumentation whose central notion is the acceptability of arguments. Then we argue for the “correctness” or “appropriateness” of our theory with two strong arguments. The first one shows that most of the major approaches to nonmonotonic reasoning in AI and logic programming are special forms of our theory of argumentation. The second argument illustrates how our theory can be used to investigate the logical structure of many practical problems. This argument is based on a result showing that our theory captures naturally the solutions of the theory of n-person games and of the well-known stable marriage problem.By showing that argumentation can be viewed as a special form of logic programming with negation as failure, we introduce a general logic-programming-based method for generating meta-interpreters for argumentation systems, a method very much similar to the compiler-compiler idea in conventional programming.