Value-oriented Legal Argumentation in Isabelle/HOL
Christoph Benzmüller
Freie Universität Berlin, Germany
David Fuenmayor
University of Luxembourg, Luxembourg
Abstract
Literature in AI & Law contemplates argumentation in legal cases as an instance of theory construction. The task of a lawyer in a legal case is to construct a theory containing: (a) relevant generic facts about the world, (b) relevant legal rules such as precedents and statutes, and (c) contingent facts describing or interpreting the situation at hand. Lawyers then elaborate convincing arguments from these facts and rules, arriving at a positive decision in favour of their client, often employing sophisticated argumentation techniques involving such notions as burden of proof, stare decisis, legal balancing, etc. In this paper we show, by way of example, how to harness Isabelle/HOL to model a lawyer's argumentation using value-oriented legal balancing, while drawing upon shallow embeddings of combinations of expressive modal logics in HOL. We highlight the essential role of model finders (Nitpick) and 'hammers' (Sledgehammer) in assisting the task of legal theory construction, and we share some thoughts on the practicability of extending the catalogue of ITP applications towards legal informatics.
2012 ACM Subject Classification
Keywords and phrases Isabelle/HOL, shallow embedding, preference logic, legal reasoning
Acknowledgements We thank Bertram Lomfeld for encouraging us to take up this formalisation challenge.
1 Introduction
In this paper we explore (value-oriented) legal reasoning as a new application area for higher-order
proof assistants. More specifically, we employ Isabelle/HOL [31] to formalise, verify, and enhance
legal arguments as presented in the context of a legal case between two parties: a plaintiff and a
defendant. In the spirit of previous work in the AI & Law tradition, we tackle the formal reconstruction
of legal cases as a task of theory construction, namely, “building, evaluating and using theories” [5].
Thus, “the task for a lawyer or a judge in a ‘hard case’ is to construct a theory of the disputed rules
that produces the desired legal result, and then to persuade the relevant audience that this theory is
preferable to any theories offered by an opponent" [30].
We utilise the framework of shallow semantical embeddings (SSE; cf. [7, 16]) of (combinations
of) non-classical logics in classical higher-order logic (HOL). HOL, which is instantiated here as
Isabelle/HOL, thereby serves as a meta-logic, rich enough to support the encoding of combinations of
object logics (modal, conditional, deontic, etc. [6, 8, 9, 10]) allowing for the modelling of adaptable
value systems. For this sake, we also integrate some basic notions from formal concept analysis (FCA)
[22] to support our encoding of a theory of legal values as proposed by Lomfeld [28].
This paper improves an unpublished workshop paper [11]; for further details we refer to the
extended, evolving text [12]. The sources of our formalisation are available online [1].
Paper structure: In §2 we outline our object logic of choice, a modal logic of preferences [35],
and we then present an SSE of this logic in the Isabelle/HOL proof assistant. Subsequently we depict in
§3 the encoding of a logic of legal values by drawing upon FCA notions and Lomfeld’s value theory.
In §4 we demonstrate how the formalisation of relevant legal and world knowledge can be used for
formally reconstructing value-oriented arguments for an exemplary property law case. We conclude
in §5 with some comments on related work and further reflections and ideas for the prospective
application of ITP in the legal domain.
2 Shallow Embedding of the Object Logic
2.1 Modal Preference Logic
As will become evident later on, our object logic needs to provide the means for representing (conditional) preferences between propositions. For this sake we have chosen the modal logic of ceteris paribus preferences introduced by van Benthem et al. [35], henceforth referred to simply as the preference logic. For the purposes of the present paper we focus our discussion on its basic preference language, disregarding the mechanism of ceteris paribus clauses. Nevertheless, we have provided a complete encoding and assessment of the full logic in the associated Isabelle/HOL sources [1]. We briefly outline below some relevant syntactic and semantic notions of the logic and refer the reader to [35] for a complete exposition.
The preference logic is composed of normal S4 and K4 modal operators, together with a global existential modality E. Combinations of these modalities enable us to capture a wide variety of propositional preference statements of the form A ≺ B (for different, indexed ≺-relations). The formulas of the logic are inductively defined as follows (where p ranges over a set Prop of propositional constant symbols):

φ, ψ ::= p | φ ∧ ψ | ¬φ | ◇^≼φ | ◇^≺φ | Eφ

◇^≼φ is to be read as “φ is true in a state that is considered to be at least as good as the current state”, ◇^≺φ as “φ is true in a state that is considered to be strictly better than the current state”, and Eφ as “there is a state where φ is true”. □^≼φ, □^≺φ and Aφ can be introduced to abbreviate ¬◇^≼¬φ, ¬◇^≺¬φ and ¬E¬φ, respectively. Further, standard logical connectives such as ∨, → and ↔ can be defined as usual. We use boldface fonts to distinguish the standard logical connectives of the object logic from their counterparts in HOL.
A preference model 𝓜 is a triple 𝓜 = ⟨W, ⪯, V⟩ where: (i) W is a set of states; (ii) ⪯ is a so-called “betterness relation” that is reflexive and transitive (i.e. a preorder), whose strict subrelation ≺ is defined as: w ≺ v iff w ⪯ v and not v ⪯ w, for all v and w (totality of ⪯, i.e. v ⪯ w or w ⪯ v, is generally not assumed); (iii) V is a standard modal valuation. Below we show the truth conditions for the modal connectives (the rest are standard):

𝓜, w ⊨ ◇^≼φ  iff  there exists v ∈ W such that w ⪯ v and 𝓜, v ⊨ φ
𝓜, w ⊨ ◇^≺φ  iff  there exists v ∈ W such that w ≺ v and 𝓜, v ⊨ φ
𝓜, w ⊨ Eφ   iff  there exists v ∈ W such that 𝓜, v ⊨ φ
A formula φ is true at world w ∈ W in model 𝓜 if 𝓜, w ⊨ φ. φ is globally true in 𝓜, denoted 𝓜 ⊨ φ, if φ is true at every w ∈ W. Moreover, φ is valid (in a class of models 𝓒) if it is globally true in every 𝓜 (in 𝓒), denoted ⊨ φ (resp. ⊨_𝓒 φ).
Quite relevant to our purposes is the fact that the preference logic introduces eight semantical definitions for binary preference operations on propositions (⪯_EE, ⪯_AE, ⪯_EA, ⪯_AA, and their strict variants). They correspond, roughly speaking, to the four different ways of combining a pair of universal and existential quantifiers when “lifting” an ordering on worlds to an ordering on sets of worlds (i.e. propositions). In this respect the preference logic can be seen as a family of preference logics encompassing, in particular, the proposals by von Wright [36] and Halpern [24]. The logic appears well suited for effective automation using the SSE approach, which has been an important selection criterion. This judgment is based on good prior experience with the SSE of related (monadic) modal logics [15, 16], which also employ Kripke-style relational semantics.
2.2 Encoding in Meta-logic HOL
We employ the shallow semantical embeddings (SSE) technique [7, 16] to encode (a semantical characterisation of) the logical connectives of an object logic as λ-expressions in HOL. This essentially shows that the object logic can be treated as a fragment of HOL and hence automated as such. For (multi-)modal normal logics, like our preference logic, the relevant semantical structures are Kripke-style relational frames. Formulas can thus be encoded as predicates in HOL taking worlds as arguments.¹

As a result, we obtain a combined, interactive and automated, theorem prover and model finder for (an extended variant of) the preference logic, realised within Isabelle/HOL. This is a new contribution, since we are not aware of any other existing implementation and automation of such a logic. Moreover, the SSE technique supports the automated assessment of meta-logical properties of the embedded logic at the semantical level, which in turn provides practical evidence for the correctness of our encoding.
We now give a succinct overview of the SSE of the preference logic [1]. The embedding starts with declaring the HOL base type ι, corresponding to the domain of possible worlds/states in our formalisation. Propositions are modelled as predicates on objects of type ι (i.e. as truth-sets of worlds) and are hence given the type ι→o, which is abbreviated as σ in the remainder. The “betterness relation” ⪯ is introduced as an uninterpreted constant symbol of type ι→ι→o in HOL, and its strict variant ≺ is introduced as an abbreviation of the same type, standing for the HOL term λv.λw.(v ⪯ w ∧ ¬(w ⪯ v)). ⪯-accessible worlds are interpreted as those that are at least as good as the present one, and we hence postulate that ⪯ is a preorder, i.e. reflexive and transitive. In a next step the σ-type lifted logical connectives of the object logic are introduced as abbreviations for λ-terms in the meta-logic HOL. The conjunction operator ∧, for example, is introduced as an abbreviation of type σ→σ→σ which stands for the HOL term λφ_σ.λψ_σ.λw_ι.(φ w ∧ ψ w), so that φ_σ ∧ ψ_σ reduces to λw_ι.(φ w ∧ ψ w), denoting the set² of all worlds w in which both φ and ψ hold. Analogously, for negation, we introduce an abbreviation ¬ of type σ→σ, which stands for λφ_σ.λw_ι.¬(φ w).
The operators ◇^≼ and ◇^≺ use ⪯ and ≺ as guards in their definitions. These operators are introduced as shorthands of type σ→σ abbreviating the HOL terms λφ_σ.λw_ι.∃v_ι.(w ⪯ v ∧ φ v) and λφ_σ.λw_ι.∃v_ι.(w ≺ v ∧ φ v), respectively. ◇^≼φ_σ thus reduces to λw_ι.∃v_ι.(w ⪯ v ∧ φ v), denoting the set of all worlds w such that φ holds in some world v that is at least as good as w; analogously for ◇^≺. Additionally, the global existential modality E of type σ→σ is introduced as shorthand for the HOL term λφ_σ.λw_ι.∃v_ι.(φ v). The duals □^≼φ_σ, □^≺φ_σ and Aφ_σ can easily be added so that they are equivalent to ¬◇^≼¬φ_σ, ¬◇^≺¬φ_σ and ¬E¬φ_σ, respectively. A special predicate ⌊φ_σ⌋ (read “φ_σ is valid”) for σ-type lifted formulas in HOL is defined as an abbreviation for ∀w_ι.(φ_σ w).
⪯ is now ‘lifted’ to a preference relation between propositions (sets of worlds):³

(φ_σ ⪯_EE ψ_σ) u_ι  iff  ∃s_ι.(φ_σ s ∧ (∃t_ι. ψ_σ t ∧ s ⪯ t))    (u_ι arbitrary)
(φ_σ ⪯_EA ψ_σ) u_ι  iff  ∃t_ι.(ψ_σ t ∧ (∀s_ι. φ_σ s → s ⪯ t))    (u_ι arbitrary)
(φ_σ ⪯_AE ψ_σ) u_ι  iff  ∀s_ι.(φ_σ s → (∃t_ι. ψ_σ t ∧ s ⪯ t))    (u_ι arbitrary)
(φ_σ ⪯_AA ψ_σ) u_ι  iff  ∀s_ι.(φ_σ s → (∀t_ι. ψ_σ t → s ⪯ t))    (u_ι arbitrary)
As an illustration, we can read φ ≺_AA ψ as “every ψ-state being better than every φ-state”, and read φ ≺_AE ψ as “every φ-state having a better ψ-state” (similarly for others). Each of these non-trivial variants can be argued for [35, 25]. However, as we will reveal in §3, only the EA- and
AE-variants satisfy the minimal conditions required for a logic of value aggregation. Moreover, they are the only ones that satisfy transitivity.

¹ This corresponds to the well-known standard translation to first-order logic. Observe, however, that the additional expressivity of HOL allows us to also encode and flexibly combine non-normal modal logics (conditional, deontic, etc.) and to encode different kinds of quantifiers; see e.g. [6, 8, 9, 10].
² In HOL (with Henkin semantics) sets are associated with their characteristic functions.
³ The variant ⪯_EA as originally presented in [35] was in fact wrongly formulated. This mistake was uncovered during the (iterative) formalisation process thanks to Isabelle/HOL.

Figure 1 SSE of the basic preference logic in Isabelle/HOL (extract).
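Since the extract in Fig. 1 is not reproduced here, the following minimal Isabelle/HOL sketch illustrates the general shape of such an embedding. It is our own reconstruction under assumed names (better, mdia, valid, etc.); the authors' actual sources [1] may differ in notation and detail.

```isabelle
(* Hypothetical sketch of the SSE described above; theory continued in later snippets. *)
theory PrefLogicSketch
  imports Main
begin

typedecl i                                   (* type ι of possible worlds/states *)
type_synonym \<sigma> = "i \<Rightarrow> bool"                 (* lifted propositions: sets of worlds *)

axiomatization better :: "i \<Rightarrow> i \<Rightarrow> bool"     (* the "betterness" relation ⪯, a preorder *)
  where better_refl:  "better w w"
    and better_trans: "better u v \<Longrightarrow> better v w \<Longrightarrow> better u w"

abbreviation sbetter :: "i \<Rightarrow> i \<Rightarrow> bool"      (* strict variant ≺ *)
  where "sbetter v w \<equiv> better v w \<and> \<not> better w v"

(* σ-type lifted connectives as abbreviations for λ-terms in HOL *)
abbreviation mnot :: "\<sigma> \<Rightarrow> \<sigma>"      where "mnot \<phi> \<equiv> \<lambda>w. \<not> \<phi> w"
abbreviation mand :: "\<sigma> \<Rightarrow> \<sigma> \<Rightarrow> \<sigma>"  where "mand \<phi> \<psi> \<equiv> \<lambda>w. \<phi> w \<and> \<psi> w"
abbreviation mor  :: "\<sigma> \<Rightarrow> \<sigma> \<Rightarrow> \<sigma>"  where "mor \<phi> \<psi> \<equiv> \<lambda>w. \<phi> w \<or> \<psi> w"
abbreviation mimp :: "\<sigma> \<Rightarrow> \<sigma> \<Rightarrow> \<sigma>"  where "mimp \<phi> \<psi> \<equiv> \<lambda>w. \<phi> w \<longrightarrow> \<psi> w"
abbreviation mequ :: "\<sigma> \<Rightarrow> \<sigma> \<Rightarrow> \<sigma>"  where "mequ \<phi> \<psi> \<equiv> \<lambda>w. \<phi> w \<longleftrightarrow> \<psi> w"

(* guarded modalities ◇⪯ and ◇≺, their box duals, and the global modalities E and A *)
abbreviation mdia  :: "\<sigma> \<Rightarrow> \<sigma>" where "mdia \<phi>  \<equiv> \<lambda>w. \<exists>v. better w v \<and> \<phi> v"
abbreviation mdiaS :: "\<sigma> \<Rightarrow> \<sigma>" where "mdiaS \<phi> \<equiv> \<lambda>w. \<exists>v. sbetter w v \<and> \<phi> v"
abbreviation mbox  :: "\<sigma> \<Rightarrow> \<sigma>" where "mbox \<phi>  \<equiv> \<lambda>w. \<forall>v. better w v \<longrightarrow> \<phi> v"
abbreviation mboxS :: "\<sigma> \<Rightarrow> \<sigma>" where "mboxS \<phi> \<equiv> \<lambda>w. \<forall>v. sbetter w v \<longrightarrow> \<phi> v"
abbreviation mE    :: "\<sigma> \<Rightarrow> \<sigma>" where "mE \<phi> \<equiv> \<lambda>w. \<exists>v. \<phi> v"
abbreviation mA    :: "\<sigma> \<Rightarrow> \<sigma>" where "mA \<phi> \<equiv> \<lambda>w. \<forall>v. \<phi> v"

abbreviation valid :: "\<sigma> \<Rightarrow> bool" where "valid \<phi> \<equiv> \<forall>w. \<phi> w"   (* global validity ⌊·⌋ *)

(* the AE-variant of the lifted preference relations (non-strict and strict) *)
abbreviation prefAE :: "\<sigma> \<Rightarrow> \<sigma> \<Rightarrow> \<sigma>"
  where "prefAE \<phi> \<psi> \<equiv> \<lambda>u. \<forall>s. \<phi> s \<longrightarrow> (\<exists>t. \<psi> t \<and> better s t)"    (* u arbitrary *)
abbreviation sprefAE :: "\<sigma> \<Rightarrow> \<sigma> \<Rightarrow> \<sigma>"
  where "sprefAE \<phi> \<psi> \<equiv> \<lambda>u. \<forall>s. \<phi> s \<longrightarrow> (\<exists>t. \<psi> t \<and> sbetter s t)"  (* u arbitrary *)
```

Using abbreviations (rather than opaque constants) keeps the embedding definitionally transparent, so the automated tools can always expand the lifted connectives down to HOL.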
As shown in [35], the binary preference operators above are complemented by ‘syntactic’ counterparts, defined as derived operators using the language of the preference logic. In fact, both sets of definitions (‘semantic’ and ‘syntactic’) coincide in general only for the EE- and AE-variants (the other variants coincide only if ⪯ is a total/linear ordering). The ‘syntactic’ variants are encoded below in HOL employing the σ-type lifted logic (using boldface to differentiate them).

(φ_σ ⪯_EE ψ_σ) := E(φ_σ ∧ ◇^≼ψ_σ)    and    (φ_σ ≺_EE ψ_σ) := E(φ_σ ∧ ◇^≺ψ_σ)
(φ_σ ⪯_EA ψ_σ) := E(ψ_σ ∧ □^≺¬φ_σ)   and    (φ_σ ≺_EA ψ_σ) := E(ψ_σ ∧ □^≼¬φ_σ)
(φ_σ ⪯_AE ψ_σ) := A(φ_σ → ◇^≼ψ_σ)    and    (φ_σ ≺_AE ψ_σ) := A(φ_σ → ◇^≺ψ_σ)
(φ_σ ⪯_AA ψ_σ) := A(ψ_σ → □^≺¬φ_σ)   and    (φ_σ ≺_AA ψ_σ) := A(ψ_σ → □^≼¬φ_σ)
We further extend the lifted logic by adding quantifiers. This can be done by identifying ∀x_α. s_σ with the HOL term λw_ι.∀x_α.(s_σ w) and ∃x_α. s_σ with λw_ι.∃x_α.(s_σ w). This way quantified expressions can be seamlessly employed, e.g., for the representation of legal and world knowledge in §4.
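A minimal rendering of these lifted quantifiers, continuing the hypothetical sketch above (possibilist quantification over an arbitrary type 'a):

```isabelle
(* Continuing PrefLogicSketch: σ-type lifted quantifiers. *)
abbreviation mforall :: "('a \<Rightarrow> \<sigma>) \<Rightarrow> \<sigma>" where "mforall S \<equiv> \<lambda>w. \<forall>x. S x w"
abbreviation mexists :: "('a \<Rightarrow> \<sigma>) \<Rightarrow> \<sigma>" where "mexists S \<equiv> \<lambda>w. \<exists>x. S x w"
```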
2.3 Faithfulness of the SSE
The faithfulness (soundness & completeness) of the present SSE of the preference logic in HOL follows from previous results for SSEs of propositional multi-modal logics [15] and their quantified extensions [16]. Soundness of the SSE states that our modelling does not give any ‘false positives’, i.e., if ⊢_HOL(Γ) ⌊φ_σ⌋ then ⊨ φ, and therefore ⊢ φ in the (complete) calculus axiomatised in [35]; here HOL(Γ) corresponds to HOL extended with the relevant types and constants plus a set Γ of axioms encoding the semantic conditions, i.e., reflexivity and transitivity of ⪯ (of type ι→ι→o). Completeness of the SSE means that our modelling does not give ‘false negatives’, i.e., if ⊨ φ then ⊢_HOL(Γ) ⌊φ_σ⌋. Moreover, SSE
completeness can be mechanically verified by deriving the σ-type lifted axioms and inference rules of the object logic in HOL(Γ).⁴

⁴ See the corresponding sources in [1], where we conducted numerous experiments mechanically verifying meta-theoretical results on the embedded logic.
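As an illustration of such mechanical verification, here is a small continuation of the hypothetical sketch above, deriving the lifted T and 4 axioms (which hinge on reflexivity and transitivity of the betterness relation) as well as transitivity of the AE-lifted preference; the particular lemmas and proof methods are ours, not necessarily those used in [1]:

```isabelle
(* Continuing PrefLogicSketch: deriving lifted axioms in HOL(Γ). *)
lemma axiom_T: "valid (mimp (mbox \<phi>) \<phi>)"                    (* □⪯φ → φ, from reflexivity *)
  using better_refl by blast

lemma axiom_4: "valid (mimp (mbox \<phi>) (mbox (mbox \<phi>)))"       (* □⪯φ → □⪯□⪯φ, from transitivity *)
  using better_trans by blast

lemma prefAE_trans:                                           (* transitivity of ⪯_AE *)
  "valid (mimp (mand (prefAE \<phi> \<psi>) (prefAE \<psi> \<chi>)) (prefAE \<phi> \<chi>))"
  using better_trans by meson
```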
3 A Logic for Value-oriented Legal Reasoning
On top of the preference logic we define a domain-specific logic for reasoning with values in the context
we define a domain-specific logic for reasoning with values in the context
of legal cases. We subsequently encode this logic of legal values in Isabelle/HOL and put it to the test.
Setting the Stage: Plaintiff vs. Defendant
In a preliminary step, the contending parties in a legal case, the “plaintiff” (p) and the “defendant” (d), are introduced as an (extensible) two-valued datatype c (for “contender”), together with a function (·)⁻¹ used to obtain, for a given party, the other one; i.e. p⁻¹ = d and d⁻¹ = p. Moreover, we add a predicate For to model the ruling for a party and postulate: ⌊For x ↔ ¬(For x⁻¹)⌋.
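A corresponding sketch, continuing the hypothetical theory from §2.2 (constant and axiom names are assumptions of ours):

```isabelle
(* Continuing PrefLogicSketch: the legal parties and the ruling predicate. *)
datatype c = p | d                           (* the contenders: plaintiff and defendant *)

fun other :: "c \<Rightarrow> c"                        (* the function (·)⁻¹ *)
  where "other p = d" | "other d = p"

axiomatization For :: "c \<Rightarrow> \<sigma>"               (* ruling in favour of a party *)
  where ForAx: "valid (mequ (For x) (mnot (For (other x))))"
```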
Abstract Values and Value Principles
Our approach to value-oriented legal reasoning draws upon recent work in legal theory by Lomfeld [28, 27], who considers a four-quadrant value space generated by two axes featuring antagonistic abstract values (FREEDOM vs. SECURITY & UTILITY vs. EQUALITY) at the extremes (Fig. 2).
Figure 2 Value theory of Lomfeld [28]
A set of eight value principles is allocated to the four quadrants (two per quadrant), as shown in Fig. 2. Additionally, Lomfeld’s theory contemplates the encoding of legal rules as conditional preferences between conflicting value principles, of the form R: (E₁ ∧ ... ∧ Eₙ) → (A ≺ B). Hence, the application of rule R involves balancing the value principles A and B in context (i.e. under the conditions E₁ ... Eₙ).
To provide a concrete modelling of this theory in Isabelle/HOL, we have chosen to model value principles as sets of abstract values.⁵ For the latter we have introduced a four-valued datatype ('t VAL). Observe that this datatype is parameterised with a type variable 't. In the remainder we take 't to be c. In doing this, we allow for the encoding of value principles w.r.t. a particular (favoured) legal party. In the remainder value principles are thus encoded as functions taking objects of type c (p or d) to sets of abstract values.

⁵ Here we suitably simplify Lomfeld’s value theory to the effect that, e.g., STABility becomes identified with EFFIciency. This is enough for our modelling work in §4. A more granular encoding of value principles is possible by adding a third axis to the value space in Fig. 2.
We have also introduced some convenient type aliases: v for the type of sets of abstract values, and cv for its corresponding functional version (taking a legal party as parameter).
Instances of value principles (w.r.t. a legal party) are next introduced as sets of abstract values (w.r.t. a legal party), i.e., as objects of type cv. For this we introduce set-constructor operators for values (depicted using a dedicated set-bracket notation). Recalling Fig. 2, we have, e.g., that the principle of STABility favouring the plaintiff (STAB_p) is encoded as a two-element set of abstract values (favouring the plaintiff), i.e., {SECURITY p, UTILITY p}. We do analogously for the other value principles.
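A sketch of this step, continuing the hypothetical theory above. Only STAB = {SECURITY, UTILITY} is stated explicitly in the text; the quadrant memberships assumed below for WILL and RELI are our own guesses, inferred from the discussion of Fig. 2 and Fig. 4:

```isabelle
(* Continuing PrefLogicSketch: abstract values (relative to the favoured party)
   and value principles as sets of abstract values. *)
datatype 't VAL = SECURITY 't | EQUALITY 't | FREEDOM 't | UTILITY 't

type_synonym v  = "(c VAL) set"              (* sets of abstract values *)
type_synonym cv = "c \<Rightarrow> v"                   (* ... as a function of the favoured party *)

abbreviation STAB :: "cv" where "STAB x \<equiv> {SECURITY x, UTILITY x}"   (* as in the text *)
abbreviation WILL :: "cv" where "WILL x \<equiv> {FREEDOM x, UTILITY x}"    (* assumed quadrant *)
abbreviation RELI :: "cv" where "RELI x \<equiv> {SECURITY x, EQUALITY x}"  (* assumed quadrant *)
```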
From a modal logic point of view it is, alternatively, convenient to conceive value principles as
truth-bearers, i.e., propositions (as sets of worlds or situations). To overcome this apparent dichotomy
in the modelling of value principles (sets of abstract values vs. sets of worlds) we make use of the
mathematical notion of a Galois connection as exemplified by the notion of derivation operators
from the theory of formal concept analysis (FCA), a mathematical theory of concepts and concept
hierarchies as formal ontologies. Below we succinctly discuss a couple of FCA notions relevant to
our work. We refer the interested reader to [22] for a thorough introduction to FCA.
Some FCA Notions
A formal context is a triple K = ⟨G, M, I⟩ where G is a set of objects, M is a set of attributes, and I is a relation between G and M (the so-called incidence relation), i.e., I ⊆ G × M. We read ⟨g, m⟩ ∈ I as “the object g has the attribute m”. We define two so-called derivation operators ↑ and ↓ as follows:

A↑ := {m ∈ M | ⟨g, m⟩ ∈ I for all g ∈ A}    for A ⊆ G
B↓ := {g ∈ G | ⟨g, m⟩ ∈ I for all m ∈ B}    for B ⊆ M
A↑ is the set of all attributes shared by all objects from A, called the intent of A. Dually, B↓ is the set of all objects sharing all attributes from B, called the extent of B. This pair of derivation operators thus forms an antitone Galois connection between (the powersets of) G and M, i.e. we always have that B ⊆ A↑ iff A ⊆ B↓.
A formal concept (in a context K) is defined as a pair ⟨A, B⟩ such that A ⊆ G, B ⊆ M, A↑ = B, and B↓ = A. We call A and B the extent and the intent of the concept ⟨A, B⟩, respectively. Indeed, ⟨A↑↓, A↑⟩ and ⟨B↓, B↓↑⟩ are always concepts.
The set of concepts in a formal context is partially ordered by set inclusion of their extents, or, dually, by the (reversing) inclusion of their intents. In fact, for a given formal context this ordering forms a complete lattice: its concept lattice. Conversely, it can be shown that every complete lattice is isomorphic to the concept lattice of some formal context. We can thus define lattice-theoretical meet and join operations on FCA concepts in order to obtain an algebra of concepts:⁶

⟨A₁, B₁⟩ ∧ ⟨A₂, B₂⟩ := ⟨A₁ ∩ A₂, (B₁ ∪ B₂)↓↑⟩
⟨A₁, B₁⟩ ∨ ⟨A₂, B₂⟩ := ⟨(A₁ ∪ A₂)↑↓, B₁ ∩ B₂⟩

⁶ This result can be seamlessly stated for infinite meets and joins (infima and suprema) in the usual way. It corresponds to the first part of the so-called basic theorem on concept lattices [22].
Value Principles
We now extend the encoding (SSE) of our object logic, exploiting the high expressivity of our meta-logic HOL. We define the two FCA derivation operators ↑ and ↓ employing the corresponding definitions from above. For this we take G to be the domain set of worlds, corresponding to the type ι, and M to be a domain set of abstract values, corresponding in the current modelling approach to the type VAL. In doing this, each value principle (set of abstract values) becomes associated with a proposition (set of worlds) by means of the operator ↓ (and conversely for ↑). We encode this by defining a binary incidence relation between worlds/states (type ι) and abstract values (type VAL). We define ↓ so that V↓ denotes the set of all worlds that are incidence-related to every value in V (and analogously for the ↑ operator on sets of worlds). We introduce an alternative notation, [V] := V↓, which may enhance readability in some cases.
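Continuing the hypothetical sketch, the derivation operators can be rendered along the following lines (the relation name Inc and the operator names are assumptions of ours):

```isabelle
(* Continuing PrefLogicSketch: FCA derivation operators over worlds and abstract values. *)
consts Inc :: "i \<Rightarrow> c VAL \<Rightarrow> bool"            (* incidence relation between worlds and values *)

abbreviation intent :: "\<sigma> \<Rightarrow> c VAL \<Rightarrow> bool"   (* A↑: values incident to every world in A *)
  where "intent A \<equiv> \<lambda>m. \<forall>g. A g \<longrightarrow> Inc g m"
abbreviation extent :: "v \<Rightarrow> \<sigma>"               (* V↓, written [V]: worlds incident to every value in V *)
  where "extent V \<equiv> \<lambda>g. \<forall>m. m \<in> V \<longrightarrow> Inc g m"

(* the pair (↑, ↓) forms an antitone Galois connection *)
lemma Galois: "(\<forall>m. m \<in> V \<longrightarrow> intent A m) \<longleftrightarrow> (\<forall>g. A g \<longrightarrow> extent V g)"
  by blast
```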
Recalling the semantics of the object logic from our discussion in §2.1, we can give an intuitive reading to the truth of terms of the form [P] at a world in a preference model; namely, we can read 𝓜, w ⊨ [P] as “principle P provides a reason for (state of affairs) w to obtain”. In the same vein, we can read ⊨ A → [P] as “principle P provides a reason for proposition A being the case”.
Transferring these insights to our current modelling in Isabelle/HOL, we can intuitively read, e.g., the formula [STAB_d] w (of type bool) as: “the legal principle of stability is justifiably promoted in favour of the defendant (in situation w)”. In a similar vein, we can read ⌊For d → [STAB_d]⌋ as “promoting (legal) stability in favour of the defendant justifies deciding for him/her (in any situation)”.
Value Aggregation and Preference
As discussed above, our logic of legal values must provide means for expressing conditional preferences between principles of the form (E₁ ∧ ... ∧ Eₙ) → (A ≺ B). The conditional → is modelled in this work using the object logic’s material conditional, while noting that a defeasible conditional operator can indeed be defined and added by employing the modal operators of the logic [21, 26]. We can also define a binary preference connective ≺ for propositions by reusing any of the eight preference ‘lifting’ variants discussed in §2. However, this choice cannot be arbitrary, since it needs to interact with value aggregation in an appropriate way.
Lomfeld’s theory also contemplates a mechanism for expressing the aggregation of value principles (as reasons). We thus define a binary value aggregation connective ⊕, observing that it should satisfy particular logical constraints in interaction with a (suitably selected) value preference relation ≺:

(A ≺ B) ⟹ (A ≺ B ⊕ C)   but   (A ≺ B ⊕ C) ⇏ (A ≺ B)        aggregation on the right
(A ⊕ C ≺ B) ⟹ (A ≺ B)   but   (A ≺ B) ⇏ (A ⊕ C ≺ B)        aggregation on the left
(B ≺ A) ∧ (C ≺ A) ⟹ (B ⊕ C ≺ A)                             union property (optional)
The aggregation connectives are most conveniently defined using join (resp. set union) operations, which gives us commutativity. As it happens, only the ≺_AE/⪯_AE and ≺_EA/⪯_EA variants from §2 satisfy the first two conditions. They are also the only variants satisfying transitivity. Moreover, if we choose to enforce the third aggregation principle (union property), then we are left with only one variant to consider, namely ≺_AE/⪯_AE. This variant also offers several benefits for our current modelling purposes: it can be faithfully encoded in the language of the preference logic [35] and its behaviour is well documented in the literature [24] [25, Ch. 4].
After extensive computer-supported experiments in Isabelle/HOL (see [1]) we have identified the following candidate definitions satisfying all desiderata. First, for value aggregation, two variants ⊕₍₁₎ and ⊕₍₂₎ are defined.⁷ Then, for a binary preference connective between propositions, we have:

φ ≺₍₁₎ ψ := φ ⪯_AE ψ    and    φ ≺₍₂₎ ψ := φ ≺_AE ψ

For the rest of this work we will illustratively employ the second set of definitions, indexed by (2).
Promoting Values
We still need to consider the mechanism by which we can link legal decisions, together with other legally relevant facts, to legal values. We conceive of such a mechanism as a sentence schema, which reads intuitively as: “Taking decision D in the presence of facts F promotes/advances legal (value) principle V”. The formalisation of this schema corresponds to a new predicate Promotes(F, D, V), where F is a conjunction of facts relevant to the case (a proposition), D is the legal decision, and V is the value principle thereby promoted:⁸

Promotes(F, D, V) := ⌊F → □^≼(D ↔ ◇^≼[V])⌋
Promotes(F, D, V) can be given an intuitive reading: “in every F-situation we have that, in all better states, the admissibility of promoting value V both entails and justifies (as a reason) taking decision D”.
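A minimal sketch of such a predicate, continuing the hypothetical theory above; the formula shape follows our reconstruction of the schema and may well differ from the definition actually used in [1]:

```isabelle
(* Continuing PrefLogicSketch: linking facts, decisions and value principles.
   Reading: in every F-situation, in all better states, deciding D is aligned with
   the admissibility of promoting V. *)
abbreviation Promotes :: "\<sigma> \<Rightarrow> \<sigma> \<Rightarrow> v \<Rightarrow> bool"
  where "Promotes F D V \<equiv> valid (mimp F (mbox (mequ D (mdia (extent V)))))"
```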
⁷ Observe that ⊕₍₁₎ is based upon the join operation on the corresponding FCA formal concepts. ⊕₍₂₎ is a strengthening of the first, since (A ⊕₂ B) entails (A ⊕₁ B).
⁸ We adopt the terminology of advancing or promoting a value from the literature [17, 32, 5], understanding it in a teleological sense: a decision promoting a value principle means taking that decision for the sake of honouring the principle, thus seeing the value principle as a reason for taking that decision.
Value Conflict
Another important idea inspired by Lomfeld’s value theory [27, 28] is the notion of value conflict. Recalling Fig. 2, values are disposed around two axes of value coordinates, with values lying at contrary poles playing antagonistic roles. For our modelling purposes it thus makes sense to consider a predicate Conflict on worlds (i.e. a proposition) signalling situations where value conflicts appear.
Testing the Encoding
In order to test the adequacy of our modelling, some implied and non-implied knowledge is studied.
We briefly discuss some of the conducted tests as shown in Fig. 3.
Among others, we verify that the pair of operators for extension (↓) and intension (↑), cf. formal concept analysis [22], indeed constitutes a Galois connection (Lines 6–18), and we carry out some further tests on the value theory concerning value aggregation and consistency (Lines 20ff.).
In our modelling of the notion of value conflict, promoting values (for the same party) from two opposing value quadrants, say RELI & WILL, should entail a value conflict; theorem provers quickly confirm this, as shown in Fig. 3 (Line 20). However, promoting values from two non-opposed quadrants, such as WILL & STAB (Line 29), should not imply a conflict: the model finder Nitpick⁹ computes and reports a countermodel (not shown here) to the stated conjecture. A value conflict is also not implied if values from opposing quadrants are promoted for different parties (Lines 36–37).
Note that the notion of value conflict has deliberately not been aligned with inconsistency in the meta-logic HOL. This way we can represent conflict situations in which, for instance, RELI and WILL (being conflicting values, see Line 20 in Fig. 3) are promoted for the plaintiff (p) without leading to a logical inconsistency in Isabelle/HOL (thus avoiding ‘explosion’). In Line 22 of Fig. 3, for example, Nitpick is called simultaneously in both modes in order to confirm the contingency of the statement; as expected, both a model (cf. Fig. 4) and a countermodel (not displayed here) for the statement are returned.
This value conflict (w.r.t. p) can also be spotted by inspecting the satisfying models generated by Nitpick. One such model is depicted in Fig. 4, where it is shown that (in the given possible world ι₁) all of the abstract values (EQUALITY, SECURITY, UTILITY, and FREEDOM) are simultaneously promoted for p, which implies a value conflict according to our definition.
Analysing the model structures returned by Nitpick has indeed been very helpful for gaining a deeper insight into the semantic structures of the preference logic. This becomes particularly relevant for complex modelling tasks where a clear understanding is often initially lacking.
Further tests in Fig. 3 (Lines 39–48) assess the behaviour of the aggregation operator ⊕ in combination with value preferences. We test for correct behaviour when ‘strengthening’, resp. ‘weakening’, the right-hand side (Lines 39–43). As an illustration, in Line 41, if STAB is preferred over WILL, then STAB combined with, say, RELI is also preferred over WILL alone. Similar tests are conducted for ‘strengthening’, resp. ‘weakening’, the left-hand side (Lines 44–48).
Finally, we verify (lines 50–52) basic properties of the preference relation.
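In the spirit of the tests in Fig. 3 (which are not reproduced here), a contingency check might look as follows in our hypothetical sketch; the concrete statement is ours and only mimics the one in Line 22:

```isabelle
(* Continuing PrefLogicSketch: promoting RELI and WILL for the same party is contingent;
   Nitpick returns both a satisfying model and a countermodel. *)
lemma "\<exists>w. extent (RELI p) w \<and> extent (WILL p) w"
  nitpick [satisfy] nitpick oops
```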
⁹ Nitpick [20] searches for, respectively enumerates, finite models or countermodels to a conjectured statement/lemma. By default Nitpick searches for countermodels; model finding is enforced by stating the parameter keyword ‘satisfy’. These models are given as concrete interpretations of relevant terms in the given context so that the conjectured statement is satisfied or falsified.
Figure 3 Testing the logic of legal values
Figure 4 Satisfying model for the statement in Line 22 of Fig. 3.
4 A Case Study in Property Law
To illustrate our approach, we formalise and assess, employing Isabelle/HOL, a well-known benchmark
case in AI & Law involving the appropriation of wild animals: Pierson vs. Post. Before we start, some words on the modelling of background (legal & world) knowledge are in order.
4.1 Legal & World Knowledge
The realistic modelling of concrete legal cases requires further legal & world knowledge (LWK) to
be taken into account. For the sake of illustration, we introduce here only a small and monolithic
Isabelle/HOL theory
10
called “GeneralKnowledge”. This includes a small excerpt of a much simplified
“animal appropriation taxonomy”, where we associate “animal appropriation” kinds of situations with
the value preferences they imply (as conditional preference relations).
In a realistic setting this knowledge base would be further split and structured, similarly to other legal or general ontologies, e.g., in the Semantic Web. Note, however, that the expressiveness of our approach, unlike that of many other legal ontologies or taxonomies, is by no means limited to a definite (but fixed) underlying logical language. We could thus easily opt for a more realistic modelling, e.g., avoiding simplifying propositional abstractions. For instance, the proposition “appWildAnimal”, representing the appropriation of one or more wild animals, can at any time be replaced by a more complex formula (featuring, e.g., quantifiers, modalities or defeasible conditionals).
We now briefly outline the encoding of our example LWK (see [1] for the full details).
First, some non-logical constants that stand for kinds of legally relevant situations (here: of
appropriation) are introduced, and their meaning is constrained by some postulates:
Then the ‘default’ legal rules for several situations (here: appropriation of animals) are formulated
as conditional preference relations:
For example, rule R2 could be read as: “In a wild-animals-appropriation kind of situation, promoting STABility in favour of a party (say, the plaintiff) is preferred over promoting WILL in favour of the other party (the defendant)”. If there is no more specific legal rule from a precedent or a codified statute, then these ‘default’ preference relations determine the result. Moreover, we can have rules conditioned on more concrete legal factors.¹¹ As a didactic example, the legal rule R4 states that Ownership (say, the plaintiff’s) of the land on which the appropriation took place, together with the fact that the opposing party (the defendant) acted out of Malice, implies a value preference of RELIance and RESPonsibility over STABility. This last rule has indeed been chosen to reflect the famous common law precedent of Keeble vs. Hickeringill [17, 2].

¹⁰ Isabelle documents are suggestively called “theories”. They correspond to top-level modules bundling together related definitions, theorems, proofs, etc.
¹¹ The introduction of legal factors is an established practice in the implementation of case-based legal systems (cf. [3] for an overview). They can be conceived, as we do, as propositions abstracted from the facts of a case by the analyst/modeller in order to allow for assessing and comparing cases at a higher level of abstraction. Factors are typically either pro-plaintiff or pro-defendant, and their being true or false (resp. present or absent) in a concrete case can serve to invoke relevant precedents or statutes.
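Since the corresponding Isabelle listing is not reproduced here, the following continuation of our hypothetical sketch indicates how a rule in the spirit of R2 could be stated (the constant name appWildAnimal is taken from the text; the exact formula shape is our assumption):

```isabelle
(* Continuing PrefLogicSketch: a 'default' legal rule in the spirit of R2, encoding a
   conditional value preference for wild-animal-appropriation situations. *)
consts appWildAnimal :: "\<sigma>"      (* propositional abstraction of the situation kind *)

abbreviation R2 :: "bool"
  where "R2 \<equiv> \<forall>x. valid (mimp appWildAnimal
                              (sprefAE (extent (WILL (other x))) (extent (STAB x))))"
```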
As already discussed, for ease of illustration, terms like “appWildAnimal” are modelled here
as simple propositional constants. In practice, however, they may later be replaced, or logically
implied, by a more realistic modelling of the relevant situational facts, utilising suitably complex
(even higher-order, if needed) formulas depicting states of affairs to some desired level of granularity.
For the sake of modelling the appropriation of objects, we have introduced an additional type e (for ‘entities’) that can be employed for terms denoting individuals (things, animals, etc.) when modelling legally relevant situations. Some simple vocabulary and taxonomic relationships (here for wild and domestic animals) are specified to illustrate this.
As mentioned before, we have introduced some convenient legal factors into our example LWK to allow for the encoding of legal knowledge originating from precedents or statutes at a more abstract level. In our approach these factors are to be logically implied (as deductive arguments) by the concrete facts of the case (as exemplified in §4.2 below). Observe that our framework also allows us to introduce definitions for those factors for which clear legal specifications exist. At the present stage, we provide some simple postulates constraining the factors’ interpretation. Recalling §3, we relate the introduced factors to value principles and outcomes by means of the Promotes predicate. Finally, the consistency of all axioms and rules provided is confirmed by Nitpick.
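Such a consistency check can be phrased, in terms of our hypothetical sketch, as a satisfiability query over the user axioms:

```isabelle
(* Continuing PrefLogicSketch: asking Nitpick for a model of all user axioms,
   thereby confirming their (relative) consistency. *)
lemma "True"
  nitpick [satisfy, user_axioms, expect = genuine] oops
```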
C. Benzmüller and D. Fuenmayor 13
4.2 Pierson vs. Post
We illustrate our reasoning framework by encoding the classic property law case Pierson vs. Post. In a nutshell: Pierson killed and carried off a fox which Post was already hunting with hounds on public land. The Court found for Pierson (cf. [23, 2, 32, 17]).
Ruling for Pierson
The formal modelling of an argument in favour of Pierson is outlined next (the entire formalisation of this argument is presented in the sources [1]). First we introduce some minimal vocabulary: a constant α of type e (denoting the appropriated animal), and the relations Pursue and Capture between the animal and one of the parties (of type c). A background (generic) theory as well as the (contingent) case facts, as suitably interpreted by Pierson’s party, are then stipulated:
The aforementioned decision of the court for Pierson was justified by the majority opinion. The essential preference relation in the case is implicit in the idea that the appropriation of (free-roaming) wild animals requires actual corporal possession. The manifest corporal link to the possessor creates legal certainty, which is represented by the value stability (STAB) and outweighs the mere will to possess (WILL) by the plaintiff; cf. the arguments of classic lawyers cited by the majority opinion [23]: “pursuit alone vests no property” (Justinian Institutes) and “corporal possession creates legal certainty” (Pufendorf). Recalling Fig. 2 in §3, this corresponds to a preference for the abstract value SECURITY over FREEDOM.
We can see that the legal rule R2, as introduced in the previous section (§4.1), is indeed employed by Isabelle/HOL’s automated tools to prove that, given a suitable defendant’s theory, the (contingent) facts imply a decision in favour of Pierson in all ‘better’ worlds (which we could read deontically as a sort of obligation):
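The Isabelle listing is not reproduced here; in terms of our hypothetical sketch, the proved claim has roughly the following shape ('pierson_theory' and 'pierson_facts' are mere stand-ins for the actual premises in [1]):

```isabelle
(* Continuing PrefLogicSketch: shape of the automatically proved claim for Pierson. *)
consts pierson_theory :: "bool"
       pierson_facts  :: "\<sigma>"

theorem forPierson:
  assumes "R2" and "pierson_theory" and "pierson_facts w"
  shows "mbox (For d) w"
  oops  (* in the actual formalisation a one-line proof is found by Sledgehammer *)

end
```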
The previous ‘one-liner’ proof has indeed been suggested by Sledgehammer [18, 19], which we credit, together with Nitpick [20], for doing the heavy lifting in our work. A proof argument in favour of Pierson that uses the same dependencies can also be constructed interactively using Isabelle’s human-readable proof language Isar [37]. The individual steps of the proof are this time formulated with respect to an explicit world/situation parameter w. The argument goes roughly as follows:
1. From Pierson’s facts and theory we infer that in the disputed situation w a wild animal has been appropriated: appWildAnimal w.
2. In this context, by applying the value preference rule R2, we get that promoting STAB in favour of Pierson is preferred over promoting WILL in favour of Post: [WILL_p] ≺ [STAB_d].
3. The admissibility of promoting WILL in favour of Post thus entails the admissibility of promoting STAB in favour of Pierson: ◇^≼[WILL_p] → ◇^≼[STAB_d].
4. Moreover, after instantiating the value promotion schema F1 (§4.1) for Post (p), and acknowledging that his pursuing of the animal (Pursue p α) entails his intention to possess (Intent p), we obtain (for the given situation w) an obligation/recommendation to ‘align’ any ruling for Post with the admissibility of promoting WILL in his favour: (□^≼(For p ↔ ◇^≼[WILL_p])) w.
5. Analogously, in view of Pierson’s (d) capture of the animal (Capture d α), thus having taken possession of it (Poss d), we infer from the instantiation of value promotion schema F3 (for Pierson) an obligation/recommendation to align a ruling for Pierson with the admissibility of promoting the value principle STAB (in his favour): (□^≼(For d ↔ ◇^≼[STAB_d])) w.
6. From (4) and (5), in combination with the court’s duty to find a ruling for one of both parties (ForAx), we infer, for the given situation w, that either the admissibility of promoting WILL in favour of Post or the admissibility of promoting STAB in favour of Pierson (or both) holds in every ‘better’ world/situation (thus becoming a recommended/obligatory condition): (□^≼(◇^≼[WILL_p] ∨ ◇^≼[STAB_d])) w.
7. From this and (3) we thus get that the admissibility of promoting STAB in favour of Pierson is recommended/obligatory in the given context w: (□^≼◇^≼[STAB_d]) w.
8. And this together with (5) finally implies the recommendation/obligation to rule in favour of Pierson in the given context w: (□^≼(For d)) w.
The consistency of Pierson’s assumptions (theory and facts) together with the other postulates from the previously introduced Isabelle theories “GeneralKnowledge” and “ValueOntology” is verified by generating a (non-trivial) model using Nitpick (Line 38). Further tests confirm that the decision for Pierson (and analogously for Post) is compatible with the premises and, moreover, that no value conflicts are implied for either party.
Finally, observe that an analogous (deductively valid) argument for Post cannot be obtained from the given theory and situational facts. This is not surprising, given that they have been deliberately chosen to suit Pierson’s case. We show next how it is indeed possible to construct a case (theory) suiting Post using our approach.
Ruling for Post
We model a possible counterargument by Post claiming an interpretation (i.e. a distinction in case law methodology) whereby the animal, being vigorously pursued (with large dogs and hounds) by a professional hunter, is not “free-roaming”. Under this interpretation, the value preference [WILL_p] ≺ [STAB_d] (for the appropriation of wild animals), as employed in the previous argument for Pierson, does not obtain. Furthermore, Post’s party postulates an alternative (suitable) value preference for hunting situations.
Note that an alternative legal rule (i.e. a possible argument for overruling in case law methodology) is presented in Line 16 above, entailing a value preference of the value combination efficiency (EFFI) and will (WILL) over stability (STAB): [STAB_d] ≺ [EFFI_p ⊕ WILL_p]. Following the argument put forward by the dissenting opinion in the original case, we might justify this new rule (inverting the initial value preference in the presence of EFFI) by pointing to the alleged public benefit of hunters getting rid of foxes, since the latter cause depredations on farms.
Accepting these modified assumptions, the deductive validity of a decision for Post can in fact be proved and confirmed automatically, again thanks to Sledgehammer:
Similarly to the above, a detailed, interactive proof of the argument in favour of Post has been encoded and verified in Isabelle/Isar. We have also conducted further tests confirming the consistency of the assumptions and the absence of value conflicts (see the sources in [1]).
5 Conclusion
Supporting interactive and automated value-oriented legal argumentation on the computer is a non-trivial challenge, which we address, for reasons as defended e.g. by Bench-Capon [4], with symbolic AI techniques and formal methods. Motivated by recent pleas for explainable and trustworthy AI, our primary goal is to work towards the development of ethico-legal governors for future generations of intelligent systems or, more generally, towards some form of (legally and ethically) reasonable machines [13], capable of exchanging rational justifications for the actions they take. While building up a capacity to engage in value-oriented legal argumentation is just one of a multitude of challenges this vision is faced with, it would clearly constitute an important stepping stone.
Custom software systems for legal case-based reasoning have been developed in the AI & Law community starting with the influential HYPO system in the 80’s [34] (cf. also the survey paper [3]). In later years there has been a gradual shift of interest from rule-based, non-monotonic reasoning (e.g. logic programming) towards argumentation-based approaches (cf. [33] for a survey); however, we are not aware of any other work harnessing higher-order theorem proving and proof assistants. Another important aspect of our work concerns value-oriented legal argumentation and balancing, where a considerable amount of work has been put forward in response to the challenge set by Berman and Hafner [17]. Our approach, drawing mainly upon Lomfeld’s theory [28, 27], has also been influenced by some of this work, in particular [32, 2, 5]. We think that some of the recent work employing expressive deontic logics for value balancing (cf. [29] and the references therein) can be fruitfully integrated into our approach.
The approach presented and illustrated in this paper adapts and implements the multilayered LogiKEy knowledge engineering methodology [14] to enable the application of off-the-shelf interactive and automated theorem proving technology for classical higher-order logic in ethico-legal reasoning. Isabelle/HOL has proven an excellent base technology to support the presented formalisation work and the conducted experiments. We are particularly pleased about the good performance of the integrated automated theorem provers (put at our disposal by Sledgehammer) and of the Nitpick model finder, which provided highly useful feedback at all modelling layers, including fully automated proofs to formally justify the discussed court rulings.
Further work includes the refinement of the modelling of Lomfeld’s value theory in combination
with the addition of a defeasible conditional operator to (eventually) replace material implication in
the modelling of the presented court cases. It is the pluralistic nature of our approach, realised within
a dynamic modelling framework, which enables and supports such emendations without requiring
technical adjustments of the underlying basic reasoning technology.
References
[1] Isabelle/HOL sources for this formalisation work. http://logikey.org, 2021. Subfolder: Preference-Logics/EncodingLegalBalancing.
[2] Trevor J. M. Bench-Capon. The missing link revisited: The role of teleology in representing legal argument. Artificial Intelligence and Law, 10(1-3):79–94, 2002.
[3] Trevor J. M. Bench-Capon. HYPO’s legacy: introduction to the virtual special issue. Artificial Intelligence and Law, 25(2):205–250, 2017.
[4] Trevor J. M. Bench-Capon. The need for good old-fashioned AI and Law. In W. Hötzendorfer, C. Tschol, and F. Kummer, editors, International Trends in Legal Informatics: A Festschrift for Erich Schweighofer. Weblaw AG, 2020.
[5] Trevor J. M. Bench-Capon and Giovanni Sartor. A model of legal reasoning with cases incorporating theories and values. Artificial Intelligence, 150:97–143, 2003.
[6] Christoph Benzmüller. Cut-elimination for quantified conditional logic. Journal of Philosophical Logic, 46(3):333–353, 2017. doi:10.1007/s10992-016-9403-0.
[7] Christoph Benzmüller. Universal (meta-)logical reasoning: Recent successes. Science of Computer Programming, 172:48–62, 2019. doi:10.1016/j.scico.2018.10.008.
[8] Christoph Benzmüller, Ali Farjami, Paul Meder, and Xavier Parent. I/O logic in HOL. Journal of Applied Logics – IfCoLoG Journal of Logics and their Applications (Special Issue: Reasoning for Legal AI), 6(5):715–732, 2019. URL: https://www.collegepublications.co.uk/ifcolog/?00034.
[9] Christoph Benzmüller, Ali Farjami, and Xavier Parent. Åqvist's dyadic deontic logic E in HOL. Journal of Applied Logics – IfCoLoG Journal of Logics and their Applications (Special Issue: Reasoning for Legal AI), 6(5):733–755, 2019. URL: https://www.collegepublications.co.uk/ifcolog/?00034.
[10] Christoph Benzmüller, Ali Farjami, and Xavier Parent. Dyadic deontic logic in HOL: Faithful embedding and meta-theoretical experiments. In Matthias Armgardt, Hans Christian Nordtveit Kvernenes, and Shahid Rahman, editors, New Developments in Legal Reasoning and Logic: From Ancient Law to Modern Legal Systems, volume 23 of Logic, Argumentation & Reasoning. Springer Nature Switzerland AG, 2021. doi:10.1007/978-3-030-70084-3.
[11] Christoph Benzmüller, David Fuenmayor, and Bertram Lomfeld. Encoding legal balancing: Automating an abstract ethico-legal value ontology in preference logic, 2020. Workshop on Models of Legal Reasoning (MLR 2020), hosted by the 17th Conference on Principles of Knowledge Representation and Reasoning (KR 2020). Unpublished paper available at: https://www.researchgate.net/publication/342380027.
[12] Christoph Benzmüller, David Fuenmayor, and Bertram Lomfeld. Encoding legal balancing: Automating an abstract ethico-legal value ontology in preference logic, 2021. https://arxiv.org/abs/2010.00810; extended and improved version of our paper presented at the 1st Workshop on Models of Legal Reasoning (MLR 2020).
[13] Christoph Benzmüller and Bertram Lomfeld. Reasonable machines: A research manifesto. In Ute Schmid, Franziska Klügl, and Diedrich Wolter, editors, KI 2020: Advances in Artificial Intelligence – 43rd German Conference on Artificial Intelligence, Bamberg, Germany, September 21–25, 2020, Proceedings, volume 12352 of Lecture Notes in Artificial Intelligence, pages 251–258. Springer, Cham, 2020. doi:10.1007/978-3-030-58285-2_20.
[14] Christoph Benzmüller, Xavier Parent, and Leendert van der Torre. Designing normative theories for ethical and legal reasoning: LogiKEy framework, methodology, and tool support. Artificial Intelligence, 287:103348, 2020. doi:10.1016/j.artint.2020.103348.
[15] Christoph Benzmüller and Lawrence C. Paulson. Multimodal and intuitionistic logics in simple type theory. The Logic Journal of the IGPL, 18(6):881–892, 2010. doi:10.1093/jigpal/jzp080.
[16] Christoph Benzmüller and Lawrence C. Paulson. Quantified multimodal logics in simple type theory. Logica Universalis (Special Issue on Multimodal Logics), 7(1):7–20, 2013. doi:10.1007/s11787-012-0052-y.
[17] Donald Berman and Carole Hafner. Representing teleological structure in case-based legal reasoning: the missing link. In Proceedings of the 4th ICAIL, pages 50–59. New York: ACM Press, 1993.
[18] Jasmin C. Blanchette, Sascha Böhme, and Lawrence C. Paulson. Extending Sledgehammer with SMT solvers. Journal of Automated Reasoning, 51(1):109–128, 2013.
[19] Jasmin C. Blanchette, Cezary Kaliszyk, Lawrence C. Paulson, and Josef Urban. Hammering towards QED. Journal of Formalized Reasoning, 9(1):101–148, 2016.
[20] Jasmin C. Blanchette and Tobias Nipkow. Nitpick: A counterexample generator for higher-order logic based on a relational model finder. In Matt Kaufmann and Lawrence C. Paulson, editors, ITP 2010, volume 6172 of LNCS, pages 131–146. Springer, 2010.
[21] Craig Boutilier. Toward a logic for qualitative decision theory. In Principles of Knowledge Representation and Reasoning, pages 75–86. Elsevier, 1994. doi:10.1016/B978-1-4832-1452-8.50104-4.
[22] Bernhard Ganter and Rudolf Wille. Formal Concept Analysis: Mathematical Foundations. Springer Berlin, 2012.
[23] Thomas F. Gordon and Douglas Walton. Pierson vs. Post revisited. Frontiers in Artificial Intelligence and Applications, 144:208, 2006.
[24] Joseph Y. Halpern. Defining relative likelihood in partially-ordered preferential structures. Journal of Artificial Intelligence Research, 7:1–24, 1997.
[25] Fenrong Liu. Changing for the Better: Preference Dynamics and Agent Diversity. PhD thesis, Institute for Logic, Language and Computation, Universiteit van Amsterdam, 2008.
[26] Fenrong Liu. Reasoning about Preference Dynamics. Springer Netherlands, 2011. doi:10.1007/978-94-007-1344-4.
[27] Bertram Lomfeld. Die Gründe des Vertrages: Eine Diskurstheorie der Vertragsrechte. Mohr Siebeck, Tübingen, 2015.
[28] Bertram Lomfeld. Grammatik der Rechtfertigung: Eine kritische Rekonstruktion der Rechts(fort)bildung. Kritische Justiz, 52(4), 2019.
[29] Juliano Maranhão and Giovanni Sartor. Value assessment and revision in legal interpretation. In Proceedings of the Seventeenth International Conference on Artificial Intelligence and Law, ICAIL 2019, Montreal, QC, Canada, June 17–21, 2019, pages 219–223, 2019. doi:10.1145/3322640.3326709.
[30] L. Thorne McCarty. An implementation of Eisner v. Macomber. In Proceedings of the 5th International Conference on Artificial Intelligence and Law, pages 276–286, 1995.
[31] Tobias Nipkow, Lawrence C. Paulson, and Markus Wenzel. Isabelle/HOL: A Proof Assistant for Higher-Order Logic, volume 2283 of LNCS. Springer, 2002.
[32] Henry Prakken. An exercise in formalising teleological case-based reasoning. Artificial Intelligence and Law, 10(1-3):113–133, 2002.
[33] Henry Prakken and Giovanni Sartor. Law and logic: A review from an argumentation perspective. Artificial Intelligence, 227:214–225, 2015.
[34] Edwina L. Rissland and Kevin D. Ashley. A case-based system for trade secrets law. In Proceedings of the 1st International Conference on Artificial Intelligence and Law, pages 60–66, 1987.
[35] Johan van Benthem, Patrick Girard, and Olivier Roy. Everything else being equal: A modal logic for ceteris paribus preferences. Journal of Philosophical Logic, 38(1):83–125, 2009. doi:10.1007/s10992-008-9085-3.
[36] Georg Henrik von Wright. The Logic of Preference. Edinburgh University Press, 1963.
[37] Makarius Wenzel. Isabelle/Isar: a generic framework for human-readable proof documents. From Insight to Proof: Festschrift in Honour of Andrzej Trybulec, 10(23):277–298, 2007.