Computer-supported Analysis of Arguments in Climate Engineering⋆

David Fuenmayor¹ and Christoph Benzmüller¹,²

¹ Freie Universität Berlin, Germany
² University of Luxembourg, Luxembourg
Abstract. Climate Engineering (CE) is the intentional large-scale intervention in the Earth's climate system to counter climate change. CE is highly controversial, spurring global debates about whether and under which conditions it should be considered. We focus on the computer-supported analysis of a small subset of the arguments pro and contra CE interventions as presented in the work of Betz and Cacean (2012), namely those drawing on the "ethics of risk"; these arguments point out uncertainties in the future deployment of CE technologies. The aim of this paper is to demonstrate and explain the application of higher-order interactive and automated theorem proving (utilizing shallow semantical embeddings) to the logical analysis of "real-life" argumentative discourse.
Keywords: argumentation · knowledge representation · higher-order logic · automated theorem proving · Isabelle · climate engineering.
1 Introduction
Climate Engineering (CE), aka Geo-engineering, is the intentional large-scale intervention in the Earth's climate system in order to counter climate change. Proposed CE technologies (e.g., solar radiation management, carbon dioxide removal) are highly controversial, spurring global debates about whether and under which conditions they should be considered. Criticisms of CE range from diverting attention and resources from much-needed mitigation policies to potentially catastrophic side-effects; thus the cure may become worse than the disease. The arguments around the CE debate analyzed in this paper originate from Betz and Cacean's book [6], which is a slightly modified and updated translation of a study commissioned by the German Federal Ministry of Education and Research (BMBF) on "Ethical Aspects of Climate Engineering", finalized in spring 2011. Betz and Cacean's work aimed at providing a fairly complete overview of the arguments around CE at the time; it is to be expected, however, that it has meanwhile become partially outdated. The illustrative analysis carried out in the present paper focuses on a small subset of the CE argumentative landscape, namely on those arguments concerned with the "ethics of risk" ([6] p. 38ff.), which point out (potentially dangerous) uncertainties in the future deployment of CE.
⋆ Supported by VolkswagenStiftung, grant Consistent, Rational Arguments in Politics (CRAP).
Our objective is to further illustrate and explore an approach previously presented at the CLAR-2018 conference [14], which concerns the application of (higher-order) interactive theorem proving to the logical analysis of individual arguments and argument networks. In that work we reconstructed several variants of Gödel's ontological argument³ using the proof assistant Isabelle; initially as networks of abstract nodes, which were mechanically tested for validity and (in)consistency after adding or removing dialectical relations (attack or support); and later each node became "instantiated" by identifying it with a formula of a target (higher-order modal) logic, and the experiments were repeated. Employing theorem provers and model finders, we showed that, e.g., consistency results for the abstracted arguments provide no guarantee at all at the instantiated level, i.e., after the semantics of the argument nodes is added. Drawing on this and other similar results, we argued that the analysis of non-trivial natural-language arguments at the abstract argumentation level is useful, but of limited explanatory power. Achieving such explanatory power requires the extension of techniques from abstract argumentation with means for deep semantical analysis using expressive logic formalisms (cf. approaches inspired by Montague semantics [15]); and, vice versa, methods for semantical analysis can become enriched by integrating them with contemporary argumentation frameworks.
In the current work we are formalizing and evaluating an extract from a quite contemporary and controversial discourse topic (in contrast to the previous, more philosophical arguments). This time we focus from the beginning on instantiated argument networks and on the use of automated tools to support the process of reconstructing both individual arguments and attack (resp. support) relations, by adding missing (implicit) premises. We aim at illustrating how the utilization of reasoning technology for very expressive (e.g. higher-order) logics has realistic prospects in the analysis of "real-life" argumentative discourse. In particular, our results suggest that this technology can be very useful to help in the reconstruction of argument networks using structured, deductive approaches (e.g. ABA [12] and Deductive Argumentation [5,4])⁴ and also to identify implicit and idle premises in arguments (cf. our previous work [13]). The case study presented in section 3 has been carried out employing the Isabelle/HOL proof assistant [16] for classical higher-order logic (HOL).⁵
³ Ontological arguments (or proofs) are arguments for the existence of a Godlike being, common for centuries in philosophy and theology. More recently, they have attracted the attention of logicians, not only because of their interesting history, but also because of their quite sophisticated logical structures.
⁴ Our reason for choosing a deductive approach over a defeasible one originally had a technical motivation: the base logic provided (off-the-shelf) in Isabelle/HOL is classical (monotonic). In fact, the shallow semantical embedding of non-classical object logics reuses the consequence relation (i.e. the proof methods) of the meta-logic. Embedding a non-monotonic logic in Isabelle/HOL can certainly be done (e.g. by deep embeddings or by explicit modeling of a non-monotonic consequence relation), but we are not currently pursuing such an approach, since this would be more complex from a user perspective and would also take a toll on the performance of automated tools. In this respect we have chosen to treat arguments as deductions, thus locating all fallibility of an argument in its (sometimes implicit) premises.
Sources for this case study have been made available online (https://github.com/davfuenmayor/CE-Debate). We encourage the interested reader to try out (and improve on) this work.
2 Framework
In previous work on the logical analysis of argumentative discourse, we have presented an interpretive approach named computational hermeneutics, amenable to partial mechanization using three kinds of automated reasoning technology: (i) theorem provers, which tell us whether a (formalized) claim logically follows from a set of assumptions; (ii) model finders, which give us (counter-)examples for formulas in the context of a background set of assumptions; and (iii) so-called "hammers", which automatically invoke (i) so as to find minimal sets of relevant premises sufficient to derive a claim, whose consistency can later be verified by (ii). We exemplified this approach by using some implementations of (i-iii) for higher-order logic provided by the Isabelle/HOL proof assistant. In computational hermeneutics, we work iteratively on an argument by choosing (tentatively at first) a logic for formalization and then working back and forth on the formalization of its premises and conclusion, while getting real-time feedback about the suitability of our choices (including the chosen logic) from a proof assistant. In particular, following the interpretive "principle of charity" [10], we aim at formalizations which render the argument as logically valid, while having a consistent and minimal set of assumptions. These actions are to be repeated until arriving at a state of reflective equilibrium: a state where our arguments and claims have the highest degree of coherence and acceptability according to syntactic and, particularly, inferential criteria of adequacy (see [14,13]).
Drawing upon the literature on structured argumentation graphs, in particular on Besnard and Hunter's work [5], we conceive an argument as a pair consisting of (i) a set of formulas (premises), from which (ii) another formula (conclusion) logically follows according to a previously chosen logic for formalization. Besnard and Hunter further introduce and interrelate different kinds of attack relations between arguments (defeaters, undercuts, and rebuttals; cf. [5]), which can all be subsumed, as we do, by considering an attack between (a set of) arguments A and B as the inconsistency of the set of formulas formed by the conclusion(s) of A together with the premises of B. Drawing upon the work of Cayrol and Lagasquie-Schiex on bipolar argumentation frameworks (BAF) [9], we also consider support relations between arguments. The original support notion of BAFs will also be extended to the case where two (or more) arguments jointly support another one (as happens with arguments A47 and A48 jointly supporting A22 in our case study). To put it more formally:
⁵ HOL, also known as Church's type theory, is a logic of functions formulated on top of the simply typed lambda-calculus, which also provides a foundation for functional programming [2].
Definition 1. A (deductive) argument is an ordered pair ⟨φ, α⟩, where φ ⊢(L) α for some chosen logic L (which may not be explicitly mentioned). φ is the support, or premises/assumptions, of the argument, and α is the claim, or conclusion, of the argument. Other constraints we set on arguments are consistency: φ has to be logically consistent (according to the chosen logic L); and minimality: there is no ψ ⊂ φ such that ψ ⊢ α. For an argument A = ⟨φ, α⟩ the function Premises(A) returns φ and Conclusion(A) returns (a singleton set containing) α. Note that while every pair ⟨φ, α⟩ can be seen as a candidate argument during the process of formal reconstruction, only those pairs which satisfy the given constraints are considered as arguments proper.
Definition 2. An argument A attacks (is a defeater of) B iff the set Conclusion(A) ∪ Premises(B) is inconsistent. Notice that this definition subsumes the more traditional one for classical logic, Conclusion(A) ⊢ ¬X for some X ∈ Premises(B), while allowing for paraconsistent formalization logics where explosion (inconsistency) does not necessarily follow from pairs of contradictory formulas. This definition can be seamlessly extended to two (or more) arguments: A1 and A2 (jointly) attack B iff the set Conclusion(A1) ∪ Conclusion(A2) ∪ Premises(B) is inconsistent.
Definition 3. An argument A supports B iff Conclusion(A) ⊢ X for some X ∈ Premises(B). This definition can be seamlessly extended to two (or more) arguments: A1 and A2 (jointly) support B iff Conclusion(A1) ∪ Conclusion(A2) ⊢ X for some X ∈ Premises(B).
We want to highlight the similarity in spirit between our approach and Besnard and Hunter's [5] "descriptive approach" to reconstructing argument graphs (from natural language sources), where we have some abstract argument graph as the input, together with some informal text description of each argument. Thus, the task becomes to find the appropriate logical formulas for the premises and conclusion of each argument, compatible with the choice of the logic of formalization. As will become clear when analyzing our case study in section 3, there is a need for finding appropriate "implicit" premises which render the individual arguments logically valid and additionally honor their intended dialectical role in the input abstract graph (i.e., attacking or supporting other arguments). This interpretive aspect, in particular, has been emphasized in our computational hermeneutics approach [14,13], as well as the possibility of modifying the input abstract argument graph as new insights, resulting from the formalization process, appear. In their exposition of structured argumentation (see, e.g., [5]) Besnard and Hunter duly highlight the fact that "richer" logic formalisms (i.e., more expressive than "rule-based" ones like, e.g., logic programming) are more appropriate for reconstructing "real-world arguments". Such representational and interpretive issues are tackled in our approach by the use of different (combinations of) non-classical and higher-order logics for formalization. For this we utilize the shallow semantical embeddings (SSE) approach to combining logics [3]. SSE exploits HOL as a meta-logic in order to embed the syntax and semantics of diverse object logics (e.g. modal, deontic, paraconsistent), thereby turning theorem proving systems for higher-order logics into universal reasoning engines [1].
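To give a concrete flavor of the technique, here is a minimal sketch of how a propositional modal logic can be shallowly embedded in HOL. The names (w, R, mnot, mimp, mdia, valid) are our illustrative choices for this exposition; the actual embedding employed in the case study is part of the online sources.

theory SSE_Sketch imports Main
begin
typedecl w  (* type of possible worlds *)
consts R :: "w ⇒ w ⇒ bool"  (* accessibility relation between worlds *)
(* connectives of the embedded modal logic, lifted pointwise over worlds *)
definition mnot :: "(w ⇒ bool) ⇒ (w ⇒ bool)" where "mnot φ ≡ λv. ¬ φ v"
definition mimp :: "(w ⇒ bool) ⇒ (w ⇒ bool) ⇒ (w ⇒ bool)" where "mimp φ ψ ≡ λv. φ v ⟶ ψ v"
(* possibility: φ holds in some world accessible from the current one *)
definition mdia :: "(w ⇒ bool) ⇒ (w ⇒ bool)" where "mdia φ ≡ λv. ∃u. R v u ∧ φ u"
(* modal validity, written [⊢φ] below: truth in all worlds *)
definition valid :: "(w ⇒ bool) ⇒ bool" where "valid φ ≡ ∀v. φ v"
end

Object-level propositions such as CEisWrong in the case study below then simply inhabit the type w ⇒ bool, and a statement of the form [⊢φ] corresponds to valid φ; the proof machinery of Isabelle/HOL thereby becomes directly available for the embedded modal logic.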
[Figure: a graph over nodes A22, A45–A51; A45, A46 and A49 each support A22, A47 and A48 jointly support A22 (the joint edge marked *), A50 attacks A48 and A49, and A51 attacks A49.]
Fig. 1. Abstract argumentation network for the ethics of risk cluster in the CE debate (arrows labeled with @ indicate attack); the * indicates a joint support.
3 Case Study
3.1 Individual (Component) Arguments
As has been observed by Betz and Cacean [6], incalculable side-effects and imponderables constitute one of the main reasons against CE technology deployment. Thus, arguments from the ethics of risk primarily support the thesis "CE deployment is morally wrong" (named T9 in [6]) and make for an argument cluster with a non-trivial dialectical structure, which we aim at reconstructing in this section. We focus on six arguments from the ethics of risk, which entail that the deployment of CE technologies (today as in the future) is not desirable because of being morally wrong (argument A22). Supporting arguments of A22 are: A45, A46, A47, A48, A49 (using the original notation in Betz and Cacean's work [6]). In particular, two of these arguments, namely A48 and A49, are further attacked by A50 and A51.⁶
Ethics of Risk Argument (A22) The argument has as premise: "CE deployment is morally wrong" and as conclusion: "CE deployment is not desirable". Notice that both are formalized as (modally) valid propositions, i.e., true in all possible worlds or situations. We are thus presupposing a possible-worlds semantics for our logic of formalization while restricting ourselves, for the time being, to a propositional logic (to keep it simple).
⁶ We strive to remain as close as possible to the original argument network as introduced by Betz and Cacean [6] (with one exception concerning the dialectical relation among arguments A47, A48, A50 and A22, which will be commented upon later on). The reader will notice that some of the arguments could have been merged together. However, Betz and Cacean have deliberately decided not to do so. We conjecture that this is due to traceability concerns, given the fact that most arguments have been compiled from different bibliographic sources and authors. See [9] and [17] for a discussion on this issue.
Also notice that we introduce two new, uninterpreted propositional constants ("CEisWrong" and "CEisNotDesirable") and interrelate them by means of an implicit premise (A22-P2), but without further constraining their meaning at this stage of the modeling process. In general, term meanings (understood as their inferential roles) will gradually become determined as we add other companion arguments to the analysis.

Since this is the first argument to be represented in the proof assistant Isabelle in this work, we will pay special attention to the syntactic elements used for its formulation in the system. First notice that we use the keyword consts to introduce two non-interpreted constants; their type is w⇒bool, which corresponds to the type for characteristic functions of sets of worlds (of type w).

consts CEisWrong::"w⇒bool" — type for world-contingent propositional constants
consts CEisNotDesirable::"w⇒bool"
Now we use Isabelle's keyword definition to introduce interpreted constants (of Boolean type). The first two definitions introduce the premises of the argument, labeled A22-P1 and A22-P2, and the last one introduces its conclusion, labeled A22-C.⁷ We introduce an equivalence between two formulas (by employing the symbol ≡) with the definiendum on its left-hand side and the definiens on its right-hand side. The expression [⊢P] for some proposition P stands for modal validity, i.e., truth in all worlds, formalized as: ∀w. P(w) (not shown).

definition A22-P1 ≡ [⊢ CEisWrong]
definition A22-P2 ≡ [⊢ CEisWrong → CEisNotDesirable]
definition A22-C ≡ [⊢ CEisNotDesirable]
Below we employ the model finder Nitpick [7] to find a model satisfying both premises and conclusion of the formalized argument. This shows consistency.

lemma assumes A22-P1 and A22-P2 and A22-C shows True
  nitpick [satisfy] oops — Nitpick presents a simple model (not shown)
This first argument (A22) serves as a quite straightforward illustration of the role of implicit, unstated premises in enabling the reconstruction of a candidate argument as a valid argument (proper). Since, in our approach, we treat arguments as deductions, we will encode them as meta-logical theorems stating that a formula (conclusion) logically follows from a collection of other formulas (premises) in this form: φ1, . . ., φn ⊢ α (recall Definition 1 in section 2); which is encoded using Isabelle notation as assumes φ1 and . . . φn shows α.⁸
⁷ Notice that we will keep this same suffix convention throughout this work.
⁸ Notice the similarity to sequents in Gentzen-type deductive systems. In fact, Isabelle/HOL's meta-logic is based upon (higher-order) Gentzen-type natural deduction. It is also worth mentioning that our implementation in Isabelle/HOL handles arguments as (sequent-like) inferences independently from each other. This is different from having the premises of all arguments as axioms in one and the same theory resp. knowledge base and drawing conclusions as theorems. In our approach, two arguments with mutually inconsistent premises will not cause any problems nor trivialize anything. In the same vein, conflicting arguments with the same explicit premises are also possible; the cause of the conflicting conclusions is to be found in additional (implicit) premises.
In this first example, we utilize the tableaux-based prover blast to verify that the conclusion follows from the premises.

theorem A22-valid: assumes A22-P1 and A22-P2 shows A22-C
  using A22-C-def A22-P2-def A22-P1-def assms(1) assms(2) by blast
Termination Problem (A45) CE measures do not possess viable exit options. If deployment is terminated abruptly, catastrophic climate change ensues.⁹ Notice that we add as implicit premise (A45-P1) that there is a real possibility of CE interventions being terminated abruptly.

consts CEisTerminated::"w⇒bool" — world-contingent propositional constants
consts CEisCatastrophic::"w⇒bool"

definition A45-P1 ≡ [⊢ ◇CEisTerminated] — additional (implicit) premise
definition A45-P2 ≡ [⊢ CEisTerminated → CEisCatastrophic]
definition A45-C ≡ [⊢ ◇CEisCatastrophic]

Notice that we have introduced in the above formalization the modal operator ◇ to signify that a proposition is possibly true (e.g. at a future point in time).

theorem A45-valid: assumes A45-P1 and A45-P2 shows A45-C
  using A45-C-def A45-P1-def A45-P2-def assms(1) assms(2) by blast
No Long-term Risk Control (A46) Our social systems and institutions are possibly not capable of controlling risk technologies on long time scales and of ensuring that they are handled with proper technical care [6]. Notice that we can make best sense of this objection as (implicitly) presupposing a risk of CE-caused catastrophes (A46-P2).

consts RiskControlAbility::"w⇒bool"

definition A46-P1 ≡ [⊢ ◇¬RiskControlAbility]
definition A46-P2 ≡ [⊢ ¬RiskControlAbility → ◇CEisCatastrophic] — implicit
definition A46-C ≡ [⊢ ◇CEisCatastrophic]

As before, we can use automated tools to find further implicit premises, which may actually correspond to modifications to the logic of formalization. In fact, the argument A46 needs a (stronger) modal logic K4 to succeed, so the corresponding additional premise is: Ax4 ≡ [⊢ ∀φ. □φ → □□φ] (which can be read intuitively as: "necessary propositions are so, necessarily", corresponding to transitivity of the accessibility relation, cf. possible-worlds semantics for modal logic).

lemma assumes A46-P1 and A46-P2 shows A46-C
  nitpick oops — counterexample found (not shown – modal axiom 4 is required)

theorem A46-valid: assumes A46-P1 and A46-P2 and Ax4 shows A46-C
  using A46-C-def A46-P1-def A46-P2-def assms(1) assms(2) assms(3) by blast
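In the SSE setting, axiom 4 can equivalently be imposed as a frame condition on the accessibility relation rather than as a formula schema. A minimal sketch, reusing the illustrative accessibility relation R from the embedding sketch in section 2 (again our naming, not the case study sources):

(* transitivity of the accessibility relation R; under possible-worlds
   semantics this frame condition corresponds to modal axiom 4 *)
abbreviation Ax4tr :: bool where "Ax4tr ≡ ∀x y z. R x y ⟶ R y z ⟶ R x z"

Intuitively, the premises of A46 only yield the nested possibility ◇◇CEisCatastrophic; transitivity is what allows collapsing it to ◇CEisCatastrophic, which is why Nitpick finds a countermodel in plain modal logic K while blast succeeds once axiom 4 is assumed.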
⁹ Cf. Betz and Cacean's work [6] for sources for these and other proposed theses and arguments in the CE debate.
CE Interventions are Irreversible (A47) As presented in [6], this argument consists of a simple sentence (its conclusion), which states that CE represents an irreversible intervention, i.e., that once the first interventions in the world's climate have been set in motion, there is no way to "undo" them. In the following arguments we work with a predicate logic (including quantification), and thus introduce an additional type ("e") for actions (interventions).

typedecl e — introduces a new type for actions
consts CEAction::"e⇒w⇒bool" — notice type for (world-dependent) predicates
consts Irreversible::"e⇒w⇒bool"

definition A47-C ≡ [⊢ ∀I. CEAction(I) → Irreversible(I)]
No Ability to Retain Options after Irreversible Interventions (A48) Irreversible interventions (of any kind) narrow the options of future generations in an unacceptable way, i.e., it is wrong to carry them out [6].

consts WrongAction::"e⇒w⇒bool"

definition A48-C ≡ [⊢ ∀I. Irreversible(I) → WrongAction(I)]
Unpredictable Side-Effects are Wrong (A49) As long as side-effects of CE technologies cannot be reliably predicted, their deployment is morally wrong [6]. A49-P2 suggests that interventions with unpredictable side-effects are wrong.

consts USideEffects::"e⇒w⇒bool"

definition A49-P1 ≡ [⊢ ∀I. CEAction(I) → USideEffects(I)]
definition A49-P2 ≡ [⊢ ∀I. USideEffects(I) → WrongAction(I)] — implicit
definition A49-C ≡ [⊢ ∀I. CEAction(I) → WrongAction(I)]

theorem A49-valid: assumes A49-P1 and A49-P2 shows A49-C
  using A49-C-def A49-P1-def A49-P2-def assms(1) assms(2) by blast
Mitigation is also Irreversible (A50) Mitigation of climate change (i.e., the "preventive alternative" to CE), too, is, at least to some extent, an irreversible intervention with unforeseen side-effects [6].

consts Mitigation::e — constant of the same type as actions/interventions

definition A50-C ≡ [⊢ Irreversible(Mitigation) ∧ USideEffects(Mitigation)]
All Interventions have Unpredictable Side-Effects (A51) This defense of CE states that we never completely foresee the consequences of our actions (anyway), and thus aims at somehow trivializing the concerns regarding unforeseen side-effects of CE.

definition A51-C ≡ [⊢ ∀I. USideEffects(I)]
3.2 Reconstructing the Argument Graph
The claim that an argument (or a set of arguments) attacks resp. supports another argument is, in our approach, conceived as an argument in itself, which also needs to be reconstructed as logically valid by (possibly) adding implicit premises. Below we introduce our generalized attack resp. support relations between arguments along the lines of structured and bipolar argumentation (cf. [5] and [9] respectively; and also recall Definition 2 and Definition 3 in section 2).¹⁰

abbreviation attacks1 φ ψ ≡ (φ ∧ ψ) ⟶ False — for one attacker
abbreviation supports1 φ ψ ≡ φ ⟶ ψ — for one supporter
abbreviation attacks2 γ φ ψ ≡ (γ ∧ φ ∧ ψ) ⟶ False — for two attackers
abbreviation supports2 γ φ ψ ≡ (γ ∧ φ) ⟶ ψ — for two supporters
Does A45 support A22? In this example, as in others, we have utilized three kinds of automated tools integrated into Isabelle: the model finder Nitpick [7], which finds a counterexample to the claim that A45 supports A22 (without further implicit premises); the tableaux-based prover blast,¹¹ which can indeed verify that by adding an implicit premise (if CE is possibly catastrophic then its deployment is wrong) the support relation obtains; and the "hammer" tool Sledgehammer [8], which automagically finds minimal sets of assumptions needed to prove a theorem. Let us recall the corresponding definitions: A45-C ≡ [⊢ ◇CEisCatastrophic] and A22-P1 ≡ [⊢ CEisWrong].

lemma supports1 A45-C A22-P1 nitpick oops — countermodel found

theorem assumes [⊢ ◇CEisCatastrophic → CEisWrong] — implicit
  shows supports1 A45-C A22-P1 using A22-P1-def A45-C-def assms(1) by blast
Does A46 support A22? The same implicit premise as before is needed (recall the definition: A46-C ≡ [⊢ ◇CEisCatastrophic]).

lemma supports1 A46-C A22-P1 nitpick oops — countermodel found

theorem assumes [⊢ ◇CEisCatastrophic → CEisWrong] — implicit
  shows supports1 A46-C A22-P1 using A22-P1-def A46-C-def assms(1) by blast
Do A47 and A48 (together) support A22? Here we have diverged from the argument network as introduced in Betz and Cacean [6], where A48 is rendered as an argument supporting A47. We claim that our reconstruction is more faithful to the given natural language description of the arguments and also better represents their intended dialectical relations. Also notice that an implicit premise is needed to reconstruct this support relation as logically valid, namely that if every CE action is wrong, then deployment of CE is wrong. (Let us recall again the definitions: A47-C ≡ [⊢ ∀I. CEAction(I) → Irreversible(I)] and A48-C ≡ [⊢ ∀I. Irreversible(I) → WrongAction(I)].)
¹⁰ Notice that we use Isabelle's keyword abbreviation to introduce these definitions as "syntactic sugar".
¹¹ This is one prover among several others integrated into Isabelle [16].
lemma supports2 A47-C A48-C A22-P1 nitpick oops — countermodel found

theorem assumes [⊢ ∀I. CEAction(I) → WrongAction(I)] ⟶ [⊢ CEisWrong]
  shows supports2 A47-C A48-C A22-P1
  using A22-P1-def A47-C-def A48-C-def assms(1) by blast
Does A49 support A22? Note that the previous implicit premise is needed too (recall the definition: A49-C ≡ [⊢ ∀I. CEAction(I) → WrongAction(I)]).

lemma supports1 A49-C A22-P1 nitpick oops — countermodel found

theorem assumes [⊢ ∀I. CEAction(I) → WrongAction(I)] ⟶ [⊢ CEisWrong]
  shows supports1 A49-C A22-P1 using A22-P1-def A49-C-def assms(1) by blast
Does A50 attack both A48 and A49? Here, again, we diverge from Betz and Cacean's [6] original argument network. We think that, given the natural language description of the arguments, an attack relation between A50 and A48 is better motivated than between A50 and A47 (as originally presented). The indirect attack on the main thesis (conclusion of A22) persists, since A47 and A48 jointly support A22 (see above). Also notice that we employ an additional, implicit premise to reconstruct the attack relation, namely that mitigation of climate change is not a wrong action. (Let us recall again the corresponding definitions: A50-C ≡ [⊢ Irreversible(Mitigation) ∧ USideEffects(Mitigation)], A48-C ≡ [⊢ ∀I. Irreversible(I) → WrongAction(I)] and finally A49-P2 ≡ [⊢ ∀I. USideEffects(I) → WrongAction(I)].)

lemma attacks1 A50-C A48-C nitpick oops — countermodel found
lemma attacks1 A50-C A49-P2 nitpick oops — countermodel found

theorem assumes [⊢ ¬WrongAction(Mitigation)] — implicit premise
  shows attacks1 A50-C A48-C
  using A48-C-def A50-C-def assms(1) by blast

theorem assumes [⊢ ¬WrongAction(Mitigation)] — implicit premise
  shows attacks1 A50-C A49-P2
  using A49-P2-def A50-C-def assms(1) by blast
Does A51 attack A49? Notice that the previous additional premise is required again to reconstruct this attack relation as logically valid. (Recall the definitions: A49-P2 ≡ [⊢ ∀I. USideEffects(I) → WrongAction(I)] and A51-C ≡ [⊢ ∀I. USideEffects(I)].)

lemma attacks1 A51-C A49-P2 nitpick oops — countermodel found

theorem assumes [⊢ ¬WrongAction(Mitigation)] — implicit premise
  shows attacks1 A51-C A49-P2 using A49-P2-def A51-C-def assms(1) by blast
4 Challenges and Prospects
We are working on extending the current analysis to other argument clusters in the CE discourse, as presented in [6] (also drawing on more recent sources). An analysis at the abstract level, e.g. by using Dung's dialectic semantics [11], is also in sight (also extended with support relations, cf. BAF [9]). Preliminary experiments have shown that the expressivity of higher-order logic (HOL) indeed allows us to encode Dung's definitions for complete, grounded, preferred and stable semantics in Isabelle/HOL and to use automated tools for HOL to carry out computations. This can be very useful for prototyping tasks, as well as for reasoning with arguments at the abstract and structural level in an integrated fashion. Further work is necessary to obtain a satisfactorily usable and scalable implementation (see the sketch below).
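As an indication of feasibility, the following is a minimal sketch of what such an encoding of Dung's semantics [11] might look like (our illustrative naming; not the encoding used in our preliminary experiments). Extensions are modeled as characteristic functions of sets of abstract arguments:

theory Dung_Sketch imports Main
begin
typedecl arg  (* abstract arguments *)
consts att :: "arg ⇒ arg ⇒ bool"  (* attack relation *)
(* S defends a iff every attacker of a is counter-attacked by some member of S *)
definition defends :: "(arg ⇒ bool) ⇒ arg ⇒ bool" where
  "defends S a ≡ ∀b. att b a ⟶ (∃c. S c ∧ att c b)"
definition conflict_free :: "(arg ⇒ bool) ⇒ bool" where
  "conflict_free S ≡ ¬(∃a b. S a ∧ S b ∧ att a b)"
definition admissible :: "(arg ⇒ bool) ⇒ bool" where
  "admissible S ≡ conflict_free S ∧ (∀a. S a ⟶ defends S a)"
(* complete extensions contain exactly the arguments they defend *)
definition complete :: "(arg ⇒ bool) ⇒ bool" where
  "complete S ≡ admissible S ∧ (∀a. defends S a ⟶ S a)"
(* grounded extension: subset-minimal complete extension *)
definition grounded :: "(arg ⇒ bool) ⇒ bool" where
  "grounded S ≡ complete S ∧ (∀T. complete T ∧ (∀a. T a ⟶ S a) ⟶ (∀a. S a ⟶ T a))"
end

On small concrete attack graphs, model finders like Nitpick can then enumerate extensions satisfying such predicates, which is what makes this kind of encoding attractive for prototyping.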
We are further working on utilizing shallow semantic embeddings (SSE) of non-classical logics (modal, intensional, deontic, paraconsistent, among several others) into HOL in order to continue fostering a logico-pluralist approach towards the reconstruction of structured argument graphs (e.g. by employing attack resp. support relations parameterized with different base logics).
Concerning the prospects for a fully automated argument reconstruction process, it is worth mentioning that the initial step from natural language to formal representations lies outside our proposed framework. For example, in the presented case study we have "outsourced" the argumentation-mining task to the researchers who carried out the analysis (Betz and Cacean), while the semantic-parsing task was carried out "manually" by us. However, we are much impressed by recent progress in natural language processing (NLP) for these applications and follow with great interest the latest developments in the argumentation-mining community. Another important challenge concerns the problem of coming up with candidates for additional (implicit) premises that render an inference valid, which is an instance of the old problem of abduction. The evaluation of candidate formulas is indeed supported by our tool-set, e.g. (counter)model finders can determine (in)consistency automatically, and theorem provers and "hammers" help us verify validity using minimal sets of assumptions (also useful to identify "question-begging" ones). The creative part of coming up with (plausible) candidates is, however, still a task for humans in our approach. Abductive reasoning techniques for the kind of expressive logics we work with (e.g. intensional, first- and higher-order) remain, to the best of our knowledge, too limited to support full automation. We could reuse techniques and tools for some less expressive fragments of HOL (in cases where formalized arguments are bound to remain inside those fragments); but in general we strive for the finest granularity level in the semantic analysis (e.g. along the lines of Montague semantics [15]). With all its pros and cons, this is the distinguishing aspect of our approach.
Acknowledgements
We thank the anonymous reviewers for their valuable remarks and comments,
which significantly helped to improve the final version of this paper.
References

1. Benzmüller, C.: Universal (meta-)logical reasoning: Recent successes. Science of Computer Programming 172, 48–62 (2019)
2. Benzmüller, C., Andrews, P.: Church's type theory. In: Zalta, E.N. (ed.) The Stanford Encyclopedia of Philosophy. Metaphysics Research Lab, Stanford University, summer 2019 edn. (2019)
3. Benzmüller, C., Paulson, L.: Quantified multimodal logics in simple type theory. Logica Universalis (Special Issue on Multimodal Logics) 7(1), 7–20 (2013)
4. Besnard, P., Hunter, A.: Elements of Argumentation, vol. 47. MIT Press (2008)
5. Besnard, P., Hunter, A.: Constructing argument graphs with deductive arguments: a tutorial. Argument & Computation 5(1), 5–30 (2014)
6. Betz, G., Cacean, S.: Ethical Aspects of Climate Engineering. KIT Scientific Publishing (2012)
7. Blanchette, J.C., Nipkow, T.: Nitpick: A counterexample generator for higher-order logic based on a relational model finder. In: Proc. of ITP 2010. LNCS, vol. 6172, pp. 131–146. Springer (2010)
8. Blanchette, J.C., Böhme, S., Paulson, L.C.: Extending Sledgehammer with SMT solvers. Journal of Automated Reasoning 51(1), 109–128 (2013)
9. Cayrol, C., Lagasquie-Schiex, M.C.: Bipolar abstract argumentation systems. In: Rahwan, I., Simari, G. (eds.) Argumentation in AI, pp. 65–84. Springer (2009)
10. Davidson, D.: Inquiries into Truth and Interpretation: Philosophical Essays, vol. 2. Oxford University Press (2001)
11. Dung, P.M.: On the acceptability of arguments and its fundamental role in nonmonotonic reasoning, logic programming and n-person games. Artificial Intelligence 77(2), 321–357 (1995)
12. Dung, P.M., Kowalski, R.A., Toni, F.: Assumption-based argumentation. In: Argumentation in AI, pp. 199–218. Springer (2009)
13. Fuenmayor, D., Benzmüller, C.: A computational-hermeneutic approach for conceptual explicitation. In: Nepomuceno, A., Magnani, L., Salguero, F., Bares, C., Fontaine, M. (eds.) Model-Based Reasoning in Science and Technology – Inferential Models for Logic, Language, Cognition and Computation, SAPERE, vol. 49, pp. 441–469. Springer (2019)
14. Fuenmayor, D., Benzmüller, C.: Computational hermeneutics: An integrated approach for the logical analysis of natural-language arguments. In: Liao, B., Ågotnes, T., Wang, Y.N. (eds.) Dynamics, Uncertainty and Reasoning – The Second Chinese Conference on Logic and Argumentation, pp. 187–207. Logic in Asia: Studia Logica Library, Springer (2019)
15. Janssen, T.M.V.: Montague semantics. In: Zalta, E.N. (ed.) The Stanford Encyclopedia of Philosophy. Metaphysics Research Lab, Stanford University, spring 2020 edn. (2020)
16. Nipkow, T., Paulson, L.C., Wenzel, M.: Isabelle/HOL: A Proof Assistant for Higher-Order Logic, LNCS, vol. 2283. Springer (2002)
17. Prakken, H.: Modelling support relations between arguments in debates. In: Chesñevar, C., Falappa, M., Fermé, E. (eds.) Argumentation-based Proofs of Endearment. Essays in Honor of Guillermo R. Simari on the Occasion of his 70th Birthday, pp. 349–365. College Publications (2018)
