An Integrated Approach for the Logical Analysis
of Natural-Language Arguments
David Fuenmayor and Christoph Benzmüller
Abstract We utilize higher-order automated deduction technologies for the logi-
cal analysis of natural language arguments. Our approach, termed computational
hermeneutics, is grounded on recent progress in the area of automated theorem
proving for classical and non-classical higher-order logics, and it integrates tech-
niques from argumentation theory. It has been inspired by ideas in the philosophy of
language, especially semantic holism and Donald Davidson's radical interpretation: a systematic approach to interpretation that does justice to the inherent circularity
of understanding: the whole is understood compositionally on the basis of its parts,
while each part is understood only in the context of the whole (hermeneutic circle).
Computational hermeneutics is a holistic, iterative approach in which we evaluate the
adequacy of some candidate formalization of a sentence by computing (i) the logical
validity of the whole argument it appears in, and (ii) the dialectic role the argument
plays in some piece of discourse.
While there have been major advances in the field of automated theorem proving (ATP) in recent years, its main field of application has mostly remained confined to mathematics and hardware/software verification. We argue that the use of ATP in argumentation (particularly in philosophy) can also be very fruitful,1 not only because of the obvious quantitative advantages of automated reasoning tools (e.g. reducing by several orders of magnitude the time needed to test an argument's
Freie Universität Berlin
Freie Universität Berlin and University of Luxembourg
(Funded by VolkswagenStiftung under grant CRAP: Consistent Rational Argumentation in Politics.)
1See e.g. the results reported in [5, 12–14, 32].
validity), but also because it enables a novel approach to the logical analysis (aka.
formalization) of arguments, which we call computational hermeneutics.
As a result of reflecting upon previous work on the application of ATP for the computer-supported evaluation of arguments in metaphysics [5, 12–14, 30], we have become interested in developing a methodology for formalizing natural-language arguments with a view to their assessment using automated tools. Unsurprisingly, the problem of finding an/the adequate formalization of some piece of natural-language discourse turns out to be far from trivial. In particular, for expressive higher-order logical representations, this problem has already been tackled in the past without much practical success.2 In spite of the efforts made in this area, carrying out the logical analysis of natural language (particularly of arguments) continues to be regarded as a kind of artistic skill that cannot be standardized or taught methodically, aside from providing students with a handful of paradigmatic examples supplemented with commentaries.
Our research aims at improving this situation. By putting ourselves in the shoes of
an interpreter aiming at ‘translating’ some natural-language argument into a formal
representation, we have had recourse to the philosophical theories of radical translation and radical interpretation [22, 25] (the latter being a further development
of the former), in which a so-called radical translator (Quine) or interpreter (David-
son), without any previous knowledge of the speaker’s language, is able to ﬁnd a
translation (in her own language) of the speaker’s utterances. The interpreter does
this by observing the speaker’s use of language in context and also by engaging
(when possible) in some basic dialectical exchange with him/her (e.g. by making
utterances while pointing to objects or asking yes/no questions). In our proposed
approach, this exchange takes place between a human (seeking to translate ‘unfamil-
iar’ natural-language discourse into a ‘familiar’ logical formalism) and interactive
proof assistants. The questions we ask concern the logical validity, invalidity and
consistency of formulas and proofs (our translation-candidates).
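The kind of yes/no questions posed to the proof assistant can be mimicked, in a drastically simplified propositional setting, by a short Python sketch (our own illustration, not the authors' Isabelle machinery): a candidate formalization is accepted or rejected depending on whether it renders the argument valid.

```python
from itertools import product

def valid(premises, conclusion, atoms):
    """Classical validity: every truth assignment satisfying all premises
    must also satisfy the conclusion."""
    for values in product([False, True], repeat=len(atoms)):
        v = dict(zip(atoms, values))
        if all(p(v) for p in premises) and not conclusion(v):
            return False
    return True

# Two candidate formalizations of the premise "if it rains, the street gets
# wet" (atoms r = rain, w = wet street), tested against the toy argument
# "..., it rains; therefore the street is wet".
candidate_ok  = lambda v: (not v["r"]) or v["w"]   # r -> w
candidate_bad = lambda v: (not v["w"]) or v["r"]   # w -> r (direction confused)

it_rains   = lambda v: v["r"]
street_wet = lambda v: v["w"]

print(valid([candidate_ok, it_rains], street_wet, ["r", "w"]))   # True
print(valid([candidate_bad, it_rains], street_wet, ["r", "w"]))  # False
```

The second candidate is rejected precisely because it fails to sustain the inferential role the sentence plays in the argument.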
We also draw upon recent work aimed at providing adequacy criteria for the logical formalization of natural-language discourse (e.g. [4, 19, 43]), with a special emphasis on the work of Peregrin and Svoboda [43, 44], who, apart from providing syntactic and pragmatic (inferential) adequacy criteria, also tackle the problem of providing a systematic methodology for logical analysis. In this respect, they propose the method of reflective equilibrium,3 which is similar in spirit to the idealized scientific method
2See e.g. the research derived from Montague’s universal grammar program  and some of its
followers like Discourse Representation Theory (e.g. ) and Dynamic Predicate Logic (e.g. ).
3The notion of reflective equilibrium was initially proposed by Nelson Goodman  as an account of the justification of the principles of (inductive) logic and was popularized years later in political philosophy and ethics by John Rawls  for the justification of moral principles.
In Rawls’ account, reﬂective equilibrium refers to a state of balance or coherence between a set of
general principles and particular judgments (where the latter follow from the former). We arrive at
such a state through a deliberative give-and-take process of mutual adjustment between principles
and, additionally, has the virtue of approaching this problem in a holistic way: the adequacy of candidate formalizations for an argument's sentences is assessed by computing the argument's validity as a whole (which itself depends on the way we have so far formalized all of its constituent sentences).4 As we see it, this circle is a virtuous one: it does justice to holistic accounts of meaning drawing on the inferential role of sentences. As Davidson has put it:
“[...] much of the interest in logical form comes from an interest in logical geography: to give
the logical form of a sentence is to give its logical location in the totality of sentences, to describe
it in a way that explicitly determines what sentences it entails and what sentences it is entailed by.
The location must be given relative to a specific deductive theory; so logical form itself is relative to a theory.” [23, p. 140]
2 Radical Interpretation and the Principle of Charity
What is the use of radical interpretation in argumentation? The answer is given by Davidson himself, who argues that “all understanding of the speech of another involves radical interpretation” [22, p. 125]. Furthermore, the impoverished
evidential position we are faced with when interpreting some arguments (particu-
larly philosophical ones) corresponds very closely to the starting situation Davidson
contemplates in his thought experiments on radical interpretation, where he shows
how an interpreter could come to understand someone’s words and actions without
relying on any prior understanding of them. Davidson’s program builds on the idea
of taking the concept of truth as basic and extracting from it an account of inter-
pretation satisfying two general requirements: (i) it must reveal the compositional
structure of language, and (ii) it can be assessed using evidence available to the
interpreter [22, 24].
The first requirement (i) is addressed by noting that a theory of truth in Tarski's style (modified to apply to natural language) can be used as a theory of interpretation. This implies that, for every sentence s of some object language L, a sentence of the form «“s” is true in L iff p» (aka. T-schema) can be derived, where p acts as a translation of s into a sufficiently expressive language used for interpretation (note that in the T-schema the sentence p is being used, while s is only being mentioned). Thus, by virtue of the recursive nature of Tarski's definition of truth , the compositional structure of the object-language sentences is revealed. From the point of view of computational hermeneutics, the sentence s is to be interpreted in the context of a given argument (or a network of mutually attacking/supporting arguments). The language L thereby corresponds to the idiolect of the speaker (natural language), and
and judgments. More recent methodical accounts of reﬂective equilibrium have been proposed as
a justiﬁcation condition for scientiﬁc theories  and objectual understanding .
4In much the same spirit as Davidson's theory of meaning  and Quine's holism of theory (dis-)confirmation  in philosophy.
the target language is constituted by formulas of our chosen logic of formalization (some expressive logic XY) plus the turnstile symbol ⊢XY signifying that an inference (argument or argument step) is valid in logic XY. As an illustration, consider the following instance of the T-schema:

«“Fishes are necessarily vertebrates” is true [in English, in the context of argument A] iff A1, A2, ..., An ⊢MLS4 “□∀x. Fish(x) → Vertebrate(x)”»

where A1, A2, ..., An correspond to the formalizations of the premises of argument A and the turnstile ⊢MLS4 corresponds to the standard logical consequence relation in the chosen logic of formalization, e.g. a modal logic S4 (MLS4).5 This toy example
aims at illustrating how the interpretation of a sentence relates to its logic of formal-
ization and to the inferential role it plays in a single argument. Moreover, the same
approach can be extended to argument networks. In such cases, instead of using the notion of logical consequence (represented above as the parameterized logical turnstile ⊢XY), we can work with the notion of argument support. It is indeed possible to parameterize the notions of support and attack common in argumentation theory with the logic used for an argument's formalization (see the example in section 5).
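The idea of a turnstile parameterized by the logic of formalization can be conveyed by a small Python sketch (a toy illustration of ours, not part of the authors' framework): the very same formula may count as valid under one consequence relation and fail under another, here classical two-valued versus strong-Kleene three-valued validity.

```python
from itertools import product

def tautology_classical(formula, atoms):
    """Classical validity: value 1 under every two-valued assignment."""
    return all(formula(dict(zip(atoms, vs))) == 1
               for vs in product([0, 1], repeat=len(atoms)))

def tautology_k3(formula, atoms):
    """Strong-Kleene validity: value 1 under every three-valued assignment
    (0 = false, 0.5 = undefined, 1 = true)."""
    return all(formula(dict(zip(atoms, vs))) == 1
               for vs in product([0, 0.5, 1], repeat=len(atoms)))

# Excluded middle p ∨ ¬p, encoding ∨ as max and ¬x as 1 - x.
lem = lambda v: max(v["p"], 1 - v["p"])

print(tautology_classical(lem, ["p"]))  # True: classically valid
print(tautology_k3(lem, ["p"]))         # False: fails when p is 0.5
```

The two checking functions here play the role of two differently indexed turnstiles; an interpreter can hold the natural-language sentence fixed and vary the logic instead.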
The second general requirement (ii) of Davidson’s account of radical interpretation
states that the interpreter has access to objective evidence in order to judge the appro-
priateness of her interpretations, i.e., access to the events and objects in the ‘external
world’ that cause sentences to be true (or, in our case, arguments to be valid). In our
approach, formal logic serves as a common ground for understanding. Computing
the logical validity of a formalized argument constitutes the kind of objective (or,
more appropriately, intersubjective) evidence needed to secure the adequacy of our interpretations, under the charitable assumption that the speaker follows (or at least accepts) logical rules similar to ours. In computational hermeneutics, the computer acts as an (arguably unbiased) arbiter deciding on the truth of a sentence in the context of an argument.
A central concept in Davidson’s account of radical interpretation is the principle of
charity, which he holds as a condition for the possibility of engaging in any kind of
interpretive endeavor. The principle of charity has been summarized by Davidson
by stating that “we make maximum sense of the words and thoughts of others when we interpret in a way that optimizes agreement” [24, p. 197]. Hence the principle builds on the possibility of intersubjective agreement about external facts between speaker and interpreter. The principle of charity can be invoked to make sense of a
speaker’s ambiguous utterances and, in our case, to presume (and foster) the validity
of an argument. Consequently, in computational hermeneutics we assume from the
outset that the argument’s conclusions indeed follow from its premises and disregard
formalizations that do not do justice to this postulate.
5As described below, using the technique of semantical embeddings  (cf. also  and the
references therein) allows us to work with several diﬀerent non-classical logics (modal, temporal,
deontic, intuitionistic, etc.) while reusing existing higher-order reasoning infrastructure.
3 Holistic Approach: Why Feasible Now?
Following a holistic approach to logical analysis was, until very recently, not feasible in practice, since it involves an iterative process of trial and error in which the adequacy of some candidate formalization for a sentence is tested by computing the logical validity of the whole argument. In order to explore the vast space of possible formalizations for even the simplest argument, we have to test its validity at least several hundred times (also to account for logical pluralism). This is where the recent improvements and ongoing consolidation of modern automated theorem proving technology (for propositional logic, first-order logic and in particular also higher-order logic) come in handy.
To get an idea of this, let us imagine the following scenario: A philosopher working
on a formal argument wants to test a variation on one of its premises or deﬁnitions
and find out if the argument still holds. Since our philosopher is working with pen and paper, she will have to follow some kind of proof procedure (e.g. a tableau or natural-deduction calculus), which, depending on her calculation skills, may take several minutes to carry out. It seems clear that she cannot afford many such experiments under these conditions.
Now compare the above scenario to another one in which our working philosopher
can carry out such an experiment in just a few seconds and with no eﬀort, by em-
ploying an automated theorem prover. In a best-case scenario, the proof assistant
would automatically generate a proof (or the sketch of a countermodel), so she just
needs to interpret the results and use them to inform her new conjectures. In any
case, she would at least know if her speculations had the intended consequences, or
not. After some minutes of work, she will have tried plenty of diﬀerent variations of
the argument while getting real-time feedback regarding their suitability.6
We aim at showing how this radical quantitative increase in productivity does indeed
entail a qualitative change in the way we approach formal argumentation, since it
allows us to take things to a whole new level (note that we are talking here of many
hundreds of such trial-and-error ‘experiments’ that would take months or even years
if using pen and paper only). Most importantly, this qualitative leap opens the door
for the possibility of fully automating the process of argument formalization, as
it allows us to compute inferential (holistic) adequacy criteria of formalization in
real-time. Consider as an example Peregrin and Svoboda's [43, 44] proposed criteria:
(i) The principle of reliability: “φ counts as an adequate formalization of the sentence S in the logical system L only if the following holds: If an argument form in which
6The situation is obviously idealized, since, as is well known, most of theorem-proving problems
are computationally complex and even undecidable, so in many cases a solution will take several
minutes or just never be found. Nevertheless, as work in the emerging ﬁeld of computational
metaphysics [12–14, 29, 30, 48] suggests, the lucky situation depicted above is not rare and will
further improve in the future.
φ occurs as a premise or as the conclusion is valid in L, then all its perspicuous natural language instances in which S appears as a natural language instance of φ are intuitively correct arguments.”
(ii) The principle of ambitiousness: “φ is the more adequate formalization of the sentence S in the logical system L the more natural language arguments in which S occurs as a premise or as the conclusion, which fall into the intended scope of L and which are intuitively perspicuous and correct, are instances of valid argument forms of S in which φ appears as the formalization of S.” [44, pp. 70–71].
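The reliability principle lends itself to mechanical checking. The Python sketch below is our own simplified rendering: the "corpus" pairing argument forms with intuitive correctness verdicts is a hypothetical stand-in for the interpreter's judgments, and classical truth tables stand in for the theorem prover.

```python
from itertools import product

def valid(premises, conclusion, atoms):
    """Classical propositional validity via truth tables."""
    return all(conclusion(dict(zip(atoms, vs)))
               for vs in product([False, True], repeat=len(atoms))
               if all(p(dict(zip(atoms, vs))) for p in premises))

def reliable(corpus, atoms):
    """A formalization fails the reliability test if some formally valid
    argument in the corpus has an intuitively incorrect instance."""
    return all(ok for premises, conclusion, ok in corpus
               if valid(premises, conclusion, atoms))

p = lambda v: v["p"]
q = lambda v: v["q"]
p_implies_q = lambda v: (not v["p"]) or v["q"]

# Faithful formalization: its valid form has an intuitively correct instance.
corpus_good = [([p_implies_q, p], q, True)]
# Bad formalization (a conditional sentence flattened to the atom q): the
# trivially valid form q |- q now has an intuitively incorrect instance.
corpus_bad = [([q], q, False)]

print(reliable(corpus_good, ["p", "q"]))  # True
print(reliable(corpus_bad, ["p", "q"]))   # False
```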
The evaluation of such inferential criteria clearly involves automatically comput-
ing the logical validity or consistency of formalized arguments (proofs). This is the
kind of work automated theorem provers are built for. Moreover, our focus on theo-
rem provers for higher-order logics is motivated by the notion of logical pluralism.
Computational hermeneutics targets the utilization of diﬀerent kinds of classical and
non-classical logics through the technique of semantical embeddings  (cf. also 
and the references therein), which allows us to take advantage of the expressive power
of classical higher-order logic (HOL) as a metalanguage in order to embed the syntax
and semantics of another logic as an object language. Using (shallow) semantical
embeddings we can, for instance, embed a modal logic by deﬁning the modal and
♦operators as meta-logical predicates in HOL and using quantiﬁcation over sets of
objects of a deﬁnite type w, representing the type of possible worlds or situations.
This gives us two important beneﬁts: (i) we can reuse existing automated theorem
proving technology for HOL and apply it for automated reasoning in non-classical
logics (e.g. free, modal, temporal or deontic logics); and (ii) the logic of formaliza-
tion becomes another degree of freedom and thus can be ﬁne-tuned dynamically by
adding/removing axioms in our metalanguage: HOL. A framework for automated
reasoning in diﬀerent logics by applying the technique of semantical embeddings
has been successfully implemented using automated theorem proving technology
(see e.g. [6, 30, 33]).
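The gist of such an embedding, modal formulas as truth-sets and the modal operators as higher-order functions over them, can be conveyed by a small Python analogue over a fixed finite Kripke model (our illustration only; the authors work in HOL, and a fixed model checks satisfiability in that model rather than validity over all frames):

```python
W = [0, 1, 2]                      # possible worlds
R = {(0, 0), (0, 1), (1, 2)}       # a sample accessibility relation

def Box(phi):                      # □φ ≡ λw. ∀v. wRv ⟶ φ(v)
    return lambda w: all(phi(v) for v in W if (w, v) in R)

def Dia(phi):                      # ◇φ ≡ λw. ∃v. wRv ∧ φ(v)
    return lambda w: any(phi(v) for v in W if (w, v) in R)

def Impl(phi, psi):                # object-level implication, world-wise
    return lambda w: (not phi(w)) or psi(w)

def valid(phi):                    # ⌊φ⌋: φ holds at every world
    return all(phi(w) for w in W)

p = lambda w: w != 2               # truth-sets of two sample atoms
q = lambda w: w >= 1

# Axiom K, □(p→q) → (□p→□q), holds in any frame:
print(valid(Impl(Box(Impl(p, q)), Impl(Box(p), Box(q)))))  # True
# p → □p is not valid in this model (world 1 sees world 2, where p fails):
print(valid(Impl(p, Box(p))))                               # False
```

Adding or removing conditions on R then plays the role described in the text: the logic of formalization becomes a tunable parameter.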
The following two sections illustrate some exemplary applications of the computa-
tional hermeneutics approach. They have been implemented using the Isabelle/HOL
 proof assistant for classical higher-order logic, aka. Church’s type theory .
4 Logical Analysis of Individual Structured Arguments
A ﬁrst application of computational hermeneutics for the logical analysis of ar-
guments has been presented in  (with its corresponding Isabelle/HOL sources
available in ). In that work, a modal variant of the ontological argument for the
existence of God, introduced in natural language by the philosopher E. J. Lowe ,
has been iteratively analyzed using our computational hermeneutics approach and,
as a result, a ’most’ adequate formalization has been found. In a series of iterations
(seven in total) Lowe's argument has been formally reconstructed using slightly different sets of premises and logics, and the partial results have been compiled and
presented each time as a new variant of the original argument. We aimed at illus-
trating how Lowe’s argument, as well as our understanding of it, gradually evolves
as we experiment with diﬀerent combinations of (formalized) deﬁnitions, premises
and logics for formalization. We quote from  the following paragraph which best
summarizes the methodological approach taking us from a natural-language argu-
ment to its corresponding adequate logical formalization (and refer the interested
reader to  and  for further details):
“We start with formalizations of some simple statements (taking them as tentative) and use them as stepping stones on the way to the formalization of the argument's other sentences, repeating
the procedure until arriving at a state of reﬂective equilibrium: A state where our beliefs and com-
mitments have the highest degree of coherence and acceptability. In computational hermeneutics,
we work iteratively on an argument by temporarily ﬁxing truth-values and inferential relations
among its sentences, and then, after choosing a logic for formalization, working back and forth
on the formalization of its premises and conclusions by making gradual adjustments while getting
automatic feedback about the suitability of our speculations. In this fashion, by engaging in a dialectic question-and-answer (‘trial-and-error’) interaction with the computer, we work our way
towards a proper understanding of an argument by circular movements between its parts and the
whole (hermeneutic circle).”
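The give-and-take loop described in this quotation can be caricatured in a few lines of Python (a deliberately crude sketch of ours: the real workflow uses higher-order provers and human judgment, not truth tables): candidate premise sets are tried until one is found that is both consistent and renders the argument valid.

```python
from itertools import product

def assignments(atoms):
    for vs in product([False, True], repeat=len(atoms)):
        yield dict(zip(atoms, vs))

def valid(premises, conclusion, atoms):
    return all(conclusion(v) for v in assignments(atoms)
               if all(p(v) for p in premises))

def consistent(premises, atoms):
    return any(all(p(v) for p in premises) for v in assignments(atoms))

def hermeneutic_search(candidates, conclusion, atoms):
    """Return the name of the first candidate formalization that is
    consistent and makes the argument valid (a crude stand-in for
    reaching reflective equilibrium)."""
    for name, premises in candidates:
        if consistent(premises, atoms) and valid(premises, conclusion, atoms):
            return name
    return None

r = lambda v: v["r"]
w = lambda v: v["w"]
r_implies_w = lambda v: (not v["r"]) or v["w"]

candidates = [("too weak", [r]),                 # r |- w is invalid
              ("charitable", [r, r_implies_w])]  # r, r->w |- w is valid
print(hermeneutic_search(candidates, w, ["r", "w"]))  # charitable
```

The "charitable" candidate wins precisely because, as the principle of charity demands, it is the first reading that makes the argument come out valid.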
5 Logical Analysis of Arguments in their Extended Context
As mentioned in the previous section, an instance of the ontological argument by
E. J. Lowe has previously been employed to showcase the application of our compu-
tational hermeneutics approach to the problem of ﬁnding an adequate formalization
for a natural-language argument (but without considering the surrounding network
of arguments where it is embedded) . In contrast, the example we discuss in
this section7 additionally motivates and illustrates the fruitful combination of our previous work with methods typically used in abstract argumentation frameworks.
By doing so, we can now extend our holistic approach to logical analysis to include
the dialectic role an argument plays in some larger area of discourse represented
as a network of arguments. We see this as a novel contribution, which aligns our
work with other prominent structured approaches to argumentation in artiﬁcial in-
telligence [16, 17, 27].
Below we will show how our approach can extend methods from abstract argumenta-
tion [26,52] by adding a layer for deep semantical analysis to it and, vice versa, how
7The assessment presented here draws on previous work in  and particularly the more recent, invited paper , which presents an updated analysis of Gödel's and Scott's modal variants [34, 49] of the ontological argument and illustrates how our method is able to formalize, assess and explain those in full detail.
our approach to deep semantical argument analysis is enriched by augmenting it with methods developed in argumentation theory. This way, argument analysis becomes supported at both the abstract level and a concrete semantical level (i.e. with fully formalized natural-language content). We believe that such a combined, two-level approach can provide a fruitful technological backbone for our computational hermeneutics program. A particular advantage is the logical pluralism we achieve at both layers. At the abstract level, for instance, the support or attack relations8 can be replaced, with little technical effort, by e.g. their intuitionistic logic or relevance logic counterparts (exploiting the technique of shallow semantical embeddings).9 At a concrete level, we will demonstrate how the employed logics can be varied and that it is, in fact, essential to do so in order to achieve proper assessment results.
More precisely, in the example below we will switch between the higher-order modal
logics K, KB, S4 and S5.
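The effect of switching logics can be made concrete with a brute-force Kripke-frame search in Python (our own finite-model sketch, not the authors' Isabelle machinery): the schema ◇□p → □p, the shape of the modal step from possible to necessary instantiation in such arguments, is valid on all S5 frames (equivalence relations) over three worlds, yet refutable on an unrestricted (logic-K) frame.

```python
from itertools import chain, combinations, product

W = [0, 1, 2]
PAIRS = [(u, v) for u in W for v in W]

def frames():                      # every accessibility relation over W
    for bits in product([0, 1], repeat=len(PAIRS)):
        yield {p for p, b in zip(PAIRS, bits) if b}

def is_equivalence(R):             # the S5 frame condition
    return (all((w, w) in R for w in W)
            and all((v, u) in R for (u, v) in R)
            and all((u, y) in R for (u, v) in R for (x, y) in R if x == v))

def holds(R, ext):                 # is ◇□p → □p true at every world?
    box = lambda w: all(v in ext for v in W if (w, v) in R)
    dia_box = lambda w: any(box(v) for v in W if (w, v) in R)
    return all((not dia_box(w)) or box(w) for w in W)

valuations = [set(s) for s in chain.from_iterable(
    combinations(W, k) for k in range(len(W) + 1))]

s5_valid = all(holds(R, ext) for R in frames() if is_equivalence(R)
               for ext in valuations)
k_refutable = any(not holds(R, ext) for R in frames() for ext in valuations)
print(s5_valid, k_refutable)  # True True
```

A symmetric but non-reflexive two-world frame already refutes the schema, which is why the choice among K, KB, S4 and S5 matters for the assessment.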
5.1 Gödel’s Ontological Argument as a Showcase
Gödel’s and Scott’s variants of the ontological argument are direct descendants of
Leibniz’s, which in turn derives from Descartes’. These arguments have a two-part
structure: (i) prove that God’s existence is possible (see t3 in Figs. 1 and 2), and (ii)
prove that God’s existence is necessary, if possible (t5). The main conclusion (God’s
necessary existence, t6) then follows from (i) and (ii), either by modus ponens (in
non-modal contexts) or by invoking some axioms of modal logic (notably, but not
necessarily as we will see, the so-called modal logic system S5). Gödel’s ontological
argument, in its diﬀerent variants, is amongst the most discussed formal proofs in
modern literature, and so most of its premises and inferential steps have been subject
to criticism in some way or another (see e.g. ,  and ). We can therefore
conceive of this argument as a network of (abstracted) nodes, some of them standing
for some argument supporting the respective premise and others standing for attack-
ing arguments (cf. bipolar argumentation frameworks [20, 21]).
The abstracted nodes of the natural language argument are introduced in Fig. 1
together with their associated formalizations in higher-order modal logic. The cor-
responding abstract argumentation network is displayed in Fig. 2. The network pre-
sented in Fig. 2 only comprises support relations, which is suﬃcient for the purpose
of this paper. Meaningful attack relations could of course be added. For example, it
is well known that Gödel’s and Scott’s versions of the ontological argument support
the modal collapse , which in turn can be interpreted as an attack on free will (see the recent discussion of this aspect in ). Extending our work below to cover
8See lines 4-5 in Fig. 4, where their definitions are provided for classical logic.
9The full flexibility of our framework cannot be illustrated in this paper due to space restrictions. For example, for intuitionistic logic we would simply integrate the respective embedding presented in earlier work  to model intuitionistic support/attack relations.
d1 Being Godlike is equivalent to having all positive properties.
a1 Exactly one of a property or its negation is positive.
a2 Any property entailed by a positive property is positive.a
t1 Every positive property is possibly instantiated (if a property X is positive, then it is possible
that some being has property X).
t2b Being Godlike is a positive property.
t3 Being Godlike is possibly instantiated.
a4 Positive (negative) properties are necessarily positive (negative).
d2 A property Y is the essence of an individual x iff x has Y and all of x's properties are entailed by Y.c
t4 Being Godlike is an essential property of any Godlike individual (E standing for one of the two notions above).
d3 Necessary existence of an individual is the necessary instantiation of all its essences.
a5 Necessary existence is a positive property.
t5 Being Godlike, if (possibly) instantiated, is necessarily instantiated.
t6 Being Godlike is actually instantiated.
aThe quantifiers ∃E x and ∀E x represent a kind of restricted (aka. actualist) quantification over a set of ‘existent’ objects. Their definition can be seen in lines 31-35 in Fig. 3.
bGödel originally considered an additional assumption a3, which was used solely for deriving theorem t2; Scott suggested to take t2 directly as an assumption instead, which is what we also do.
cThe underlined part in definition D2 has been added by Scott . Gödel  originally omitted this part; more on this in Section 2.3 below.
Fig. 1 The definitions (d1, d2, d3), assumptions (a1, a2, t2, a4, a5) and theorems/argumentation steps (t1, t3, t4, t5, t6) of Gödel's, respectively Scott's, modal version of the ontological argument. Both the natural language statements and the corresponding modal logic formalizations (in Isabelle/HOL) are shown.
Fig. 2 Abstract argumentation network for Gödel's ontological argument. The displayed arrows indicate support relations. Arrow annotations (e.g., d1, d2 in the support arrow from a4 to t4) indicate which definitions need to be unfolded for the respective support relations to apply.
the more elaborate analysis of the ontological argument as presented in  will be
addressed in future work.
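At the abstract level, the support structure described in Fig. 2 can be evaluated by simple forward chaining. The Python fragment below is our own simplified reconstruction: the exact sets of joint supporters are assumptions made for illustration, loosely following the argument steps named in the text.

```python
# Hypothetical reconstruction of (part of) the support network of Fig. 2.
# Each node maps to the sets of supporters that jointly establish it.
supports = {
    "t1": [{"a1", "a2"}],
    "t3": [{"t1", "t2"}],
    "t4": [{"a4"}],
    "t5": [{"t4", "a5"}],
    "t6": [{"t3", "t5"}],
}

def derivable(assumed):
    """Forward-chain over the support relation until a fixed point."""
    known = set(assumed)
    changed = True
    while changed:
        changed = False
        for node, ways in supports.items():
            if node not in known and any(way <= known for way in ways):
                known.add(node)
                changed = True
    return known

print("t6" in derivable({"a1", "a2", "t2", "a4", "a5"}))  # True
print("t6" in derivable({"a1", "a2", "a4", "a5"}))        # False: t2 missing
```

Dropping a single assumed node (here t2) immediately shows which conclusions lose their support, which is the kind of feedback the two-level approach is meant to deliver.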
5.2 Embedding a Higher-order Modal Logic in Isabelle/HOL
As previously mentioned, higher-order modal logic has been employed as a logic
for formalization of the natural-language content of the argument nodes. To turn
Isabelle/HOL into a ﬂexible modal logic reasoner we have adopted the shallow se-
mantical embedding approach [6, 10]. The respective embedding of higher-order
modal logic in Isabelle/HOL is the content of theory ﬁle «IHOML.thy», which is
displayed in Fig. 3. The base logic of Isabelle/HOL is classical higher-order logic
(HOL aka. Church’s type theory ). HOL is a logic of functions formulated on
top of the simply typed λ-calculus, which also provides a foundation for functional
programming. The semantics of HOL is well understood . Relevant for our purposes is that HOL supports the encoding of sets via their characteristic functions represented as λ-terms. In this sense, HOL comes with an in-built notion of (typed) sets that is exploited in our work for the explicit encoding of the truth-sets that are associated with the formulas of higher-order modal logic. Since Isabelle/HOL-
speciﬁc extensions of HOL (except for preﬁx polymorphism) are not exploited in
our work, the technical framework we depict here can easily be transferred to other
HOL theorem proving environments.
Our semantical embedding in Isabelle/HOL encodes, in lines 6-24 in Fig. 3, the standard translation of propositional modal logic to first-order logic in the form of a few (non-recursive) equations. The formula □ϕ, for example, is modeled as an abbreviation (syntactic sugar) for the truth-set λwi.∀vi. w r v −→ ϕ v, where r denotes the accessibility relation associated with the □ modal operator. All presented equations
Fig. 3 Shallow semantical embedding of higher-order modal logic in Isabelle/HOL
exploit the idea that truth-sets in Kripke-style semantics can be directly encoded as predicates (i.e. sets) in HOL. Possible worlds are thus explicitly represented in our framework as terms of type i, and modal formulas ϕ are identified with their corresponding truth-sets ϕi→o of predicate type i→o. Note how validity and invalidity are encoded in lines 21 and 24. A modal logic formula ϕ is valid, denoted ⌊ϕ⌋, if its truth-set is the universal set, i.e., if ϕi→o is true in all worlds wi. Similarly, ϕ is invalid, denoted ⌊ϕ⌋inv, if ϕi→o is false in all worlds wi. In lines 26-35, further equations are added to obtain actualist quantification (here only for individuals) and (polymorphic) possibilist quantification for objects of arbitrary type (order). This is where the shallow semantical embedding approach significantly augments the standard translation for propositional modal logics. For example, where ∀xα.φx is
shorthand (binder-notation in HOL) for Π(λxα.φx) (the denotation of Π tests whether its argument denotes the universal set of type α), the modal ∀xα.P x is now represented as Π′(λxα.λwi.P x w), where Π′ stands for the lambda term λΦ.λwi.∀xα.Φ x w and the ∀ gets resolved as described above. In lines 37-42 some useful conditions on accessibility relations are stated, which are used to provide semantical definitions for the modal logics KB, S4 and S5. If none of the latter abbreviations is postulated, the content of Fig. 3 introduces the higher-order modal logic K. Further details of the presented embedding, including proofs of faithfulness, have been presented elsewhere (see e.g. , further references are given in ).
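The contrast between possibilist and actualist quantification (lines 26-35 of Fig. 3) can likewise be mimicked in Python. This is our illustration only; names such as existsAt are assumptions standing in for the embedding's existence predicate.

```python
W = [0, 1]                           # worlds
D = ["alice", "bob"]                 # possibilist domain of individuals
existsAt = {("alice", 0), ("alice", 1), ("bob", 1)}  # who 'exists' where

def forall(phi):                     # possibilist ∀: ranges over all of D
    return lambda w: all(phi(x)(w) for x in D)

def forallE(phi):                    # actualist ∀E: only existents at w
    return lambda w: all(phi(x)(w) for x in D if (x, w) in existsAt)

def valid(phi):                      # ⌊φ⌋: true at every world
    return all(phi(w) for w in W)

# Sample world-indexed predicate: everyone is 'registered' except bob at 0.
registered = lambda x: lambda w: not (x == "bob" and w == 0)

print(valid(forall(registered)))   # False: bob falsifies it at world 0
print(valid(forallE(registered)))  # True: bob does not 'exist' at world 0
```

The actualist reading ignores individuals outside a world's existence set, which is exactly the distinction the a-footnote to Fig. 1 appeals to.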
5.3 The Ontological Argument as an Abstract Argument Network
In lines 4-5 of the Isabelle/HOL file «ArgumentDefinitions.thy», displayed in Fig. 4, we import two central notions from argumentation theory: the binary relations “Supports” and “Attacks” [17, 21]. The former states that a valid modal formula A and another valid modal formula B together imply the validity of the modal formula ψ. The concretely employed implication and conjunction relations are those from classical logic (the meta-logic HOL). However, as mentioned before, our framework is rich and expressive enough to replace classical consequence here by various other notions of logical consequence. Alternatively, we could parameterize the three contained validity statements for A, B and ψ for different modal logics. In fact, the
different notions of logic to be employed in these definitions could be modeled as proper parameters (arguments) of the support and attack relations. This way we would obtain a very expressive and powerful reasoning framework. In order to keep things simple, we will not pursue this further here, but leave it for future work.
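The "Supports" notion described above, the joint validity of A and B implying the validity of ψ, with the underlying validity notion passed in as a parameter, might be sketched as follows (a Python toy of ours, with classical truth-table validity standing in for the modal validity of the Isabelle encoding):

```python
from itertools import product

ATOMS = ["p", "q"]

def classical_valid(f):              # truth-table validity over ATOMS
    return all(f(dict(zip(ATOMS, vs)))
               for vs in product([False, True], repeat=len(ATOMS)))

def supports(validity, A, B, psi):
    """(valid A ∧ valid B) ⟶ valid ψ, with the validity notion as a
    parameter; this parameter is what makes the relation logic-pluralistic."""
    return (not (validity(A) and validity(B))) or validity(psi)

taut_p = lambda v: v["p"] or not v["p"]      # p ∨ ¬p (valid)
taut_q = lambda v: v["q"] or not v["q"]      # q ∨ ¬q (valid)
atom_p = lambda v: v["p"]                    # p (not valid)

print(supports(classical_valid, taut_p, taut_q, atom_p))  # False
print(supports(classical_valid, taut_p, atom_p, taut_q))  # True (vacuously)
```

Swapping classical_valid for, say, a three-valued or modal validity test would change which support claims hold, without touching the definition of supports itself.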
Gödel's argument is then specified in lines 13-14 of Fig. 4 as a network of abstract nodes (recall Fig. 2). The modal validity of the assumptions “a1”, “a2”, “t2”, “a4” and “a5” is assumed, and the various support relations are stated as depicted graphically in Fig. 2. The nodes themselves have been introduced as uninterpreted constant symbols in line 8 of Fig. 4 (and in line 10, we introduce further uninterpreted constant symbols “kb”, “s4” and “s5” for characterizing the assumed modal logic conditions). The “inner semantics” of these abstract nodes, and also the definitions of the concepts «Godlike (G)», «Essence (E)» and «Necessary Existence (NE)», are
subsequently speciﬁed in lines 17-33. Since theory ﬁle «IHOML.thy» is imported,
we have access to the higher-order modal logic notions introduced there. At this
point we still leave it open whether the logic K, KB, S4 or S5 is considered, since we
will experiment with the diﬀerent settings later on (neither Gödel nor Scott explicitly
stated in their works which modal logic they actually assumed). We also do not ﬁx
the notion of essence here, but introduce the two alternative deﬁnitions proposed
by Gödel and Scott (see lines 26-27). In line 28, a respective uninterpreted constant
symbol for essence, E, is introduced and then used as a dummy in the formalization
of the subsequent argument nodes. In the experiments ahead we can now switch between Gödel’s and Scott’s notions of essence, by equating this dummy constant symbol with the different concrete definitions considered.

Fig. 4 Encoding of Gödel’s ontological argument as an abstract argument network, with a specification of the inner semantics of the argument nodes.
In lines 36-37, the abstract argument nodes are identified with their formalizations as just introduced. Thus, when postulating the Boolean flag “InstantiateArgumentNodes” defined here, the argument from lines 13-14 is no longer just abstract, but in a sense instantiated through activation of the inner semantics of the argument nodes. In line 39, the abstract logic conditions are analogously instantiated with their concrete realizations via the Boolean flag “InstantiateLogics”. The flag “Instantiate” in line 41 then simply combines the two.
Fig. 5 Analysis of Gödel’s variant of the ontological argument.
5.4 Analysis of Gödel’s Variant of the Ontological Argument
In Fig. 5 we analyze Gödel’s variant of the ontological argument using the notions and ideas introduced in the imported theory files «IHOML.thy» and «ArgumentDefinitions.thy» from Figs. 3 and 4, respectively. To do so, the “essence” dummy constant symbol is mapped to the definition proposed by Gödel (see line 4). Then, in lines 7-9, we ask the model finder Nitpick [18], integrated with Isabelle/HOL, to compute a model for the abstract Gödel argument. For the call in line 7, a model consisting of one world (accessible from itself – not shown in the window) with one single individual is presented in the lower window of Fig. 5. The duplicated calls in lines 8 and 9 are of course redundant, since the Boolean logic flags “kb”, “s4” and “s5” (and also the argument nodes) are still uninterpreted. Hence, at the abstract level, the Gödel argument has a model (indeed, we present the minimal model here) and is thus satisfiable.
For illustration purposes we show in line 12 that the abstract argument can easily be rendered unsatisfiable, for example, by adding an attack relation as displayed. Automated theorem proving technology integrated with Isabelle/HOL can quickly reveal such inconsistencies at the abstract argument level. A much more interesting and relevant aspect is illustrated in lines 15-24 of Fig. 5: the satisfiability of the abstract Gödel argument provides no guarantee at all for its satisfiability at the instantiated level, i.e. when the semantics of the argument nodes is added. In line 15, the model finder Nitpick indeed fails to compute a satisfying model for the instantiated argument (it terminates with a timeout).
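The contrast between abstract-level and instantiated-level satisfiability can be shown with a toy example. The following Python sketch is entirely illustrative (the node names and their “inner definitions” are made up): abstract nodes are uninterpreted Booleans, so assuming them all valid is trivially satisfiable, while instantiating them with concrete definitions can render the same assumptions jointly unsatisfiable.

```python
from itertools import product

# Abstract argument nodes as uninterpreted Booleans; instantiation adds
# defining constraints that may introduce a contradiction.

NODES = ['a1', 'a2', 't1']

def satisfiable(constraints):
    """Brute-force check over all Boolean interpretations of the nodes."""
    return any(all(c(m) for c in constraints)
               for bits in product([False, True], repeat=len(NODES))
               for m in [dict(zip(NODES, bits))])

# Abstract level: just assume every node holds.
abstract = [lambda m: m['a1'], lambda m: m['a2'], lambda m: m['t1']]
print(satisfiable(abstract))  # True

# Instantiated level: a (made-up) "inner semantics" forces t1 to
# contradict a1 and a2 taken together.
instantiated = abstract + [lambda m: m['t1'] == (not (m['a1'] and m['a2']))]
print(satisfiable(instantiated))  # False
```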
However, if we now study the semantically instantiated (formalized) argument, which is done by activating the link between the abstract nodes and their formalizations, we find out that the Gödel argument is actually inconsistent (for all logic conditions). In lines 21-24, the inconsistency of the instantiated abstract argument is then proven automatically by respective automation tools in Isabelle/HOL (here the prover Metis is employed). The key to the inconsistency is the “Empty Essence Lemma (EEL)”, which is proven in line 18. The inconsistency of Gödel’s ontological argument was unknown to philosophers until recently, when it was detected by the automated higher-order theorem prover LEO-II [11]; for more on this see [14, 15].
5.5 Analysis of Scott’s Variant of the Ontological Argument
We analyze Scott’s version of the argument in Fig. 6. In line 4 of this file the notion of essence according to Scott is activated. In lines 7-9, the model finder Nitpick again confirms the consistency of the argument at the abstract level, which was expected, since the concretely employed notion of essence has no influence at this level. It does have an influence, however, at the instantiated level, as can be seen in lines 13-15 in Fig. 6. In contrast to the instantiated Gödel argument from Fig. 5, where the model finder Nitpick failed to confirm satisfiability, it now succeeds; and in fact it does so for all logic conditions. The reported model (for logic S5, which in fact works for all logic conditions) is displayed in the lower window of Fig. 6. This model is minimal: it consists of one world (accessible from itself) and one object, and further details on the interpretation of essence and the notion of positive properties are displayed.
However, as we illustrate next, the satisfiability of Scott’s argument for logics KB, S4 and S5 does of course not imply the validity of the argument under these logic conditions. In lines 19-29 in Fig. 7, we first prove the validity of the Scott argument for logic KB. This is done by automatically proving that all (instantiated) support relations are validated. Note how concisely the exactly required dependencies are displayed in the justifications of the proof steps. In lines 32-42 of Fig. 7, we analogously assess
the validity of the Scott argument for logic S4.

Fig. 6 Analysis of Scott’s variant of the ontological argument – Part I

However, the attempt to copy and paste the previous proof fails: in line 38, we obtain a countermodel, and this
countermodel comes with a non-symmetric accessibility relation between worlds.
Note that in line 25, in the proof from before, the logic condition KB (assms(1)) was indeed employed in the justification. We thus see that the symmetry condition of logic KB indeed plays a role for the validity of Scott’s argument. In a logic such as S4, where symmetric accessibility relations between worlds are not enforced, the argument fails. This is confirmed again in lines 44-45, where the countermodel is computed directly for the stated validity conjecture (excluding the possibility that there might be an alternative proof in S4 to the one attempted in lines 32-42). For logic S5 the Scott argument is valid again, which is no surprise given that logic S5 entails logic KB. This is confirmed in lines 48-58 in Fig. 7.
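The role of the symmetry condition can be illustrated independently of the ontological argument. The Brouwer axiom B, p → □◇p, is valid on every symmetric frame but can fail once symmetry is dropped; the following Python sketch (our own illustration, not the paper's code) checks this by brute force on two small frames.

```python
# Brouwer axiom B: p -> box dia p, checked on a symmetric frame and on a
# reflexive, transitive but non-symmetric (S4-style) frame.

def ev(f, w, R, val):
    """Evaluate a nested-tuple modal formula at world w."""
    op = f[0]
    if op == 'atom':
        return w in val[f[1]]
    if op == 'imp':
        return (not ev(f[1], w, R, val)) or ev(f[2], w, R, val)
    if op == 'box':
        return all(ev(f[1], v, R, val) for (u, v) in R if u == w)
    if op == 'dia':
        return any(ev(f[1], v, R, val) for (u, v) in R if u == w)
    raise ValueError(op)

def valid(f, worlds, R, val):
    return all(ev(f, w, R, val) for w in worlds)

axiom_b = ('imp', ('atom', 'p'), ('box', ('dia', ('atom', 'p'))))

worlds = {0, 1}
val = {'p': {0}}
symmetric = {(0, 1), (1, 0)}            # symmetric accessibility (as in KB)
s4_frame = {(0, 0), (1, 1), (0, 1)}     # reflexive, transitive, not symmetric

print(valid(axiom_b, worlds, symmetric, val))  # True
print(valid(axiom_b, worlds, s4_frame, val))   # False: world 0 is a countermodel
```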
Fig. 7 Analysis of Scott’s variant of the ontological argument – Part II
5.6 Section Summary
The analysis of non-trivial natural language arguments at the abstract argumentation
level is useful, but of limited explanatory power. Achieving such explanatory power
requires the extension of techniques from abstract argumentation frameworks with
means for deep semantical analysis as provided in our computational hermeneutics
approach. This has been illustrated in this section with the help of Gödel’s and
Scott’s versions of the ontological argument for the existence of God. Highly relevant aspects, such as the inconsistency of Gödel’s argument and the invalidity of Scott’s argument for S4 (alongside its validity for KB and S5), could only be shown by integrating the abstract argumentation layer with our machinery.
6 Ongoing and Future Work
In previous work [32] we have illustrated how the computational hermeneutics approach can be carried out in a semi-automatic fashion for the logical analysis of an isolated argument: We work iteratively on an argument by (i) tentatively choosing a logic for formalization; (ii) fixing truth-values and inferential relations among its sentences; and (iii) working back and forth on the formalization of its axioms and theorems, making gradual adjustments while getting real-time feedback about the suitability of our changes (e.g. validating the argument, avoiding inconsistency or question-begging, etc.). These steps are repeated until arriving at a state of reflective equilibrium: a state where our arguments and claims have the highest degree of coherence and acceptability according to syntactic and, particularly, inferential criteria of adequacy (see Peregrin and Svoboda’s criteria presented above [43, 44]).
In this paper we have sketched another exemplary application of computational hermeneutics to argumentation theory. We exploited the fact that our approach is well suited for the utilization of different kinds of classical and non-classical logics through the technique of shallow semantical embeddings [6, 10], which allows us to take advantage of the expressive power of classical higher-order logic (as a metalanguage) in order to embed the syntax and semantics of another logic (the object language). This way it has become possible to parameterize the relations of argument support and attack by adding the logic of formalization as a variable. This parameter can then be varied by adding or removing premises (at the meta-level), which correspond
to the embedding of the logic in question. In future work, we aim at integrating our
approach with others in argumentation theory, which also take into account the logi-
cal structure of arguments (see e.g. [16, 17,27] and also  for a proof-theoretically
Computational hermeneutics features the sort of (holistic) mutual adjustment between theory and observation which is characteristic of scientific inquiry; we are currently exploring ways to fully automate this process. The idea is to tackle the problem of formalization as a combinatorial optimization problem, using (among others) inferential criteria of adequacy to define the fitness/utility function of an appropriate optimization algorithm. Davidson’s principle of charity would provide our main selection criteria: an adequate formalization must (i) validate the argument (among other qualitative requirements) and (ii) do justice to its intended dialectic role in some discourse (i.e. it attacks/supports other arguments as intended). It is
worth noting that, for the kind of non-trivial arguments we are interested in (e.g. from ethics, metaphysics and politics), such selection criteria would aggressively prune our search tree. Furthermore, the evaluation of our fitness function is, with today’s technologies, not only completely automatable, but also seems to be highly parallelizable. The most challenging task remains how to systematically come up with the candidate formalization hypotheses (an instance of abductive reasoning). Here we see great potential in the combination of automated theorem proving with other areas of artificial intelligence, such as machine learning and, in particular, argumentation frameworks, exploiting a layered, structured approach as illustrated in this paper.
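The envisaged optimization loop can be sketched in miniature. The following Python illustration is entirely hypothetical (propositional candidates, a toy fitness function): candidate formalizations of one informal argument are scored by validity and consistency, and the most charitable reading is selected.

```python
from itertools import product

# Toy "principle of charity" selection among candidate formalizations:
# fitness rewards a valid argument and jointly satisfiable premises.

ATOMS = ['p', 'q']

def ev(f, a):
    """Evaluate a nested-tuple propositional formula under assignment a."""
    op = f[0]
    if op == 'atom':
        return a[f[1]]
    if op == 'not':
        return not ev(f[1], a)
    if op == 'and':
        return ev(f[1], a) and ev(f[2], a)
    if op == 'imp':
        return (not ev(f[1], a)) or ev(f[2], a)
    raise ValueError(op)

def assignments():
    for bits in product([False, True], repeat=len(ATOMS)):
        yield dict(zip(ATOMS, bits))

def argument_valid(premises, conclusion):
    return all(ev(conclusion, a) for a in assignments()
               if all(ev(p, a) for p in premises))

def consistent(premises):
    return any(all(ev(p, a) for p in premises) for a in assignments())

def fitness(premises, conclusion):
    # charity: strongly prefer valid readings; penalize inconsistency
    return (2 if argument_valid(premises, conclusion) else 0) \
         + (1 if consistent(premises) else 0)

# Two hypothetical formalization candidates for the same informal argument:
cand1 = ([('atom', 'p'), ('imp', ('atom', 'p'), ('atom', 'q'))], ('atom', 'q'))
cand2 = ([('atom', 'p')], ('atom', 'q'))  # a reading that drops a premise

best = max([cand1, cand2], key=lambda c: fitness(*c))
print(best is cand1)  # the valid, consistent reading wins
```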
Acknowledgements We thank the anonymous reviewers for their valuable comments which helped
to improve this paper.
References

1. P. Andrews. Church’s type theory. In E. N. Zalta, editor, The Stanford Encyclopedia of Philosophy. Metaphysics Research Lab, Stanford University, summer 2018 edition, 2018.
2. O. Arieli and C. Straßer. Sequent-based logical argumentation. Argument & Computation,
3. C. Baumberger and G. Brun. Dimensions of objectual understanding. Explaining understanding. New perspectives from epistemology and philosophy of science, pages 165–189,
4. M. Baumgartner and T. Lampert. Adequate formalization. Synthese, 164(1):93–115, 2008.
5. M. Bentert, C. Benzmüller, D. Streit, and B. Woltzenlogel Paleo. Analysis of an ontological
proof proposed by Leibniz. In C. Tandy, editor, Death and Anti-Death, Volume 14: Four
Decades after Michael Polanyi, Three Centuries after G.W. Leibniz. Ria University Press,
6. C. Benzmüller. Universal (meta-)logical reasoning: Recent successes. Science of Computer
Programming, 172:48–62, March 2019. DOI (preprint): http://dx.doi.org/10.13140/RG.2.2.
7. C. Benzmüller, C. Brown, and M. Kohlhase. Higher-order semantics and extensionality.
Journal of Symbolic Logic, 69(4):1027–1088, 2004.
8. C. Benzmüller and D. Fuenmayor. Can computers help to sharpen our understanding of
ontological arguments? In S. Gosh, R. Uppalari, K. V. Rao, V. Agarwal, and S. Sharma,
editors, Mathematics and Reality, Proceedings of the 11th All India Students’ Conference on
Science & Spiritual Quest, 6-7 October, 2018, IIT Bhubaneswar, Bhubaneswar, India. The
Bhaktivedanta Institute, Kolkata, www.binstitute.org, 2018.
9. C. Benzmüller and L. Paulson. Multimodal and intuitionistic logics in simple type theory. The
Logic Journal of the IGPL, 18(6):881–892, 2010.
10. C. Benzmüller and L. Paulson. Quantified multimodal logics in simple type theory. Logica Universalis (Special Issue on Multimodal Logics), 7(1):7–20, 2013.
11. C. Benzmüller, N. Sultana, L. C. Paulson, and F. Theiß. The higher-order prover LEO-II.
Journal of Automated Reasoning, 55(4):389–404, 2015.
12. C. Benzmüller, L. Weber, and B. Woltzenlogel-Paleo. Computer-assisted analysis of the
Anderson-Hájek controversy. Logica Universalis, 11(1):139–151, 2017.
13. C. Benzmüller and B. Woltzenlogel Paleo. Automating Gödel’s ontological proof of God’s
existence with higher-order automated theorem provers. In T. Schaub, G. Friedrich, and
B. O’Sullivan, editors, ECAI 2014, volume 263 of Frontiers in Artificial Intelligence and Applications, pages 93–98. IOS Press, 2014.
14. C. Benzmüller and B. Woltzenlogel Paleo. The inconsistency in Gödel’s ontological argument:
A success story for AI in metaphysics. In IJCAI 2016, 2016.
20 D. Fuenmayor and C. Benzmüller
15. C. Benzmüller and B. Woltzenlogel Paleo. An object-logic explanation for the inconsistency
in Gödel’s ontological theory (extended abstract). In M. Helmert and F. Wotawa, editors, KI
2016: Advances in Artificial Intelligence, Proceedings, volume 9725 of LNCS, pages 43–50,
Berlin, Germany, 2016.
16. P. Besnard and A. Hunter. A logic-based theory of deductive arguments. Artificial Intelligence,
17. P. Besnard and A. Hunter. Argumentation based on classical logic. In Argumentation in
Artificial Intelligence, pages 133–152. Springer, 2009.
18. J. Blanchette and T. Nipkow. Nitpick: A counterexample generator for higher-order logic based on a relational model finder. In Proc. of ITP 2010, volume 6172 of LNCS, pages 131–146.
19. G. Brun. Die richtige Formel: Philosophische Probleme der logischen Formalisierung, vol-
ume 2. Walter de Gruyter, 2003.
20. C. Cayrol and M.-C. Lagasquie-Schiex. On the acceptability of arguments in bipolar argumentation frameworks. In European Conference on Symbolic and Quantitative Approaches to
Reasoning and Uncertainty, pages 378–389. Springer, 2005.
21. C. Cayrol and M.-C. Lagasquie-Schiex. Bipolar abstract argumentation systems. In I. Rahwan
and G. R. Simari, editors, Argumentation in artificial intelligence, pages 65–84. Springer, 2009.
22. D. Davidson. Radical interpretation interpreted. Philosophical Perspectives, 8:121–128,
23. D. Davidson. Essays on actions and events: Philosophical essays, volume 1. Oxford University
Press on Demand, 2001.
24. D. Davidson. Inquiries into Truth and Interpretation: Philosophical Essays, volume 2. Oxford
University Press, 2001.
25. D. Davidson. Radical interpretation. In Inquiries into Truth and Interpretation. Oxford
University Press, September 2001.
26. P. M. Dung. On the acceptability of arguments and its fundamental role in nonmonotonic
reasoning, logic programming and n-person games. Artificial intelligence, 77(2):321–357,
27. P. M. Dung, R. A. Kowalski, and F. Toni. Assumption-based argumentation. In Argumentation
in Artificial Intelligence, pages 199–218. Springer, 2009.
28. C. Elgin. Considered judgment. Princeton University Press, 1999.
29. B. Fitelson and E. N. Zalta. Steps toward a computational metaphysics. Journal of Philosophical Logic, 36(2):227–247, 2007.
30. D. Fuenmayor and C. Benzmüller. Automating emendations of the ontological argument in
intensional higher-order modal logic. In G. Kern-Isberner, J. Fürnkranz, and M. Thimm,
editors, KI 2017: Advances in Artiﬁcial Intelligence, volume 10505, pages 114–127. Springer,
31. D. Fuenmayor and C. Benzmüller. Computer-assisted reconstruction and assessment of E. J. Lowe’s modal ontological argument. Archive of Formal Proofs, Sept. 2017. http://isa-afp.org/entries/Lowe_Ontological_Argument.html, Formal proof development.
32. D. Fuenmayor and C. Benzmüller. A case study on computational hermeneutics: E. J. Lowe’s
modal ontological argument. Journal of Applied Logics - IfCoLoG Journal of Logics and their
Applications (special issue on Formal Approaches to the Ontological Argument), 2018.
33. T. Gleißner, A. Steen, and C. Benzmüller. Theorem provers for every normal modal logic. In
T. Eiter and D. Sands, editors, LPAR-21. 21st International Conference on Logic for Programming, Artificial Intelligence and Reasoning, volume 46 of EPiC Series in Computing, pages 14–30, Maun, Botswana, 2017. EasyChair.
34. K. Gödel. Appx.A: Notes in Kurt Gödel’s Hand, pages 144–145. In [50], 2004.
35. N. Goodman. Fact, fiction, and forecast. Harvard University Press, 1983.
36. J. Groenendijk and M. Stokhof. Dynamic predicate logic. Linguistics and philosophy,
37. H. Kamp, J. Van Genabith, and U. Reyle. Discourse representation theory. In Handbook of
philosophical logic, pages 125–394. Springer, 2011.
Computational Hermeneutics 21
38. E. J. Lowe. A modal version of the ontological argument. In J. P. Moreland, K. A. Sweis, and
C. V. Meister, editors, Debating Christian Theism, chapter 4, pages 61–71. Oxford University
39. R. Montague. Formal Philosophy: Selected Papers of Richard Montague. Ed. and with an
Introd. by Richmond H. Thomason. Yale University Press, 1974.
40. T. Nipkow, L. C. Paulson, and M. Wenzel. Isabelle/HOL — A Proof Assistant for Higher-Order
Logic. Number 2283 in LNCS. Springer, 2002.
41. G. Oppy. Gödelian ontological arguments. Analysis, 56(4):226–230, 1996.
42. G. Oppy. Ontological arguments and belief in God. Cambridge University Press, 2007.
43. J. Peregrin and V. Svoboda. Criteria for logical formalization. Synthese, 190(14):2897–2924,
44. J. Peregrin and V. Svoboda. Reﬂective Equilibrium and the Principles of Logical Analysis:
Understanding the Laws of Logic. Routledge Studies in Contemporary Philosophy. Taylor and
45. W. v. O. Quine. Two dogmas of empiricism. In Can Theories be Refuted?, pages 41–64.
46. W. v. O. Quine. Word and object. MIT press, 2013.
47. J. Rawls. A theory of justice. Harvard university press, 2009.
48. J. Rushby. The ontological argument in PVS. In Proc. of CAV Workshop “Fun With Formal
Methods”, St. Petersburg, Russia, 2013.
49. D. Scott. Appx.B: Notes in Dana Scott’s Hand, pages 145–146. In [50], 2004.
50. J. Sobel. Logic and Theism: Arguments for and Against Beliefs in God. Cambridge U. Press,
51. A. Tarski. The concept of truth in formalized languages. Logic, semantics, metamathematics,
52. F. H. van Eemeren and R. Grootendorst. A Systematic Theory of Argumentation. Cambridge
University Press, 2004.