Conference Paper · PDF available

A Computational Logic Approach to the Suppression Task

... By contrast, pragmatic inference is governed by context-sensitive mechanisms rather than context-free, general-purpose mechanisms (Cheng and Holyoak 1985; Cosmides and Tooby 1992). As argued by Dietz et al. (2012), computational approaches to explaining human reasoning should be cognitively adequate, that is, they should appropriately represent human knowledge (conceptual adequacy) and their computations should behave similarly to human reasoning (inferential adequacy). If logic programming is used to represent knowledge in daily life, it is therefore useful to have a mechanism that automatically transforms a knowledge base so as to simulate human reasoning according to the context in which conditional sentences are used. ...
... This argument coincides with our view addressed in Section 5.1. Dietz et al. (2012) point out a technical flaw in the formulation by Stenning and Lambalgen (2008). In the above example, open and library are unknown (U) under the 3-valued logic, so the rule "library ← open ∧ ¬ab₃" becomes "U ← U." Under the Fitting semantics, however, the truth value of "U ← U" is U, so it fails to capture the truth of the rule "library ← open ∧ ¬ab₃." ...
... To remedy the problem, they employ Łukasiewicz's 3-valued logic, which maps "U ← U" to ⊤. Dietz et al. (2012) also characterize the suppression effects in AC or DC using an abductive logic program ⟨Π, Γ⟩ with abducibles Γ = { p ← ⊥, p ← ⊤ }. Consider ⟨Π₁, Γ₁⟩ where Π₁ : library ← essay ∧ ¬ab₁, ab₁ ← ⊥ and Γ₁ : essay ← ⊥, essay ← ⊤; the weakly completed program of Π₁ becomes library ↔ essay ∧ ¬ab₁, ab₁ ↔ ⊥. ...
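To make this difference concrete, here is a minimal Python sketch (our own illustration, not code from the cited paper) that evaluates an implication whose body and head are both unknown under the Fitting (strong Kleene) and Łukasiewicz three-valued logics, encoding false, unknown and true as 0, 0.5 and 1:

    T, U, F = 1.0, 0.5, 0.0  # three truth values: true, unknown, false

    def fitting_implies(body, head):
        # Fitting/strong Kleene reads "head <- body" as material implication:
        # max(1 - body, head)
        return max(1.0 - body, head)

    def lukasiewicz_implies(body, head):
        # Lukasiewicz implication: min(1, 1 - body + head)
        return min(1.0, 1.0 - body + head)

    # The rule "library <- open AND NOT ab3" with open and library unknown
    # and ab3 false: the body evaluates to unknown.
    body = min(U, 1.0 - F)   # open AND NOT ab3  ->  min(0.5, 1.0) = 0.5
    head = U                 # library

    print(fitting_implies(body, head))      # 0.5: the rule itself is unknown
    print(lukasiewicz_implies(body, head))  # 1.0: the rule is true

Under Łukasiewicz's logic the rule "U ← U" is thus mapped to true, which is exactly the repair described above.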
Article
Full-text available
Given a conditional sentence "φ ⇒ ψ" (if φ then ψ) and respective facts, four different types of inferences are observed in human reasoning: Affirming the antecedent (AA) (or modus ponens) reasons ψ from φ; affirming the consequent (AC) reasons φ from ψ; denying the antecedent (DA) reasons ¬ψ from ¬φ; and denying the consequent (DC) (or modus tollens) reasons ¬φ from ¬ψ. Among them, AA and DC are logically valid, while AC and DA are logically invalid and often called logical fallacies. Nevertheless, humans often perform AC or DA as pragmatic inference in daily life. In this paper, we realize AC, DA and DC inferences in answer set programming. Eight different types of completion are introduced, and their semantics are given by answer sets. We investigate formal properties and characterize human reasoning tasks in cognitive psychology. Those completions are also applied to commonsense reasoning in AI.
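The claimed (in)validity of the four schemata is easy to verify mechanically. The following short Python sketch (our own, purely illustrative) enumerates all classical truth assignments for φ and ψ and checks each schema against the conditional φ ⇒ ψ:

    from itertools import product

    # A schema "given G, conclude C" is classically valid iff every
    # assignment satisfying both the conditional and G also satisfies C.
    def valid(given, conclusion):
        for phi, psi in product([True, False], repeat=2):
            conditional = (not phi) or psi  # phi => psi
            if conditional and given(phi, psi) and not conclusion(phi, psi):
                return False
        return True

    schemata = {
        "AA (modus ponens)":  (lambda p, q: p,     lambda p, q: q),
        "AC":                 (lambda p, q: q,     lambda p, q: p),
        "DA":                 (lambda p, q: not p, lambda p, q: not q),
        "DC (modus tollens)": (lambda p, q: not q, lambda p, q: not p),
    }

    for name, (given, conclusion) in schemata.items():
        print(name, "->", "valid" if valid(given, conclusion) else "invalid")
    # AA and DC come out valid; AC and DA are the classical fallacies.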
... Reconsidering the previous program 𝒫₀, the reader may note that while the undefined atom is mapped to false under the completion of 𝒫₀, it is mapped to unknown under its weak completion. Weak completion is necessary for the WCS framework to adequately model the suppression task (and other reasoning tasks), as demonstrated in [5]. ...
... Such a characterization is along the lines of the so-called enabling relations discussed in [27]. Using this we have also modelled experiments involving additional arguments in the suppression task [4,5]. The reader may note that in (3), the assumptions ab₁ ← ⊥ and ab₂ ← ⊥ are overridden by corresponding rules of the form ab₁ ← ¬(…) and ab₂ ← ¬(…), respectively. ...
... Let us now consider statements (1) to (6), and in particular statements (5) and (6). While practising in London during 1848 may be deemed necessary for the level of experience Snow had with Cholera patients, his apprenticeship during 1831 may be deemed non-necessary by some individuals. ...
Conference Paper
Full-text available
In their book Noise: A Flaw in Human Judgment, the authors Daniel Kahneman, Olivier Sibony and Cass R. Sunstein highlight the importance of minimizing bias, i.e. systematic deviation, and noise, i.e. variability, in judgments in order to reduce error. Bias has long been the subject of many discussions, but noise is yet to gain the attention it deserves. In this paper, we discuss noise variables in decision-making, particularly in unique, non-recurrent or singular decisions. For this purpose we introduce and utilize the framework of the Weak Completion Semantics to discuss how noise variables may be identified using counterfactual reasoning.
... The Weak Completion Semantics (WCS) is a three-valued, non-monotonic cognitive theory, which can adequately model not only the suppression task of [1] (as shown in [6]), human syllogistic reasoning (as shown in [18]), and DC inferences (as shown in [5]), but also the AA and AC inferences and the majority ¬C answers in the DA task. However, the existing framework of the WCS did not seem adequate to model the significant number of nothing follows (nf) responses in the DA task in case of a non-necessary antecedent. ...
... The resulting set of equivalences is called the weak completion of P. It differs from the program completion defined in [3] in that undefined atoms in the weakly completed program are not mapped to false but to unknown instead (see the sketch below). Weak completion is necessary for the WCS to adequately model the suppression task (and other reasoning tasks), as demonstrated in [6]. ...
... In Section 5 we will explain how these five steps work in the case of the DA reasoning tasks considered in this paper. More examples can be found in [6], [18], or [9]. ...
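The contrast between the two completions can be made concrete with a small fixpoint computation. Below is a toy Python sketch (our own code, not the WCS reference implementation) that computes the three-valued least model of the one-rule program p ← q, in which q is undefined, once treating undefined atoms as false (as under the completion of [3]) and once leaving them unknown (as under weak completion):

    T, U, F = "T", "U", "F"  # three truth values

    def neg(v):
        return {T: F, F: T, U: U}[v]

    def body_val(body, interp):
        # A body is a conjunction of literals (atom, positive?).
        vals = [interp[a] if pos else neg(interp[a]) for a, pos in body]
        if F in vals:
            return F
        return T if all(v == T for v in vals) else U

    def least_model(program, atoms, undefined_value):
        interp = {a: U for a in atoms}
        while True:
            new = {}
            for a in atoms:
                bodies = program.get(a, [])
                if not bodies:                 # no defining rule for a
                    new[a] = undefined_value
                elif any(body_val(b, interp) == T for b in bodies):
                    new[a] = T
                elif all(body_val(b, interp) == F for b in bodies):
                    new[a] = F
                else:
                    new[a] = U
            if new == interp:
                return interp
            interp = new

    P = {"p": [[("q", True)]]}                  # p <- q ; q is undefined
    print(least_model(P, ["p", "q"], F))        # completion: p and q false
    print(least_model(P, ["p", "q"], U))        # weak completion: both unknown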
Conference Paper
Full-text available
An experiment has revealed that if the antecedent of a conditional sentence is denied, most participants conclude that the negation of the consequent holds. However, a significant number of participants answered nothing follows if the antecedent of the conditional sentence was non-necessary. The weak completion semantics correctly models the answers of the majority, but cannot explain the number of nothing follows answers. In this paper we extend the weak completion semantics by counterexamples. The extension makes it possible to explain the experimental findings.
... In the WCS, this program represents the conditional sentence if A then C. Thus, conditional sentences are not represented by implications but by licenses for inferences (Stenning and van Lambalgen, 2008). The abnormalities are initially assumed to be false, but can later be used to represent, among other things, enabling relationships (see, e.g., Dietz et al. 2012). ...
... This is independent of whether the conclusions in question were valid or invalid with respect to classical two-valued logic. The suppression task was the first experiment adequately modeled by the WCS (Dietz et al., 2012). In what follows, two of the twelve tasks will be discussed to illustrate how the WCS models suppression with abduction. ...
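As a hedged illustration (our own sketch in the spirit of the Stenning and van Lambalgen encoding; the atom names are ours), the following self-contained Python fragment computes the three-valued least model of the essay/library program with and without the additional conditional about the library being open:

    # Truth values ordered F < U < T: conjunction is min over a body's
    # literals, and a head takes the max over the values of its rule bodies.
    T, U, F = 2, 1, 0
    NEG = {T: F, U: U, F: T}
    CONST = {"TRUE": T, "FALSE": F}

    def value(lit, interp):
        atom, positive = lit
        v = CONST.get(atom, interp.get(atom, U))
        return v if positive else NEG[v]

    def least_model(rules, atoms):
        interp = {a: U for a in atoms}
        while True:
            new = {a: max((min((value(l, interp) for l in body), default=T)
                           for body in rules.get(a, [])), default=U)
                   for a in atoms}
            if new == interp:
                return interp
            interp = new

    # "If she has an essay, she studies in the library": modus ponens fires.
    p1 = {"essay":   [[("TRUE", True)]],                   # essay is a fact
          "ab1":     [[("FALSE", True)]],                  # ab1 <- falsum
          "library": [[("essay", True), ("ab1", False)]]}  # library <- essay, not ab1
    print(least_model(p1, ["essay", "ab1", "library"]))    # library = T (2)

    # Adding "if the library is open, she studies in the library" overrides
    # ab1 <- falsum by ab1 <- not open; open is unknown, so library becomes
    # unknown: modus ponens is suppressed.
    p2 = dict(p1, ab1=[[("open", False)]])
    print(least_model(p2, ["essay", "ab1", "library", "open"]))  # library = U (1)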
Chapter
The weak completion semantics is a novel cognitive theory. It is multi-valued, non-monotonic, and knowledge-rich, allows learning, can handle inconsistent background knowledge, and can be applied to model the average reasoner. Moreover, it uses abduction to explain observations, to satisfy integrity constraints, and to search for counterexamples. In all these applications, human reasoning tasks can only be adequately modeled within the weak completion semantics if skeptical abduction rather than credulous abduction is applied. This will be illustrated in the context of the suppression task, disjunctive reasoning, and conditional reasoning.
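A minimal sketch of the distinction (our own toy example with hypothetical atoms r, s and w, not taken from the chapter): an observation may have several minimal explanations, and a conclusion follows skeptically only if it holds under every one of them, while credulous abduction requires only a single supporting explanation:

    from itertools import combinations

    rules = {"w": [{"r"}, {"s"}]}   # w <- r ; w <- s  (e.g. wet <- rain | sprinkler)
    abducibles = ["r", "s"]
    observation = "w"

    def closure(facts):
        # Forward-chain the definite rules from a set of abduced facts.
        facts = set(facts)
        changed = True
        while changed:
            changed = False
            for head, bodies in rules.items():
                if head not in facts and any(b <= facts for b in bodies):
                    facts.add(head)
                    changed = True
        return facts

    explanations = [set(c)
                    for k in range(len(abducibles) + 1)
                    for c in combinations(abducibles, k)
                    if observation in closure(c)]
    # Keep only subset-minimal explanations.
    explanations = [e for e in explanations
                    if not any(f < e for f in explanations)]
    print(explanations)                                    # [{'r'}, {'s'}]

    skeptical = set.intersection(*(closure(e) for e in explanations))
    credulous = set.union(*(closure(e) for e in explanations))
    print(skeptical)   # {'w'}: only the observation follows skeptically
    print(credulous)   # {'r', 's', 'w'}: credulous abduction adds r and s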
... The weak completion semantics (WCS) is a logic programming approach to model human reasoning. Based on ideas originally developed by Stenning and van Lambalgen (2008), it is a three-valued, non-monotonic theory which is knowledge-rich, can handle inconsistent background knowledge, and has been shown to adequately model the average case in various human reasoning tasks like the suppression task (Dietz, Hölldobler, & Ragni, 2012), human syllogistic reasoning (Oliveira da Costa, Dietz Saldanha, Hölldobler, & Ragni, 2017), and human conditional reasoning (Cramer, Hölldobler, & Ragni, 2021). Thus, the WCS offers solutions for the five fundamental problems attributed to the classical binary logic approach in the psychology of reasoning by Oaksford and Chater (2020). ...
... In this paper the abnormalities will always be false. However, in other applications like the suppression task they are important to model exceptional cases and enabling relations (Dietz et al., 2012). ...
Conference Paper
Full-text available
The weak completion semantics is a three-valued, non-monotonic theory which has been shown to adequately model various cognitive reasoning tasks. In this paper we extend the weak completion semantics to model disjunctions and exclusive disjunctions. Such disjunctions are encoded by integrity constraints and skeptical abduction is applied to compute logical consequences. We discuss various examples and relate the approach to the elimination of disjunctions in the calculus of natural deduction.
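To convey the encoding idea, here is a small Python sketch (our own, not necessarily the construction used in the paper) that reads the disjunction p ∨ q as the integrity constraint ⊥ ← ¬p ∧ ¬q, abduces truth values for p and q, and draws skeptical conclusions over the surviving assignments:

    from itertools import product

    # Abduce a classical truth value for each of p and q ...
    candidates = [dict(zip(("p", "q"), vals))
                  for vals in product([True, False], repeat=2)]

    # ... and keep only the assignments satisfying the integrity constraint
    # "falsum <- not p and not q", i.e. those where p or q holds.
    models = [m for m in candidates if m["p"] or m["q"]]

    # Skeptical consequence: a literal follows iff it is true in every model.
    for atom in ("p", "q"):
        status = "follows" if all(m[atom] for m in models) else "does not follow"
        print(atom, status, "skeptically")
    print(all(m["p"] or m["q"] for m in models))  # the disjunction itself: True

As expected, neither disjunct follows skeptically from p ∨ q alone, while the disjunction itself holds in every remaining model.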
... The Weak Completion Semantics is based on ideas initially proposed by Keith Stenning and Michiel van Lambalgen in [28]. It is mathematically sound [16] and has been applied to various human reasoning tasks such as the suppression task [6], the selection task [7], the belief-bias effect [25], ethical decision-making [15], etc. It has outperformed the twelve cognitive theories considered by Philip Johnson-Laird and Sangeet Khemlani [21] in syllogistic reasoning [5] and is implementable in a connectionist setting [27]. ...
Article
Full-text available
With this article, the two authors would like to pay tribute to the memory of their dear friend and colleague Steffen Hölldobler, who left us far too early in 2023. Ulrich (UF), in his time as a postdoc at the University of the Bundeswehr Munich, mentored Steffen as a student in his first logic lectures. Meghna (MB) is Steffen’s last PhD student. Although there is so much more to the wonderful man Steffen was, this article strives to briefly touch upon some of the various hats he donned during his lifetime—as a student, a researcher, a professor and a friend.
... In doing so, the WCS models reasoning problems via sets of clauses representing the known facts and domain-specific inferential principles that allow for derivation of new knowledge (da Costa et al., 2017). So far, the WCS has been successfully applied to a variety of problem domains such as syllogistic reasoning (da Costa et al., 2017; Dietz Saldanha et al., 2018; Dietz Saldanha & Mörbitz, 2020; Dietz Saldanha & Schambach, 2019, 2020), spatial relational reasoning (Dietz, Hölldobler, & Höps, 2015), conditional reasoning (Dietz, Hölldobler, & Pereira, 2015), the suppression task (Dietz et al., 2012), the Wason selection task (Dietz et al., 2013), and the belief-bias effect (Dietz, 2017; Pereira et al., 2014a,b). ...
Thesis
The field of human reasoning research can look back on over a century of interdisciplinary work aimed at uncovering the cognitive processes underlying the human ability to make inferences. However, despite this extensive history, recent investigations suggest flaws in the methodological approach predominantly employed by the field. For instance, the traditional focus has been on aggregate analyses, i.e., trying to understand and explain group-level behavior that neglects the importance of individual differences. This invites the problem of group-to-individual generalizability, which may render the transfer of insight to individual behavior invalid. If proven to hold for the field of reasoning research, current theories would not be applicable to individual human reasoners but only to artificial, average behavior with limited scientific relevance. The present thesis presents a novel approach to reasoning research that aims to provide a remedy for the current shortcomings of the field. This is accomplished by shifting the methodological perspective from the prevailing focus on groups to individuals. In particular, the fundamental research goals are reformulated as a series of predictive modeling problems that naturally invite the use of accuracy metrics to gauge the performance of models. By jointly allowing theory-driven and data-driven approaches to compete on a common ground, this new perspective does not only allow for an improved assessment of performances, but also for detailed analyses of the shortcomings and advantages of specific approaches, which may boost synergistic effects in the field. Using this novel approach, syllogistic and spatial relational reasoning, two central domains in the field of reasoning research, are analyzed. Benchmarks consisting of collections of state-of-the-art models that reflect the theoretical progress in the field, as well as a selection of representative datasets, are created to run the field's most comprehensive model evaluations to date. The results suggest that the problems of group-to-individual generalizability are present in the field. As such, the prevailing neglect of inter-individual differences that has caused a focus on "average" instead of individual behavior leads to the risk of theories losing their explanatory relevance. Furthermore, practical shortcomings of the data landscape such as noise and the lack of discriminability between individuals are uncovered. The thesis concludes by proposing an improved methodological paradigm for approaching model-driven research in the field.
... into three groups and were asked whether they could derive conclusions given variations of a set of premises. First, we present the formalization in ASP guided by cognitive principles. In Part I, reasoning is done deductively, and, in Part II, it is done abductively. In contrast to other logic programming approaches (Stenning & van Lambalgen, 2008; E.-A. Dietz, Hölldobler, & Ragni, 2012), we apply quantitative reasoning to the computed models, which allows us to account for the majority's differences in the experimental results. ...
Preprint
Full-text available
Cognitive theories of reasoning are about understanding how humans come to conclusions from a set of premises. Starting from hypothetical thoughts, we are interested in the implications behind basic everyday language and in how we reason with them. A widely studied question is whether cognitive theories can account for typical reasoning tasks and can be confirmed by one's own empirical experiments. This paper takes a different view: we do not propose a theory, but instead take findings from the literature and show how these, formalized as cognitive principles within a logical framework, can establish a quantitative notion of reasoning, which we call plausibility. For this purpose, we employ techniques from non-monotonic reasoning and computer science, namely a solving paradigm called answer set programming (ASP). Finally, we can fruitfully use plausibility reasoning in ASP to test the effects of an existing experiment and explain different majority responses.
Chapter
The Cambridge Handbook of Computational Cognitive Sciences is a comprehensive reference for this rapidly developing and highly interdisciplinary field. Written with both newcomers and experts in mind, it provides an accessible introduction of paradigms, methodologies, approaches, and models, with ample detail and illustrated by examples. It should appeal to researchers and students working within the computational cognitive sciences, as well as those working in adjacent fields including philosophy, psychology, linguistics, anthropology, education, neuroscience, artificial intelligence, computer science, and more.
Article
Full-text available
Two experiments are reported which compare conditional reasoning with three types of rule. These consist of two types of rule that have been widely studied previously, if p then q and p only if q, together with a third type, q if p. In both experiments, the p only if q type of rule yields a different pattern of performance from the two other types of rule. Experiment 1 is an abstract rule-evaluation task and demonstrates differential effects of temporal order and of suppositional bias. Experiment 2 investigates rule generation, rephrasing, and comparison, and demonstrates differential effects of temporal order and of thematic content. An analysis of the results is offered in terms of biases and mental models. Effects of rule form and context can be explained as reflecting the different sequences in which mental models are created for each rule form. However, it is necessary to consider the internal structure of individual mental models to account for effects arising from temporal ordering of rules.
Chapter
Full-text available
Research into the processing of symbolic knowledge by means of connectionist networks aims at systems which combine the declarative nature of logic-based artificial intelligence with the robustness and trainability of artificial neural networks. This endeavour has been addressed quite successfully in the past for propositional knowledge representation and reasoning tasks. However, as soon as these tasks are extended beyond propositional logic, it is not obvious at all what neural-symbolic systems should look like such that they are truly connectionist and allow for a declarative reading at the same time.
Article
Qualitative spatial reasoning (QSR) is often claimed to be cognitively more plausible than conventional numerical approaches to spatial reasoning, because it copes with the indeterminacy of spatial data and allows inferences based on incomplete spatial knowledge. The paper reports experimental results concerning the cognitive adequacy of an important approach used in QSR, namely the spatial interpretation of the interval calculus introduced by Allen (1983). Knauff, Rauh and Schlieder (1995) distinguished between the conceptual and inferential cognitive adequacy of Allen's interval calculus. The former refers to the thirteen base relations as a representational system and the latter to the compositions of these relations as a tool for reasoning. The results of two memory experiments on conceptual adequacy show that people use ordinal information similar to the interval relations when representing and remembering spatial arrangements. Furthermore, symmetry transformations on the interval relations were found to be responsible for most of the errors, whereas conceptual neighborhood theory did not appear to correspond to cognitively relevant concepts. Inferential adequacy was investigated by two reasoning experiments, and the results show that in inference tasks where the number of possible interval relations for the composition is more than one, subjects ignore numerous possibilities and interindividually prefer the same relations. Reorientations and transpositions operating on the relations seem to be important for reasoning performance as well, whereas conceptual neighborhood did not appear to affect the difficulty of reasoning tasks based on the interval relations.
Chapter
A variety of disciplines have dealt with the design of intelligent algorithms, among them Artificial Intelligence and Robotics. While some approaches were very successful and have yielded promising results, others have failed to do so, which was, at least partly, due to inadequate architectures and algorithms that were not suited to mimic the behavior of biological intelligence. Therefore, in recent years, a quest for "brain-like" intelligence has arisen. Soft- and hardware are supposed to behave like biological brains, ideally like the human brain. This raises the questions of what exactly defines the attribute "brain-like", how the attribute can be implemented, and how it can be tested. This chapter suggests the concept of cognitive adequacy in order to get a rough estimate of how "brain-like" an algorithm behaves.