Why Do Humans Reason? Arguments for an Argumentative Theory

Philosophy, Politics and Economics Program, University of Pennsylvania, Philadelphia, PA 19104, USA.
Behavioral and Brain Sciences, 04/2011; 34(2): 57-74; discussion 74-111. DOI: 10.1017/S0140525X10000968


Reasoning is generally seen as a means to improve knowledge and make better decisions. However, much evidence shows that reasoning often leads to epistemic distortions and poor decisions. This suggests that the function of reasoning should be rethought. Our hypothesis is that the function of reasoning is argumentative. It is to devise and evaluate arguments intended to persuade. Reasoning so conceived is adaptive given the exceptional dependence of humans on communication and their vulnerability to misinformation. A wide range of evidence in the psychology of reasoning and decision making can be reinterpreted and better explained in the light of this hypothesis. Poor performance in standard reasoning tasks is explained by the lack of argumentative context. When the same problems are placed in a proper argumentative setting, people turn out to be skilled arguers. Skilled arguers, however, are not after the truth but after arguments supporting their views. This explains the notorious confirmation bias. This bias is apparent not only when people are actually arguing, but also when they are reasoning proactively from the perspective of having to defend their opinions. Reasoning so motivated can distort evaluations and attitudes and allow erroneous beliefs to persist. Proactively used reasoning also favors decisions that are easy to justify but not necessarily better. In all these instances traditionally described as failures or flaws, reasoning does exactly what can be expected of an argumentative device: Look for arguments that support a given conclusion, and, ceteris paribus, favor conclusions for which arguments can be found.

    • "The similarity is easily explained by the fact that when reasoning produces arguments for one's position, it is automatically in a situation in which it agrees with the argument's conclusion. Selective laziness can be interpreted in light of the argumentative theory of reasoning (Mercier & Sperber, 2011). This theory hypothesizes that reasoning is best employed in a dialogical context. "
    ABSTRACT: Reasoning research suggests that people use more stringent criteria when they evaluate others' arguments than when they produce arguments themselves. To demonstrate this "selective laziness," we used a choice blindness manipulation. In two experiments, participants had to produce a series of arguments in response to reasoning problems, and they were then asked to evaluate other people's arguments about the same problems. Unknown to the participants, in one of the trials, they were presented with their own argument as if it was someone else's. Among those participants who accepted the manipulation and thus thought they were evaluating someone else's argument, more than half (56% and 58%) rejected the arguments that were in fact their own. Moreover, participants were more likely to reject their own arguments for invalid than for valid answers. This demonstrates that people are more critical of other people's arguments than of their own, without being overly critical: They are better able to tell valid from invalid arguments when the arguments are someone else's rather than their own.
    Cognitive Science: A Multidisciplinary Journal, 10/2015; DOI: 10.1111/cogs.12303
    • "Another hypothesis is that the process of comparison between the probability of the premises and the probability of a conclusion, which results from an inference, may have fostered an attitude of epistemic vigilance (Mercier & Sperber, 2011). Because the essence of an argument is to provide reasons to believe the conclusion, participants may have been prompted to wonder why the given probability of the premises is a good reason for the conclusion to have or not to have the same probability; and this differs from the simple comparison of the probability of two sentences. "
    ABSTRACT: The new paradigm in the psychology of reasoning redirects the investigation of deduction conceptually and methodologically because the premises and the conclusion of the inferences are assumed to be uncertain. A probabilistic counterpart of the concept of logical validity and a method to assess whether individuals comply with it must be defined. Conceptually, we used de Finetti's coherence as a normative framework to assess individuals' performance. Methodologically, we presented inference schemas whose premises had various levels of probability that contained non-numerical expressions (e.g., “the chances are high”) and, as a control, sure levels. Depending on the inference schemas, from 60% to 80% of the participants produced coherent conclusions when the premises were uncertain. The data also show that (1) except for schemas involving conjunction, performance was consistently lower with certain than uncertain premises, (2) the rate of conjunction fallacy was consistently low (not exceeding 20%, even with sure premises), and (3) participants' interpretation of the conditional agreed with de Finetti's “conditional event” but not with the material conditional.
    Thinking and Reasoning, 08/2015; DOI: 10.1080/13546783.2015.1052561
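    As a minimal illustration of the coherence standard invoked in the excerpt above (the general formulation is standard probability theory and is not spelled out in the excerpt; the specific numbers are hypothetical), here is the coherent interval for modus ponens with uncertain premises, written in LaTeX:

    % Modus ponens with uncertain premises: P(A) = a and P(B | A) = b.
    % By the law of total probability,
    %   P(B) = P(B | A)P(A) + P(B | \neg A)P(\neg A) = ab + P(B | \neg A)(1 - a),
    % and P(B | \neg A) can take any value in [0, 1], so a coherent response satisfies
    \[
      ab \;\le\; P(B) \;\le\; ab + (1 - a).
    \]
    % Hypothetical numbers (not from the study): with P(A) = 0.9 ("the chances are high")
    % and P(B | A) = 0.8, any stated P(B) between 0.72 and 0.82 is coherent.
    % The two readings of "if A then B" contrasted in the excerpt differ as follows:
    \[
      P(\text{conditional event}) = P(B \mid A) = b,
      \qquad
      P(\text{material conditional}) = P(\neg A \lor B) = 1 - a + ab.
    \]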
    • "The ability to look for correct answers might even be just a luck accidental consequence of our actual need and use for reasoning. The more we learn, the more it seems we use reasoning mostly as a tool for winning arguments [5] [6]. When we have one central belief we defend, one idea we want to claim right, we tend to believe in everything that would support that conclusion and deny everything that would go against it, even when those beliefs are logically independent [4]. "
    ABSTRACT: We believe, in the sense of supporting ideas and considering them correct while dismissing doubts about them. We take sides on ideas and theories as if that were the right thing to do. And yet, from a rational point of view, this kind of support and belief is not justifiable at all. The best we can hope for when describing the real world, as far as we know, is probabilistic knowledge: probabilities associated with each statement. Even that can be very hard to achieve reliably. Worse, experiments in cognitive psychology show that when we defend ideas and believe them as if they were true, we lose the ability to analyze the questions we hold beliefs about with competence. In this paper, I gather the evidence we have about taking sides and draw the obvious but overlooked conclusion that, taken together, these facts mean we should never believe anything about the real world except in a probabilistic way. We should never take sides, because taking sides destroys our ability to seek the most accurate description of the world. This means we need to rethink the way we debate ideas, from our teaching to our political debates, if we want to arrive at the best solutions suggested by whatever evidence we have. I show that this has deep consequences for a range of problems, from the emergence of extremism to the reliability of whole scientific fields. Inductive reasoning requires that we allow every idea to make predictions so that we can rank which ideas are better, and this has important consequences for scientific practice. The crisis around p-values is also discussed and is much better understood in the light of these results. Finally, I discuss possible ways to minimize the problem.