© Douglas Walton. Informal Logic, Vol. 30, No. 2 (2010), pp. 159-184.
Why Fallacies Appear to be
Better Arguments Than They Are
DOUGLAS WALTON
Centre for Research in Reasoning, Argumentation and Rhetoric
University of Windsor
2500 University Avenue West
Windsor, ON
Canada N9B 3Y1
Email: dwalton@uwindsor.ca
Web: www.dougwalton.ca
Abstract: This paper explains how a
fallacious argument can be deceptive
by appearing to be a better argument
of its kind than it really is. The expla-
nation combines heuristics and argu-
mentation schemes. Heuristics are fast
and frugal shortcuts to a solution to a
problem. They are reasonable to use,
but sometimes jump to a conclusion
that is not justified. In fallacious in-
stances, according to the theory pro-
posed, such a jump overlooks prereq-
uisites of the defeasible argumentation
scheme for the type of argument in
question. Three informal fallacies,
argumentum ad verecundiam, argu-
mentum ad ignorantiam and fear ap-
peal argument, are used to illustrate
and explain the theory.
Resumé: This paper explains how a
fallacious argument can deceive us
by seeming to be a better argument
than it really is. The explanation
combines heuristics and argumentation
schemes. Heuristics are fast and
frugal shortcuts used in solving a
problem. It is reasonable to use them,
but they sometimes arrive at an
unjustified conclusion. In fallacious
cases of their use, it is proposed that
they overlook necessary conditions of
the defeasible argumentation schemes
that should be taken into account in
evaluating the type of argument in
question. To illustrate and explain
this theory, three informal fallacies
are used: argumentum ad verecundiam,
argumentum ad ignorantiam, and the
fear appeal argument.
Keywords: argumentation schemes; Carneades model of argumentation; defea-
sible reasoning; errors of reasoning; heuristics; paraschemes.
In the informal logic tradition, fallacies are commonly used
sophisms or errors in reasoning like hasty generalization, argu-
mentum ad hominem (argument against the person), argumentum
ad verecundiam (appeal to authority, especially inappropriate
argument from expert opinion), post hoc ergo propter hoc (false
cause), straw man argument, petitio principii (begging the
question) and so forth. Many of the most common forms of argu-
ment associated with major fallacies, like argument from expert
opinion, ad hominem argument, argument from analogy and argu-
ment from correlation to cause, have now been analyzed using the
device of defeasible argumentation schemes (Walton, Reed and
Macagno, 2008). Recent research in computing has also embraced
the use of argumentation schemes, linking them to key logical no-
tions like burden of proof (Gordon, Prakken and Walton, 2007).
Argumentation schemes have been put forward as a helpful way of
characterizing structures of human reasoning, like argument from
expert opinion, that have proved troublesome to view deductively.
Many of the schemes are closely related to specific informal
fallacies representing types of errors that come about when a
scheme is used wrongly. Such schemes represent the structure of
correct forms of reasoning used wrongly in specific instances
where an argument is judged to be fallacious. Studies of fallacies in
argumentation and informal logic have mainly taken a normative
approach, by seeing fallacies as arguments that violate standards of
how an argument should properly be used in rational thinking or
arguing.
However, fallacies also have a psychological dimension. They
are illusions and deceptions that we as human thinkers are prone to.
They are said to be arguments that seem valid but are not (Ham-
blin, 1970, 12). Even so, little is known about how the notion
‘seems valid’ should be explained (Hansen, 2002). Could it be psy-
chological? Psychology studies heuristics and cognitive biases in
human decision-making (Tversky and Kahneman, 1974). Heuris-
tics may be broadly characterized as rules of thumb that enable us
to rapidly solve a problem even where information is insufficient to
yield an optimal solution, but in some cases they are known to lead
to errors and cognitive biases. In this paper, it is shown how
heuristics are closely connected to fallacies in a way that helps to
explain why fallacies appear to be better arguments than they really
are. Three examples of heuristics that are also known to be falla-
cies are used to bring the normative dimension better into relation
with the psychological dimension.
The problem is solved by placing the notion of a heuristic as a
mediating concept between the notions of fallacy and defeasible
argumentation scheme. These are the three heuristics, as we will
call them. If it is an expert opinion, defer to it. If there is no reason
to think it is false, accept it as true. If it is fearful, avoid taking
steps toward whatever might lead to it. These three heuristics are inter-
posed between three argumentation schemes underlying three in-
formal fallacies by introducing a new device called a parascheme.
The parascheme represents the structure of the heuristic. Each pa-
rascheme sits alongside a given scheme in the background, like a
ghostly double. It comes into play to explain the relationship be-
tween a reasonable argument that fits an argumentation scheme and
the same kind of argument that has been employed in a way that
makes it fallacious. It is shown how the parascheme, along with the
scheme and the heuristic, can be used to explain what has gone
wrong in fallacious instances of these three kinds of arguments.
1. Heuristics and paraschemes
Gigerenzer et al. (1999) explore the cognitive theory that we have
two minds—one that is automatic, unconscious, and fast, the other
controlled, conscious, and slow. In recent years there has been
great interest in so-called dual-process theories of reasoning and
cognition. According to dual process theories in cognitive science,
there are two distinct cognitive systems underlying human reason-
ing. One is an evolutionarily old system that is associative, auto-
matic, unconscious, parallel, and fast. It instinctively jumps to a
conclusion. In this system, innate thinking processes have evolved
to solve specific adaptive problems. The other is a system that is
rule-based, controlled, conscious, serial, and slow. In this cognitive
system, processes are learned slowly and consciously, but at the
same time need to be flexible and responsive.
The old system uses what are called heuristics to rapidly jump
to a conclusion or course of action. An example would be the use
of trial and error when one cannot find a better way of solving a
problem. Argument making has been combined with heuristic
thinking by Facione and Facione (2007) to help explain the com-
plexity of human reasoning of the kind used in decision-making.
They distinguish between two kinds of thinking (Facione and Fa-
cione, 2007, 5). One, based on heuristics, applies to situations that
are familiar, like making a fast decision to brake while driving on a
freeway. The other is useful for judgments in unfamiliar situations,
processing abstract concepts and deliberating where there is
sufficient time to plan carefully and collect evidence.
Heuristics are said to be “fast and frugal” in use of resources
(Gigerenzer et al., 1999). They are extremely useful in arriving at a
decision to proceed tentatively on a defeasible basis under con-
straints of time pressure and lack of complete knowledge. Gigeren-
zer et al. (1999, 4) offer the example of a man who is rushed to a
hospital while having a heart attack. The physician needs to decide
under time pressure whether he should be classified as a low risk or
a high risk patient. This can be done using three variables. (1) The
patient who has a systolic blood pressure of less than 91 is classi-
fied as high risk without considering any other factors. (2) A pa-
tient under age 62.5 is classified as low risk. (3) If the patient is
over that age, the additional factor of sinus tachycardia (heart
rhythm of greater than 100 beats per minute) needs to be taken into
account. These three variables can be applied using the decision
tree in Figure 1.
Figure 1. Decision Tree for Heart Attack Victim (adapted from Gi-
gerenzer et al., 1999, 4)
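Read as a procedure, the tree is a short cascade of tests, each of
which either decides the case outright or passes it on. The follow-
ing sketch (in Python, purely illustrative) encodes the three vari-
ables; the assumption that sinus tachycardia sends an older patient
to the high-risk branch follows the usual presentation of the tree.

def classify_heart_attack_risk(systolic_bp: float, age: float,
                               sinus_tachycardia: bool) -> str:
    """Fast and frugal triage tree adapted from Gigerenzer et al.
    (1999, 4). Each cue either decides the case on its own or
    passes it on; no quantitative weighing of evidence takes place."""
    if systolic_bp < 91:   # (1) the first cue decides by itself
        return "high risk"
    if age <= 62.5:        # (2) younger patients are classified low risk
        return "low risk"
    # (3) for older patients, one further cue settles the matter
    return "high risk" if sinus_tachycardia else "low risk"

# Example: an older patient with a heart rate over 100 beats per minute.
print(classify_heart_attack_risk(systolic_bp=120, age=70,
                                 sinus_tachycardia=True))  # high risk

Note that no cue is weighed against any other: the first test that
applies settles the classification, which is what makes the heuristic
fast and frugal.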
This decision strategy is very simple and ignores quantitative in-
formation; hence it may make us suspicious that it is inaccurate
compared to a statistical classification method that takes much
more data into account. A heuristic is only a shortcut, and if there
is enough time for more evidence to be collected, a better method
can often be found. The controlled, conscious and slow system of
reasoning can pose critical questions, looking at evidential consid-
erations pro and contra. An argument based on a heuristic might
stand up or not under this more detailed kind of scrutiny. Still,
heuristics can be not only useful but often highly accurate. According
to Gigerenzer et al. (1999, 4-5), the decision tree heuristic “is actu-
ally more accurate in classifying heart attack patients according to
risk status than are some rather more complex statistical classifica-
tion methods”.
We need to be aware, however, that the term ‘heuristic’ has
different meanings in different disciplines. In psychology it refers
to the use of simple and efficient rules that can be used to explain
how people make decisions and solve problems under conditions of
incomplete information. Such rules can be practically useful and
work well in many situations, but they can also be known to lead to
errors in some cases. Philosophers of science have emphasized the
importance of heuristics for invention of hypotheses in scientific
investigations. In engineering, a heuristic is a rule of thumb based
on practical experience that can be used to save time and costs
when solving a problem.
Russell and Norvig (1995, 94) have presented a brief history
of how the meaning of the term ‘heuristic’ has evolved in computer
science. Originally the term was used to refer to the study of meth-
ods for discovering problem-solving techniques, especially ones
that can be used to find mathematical proofs. Later, the term was
used as the opposite of an algorithm. In other words, it was defined
as a process that may solve a problem, but offers no guarantee of
solving it. Still later, during the period when expert systems domi-
nated artificial intelligence, “heuristics were viewed as rules of
thumb that domain experts could use to generate good solutions
without exhaustive search” (Russell and Norvig, 1995, 94). How-
ever, this notion of a heuristic proved to be too inflexible, leading
to the current usage that refers to heuristics as techniques designed
to solve a problem even if the solution cannot be proved conclu-
sively to be the correct one. This usage is the one used in work on
devising intelligent search strategies for computer problem solving.
Many examples of typical uses of heuristics in computer problem
solving are given by Pearl (1984). An example Pearl gives (1984,
3) is the case of the chess master who decides that a particular
move is best because the position it leads to appears stronger than the posi-
tions resulting from other moves. This method is an alternative to
rigorously determining which sequences of moves force a check-
mate by precisely comparing all these available sequences.
Heuristics are clearly related in some way both to defeasible
argumentation schemes and to fallacies, as we can see by compar-
ing them. For example, the heuristic ‘If it’s an expert opinion, defer
to it’ is clearly related to the argumentation scheme for expert opin-
ion. The heuristic appears to be a fast and shorter version of the
scheme, which, as will be seen in the next section, is longer, de-
pending on which version of the scheme is selected. Perhaps the
heuristic, since a heuristic is known to be capable of leading to er-
ror, is part of the fallacy, or can be used to explain how the fallacy
works. To explore this suggestion, here we introduce a new con-
cept into logic.1

1 It may not be all that new, if one recalls that one of the words Aristotle used for
fallacy was paralogism.
A parascheme is a device that can be used to represent the
structure of a heuristic as a speedy form of inference that instinc-
tively jumps to a conclusion and is commonly used to make deci-
sions. Here are three examples of paraschemes. I name the first the
parascheme for expert opinion: an expert says A is true, therefore A
is true. I name the second the parascheme for lack of a better rea-
son: A is not known to be false (true) therefore A is true (false). I
name the third the parascheme for fearful consequence: conse-
quence C is fearful, therefore, do not carry out any action α that
would have consequence C. These paraschemes are obviously re-
lated in some interesting way to two well-known informal fallacies,
argumentum ad verecundiam (fallacious appeals to authority), ar-
gumentum ad ignorantiam (arguments from ignorance), and to fear
appeal arguments, sometimes associated in logic textbooks with
fallacious ad baculum arguments, or arguments that appeal to
threats.
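To fix ideas before turning to those fallacies, the three para-
schemes can be written down as bare premise-to-conclusion rules.
The following sketch (my own encoding, illustrative rather than
part of any formal apparatus) makes plain how little each rule
checks before it concludes.

# Each parascheme pairs a single premise pattern with the conclusion
# it jumps to; nothing else is consulted.
PARASCHEMES = {
    "expert opinion": ("an expert says that A is true", "A is true"),
    "lack of a better reason": ("A is not known to be false", "A is true"),
    "fearful consequence": ("consequence C of action a is fearful",
                            "do not carry out action a"),
}

def jump(parascheme: str) -> str:
    premise, conclusion = PARASCHEMES[parascheme]
    return f"Given only that {premise}, conclude: {conclusion}"

print(jump("expert opinion"))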
2. Variants of the scheme for argument from expert opinion
Argument from expert opinion has long been associated with the fallacy
of appeal to authority, but recent work in informal logic has shown
that it is also very often a reasonable argument that has the structure
of a defeasible argumentation scheme. The following form of
reasoning represents its argumentation scheme: if E is an expert
and E says that A is true then A is true; E is an expert who says that
A is true; therefore A is true. This scheme is defeasible. It is not
deductively valid, since what an expert says often turns out to be
wrong, or at least subject to revisions as new information comes in.
Such a defeasible scheme is inherently subject to critical
questioning. Moreover, the conditional in the major premise is not
the material conditional of the kind used in deductive propositional
logic. It is a defeasible conditional. Here is the version of the
scheme for argument from expert opinion from Walton, Reed and
Macagno (2008, p. 309). Let’s call it the simple version of the
scheme.
Major Premise: Source E is an expert in subject domain S
containing proposition A.
Minor Premise: E asserts that proposition A is true (false).
Conclusion: A is true (false).
The simple version is short, having only two premises. It expresses
the nature of the basic type of argument very well. It brings out
how argument from expert opinion works as a fast and frugal heu-
ristic in everyday thinking. But there are some problems with it.
The first problem was pointed out by Walton and Reed (2002).
The scheme above, usually taken to represent the basic scheme for
argument from expert opinion, seems to be incomplete. Walton and
Reed (2002, 2) suggest that the structure of the argument could be
more fully expressed in the following version, which they call Ver-
sion II.
Explicit Premise: Source E is an expert in subject domain
S containing proposition A.
Explicit Premise: E asserts that proposition A (in domain
S) is true (false).
Conditional Premise: If source E is an expert in subject
domain S containing proposition A and E says that A
is true then A may plausibly be taken to be true
(false).
Conclusion: A is true (false).
Let’s call this version of the scheme the conditional version. It has
what appears to be a modus ponens structure, but it represents a
defeasible variant of this form of argument that is not well modeled
as deductive or inductive. Note that it even adds a new dimension
to the simple scheme by adding ‘may plausibly be taken to be’
in the conditional premise. These remarks suggest a second prob-
lem. A distinction needs to be drawn between the deductive form
of argument commonly called modus ponens and its defeasible va-
riant defeasible modus ponens, called modus non excipiens by Ver-
heij (1999, 5). This type of argument has the following form: if A
then (defeasibly) B; A; therefore (defeasibly) B. It is a type of
argument that can hold tentatively under conditions of incomplete
knowledge of the full facts of a case, but that can be defeated by ex-
ceptions. It is not a deductively valid form of inference. In
defeasible logic (see Nute, 1994), a rule-based non-monotonic
formal system, a conclusion derived is only tentatively accepted,
subject to new information that may come in later. Where ⇒ rep-
resents the defeasible conditional, the statement A ⇒ B reads: if A
then defeasibly B. It means that ‘if A then B’ holds tentatively, sub-
ject to new information that might come in, providing an instance
where A holds but B doesn’t.
Taking into consideration how such arguments can be defeated
or cast into doubt brings us to the asking of appropriate critical
questions matching each defeasible scheme. The six basic critical
questions matching the argument from expert opinion are given in
Walton, Reed and Macagno (2008, 310) as follows.
CQ1: Expertise Question. How credible is E as an expert
source?
CQ2: Field Question. Is E an expert in the field that A is
in?
CQ3: Opinion Question. What did E assert that implies A?
CQ4: Trustworthiness Question. Is E personally reliable as
a source?
CQ5: Consistency Question. Is A consistent with what oth-
er experts assert?
CQ6: Backup Evidence Question. Is E's assertion based
on evidence?
The critical questions are provided to teach skills of critical think-
ing concerning how best to react when confronted with a particular
type of argument.
There is also a third problem with the simple version of the
scheme. This problem was first noticed in a general discussion of
schemes and critical questions by Verheij (2001). The problem as
applied to the simple version of this scheme is that the field ques-
tion appears to be redundant, because the major premise already
states that the field (domain) of the proposition that is claimed to
be true matches the field (domain) of the expert. Since this asser-
tion is already made in the premise, there is no need to add consid-
eration of it as a critical question as well, because anyone who dis-
agrees with the argument, or wants to question it, can simply dis-
agree with the premise, and ask for support for it. So it might seem
that, in order to use these critical questions, the simple version of
the scheme could be shortened even further. We return to this prob-
lem in the next section.
Four of the six critical questions of the scheme for argument
from expert opinion can be modeled as implicit premises that sup-
plement the explicit premises of the scheme (Walton and Gordon,
2009). These four questions are modeled as additional assump-
tions, added to the ordinary premises. First, consider CQ1. When
you put forward an appeal to expert opinion, you assume, as part of
the argument, that the source is credible, or has knowledge in some
field. Second, consider CQ2. You assume that the expert is an ex-
pert in the field of the claim made. Third, consider CQ3. You as-
sume that the expert made some assertion that is the claim of the
conclusion, or can be inferred from it. Fourth, consider CQ6. You
assume that the expert’s assertion was based on some evidence
within the field of his or her expertise.
Questions are not premises, but the Carneades model repre-
sents the structure of the scheme in a way that recasts them as premises.
The new fully explicit argumentation scheme no longer needs criti-
cal questions in order for it to be subject to evaluation. Each premise
can be questioned or argued against in the usual way, shifting a
burden of proof onto the arguer to defend it, or to the questioner to
back up his criticism. That does not end the process of questioning
if critical sub-questioning is possible. But this process can be mod-
eled by Carneades in the same way, just by moving the process an-
other step.
Questions CQ4 and CQ5 can also be modeled as implicit prem-
ises of the scheme for argument from expert opinion, but they need
to be handled in a different way. One does not assume the expert
cited is untrustworthy without some evidence to back up such a
charge. The burden of proof to support such a claim, once made,
would shift to the respondent to back up his charge before the
given argument from expert opinion would fail to hold up. To suc-
cessfully challenge the trustworthiness of a witness, some evidence
of bias or dishonesty must be produced. Nor would one assume,
without further evidence, that what the expert said is inconsistent
with what other experts say. To successfully challenge the consis-
tency of an expert’s claim with what other experts in the same field
say, some evidence of what the others say must surely be produced.
The difference between these two kinds of critical questions can be
seen as one of burden of proof (Gordon, Prakken and Walton,
2007). Before they refute the argument from expert opinion, CQ4
and CQ5 have a burden of proof that needs to be met, whereas the
other critical questions refute the argument just by being asked,
unless the proponent offers some appropriate reply to the question.
The Carneades model of argumentation uses the following
procedure for determining the acceptability of an argument (Gor-
don and Walton, 2006).
At each stage of the argumentation process, an effec-
tive method (decision procedure) is used for testing
whether some proposition at issue is acceptable given
the arguments of the stage and a set of assumptions.
The assumptions represent undisputed facts, the current
consensus of the participants, or the commitments or
beliefs of some agent, depending on the task.
The evaluation of an argument depends on the proof
standard applicable to the proposition at issue in a type
of dialogue appropriate for the setting.
A decidable acceptability function provided by the
Carneades model of argument is used to evaluate how
strong or weak an argument is.
The Carneades model for reasoning with argumentation schemes
distinguishes three types of premises, ordinary premises, assump-
tions and exceptions. Assumptions are assumed to be acceptable
unless called into question (Gordon and Walton, 2006). Like ordi-
nary premises, they place a burden of proof on the proponent, who
must either give an appropriate answer or the argument is refuted.
Ordinary premises and assumptions are assumed to be acceptable,
but, once challenged, they must be supported by further arguments in order to be
judged acceptable. Exceptions are modeled as premises that are not
assumed to be acceptable. They only become acceptable when the
appropriate evidence is given to show they hold. On the Carneades
model, the major and the minor premise of the scheme above are
classified as ordinary premises, while the first four questions are
treated as assumptions and the last two are treated as exceptions.
Following the proposal above that argument from expert opin-
ion has a defeasible modus ponens form (DMP), the scheme for
argument from expert opinion can be presented in an amplified
form that reveals its implicit premises as follows.
Ordinary Premise: E is an expert.
Ordinary Premise: E asserts that A.
Ordinary Premise: If E is an expert and E asserts that A,
then A is true.
Assumption: E is an expert in field F.
Assumption: A is within F.
Assumption: It is assumed to be true that E is a credible
expert.
Assumption: It is assumed to be true that what E says is
based on evidence in field F.
Exception: It is an exception to the generalization stated in
the conditional premise if it is found to be false that E
is trustworthy.
Exception: It is an exception to the generalization stated in
the conditional premise if it is found to be false that
what E asserts is consistent with what other experts in
field F say.
Conclusion: A is true.
This list of premises and conclusion represents the Carneades style
of modeling the scheme for argument from expert opinion. In ef-
fect, the critical questions have been absorbed into the scheme as
additional premises. Another aspect of the Carneades version of the
scheme that requires comment is that the three ordinary premises
can be taken as explicit premises whereas the assumptions and the
exceptions, although they are also premises required to support the
conclusion, are implicit in nature.
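How the three premise types distribute the burden of proof can be
illustrated with a toy evaluator (a deliberate simplification of my
own, not the Carneades system itself): ordinary premises and
assumptions hold by default but fail once questioned and left
unsupported, while exceptions defeat the argument only when
evidence is produced that they hold.

from dataclasses import dataclass

@dataclass
class Premise:
    text: str
    kind: str                 # "ordinary", "assumption" or "exception"
    questioned: bool = False  # has the respondent challenged it?
    backed: bool = False      # has supporting evidence been produced?

def argument_holds(premises: list) -> bool:
    """Toy evaluation loosely following the Carneades premise types."""
    for p in premises:
        if p.kind in ("ordinary", "assumption"):
            if p.questioned and not p.backed:
                return False  # proponent's burden of proof not met
        elif p.kind == "exception":
            if p.backed:
                return False  # respondent's burden met; argument defeated
    return True

expert_argument = [
    Premise("E is an expert", "ordinary"),
    Premise("E asserts that A", "ordinary"),
    Premise("If E is an expert and E asserts that A, then A is true",
            "ordinary"),
    Premise("E is a credible expert", "assumption"),
    Premise("E is not trustworthy", "exception"),
]
print(argument_holds(expert_argument))  # True, until a premise is attacked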
In this section we have observed that there are various reasons
why the Carneades version of the scheme for argument from expert
opinion is potentially useful and interesting. One reason is that one might want to use
argumentation schemes in an argument map that represents prem-
ises and conclusions as statements in text boxes, but has no
straightforward way of representing critical questions matching a
particular scheme. The Carneades style of representing arguments
solves this problem. Another reason is that we might want to study
the relationship between the scheme and its corresponding para-
scheme.
3. Relation of the parascheme to the scheme
How is the parascheme for argument from expert opinion related to
the above versions of the full scheme? First, note that the para-
scheme is even simpler than the simple version of the scheme
above. The simple scheme at least takes the field of expertise into
account. But above, it was questioned, following Verheij’s obser-
vations, whether this was necessary, since the field of expertise is
already taken into account in one of the critical questions. Should
the simple scheme be made even simpler as in the following ver-
sion, which could be called the simplest version of the scheme?
Explicit Premise: E is an expert.
Explicit Premise: E asserts that proposition A is true
(false).
Conclusion: A is true (false).
This simplest version matches the parascheme. A simpler variant
of the conditional variant of the scheme can also be considered.
Explicit Premise: E is an expert.
Explicit Premise: E asserts that proposition A is true
(false).
Conditional Premise: If E is an expert and E says that A is
true then A is true.
Conclusion: A is true (false).
So which of these versions of these schemes for argument from
expert opinion should be taken as the correct one, at least for stan-
dard purposes of analyzing and evaluating arguments? The disad-
vantage of the simplest version is that it does not take the domain
of expertise into account. But is that more of an asset than a liabil-
ity, if it can be taken into account in the critical questions, or in the
assumption on that matter in the Carneades version of the scheme?
Another solution would be to leave the domain issue in the ordi-
nary premise of the scheme but delete the field question from the
critical questions. Alternatively, we could delete the parts of the ordi-
nary premises pertaining to domain of expertise and leave it as an
assumption in the Carneades list of premises.
A nice approach that seems to work very well for our pur-
poses is to opt for the simplest variant of the conditional version of
the scheme. One reason for selecting this version as the main one
for general use is that it is important to include the conditional, be-
cause it acts as the so-called warrant or inference license linking
the premises to the conclusion. It expresses the rationale, the pre-
sumption on which the inference is based to the effect that what an
experts states is generally reliable as a defeasible reason for accept-
ing something as true, in the absence of contravening reasons to
think it is false. Another reason is that the defeasibility of the con-
ditional will turn out to be important for analyzing the fallacy of
argument from expert opinion. If the conditional is treated as a ma-
terial conditional of the kind used in deductive logic, it makes the
inference inflexible, in a way that ties it in with fallacious argu-
ment from expert opinion, as will be shown below. Let’s provi-
sionally work with the simplest conditional variant.
The parascheme jumps straight from the first
two ordinary premises to the conclusion. It does not take the condi-
tional ordinary premise of the simplest variant of the conditional
scheme into account, nor does it take any of the assumptions or the
exceptions made explicit in the Carneades version of the scheme
into account. The structure of the reasoning can be modeled by de-
feasible logic. A defeasible rule has the form of a
conditional, A1, ..., An ⇒ B, where each of the Ai is called a pre-
requisite, all the Ai together are called the antecedent, and B is
called the consequent. Argumentation schemes, like the one for
argument from expert opinion, take this same general form in
defeasible logic: the premises of the scheme are the prerequisites
A1, ..., An, and its conclusion is the consequent B.
The parascheme omits one of the prerequisites of the scheme. The
fallacy is not one of a false premise, or of a premise that is inade-
quately supported by evidence. It is one of overlooking a premise
that is a prerequisite of the scheme.
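In the rule notation just introduced, the contrast can be sketched as
follows (an illustration, not part of the formalism): the full scheme
consults every prerequisite A1, ..., An, whereas the parascheme
consults only a proper subset of them.

def fires(prerequisites: dict, checked: list) -> bool:
    """A rule fires when every prerequisite it actually consults holds."""
    return all(prerequisites[name] for name in checked)

# Hypothetical case in which the defeasible conditional premise fails.
prereqs = {
    "E is an expert": True,
    "E asserts that A": True,
    "if E is an expert and E asserts that A, then (defeasibly) A": False,
}

full_scheme = list(prereqs)                          # checks A1, ..., An
parascheme = ["E is an expert", "E asserts that A"]  # checks only a subset

print(fires(prereqs, full_scheme))  # False: the overlooked premise fails
print(fires(prereqs, parascheme))   # True: the heuristic still jumps to A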
How the parascheme works in an instance of the scheme for
argument from expert opinion is shown in Figure 2, where the ar-
gument jumps ahead from two of the ordinary premises to the con-
clusion without taking the other premises into account.
Figure 2. Heuristic of Argument from Expert Opinion
Look at the two premises in the darkened boxes at the top left of
Figure 2, and the arrow representing the inference to the conclusion
in the darkened box at the bottom. This inference represents a sim-
plified version of the scheme that is understandable enough as a
familiar heuristic, but does not take the other factors into account.
These other factors include the conditional premise linking the other
two ordinary premises to the conclusion (shown in the top box at
the right) and the implicit premises, the assumptions and the excep-
tions (shown below the top boxes on the left and right respec-
tively). So here we see the problem. The heuristic takes us by a fast
and frugal leap directly to the conclusion. It is the old cognitive
system of reasoning. However, it overlooks the implicit conditional
premise, the assumptions and the exceptions, all factors that need
to be taken into account by the controlled, conscious, and slow in-
ferential procedure of the new cognitive system. The first problem
is how this analysis relates to the ad verecundiam fallacy.
4. Fallacious arguments from expert opinion
Argument from expert opinion can be a reasonable argument in
some instances of its use, while in other instances of its use, it can
be fallacious. But there can be different kinds of problems in using
it as an argument. Some uses are merely blunders or errors that
make the argument either weak or worthless, depending on the
standard of proof required to make the argument of some probative
worth to prove a point. On this dynamic approach, a distinction has
to be drawn between two kinds of fallacies. In some cases, a fal-
lacy is merely a blunder or an error, while in other cases, it is a so-
phistical tactic used to try to get the best of a speech partner in dia-
logue unfairly, typically by using verbal deception or trickery. The
evidence of the use of such a tactic is found in the pattern of moves
made by both sides in the exchange. It is important for fallacy the-
ory to avoid confusing these two types of problematic argumenta-
tion moves. To deal with the problem, a pragmatic theory of fallacy
(Walton, 1995) distinguished between two kinds of fallacies. The
paralogism is the type of fallacy in which an error of reasoning is
typically committed by failing to meet some necessary requirement
of an argumentation scheme. The sophism type of fallacy is a so-
phistical tactic used to try to unfairly get the best of a speech part-
ner in an exchange of arguments.
To cite an example of this latter type of problem in arguments
from expert opinion, consider a case where a movie star who is not
a physician makes claims about the healing properties of a skin
cream to cure acne or other skin conditions. This person may be a
role model, and may think that the cream cured her skin condition,
but she is not an expert of the type required to provide scientific or
medical evidence of the kind required to support her claim, based
on the scheme for argument from expert opinion. The error could
be diagnosed as a failure of the ordinary premise of the scheme for
argument from expert opinion claiming that the source cited is an
expert. Alternatively, if the movie star is being put forward as some
sort of expert, the problem is that she may not be an expert in the
right field needed to support the claim. Let’s take up these two
kinds of cases separately, beginning with the second one.
This kind of case takes us back to the question of formulating
the scheme studied in Section 2. Should we use a version of the
scheme for argument from expert opinion where it is required that
the field of the subject proposition A is the same as the field of the
expert cited? This requirement holds in the conditional version
called Version II by Walton and Reed (2002, 2). Or should we use
a version of the scheme for argument from expert opinion where it
is not required that the field of the subject proposition A be the same as
the field of the expert cited? This requirement does not hold in the
simple version of the scheme in Section 2. Nor does it hold in the
simplest version of the scheme presented in Section 3, or in the
simpler version of the conditional version of the scheme (also in
Section 3). Another variant of the scheme that needs to be consid-
ered is the Carneades version, where there are two assumptions as
premises, one stating that E is an expert in field F and another stat-
ing that A is within F. This version dispenses with the critical ques-
tions and ensures by having these two assumptions as premises that
the field of the claim matches the field of the expert. In this in-
stance the fault in the argument is a failure to fulfill the assumption that the
supposed expert is an expert in the field appropriate for the argu-
ment.
Now let’s consider the first kind of case, where the movie star
cited was not an expert at all, even though she was put forward as
an expert in the appeal to expert opinion argument. A problem
posed by such cases is whether the failure should be classified as
an instance of the ad verecundiam fallacy or merely as a false ex-
plicit premise. The problem here is that the notion of fallacy is
generally taken in logic to represent a fallacious inference of some
sort, an argument from premises to a conclusion, and not merely a
false or insufficiently substantiated explicit premise in the argu-
ment. This problem appears to recur in all the versions of the
scheme. Even in the Carneades version ‘E is an expert’ is an ex-
plicit premise. On the other hand, the failure to fulfill the assump-
tion that the supposed expert is an expert in the field appropriate
for the argument could plausibly be diagnosed as a fallacy on the
ground that the assumption is implicit in the argument. If the fault
is merely the failure of an ordinary premise, which is part of the
parascheme, and which is explicit, it is harder to make a case for
classifying it as a fallacy. The reason, to repeat, is that a sharp dis-
tinction needs to be drawn in logic between a fallacious argument
and an argument that merely has a false premise. If the premise is
an implicit assumption that corresponds to a critical question how-
ever, the case is different.
To cite another side of the problem, consider a different type
of case of fallacious argument from expert opinion where the pro-
ponent of the argument treats it as infallible, and refuses to concede
that it is open to critical questioning. That would be a fallacious
misuse of the argument. For example, let’s suppose he dismisses
the respondent’s attempts to question the argument critically by
counter-attacking, replying, “Well, you’re not an expert”. This
move attempts to block critical questioning, in effect treating the
argument as holding by necessity. But argument from expert opin-
ion is defeasible in nature, and needs to be seen as open to critical
questioning. If you treat it as a deductively valid argument, serious
problems can arise. When examining expert witness testimony in
law, for example, it would be against the whole process of exami-
nation to assume that the expert is omniscient. There is a natural
tendency to respect expert opinions and even to defer to them, but
experts are often wrong, or what they say can be misleading, so
one often needs to be prepared to critically examine the opinion of
an expert. Openness to default in the face of new evidence is a very
important characteristic of defeasible reasoning. If the conditional
premise in the simple conditional version of the scheme is treated
as a material conditional of the kind used in deductive logic, it
makes the scheme deductively valid. It is no longer defeasible, nor
open to critical questioning.
This second kind of case represents an even more serious in-
stance of a fallacious appeal to authority (argumentum ad verecun-
diam).2 The problem is that the argument from expert opinion has
been put forward in such an aggressive fashion that it shuts down
the capability of the respondent to raise critical questions. For ex-
ample, suppose the proponent puts forward an argument based on
expert medical opinion, and in response to critical questioning, she
replies aggressively by saying, “You’re not an expert in medicine,
are you? Are you a doctor? What you’re saying is merely anecdo-
tal”. There might be some truth in these claims. The respondent
may not be a doctor. He is not an expert in medicine. It may be in-
deed true that what he’s saying is not based on scientific findings
that have been proved by published medical studies. All this may
be true, but what makes the proponent’s reply fallacious is the way
it was put forward to leave the respondent no possibility of criti-
cally questioning the claim. No room is left for critical questioning,
and for undergoing the controlled, conscious, and slow process of
questioning the assumptions made and the exceptions that need to
be taken into account.

2 Literally it means argument from modesty or respect.
This parascheme treats the conditional premise as not
defeasible. As shown above, defeasible logic has defeasible rules
of the form A ⇒ B, but it also has strict rules. Strict rules are rules
in the classical sense: whenever the premises are indisputable (e.g.,
facts) then so is the conclusion, e.g. ‘Penguins are birds’. A strict
rule has the form of a conditional, A1, ..., An → B, where it is not
possible for all the Ai to be true and B false. Defeasible rules
are rules that can be defeated by contrary evidence, e.g. ‘Birds
fly’. The problem in this fallacious case of argument from expert
opinion is that the argument is set forth as if it should be treated as
deductively valid. The major premise is put forth as the rule that
what an expert says must always be true, without exception. Hence
the conclusion follows necessarily from the premises. If the
premises are true, that conclusion must be accepted. To accept the
premises but not the conclusion is logically inconsistent. Such an
argument is not defeasible, and not open to critical questioning.
The fallacy is the shutting off of the possibility of critical
questioning of the argument by putting forward the heuristic in a
strict (non-defeasible) form.
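The contrast between strict and defeasible rules can be made con-
crete with the standard birds-and-penguins toy example, sketched
here in ordinary Python rather than in a defeasible logic system
(the individuals named are hypothetical).

BIRDS = {"tweety"}
PENGUINS = {"opus"}

def is_bird(x: str) -> bool:
    # Strict rule 'penguins are birds': no exception is possible.
    return x in BIRDS or x in PENGUINS

def flies(x: str) -> bool:
    # Defeasible rule 'birds fly': holds by default, but is defeated
    # by the contrary evidence that x is a penguin.
    return is_bird(x) and x not in PENGUINS

print(flies("tweety"), flies("opus"))  # True False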
The explanation of why the fallacy is deceptive in the first
kind of case is quite different. Corresponding to the argumentation
scheme for argument from expert opinion, there is the following
parascheme: E is an expert and E says that A is true; therefore A is
true. This heuristic jumps to the conclusion in a way that is fast and
frugal but overlooks other implicit premises in the scheme for
argument from expert opinion that also need to be accounted for. In
the first type of case above, the argument is fallacious because it
overlooks either an ordinary premise or an assumption.
These two examples may not be the only kinds of problems,
blunders and deceptive moves associated with the ad verecundiam
fallacy. But they show how the deceptiveness of two important
kinds of instances of the fallacy can be explained using para-
schemes.
5. Generalizing the parascheme approach
The question now posed is whether the kind of analysis of the fal-
lacy of ad verecundiam given above using paraschemes applies to
other informal fallacies. Of the major informal fallacies, the fol-
lowing twelve need to be analyzed with defeasible argumentation
schemes of the sort that can be found in (Walton, Reed and Ma-
cagno, 2008, Chapter 9).
1. Ad Misericordiam (Scheme for Argument from Distress, 334)
2. Ad Populum (Scheme for Argument from Popular Opinion and its subtypes, 311)
3. Ad Hominem (Ad Hominem Schemes; direct, circumstantial, bias, 336-338)
4. Ad Baculum (Scheme for Argument from Threat, 333; Fear Appeal, 333)
5. Straw Man (Scheme for Argument from Commitment, 335)
6. Slippery Slope (Slippery Slope Schemes; four types, 339-341)
7. Ad Consequentiam (Scheme for Argument from Consequences, 332)
8. Ad Ignorantiam (Scheme for Argument from Ignorance, 327)
9. Ad Verecundiam (Scheme for Argument from Expert Opinion, 310)
10. Post Hoc (Scheme for Argument from Correlation to Cause, 328)
11. Composition and Division (Argument from Composition, 316; Division, 317)
12. False Analogy (Scheme for Argument from Analogy, 315)
These may not be the only fallacies that can be analyzed with the
help of argumentation schemes, but they certainly are some promi-
nent ones. Other fallacies, like equivocation, amphiboly, accent,
begging the question, fallacies of irrelevance, like red herring and
wrong conclusion, and many questions, do not appear to fit specific
argumentation schemes, or benefit directly from schemes when it
comes to analyzing them.
There is no space here to comment on all twelve of the fal-
lacies listed above, but some of them do look like they could fit the
parascheme model very well. For example the post hoc fallacy
could be analyzed as the employment of the following parascheme:
X is correlated with Y, therefore X causes Y. Especially the emo-
tional fallacies like appeal to fear seem to be based on heuristics
that would respond well to paraschematic treatment. Argument
from ignorance is classified by Gigerenzer et al. (1999) as a promi-
nent heuristic, and would also appear to be amenable to this treat-
ment.
The simplest formulation of the scheme for the argumentum
ad ignorantiam is this: statement A is not known to be false (true),
therefore A is true (false). Calling it argument from ignorance
makes it seem fallacious, but this form of argument is
often reasonable when supplemented by a conditional premise: if A
were false (true), A would be known to be false (true) (Walton,
1996, 254-255). For example there is no evidence that Roman sol-
diers received posthumous medals in war, only evidence of living
soldiers receiving such awards. From this lack of evidence, the
conclusion can be drawn by inference that Roman soldiers did not
receive posthumous decorations in war. If historical evidence did
show a posthumous decoration, the conclusion would have to be
withdrawn, showing that the argument is defeasible. But if after
much historical research through all the known record no such evi-
dence was found, the conclusion could be a fairly reasonable one,
depending on the evidence backing it up (Walton, 1996, 66). It is
commonly called the lack of evidence argument in the social sci-
ences or the ex silentio argument in history, where it is regarded as
a reasonable but defeasible argument.
The structure of the lack of evidence argument, as it could be
called less prejudicially, can be represented by a more complex ar-
gumentation scheme (Walton, Reed and Macagno, 2008, 328) that
uses two variables. D is a domain of knowledge and K is a knowl-
edge base. Most knowledge bases, of the kind used in scientific
investigations, for example, are incomplete, and the reasoning
based on the knowledge in them is defeasible.
If K is complete, a lack of evidence argument based on it could
be deductively valid perhaps, but otherwise it should be seen as a
defeasible inference that is open to critical questioning. For exam-
ple, suppose that after a thorough security search X has never been
found guilty of breaches of security. Here, because of the thorough
search, it can be said that the conditional premise is supported by
good evidence: if X were a foreign spy, it would be known to be
true that he is a foreign spy. It could be concluded defeasibly, sub-
ject to further investigations, that it has been proved (up to what-
ever standard of proof is appropriate) that X is not a foreign spy.
However, the possibility remains that X could have avoided detec-
tion through these security searches, as Kim Philby did. Hence lack
of evidence arguments having the form of the argumentation
scheme set out above are best analyzed as defeasible arguments
that hold or not at some stage of an investigation in which evidence
is being collected in a knowledge base and assessed.
Reasoning from lack of evidence [negative evidence] is recog-
nized as a heuristic in computing. If you search through an expert
database, and don’t find statement S in it, that finding can be a rea-
son for provisionally concluding that S is false. ‘Guyana is not a
major coffee producer’ can be concluded after searching through
an expert system on coffee producing countries and finding Guy-
ana is not listed. The reason is the assumption that the expert sys-
tem knows all about coffee producers in South America, and if
Guyana were a major coffee producer, it would be in the expert
system’s knowledge base.
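A minimal sketch of this closed-world pattern (the knowledge base
here is hypothetical and deliberately small) shows that the strength
of the conclusion rests entirely on the completeness assumption
behind the knowledge base.

# Hypothetical knowledge base of major coffee-producing countries; the
# lack-of-evidence inference presupposes that this list is complete.
MAJOR_COFFEE_PRODUCERS = {"Brazil", "Colombia", "Peru", "Honduras"}

def provisionally_not_major_producer(country: str) -> bool:
    """Closed-world heuristic: absence from the knowledge base is
    taken, defeasibly, as a reason to conclude the country is not
    a major producer."""
    return country not in MAJOR_COFFEE_PRODUCERS

print(provisionally_not_major_producer("Guyana"))  # True, subject to defeat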
An even simpler argumentation scheme for the lack of evi-
dence argument is based not just on what is known or not known to
be true, but also on what would be known if it were true (Walton,
1996, 254).
Conditional Premise: If A were true, A would be known to
be true.
Lack of Knowledge Premise: A is not known to be true.
Therefore, A is false.
This scheme is a form of defeasible modus tollens argument (as-
suming, as well, the rule of double negation that tells us that A is
false if and only if A is not true). Even though a knowledge base is
incomplete, and the search for new knowledge may still be under-
way, this scheme can still enable a conclusion to be tentatively
drawn by defeasible reasoning. In such an instance, the argumenta-
tion scheme becomes a defeasible form of argument, holding only
tentatively, subject to the asking of critical questions during a
search for more knowledge that may continue. The first premise
above is associated with the assumption that there has been a
search through the knowledge base that would contain A that has
been deep enough so that if A were there, it would be found. One
critical question is how deep the search has been. A second is the
question of how deep the search needs to be to prove the conclu-
sion that A is false to the required standard of proof in the investi-
gation. It is not necessary to go into all the details here, given space
limitations, but enough has been said to draw a parallel with the
analysis of argument from expert opinion above.
The parascheme is the simple argument from the two basic
premises in the simplest formulation of the scheme given above.
How it works is shown in Figure 3, where we can see the linked
argument based on the scheme for argument from lack of evidence
with its two ordinary premises. We have not shown the assumptions
and exceptions for the argument from lack of evidence, in addition
to the ordinary premises, but the reader can imagine them appear-
ing on the right and left, in a way comparable to Figure 2. In Figure
3 the heuristic is even simpler. It is the fast inference from the lack
of evidence premise all by itself to the conclusion, without taking
the conditional premise into account. The lack of evidence premise
and the conclusion are shown in the darkened boxes, showing the
heuristic parts of the inference.
Figure 3. Heuristic of the Lack of Evidence Argument
In contrast to the quick leap of the heuristic, the controlled, con-
scious, and slow inferential procedure of analyzing and evaluating
any given instance of a lack of evidence argument may require the
consideration of the conditional premise and the critical questions
matching the scheme. To judge whether an alleged argument from
ignorance is fallacious, the heuristic has to be examined in relation
to whether other assumptions and exceptions, which may or may not
be acceptable, need to be taken into account.
Another type of argument that is well worth taking a look at is
the fear appeal argument. Many of these arguments bypass logical
reasoning and hope to convince directly by raising fears about some
horrible consequences of a policy or action.
that an argument may all too easily bypass other important aspects
of a given situation that should properly be taken into account. Fear
is an emotion that moves people powerfully to action and may tend
to make them put more careful considerations of the complex
features of a situation aside. An immediate response may be to
jump to a conclusion, powerfully motivated by fear, instead of
taking a more realistic look at all the factors involved in a decision.
The heuristic for this kind of reasoning runs as follows. If I carry
out action α, it may bring about consequence C. Consequence C is
really scary. Therefore, there is no way I am going to carry out
action α. An example is the exploitation of fear appeal arguments
in public policy-making on President Obama’s proposed health
care reforms, which called for more of a government role in health
care funding. There was a sign outside an August 2009 town hall
meeting in New Hampshire saying, “Obama lies, grandma dies”
(Begley, 2009, 41). This fear appeal argument has the effect of
suggesting to the reader the immediate action of stopping any
health care reform that might condemn one of his/her loved ones to
death because a government panel has ruled that treating her dis-
ease is too expensive. Because of the emotional fear appeal of this
argument, viewers of the sign may tend to jump to the conclusion
that the proposed health care reform is scary and should be re-
sisted. It raises the scary idea that government death panels could
make decisions to terminate medical treatment for elderly patients
based on calculations of health care costs. When examined criti-
cally in relation to the facts, and the particulars of the proposal, this
argument may not be very persuasive, but as a heuristic that ap-
peals to fear, it may work very well as a rhetorical strategy.
6. Arguments that appear to be better than they are
The two most fully developed theories of fallacy so far (Tindale,
1997) are the pragmatic theory (Walton, 1995) and the pragma-
dialectical theory (van Eemeren and Grootendorst, 1992). Accord-
ing to the earlier version of their theory, a fallacy is a violation of a
rule of a critical discussion where the goal is to resolve a difference
of opinion by rational argumentation (van Eemeren and Groo-
tendorst, 1992). The theory has more recently been strength-
ened by the work of van Eemeren and Houtlosser (2006) on strate-
gic maneuvering. Even more recently, a fallacy has been defined as
“a speech act that prejudices or frustrates efforts to resolve a differ-
ence of opinion” (van Eemeren, Garssen and Meuffels, 2009, 27).
According to the pragmatic theory (Walton 1995, 237-238), a fal-
lacy is a failure, lapse, or error that occurs in an instance of an un-
derlying, systematic kind of wrongly applied argumentation
scheme or is a departure from acceptable procedures in a dialogue,
and is a serious violation, as opposed to an incidental blunder, er-
ror, or weakness of execution. Both theories can benefit from in-
vestigating further how schemes are wrongly applied when a fal-
lacy has been committed. The problem is that neither theory has
fully taken into account that longstanding intuition, very much evi-
dent in Aristotle’s treatment of the sophistici elenchi, that fallacies
are deceptive. They are not just arguments that prejudice efforts to
resolve a difference of opinion, wrongly applied argumentation
schemes, or departures from acceptable procedures in a dialogue,
although they are all that. They are arguments that work as decep-
tive stratagems. They are arguments that seem correct but are not.
These remarks take us back to the notion attributed to Hamblin
in the introduction that a fallacy can be characterized as an argu-
ment that seems to be valid but is not. What Hamblin (1970, 12)
actually wrote was, “A fallacious argument, as almost every ac-
count from Aristotle onwards tells you, is one that seems to be va-
lid but is not so” [his italics]. Using this sentence to define ‘fallacy’
is problematic in a number of ways. First, whether or not an argu-
ment seems to be valid to any individual or group of individuals is
not of much use to us in attempting to determine whether it is an
argument that really is fallacious or not. Second, the term ‘valid’ is
typically taken to refer to deductive validity, making the definition
too narrow, or even mistaken. Third, a survey of leading logic
textbooks, from Aristotle to the present
(Hansen, 2002, 151) has shown that the fallacies tradition does not
support wide acceptance of the claim made in Hamblin’s sentence
quoted above. According to Hansen (2002, 152), however, this
tradition does support a comparable generalization: “a fallacy is an
argument that appears to be a better argument of its kind than it
really is”. Either way, the notion of fallacy is taken to have a
dimension that could be classified as psychological (in a broad
sense, including cognitive psychology), meaning that such a
fallacious argument has strong potential for deception. It can often
seem correct when it is not, or can appear to be better than it really
is.
Hansen’s rephrasing of the expression that says that a fallacy is
an argument that seems valid but is not is highly significant. We
have two choices here. We can expand the use of the term ‘valid’
so that it no longer just applies to deductively valid arguments, and
allow it to include structurally correct arguments of the inductive
and plausible types. Or we can just drop the word ‘valid’, and ac-
cept Hansen’s way of expressing the criterion by saying that a fal-
lacy is an argument that appears to be a better argument of some
kind than it really is. By using the expression ‘of some kind’, we
can include argumentation schemes as well as deductive and induc-
tive forms of argument. If we rephrase this expression to say that
the fallacy is an argument that appears to be a better argument of
its kind than it really is, we can widen the account of fallacy to apply
both to inductive arguments, and to presumptive argumentation
schemes that go by defeasible reasoning to a conclusion that is ten-
tatively acceptable but that may need to be withdrawn in spe-
cial circumstances.
7. Conclusions
How then are fallacies deceptive? The explanation offered as a hy-
pothesis in this paper is that many of them are based on heuristics.
On this hypothesis, a fallacious argument might look better than it
really is because it has the basic structure of a parascheme, and
therefore looks reasonable because it is a heuristic of the kind we
use all the time in everyday reasoning. However, it may be an in-
ference from a set of premises to a conclusion that only seems to
prove the conclusion, but does not, because it fails to meet condi-
tions required for the success of a reasonable argument of that type.
When an arguer jumps to a conclusion by a parascheme, while ig-
noring implicit assumptions and exceptions that ought to be taken
into account, or even worse, moves dogmatically to the conclusion
while failing to allow that such considerations are relevant, his ar-
gument is fallacious. The error here is an unwarranted leap to a
conclusion that is not justified by a careful analysis of the argument
that takes its conditional premise, as well as its assumptions and
exceptions, properly into account.
This new theory of fallacy began by introducing the new no-
tion of a parascheme, and by using it to connect the logical notion
of an argumentation scheme to the psychological (cognitive sci-
ence) notion of a heuristic. The parascheme helps to explain why
an argument seems better than it is, because it represents a heuristic
that embodies a very natural form of unreflective thinking. Heuristics can
be extremely useful under some conditions even if they arrive at a
suboptimal solution, and there may be nothing inherently fallacious
or logically incorrect (in principle) in using them. We can cite
again the example of the heuristic used in medicine (Gigerenzer et
al., 1999, 4) when a man is rushed to a hospital while he is having a
heart attack. We recall from Section 1 that according to Gigerenzer
et al. (1999, 4-5), this particular medical heuristic is actually more
accurate in properly classifying heart attack patients than some
more complex statistical classification methods. The point to be
emphasized is not only that heuristics are useful, but that we often
need them and rely on them.
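To make the structure of such a heuristic concrete, the following minimal Python sketch renders a fast and frugal decision tree of the kind Gigerenzer et al. (1999) describe for coronary care triage. The three cues are simplified paraphrases of their example rather than the actual clinical criteria, and the function and key names are illustrative only.

    # A fast and frugal tree: cues are checked in a fixed order, and the
    # first cue that fires settles the decision with no further weighing.
    # The clinical cues are simplified paraphrases, for illustration only.
    def triage(patient):
        """Assign a suspected heart-attack patient using three ordered cues."""
        if patient["st_segment_change"]:                   # cue 1 decides on its own
            return "coronary care unit"
        if not patient["chest_pain_is_chief_complaint"]:   # cue 2
            return "regular nursing bed"
        if patient["any_other_risk_factor"]:               # cue 3, last resort
            return "coronary care unit"
        return "regular nursing bed"

    print(triage({"st_segment_change": False,
                  "chest_pain_is_chief_complaint": True,
                  "any_other_risk_factor": True}))         # -> coronary care unit

The point of the sketch is that nothing is aggregated or weighted: each question either settles the matter or passes it on, which is what makes the procedure fast, frugal, and usable under time pressure.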
However, precisely because heuristics are shortcuts, or fast
and frugal ways to proceed tentatively when there is not enough
data and time to arrive at a definitive conclusion, they can be dan-
gerous, and can sometimes take us to a wrong decision. As the cas-
es we have examined show, in some instances they can even be
deceptive. We are so used to employing them, almost without
thinking, that we can sometimes be more easily persuaded by them
than perhaps we should be, even when there is time for more careful
and deliberate rational thinking on how to proceed. The old system of
cognition (the automatic and fast mind) uses a heuristic to jump to
a conclusion. It might be right or might not. Under constraints of
time, cost and lack of knowledge, it might be the way to go. But if
there’s time, the new (controlled, conscious and slow) system can
come in and ask critical questions, looking at logical considerations
pro and contra. The old argument might stand up to this kind of
scrutiny, or it might not.
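To fix ideas, here is a schematic Python sketch of this two-system picture, using argument from expert opinion. The abbreviated critical questions and the function names are illustrative inventions, not part of any formal apparatus in this paper.

    # Old system: the parascheme fires and the conclusion is accepted at once.
    # New system: the same inference stands only if the critical questions
    # attached to the scheme survive scrutiny.
    CRITICAL_QUESTIONS = [
        "Is E a genuine expert in the field that A falls under?",
        "Is A consistent with what other experts assert?",
        "Is E personally reliable, e.g., free of bias?",
    ]

    def fast_system(expert_asserts_a):
        # The jump: "E is an expert and E says A, so A."
        return expert_asserts_a

    def slow_system(expert_asserts_a, answers):
        # A question not yet asked defaults to satisfied, so the conclusion
        # is tentatively acceptable until a critical question defeats it.
        return expert_asserts_a and all(
            answers.get(q, True) for q in CRITICAL_QUESTIONS)

    answers = {CRITICAL_QUESTIONS[0]: False}   # expertise is in the wrong field
    print(fast_system(True))                   # True: the heuristic jumps
    print(slow_system(True, answers))          # False: the scheme defeats the jump

The default-to-satisfied treatment of unasked questions mirrors the defeasible character of the scheme: the fast conclusion stands tentatively, and is withdrawn only when scrutiny turns up a negative answer.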
The analysis presented so far offers an explanation of how
paraschemes can lead people to reason carelessly,
and how the argumentation scheme corresponding to a particular
parascheme can show us what has gone wrong with the hasty use
of the parascheme when a fallacy has been committed. But how,
more precisely, does this process work in a real case? Is it that the
person who commits the fallacy has both the parascheme and the
argumentation scheme in mind and then confuses the two, and rea-
sons only on the basis of the parascheme? This explanation of the
process implies that the reasoner explicitly knows the argumenta-
tion scheme with its matching list of critical questions, as well as
implicitly knowing the parascheme. Such explicit knowledge may
not be there, in many cases where fallacies are committed. The fal-
lacy may be a thoughtless error of jumping too hastily to a conclu-
sion. So this explanation of how fallacies are committed will not
generalize to all of the cases we need to explain as fallacies that are
arguments that appear to be better, as arguments of a certain type,
than they really are.
A better explanation is based on the fact that the use of such
paraschemes is habitual, instinctive and natural. As explained in
Section 1, in evolutionary terms the parascheme is part of a sys-
tem of thinking that is associative, automatic, unconscious, parallel
and fast. Thinking in this manner, a reasoner instinctively jumps to
a conclusion to accept a proposition as true or to accept a course of
action as the right one for the circumstances. To make the mistake
at the basis of the fallacy, the reasoner naturally or even automati-
cally jumps to this conclusion by reacting in the way that has so
often proved successful in the past. To make
this kind of mistake, the reasoner does not need to have the argu-
mentation scheme in mind. The mistake is that in this instance he
is in a set of circumstances where he would do much better if he
took the time to think twice and used the rule-based,
controlled, conscious, serial and slow cognitive system of bringing
the premises and conclusion of the argumentation scheme to bear,
while taking into account the appropriate critical questions match-
ing the scheme. But he may not have time for this, or he may sim-
ply not think about it, or he may be pressured by the argumentation
of the other party with whom he is engaged in a discussion into
action that is not merely fast and instinctive but premature. It is this explana-
tion that fills out the meaning of how arguments appear to be better
than they really are, and thereby lead to the committing of fallacies
either by a single reasoner, or by an arguer engaged in a dialogue
with another arguer.
In this paper, a new interpretation of the psychological aspect
of the concept of fallacy has been proposed, put forward as a
hypothesis that can enable us to explain how fallacies of the kinds
based on argumentation schemes have potential for deception and
ease of sliding into error. The defeasible argumentation scheme
offers a structure such that, if a given argument fits the
requirements of the scheme, it is defeasibly tenable, meaning that it
tentatively holds, subject to potential defeat as new evidence comes
in, and in particular as its implicit assumptions and exceptions are
taken into account. In cases where such additional premises are not
taken into account, especially where they are highly questionable,
or evidence shows they do not hold, a fallacy may have been
committed. The argument may appear to be better than it really is,
and hence the error of jumping to the conclusion too quickly may
be overlooked. Even worse, if the proponent has actively tried to
suppress consideration of premises that really need to be taken into
account in a more carefully considered assessment of the argument
before the respondent should accept the conclusion, a more serious
sort of fallacy may have been committed.
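In the spirit of the Carneades model (Gordon, Prakken and Walton, 2007), the following simplified Python sketch shows how a scheme’s ordinary premises, implicit assumptions, and exceptions can play the three distinct evaluative roles just described. The data model and names are illustrative and do not reproduce the actual Carneades system.

    from dataclasses import dataclass

    # Ordinary premises must be established by the proponent; assumptions
    # hold by default unless refuted; a single established exception
    # defeats the argument. A simplification of the Carneades treatment
    # of premise types, for illustration only.
    @dataclass
    class SchemeInstance:
        premises: list
        assumptions: list
        exceptions: list

    def tenable(arg, established, refuted):
        if not all(p in established for p in arg.premises):
            return False                 # an ordinary premise is missing
        if any(a in refuted for a in arg.assumptions):
            return False                 # a default assumption has failed
        if any(e in established for e in arg.exceptions):
            return False                 # an exception defeats the argument
        return True                      # defeasibly tenable, for now

    expert = SchemeInstance(
        premises=["E is an expert", "E asserts A"],
        assumptions=["E is credible"],
        exceptions=["E is biased"],
    )
    print(tenable(expert, {"E is an expert", "E asserts A"}, set()))   # True
    print(tenable(expert, {"E is an expert", "E asserts A",
                           "E is biased"}, set()))                     # False

On this picture the parascheme evaluates only the first condition; the fallacy, and the suppression tactic just mentioned, consist in keeping the second and third conditions out of view.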
This paper has presented a hypothesis that shows promise of
helping us to better define the notion of a fallacy, and to better ex-
plain its psychological dimension. It provides a theoretical basis for
further research on many other fallacies, to see whether they fit the
hypothesis or not. The notion of parascheme has been applied more
fully to fallacious arguments from expert opinion, and more curso-
rily to lack of evidence arguments and fear appeal arguments.
However, enough has been done with these examples so that work
can go ahead applying it more carefully to these latter two falla-
cies, as well as to the other fallacies in the list given at the begin-
ning of Section 5.
References
Begley, S. (2009). Attack! The Truth about ‘Obamacare’, News-
week, August 24 and 31, 41-43.
Eemeren, F.H. van and Grootendorst, R. (1992). Argumentation,
Communication and Fallacies, Hillsdale, N. J.: Erlbaum.
Eemeren, F.H. van and Houtlosser, P. (2006). Strategic Maneuver-
ing: A Synthetic Recapitulation, Argumentation, 20, 381-392.
Eemeren, F.H. van, Garssen, B. and Meuffels, B. (2009). Fallacies
and Judgments of Reasonableness. Dordrecht: Springer.
Facione, P.A. and Facione, N.C. (2007). Thinking and Reasoning
in Human Decision-Making: The Method of Argument and
Heuristic Analysis. Millbrae, California: The California Aca-
demic Press.
Gigerenzer, G., Todd, P.M. and the ABC Research Group (1999).
Simple Heuristics That Make Us Smart, Oxford: Oxford Uni-
versity Press.
Gordon, T.F. and Walton, D. (2006). The Carneades Argumenta-
tion Framework, Computational Models of Argument: Pro-
ceedings of COMMA 2006, ed. P. E. Dunne and T. J. M.
Bench-Capon. Amsterdam: IOS Press, 195-207.
Gordon, T.F. and Walton, D. (2009). Legal Reasoning with Argu-
mentation Schemes, 12th International Conference on Artifi-
cial Intelligence and Law (ICAIL 2009). ed. Carole D. Hafner.
New York: ACM Press, 137-146.
Gordon, T.F., Prakken, H. and Walton, D. (2007). The Carneades
Model of Argument and Burden of Proof, Artificial Intelli-
gence, 171, 875-896.
Hamblin, C. (1970). Fallacies. London: Methuen.
Hansen, H.V. (2002). The Straw Thing of Fallacy Theory: The
Standard Definition of Fallacy, Argumentation, 16, 133-155.
Nute, D. (1994). Defeasible Logic. In Handbook of Logic in Artifi-
cial Intelligence and Logic Programming, volume 3: Non-
monotonic Reasoning and Uncertain Reasoning. Ed. Dov M.
Gabbay et al. Oxford: Oxford University Press, 353-395.
Pearl, J. (1984). Heuristics: Intelligent Search Strategies for Com-
puter Problem Solving. Reading, Mass.: Addison-Wesley.
Russell, S. and Norvig, P. (1995). Artificial Intelligence: A Modern
Approach. Upper Saddle River: Prentice Hall.
Tindale, C.W. (1997). Fallacies, Blunders and Dialogue Shifts:
Walton’s Contributions to the Fallacy Debate, Argumentation,
11, 341-354.
Tversky, A. and Kahneman, D. (1974). Judgment under Uncertainty:
Heuristics and Biases, Science, 185, 1124-1131.
Verheij, B. (1999). Logic, context and valid inference. Or: Can
there be a logic of law?, Legal Knowledge Based Systems. JU-
RIX 1999: The Twelfth Conference, ed. J. van den Herik, M.
Moens, J. Bing, B. van Buggenhout, J. Zeleznikow and C.
Grütters. Amsterdam: IOS Press.
Verheij, B. (2001). Book Review of D. Walton’s The New Dialec-
tic, Ad Hominem Arguments and One-Sided Arguments, Arti-
ficial Intelligence and Law, 9, 305-313.
Walton, D. (1995). A Pragmatic Theory of Fallacy. Tuscaloosa:
University of Alabama Press.
Walton, D. (1996). Arguments from Ignorance. University Park,
Pennsylvania: Penn State Press.
Walton, D. and Gordon, T.F. (2009). Jumping to a Conclusion:
Fallacies and Standards of Proof, Informal Logic, 29, 215-243.
Walton, D. and Reed, C. (2002). Argumentation Schemes and De-
feasible Inferences, Proceedings of the Workshop on Compu-
tational Models of Natural Argument (CMNA), ed. Carenini,
G., Grasso, F. and Reed, C., 1-5.
Walton, D., Reed, C. and Macagno, F. (2008). Argumentation
Schemes, Cambridge: Cambridge University Press.