British Journal of Psychology (2015), 106, 597–608
© 2015 The Authors. British Journal of Psychology published by John Wiley & Sons Ltd on behalf of the British Psychological Society
The dark side of cognitive illusions: When an illusory belief interferes with the acquisition of evidence-based knowledge
Ion Yarritu¹, Helena Matute¹* and David Luque²,³

¹Department of Experimental Psychology, Deusto University, Bilbao, Spain
²Biomedical Research Institute (IBIMA), University of Malaga, Spain
³School of Psychology, UNSW, Sydney, Australia

*Correspondence should be addressed to Helena Matute, Departamento de Fundamentos y Métodos de la Psicología, Universidad de Deusto, Apartado 1, 48080 Bilbao, Spain (email: matute@deusto.es).

This is an open access article under the terms of the Creative Commons Attribution-NonCommercial-NoDerivs License, which permits use and distribution in any medium, provided the original work is properly cited, the use is non-commercial and no modifications or adaptations are made.

DOI: 10.1111/bjop.12119
Cognitive illusions are often associated with mental health and well-being. However, they
are not without risk. This research shows they can interfere with the acquisition of
evidence-based knowledge. During the first phase of the experiment, one group of
participants was induced to develop a strong illusion that a placebo medicine was effective in treating a fictitious disease, whereas another group was induced to develop a weak
illusion. Then, in Phase 2, both groups observed fictitious patients who always took the
bogus treatment simultaneously with a second treatment that was effective. Our results showed that the group who developed the strong illusion about the effectiveness of the bogus treatment during Phase 1 had more difficulty learning during Phase 2 that the added treatment was effective.
Human cognition has been shown to be prone to biased interpretations of reality. People tend
to believe falsely that they are better than others (Brown, 1986; Pronin, Lin, & Ross, 2002),
that their own skills can determine their success in a purely chance task (Langer, 1975), or
that certain bogus treatments they follow can miraculously cure their diseases (Matute,
Yarritu, & Vadillo, 2011). These false beliefs, typically known as cognitive illusions, have
often been related in the psychological literature with mental health and well-being
(Lefcourt, 1973; Taylor, 1989; Taylor & Brown, 1988). However, do cognitive illusions
have beneficial consequences in all cases? Current discussion in the literature suggests
that whereas biases and illusions can often contribute to adaptive adjustment, this is not
always the case (see McKay & Dennett, 2009 for an extensive review).
One psychological approach states that cognitive illusions are an adaptive mechanism,
ensuring the correct fitness of the person to the environment (Taylor & Brown, 1988).
From this perspective, the cognitive system has evolved to interpret the world
unrealistically, in a manner that assures the protection of the self. In this framework,
illusions related to the perception of relationships between events, such as illusory
correlations (Chapman & Chapman, 1969), illusions of control (Alloy & Abramson, 1979;
Langer, 1975), or causal attributional biases (Kelley, 1972), are typically assumed to
have an important role in psychological well-being (Taylor & Brown, 1988). It has been
argued that instead of interpreting the environmental information rationally, people tend
to adjust the environmental data to their prior conceptualization of the world in a manner
that is self-serving (Fiske & Taylor, 1984; Lefcourt, 1973; Nisbett & Ross, 1980;
Zuckerman, 1979). For instance, it has been found that the illusion of control, a bias by
which people tend to overestimate their own control over uncontrollable outcomes
(Langer, 1975), works differently as a function of mood, which has sometimes been
interpreted as supporting its role as a self-esteem protection mechanism. Whereas non-
depressive people view themselves as controlling outcomes which are actually
uncontrollable (i.e., illusion of control), depressive people detect the absence of any
relationship between their actions and the desired outcomes. This has been called
depressive realism (Alloy & Abramson, 1979; Blanco, Matute, & Vadillo, 2012; Msetfi,
Murphy, Simpson, & Kornbrot, 2005). Given that the perception of uncontrollability is
related to helplessness and depression (Abramson, Seligman, & Teasdale, 1978), some
researchers have suggested that either depressed people are depressed because they do
not show an illusion of control, or they do not develop the illusion because they are
depressed (Alloy & Clements, 1992). In either case, this is an example of how the illusion
of control could be related to well-being under this framework (but see Blanco et al.,
2012; Msetfi et al., 2005 for more neutral interpretations of this illusion).
A rather different approach suggests that cognitive illusions are just the by-products of
a cognitive system which is responsible for extracting knowledge about the world (Beck
& Forstmeier, 2007; Haselton & Nettle, 2006; Matute et al., 2011; Tversky & Kahneman,
1974). The discussion revolves nowadays around the benefits and costs of establishing
false beliefs (Haselton & Nettle, 2006). From this point of view, cognitive illusions are not
beneficial per se. Instead, they would be the necessary cost to be assumed by an
overloaded cognitive system that tries to make sense of a vast amount of information
(Tversky & Kahneman, 1974). The manifestations of this acceptable cost range from superstitious behaviour, magical thinking, and pseudoscientific beliefs (Matute, 1996; Matute et al., 2011; Ono, 1987; Vyse, 1997) to prejudice, stereotyped judgements, and extremism
(Hamilton & Gifford, 1976; Lilienfeld, Ammirati, & Landfield, 2009). The previously
mentioned self-serving illusions (Taylor & Brown, 1988) would be interpreted as part of
this cost under this view.
Therefore, while keeping in mind that cognitive illusions can sometimes lead to benefits related to psychological well-being, there are also cases in which their collateral costs can lead to serious negative consequences. Take, as an example, the person
who develops the false belief that a pseudoscientific (i.e., innocuous, at best) treatment
produces the recovery from a disease from which he or she is suffering. Believing that the
pseudoscientific treatment is effective, that person could underestimate the effectiveness
of a medical treatment which actually works. This bias could lead the person to reject the
really effective treatment and, consequently, suffer the implications derived from this
action. Or, in another example, if a person believes that a certain minority group has higher rates of delinquency, how could we convince that person, in the light of evidence, that his/her belief is not true? The two scenarios drawn here show that a
false belief could, under certain conditions, interfere with the establishment of grounded,
evidence-based, knowledge.
Despite the theoretical and practical relevance of this problem, there are, to our
knowledge, very few studies focusing on how illusory beliefs affect the acquisition of
evidence-based knowledge. One of the very few studies we are aware of is that of
Chapman and Chapman (1969). They found that illusory correlations in the interpretation
of projective tests could blind psychologists to the presence of valid correlations between
symptoms. However, it is not clear in their study how the illusions were developed, nor
the mechanism by which their occurrence could blind the detection of real correlations.
While the applied nature of their study was certainly commendable, it implied that several
aspects outside of experimental control, such as previous knowledge, credibility of the
source from which the illusion was acquired, strength of the belief, or the number of years psychologists had maintained the illusory belief, could, at least in principle, have affected the results.
Because their goal was highly applied, Chapman and Chapman did not create the different
experimental conditions and manipulations over these illusory correlations, as they only
selected the most frequent erroneous interpretations of a projective test. The main goal of
the present work is to explore, in an experimental setting, the potential interference that
illusory beliefs might exert over the subsequent learning of new evidence-based
knowledge, and to propose a broad mechanism by which this could occur.
Cognitive illusions usually involve beliefs about causal relationships between events
that are, in fact, unrelated (i.e., illusion of causality). For instance, the illusion of control
involves the belief that our own action (the potential cause) produces the occurrence of
the desired goal (the outcome). The experimental literature on causal learning is a fruitful
framework for studying these cognitive illusions (Matute et al., 2011). Many causal
learning experiments have shown that learning about the relationship between a cause
and an effect influences the subsequent learning of another cause that is paired with the
same outcome. The family of learning phenomena known as cue interaction represents
the way by which these effects occur. When two potential causes, A and B, are presented simultaneously and paired with an outcome, they compete to establish a causal association with that outcome. In these cases, the existence of previous experience or
previous knowledge about the relationship of one of the causes and the outcome
determines what can be learned about the second cause. For instance, the learner may
believe that one of the potential causes produces the outcome or, on the contrary, the
learner may believe that one of the causes prevents the occurrence of the outcome. In
both cases, this previous belief about one of the causes, say A, will affect what can be
learned about the other cause, B, when both causes are presented together and followed
by the outcome. In the first case, when the previous belief is that A is the cause of the
outcome, the detection of a causal relationship between the second cause B and the
outcome will be impaired (i.e., this particular case of cue interaction is generally known as
the blocking effect; Kamin, 1968). In the second case, when the previous belief is that A
prevents the outcome from occurring, the detection of a causal relationship between the
second cause B and the outcome will be facilitated (i.e., this particular case of cue
interaction is generally known as superconditioning; Rescorla, 1971). Many cue
interaction experiments, with both animals and humans, show that learning about the relationship between a potential cause and an outcome can be altered when the potential cause is presented in compound with another potential cause that has been
previously associated either with the outcome or its absence (Aitken, Larkin, & Dickinson,
2000; Arcediano, Matute, Escobar, & Miller, 2005; Dickinson, Shanks, & Evenden, 1984;
Kamin, 1968; Luque, Flores, & Vadillo, 2013; Luque & Vadillo, 2011; Morís, Cobos, Luque, & López, 2014; Rescorla, 1971; Shanks, 1985).
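The logic of these cue interaction effects can be made concrete with a short simulation of the Rescorla-Wagner (1972) model, the standard account cited below. The following is only an illustrative Python sketch, not the authors' code; it assumes a single shared learning rate for all cues and shows how pretraining Cue A (as in blocking) suppresses what is later learned about Cue B:

```python
def rescorla_wagner(trials, alpha=0.3, lam=1.0):
    """Update associative strengths V across trials.
    Each trial is (set of present cues, outcome in {0, 1})."""
    V = {}
    for cues, outcome in trials:
        # Prediction error: actual outcome minus summed strength of present cues
        error = lam * outcome - sum(V.get(c, 0.0) for c in cues)
        for c in cues:
            V[c] = V.get(c, 0.0) + alpha * error
    return V

# Blocking design: Phase 1 pairs A alone with the outcome,
# Phase 2 pairs the compound A+B with the same outcome.
pretrained = rescorla_wagner([({"A"}, 1)] * 50 + [({"A", "B"}, 1)] * 50)
# Control: only the compound phase, with no prior belief about A.
control = rescorla_wagner([({"A", "B"}, 1)] * 50)

print(pretrained["B"])  # near 0: A's prior strength blocks learning about B
print(control["B"])     # near 0.5: A and B share the available strength
```

Superconditioning is the mirror image: if A enters the compound phase with negative (inhibitory) strength, the prediction error on compound trials is larger, and B ends up with more strength than in the control condition.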
Therefore, given that previous causal knowledge can interfere with the learning of
new causal knowledge, and given that previous knowledge could in principle be illusory,
a question of interest is whether the development of cognitive illusions could interfere
with the development of new and evidence-based causal knowledge. To answer this
question, we designed the current experiment, using a standard contingency learning
task (Wasserman, 1990). In our experiment, participants learned about the effectiveness
of some medicines through observation of fictitious patients: The fictitious patients
either took a medicine or not, and they either recovered from the crises produced by a
fictitious disease or not (Fletcher et al., 2001; Matute, Arcediano, & Miller, 1996). The
experiment was divided into two learning phases. In the first phase, participants were
exposed to information that should induce the illusion that a medicine (Medicine A) that
had no real effect on the patients’ recovery was nevertheless effective. In this phase, two
groups of participants differed in the information they received: For one group, the illusion was induced to be high and, for the other, low (see Method). In
the second phase, the ineffective medicine used in the first phase, Medicine A, was
always presented in compound with a new medicine (Medicine B), which actually did
have a curative effect over the patients’ disease. The question was whether the
acquisition of an illusory causal relationship between the (ineffective) Cause A and the
outcome during Phase 1 would interfere with subsequent learning about the causal
relationship between the potential (and in this case, actually effective) Cause B and the
same outcome that was presented during Phase 2. We expected that the different degree
of illusion about Medicine A induced in both groups during Phase 1 would lead
participants of the two groups to assess the effectiveness of Medicine B (i.e., the effective
one) differently at the end of Phase 2. More specifically, we expected that the group for
which we induced higher illusions about the effectiveness of Medicine A should show
greater difficulties than the other group in detecting that Medicine B was actually
effective.
Method
Participants and apparatus
We recruited 147 university students, who participated in the experiment in exchange for
academic credit. Any student in the Psychology of Learning class who expressed their
willingness to participate was allowed to do so. Participants were randomly assigned to
each of the two groups, resulting in a total of 73 participants in the high-illusion group and
74 participants in the low-illusion group. Participants performed the task on personal
computers. The program was implemented as an HTML document dynamically modified
with JavaScript.
Ethics statement
The data that the participants provided were anonymous and unidentifiable, the stimuli
and materials were harmless and emotionally neutral, the goal of the study was
transparent, and the task involved no deception. The participants were informed before
the session that their data would not be identifiable and that they would be allowed to
terminate the study by closing the task program window at any time without penalty, if they so wished. In addition, right after the study finished, a screen asked for the
participants’ permission to send us the data they had just generated. Only the data from
those participants who granted their permission by clicking a button labelled ‘Send data’
were stored and used herein. Those participants not willing to submit their responses had
the option of clicking a button labelled ‘Cancel’, which immediately deleted the data. The
procedure was approved by the university ethics committee.
Procedure and design
The task was an adaptation of the allergy task, which has been widely used in causal
learning studies. This task has proven to be sensitive to the effect of the illusion of
causality (Matute et al., 2011). Participants were prompted to imagine being a medical
doctor specializing in a rare disease called ‘Lindsay syndrome’. They were then told
that there existed some new medicines that could cure the crises caused by the disease.
Their mission was to find out whether these medicines were effective. We used two
fictitious names for the medicines, ‘Batatrim’ and ‘Aubina’. These two names were
counterbalanced so that for some participants the first trained medicine was Batatrim, and for the others it was Aubina. The experiment comprised two training
phases, each containing 100 trials. In each trial, participants could first see whether a
fictitious patient had taken the medicines or not (potential cause) and then observed
whether the patient recovered from the crises (outcome). In trials in which the medicine
was taken, participants saw a picture of the medicine (a picture of Batatrim or Aubina in
the first phase and a picture of the two medicines together in the second phase) and the
sentence ‘The patient has taken’ and the name of the medicine (or medicines). When the
medicine was not taken, participants saw the sentence ‘The patient has not taken
medication’ and no picture was presented. Immediately underneath the information
about the medicine, they could read the question ‘Do you think the patient will recover
from the crisis?’ This prediction question was used to maintain the participants’ attention
and to make sure they were reading the screen. They could answer by clicking on one of
two buttons (yes/no). Once they gave their responses, a third (lower) panel showed the
information about whether the patient had recovered or not. In trials in which the patient
had recovered, participants saw a picture of the patient recovered and the sentence ‘The
patient has recovered from the crisis’. When the patient had not recovered, participants
saw a picture of the patient affected by the crisis and the sentence ‘The patient has not
recovered from the crisis’ (see Figure 1 for an example of how these trials were
presented).
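As a concrete illustration of this trial structure, here is a minimal console sketch (hypothetical: the original task was an HTML document driven by JavaScript, so the function below is only an approximation of the flow, with message wording taken from the description above):

```python
def run_trial(cause_present, outcome_present, medicine="Batatrim"):
    """Present one trial: cue information, prediction question, then feedback."""
    if cause_present:
        print(f"The patient has taken {medicine}")
    else:
        print("The patient has not taken medication")
    prediction = input("Do you think the patient will recover from the crisis? (yes/no) ")
    if outcome_present:
        print("The patient has recovered from the crisis")
    else:
        print("The patient has not recovered from the crisis")
    return prediction.strip().lower() == "yes"
```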
Illusions of causality are known to be strongly affected by the frequency with which the
potential cause and the outcome occur. When the outcome occurs with a high
probability, the illusion is stronger (Allan & Jenkins, 1983; Alloy & Abramson, 1979;
Blanco, Matute, & Vadillo, 2013; Matute, 1995; Shanks & Dickinson, 1987). In addition,
when the probability of the potential cause is high, the illusion will also be stronger
(Blanco et al., 2013; Hannah & Beneteau, 2009; Matute, 1996; Matute et al., 2011; Perales
& Shanks, 2007). These two factors are often known as density biases (i.e., cue density
and outcome density), and they play a crucial role in the development of false beliefs about causal relationships (Allan & Jenkins, 1983; Hannah & Beneteau, 2009; Matute, 1995, 1996;
Matute et al., 2011; Yarritu, Matute, & Vadillo, 2014). The illusion is particularly strong
when both the cause and the outcome occur frequently (Blanco et al., 2013). To
manipulate the degree of the illusion of causality developed by our participants, we used a
high probability of the outcome in all cases and manipulated between groups the
frequency of occurrence of the potential cause during Phase 1.
Table 1 shows a summary of the experimental design. During Phase 1, the potential
cause was a single medicine (A) which had no causal relationship with the outcome.
That is, the sequence of potential cause-outcome pairings was programmed in such a
way that the outcome occurred with the same probability in the presence and in the
absence of the potential cause. The probability of occurrence of the outcome was high
(.70) in both groups because, as described above, this is known to produce illusions. In
this phase, however, we manipulated the probability of occurrence of the potential
cause, so that it was different for the two groups of participants. For the high-illusion
group, this probability was .80, whereas for the low-illusion group, it was .20. As mentioned earlier, when the probability of the outcome is high, a high probability of the potential cause leads to a stronger illusion of causality than does a low probability (Blanco et al., 2013; Matute et al., 2011).
Figure 1. An example of trials presented in the allergy task used in this experiment. This example is from a trial of Phase 1 (in which only one medicine was trained). At the beginning of the trial (Panel A), participants could see whether the patient in that trial had taken the medicine (potential cause) or not, and they were asked whether they believed that the patient would recover from the crisis. In this example, the patient had taken ‘Aubina’ (i.e., the potential cause was present). Once the participants responded, they could see whether the patient had recovered (outcome present) or not. In this example, the patient had recovered from the crisis (i.e., the outcome was present; see Panel B).
Table 1. Design summary

                 Phase 1                           Phase 2
Group            p(A)  p(O|A)  p(O|No Med)  Δp    p(A+B)  p(O|A+B)  p(O|No Med)  Δp
High Illusion     .8    .7      .7           0     .5      .9        .7          .2
Low Illusion      .2    .7      .7           0     .5      .9        .7          .2

Note. A and B are fictitious medicines. O (outcome) is recovery from the crises produced by Lindsay syndrome. Med = Medication. Phase 2 was identical for both groups.
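The design in Table 1 can also be expressed as a short sketch. The Python code below is a hypothetical reconstruction rather than the authors' implementation; it assumes that trial frequencies were fixed exactly to the programmed probabilities and then shuffled, consistent with the statement that the cause-outcome pairings were 'programmed':

```python
import random

def make_phase(n_trials, p_cause, p_o_cause, p_o_nocause):
    """Build a trial list with exact frequencies matching Table 1.
    Each trial is a (cause_present, outcome_present) pair."""
    n_cause = round(n_trials * p_cause)
    n_nocause = n_trials - n_cause
    n_cause_o = round(n_cause * p_o_cause)        # cause present, outcome present
    n_nocause_o = round(n_nocause * p_o_nocause)  # cause absent, outcome present
    trials = ([(True, True)] * n_cause_o
              + [(True, False)] * (n_cause - n_cause_o)
              + [(False, True)] * n_nocause_o
              + [(False, False)] * (n_nocause - n_nocause_o))
    random.shuffle(trials)
    return trials

# Phase 1: null contingency in both groups; only p(cause) differs.
high_illusion_phase1 = make_phase(100, p_cause=.8, p_o_cause=.7, p_o_nocause=.7)
low_illusion_phase1 = make_phase(100, p_cause=.2, p_o_cause=.7, p_o_nocause=.7)

# Phase 2, identical for both groups: the compound A+B raises recovery.
phase2 = make_phase(100, p_cause=.5, p_o_cause=.9, p_o_nocause=.7)
```

With these inputs, the contingency index Δp = p(O|cause) - p(O|no cause) is 0 for Medicine A in Phase 1 (.7 - .7) and .2 for the compound in Phase 2 (.9 - .7), as shown in Table 1.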
After completing all 100 training trials of this
phase, participants were presented with the following question: To what extent do you
think that Batatrim (or Aubina) was effective in healing the crises of the patients you
have seen? The answers were given by clicking on a 0-100 scale, anchored at 0
(definitely NOT) and 100 (definitely YES). This judgement question was introduced at
the end of Phase 1 to make sure that our procedure led to a stronger illusion in the high-
illusion group than in the low-illusion group.
Once this question was answered, participants were told that they would see the
remaining patients and the second training phase began. This phase was identical for both
groups. In this phase, a new medicine (B) always appeared in compound with the
medicine trained in the previous phase (i.e., A+B). This new medicine was Aubina if Batatrim had been trained in the previous phase, or Batatrim if Aubina had been trained previously. The compound was presented in half of the training trials; that is, half of the fictitious patients in this phase took both medicines simultaneously, whereas the other half took none. The probability of the outcome in the presence of the compounded
medication was higher (.90) than in its absence (.70). That is, the new medicine had a
positive effect on the healing of the crises of Lindsay syndrome because its intake implied
an increment in the probability of recovery. After completing all 100 training trials of this second phase, participants were asked to give their judgement about the new medicine (B), which was our target dependent variable. This judgement was worded in the same
way as the previous one. The participants’ answers were also given in the same way as in
the previous phase.
Results
We first made sure that our manipulation was effective in promoting a stronger illusion in
the high-illusion group than in the low-illusion group by the end of Phase 1. Means (and
standard errors of the means) of the effectiveness judgement for Medicine A at the end of
Phase 1 were 65.42 (2.00) for the high-illusion group and 47.95 (2.58) for the low-illusion group.
To discard the potential effect of the counterbalancing of the names of the medicines, we
conducted a 2 (group) × 2 (counterbalancing) ANOVA on the judgements of Medicine A. Neither the main effect of counterbalancing nor the interaction between these two factors was significant (smaller p = .13). As expected, however, the main effect of group was significant, F(1, 143) = 28.17, p < .001, η²p = .16. Thus, our manipulation was successful.
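For readers who wish to reproduce this kind of analysis, the sketch below runs the same 2 (group) × 2 (counterbalancing) between-subjects ANOVA and derives partial eta squared. The data here are simulated, not the original data: group means and sizes come from the text, standard deviations are recovered from the reported standard errors (SEM × √n), and the counterbalancing labels are hypothetical.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

rng = np.random.default_rng(0)
# Simulated judgements of Medicine A (0-100): means 65.42 and 47.95,
# SDs of roughly 17 and 22 recovered from SEMs of 2.00 and 2.58.
df = pd.DataFrame({
    "judgement": np.concatenate([rng.normal(65.42, 17.1, 73),
                                 rng.normal(47.95, 22.2, 74)]),
    "group": ["high"] * 73 + ["low"] * 74,
    "counterbalance": rng.choice(["A_was_Batatrim", "A_was_Aubina"], 147),
})

model = ols("judgement ~ C(group) * C(counterbalance)", data=df).fit()
anova = sm.stats.anova_lm(model, typ=2)
print(anova)

# Partial eta squared for group: SS_group / (SS_group + SS_residual)
ss_group = anova.loc["C(group)", "sum_sq"]
ss_resid = anova.loc["Residual", "sum_sq"]
print("partial eta^2 (group):", ss_group / (ss_group + ss_resid))
```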
The critical results of this experiment are the judgements about Medicine B at the end
of Phase 2. Means (and standard errors of the means) of the judgement of effectiveness for
Medicine B at the end of Phase 2 were 67.42 (2.49) for the high-illusion group and 74.36 (2.31) for the low-illusion group. To discard the potential effect of our counterbalancing procedure, we conducted a 2 (group) × 2 (counterbalancing) ANOVA on the judgements of Medicine B. Neither the main effect of counterbalancing nor the interaction between counterbalancing and group was significant (smaller p = .37). As expected, however, the main effect of group was significant, F(1, 143) = 4.05, p < .05, η²p = .03, showing that
the high-illusion group gave a lower judgement for Medicine B (which actually was
effective) in Phase 2.
Discussion
Holding the illusory belief that a bogus treatment is effective leads to a stronger reluctance to accept that an evidence-based treatment works better than the bogus one. Our results show that the group who was induced to develop a strong illusory belief about the effectiveness of an ineffective treatment (Medicine A) judged the actually effective treatment (Medicine B) to be less effective, as compared to the group who was induced to
develop a weaker illusion. In the experiment presented here, the second training phase, in
which the two medicines were trained in compound, was identical for both groups.
Therefore, the differences in how both groups assessed the effectiveness of Medicine B
during the second phase must have resulted from the manipulation conducted during the
first phase, in which only Medicine A was presented. Note, also, that the evidence
presented to participants in both groups during the first phase could only support the
objective conclusion that Medicine A was totally ineffective because the probability that
the fictitious patients recovered was the same regardless of whether they took Medicine A
or not. Thus, if the participants’ judgements had been accurate during Phase 1,
then participants in both groups should have learned exactly the same about Medicine A,
that is, that Medicine A was completely ineffective. This learning should have equally
affected their learning about the effectiveness of Medicine B in both groups during Phase
2. However, the participants’ judgements show that this was not the case.
The only factor that can explain the differences in the judgements for Medicine B
between both groups at the end of Phase 2 is the difference in the degree of the illusion
developed in the first phase (as shown by the judgements about Medicine A). Thus, as
expected, previous training on an illusory belief about Medicine A exerted an influence
over the establishment of a true evidence-based belief concerning the effectiveness of
Medicine B. This influence is an example of a phenomenon that we already described
above, cue interaction. In the second phase of the present experiment, the two potential
causes, Medicine A and Medicine B, competed to establish the causal relationship with
the outcome, that is, with the patients’ recovery. However, the two groups of participants
had received different exposure to one of the causes (Medicine A) in the first phase, a
manipulation that is known to induce a stronger illusion of causality. Thus, one of the
groups had learned illusorily that Medicine A and the patient’s recovery were causally
related, whereas for the other group, this illusion was significantly weaker. This
differential exposure to Medicine A during the first phase led to differences between the
two groups in their effectiveness judgements of Medicine B. Given the experimental
design, we are not in a position to discriminate whether the critical differences in the way
the two groups judged the effectiveness of Medicine B at the end of Phase 2 should be
attributed to the high illusion that Medicine A was effective reducing the judgement about
Medicine B in the high-illusion group (as in the blocking effect; see above) or to the lower
(i.e., more accurate) perceived effectiveness of Medicine A in the low-illusion group
producing an overestimation in the assessment of Medicine B in that group (as in the
superconditioning effect; see above). Quite possibly, the two effects contributed to the
observed differences, as is often the case in the cue interaction literature.
Nevertheless, what is clear given the present results is that the group who developed a
high illusion about the ineffective Medicine A tended to assess the effective Medicine B as
less effective than the group who developed a weaker illusion. Similar cue interaction
effects have been clearly established in other causal learning research (Aitken et al., 2000;
Arcediano et al., 2005; Dickinson et al., 1984; Luque & Vadillo, 2011; Luque et al., 2013;
Morís et al., 2014; Shanks, 1985). Moreover, cue interaction effects are beautifully
predicted by current theories of learning (Rescorla & Wagner, 1972). Here, we show that
illusions, and not only evidence-based knowledge, can compete with the acquisition of
new knowledge and can produce cue interaction effects like those already known to
occur in response to previous learning.
Last but not least, these results contribute to the currently open debate about the
potential adaptive value of cognitive illusions. More specifically, these results are
consistent with the nowadays growing opinion that cognitive illusions and biases are not
essentially adaptive or non-adaptive per se, but rather, that they should be considered in
the context in which they appear and in the light of the mechanisms that generate
them (McKay & Dennett, 2009). In the case of the experiment presented herein, learning
about an evidence-based treatment was impaired in the group who developed a stronger
illusion as compared to the group who developed a weaker illusion. The consequences of
this particular cognitive illusion cannot be regarded as adaptive. The example presented
above can clarify this point: If a person who believes that a pseudoscientific treatment works misperceives the effectiveness of an evidence-based treatment and, following this misperception, rejects the effective treatment, that person can suffer terrible consequences, sometimes even death. A similar example is taking place today in most Western countries, where people reject vaccination on the argument that vaccines do not promote health. If someone lives in a city where everyone else has been vaccinated, then this person will not suffer from certain diseases. The problem is that the illusory
attribution of causality (i.e., I am fine because this disease is not a real risk) will compete
with the acquisition of evidence-based knowledge on the effectiveness of vaccination
programmes. The present study shows how this biased thinking can develop. We are not
saying that all cognitive illusions necessarily compete with newer evidence-based knowledge. However, illusory beliefs have often demonstrated an atypical persistence in spite of new evidence contradicting them (Chapman & Chapman, 1969; Nisbett & Ross, 1980). Revisiting those persistent and often serious cases, in the light of the results of the
present study, could possibly prove fruitful.
Acknowledgements
We would like to thank Rafael Alonso-Bardón, Itsaso Barberia, Fernando Blanco, Pedro L. Cobos, Carmelo P. Cubillas, Amanda Flores, Francisco J. López, Joaquín Morís, Nerea Ortega-Castro, and Miguel A. Vadillo for illuminating discussions. We also would like to thank Scott Lilienfeld and an anonymous reviewer for helpful comments and suggestions on a previous draft. Funding support for this research was provided by Grant PSI2011-26965 from Dirección General de Investigación of the Spanish Government and Grant IT363-10 from Departamento de Educación, Universidades e Investigación of the Basque Government, both granted to Helena Matute.
References
Abramson, L. Y., Seligman, M. E. P., & Teasdale, J. D. (1978). Learned helplessness in humans: Critique and reformulation. Journal of Abnormal Psychology, 87, 49-74. doi:10.1037/0021-843X.87.1.49
Aitken, M. R. F., Larkin, M. J. W., & Dickinson, A. (2000). Super-learning of causal judgements. Quarterly Journal of Experimental Psychology, 53B, 59-81. doi:10.1080/713932716
Allan, L. G., & Jenkins, H. M. (1983). The effect of representations of binary variables on judgment of influence. Learning and Motivation, 14, 381-405. doi:10.1016/0023-9690(83)90024-3
Alloy, L. B., & Abramson, L. Y. (1979). Judgments of contingency in depressed and non-depressed students: Sadder but wiser? Journal of Experimental Psychology: General, 108, 441-485. doi:10.1037/0096-3445.108.4.441
Alloy, L. B., & Clements, C. M. (1992). Illusion of control: Invulnerability to negative affect and depressive symptoms after laboratory and natural stressors. Journal of Abnormal Psychology, 101, 234-245. doi:10.1037/0021-843X.101.2.234
Arcediano, F., Matute, H., Escobar, M., & Miller, R. R. (2005). Competition between antecedent and between subsequent stimuli in causal judgments. Journal of Experimental Psychology: Learning, Memory, & Cognition, 31, 228-237. doi:10.1037/0278-7393.31.2.228
Beck, J., & Forstmeier, W. (2007). Superstition and belief as inevitable by-products of an adaptive learning strategy. Human Nature, 18, 35-46. doi:10.1007/BF02820845
Blanco, F., Matute, H., & Vadillo, M. A. (2012). Mediating role of activity level in the depressive realism effect. PLoS One, 7, e46203. doi:10.1371/journal.pone.0046203
Blanco, F., Matute, H., & Vadillo, M. A. (2013). Interactive effects of the probability of the cue and the probability of the outcome on the overestimation of null contingency. Learning & Behavior, 41, 333-340. doi:10.3758/s13420-013-0108-8
Brown, J. D. (1986). Evaluations of self and others: Self-enhancement biases in social judgments. Social Cognition, 4, 353-376. doi:10.1521/soco.1986.4.4.353
Chapman, L. J., & Chapman, J. P. (1969). Illusory correlation as an obstacle to the use of valid psychodiagnostic signs. Journal of Abnormal Psychology, 74, 271-280. doi:10.1037/h0027592
Dickinson, A., Shanks, D. R., & Evenden, J. L. (1984). Judgment of act-outcome contingency: The role of selective attribution. Quarterly Journal of Experimental Psychology, 36A, 29-50. doi:10.1080/14640748408401502
Fiske, S. T., & Taylor, S. E. (1984). Social cognition. Reading, MA: Addison-Wesley.
Fletcher, P. C., Anderson, J. M., Shanks, D. R., Honey, R. R., Carpenter, T. A., Donovan, T. T., ... Bullmore, E. T. (2001). Responses of human frontal cortex to surprising events are predicted by formal associative learning theory. Nature Neuroscience, 4, 1043-1047. doi:10.1038/nn733
Hamilton, D., & Gifford, R. (1976). Illusory correlation in interpersonal perception: A cognitive basis of stereotypic judgments. Journal of Experimental Social Psychology, 12, 392-407. doi:10.1016/S0022-1031(76)80006-6
Hannah, S. D., & Beneteau, J. L. (2009). Just tell me what to do: Bringing back experimenter control in active contingency tasks with the command-performance procedure and finding cue density effects along the way. Canadian Journal of Experimental Psychology, 63, 59-73. doi:10.1037/a0013403
Haselton, M. G., & Nettle, D. (2006). The paranoid optimist: An integrative evolutionary model of cognitive biases. Personality and Social Psychology Review, 10, 47-66. doi:10.1207/s15327957pspr1001_3
Kamin, L. J. (1968). Attention-like processes in classical conditioning. In M. R. Jones (Ed.), Miami Symposium on the prediction of behavior: Aversive stimulation (pp. 9-32). Miami, FL: University of Miami Press.
Kelley, H. H. (1972). Causal schemata and the attribution process. In E. E. Jones, D. E. Kanouse, H. H. Kelley, R. E. Nisbett, S. Valins & B. Weiner (Eds.), Attribution: Perceiving the causes of behavior (pp. 151-174). Morristown, NJ: General Learning Press.
Langer, E. J. (1975). The illusion of control. Journal of Personality and Social Psychology, 32, 311-328. doi:10.1037/0022-3514.32.2.311
Lefcourt, H. M. (1973). The function of the illusions of control and freedom. American Psychologist, 28, 417-425. doi:10.1037/h0034639
Lilienfeld, S. C., Ammirati, R., & Landfield, K. (2009). Giving debiasing away: Can psychological research on correcting cognitive errors promote human welfare? Perspectives on Psychological Science, 4, 390-398. doi:10.1111/j.1745-6924.2009.01144.x
Luque, D., Flores, A., & Vadillo, M. A. (2013). Revisiting the role of within-compound associations in cue-interaction phenomena. Learning & Behavior, 41, 61-76. doi:10.3758/s13420-012-0085-3
Luque, D., & Vadillo, M. A. (2011). Backward versus forward blocking: Evidence for performance-based models of human contingency learning. Psychological Reports, 109, 1001-1016. doi:10.2466/22.23.PR0.109.6.1001-1016
Matute, H. (1995). Human reactions to uncontrollable outcomes: Further evidence for superstitions rather than helplessness. Quarterly Journal of Experimental Psychology, 48B, 142-157. doi:10.1080/14640749508401444
Matute, H. (1996). Illusion of control: Detecting response-outcome independence in analytic but not in naturalistic conditions. Psychological Science, 7, 289-293. doi:10.1111/j.1467-9280.1996.tb00376.x
Matute, H., Arcediano, F., & Miller, R. R. (1996). Test question modulates cue competition between causes and between effects. Journal of Experimental Psychology: Learning, Memory and Cognition, 22, 182-196. doi:10.1037/0278-7393.22.1.182
Matute, H., Yarritu, I., & Vadillo, M. A. (2011). Illusions of causality at the heart of pseudoscience. British Journal of Psychology, 102, 392-405. doi:10.1348/000712610X532210
McKay, R. T., & Dennett, D. C. (2009). The evolution of misbelief. Behavioral and Brain Sciences, 32, 493-561. doi:10.1017/S0140525X09990975
Morís, J., Cobos, P. L., Luque, D., & López, F. J. (2014). Associative repetition priming as a measure of human contingency learning: Evidence of forward and backward blocking. Journal of Experimental Psychology: General, 143, 77-93. doi:10.1037/a0030919
Msetfi, R. M., Murphy, R. A., Simpson, J., & Kornbrot, D. E. (2005). Depressive realism and outcome density bias in contingency judgments: The effect of the context and intertrial interval. Journal of Experimental Psychology: General, 134, 10-22. doi:10.1037/0096-3445.134.1.10
Nisbett, R. E., & Ross, L. (1980). Human inference: Strategies and shortcomings of social judgment. Englewood Cliffs, NJ: Prentice-Hall.
Ono, K. (1987). Superstitious behavior in humans. Journal of the Experimental Analysis of Behavior, 47, 261-271. doi:10.1901/jeab.1987.47-261
Perales, J. C., & Shanks, D. R. (2007). Models of covariation-based causal judgment: A review and synthesis. Psychonomic Bulletin and Review, 14, 577-596. doi:10.3758/BF03196807
Pronin, E., Lin, D. Y., & Ross, L. (2002). The bias blind spot: Perceptions of bias in self versus others. Personality and Social Psychology Bulletin, 28, 369-381. doi:10.1177/0146167202286008
Rescorla, R. A. (1971). Variations in the effectiveness of reinforcement and nonreinforcement following prior inhibitory conditioning. Learning and Motivation, 2, 113-123. doi:10.1016/0023-9690(71)90002-6
Rescorla, R. A., & Wagner, A. R. (1972). A theory of Pavlovian conditioning: Variations in the effectiveness of reinforcement and non-reinforcement. In A. H. Black & W. F. Prokasy (Eds.), Classical conditioning II: Current research and theory (pp. 64-99). New York, NY: Appleton-Century-Crofts.
Shanks, D. R. (1985). Forward and backward blocking in human contingency judgment. Quarterly Journal of Experimental Psychology, 37B, 1-21. doi:10.1080/14640748508402082
Shanks, D. R., & Dickinson, A. (1987). Associative accounts of causality judgment. In G. H. Bower (Ed.), The psychology of learning and motivation (Vol. 21, pp. 229-261). San Diego, CA: Academic Press.
Taylor, S. E. (1989). Positive illusions: Creative self-deception and the healthy mind. New York, NY: Basic Books.
Taylor, S. E., & Brown, J. D. (1988). Illusion and well-being: A social psychological perspective on mental health. Psychological Bulletin, 103, 193-210. doi:10.1037/0033-2909.103.2.193
Tversky, A., & Kahneman, D. (1974). Judgment under uncertainty: Heuristics and biases. Science, 185, 1124-1131. doi:10.1126/science.185.4157.1124
Vyse, S. A. (1997). Believing in magic: The psychology of superstition. New York, NY: Oxford University Press.
Wasserman, E. A. (1990). Attribution of causality to common and distinctive elements of compound stimuli. Psychological Science, 1, 298-302. doi:10.1111/j.1467-9280.1990.tb00221.x
Yarritu, I., Matute, H., & Vadillo, M. A. (2014). Illusion of control: The role of personal involvement. Experimental Psychology, 61, 38-47. doi:10.1027/1618-3169/a000225
Zuckerman, M. (1979). Attribution of success and failure revisited: Or the motivational bias is alive and well in attribution theory. Journal of Personality, 47, 245-287. doi:10.1111/j.1467-6

Received 12 February 2014; revised version received 10 December 2014
... Procesar tal cantidad de información y especialmente sobre temas complejos no es tarea sencilla, máxime cuando la velocidad con la que esta se presenta es más elevada de lo que el cerebro humano necesita para su análisis (Carr, 2010). Saber distinguir dentro del gran flujo de información, qué se ajusta a la realidad y qué no, se está convirtiendo en un ejercicio muy complejo que requiere un esfuerzo demasiado grande (Greer, 2003;Dias, 2014;Yarritu, Matute, & Luque, 2015;Blanco & Matute, 2018). Esta dificultad para identificar aquella información que no es fidedigna, es una preocupación que ya es manifestada por más del 69% de las personas internautas (Amoedo, Vara-Miguel, & Negredo, 2018). ...
... En cierta forma, esto indica la existencia de una idea incorrecta compartida por miles de personas, que como es previsible, provienen de contextos culturales diferentes y, por tanto, no han accedido a las mismas fuentes informativas. Este tipo de errores o sesgos cognitivos han sido bien estudiados en trabajos previos (Tversky & Kahneman, 1974;Gilovich & Griffin, 2010;Kahneman, 2012;Kahneman & Tversky, 2013;Yarritu et al., 2015;Kaufman & Kaufman, 2018) y muestran cómo existen algunas tendencias comunes a la hora de interpretar el entorno y la información que se deriva de él. En muchos de los casos este análisis se realiza de forma rápida e intuitiva, un modelo que en el proceso de desarrollo de nuestra sociedad hasta hoy en día ha resultado de gran valor, pero que en las condiciones actuales nos lleva a interpretar incorrectamente la información a la que accedemos (Slovic, Peters, Finucane, & MacGregor, 2005;Yarritu et al., 2015). ...
... Este tipo de errores o sesgos cognitivos han sido bien estudiados en trabajos previos (Tversky & Kahneman, 1974;Gilovich & Griffin, 2010;Kahneman, 2012;Kahneman & Tversky, 2013;Yarritu et al., 2015;Kaufman & Kaufman, 2018) y muestran cómo existen algunas tendencias comunes a la hora de interpretar el entorno y la información que se deriva de él. En muchos de los casos este análisis se realiza de forma rápida e intuitiva, un modelo que en el proceso de desarrollo de nuestra sociedad hasta hoy en día ha resultado de gran valor, pero que en las condiciones actuales nos lleva a interpretar incorrectamente la información a la que accedemos (Slovic, Peters, Finucane, & MacGregor, 2005;Yarritu et al., 2015). ...
Article
Full-text available
RESUMEN El presente trabajo analiza el nivel de conocimiento que sobre el mundo actual tienen los futuros docentes que cursan estudios de Grado y Postgrado en Educación en la Universidad del País Vasco. En un contexto de hiperconectividad donde comprobar la veracidad de las informaciones se hace cada vez más complejo, es de gran importancia analizar la percepción que el futuro profesorado tiene sobre el estado actual del mundo. Este conocimiento permite proponer medidas de actuación que ayuden a los docentes a comprender cómo es la adquisición y generación de conocimiento, a través de unos medios de comunicación cada vez más digitalizados. Para ello, se ha utilizado el cuestionario ideado por Rosling et. al, que consta de trece preguntas con tres posibles respuestas cada una. Los resultados obtenidos muestran un porcentaje de aciertos del 21%, una media que no llega a los tres aciertos por persona, y una moda que se sitúa en los dos aciertos. Estos datos son muy similares a los obtenidos por las personas de países socio-económicamente avanzados. Igualmente, existe gran coincidencia en los ítems en los que se obtienen mejores y peores niveles de acierto. El paso por los estudios de educación parece no tener incidencia en el desarrollo de esquemas mentales que ayuden a tener visiones más acertadas de la realidad, que puedan prevenir tanto entre el alumnado como el profesorado la prevalencia de creencias y falacias infundadas.
... As Rosling et al. [9] point out, the problem is not so much that the correct information is not accessed, but that many people distributed across different countries have wrong views of the world. These cognitive biases have been well studied in previous work and show some common tendencies when interpreting the environment and its information [10], [11]. ...
Conference Paper
Full-text available
This article aims to check the level of information presented by postgraduate students in civil engineering concerning other population groups. This paper analyses the perception that postgraduate civil engineering students at the Universitat Politècnica de València have about the current world. For this purpose, the Factfulness Quiz by Hans Rosling was used. In this test, thirteen questions with three possible answers aim to measure the perspective on topics recurrently covered in the media. Thousands of people have answered this questionnaire, making it possible to compare groups of students. Fifty students, representing more than 75% of those enrolled, answered the survey. They correspond to students of two master's degrees related to civil engineering. The questionnaire consists of three sections. First, generic information about individuals. Next is the a priori opinion regarding current problems, both their own and world leaders and journalists. Finally, the questionnaire is completed with thirteen questions that appear randomly. A statistical analysis of the responses collected by each group was carried out to compare the results with those of previous studies. We also studied whether there are relevant differences between the groups of students. The statistical packages Microsoft Excel and SPSS were used. The results obtained show a success rate of 31%, an average of four correct answers per person, and a mode of three. The results show a somewhat lower success rate than an utterly random response. Among the questions asked, one stands out for having been answered correctly by more than 96% of the participants. It refers to climate change's effect on the planet's temperature. The study concludes that students do not have a clear and up-to-date view of contemporary problems. However, the results obtained in this study reflect the need to incorporate actions in the training of our students that allow them to be aware of their limitations when interpreting complex issues. From this new perspective, students would be more receptive to understanding the mechanisms that have led them to assimilate these unfounded and widely shared social beliefs through overexposure to current media. Therefore, there is a need for postgraduate students to improve competence related to knowledge of contemporary problems.
... Considering that water products are "neutral" stimuli with only very subtle sensory differences, mechanisms such as "illusory correlations" and "illusion of causality" (L. J. Chapman & Chapman, 1969;Yarritu, Matute, & Luque, 2015) are likely to facilitate the development of false beliefs. ...
... Later work by Langer as well as others found that illusory beliefs of controllability improve well-being among many groups who are deprived of control in their lives, for example due to physical illness, grief, or old age (8)(9)(10). But unsurprisingly, illusory beliefs can also have deleterious consequences (11). Particularly when money is wagered on the outcome of games that are objectively uncontrollable and contain a 'house edge', the illusion of control may lead to persistent gambling and financial loss. ...
Article
Full-text available
E. J. Langer's paper, 'The illusion of control' (1975), showed that people act in ways that suggest they hold illusory beliefs in their ability to control the outcome of chance-determined games. This highly cited paper influenced the emerging field of gambling studies, and became a building block for cognitive approaches to problem gambling. Over time, this work has inspired therapeutic approaches based on cognitive restructuring, preventative programmes focused upon gambling myths and regulatory scrutiny of skill mechanics in modern gambling products. However, the psychological mechanisms underlying the 'illusion of control' remain elusive.
... We begin by noting an important theme from social psychological research related to errors in judgment and decision making: People are inattentive to the reliability of the evidence they have in hand and exaggerate the degree to which a small sample of evidence is representative of ground truth (Benjamin, Rabin, Raymond, 2016;Griffin & Tversky, 1992;Tversky & Kahneman, 1971;Williams, Lombrozo, & Rehder, 2013). Often, people place far too much emphasis on the first piece of information they see and give this scant evidence undue weight on subsequent theorizing (Asch, 1946;Jones, Rock, Shaver, Goethals, & Ward, 1968;Kelley, 1950), obfuscating true patterns that might arise in the world (Kamin, 1968;Yarritu, Matute, & Luque, 2015). ...
Article
In schizophrenia research, patients who "jump to conclusions" in probabilistic reasoning tasks tend to display impaired decision-making and delusional belief. In five studies, we examined whether jumping to conclusions (JTC) was similarly associated with decision impairments in a nonclinical sample, such as reasoning errors, false belief, overconfidence, and diminished learning. In Studies 1a and 1b, JTC was associated with errors stimulated by automatic reasoning, oddball beliefs such as conspiracy theories, and overconfidence. We traced these deficits to an absence of controlled processing rather than to an undue impact of automatic thinking, while ruling out roles for plausible alternative individual differences. In Studies 2 and 3, JTC was associated with higher confidence despite diminished performance in a novel probabilistic learning task (i.e., diagnosing illnesses), in part because those who exhibited JTC behavior were prone to overly exuberant theorizing, with no or little data, about how to approach the task early on. In Study 4, we adapted intervention materials used in schizophrenia treatment to train participants to avoid JCT. The intervention quelled overconfidence in the probabilistic learning task. In summary, this research suggests that a fruitful crosstalk may exist between research on psychopathology and work on social cognition within the general public. (PsycInfo Database Record (c) 2020 APA, all rights reserved).
... In a classroom setting, the outcome density effect may play a role in biasing teachers' ability to accurately determine the effectiveness of their teaching practices if the student cohort is high-achieving and, therefore, likely to perform well academically regardless of the teaching practice used. Some researchers have also proposed that the development of strong false beliefs are able to interfere with subsequent acquisition of real causal relationships (Yarritu, Matute, & Luque, 2015), suggesting that these beliefs may be persistent and difficult to correct. ...
Article
Full-text available
Teachers sometimes believe in the efficacy of instructional practices that have little empirical support. These beliefs have proven difficult to efface despite strong challenges to their evidentiary basis. Teachers typically develop causal beliefs about the efficacy of instructional practices by inferring their effect on students' academic performance. Here, we evaluate whether causal inferences about instructional practices are susceptible to an outcome density effect using a contingency learning task. In a series of six experiments, participants were ostensibly presented with students' assessment outcomes, some of whom had supposedly received teaching via a novel technique and some of whom supposedly received ordinary instruction. The distributions of the assessment outcomes was manipulated to either have frequent positive outcomes (high outcome density condition) or infrequent positive outcomes (low outcome density condition). For both continuous and categorical assessment outcomes, participants in the high outcome density condition rated the novel instructional technique as effective, despite the fact that it either had no effect or had a negative effect on outcomes, while the participants in the low outcome density condition did not. These results suggest that when base rates of performance are high, participants may be particularly susceptible to drawing inaccurate inferences about the efficacy of instructional practices.
... Finally, and what is more relevant for our study, there are situations in which both probabilities are equal, thus yielding a Δ value of 0. This is a null contingency setting that normally implies the absence of a causal link. For example, using a pseudoscientific treatment for a serious disease will probably not improve the chances of healing as compared to when no treatment, or a placebo, is taken (Yarritu, Matute & Luque, 2015). ...
Article
Full-text available
Judgments of a treatment's effectiveness are usually biased by the probability with which the outcome (e.g., symptom relief) appears: even when the treatment is completely ineffective (i.e., there is a null contingency between cause and outcome), judgments tend to be higher when outcomes appear with high probability. In this research, we present ambiguous stimuli, expecting to find individual differences in the tendency to interpret them as outcomes. In Experiment 1, judgments of effectiveness of a completely ineffective treatment increased with the spontaneous tendency of participants to interpret ambiguous stimuli as outcome occurrences (i.e., healings). In Experiment 2, this interpretation bias was affected by the overall treatment-outcome contingency, suggesting that the tendency to interpret ambiguous stimuli as outcomes is learned and context-dependent. In conclusion, we show that, to understand how judgments of effectiveness are affected by outcome probability, we need to also take into account the variable tendency of people to interpret ambiguous information as outcome occurrences.
... In our three experiments, we have showed how an ineffective medicine can produce inaccurate beliefs of effectiveness, especially when the disease tends to resolve spontaneously after some time (e.g., self-limited diseases). This finding could be relevant in real life, as those patients who incorrectly believe in the healing potential of a pseudotherapy are at risk of replacing scientifically valid treatments by other alternatives that lack evidential support (Macfarlane et al., 2020;Yarritu et al., 2015). Our results seem robust to variations in certain assumptions about the treatment course, such as the belief that treatments can have delayed effects, as we show in Experiment 3. ...
Article
Full-text available
Rationale Self-limited diseases resolve spontaneously without treatment or intervention. From the patient's viewpoint, this means experiencing an improvement of the symptoms with increasing probability over time. Previous studies suggest that the observation of this pattern could foster illusory beliefs of effectiveness, even if the treatment is completely ineffective. Therefore, self-limited diseases could provide an opportunity for pseudotherapies to appear as if they were effective. Objective In three computer-based experiments, we investigate how the beliefs of effectiveness of a pseudotherapy form and change when the disease disappears gradually regardless of the intervention. Methods Participants played the role of patients suffering from a fictitious disease, who were being treated with a fictitious medicine. The medicine was completely ineffective, because symptom occurrence was uncorrelated to medicine intake. However, in one of the groups the trials were arranged so that symptoms were less likely to appear at the end of the session, mimicking the experience of a self-limited disease. Except for this difference, both groups received similar information concerning treatment effectiveness. Results In Experiments 1 and 2, when the disease disappeared progressively during the session, the completely ineffective medicine was judged as more effective than when the same information was presented in a random fashion. Experiment 3 extended this finding to a new situation in which symptom improvement was also observed before the treatment started. Conclusions We conclude that self-limited diseases can produce strong overestimations of effectiveness for treatments that actually produce no effect. This has practical implications for preventative and primary health services. The data and materials that support these experiments are freely available at the Open Science Framework (https://bit.ly/2FMPrMi)
... Some sources were specific to other fields, such as medicine [e.g. 30,64], and were thus excluded from our selection. After reading the sources, we selected the 5 sources shown in Table 1. ...
Thesis
Starting to include sustainability considerations in a design project is a transition requiring a change in how things are done, that is, a change in behaviour. Furthermore, this transition takes place in the midst of the usual pressures of product design. Prior research on sustainable design has mostly explored the so-called technical side – identifying what tasks should be performed, such as specifics of including sustainability criteria when analysing product concepts. However, this has not been enough: these tasks are not being performed to the extent that they could be, or that is needed. Recent studies have advocated considering the human nature of the people who are to execute these ‘technical’ tasks. In other words, there is a need to work with the socio-psychological factors in order to help sustainable design beginners to adopt new mindsets and practice (their usual way of doing design).
My aim was therefore to investigate how to support individual product design team members with the human aspects of transitioning to executing sustainable design. In particular, I focused on supporting good individual decision-making and individual behaviour change. This aim was addressed through multiple research projects with four partner companies working with the early phases of product design. Given the focus on changing practice, I followed an action research approach with a particular emphasis on theory building. This approach comprised two phases: understanding the challenge and context, and then iteratively developing solutions through a theorise-design-act-observe-reflect cycle.
Through the research projects, my colleagues and I found that there are challenges related to behaviour change and decision-making that hinder the execution of sustainable design. In order to help organisations overcome or avoid these challenges, we found that it may be beneficial for those developing sustainable design tools and methods to (i) use techniques to mitigate cognitive illusions, (ii) provide individuals with the opportunity to implement sustainable design while helping them to increase their motivation and capability to execute it, and (iii) communicate with these individuals in such a way as to avoid triggering psychological barriers (self-defence mechanisms). I combined these points into two models. Together with the partner organisations, we applied the two models to design some actions that we then tested. The actions included integrating behaviour change and decision-making considerations into sustainable design tools, as well as stand-alone interventions in the culture.
Given the findings of these studies, I urge developers of sustainable design tools to see the implementation of their tool as a learning journey. The beginning of the journey should comprise small steps supported by handrails, which then increase in size and decrease in support as the journey continues. Especially in the beginning, tool developers will also need to help travellers avoid the decision-making errors that occur in unfamiliar territory.
Article
Patients' beliefs about the effectiveness of their treatments are key to the success of any intervention. However, since these beliefs are usually formed by sequentially accumulating evidence in the form of the covariation between treatment use and symptoms, it is not always easy to detect when a treatment is actually working. In Experiments 1 and 2, we presented participants with a contingency learning task in which a fictitious treatment was actually effective in reducing the symptoms of fictitious patients. However, the base-rate of the symptoms was manipulated so that, for half of the participants, the symptoms were very frequent before the treatment, whereas for the remaining participants, the symptoms were observed less frequently. Although the treatment was equally effective in all cases according to the objective contingency between treatment and healings, participants' beliefs about the effectiveness of the treatment were influenced by the base-rate of the symptoms: those who observed frequent symptoms before the treatment tended to produce lower judgments of effectiveness. Experiment 3 showed that participants were probably basing their judgments on an estimate of effectiveness relative to the symptom base-rate, rather than on contingency in absolute terms. Data and materials are publicly available at the Open Science Framework: https://osf.io/emzbj/
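The base-rate-relative reading suggested by Experiment 3 can be illustrated with a small sketch. The "relative" index below is one plausible formalisation offered purely for illustration, not necessarily the measure the authors tested: two conditions with the same absolute symptom reduction diverge once the reduction is scaled by the base-rate.

```python
# Two hypothetical conditions with the SAME absolute symptom reduction under
# treatment but different symptom base-rates (values are assumptions).

def absolute_reduction(p_base: float, p_treated: float) -> float:
    """Absolute drop in symptom probability under treatment."""
    return p_base - p_treated

def relative_reduction(p_base: float, p_treated: float) -> float:
    """Drop in symptom probability as a proportion of the base-rate."""
    return (p_base - p_treated) / p_base

for p_base, p_treated in [(0.9, 0.6), (0.4, 0.1)]:
    print(f"base-rate {p_base}: "
          f"absolute = {absolute_reduction(p_base, p_treated):.2f}, "
          f"relative = {relative_reduction(p_base, p_treated):.2f}")
# base-rate 0.9: absolute = 0.30, relative = 0.33  (frequent symptoms)
# base-rate 0.4: absolute = 0.30, relative = 0.75  (infrequent symptoms)
```

On this index, the frequent-symptom condition yields the lower relative score, mirroring the lower judgments reported above.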
Article
We examined whether individual differences in susceptibility to the illusion of control predicted differential vulnerability to depressive responses after a laboratory failure and naturally occurring life stressors. The illusion of control decreased the likelihood that subjects (N = 145) would (a) show immediate negative mood reactions to the laboratory failure, (b) become discouraged after naturally occurring negative life events, and (c) experience increases in depressive symptoms a month later given the occurrence of a high number of negative life events. In addition, the stress-moderating effect of the illusion of control on later depressive symptoms appeared to be mediated in part by its effect on reducing the discouragement subjects experienced from the occurrence of negative life events. These findings provide support for the hopelessness theory of depression and for the optimistic illusion–mental health link.
Article
Given the task of diagnosing the source of a patient's allergic reaction, college students judged the causal efficacy of common (X) and distinctive (A and B) elements of compound stimuli: AX and BX. As the differential correlation of AX and BX with the occurrence and nonoccurrence of the allergic reaction rose from .00 to 1.00, ratings of the distinctive A and B elements diverged; most importantly, ratings of the common X element fell. These causal judgments of humans closely parallel the conditioned responses of animals in associative learning studies, and clearly disclose that stimuli compete with one another for control over behavior.
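Since the abstract notes that these judgments parallel animal conditioning, a minimal Rescorla-Wagner simulation makes the cue-competition prediction explicit; the learning rate, asymptote, and strict trial alternation below are my assumptions, not the study's procedure.

```python
# Minimal Rescorla-Wagner sketch of the AX+/BX- design at a differential
# correlation of 1.0 (AX always reinforced, BX never). Parameters are
# hypothetical, chosen only to show the qualitative pattern.
alpha, lam = 0.2, 1.0                 # learning rate, reinforcement asymptote
V = {"A": 0.0, "B": 0.0, "X": 0.0}    # associative strengths

for _ in range(100):
    # AX trial, reaction occurs: shared error drives A and X up together.
    err = lam - (V["A"] + V["X"])
    V["A"] += alpha * err
    V["X"] += alpha * err
    # BX trial, no reaction: shared error drives B and X down together.
    err = 0.0 - (V["B"] + V["X"])
    V["B"] += alpha * err
    V["X"] += alpha * err

# A ends well above X (roughly .67 vs .33 under these settings) and B turns
# inhibitory: the common element X is outcompeted even though it is present
# on every reinforced trial.
print({k: round(v, 2) for k, v in V.items()})
```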
Article
The learned helplessness hypothesis is criticized and reformulated. The old hypothesis, when applied to learned helplessness in humans, has two major problems: (a) It does not distinguish between cases in which outcomes are uncontrollable for all people and cases in which they are uncontrollable only for some people (universal vs. personal helplessness), and (b) it does not explain when helplessness is general and when specific, or when chronic and when acute. A reformulation based on a revision of attribution theory is proposed to resolve these inadequacies. According to the reformulation, once people perceive noncontingency, they attribute their helplessness to a cause. This cause can be stable or unstable, global or specific, and internal or external. The attribution chosen influences whether expectation of future helplessness will be chronic or acute, broad or narrow, and whether helplessness will lower self-esteem or not. The implications of this reformulation of human helplessness for the learned helplessness model of depression are outlined.
Article
Many decisions are based on beliefs concerning the likelihood of uncertain events such as the outcome of an election, the guilt of a defendant, or the future value of the dollar. Occasionally, beliefs concerning uncertain events are expressed in numerical form as odds or subjective probabilities. The subjective assessment of probability resembles the subjective assessment of physical quantities such as distance or size: these judgments are all based on data of limited validity, which are processed according to heuristic rules. For example, the apparent distance of an object is determined in part by its clarity; reliance on this rule, however, leads to systematic errors in the estimation of distance. This chapter describes three heuristics that are employed in making judgments under uncertainty. The first is representativeness, which is usually employed when people are asked to judge the probability that an object or event belongs to a class or process. The second is the availability of instances or scenarios, which is often employed when people are asked to assess the frequency of a class or the plausibility of a particular development. The third is adjustment from an anchor, which is usually employed in numerical prediction when a relevant value is available. In general, these heuristics are quite useful, but sometimes they lead to severe and systematic errors.
Article
Three investigations are reported that examined the relation between self-appraisals and appraisals of others. In Experiment 1, subjects rated a series of valenced trait adjectives according to how well the traits described the self and others. Individuals displayed a pronounced “self-other bias,” such that positive attributes were rated as more descriptive of self than of others, whereas negative attributes were rated as less descriptive of self than of others. Furthermore, in contrast to C. R. Rogers's (1951) assertion that high self-esteem is associated with a comparable regard for others, the tendency for individuals to evaluate the self in more favorable terms than they evaluated people in general was particularly pronounced among those with high self-esteem. These findings were replicated and extended in Experiment 2, where it was also found that self-evaluations were more favorable than were evaluations of a friend and that individuals with high self-esteem were most likely to appraise their friend...