The Lack of Side Effects of an Ineffective Treatment
Facilitates the Development of a Belief in Its
Effectiveness
Fernando Blanco*, Itxaso Barberia, Helena Matute
Universidad de Deusto, Departamento de Fundamentos y Métodos de la Psicología, Bilbao, Spain
Abstract
Some alternative medicines enjoy widespread use, and in certain situations are preferred over conventional, validated
treatments in spite of the fact that they fail to prove effective when tested scientifically. We propose that the causal illusion,
a basic cognitive bias, underlies the belief in the effectiveness of bogus treatments. Therefore, the variables that modulate
the former might affect the latter. For example, it is well known that the illusion is boosted when a potential cause occurs
with high probability. In this study, we examined the effect of this variable in a fictitious medical scenario. First, we showed
that people used a fictitious medicine (i.e., a potential cause of remission) more often when they thought it caused no side
effects. Second, the more often they used the medicine, the more likely they were to develop an illusory belief in its
effectiveness, despite the fact that it was actually useless. This behavior may be parallel to actual pseudomedicine usage;
that because a treatment is thought to be harmless, it is used with high frequency, hence the overestimation of its
effectiveness in treating diseases with a high rate of spontaneous relief. This study helps shed light on the motivations
spurring the widespread preference of pseudomedicines over scientific medicines. This is a valuable first step toward the
development of scientifically validated strategies to counteract the impact of pseudomedicine on society.
Citation: Blanco F, Barberia I, Matute H (2014) The Lack of Side Effects of an Ineffective Treatment Facilitates the Development of a Belief in Its Effectiveness. PLoS
ONE 9(1): e84084. doi:10.1371/journal.pone.0084084
Editor: Jerson Laks, Federal University of Rio de Janeiro, Brazil
Received August 2, 2013; Accepted November 10, 2013; Published January 8, 2014
Copyright: © 2014 Blanco et al. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits
unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Funding: Support for this research was provided by Dirección General de Investigación of the Spanish Government (Grant PSI2011-26965) and Departamento de Educación, Universidades e Investigación of the Basque Government (Grant IT363-10). The funders had no role in study design, data collection and analysis,
decision to publish, or preparation of the manuscript.
Competing Interests: The authors have declared that no competing interests exist.
* E-mail: fernandoblanco@deusto.es
Introduction
In today’s knowledge-based society, the widespread use and
popularity of certain alternative medicines, such as homeopathy,
continues to be increasingly troublesome for health authorities
worldwide. While rigorous scientific studies have repeatedly shown
that homeopathy is completely ineffective (no more effective than
placebo [1]), many patients still choose to use it or other
pseudomedicines in place of conventional treatments that have
been proven effective. This decision can have serious consequences, sometimes even death [2]. Therefore, one may ask why people
prefer to use homeopathy and other alternative medicines over
scientifically tested medicines. One possible answer is that the
alleged lack of side effects of most alternative medicines (in the case
of homeopathy, we assume no side effects because the main
ingredient is water) makes the treatment more attractive than
conventional scientific medicines, which frequently include
unpleasant side effects. While most would agree that people
frequently resort to those treatments they believe are more
effective, we propose that the reverse also holds: frequent use of a
treatment, because of the lack of side effects or other considerations, fuels the belief that it is effective, even when it is not.
It has been previously suggested that a basic cognitive bias,
known as the causal illusion, underlies many irrational beliefs,
particularly beliefs in the effectiveness of pseudomedicines [3]. The
causal illusion is the illusory perception of a contingency between a
potential cause and the outcome of interest when they are actually
not causally related. In this case, the contingency is between the
use of a treatment and the recovery from a disease. A useless
treatment is one that does not increase the probability of healing
when it is used. That is, the probability of healing remains the
same whether or not the treatment is used, P(Healing | Treatment) = P(Healing | ¬Treatment), and hence the contingency between
the two events (treatment and healing) is zero. In recent decades,
experimental psychologists have identified a number of conditions
that promote the overestimation of zero-contingencies. We argue
that the way pseudomedicines are typically used meets the
conditions that, according to recent research on causal illusions,
facilitate the overestimation of causality, and therefore the belief in
the effectiveness of completely useless treatments.
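The zero-contingency case described above can be made concrete with a short numerical sketch. This is our own illustration: it computes the standard ΔP contingency index over hypothetical cell counts, which are not taken from the experiment.

```python
# Illustrative sketch (not from the original study): the contingency of a
# treatment measured as Delta-P = P(Outcome | Cause) - P(Outcome | no Cause).

def delta_p(a, b, c, d):
    """Cell counts of a 2x2 contingency table:
    a: treatment & healing,    b: treatment & no healing,
    c: no treatment & healing, d: no treatment & no healing."""
    return a / (a + b) - c / (c + d)

# A treatment used often, for a disease with frequent spontaneous remission,
# but with no real effect: the healing rate is 70% with or without it.
print(delta_p(28, 12, 7, 3))  # 28/40 - 7/10 = 0.0 (zero contingency)
```

Although the objective contingency here is exactly zero, the many treatment-healing coincidences (the `a` cell) are what, according to the research reviewed below, drive the illusory perception of effectiveness.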
One variable that researchers have identified as a robust
facilitator of the causal illusion, at least in controlled experiments,
is the probability of the desired outcome. The illusion appears more prominent when the healings occur with high probability, even if they are not correlated with the use of the
treatment [4], [5], [6]. Another variable of interest is the
probability of occurrence of the potential cause, P(Cause).
Following the example above, P(Cause) is the probability that a
patient uses the treatment when she is sick. Thus, in this particular
context one can also refer to it as P(Treatment). Basic research
suggests that the more often a patient takes a completely useless
medicine, the more likely she will develop a belief in its
effectiveness. This is particularly true when the desired outcome
(the healing) takes place frequently [7], [8].
The effect of P(Cause) on causal judgments has been widely
documented in computer-based experiments. These studies are
generally theory-focused experiments and could, in principle,
make use of any context or scenario, be it real or fictional, in which
causal relationships could be assessed by participants. In fact,
many of them use medical scenarios as cover-stories in which the
potential cause to be evaluated is a fictitious treatment, and the
desired outcome is a fictitious patient recovering from a disease
[7]. In these studies, the researchers found that P(Treatment) significantly affects the participants’ effectiveness judgments of a treatment, at least in these fictitious scenarios. These studies
suggest the inversion of what one could take as the obvious
relationship between the mentioned variables, P(Treatment) and
belief in the effectiveness of the treatment. The obvious
relationship is that people’s belief in the effectiveness of a
treatment will influence P(Treatment), the probability that they
use the treatment. The inverse, less intuitive, relationship is that
irrespective of people’s initial expectations, the mere increase in
P(Treatment) will increase people’s belief in the effectiveness of the
treatment. That is, the experiments suggest that it is the frequent
use of the medicine that results in the illusion of its effectiveness.
Moreover, previous experiments show that, when not influ-
enced or instructed by the experimenters, people naturally tend to
introduce the target cause (i.e., to use the potential treatment) on more than half of the occasions [7], [9] (see [10] for more general
evidence in a neutral scenario). That is, the spontaneous tendency
is to use the treatment with high probability, and this normally
leads to an illusory perception of effectiveness. Still, it is possible to
influence this spontaneous behavior. The manipulation of
P(Cause), or P(Treatment), has been achieved in different ways
in fictitious experimental scenarios. For instance, (a) by means of
explicit instructions about the rate with which the cause should be
introduced [8], [10], [11] or (b) by manipulating the availability of
the potential cause [12]. These studies showed that increasing
P(Cause) facilitates the development of the illusion, while
decreasing P(Cause) reduces it. However, the studies directly
restricted P(Cause) [12] or at least suggested the adequate
P(Cause) that participants should expose themselves to [8], [10].
In this study, our goal was to move one step back and get a glimpse
of the actual variability in P(Treatment). We wanted to know
which preceding factors determined the probability of using the
treatment in real life situations, and whether we could influence
P(Treatment) without explicit instructions or limiting the partic-
ipant’s access to the treatment.
In a recent study, Barberia et al. [9] were able to reliably reduce
the natural tendency of high school students to introduce a
potential cause (i.e., to use a fictitious medicine on a series of
fictitious patients) by using an educational intervention explaining
the rationale of causal inference, which constitutes the basic
principle of scientific reasoning and experimental design. In line
with this study, we suggest another factor that might indirectly
affect the otherwise frequent use of a useless treatment, hence
developing a causal illusion of effectiveness. This factor is the
presence of side effects produced by the treatment. To our
knowledge, this factor has not been manipulated in experimental
research. Manipulating this variable may be an effective and very
natural way of modulating the cost associated with treatment, and
therefore should have an observable impact on the probability of
introducing the potential cause (i.e., the treatment). If this is correct, we should also detect differences in the development of causal
illusions, depending on the presence of side-effects.
To sum up, we propose that one of the reasons why people
continue to believe in the effectiveness of bogus treatments is their
alleged lack of side effects. Because there is no harm in using
homeopathy (i.e., the cost associated with using the treatment is
very low), it is used frequently. In certain conditions where the
base rate of spontaneous remission is high (such as headache, back
pain, flu, etc.), the experimental research on contingency learning
and causal illusions clearly suggests that introducing the potential
cause with high frequency results in overestimation of the
effectiveness of the treatment. This would generate a vicious circle
in which pseudomedicines are frequently used because they imply
low cost (e.g., no harm), and in turn, high rates of pseudomedicine use contribute to an illusory perception of effectiveness, thus
reinforcing further consumption.
In our experiment, we adapt the standard contingency learning
task that has been used in previous experiments [3] to include a
manipulation of the cost associated with using the treatment based
on the presence of side effects. Our prediction is that, because a
lack of side effects encourages the use of the treatment with high
probability, it facilitates the illusory belief that the treatment is
working.
Methods
Ethics Statement
The ethical review board of the University of Deusto examined
and approved the procedure used in this experiment, as a part of a
larger research project (Ref: ETK-44/12-13).
Participants and Apparatus
Seventy-nine anonymous first-year Psychology students from the Universidad Nacional de Educación a Distancia (UNED,
Spain) volunteered to take part in the study through the virtual
laboratory website [http://www.labpsico.deusto.es] as an optional
course activity. No personal information (age, gender) was
collected. The computer program assigned each participant to
one of two groups, a high-cost group and a no-cost group. Data
from five participants were removed because the medicine was not
administered to any patient during the session. Thus, the final
sample consisted of 74 participants, 39 of whom were in the high-
cost group, and 35 in the no-cost group. The experiment was
programmed in JavaScript, a web-based language that is interpretable by most browsers.
The participants were informed before the experiment that they
could quit the study at any moment by closing the browser
window. The data collected during the experiment were sent
anonymously to the experimenter only upon explicit permission by
the participant, indicated by clicking on a "Submit" button. If the
participant clicked on the "Cancel" button, the information was
erased. No personal information (i.e., name, IP address, e-mail)
was collected. In agreement with the ethical guidelines for
Internet-based research [13], we did not use cookies or other
software to covertly obtain information from the participants.
Procedure and Design
We adapted a standard contingency learning paradigm that has
been extensively used in the literature [14]. Participants were
individually presented with a computer task in which they were
asked to imagine that they were medical doctors working in an
emergency care facility. They were told that crises induced by a
dangerous disease called "Lindsay syndrome" should be stopped
immediately. Each participant was to heal as many patients as
possible. To this end, participants could use a medicine called
Batatrim but, since this medicine was still experimental, its
effectiveness in treating the disease had not yet been proven. The
high-cost group was informed that Batatrim would produce a
severe and permanent skin rash as a side effect in every patient
who takes it. The no-cost group was not told about any side effect.
(An English translation of the full instructions is available as
supporting material Instructions S1.)
After reading the instructions, 50 medical records of different
fictitious patients were presented sequentially. Each record
contained a picture showing the patient suffering from the
syndrome (i.e., a greenish head covered in beads of sweat)
together with a sentence stating that the patient was suffering from
a crisis provoked by the syndrome. Immediately below was the
question, "Would you like to give Batatrim to this patient?" The
participant then indicated his decision by clicking on a "Yes" or
"No" button. Upon giving the answer, a message indicated
whether Batatrim was used by displaying either "You have given Batatrim to this patient" or "You have given nothing to this patient." This
statement was accompanied by a picture of a medicine bottle,
which was crossed out if the participant opted not to use Batatrim.
The picture of the sick patient and the indication of Batatrim use
were displayed in the top and middle panels, respectively. The
bottom panel then displayed the outcome for the patient. Figure 1
shows a sample of two medical records used in the task.
For each participant, in 35 out of 50 trials, the patient recovered
from the crisis induced by the syndrome. This outcome was
programmed to occur randomly, independent of the participant’s
decision to use Batatrim. In other words, the outcome took place
with high probability (70%) but was programmed to be
uncorrelated with use of the medicine.
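This design feature can be sketched in code. The following is our own Python illustration (the actual task was programmed in JavaScript): the 35 recoveries are fixed in advance and shuffled, so the outcome cannot correlate with the participant's decisions.

```python
import random

# Illustrative sketch (not the authors' code): generating the 50-trial
# outcome schedule. Exactly 35 of 50 patients recover, in random order,
# determined before any response is made, so the outcome is independent
# of whether the participant gives the medicine.
def make_trial_outcomes(n_trials=50, n_recoveries=35, seed=None):
    rng = random.Random(seed)
    outcomes = [True] * n_recoveries + [False] * (n_trials - n_recoveries)
    rng.shuffle(outcomes)  # random order, uncorrelated with any decision
    return outcomes

outcomes = make_trial_outcomes(seed=1)
assert sum(outcomes) == 35  # P(recovery) = 0.7 regardless of choices
```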
In the no-cost group, the outcome was displayed as a picture of
a healthy face and the message, "The patient has recovered from the
crisis", whereas the outcome absence was displayed as a picture of
an ill face (greenish, covered in sweat) identical to the one
presented in the top panel of the computer screen, and the
statement, "The patient has not recovered from the crisis." This procedure
and presentation format was identical to those widely used in
previous versions of the standard contingency learning task [3]. By
contrast, the high-cost group was shown pictures and messages
conveying not only the disease outcome, but also the side effects of
Batatrim when it was used. Thus, whenever the medicine was
given, the picture of the patient showed a skin rash, and the
statement also included the words "...and has severe side effects."
Likewise, whenever the medicine was not given, the words "...and
has no side effects" were added to the message. Figure 2 shows a
sample of the four stimuli used as outcomes in the high-cost group.
Note that the side effects were described and visually depicted as
different from the symptoms produced by the Lindsay syndrome
crisis; the former were presented as a skin rash of red spots
covering the head, and the latter as a combination of greenish skin
color and beads of sweat. The symptoms of the syndrome,
identical in all the patients, were always visible in the top panel of
the screen, which showed the initial state of the patient. The side
effects, when they appeared, were superimposed on the symptoms
of the illness in the lower panel of the screen. This allowed visual
comparison between the initial and final status of the patient, with
the aim of preventing participants from confusing the patient’s
symptoms and the medicine’s side effects.
At the end of the experiment, the participants were asked to rate
the perceived effectiveness of Batatrim by asking, "To what extent
do you think that Batatrim has been effective to stop the crises of
Lindsay syndrome in the patients you have just seen?" They
indicated their rating by clicking on a numerical scale from zero to
100 where zero was "It has been completely ineffective to stop the
crises," 50 was "It has been moderately effective to stop the crises,"
and 100 was "It has been perfectly effective to stop the crises."
Because the recoveries from the syndrome occurred equally often
in the presence and in the absence of the medicine, the higher the
effectiveness ratings, the stronger the overestimation of the zero
contingency between the medicine and the recoveries.
Results
The left panel in Figure 3 depicts the mean probability of using Batatrim, i.e., the P(Cause), for each group. A one-way ANOVA indicated that, as expected, P(Cause) was significantly higher in the no-cost group than in the high-cost group, F(1, 72) = 23.87, p < 0.001, ηp² = 0.25. The no-cost group gave the treatment to more than 50% of the patients, t(34) = 2.08, p < 0.05, in agreement with previous research showing a spontaneous tendency to use the medicine with high probability. The high-cost group gave the treatment to significantly fewer than 50% of the patients, t(38) = 3.86, p < 0.001. The mean effectiveness judgments given by the participants at the end of the session were also higher in the no-cost group, F(1, 72) = 10.64, p < 0.005, ηp² = 0.13 (Figure 3).
We were also interested in testing whether the effect of the cost
manipulation on the effectiveness judgments could be attributed to
the mediator role of P(Cause). This mediational hypothesis has
received empirical support [9], [11] concerning the effectiveness of
a fictitious medicine. We wished to test whether it held true for our
case, where the cost of administering the medicine, rather than an
instructional manipulation or an educational workshop, was what
prompted the participants to reduce P(Cause).
According to the mediational structure hypothesis, the total
effect of the cost manipulation on the judgments was composed of
two effects, one direct effect and one indirect effect through
P(Cause) (Figure 4). These effects were assessed using the
procedure described by Hayes [15]. The resulting (unstandardized) coefficients are shown in Figure 4. The total effect of the cost manipulation on effectiveness judgments was significant, c = 0.22, t(73) = 3.26, p < 0.005, 95% CI = 0.08 to 0.35, and partitioned into two pathways (Figure 4) that were examined in turn. The direct effect of the cost manipulation, c' = −0.04, t(72) = −0.82, p = 0.42, 95% CI = −0.13 to 0.06, was not significant once P(Cause) was controlled for. The indirect effect was composed of two pathways, both of which were significant, a = 0.34, t(73) = 4.89, p < 0.001, 95% CI = 0.20 to 0.47, and b = 0.77, t(72) = 10.76, p < 0.001, 95% CI = 0.63 to 0.91. The former shows a strong relationship between the cost manipulation and P(Cause), and the latter a strong positive relationship between P(Cause) and effectiveness judgments even after controlling for the effect of the cost manipulation. These results suggest, as expected, that P(Cause) completely mediated the effect of the cost manipulation on the effectiveness judgments.
Recent methodological work [15] recommends that inferences about the indirect effect be based not on the significance of the individual paths (a and b), but rather on the explicit quantification of the indirect path itself. This was achieved by using the PROCESS
macro [15]. We employed the bias-corrected bootstrap resampling
method (5000 samples) made available by this tool to compute
95% confidence intervals for the indirect effect of the cost
manipulation through P(Cause) (i.e., pathway a*b). The analyses
confirmed the mediator role of P(Cause), a*b= 0.26, 95%
CI = 0.15 to 0.36, in that the entire confidence interval was above
zero. In addition, the proportion of the total effect due to the
indirect effect was 1.18, 95% CI = 0.83 to 2.20, further
strengthening support for the total mediation hypothesis.
Discussion
In this study, we have shown that knowing that a medicine produces side effects prevents the overestimation of its effectiveness that is typically observed when the percentage of spontaneous remissions is high [7], [10]. We demonstrated that this effect rests on the lower frequency of treatment usage exhibited by those participants who were aware of the medicine’s side effects.
Previous work in the more general domain of causal illusions
made use of a procedure similar to the one we employed in the no-cost group in that they did not mention any side effect of the
medicine. As a result, they found a spontaneous tendency to use
the treatment with relatively high frequency [7], [9], and,
consequently, a strong overestimation of the effectiveness of the
medicine. The cost manipulation that we implemented via the
explicit inclusion of side effects demonstrated how these illusions
can be readily reduced, and reinforced the idea that the causal
illusion is strongly determined by P(Cause) as previous research
suggests [7], [8], [9], [10].
We therefore assumed that the effect of manipulating the side
effects in this study is completely attributable to subsequent
changes in P(Cause). Further research is needed to study the
mechanism linking high P(Cause) and the illusory perception of
causality. One possible explanation for this P(Cause) effect rests on
the increase in the number of coincidences between the potential
cause (i.e., the use of the medicine) and the outcome (i.e., the
recovery from the disease). Because both the potential cause and
the outcome happen very frequently, they are likely to coincide
accidentally [10]. These fortuitous pairings lead to the develop-
ment of an illusory causal link between them, and thus to an
irrational effectiveness overestimation, similar to a superstition.
This was inspired by early precedents, namely, the adventitious
reinforcement designs reported by Skinner in 1948 [16] (although
Skinner’s interpretations of his data as "superstitious behavior"
were later criticized [17], recent advancements converge to
support the role of accidental coincidences in the development,
by reinforcement, of systematic behavior patterns even in
laboratory studies using animal subjects [18]). The coincidence-based account of the P(Cause) effect is readily accommodated by
current leading theories proposed to explain causal and associative
learning. For instance, associative theories such as the Rescorla-
Wagner model [19] predict the illusion of causality as long as
Figure 1. Two samples of the medical records used in the contingency learning task. Medical records were presented sequentially (one
new patient per trial). In these two samples, the fictitious patients were programmed to fail to recover. Thus, the outcomes for these two patients
show the same symptoms as in the initial state of the trial (greenish skin, sweat) that was always presented in the top panel of each record. The
record depicted in the top of the figure corresponds to a patient who was given the medicine by a participant in the high-cost group. The patient
developed the skin rash side effect which was added to the symptoms of the syndrome. The record depicted at the bottom of the figure corresponds
to a patient in which the participant decided not to use the medicine (the pill bottle is crossed out in red). This patient showed no additional
symptoms to the ones provoked by the syndrome alone.
doi:10.1371/journal.pone.0084084.g001
many cause-outcome coincidences occur early in the experiment.
Similarly, several alternative causal learning models, based on
statistical rules, predict the illusion because they weight more heavily the trials in which these coincidences occur [20], [21].
In any case, the relationship between P(Cause) and the illusion of
causality is a matter of theoretical discussion in the associative and
causal learning field, and goes beyond the focus of our work (see
[7] for further discussion on this point).
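As an illustration of how the Rescorla-Wagner model yields such a pre-asymptotic illusion, consider the following minimal simulation. This is our own sketch, not the authors' analysis: the parameter values are arbitrary, and a context cue assumed present on every trial competes with the medicine cue.

```python
import random

# Minimal Rescorla-Wagner sketch (illustrative parameters): a frequent
# cause and a frequent outcome, programmed to be independent, still give
# the cause cue positive associative strength early in training.
def rescorla_wagner(trials, alpha_cue=0.3, alpha_ctx=0.1, beta=0.8, lam=1.0):
    """trials: list of (cause_present, outcome_present) booleans.
    The background context is assumed present on every trial."""
    v_cue = v_ctx = 0.0
    for cause, outcome in trials:
        prediction = v_ctx + (v_cue if cause else 0.0)
        error = (lam if outcome else 0.0) - prediction
        v_ctx += alpha_ctx * beta * error      # context learns slowly
        if cause:
            v_cue += alpha_cue * beta * error  # cue learns faster
    return v_cue

rng = random.Random(7)
# P(Cause) = 0.9 and P(Outcome) = 0.7, sampled independently:
trials = [(rng.random() < 0.9, rng.random() < 0.7) for _ in range(50)]
strength = rescorla_wagner(trials)  # typically clearly positive after 50 trials
```

With more trials the cue's strength would eventually decay toward zero as the context absorbs the outcome's base rate, but over a session-length sample the accidental cue-outcome pairings leave a positive, illusory association.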
Importantly, the introduction of the high cost of using the
treatment renders our experimental task more natural and
ecological than the standard human contingency learning
paradigms commonly used in causal illusion research, in which
no cost of the action is imposed. A psychological experiment is
most often an artificial situation where little information is given
and few behavioral options are available to the participant. In
contrast, a typical real life situation involves a wide variety of
factors affecting human decisions and behaviors. These factors
may be economic cost, effort, magnitude of benefit, competing
alternatives, etc. While our side-effect manipulation arguably
made the task more ecological than paradigms commonly used in
the literature, one must remain cautious when extending our
results to real pseudomedicine use, since many of the above-
mentioned factors are outside the scope of our simplified
experimental setting. Once this limitation is acknowledged, we
believe it is plausible to assume that our results are applicable to
pseudomedicine use in several ways.
Most pseudomedicines are used in conditions that, like those in our experiment, promote the illusion of efficacy: these alternative treatments are most often applied to mild diseases with a high rate of spontaneous remission (headache, back pain, etc.). This parallels experiments in which there is a high probability of the desired outcome, resulting in strong overestimations of zero contingencies [4], [5], [6]. In addition, many
pseudomedicines are advertised as harmless, as opposed to most
conventional treatments, which typically produce undesired side
effects. As shown in our experiment, the lack of side effects increases the probability of the cause (i.e., the frequency with which the medicine is given), which in turn is another factor known to produce overestimations of zero contingencies [7], [8], [10]: frequently used medicines will more likely be considered effective even if they are completely useless. Moreover,
Figure 2. Stimuli used to represent the outcome information in
the high-cost group. These consisted of a picture and a message.
Each patient either recovered from the crisis (left column) or not (right
column). In addition, the cost (side effect) of the action was depicted as
a permanent skin rash whenever it was used (top row). The skin rash
was never observed otherwise (bottom row). In the no-cost group, regardless of the decision to use the medicine, the stimuli were identical
to those presented in the bottom row, except for the removal of the
reference to side effects in the accompanying messages (i.e., ‘‘The
patient has recovered from the crisis’’ or ‘‘The patient has not recovered
from the crisis’’ for the left and right panels, respectively).
doi:10.1371/journal.pone.0084084.g002
Figure 3. Results of the experiment. The left panel shows the mean
probability of introducing the potential cause, P(Cause), in each of the
two groups. The right panel shows the mean effectiveness judgments
given by participants in the two groups. Error bars depict 95%
confidence intervals for the means.
doi:10.1371/journal.pone.0084084.g003
Figure 4. Mediational structure underlying the experimental
manipulation. The total effect of the cost of the action on the
effectiveness judgments, depicted as path c (top panel), is partitioned
into two components, one indirect effect through P(Cause) (paths a and
b, bottom panel), and one direct effect (path c’, bottom panel), which is
the result of discounting the indirect effect. The unstandardized
coefficients and p-values for each pathway are provided. These results
suggest that the effect of the cost associated with the treatment,
manipulated between groups, was completely mediated by P(Cause).
doi:10.1371/journal.pone.0084084.g004
when both a high probability of the outcome and a high
probability of the cause are combined, the chances that the two
events coincide accidentally increase, and therefore the causal
illusion is strongly facilitated [5], as predicted by leading
theoretical accounts of causal learning. It can be safely predicted
that, once a treatment is considered at least somewhat effective, it
will be used even more frequently, resulting in higher chances of
further accidental coincidences. This feeds a vicious circle in which
accidental occurrences of a desired outcome reinforce actions that
are thought to produce them.
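The mechanism described here, two independent events co-occurring by accident more often when both are probable, can be sketched with a short simulation. The probabilities below are assumed for illustration and are not taken from the experiment:

```python
import numpy as np

rng = np.random.default_rng(1)
n_trials = 10_000

def coincidence_rate(p_cause, p_outcome):
    """Proportion of trials where an independent cause and outcome co-occur."""
    cause = rng.random(n_trials) < p_cause
    outcome = rng.random(n_trials) < p_outcome  # sampled independently of the cause
    # The true contingency is null: P(O|C) equals P(O|~C) in expectation.
    return np.mean(cause & outcome)

high = coincidence_rate(0.8, 0.8)  # high P(cause) and high P(outcome)
low = coincidence_rate(0.2, 0.2)   # low P(cause) and low P(outcome)
print(high, low)
```

Although the cause is useless in both settings, the high-probability setting produces roughly sixteen times as many accidental cause-outcome coincidences, which is the raw material of the causal illusion.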
If the knowledge acquired from the experimental research on
causal illusions and causal learning can be used to better
understand why certain pseudomedicines are popular with the
public, it can also be valuable in overcoming problems derived
from the belief in the effectiveness of these treatments. Our first
recommendation for the developers of educational interventions to
prevent or combat these beliefs is that they should stress the benefit
of using scientifically validated treatments over the benefit of
avoiding side effects. If the patient’s focus is on the latter point, she
will generally start using the pseudomedicine because of the low
cost associated with this action (e.g., there is no harm in using it).
We have documented that this is likely to result in an illusion of
effectiveness. Our second recommendation is to draw the patient’s
attention to the occasions in which the use of the medicine and
health improvement do not co-occur: either the treatment is not
followed by healing, or healing occurs without taking the
pseudomedicine. Because people spontaneously grant more credit
to response-outcome coincidences [22], and different theories agree
on the crucial role that these coincidences possess in the genesis of
causal illusions, reminding people to pay attention to the events in
which treatment and healing occur separately could prevent or
reduce the illusion. This approach was adopted in a recent
intervention designed for high school students described by
Barberia et al. [9]. Their intervention was successful in reducing
the tendency to develop causal illusions, as measured by means of
a contingency learning task very similar to the one used here. To
summarize, we believe that the knowledge obtained in basic
experimental psychology can fruitfully contribute to preventing and
eradicating certain widespread harmful behaviors and beliefs.
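The non-coincidence trials recommended above correspond to cells b (treatment, no healing) and c (no treatment, healing) of the standard 2×2 contingency table used throughout this literature, and the normative ΔP index weighs all four cells. A minimal sketch, with invented cell counts for illustration:

```python
def delta_p(a, b, c, d):
    """Standard contingency index: ΔP = P(outcome | cause) - P(outcome | no cause).

    a: cause present, outcome present    b: cause present, outcome absent
    c: cause absent, outcome present     d: cause absent, outcome absent
    """
    return a / (a + b) - c / (c + d)

# Many treatment-healing coincidences (cell a) can coexist with a null
# contingency: here healing is equally likely with or without the treatment.
print(delta_p(a=64, b=16, c=16, d=4))    # 0.8 - 0.8 = 0.0, no real effect
print(delta_p(a=80, b=20, c=20, d=80))   # positive: a genuinely useful cause
```

Attending only to cell a makes the first treatment look effective; factoring in cells b and c, as the intervention by Barberia et al. [9] encourages, reveals the null contingency.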
Supporting Information
Instructions S1 Full instructions of the experiment
(translated from the original in Spanish). The underlined
sentences were omitted from the no-cost group.
(PDF)
Acknowledgments
We thank Cristina Orgaz for her help in recruiting participants for the
experiment.
Author Contributions
Conceived and designed the experiments: FB IB HM. Performed the
experiments: FB IB. Analyzed the data: FB. Wrote the paper: FB IB HM.
References
1. Shang A, Huwiler-Müntener K, Nartey L, Jüni P, Dörig S, et al. (2005) Are the clinical effects of homeopathy placebo effects? Comparative study of placebo-controlled trials of homeopathy and allopathy. LANCET 366: 726–732. doi: 10.1016/S0140-6736(05)67177-2.
2. Freckelton I (2012) Death by homeopathy: issues for civil, criminal and coronial law and for health service policy. Journal of Law and Medicine 19: 454–478.
3. Matute H, Yarritu I, Vadillo MA (2011) Illusions of causality at the heart of pseudoscience. BRIT J PSYCHOL 102: 392–405. doi: 10.1348/000712610X532210.
4. Allan LG, Jenkins HM (1983) The effect of representations of binary variables on judgment of influence. LEARN MOTIV 14: 381–405. doi: 10.1016/0023-9690(83)90024-3.
5. Blanco F, Matute H, Vadillo MA (2013) Interactive effects of the probability of the cue and the probability of the outcome on the overestimation of null contingency. LEARN BEHAV. Advance online publication. doi: 10.3758/s13420-013-0108-8.
6. Buehner MJ, Cheng PW, Clifford D (2003) From covariation to causation: A test of the assumption of causal power. J EXP PSYCHOL LEARN 29: 1119–1140. doi: 10.1037/0278-7393.29.6.1119.
7. Blanco F, Matute H, Vadillo MA (2011) Making the uncontrollable seem controllable: The role of action in the illusion of control. Q J EXP PSYCHOL 64: 1290–1304. doi: 10.1080/17470218.2011.552727.
8. Hannah S, Beneteau JL (2009) Just tell me what to do: Bringing back experimenter control in active contingency tasks with the command-performance procedure and finding cue-density effects along the way. CAN J EXP PSYCHOL 63: 59–73. doi: 10.1037/a0013403.
9. Barberia I, Blanco F, Cubillas CP, Matute H (2013) Implementation and assessment of an intervention to debias adolescents against causal illusions. PLOS ONE 8(8): e71303. doi: 10.1371/journal.pone.0071303.
10. Matute H (1996) Illusion of control: Detecting response-outcome independence in analytic but not in naturalistic conditions. PSYCHOL SCI 7: 289–293. doi: 10.1111/j.1467-9280.1996.tb00376.
11. Blanco F, Matute H, Vadillo MA (2012) Mediating role of the activity level in the depressive realism effect. PLOS ONE 7(9): e46203. doi: 10.1371/journal.pone.0046203.
12. Yarritu I, Matute H, Vadillo MA (2013) Illusion of control: The role of personal involvement. EXP PSYCHOL. doi: 10.1027/1618-3169/a000225.
13. Frankel MS, Siang S (1999) Ethical and legal aspects of human subjects research in cyberspace. Report of a workshop convened by the American Association for the Advancement of Science, Program on Scientific Freedom, Responsibility, and Law, Washington D.C. Available: http://www.aaas.org/spp/dspp/sfrl/projects/intres/main.htm. Accessed 2013 Jul 24.
14. Wasserman EA (1990) Detecting response-outcome relations: Toward an understanding of the causal texture of the environment. In: Bower GH, editor. The psychology of learning and motivation, Vol. 26. San Diego, CA: Academic Press. pp. 27–82.
15. Hayes AF (2012) PROCESS: A versatile computational tool for observed variable mediation, moderation, and conditional process modeling. Available: http://www.personal.psu.edu/jxb14/M554/articles/process2012.pdf. Accessed 2013 Nov 21.
16. Skinner BF (1948) Superstition in the pigeon. J EXP PSYCHOL GEN 38: 168–172. doi: 10.1037/h0055873.
17. Staddon JER, Simmelhag VL (1971) The "superstition" experiment: A re-examination of its implications for the principles of adaptive behavior. PSYCHOL REV 78: 3–43. doi: 10.1037/h0030305.
18. Killeen PR, Pellón R (2013) Adjunctive behaviors are operants. LEARN BEHAV 41: 1–24. doi: 10.3758/s13420-012-0095-1.
19. Rescorla RA, Wagner AR (1972) A theory of Pavlovian conditioning: Variations in the effectiveness of reinforcement and nonreinforcement. In: Black AH, Prokasy WF, editors. Classical conditioning II: Current research and theory. New York: Appleton-Century-Crofts. pp. 64–99.
20. Perales JC, Shanks DR (2007) Models of covariation-based causal judgments: A review and synthesis. PSYCHON B REV 14: 577–596. doi: 10.3758/BF03196807.
21. White PA (2004) Causal judgment from contingency information: A systematic test of the pCI rule. MEM COGNITION 32: 353–368. doi: 10.3758/BF03195830.
22. Kao S-F, Wasserman EA (1993) Assessment of an information integration account of contingency judgment with examination of subjective cell importance and method of information presentation. J EXP PSYCHOL LEARN 19: 1363–1386. doi: 10.1037/0278-7393.19.6.1363.
Belief in Pseudomedicine as Casual Illusion
PLOS ONE | www.plosone.org 6 January 2014 | Volume 9 | Issue 1 | e84084
... Pseudotherapy use has been linked to negative experiences with conventional evidence-based medicine, ranging from a negative doctor-patient interaction to the perception of conventional medicine's ineffectiveness, through to the side effects of conventional treatments [5][6][7]. In this way, the use of pseudotherapies would be motivated by being perceived as riskfree, in part because their development has not been influenced by the interests of pharmaceutical companies [8,9]. However, motivation for pseudotherapy use is not simply derived from a rejection of conventional medicine or a poor perception of health systems [10]. ...
... • Interest in/knowledge of Medicine and Health: low [2][3][4]; medium [5][6][7]; high [8][9][10]. Variable constructed from the sum of the following Likert-type variables (range 1-5): 1) Now, I would like to know if you are very little, little, somewhat, quite or very interested in the following topics: medicine and health. ...
... 2) Now, I would like you to tell me whether you consider yourself: very little, a little, somewhat, quite or very informed about each of these same topics: medicine and health. • Perceived health: In general, would you say that your health is ...? good/very good; fair/bad/very bad • Quality of the health system: In general, how would you rate the quality of the public health system: good/rather good; rather bad/bad • Confidence in homeopathy and acupuncture: low [2][3][4]; mean [5][6][7]; high [8][9][10]. Variable constructed from the summation of the following Likert-type variables (range 1-5): Of the following practices, please tell me if you trust a lot, a lot, something, little or not at all in its usefulness for health and general well-being: 1) acupuncture; 2) homeopathy • Scientific validity of homeopathy/acupuncture: low [2][3][4]; medium [5][6][7]; high [8][9][10]. ...
Article
Full-text available
Objectives: To identify how perceptions, attitudes, and beliefs towards pseudotherapies, health, medicine, and the public health system influence the pseudotherapy use in Spain. Methods: We carried out a cross-sectional study using the Survey of Social Perception of Science and Technology-2018 (5,200 interviews). Dependent variable: ever use of pseudotherapies. Covariables: attitude towards medicine, health and public health system; perceived health; assessment of the scientific character of homeopathy/acupuncture. The association was estimated using prevalence ratios obtained by Poisson regression models. The model was adjusted for age and socioeconomic variables. Results: Pseudotherapy use was higher in women (24.9%) than in men (14.2%) (p < 0.001). The probability of use in men (p < 0.001) and women (p < 0.001) increases with the belief in pseudotherapies’ usefulness. Among men, a proactive attitude (reference: passive) towards medicine and health (RP:1.3), and a negative (reference: positive) assessment of the quality of the public health system increased use-probability (RP:1.2). For women, poor health perceived (referencie: good) increased likelihood of use (RP:1.2). Conclusion: Pseudotherapy use in Spain was associated with confidence in its usefulness irrespective of users’ assessment of its scientific validity.
... This line of research that uses causal learning experiments to study health beliefs has shown some promising advances. For example, it is possible to predict which conditions will make patients and users more vulnerable to pseudomedicine and bogus health claims (Blanco et al., 2014;, to discover situations in which previously acquired beliefs interfere with actual effectiveness (Yarritu et al., 2015), to investigate how health beliefs are affected by biases in Internet search , to explain why certain patients are hypersensitive to pain symptoms (Meulders et al., 2018), and to improve the effect of placebos (Yeung et al., 2014). This knowledge has the potential to offer a valuable foundation for designing interventions aimed at debiasing dysfunctional beliefs in real life settings (Lewandowsky et al., 2012;Macfarlane et al., 2020). ...
... Rather, they must form their beliefs of effectiveness on the basis of a more limited comparison: how often symptoms were observed before the treatment started vs. how often they occur during the treatment, on the same patient (usually, themselves). Most causal learning experiments do not take into account this limitation, and instead provide participants with information about a series of different patients (Blanco et al., 2014;Matute et al., 2019). This is useful to investigate the formation of causal knowledge in general, but it is not realistic when applied to the case of patients' beliefs of effectiveness, as the procedure clearly departs from the actual experience of patients with their own treatments. ...
... However, researchers have also reported systematic deviations, or biases. In particular, when the probability of the desired outcome is high, judgments tend to be higher even in null contingency conditions (Alloy and Abramson, 1979;Buehner et al., 2003;Blanco et al., 2014Chow et al., 2019), contributing to what has been called a "causal illusion." This is a bias consisting of the belief in a causal link that is actually inexistent (Matute et al., 2015;Matute et al., 2019). ...
Article
Causal illusions have been postulated as cognitive mediators of pseudoscientific beliefs, which, in turn, might lead to the use of pseudomedicines. However, while the laboratory tasks aimed to explore causal illusions typically present participants with information regarding the consequences of administering a fictitious treatment versus not administering any treatment, real-life decisions frequently involve choosing between several alternative treatments. In order to mimic these realistic conditions, participants in two experiments received information regarding the rate of recovery when each of two different fictitious remedies were administered. The fictitious remedy that was more frequently administered was given higher effectiveness ratings than the low-frequency one, independent of the absence or presence of information about the spontaneous recovery rate. Crucially, we also introduced a novel dependent variable that involved imagining new occasions in which the ailment was present and asking participants to decide which treatment they would opt for. The inclusion of information about the base rate of recovery significantly influenced participants’ choices. These results imply that the mere prevalence of popular treatments might make them seem particularly effective. It also suggests that effectiveness ratings should be interpreted with caution as they might not accurately reflect real treatment choices. Materials and datasets are available at the Open Science Framework [https://osf.io/fctjs/].
... This line of research that uses causal learning experiments to study health beliefs has shown some promising advances. For example, it is possible to predict which conditions will make patients and users more vulnerable to pseudomedicine and bogus health claims (Blanco et al., 2014;, to discover situations in which previously acquired beliefs interfere with actual effectiveness (Yarritu et al., 2015), to investigate how health beliefs are affected by biases in Internet search , to explain why certain patients are hypersensitive to pain symptoms (Meulders et al., 2018), and to improve the effect of placebos (Yeung et al., 2014). This knowledge has the potential to offer a valuable foundation for designing interventions aimed at debiasing dysfunctional beliefs in real life settings (Lewandowsky et al., 2012;Macfarlane et al., 2020). ...
... Rather, they must form their beliefs of effectiveness on the basis of a more limited comparison: how often symptoms were observed before the treatment started vs. how often they occur during the treatment, on the same patient (usually, themselves). Most causal learning experiments do not take into account this limitation, and instead provide participants with information about a series of different patients (Blanco et al., 2014;Matute et al., 2019). This is useful to investigate the formation of causal knowledge in general, but it is not realistic when applied to the case of patients' beliefs of effectiveness, as the procedure clearly departs from the actual experience of patients with their own treatments. ...
... However, researchers have also reported systematic deviations, or biases. In particular, when the probability of the desired outcome is high, judgments tend to be higher even in null contingency conditions (Alloy and Abramson, 1979;Buehner et al., 2003;Blanco et al., 2014Chow et al., 2019), contributing to what has been called a "causal illusion." This is a bias consisting of the belief in a causal link that is actually inexistent (Matute et al., 2015;Matute et al., 2019). ...
Article
Full-text available
Patients' beliefs about the effectiveness of their treatments are key to the success of any intervention. However, since these beliefs are usually formed by sequentially accumulating evidence in the form of the covariation between the treatment use and the symptoms, it is not always easy to detect when a treatment is actually working. In Experiments 1 and 2, we presented participants with a contingency learning task in which a fictitious treatment was actually effective to reduce the symptoms of fictitious patients. However, the base-rate of the symptoms was manipulated so that, for half of participants, the symptoms were very frequent before the treatment, whereas for the rest of participants, the symptoms were less frequently observed. Although the treatment was equally effective in all cases according to the objective contingency between the treatment and healings, the participants' beliefs on the effectiveness of the treatment were influenced by the base-rate of the symptoms, so that those who observed frequent symptoms before the treatment tended to produce lower judgments of effectiveness. Experiment 3 showed that participants were probably basing their judgments on an estimate of effectiveness relative to the symptom base-rate, rather than on contingency in absolute terms. Data and materials are publicly available at the Open Science Framework: https://osf.io/emzbj/
... Surely, advertising campaigns and hearsay play an important role on the acquisition of treatment effectiveness beliefs (de Barra, 2017). Additionally, perhaps they can form gradually as people acquire experience with the treatment (Blanco et al., 2014;Rottman et al., 2017). This means such beliefs are in part the result of causal learning, the ability to acquire causal links between potential causes and their effects, in this case between the use of a treatment and a health improvement, as some authors have argued . ...
... Previous experiments have documented how patients tend to overestimate the effectiveness of a completely ineffective drug when they feel symptomatic relief often (but as frequently when taking the drug as when not taking it). Moreover, if the pseudotherapy is used with high Running head: PERCEIVING TREATMENT EFFECTIVENESS 3 frequency too (e.g., because it has no side effects), then the OD bias becomes stronger, resulting in a large overestimation of the causal relationship between taking the drug and feeling better (Blanco et al., 2014). Yet there is a potentially critical factor that has not yet been studied in this particular domain: What would happen to beliefs of effectiveness if symptoms improved gradually while taking the pseudotherapy, just by mere chance? ...
... Experiment 3 presents an interesting question, as it is designed to work under the assumption of non-independence between trials. Most contingency learning experiments describe trials that can be naturally interpreted as mutually independent: for example, it is common to present a sequence of trials arranged in random order, each one corresponding to a different individual patient (Blanco et al., 2014;Blanco and Matute, 2019). Thus, it is unlikely that participants assume that treating one patient could affect subsequent patients. ...
Article
Full-text available
Rationale Self-limited diseases resolve spontaneously without treatment or intervention. From the patient's viewpoint, this means experiencing an improvement of the symptoms with increasing probability over time. Previous studies suggest that the observation of this pattern could foster illusory beliefs of effectiveness, even if the treatment is completely ineffective. Therefore, self-limited diseases could provide an opportunity for pseudotherapies to appear as if they were effective. Objective In three computer-based experiments, we investigate how the beliefs of effectiveness of a pseudotherapy form and change when the disease disappears gradually regardless of the intervention. Methods Participants played the role of patients suffering from a fictitious disease, who were being treated with a fictitious medicine. The medicine was completely ineffective, because symptom occurrence was uncorrelated to medicine intake. However, in one of the groups the trials were arranged so that symptoms were less likely to appear at the end of the session, mimicking the experience of a self-limited disease. Except for this difference, both groups received similar information concerning treatment effectiveness. Results In Experiments 1 and 2, when the disease disappeared progressively during the session, the completely ineffective medicine was judged as more effective than when the same information was presented in a random fashion. Experiment 3 extended this finding to a new situation in which symptom improvement was also observed before the treatment started. Conclusions We conclude that self-limited diseases can produce strong overestimations of effectiveness for treatments that actually produce no effect. This has practical implications for preventative and primary health services. The data and materials that support these experiments are freely available at the Open Science Framework (https://bit.ly/2FMPrMi)
... Even in laboratory experiments, a relevant amount of evidence has been collected in computer tasks that used meaningful scenarios, such as the typical medicine-evaluation task, in which participants are asked to judge the effectiveness of a medicine in treating a fictitious disease. These experiments have revealed important information that can be used to alleviate the undesired effects of the causal illusion in real life situations, such as pseudomedicine usage or self-medication [7]. PLOS The experimental paradigm typically used to investigate causal illusions is the contingency learning task [8,9]. ...
... Previous research has proposed that causal illusions such as those reported here entail both good and bad consequences for people [57]. On the one hand, the causal illusion could underlie many every-day irrational beliefs, attitudes and behaviors: pseudomedicine usage, paranormal beliefs, pathological gambling, and superstitions [2,7,[58][59][60]. These practices can even be dangerous (e.g., when one person resorts to a pseudomedicine, abandoning a valid treatment). ...
Article
Full-text available
Previous research revealed that people’s judgments of causality between a target cause and an outcome in null contingency settings can be biased by various factors, leading to causal illusions (i.e., incorrectly reporting a causal relationship where there is none). In two experiments, we examined whether this causal illusion is sensitive to prior expectations about base-rates. Thus, we pretrained participants to expect either a high outcome base-rate (Experiment 1) or a low outcome base-rate (Experiment 2). This pretraining was followed by a standard contingency task in which the target cause and the outcome were not contingent with each other (i.e., there was no causal relation between them). Subsequent causal judgments were affected by the pretraining: When the outcome base-rate was expected to be high, the causal illusion was reduced, and the opposite was observed when the outcome base-rate was expected to be low. The results are discussed in the light of several explanatory accounts (associative and computational). A rational account of contingency learning based on the evidential value of information can predict our findings.
... Accordingly, CAM subscribers may be more liberal in using these treatments since the short-term cost of treatment use is low. The effect of the cost of administration was investigated in Blanco, Barberia, and Matute's (2014) study, in which participants were presented with two fictitious drugs, one with a side effect and one without. The researchers found that participants were more likely to administer the drug without any side effects, and the frequency of drug administration was highly predictive of illusory causation; participants exposed to the drug frequently (i.e., high cue density) were more likely to judge the treatment as being more efficacious than those who rarely administered the drug and thus, had fewer cue-present events (Blanco, Barberia, & Matute, 2014). ...
... The effect of the cost of administration was investigated in Blanco, Barberia, and Matute's (2014) study, in which participants were presented with two fictitious drugs, one with a side effect and one without. The researchers found that participants were more likely to administer the drug without any side effects, and the frequency of drug administration was highly predictive of illusory causation; participants exposed to the drug frequently (i.e., high cue density) were more likely to judge the treatment as being more efficacious than those who rarely administered the drug and thus, had fewer cue-present events (Blanco, Barberia, & Matute, 2014). ...
Article
Full-text available
Illusory causation refers to a consistent error in human learning in which the learner develops a false belief that two unrelated events are causally associated. Laboratory studies usually demonstrate illusory causation by presenting two events—a cue (e.g., drug treatment) and a discrete outcome (e.g., patient has recovered from illness)—probabilistically across many trials such that the presence of the cue does not alter the probability of the outcome. Illusory causation in these studies is further augmented when the base rate of the outcome is high, a characteristic known as the outcome density effect. Illusory causation and the outcome density effect provide laboratory models of false beliefs that emerge in everyday life. However, unlike laboratory research, the real-world beliefs to which illusory causation is most applicable (e.g., ineffective health therapies) often involve consequences that are not readily classified in a discrete or binary manner. This study used a causal learning task framed as a medical trial to investigate whether similar outcome density effects emerged when using continuous outcomes. Across two experiments, participants observed outcomes that were either likely to be relatively low (low outcome density) or likely to be relatively high (high outcome density) along a numerical scale from 0 (no health improvement) to 100 (full recovery). In Experiment 1, a bimodal distribution of outcome magnitudes, incorporating variance around a high and low modal value, produced illusory causation and outcome density effects equivalent to a condition with two fixed outcome values. In Experiment 2, the outcome density effect was evident when using unimodal skewed distributions of outcomes that contained more ambiguous values around the midpoint of the scale. Together, these findings provide empirical support for the relevance of the outcome density bias to real-world situations in which outcomes are not binary but occur to differing degrees. 
This has implications for the way in which we apply our understanding of causal illusions in the laboratory to the development of false beliefs in everyday life. Electronic supplementary material The online version of this article (10.1186/s41235-018-0149-9) contains supplementary material, which is available to authorized users.
... In these cases, some people develop the conviction of the effectiveness of a treatment, even when there is no real covariation between using the pseudomedicine or the miracle product and the improvement . In fact, as previously noted by Blanco et al. (2014), pseudomedicines seem to proliferate in contexts that are ideal for the rise of causal illusions: first, they are usually applied to health problems (e.g., back pain, headache) that tend to show spontaneous recovery (high outcome density) and, since they are frequently advertised as lacking side effects, they tend to be used recurrently (high cause density). As far as misbeliefs such as these can strongly impact relevant daily life decisions, understanding the mechanisms by which they are formed, the circumstances in which they most probably proliferate, and the expected evolution of these over time seems essential. ...
... As we have already noted, causal illusions are widely present in our everyday life, and they influence some relevant quotidian decisions like those related to health. Pseudomedicines tend to be repeatedly applied to health conditions associated to high spontaneous recovery rates (Blanco et al., 2014). These high cause density and high outcome density situations are ideal for the appearance of causal illusions. ...
Article
Full-text available
We carried out an experiment using a conventional causal learning task but extending the number of learning trials participants were exposed to. Participants in the standard training group were exposed to 48 learning trials before being asked about the potential causal relationship under examination, whereas for participants in the long training group the length of training was extended to 288 trials. In both groups, the event acting as the potential cause had zero correlation with the occurrence of the outcome, but both the outcome density and the cause density were high, therefore providing a breeding ground for the emergence of a causal illusion. In contradiction to the predictions of associative models such the Rescorla-Wagner model, we found moderate evidence against the hypothesis that extending the learning phase alters the causal illusion. However, assessing causal impressions recurrently did weaken participants’ causal illusions.
... Realizamos asociaciones, como la herradura con el bienestar y la ausencia de problemas, de forma inconsciente. Al igual ocurre con el llamado "efecto placebo" de cualquier sustancia inocua que, a pesar de no contener medicamento alguno, produce un efecto curativo y de bienestar en el paciente, quien al tomarlo proyecta la ilusión de la causa-efecto (Blanco et al., 2014) y siente la mejora. ...
Thesis
During this research, called Prototypes and archetypes of the representation of sleep paralysis: an approach from art, we analyzed, as its title indicates, the different artistic prototypes and archetypes that have emerged around a neurological sleep disorder known as sleep paralysis. This parasomnia takes place during the transition from sleep to wakefulness, responding to common symptoms that cause great suffering and fear to those afflicted by it, primarily through visual sensory hallucinations. Due to the limited and scarce information on this sleep disorder in the field of artistic research, a medical approach has been followed in the first and second chapters, accompanied by a discussion of the relevant psychological aspects, which will enable a better understanding of the anthropological field that surrounds it. This allows us to enter in the third chapter, where the cultural evolution of this parasomnia in the anthropological context is investigated through the examination of the mythology surrounding the incubus and the succubus, both of which are figures that are frequently associated with sleep paralysis. Their respective interpretation and interiorization as real beings will provide, through the association of ideas and the collective imagination, different social behavioral values to people regarding their experience with sleep paralysis. In the fourth chapter, an exhaustive analysis of prototypes and archetypes arising from the artistic representation of sleep paralysis is presented, focusing on the study of the work The Nightmare (1781) by Henry Füssli. A categorization and methodological chronology of different works ranging from the 18th century to the present day is discovered, which allows us to understand and study their analogous corresponding representation in art. 
In the fifth chapter, it is provided a reflection On the artistic representation and interpretation of different concepts associated with sleep paralysis, such as identity, memory and the emotion of fear, which has forwarded our understanding of this sleep disorder. At the same time, a study specifically designed for this project involved the collection of testimonies of people who have experienced sleep paralysis, in order to study their visual patterns in hallucinations from their descriptions. In the sixth and last chapter, a new perspective on the representation of sleep paralysis is proposed through the creation of subjective visual works (based on the testimonies) using photographic techniques. The methodology used to undertake this research involved the study and analysis of ancient medical and cultural treatises, such as the Persian manuscript Hidayat by Akhawayni Bohkari from the 10th century, The Discoverie of Witchcraft (1584) by Reginald Scot, the story The Night-Mare (1664) by Isbrand Van Diermerbroeck, the essay An essay on the incubus, or nightmare (1753) by John Bond and The Nightmare (1931) by Ernest Jones, among others. In additioninterviews were taken from contemporary artists who currently represent sleep paralysis very similar and were assembled in a compendium. Furthermore, an analytical and statistical study was also carried out, based on interviews of people who have suffered from this sleep disorder accompanied by a collection of written testimonies submitted through a web page created specifically for this artistic study. One of the main objectives was to develop a codified study of the myths and legends in different cultures and countries, and to understand their symbolic representation based on their popular imagery and the existing tradition in the category of the monstrous and the figure of the incubus in art. 
Specifically, we traced the above-mentioned work The Nightmare by Füssli, whose influence persists to this day as the most representative prototype and archetype of sleep paralysis. These research outputs allow us to reflect on, recreate, and question the representation of sleep paralysis in art up to the present day. The final objective is to approach the subjective representation of the experience of sleep paralysis, breaking with the prototype and archetype created over the years. To this end, new patterns of representation are proposed through the author's artistic creation based on the collected testimonies, in order to create a visual guide that serves as a means of understanding for a society that has no prior experience of sleep paralysis. As a final conclusion, the interdisciplinary nature of this research has allowed us to understand the mythology and beliefs associated with sleep paralysis, enabling the identification and designation of possible prototypes and archetypes in the artistic representation of this parasomnia, marked by a powerful collective imagination. The artistic work presented here creates novel prototypes and archetypes of sleep paralysis, which greatly advances our understanding of this experience. As shown, this work is considerably better understood when it is accompanied by the description of the testimonies, as this establishes a communication code between text and image. Nevertheless, although a new proposal for the representation of sleep paralysis in art is emerging, the timeless value of Füssli's The Nightmare is confirmed here.
With this study, and with the resulting artistic works, we are able to bring the experience of this parasomnia closer to a public that was unaware of it, which also reveals how the imagination operates at both the collective and the personal level, since it is built within each individual from components that are culturally inherited and are transmitted and expressed through art.
... Once a belief had been consolidated, confirmation bias effects were found despite the presence of 2-sided evidence, mitigating selective evidence exposure as an explanation for confirmation bias effects (Doherty & Mynatt, 1990; Doherty, Mynatt, Tweney, & Schiavo, 1979; Klayman, 1995; Klayman & Ha, 1987). It is expected, however, that in environments in which individuals can be selective in what they see (e.g., always choosing to take the medicine, so that the alternative of the disease disappearing without taking a medicine is never seen), the consolidation threshold would likely be crossed more quickly (Blanco, Barberia, & Matute, 2014; Doherty et al., 1979; Yarritu, Matute, & Vadillo, 2014). Similarly, it is expected that factors that increase participants' trust in the source of information (of which increasing the number of anonymous comments was one) will increase the degree of belief uptake (Yaniv & Kleinberger, 2000; Yaniv, 2004), along with factors that provide supplementary motivations for confirmation (Kunda, 1990). ...
Thesis
Full-text available
Studies of people's beliefs about how much they control events have shown that people often overestimate the extent to which an outcome depends on their own behavior. The purpose of this study was to assess how emotional characteristics and the formulation of the question affect the illusion of control, depending on whether the outcome is desirable or undesirable. It was assumed that the illusion of control depends on the amount of effort applied to achieve the outcome. It was also suggested that the illusion of control would be reduced by asking a causal question, both in the case where the outcome is desirable and the participant acts to make it appear, and in the case where the outcome is undesirable and the participant acts to prevent it from occurring. No influence of the cause-effect question or of emotional characteristics on the magnitude of the illusion of control, as measured by participants' self-ratings, was found. There was also no correlation between the amount of effort and the illusion of control.
Article
Full-text available
Researchers have warned that causal illusions are at the root of many superstitious beliefs and fuel many people's faith in pseudoscience, thus generating significant suffering in modern society. Therefore, it is critical that we understand the mechanisms by which these illusions develop and persist. A vast amount of research in psychology has investigated these mechanisms, but little work has been done on the extent to which it is possible to debias individuals against causal illusions. We present an intervention in which a sample of adolescents was introduced to the concept of experimental control, focusing on the need to consider the base rate of the outcome variable in order to determine if a causal relationship exists. The effectiveness of the intervention was measured using a standard contingency learning task that involved fake medicines that typically produce causal illusions. Half of the participants performed the contingency learning task before participating in the educational intervention (the control group), and the other half performed the task after they had completed the intervention (the experimental group). The participants in the experimental group made more realistic causal judgments than did those in the control group, which served as a baseline. To the best of our knowledge, this is the first evidence-based educational intervention that could be easily implemented to reduce causal illusions and the many problems associated with them, such as superstitions and belief in pseudoscience.
Article
Full-text available
The illusion of control consists of overestimating the influence that our behavior exerts over uncontrollable outcomes. Available evidence suggests that an important factor in the development of this illusion is the personal involvement of participants who are trying to obtain the outcome. The dominant view assumes that this is due to social motivations and self-esteem protection. We propose that it may instead be due to a bias in contingency detection that occurs when the probability of the action (i.e., of the potential cause) is high. Indeed, personal involvement may often have been confounded with the probability of acting, as participants who are more involved tend to act more frequently than those for whom the outcome is irrelevant and who therefore become mere observers. We tested these two variables separately. In two experiments, the outcome was always uncontrollable, and we used a yoked design in which the participants in one condition were actively involved in obtaining it while the participants in the other condition observed the adventitious cause-effect pairs. The results support the latter approach: those acting more often to obtain the outcome developed stronger illusions, and so did their yoked counterparts.
Article
Full-text available
Adjunctive behaviors such as schedule-induced polydipsia are said to be induced by periodic delivery of incentives, but not reinforced by them. That standard treatment assumes that contingency is necessary for conditioning and that delay of reinforcement gradients are very steep. The arguments and evidence for this position are reviewed and rejected. In their place, data are presented that imply different gradients for different classes of responses. Proximity between response and reinforcer, rather than contingency or contiguity, is offered as a key principle of association. These conceptions organize a wide variety of observations and provide the rudiments for a more general theory of conditioning.
Article
Full-text available
Replication and extension of Skinner's "superstition" experiment showed the development of 2 kinds of behavior at asymptote: (a) interim activities, related to adjunctive behavior, which occurred just after food delivery; and (b) the terminal response, a discriminated operant, which occurred toward the end of the interval and continued until food delivery. These data suggest a view of operant conditioning (the terminal response) in terms of 2 sets of principles: principles of behavioral variation that describe the origins of behavior appropriate to a situation, in advance of reinforcement; and principles of reinforcement that describe the selective elimination of behavior so produced. This approach was supported by (a) an account of the parallels between the law of effect and evolution by means of natural selection; (b) its ability to elucidate persistent problems in learning, e.g., continuity vs. noncontinuity, variability associated with extinction, the relationship between classical and instrumental conditioning, the controversy between behaviorist and cognitive approaches to learning; and (c) its ability to deal with a number of recent anomalies in the learning literature (instinctive drift, auto-shaping, and auto-maintenance). The interim activities are interpreted in terms of interactions among motivational systems, and this view is supported by a review of the literature on adjunctive behavior and by comparison with similar phenomena in ethology (displacement, redirection, and vacuum activities). The proposed theoretical scheme represents a shift away from hypothetical laws of learning toward an interpretation of behavioral change in terms of interaction and competition among tendencies to action according to principles evolved in phylogeny. (4 p. ref.) (PsycINFO Database Record (c) 2012 APA, all rights reserved)
Article
This chapter discusses how experimental psychology is no longer a unified field of scholarship. The most obvious sign of disintegration is the division of the Journal of Experimental Psychology into specialized periodicals. Many forces propel this fractionation. First, the explosion of interest in many small spheres of inquiry has made it extremely difficult for an individual to master more than one. Second, the recent popularity of interdisciplinary research has lured many workers away from the central issues of experimental psychology. Third, there is a growing division between researchers of human and animal behavior; this division has been driven primarily by contemporary cognitive psychologists, who see little reason to refer to the behavior of animals or to inquire into the generality of behavioral principles. The chapter considers the study of causal perception, an area that is certainly at the core of experimental psychology. Although recent research in animal cognition has taken the tack of bringing human paradigms into the animal laboratory, the experimental research described here has adopted the reverse strategy of bringing animal paradigms into the human laboratory. A further unfortunate fact is that today's experimental psychologists are receiving little or no training in the history and philosophy of psychology. This neglect means that investigations of a problem area are often undertaken without a full understanding of the analytical issues that would help guide empirical inquiry.
Article
Experiments in which subjects are asked to analytically assess response-outcome relationships have frequently yielded accurate judgments of response-outcome independence, but more naturalistically set experiments in which subjects are instructed to obtain the outcome have frequently yielded illusions of control. The present research tested the hypothesis that a differential probability of responding, p(R), between these two traditions could be at the basis of these different results. Subjects received response-independent outcomes and were instructed either to obtain the outcome (naturalistic condition) or to behave scientifically in order to find out how much control over the outcome was possible (analytic condition). Subjects in the naturalistic condition tended to respond at almost every opportunity and developed a strong illusion of control. Subjects in the analytic condition maintained their p(R) at a point close to .5 and made accurate judgments of control. The illusion of control observed in the naturalistic condition appears to be a collateral effect of a high tendency to respond in subjects who are trying to obtain an outcome; this tendency to respond prevents them from learning that the outcome would have occurred with the same probability if they had not responded.
Article
Overestimations of null contingencies between a cue, C, and an outcome, O, are widely reported effects that can arise for multiple reasons. For instance, a high probability of the cue, P(C), and a high probability of the outcome, P(O), are conditions that promote such overestimations. In two experiments, participants were asked to judge the contingency between a cue and an outcome. Both P(C) and P(O) were given extreme values (high and low) in a factorial design, while maintaining the contingency between the two events at zero. While we were able to observe main effects of the probability of each event, our experiments showed that the cue- and outcome-density biases interacted such that a high probability of the two stimuli enhanced the overestimation beyond the effects observed when only one of the two events was frequent. This evidence can be used to better understand certain societal issues, such as belief in pseudoscience, that can be the result of overestimations of null contingencies in high-P(C) or high-P(O) situations.
Article
Two experiments used a rich and systematic set of noncontingent problems to examine humans' ability to detect the absence of an inter-event relation. Each found that Ss who used nonnormative strategies were quite inaccurate in judging some types of noncontingent problems. Group data indicate that Ss used the 2 × 2 information in the order Cell A > Cell B > Cell C > Cell D; individual data indicate that Ss considered the information in Cell A to be most important, that in Cell D to be least important, and that in Cells B and C to be of intermediate importance. Trial-by-trial presentation led to less accurate contingency judgments and to more uneven use of 2 × 2 cell information than did summary-table presentation. Finally, the judgment processes of about 70% and 80%, respectively, of nonnormative strategy users under trial-by-trial and summary-table procedures could be accounted for by an averaging model. (PsycINFO Database Record (c) 2012 APA, all rights reserved)
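The 2 × 2 structure referred to above (Cells A-D) underlies the null-contingency judgments discussed in these abstracts. As a minimal sketch, assuming the standard ΔP rule commonly used in this literature (the function name and the example counts are illustrative, not taken from any of the studies listed), contingency can be computed from the four cell frequencies like this:

```python
def delta_p(a, b, c, d):
    """ΔP contingency from a 2x2 table.

    a: cue present,  outcome present   (Cell A)
    b: cue present,  outcome absent    (Cell B)
    c: cue absent,   outcome present   (Cell C)
    d: cue absent,   outcome absent    (Cell D)
    """
    p_outcome_given_cue = a / (a + b)        # P(O | C)
    p_outcome_given_no_cue = c / (c + d)     # P(O | ~C)
    return p_outcome_given_cue - p_outcome_given_no_cue

# A null contingency despite a high P(C) and a high P(O):
# the outcome occurs 75% of the time whether or not the cue is present,
# yet the cue appears on 80 of 100 trials and the outcome on 75 of 100.
print(delta_p(60, 20, 15, 5))  # → 0.0
```

Under ΔP, overweighting Cell A (as the individual data in the last abstract suggest) inflates judged contingency precisely in these high-P(C), high-P(O) tables, since Cell A is then the most frequent cell.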