Article

Interactive effects of the probability of the cue and the probability of the outcome on the overestimation of null contingency

Abstract

Overestimations of null contingencies between a cue, C, and an outcome, O, are widely reported effects that can arise for multiple reasons. For instance, a high probability of the cue, P(C), and a high probability of the outcome, P(O), are conditions that promote such overestimations. In two experiments, participants were asked to judge the contingency between a cue and an outcome. Both P(C) and P(O) were given extreme values (high and low) in a factorial design, while maintaining the contingency between the two events at zero. While we were able to observe main effects of the probability of each event, our experiments showed that the cue- and outcome-density biases interacted such that a high probability of the two stimuli enhanced the overestimation beyond the effects observed when only one of the two events was frequent. This evidence can be used to better understand certain societal issues, such as belief in pseudoscience, that can be the result of overestimations of null contingencies in high-P(C) or high-P(O) situations.
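
To make the design concrete, here is a minimal sketch (in Python; the function names and trial counts are ours, not the authors' materials) of how a null-contingency stream with factorially manipulated P(C) and P(O) can be generated and checked. The cue and the outcome are sampled independently, so their contingency, ΔP = P(O|C) − P(O|¬C), is zero by construction:

```python
import random

def make_trials(p_cue, p_outcome, n=100, seed=0):
    """Null-contingency sequence: the outcome occurs with the same
    probability whether or not the cue is present."""
    rng = random.Random(seed)
    return [(rng.random() < p_cue, rng.random() < p_outcome) for _ in range(n)]

def delta_p(trials):
    """Contingency index: P(O|C) - P(O|not C), from the 2x2 cell counts."""
    a = sum(1 for c, o in trials if c and o)        # cue present, outcome present
    b = sum(1 for c, o in trials if c and not o)    # cue present, outcome absent
    c_ = sum(1 for c, o in trials if not c and o)   # cue absent, outcome present
    d = sum(1 for c, o in trials if not c and not o)
    return a / (a + b) - c_ / (c_ + d)

# Factorial design with extreme marginals, as in the abstract:
for p_c in (0.2, 0.8):
    for p_o in (0.2, 0.8):
        trials = make_trials(p_c, p_o, n=10000)
        print(f"P(C)={p_c}, P(O)={p_o}: empirical dP = {delta_p(trials):+.3f}")
```

With enough trials the empirical ΔP hovers around zero in all four cells of the design; any systematic deviation of participants' judgments from zero is therefore a bias, not a property of the programmed contingency.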

... One variable that researchers have identified as a robust facilitator of the causal illusion, at least in controlled experiments, is the probability of the desired outcome. The illusion appears more prominently when healings occur with a high probability, even if they are not correlated with the use of the treatment [4], [5], [6]. Another variable of interest is the probability of occurrence of the potential cause, P(Cause). ...
... Most pseudomedicines are used in conditions that promote the illusion of efficacy, and, as seen in our experiment, these alternative treatments are more often applied to mild diseases with a high rate of spontaneous remission (headache, back pain, etc.). This parallels experiments in which a high probability of the desired outcome results in strong overestimations of zero contingencies [4], [5], [6]. In addition, many pseudomedicines are advertised as harmless, as opposed to most conventional treatments, which typically produce undesired side effects. ...
... When both a high probability of the outcome and a high probability of the cause are combined, the chances that the two events coincide accidentally increase, and therefore the causal illusion is strongly facilitated [5], as predicted by leading theoretical accounts of causal learning. It can be safely predicted that, once a treatment is considered at least somewhat effective, it will be used even more frequently, resulting in higher chances of further accidental coincidences. ...
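
The arithmetic behind this point is worth spelling out. Under a null contingency the cue and the outcome are independent, so the expected proportion of accidental cue-outcome coincidences is simply the product of the marginals:

P(C ∩ O) = P(C) × P(O), e.g. 0.8 × 0.8 = 0.64 versus 0.2 × 0.2 = 0.04.

A high/high condition thus yields sixteen times as many coincidences as a low/low condition, even though the contingency is exactly zero in both.
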
Article
Full-text available
Some alternative medicines enjoy widespread use, and in certain situations are preferred over conventional, validated treatments in spite of the fact that they fail to prove effective when tested scientifically. We propose that the causal illusion, a basic cognitive bias, underlies the belief in the effectiveness of bogus treatments. Therefore, the variables that modulate the former might affect the latter. For example, it is well known that the illusion is boosted when a potential cause occurs with high probability. In this study, we examined the effect of this variable in a fictitious medical scenario. First, we showed that people used a fictitious medicine (i.e., a potential cause of remission) more often when they thought it caused no side effects. Second, the more often they used the medicine, the more likely they were to develop an illusory belief in its effectiveness, despite the fact that it was actually useless. This behavior may parallel actual pseudomedicine usage: because a treatment is thought to be harmless, it is used with high frequency, which in turn promotes the overestimation of its effectiveness in treating diseases with a high rate of spontaneous relief. This study helps shed light on the motivations spurring the widespread preference of pseudomedicines over scientific medicines. This is a valuable first step toward the development of scientifically validated strategies to counteract the impact of pseudomedicine on society.
... Accordingly, we expected that, in Experiment 2, our participants would increase their causal illusion after a low base-rate pretraining. The Control group (without pretraining), in line with previous reports in which low outcome-density conditions were used, should show little or no overestimation of causality [13,34]. Thus, Experiment 2 complements Experiment 1 because it extends the predictions to a low, rather than high, outcome base-rate. ...
... That is, in a high P(O) setting (Experiment 1), we found a cause-density bias, but it was absent in the low P(O) setting (Experiment 2). The finding that P(C) effects do not appear when P(O) is low has been reported in the past, both in observational tasks [34] and in active tasks like the one used in this article [20], and suggests that the outcome- and cue-density biases are not symmetrical. Rather, a high P(O) setting is needed in order to reliably observe the cause-density bias. ...
... When participants believed that the outcome would appear with a high base-rate (Experiment 1), the illusion of causality that is normally observed (and was indeed found in the Control group) was reduced. Conversely, when participants believed that the outcome would appear with a low base-rate (Experiment 2), the illusion of causality became strong even in a situation (i.e., low probability of the outcome) in which it is typically weak or unobserved [13,34]. ...
Article
Full-text available
Previous research revealed that people’s judgments of causality between a target cause and an outcome in null contingency settings can be biased by various factors, leading to causal illusions (i.e., incorrectly reporting a causal relationship where there is none). In two experiments, we examined whether this causal illusion is sensitive to prior expectations about base-rates. Thus, we pretrained participants to expect either a high outcome base-rate (Experiment 1) or a low outcome base-rate (Experiment 2). This pretraining was followed by a standard contingency task in which the target cause and the outcome were not contingent with each other (i.e., there was no causal relation between them). Subsequent causal judgments were affected by the pretraining: When the outcome base-rate was expected to be high, the causal illusion was reduced, and the opposite was observed when the outcome base-rate was expected to be low. The results are discussed in the light of several explanatory accounts (associative and computational). A rational account of contingency learning based on the evidential value of information can predict our findings.
... Illusions of causality are found to be strongly affected by the frequency with which the potential cause and the outcome occur. When the outcome occurs with a high probability, the illusion is stronger (Allan & Jenkins, 1983; Alloy & Abramson, 1979; Blanco, Matute, & Vadillo, 2013; Matute, 1995; Shanks & Dickinson, 1987). In addition, when the probability of the potential cause is high, the illusion will also be stronger (Blanco et al., 2013; Hannah & Beneteau, 2009; Matute, 1996; Matute et al., 2011; Perales & Shanks, 2007). ...
... When the outcome occurs with a high probability, the illusion is stronger (Allan & Jenkins, 1983; Alloy & Abramson, 1979; Blanco, Matute, & Vadillo, 2013; Matute, 1995; Shanks & Dickinson, 1987). In addition, when the probability of the potential cause is high, the illusion will also be stronger (Blanco et al., 2013; Hannah & Beneteau, 2009; Matute, 1996; Matute et al., 2011; Perales & Shanks, 2007). These two factors are often known as density biases (i.e., cue density and outcome density), and they play a crucial role in the development of false beliefs about causal relationships (Allan & Jenkins, 1983; Hannah & Beneteau, 2009; Matute, 1995, 1996; Matute et al., 2011; Yarritu, Matute, & Vadillo, 2014). ...
... These two factors are often known as density biases (i.e., cue density and outcome density), and they play a crucial role in the development of false beliefs about causal relationships (Allan & Jenkins, 1983; Hannah & Beneteau, 2009; Matute, 1995, 1996; Matute et al., 2011; Yarritu, Matute, & Vadillo, 2014). The illusion is particularly strong when both the cause and the outcome occur frequently (Blanco et al., 2013). To manipulate the degree of the illusion of causality developed by our participants, we used a high probability of the outcome in all cases and manipulated between groups the frequency of occurrence of the potential cause during Phase 1. Table 1 shows a summary of the experimental design. ...
Article
Full-text available
Cognitive illusions are often associated with mental health and well-being. However, they are not without risk. This research shows they can interfere with the acquisition of evidence-based knowledge. During the first phase of the experiment, one group of participants was induced to develop a strong illusion that a placebo medicine was effective to treat a fictitious disease, whereas another group was induced to develop a weak illusion. Then, in Phase 2, both groups observed fictitious patients who always took the bogus treatment simultaneously with a second treatment which was effective. Our results showed that the group who developed the strong illusion about the effectiveness of the bogus treatment during Phase 1 had more difficulties in learning during Phase 2 that the added treatment was effective.
... These biases are related to the marginal probabilities of the outcome and cause events. A significant number of studies have shown that participants' judgments tend to be higher as the probability of the outcome, p(O), increases (e.g., Alloy and Abramson, 1979; Allan and Jenkins, 1983; Matute, 1995; López et al., 1998; Msetfi et al., 2005; Hannah and Beneteau, 2009; Byrom et al., 2015), even when that probability is the same in the presence and in the absence of the potential cause (i.e., zero contingency; e.g., Alloy and Abramson, 1979; Allan and Jenkins, 1983; Matute, 1995; Blanco et al., 2013). Similarly, it has been observed that as the probability of the cause, p(C), increases, participants' judgments also tend to increase (Allan and Jenkins, 1983; Perales and Shanks, 2007; Hannah and Beneteau, 2009; White, 2009; Musca et al., 2010; Vadillo et al., 2011), even when the potential cause and the outcome are non-contingently related (e.g., Hannah and Beneteau, 2009; Blanco et al., 2013; Yarritu et al., 2015). ...
... A significant number of studies have shown that participants' judgments tend to be higher as the probability of the outcome, p(O), increases (e.g., Alloy and Abramson, 1979; Allan and Jenkins, 1983; Matute, 1995; López et al., 1998; Msetfi et al., 2005; Hannah and Beneteau, 2009; Byrom et al., 2015), even when that probability is the same in the presence and in the absence of the potential cause (i.e., zero contingency; e.g., Alloy and Abramson, 1979; Allan and Jenkins, 1983; Matute, 1995; Blanco et al., 2013). Similarly, it has been observed that as the probability of the cause, p(C), increases, participants' judgments also tend to increase (Allan and Jenkins, 1983; Perales and Shanks, 2007; Hannah and Beneteau, 2009; White, 2009; Musca et al., 2010; Vadillo et al., 2011), even when the potential cause and the outcome are non-contingently related (e.g., Hannah and Beneteau, 2009; Blanco et al., 2013; Yarritu et al., 2015). The combination of these two biases increases the overestimation of the causal relationship when the two probabilities, p(C) and p(O), are high (Blanco et al., 2013). ...
... Similarly, it has been observed that as the probability of the cause, p(C), increases, participants' judgments also tend to increase (Allan and Jenkins, 1983; Perales and Shanks, 2007; Hannah and Beneteau, 2009; White, 2009; Musca et al., 2010; Vadillo et al., 2011), even when the potential cause and the outcome are non-contingently related (e.g., Hannah and Beneteau, 2009; Blanco et al., 2013; Yarritu et al., 2015). The combination of these two biases increases the overestimation of the causal relationship when the two probabilities, p(C) and p(O), are high (Blanco et al., 2013). Note that a high probability of both events necessarily leads to a large number of coincidences, which, as predicted by the cell-weighting hypothesis, should produce the overestimation of causality. ...
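
One simple way to formalize the cell-weighting idea mentioned above is a weighted version of a ΔP-style rule in which coincidences (cell a) receive the largest weight. The weights below are illustrative defaults in the spirit of w_a > w_b ≈ w_c > w_d, not fitted values from any of the cited studies:

```python
def weighted_judgment(a, b, c, d, w=(1.0, 0.7, 0.7, 0.5)):
    """Weighted cell-combination rule: confirming cells (a, d) add,
    disconfirming cells (b, c) subtract, with cell a weighted most."""
    wa, wb, wc, wd = w
    num = wa * a - wb * b - wc * c + wd * d
    den = wa * a + wb * b + wc * c + wd * d
    return num / den if den else 0.0

# Null contingency, 100 trials; cell counts follow the marginals.
high_high = (64, 16, 16, 4)   # P(C) = P(O) = .8
low_low   = (4, 16, 16, 64)   # P(C) = P(O) = .2
print(weighted_judgment(*high_high))  # ~0.49: strong overestimation
print(weighted_judgment(*low_low))    # ~0.23: weaker overestimation
```

Because cell a dominates when both marginals are high, the rule yields a markedly larger positive judgment in the high/high condition despite ΔP = 0 in both conditions, which is the direction of the interaction reported here.
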
Article
Full-text available
It is generally assumed that the way people assess the relationship between a cause and an outcome is closely related to the actual evidence existing about the co-occurrence of these events. However, people's estimations are often biased, and this usually translates into illusions of causality. Some have suggested that such illusions could be the result of previous knowledge-based expectations. In the present research we explored the role that previous knowledge has in the development of illusions of causality. We propose that previous knowledge influences the assessment of causality by influencing the decisions about responding or not (i.e., presence or absence of the potential cause), which biases the information people are exposed to, and this in turn produces illusions congruent with such biased information. In a non-contingent situation in which participants decided whether the potential cause was present or absent (Experiment 1), the influence of expectations on participants' judgments was mediated by the probability of occurrence of the potential cause (determined by participants' responses). However, in an otherwise identical situation in which participants were not allowed to decide whether the potential cause occurred, only the probability of the cause was significant; neither the expectations nor the interaction were. Together, these results support our hypothesis that knowledge-based expectations affect the development of causal illusions through the mediation of behavior, which biases the information received.
... As the astute reader might guess, the most problematic situation is that in which both the probability of the outcome and the probability of the cue are large (Figure 1G). Participants seem to find it particularly difficult to detect the lack of contingency in these cases (Blanco, Matute, & Vadillo, 2013). ...
... To answer this question, we decided to reanalyze data from our own laboratory using the strategy followed by Allan et al. Specifically, we reanalyzed data from nine experimental conditions exploring the cue-density bias (originally published in Blanco et al., 2013; Matute et al., 2011; Vadillo et al., 2011; Yarritu, Matute, & Vadillo, 2014) and three experimental conditions exploring the outcome-density bias (originally published in Musca et al., 2010; Vadillo, Miller, & Matute, 2005). All these experiments were conducted using the standard experimental paradigm outlined above. ...
... In the following analyses, we included data from 848 participants tested in 12 conditions included in the articles mentioned in the previous paragraph. Figure 4 plots these data. Blanco et al. (2013) reported two experiments, each of them including two conditions where the effect of the cue-density manipulation was tested. Cue density was also manipulated in Matute et al. (2011). ...
Article
Decades of research in causal and contingency learning show that people’s estimations of the degree of contingency between two events are easily biased by the relative probabilities of those two events. If two events co-occur frequently, then people tend to overestimate the strength of the contingency between them. Traditionally, these biases have been explained in terms of relatively simple single-process models of learning and reasoning. However, more recently some authors have found that these biases do not appear in all dependent variables and have proposed dual-process models to explain these dissociations between variables. In the present paper we review the evidence for dissociations supporting dual-process models and we point out important shortcomings of this literature. Some dissociations seem to be difficult to replicate or poorly generalizable and others can be attributed to methodological artefacts. Overall, we conclude that support for dual-process models of biased contingency detection is scarce and inconclusive.
... However, this evidence is not enough to ensure that the outcome density effect appears in children, not only because it refers to adolescents, but also because it may reflect the effects of additional factors beyond outcome density. Thus, in that experiment, the bias was partially mediated by the participants' behavioral tendency to expose themselves to a high number of cause-present trials, an additional factor that has also been shown to promote overestimations of causal relations on its own [73-76]. The procedure included the active intervention of participants who could choose if the potential cause was present or not, which added a strategic component beyond contingency learning. ...
... Experiment 1 was designed with the main goal of testing whether the high probability of the outcome encourages causal illusions in children in the same way as in adults by comparing, in a within-subjects design, two conditions varying only in outcome-density (High vs. Low). Although between-subjects designs are a common approach to test the outcome density effect in adults [15,75], within-subjects designs can also be a suitable alternative [13,77], and they offer an additional advantage for testing the effect in children: They provide a reduced error variance associated with individual differences, which might be important in this population. As we have outlined previously, some cognitive abilities that can be relevant for causal inference are still developing during childhood, and they may noticeably differ between children within the same age range. ...
... Humans are usually quite accurate in inferring causality from experience, but there are some factors that can bias their judgments of causality [12,78]. One of these factors is the probability of the outcome, which typically leads to the outcome-density bias [14,15,17,75,76]. This bias has not yet been reported in children, and we have proposed a number of reasons to suspect that children might differ from adults in their performance. ...
Article
Full-text available
Causal illusions occur when people perceive a causal relation between two events that are actually unrelated. One factor that has been shown to promote these mistaken beliefs is the outcome probability. Thus, people tend to overestimate the strength of a causal relation when the potential consequence (i.e. the outcome) occurs with a high probability (outcome-density bias). Given that children and adults differ in several important features involved in causal judgment, including prior knowledge and basic cognitive skills, developmental studies can be considered an outstanding approach to detect and further explore the psychological processes and mechanisms underlying this bias. However, the outcome density bias has been mainly explored in adulthood, and no previous evidence for this bias has been reported in children. Thus, the purpose of this study was to extend outcome-density bias research to childhood. In two experiments, children between 6 and 8 years old were exposed to two similar setups, both showing a non-contingent relation between the potential cause and the outcome. These two scenarios differed only in the probability of the outcome, which could either be high or low. Children judged the relation between the two events to be stronger in the high probability of the outcome setting, revealing that, like adults, they develop causal illusions when the outcome is frequent.
... These models predict that variables such as the probability with which the potential cause occurs and the probability with which the outcome occurs will have a significant effect on the illusion of causality. As mentioned above, this result has been confirmed in many experiments showing that the illusion of causality increases when either the probability of the cause or the probability of the outcome, or both, is high (e.g., Allan & Jenkins, 1983; Blanco et al., 2013). Moreover, other variables that are known to influence associative learning, such as the existence of alternative potential causes, have also been found to affect the development of causal illusions. ...
... Two groups of participants saw patients who had taken a drug that was non-contingent with the healing of the crises, one group in their native tongue (N = 16), and the other one in the foreign language (N = 20). Given that the degree of illusion of causality is highly influenced by the probability of the outcome (Allan & Jenkins, 1983; Blanco et al., 2013; Shanks & Dickinson, 1987), we used a high probability of the outcome. The purpose was to replicate the illusion of causality effect in the native language group, thus allowing for a reduction of this bias in the foreign language group. ...
... These findings imply that presenting the information in a foreign language when making a causal inference could be used as a strategy to reduce the causality bias without manipulating the information about the potential cause and outcome. Reducing the probability of the cause and/or the probability of the outcome is a strategy that is known to be effective in reducing the illusion (Allan & Jenkins, 1983; Blanco et al., 2013; Hannah & Beneteau, 2009; Matute et al., 2011; Perales et al., 2005; Perales & Shanks, 2007) but cannot always be used. Using a foreign language could therefore prove to be a very useful strategy in real-life situations in which the probability of the cause and/or the outcome remain uncontrollable. ...
Article
Full-text available
The purpose of this research is to investigate the impact of a foreign language on the causality bias (i.e., the illusion that two events are causally related when they are not). We predict that using a foreign language could reduce the illusions of causality. A total of 36 native English speakers participated in Experiment 1, and 80 native Spanish speakers in Experiment 2. They performed a standard contingency learning task, which can be used to detect causal illusions. Participants who performed the task in their native tongue replicated the illusion of causality effect, whereas those performing the task in their foreign language were more accurate in detecting that the two events were causally unrelated. Our results suggest that presenting the information in a foreign language could be used as a strategy to debias individuals against causal illusions, thereby facilitating more accurate judgements and decisions in non-contingent situations. They also contribute to the debate on the nature and underlying mechanisms of the foreign language effect, given that the illusion of causality is rooted in basic associative processes.
... Multiple experiments have shown that, other things being equal, the extent to which people perceive that a cue and an outcome are related strongly depends on the number of times they have co-occurred (e.g., Kao and Wasserman, 1993; Levin et al., 1993; Wasserman et al., 1990). For instance, people seem to find it difficult to detect the lack of statistical contingency between two events when their marginal probabilities are high and their coincidences are, therefore, frequent (Blanco et al., 2013). It has been proposed that the biasing impact of these coincidences might explain why people develop causal illusions and illusory correlations (e.g., Barberia et al., 2013; Lilienfeld et al., 2014; Matute et al., 2011, 2015; Watts et al., 2015). ...
... For instance, people seem to find it difficult to detect the lack of statistical contingency between two events when their marginal probabilities are high and their coincidences are, therefore, frequent (Blanco et al., 2013). It has been proposed that the biasing impact of these coincidences might explain why people develop causal illusions and illusory correlations (e.g., Barberia et al., 2013; Lilienfeld et al., 2014; Matute et al., 2011, 2015; Watts et al., 2015). ...
... This is also more consistent with the standard implementation of the Rescorla-Wagner model. Figure 1 shows the predictions of the model in four different conditions where the probability of the outcome and the probability of the cue are manipulated orthogonally (taken from Blanco et al., 2013). As can be seen, the Comparator Hypothesis predicts that responding to the target cue should always be higher for conditions with P(o) = .80 ...
Preprint
Our ability to detect statistical dependencies between different events in the environment is strongly biased by the number of coincidences between them. Even when there is no true covariation between a cue and an outcome, if the marginal probability of either of them is high, people tend to perceive some degree of statistical contingency between both events. The present paper explores the ability of the Comparator Hypothesis to explain the general pattern of results observed in this literature. Our simulations show that this model can account for the biasing effects of the marginal probabilities of cues and outcomes. Furthermore, the overall fit of the Comparator Hypothesis to a sample of experimental conditions from previous studies is comparable to that of the popular Rescorla-Wagner model. These results should encourage researchers to further explore and put to the test the predictions of the Comparator Hypothesis in the domain of biased contingency detection.
... One of these factors is the probability with which the cause and the effect are presented. Thus, the higher the probability of the cause, the higher the contingency reported between cause and effect, even in the case in which the actual contingency is null [52-54]. A similar effect has been described when the effect is presented with high frequency [53,55-58]. ...
... Thus, the higher the probability of the cause, the higher the contingency reported between cause and effect, even in the case in which the actual contingency is null [52-54]. A similar effect has been described when the effect is presented with high frequency [53,55-58]. ...
... To assess causal illusions and Jumping to Conclusions, we used two computerized adaptations of widely used tasks: the contingency learning task [53,55,73] and the Beads task [7,12], respectively. These two adaptations were presented as a web application based on World Wide Web Consortium (W3C) standards (i.e., HTML, CSS, and JavaScript). ...
Article
Full-text available
Previous research proposed that cognitive biases contribute to produce and maintain the symptoms exhibited by deluded patients. Specifically, the tendency to jump to conclusions (i.e., to stop collecting evidence soon before making a decision) has been claimed to contribute to delusion formation. Additionally, deluded patients show an abnormal understanding of cause-effect relationships, often leading to causal illusions (i.e., the belief that two events are causally connected, when they are not). Both types of bias appear in psychotic disorders, but also in healthy individuals. In two studies, we test the hypothesis that the two biases (jumping to conclusions and causal illusions) appear in the general population and correlate with each other. The rationale is based on current theories of associative learning that explain causal illusions as the result of a learning bias that tends to wear off as additional information is incorporated. We propose that participants with a higher tendency to jump to conclusions will stop collecting information sooner in a causal learning study than those participants with a lower tendency to jump to conclusions, which means that the former will not reach the learning asymptote, leading to biased judgments. The studies provide evidence that the two biases are correlated but suggest that the proposed mechanism is not responsible for this association.
... and the reward, leading them to be more likely to "stay." One factor that increases perception of contingency is frequent outcomes (sometimes referred to as the outcome-density effect; Blanco, Matute, & Vadillo, 2013; Vallée-Tourangeau, Murphy, & Baker, 2005); another is frequent causal candidates (the cue-density effect; Blanco et al., 2013; Vadillo et al., 2011). People also overattribute contingency when the causal candidate is the subject's own actions (the illusion of control; Langer, 1975; Thompson, 1999), possibly because self-involvement usually increases the frequency of the causal candidate, causing a cue-density effect (Yarritu, Matute, & Vadillo, 2014). ...
Article
Full-text available
Human decision-makers often exhibit the hot-hand phenomenon, a tendency to perceive positive serial autocorrelations in independent sequential events. The term is named after the observation that basketball fans and players tend to perceive streaks of high accuracy shooting when they are demonstrably absent. That is, both observing fans and participating players tend to hold the belief that a player's chances of hitting a shot are greater following a hit than following a miss. We hypothesize that this bias reflects a strong and stable tendency among primates (including humans) to perceive positive autocorrelations in temporal sequences, that this bias is an adaptation to clumpy foraging environments, and that it may even be ecologically rational. Several studies support this idea in humans, but a stronger test would be to determine whether nonhuman primates also exhibit a hot-hand bias. Here we report behavior of 3 monkeys performing a novel gambling task in which correlation between sequential gambles (i.e., temporal clumpiness) is systematically manipulated. We find that monkeys have better performance (meaning, more optimal behavior) for clumped (positively correlated) than for dispersed (negatively correlated) distributions. These results identify and quantify a new bias in monkeys' risky decisions, support accounts that specifically incorporate cognitive biases into risky choice, and support the suggestion that the hot-hand phenomenon is an evolutionarily ancient bias.
... With all other things being equal and given the null contingency, the illusion of a cause producing an outcome will be significantly stronger if the cause occurs in 75% of the occasions than when it occurs in 25%. This effect is also called the cause-density or cause-frequency bias and has also been shown in many experiments (Allan and Jenkins, 1983; Wasserman et al., 1996; Perales et al., 2005; Matute et al., 2011; Vadillo et al., 2011; Blanco et al., 2013; Yarritu et al., 2014). The effect is particularly strong when the probability of the outcome is high as well, since there will be more opportunities for coincidences. ...
... The model tries to associate causes and outcomes co-occurring in an environment while minimizing prediction errors. Each of the four lines shown in Figure 1 denotes the behavior of the model when exposed to each of the four conditions used by Blanco et al. (2013). In this experiment, all participants were exposed to a sequence of trials where the contingency between a potential cause and an outcome was actually 0. The probability of the cause was high (0.80) for half of the participants and low (0.20) for the other half. ...
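
For readers who want to reproduce the flavor of such a simulation, here is a minimal Rescorla-Wagner sketch (Python; the learning-rate and other parameter values are illustrative choices of ours, not the ones used in the papers discussed). The target cue competes with an always-present context, and the outcome is programmed independently of the cue:

```python
import random

def rw_sim(p_cause, p_outcome, n_trials=100, alpha_cue=0.3,
           alpha_ctx=0.1, beta=0.8, lam=1.0, seed=0):
    """Rescorla-Wagner updates for a target cue plus an always-present
    context, under a programmed null contingency."""
    rng = random.Random(seed)
    v_cue = v_ctx = 0.0
    for _ in range(n_trials):
        cue = rng.random() < p_cause
        outcome = rng.random() < p_outcome      # independent of the cue
        prediction = v_ctx + (v_cue if cue else 0.0)
        error = (lam if outcome else 0.0) - prediction
        v_ctx += alpha_ctx * beta * error       # context is always present
        if cue:
            v_cue += alpha_cue * beta * error   # cue updated only when shown
    return v_cue

for p_c in (0.2, 0.8):
    for p_o in (0.2, 0.8):
        v = sum(rw_sim(p_c, p_o, seed=s) for s in range(500)) / 500
        print(f"P(C)={p_c}, P(O)={p_o}: mean V(cue) = {v:+.3f}")
```

Pre-asymptotically the cue acquires some positive strength whenever coincidences are frequent, so the simulated conditions separate early in training before converging toward zero with extended exposure, which is the profile discussed in this literature.
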
Article
Full-text available
Illusions of causality occur when people develop the belief that there is a causal connection between two events that are actually unrelated. Such illusions have been proposed to underlie pseudoscience and superstitious thinking, sometimes leading to disastrous consequences in relation to critical life areas, such as health, finances, and wellbeing. Like optical illusions, they can occur for anyone under well-known conditions. Scientific thinking is the best possible safeguard against them, but it does not come intuitively and needs to be taught. Teaching how to think scientifically should benefit from better understanding of the illusion of causality. In this article, we review experiments that our group has conducted on the illusion of causality during the last 20 years. We discuss how research on the illusion of causality can contribute to the teaching of scientific thinking and how scientific thinking can reduce illusion.
... Using this set of equations, it is easy to show that the Comparator Hypothesis predicts both the cue-and outcome-density effects explained in previous sections of this paper. For illustrative purposes, Figure 1 shows the predictions of the model in four different conditions where the probability of the outcome and the probability of the cue are manipulated orthogonally (taken from Blanco et al., 2013). As can be seen, the Comparator Hypothesis predicts that responding to the target cue should always be higher for conditions with P(o) = .80 ...
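
The comparator response rule itself is compact. In a simplified form (our sketch, based on the general shape of the model; k is an assumed scaling constant and the variable names are ours), responding to the target cue reflects its direct association with the outcome minus the indirect, context-mediated pathway:

```python
def comparator_response(v_cue_outcome, v_cue_context, v_ctx_outcome, k=1.0):
    """Simplified Comparator Hypothesis rule: direct cue-outcome strength
    is discounted by the product of the cue-context and context-outcome
    associations (the indirect pathway)."""
    return v_cue_outcome - k * (v_cue_context * v_ctx_outcome)
```

Density manipulations enter through the three associative strengths: frequent cues and outcomes inflate both pathways, and the balance between them determines how much residual responding the model predicts for a noncontingent cue.
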
... The analyses conducted on these ratings revealed only a small cue-density bias that disappeared in some conditions. However, in a more recent paper we re-analysed an alternative measure of participants' perception of contingency between cue and outcome. Specifically, Blanco et al. (2013) requested all participants to predict on each trial whether the outcome would be present or not. Using these binary responses, it is possible to measure participants' perception of contingency by checking whether the proportion of "yes" responses was higher in trials in which the cue was present than in trials in which the cue was absent (e.g., Allan et al., 2005; Collins & Shanks, 2002). ...
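
The prediction-based index described here is straightforward to compute. A sketch (Python; assumes at least one cue-present and one cue-absent trial, with trials given as (cue, outcome) pairs and each prediction coded as 1 for "yes" and 0 for "no"):

```python
def prediction_index(trials, predictions):
    """Difference between the proportion of 'yes' predictions on
    cue-present trials and on cue-absent trials."""
    present = [p for (cue, _), p in zip(trials, predictions) if cue]
    absent = [p for (cue, _), p in zip(trials, predictions) if not cue]
    return sum(present) / len(present) - sum(absent) / len(absent)
```

A positive value means the participant predicted the outcome more often when the cue was present, i.e., perceived a positive contingency, even if the programmed contingency was zero.
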
Article
Full-text available
Our ability to detect statistical dependencies between different events in the environment is strongly biased by the number of coincidences between them. Even when there is no true covariation between a cue and an outcome, if the marginal probability of either of them is high, people tend to perceive some degree of statistical contingency between both events. The present paper explores the ability of the Comparator Hypothesis to explain the general pattern of results observed in this literature. Our simulations show that this model can account for the biasing effects of the marginal probabilities of cues and outcomes. Furthermore, the overall fit of the Comparator Hypothesis to a sample of experimental conditions from previous studies is comparable to that of the popular Rescorla-Wagner model. These results should encourage researchers to further explore and put to the test the predictions of the Comparator Hypothesis in the domain of biased contingency detection.
... This position is strongly supported by accumulating evidence that higher levels of activity of one participant are positively correlated with higher estimates of control in the context of noncontingent outcomes (Blanco & Matute, 2015; Blanco, Matute, & Vadillo, 2009; Matute, Vadillo, Vegas, & Blanco, 2007). Additionally, a high probability of outcomes is also correlated with the overestimation of personal control (Blanco, Matute, & Vadillo, 2013; Moreno-Fernández, Blanco, & Matute, 2017). ...
... Elucidation of the illusion of control that emphasizes the role of coincidences between behavior and environmental changes is an important step toward providing a basic background for understanding behavioral and learning mechanisms that are related to the origins of false beliefs (Blanco, 2017; Blanco et al., 2009, 2011, 2012, 2013; Matute, 1996; Matute et al., 2007). The present data support this approach to better understand the general notion of the illusion of control. ...
Article
Full-text available
The notion of superstitious behavior can provide a basic background for understanding such notions as illusions and beliefs. The present study investigated the social mechanism of the transmission of superstitious behavior in an experiment that utilized participant replacement. The sample was composed of a total of 38 participants. Participants performed a task on a computer: they could click a colored rectangle using the mouse. When the rectangle was in a particular color, the participants received points independently of their behavior (variable time schedule). When the color of the rectangle was changed, no points were presented (extinction). Under an Individual Exposure condition, ten participants worked alone on the task. Other participants were exposed to the same experimental task under a Social Exposure condition, in which each participant first learned by observation and then worked on the task in a participant replacement (chain) procedure. The first participant in each chain in the Social Exposure condition was a confederate who worked on the task “superstitiously,” clicking the rectangle when points were presented. Superstitious responding was transmitted because of the behavior of the confederate. This also influenced estimates of personal control. These findings suggest that social learning can facilitate the acquisition and maintenance of superstitious behavior and the illusion of control. Our data also suggest that superstitious behavior and the illusion of control may involve similar learning principles.
... Spurious learning is generally explained in terms of simple associative mechanisms that overestimate causal relationships between external events or between the animal's actions and outcomes (13-16). Indeed, simple associative learning models, like the Rescorla-Wagner model, depend heavily on correlations between events and can be easily fooled into strengthening associations based on coincidences (17). However, since animals can construct elaborate cognitive structures, it is an open question if they also inappropriately impose complex structures on objectively random events. ...
... Moreover, choices consistent with subjective ordering remained strong in a PD schedule that actively disrupted them by dynamically increasing reward probability for whichever alternative had been selected least often. In contrast, a Q-learning algorithm failed to show subjective ordering under this schedule, although it replicated it under a PN schedule that did not discourage consistent preferences, capturing the well-known vulnerability of associative models to spurious reward correlations (13-17). These results are consistent with a wealth of studies showing that pure associative learning is not sufficient to explain TI learning (22-24, 30). ...
Article
Humans and other animals often infer spurious associations among unrelated events. However, such superstitious learning is usually accounted for by conditioned associations, raising the question of whether an animal could develop more complex cognitive structures independent of reinforcement. Here, we tasked monkeys with discovering the serial order of two pictorial sets: a “learnable” set in which the stimuli were implicitly ordered and monkeys were rewarded for choosing the higher-rank stimulus and an “unlearnable” set in which stimuli were unordered and feedback was random regardless of the choice. We replicated prior results that monkeys reliably learned the implicit order of the learnable set. Surprisingly, the monkeys behaved as though some ordering also existed in the unlearnable set, showing consistent choice preference that transferred to novel untrained pairs in this set, even under a preference-discouraging reward schedule that gave rewards more frequently to the stimulus that was selected less often. In simulations, a model-free reinforcement learning algorithm (Q-learning) displayed a degree of consistent ordering among the unlearnable set but, unlike the monkeys, failed to do so under the preference-discouraging reward schedule. Our results suggest that monkeys infer abstract structures from objectively random events using heuristics that extend beyond stimulus–outcome conditional learning to more cognitive model-based learning mechanisms.
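
To illustrate how a model-free learner can drift into stable preferences under purely random feedback, here is a minimal Q-learning sketch (Python; the stimulus-set size, learning rate, and softmax temperature are our illustrative assumptions, not the authors' simulation settings):

```python
import math
import random

def q_session(n_stimuli=7, n_trials=500, alpha=0.1, temp=0.2, seed=0):
    """Q-learning on an 'unlearnable' set: reward is a coin flip regardless
    of the choice, yet early lucky streaks can entrench preferences."""
    rng = random.Random(seed)
    q = [0.0] * n_stimuli
    for _ in range(n_trials):
        a, b = rng.sample(range(n_stimuli), 2)      # random stimulus pair
        p_a = 1.0 / (1.0 + math.exp(-(q[a] - q[b]) / temp))  # softmax choice
        pick = a if rng.random() < p_a else b
        reward = 1.0 if rng.random() < 0.5 else 0.0  # random feedback
        q[pick] += alpha * (reward - q[pick])
    return q

print(sorted(q_session()))  # unequal Q-values despite identical stimuli
```

Because choices feed back into which Q-values get updated, small early differences are self-reinforcing, which is one way to read the paper's observation that Q-learning shows a degree of consistent ordering on the unlearnable set.
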
... Many of the studies on illusory causation have explored the probability of the cue and outcome and their effect on generating a false association. Manipulations that increase cause-outcome coincidences (i.e., trial type a in Table 1) appear to be particularly effective in producing stronger causal judgments regarding the cue-outcome relationship, regardless of whether the two events are actually causally associated with one another (Blanco, Matute, & Vadillo, 2013; Wasserman, 1990). In other words, a higher coincidence of the cue and outcome results in stronger beliefs that the cue causes the outcome. ...
... If the principles uncovered in this research are applicable to real-world health beliefs then they suggest that choosing to use ineffective treatments is a self-perpetuating problem: treatments that are used more frequently are perceived to be more effective, and treatments perceived to be effective are used more frequently. This cycle is particularly prevalent for the treatment of ailments with a high rate of spontaneous remission (Blanco et al., 2013). ...
Article
Full-text available
Illusory causation refers to a consistent error in human learning in which the learner develops a false belief that two unrelated events are causally associated. Laboratory studies usually demonstrate illusory causation by presenting two events—a cue (e.g., drug treatment) and a discrete outcome (e.g., patient has recovered from illness)—probabilistically across many trials such that the presence of the cue does not alter the probability of the outcome. Illusory causation in these studies is further augmented when the base rate of the outcome is high, a characteristic known as the outcome density effect. Illusory causation and the outcome density effect provide laboratory models of false beliefs that emerge in everyday life. However, unlike laboratory research, the real-world beliefs to which illusory causation is most applicable (e.g., ineffective health therapies) often involve consequences that are not readily classified in a discrete or binary manner. This study used a causal learning task framed as a medical trial to investigate whether similar outcome density effects emerged when using continuous outcomes. Across two experiments, participants observed outcomes that were either likely to be relatively low (low outcome density) or likely to be relatively high (high outcome density) along a numerical scale from 0 (no health improvement) to 100 (full recovery). In Experiment 1, a bimodal distribution of outcome magnitudes, incorporating variance around a high and low modal value, produced illusory causation and outcome density effects equivalent to a condition with two fixed outcome values. In Experiment 2, the outcome density effect was evident when using unimodal skewed distributions of outcomes that contained more ambiguous values around the midpoint of the scale. Together, these findings provide empirical support for the relevance of the outcome density bias to real-world situations in which outcomes are not binary but occur to differing degrees. This has implications for the way in which we apply our understanding of causal illusions in the laboratory to the development of false beliefs in everyday life.
... For instance, they are asked to indicate the level of effectiveness of the medicine, which they should typically rate on a scale from 0 (ineffective) to 100 (totally effective). This measure is taken as an indicator of the degree of perceived causal relationship and has been extensively used in previous studies (e.g., [10-14]). The relative densities of the different combinations of events will indicate the level of contingency between medicine administration and cure. ...
... Another limitation refers to the scale used to measure the development of causal illusion in our contingency learning tasks. Although many previous studies have regularly relied on this type of causal or effectiveness rating [10-14], this measure is not without problems. In this sense, absolute scores on this scale are difficult to interpret, as it is not clear whether the participants are actually expressing the strength of the causal relation or if these ratings are influenced by other aspects, such as their confidence in the judgement [28]. ...
Article
Full-text available
The prevalence of pseudoscientific beliefs in our societies negatively influences relevant areas such as health or education. Causal illusions have been proposed as a possible cognitive basis for the development of such beliefs. The aim of our study was to further investigate the specific nature of the association between causal illusion and endorsement of pseudoscientific beliefs through an active contingency detection task. In this task, volunteers are given the opportunity to manipulate the presence or absence of a potential cause in order to explore its possible influence over the outcome. Responses provided are assumed to reflect both the participants' information interpretation strategies as well as their information search strategies. Following a previous study investigating the association between causal illusion and the presence of paranormal beliefs, we expected that the association between causal illusion and pseudoscientific beliefs would disappear when controlling for the information search strategy (i.e., the proportion of trials in which the participants decided to present the potential cause). Volunteers with higher pseudoscientific beliefs also developed stronger causal illusions in active contingency detection tasks. This association appeared irrespective of whether the participants with more pseudoscientific beliefs showed (Experiment 2) or did not show (Experiment 1) differential search strategies. Our results suggest that both information interpretation and search strategies could be significantly associated with the development of pseudoscientific (and paranormal) beliefs.
... Previous research on this subject reveals that two factors play an essential role in the development of causal illusions. In our previous scenario, even if the chances of recovery are the same among patients taking the drug and patients not taking it, the drug has greater chances of being perceived as effective if the recovery is, in general, very frequent (high outcome density) and if there is a high proportion of patients taking the drug (high cause density) (e.g., Alloy and Abramson, 1979; Allan and Jenkins, 1983; Allan et al., 2005; Hannah and Beneteau, 2009; Blanco et al., 2013). Table 1 represents an example with both a high outcome density and a high cause density, illustrating a situation that should encourage strong causal illusions. ...
... Participants were exposed to a standard causal learning task. Specifically, they were asked to imagine that they were specialists in a strange and dangerous disease called Lindsay Syndrome (e.g., Blanco et al., 2011, 2013; Matute et al., 2011). They were further told that the crises produced by this disease might be overcome with a new experimental drug (Batatrim) whose effectiveness was still to be determined. ...
Article
Full-text available
We carried out an experiment using a conventional causal learning task but extending the number of learning trials participants were exposed to. Participants in the standard training group were exposed to 48 learning trials before being asked about the potential causal relationship under examination, whereas for participants in the long training group the length of training was extended to 288 trials. In both groups, the event acting as the potential cause had zero correlation with the occurrence of the outcome, but both the outcome density and the cause density were high, therefore providing a breeding ground for the emergence of a causal illusion. In contradiction to the predictions of associative models such as the Rescorla-Wagner model, we found moderate evidence against the hypothesis that extending the learning phase alters the causal illusion. However, assessing causal impressions recurrently did weaken participants' causal illusions.
... Spurious learning is generally explained in terms of simple associative mechanisms that overestimate causal relationships between external events or between the animal's actions and outcomes (13-16). Indeed, simple associative learning models, like the Rescorla-Wagner model, depend heavily on correlations between events and can be easily fooled into strengthening associations based on coincidences (17). However, since animals can construct elaborate cognitive structures, it is an open question if they also inappropriately impose complex structures on objectively random events. ...
... Moreover, choices consistent with subjective ordering remained strong in a PD schedule that actively disrupted them by dynamically increasing reward probability for whichever alternative had been selected least often. In contrast, a Q-learning algorithm failed to show subjective ordering under this schedule, although it replicated it under a PN schedule that did not discourage consistent preferences, capturing the well-known vulnerability of associative models to spurious reward correlations (13-17). These results are consistent with a wealth of studies showing that pure associative learning is not sufficient to explain TI learning (22-24, 30). ...
Preprint
Survival depends on identifying learnable features of the environment that predict reward, and avoiding others that are random and unlearnable. However, humans and other animals often infer spurious associations among unrelated events, raising the question of how well they can distinguish learnable patterns from unlearnable events. Here, we tasked monkeys with discovering the serial order of two pictorial sets: a “learnable” set in which the stimuli were implicitly ordered and monkeys were rewarded for choosing the higher-rank stimulus and an “unlearnable” set in which stimuli were unordered and feedback was random regardless of the choice. We replicated prior results that monkeys reliably learned the implicit order of the learnable set. Surprisingly, the monkeys behaved as though some ordering also existed in the unlearnable set, showing consistent choice preference that transferred to novel untrained pairs in this set, even under a preference-discouraging reward schedule that gave rewards more frequently to the stimulus that was selected less often. In simulations, a model-free RL algorithm (Q-learning) displayed a degree of consistent ordering among the unlearnable set but, unlike the monkeys, failed to do so under the preference-discouraging reward schedule. Our results suggest that monkeys infer abstract structures from objectively random events using heuristics that extend beyond stimulus-outcome conditional learning to more cognitive model-based learning mechanisms.
... [26,29,30]), and when the probability of the potential cause is high (cue density effect, e.g. [5,23,29]), leading to particularly intense causal illusions when both probabilities are high [21,31]. Moreover, it has been shown that in situations where the percentage of healings is high and participants are allowed to choose between giving or not giving the drug, they are inclined to administer the drug to a majority of the patients, thereby tending to expose themselves to more patients that take the drug than to patients that do not take it [20,32]. ...
Article
Full-text available
Cognitive biases such as causal illusions have been related to paranormal and pseudoscientific beliefs and, thus, pose a real threat to the development of adequate critical thinking abilities. We aimed to reduce causal illusions in undergraduates by means of an educational intervention combining training-in-bias and training-in-rules techniques. First, participants directly experienced situations that tend to induce the Barnum effect and the confirmation bias. Thereafter, these effects were explained and examples of their influence over everyday life were provided. Compared to a control group, participants who received the intervention showed diminished causal illusions in a contingency learning task and a decrease in the precognition dimension of a paranormal belief scale. Overall, results suggest that evidence-based educational interventions like the one presented here could be used to significantly improve critical thinking skills in our students.
... These effects might be explained by visualization readers' over-sensitivity to co-occurrences and a tendency to generalize based on limited data [35]. By disclosing more variability in data, visualization authors provide readers more counterexamples, which can mitigate cognitive biases and encourage critical thinking [9,48]. ...
Preprint
Visualization research often focuses on perceptual accuracy or helping readers interpret key messages. However, we know very little about how chart designs might influence readers' perceptions of the people behind the data. Specifically, could designs interact with readers' social cognitive biases in ways that perpetuate harmful stereotypes? For example, when analyzing social inequality, bar charts are a popular choice to present outcome disparities between race, gender, or other groups. But bar charts may encourage deficit thinking, the perception that outcome disparities are caused by groups' personal strengths or deficiencies, rather than external factors. These faulty personal attributions can then reinforce stereotypes about the groups being visualized. We conducted four experiments examining design choices that influence attribution biases (and therefore deficit thinking). Crowdworkers viewed visualizations depicting social outcomes that either mask variability in data, such as bar charts or dot plots, or emphasize variability in data, such as jitter plots or prediction intervals. They reported their agreement with both personal and external explanations for the visualized disparities. Overall, when participants saw visualizations that hide within-group variability, they agreed more with personal explanations. When they saw visualizations that emphasize within-group variability, they agreed less with personal explanations. These results demonstrate that data visualizations about social inequity can be misinterpreted in harmful ways and lead to stereotyping. Design choices can influence these biases: Hiding variability tends to increase stereotyping while emphasizing variability reduces it.
... That is, if contingency is held constant, judgments of contingency and of causal relationships increase as P(O|S) increases. This is known as the outcome-density bias and has been reported in a wide variety of procedures, conditions, and laboratories (e.g., Allan & Jenkins, 1983; Allan, Siegel, & Tangen, 2005; Alloy & Abramson, 1979; Blanco, Matute, & Vadillo, 2013; Buehner, Cheng, & Clifford, 2003; Matute, 1995; Msetfi, Murphy, Simpson, & Kornbrot, 2005; Musca, Vadillo, Blanco, & Matute, 2010; Shanks, López, Darby, & Dickinson, 1996; Wasserman et al., 1996). ...
Article
Full-text available
A stimulus is a reliable signal of an outcome when the probability that the outcome occurs in its presence is different from in its absence. Reliable signals of important outcomes are responsible for triggering critical anticipatory or preparatory behavior, which is any form of behavior that prepares the organism to receive a biologically significant event. Previous research has shown that humans and other animals prepare more for outcomes that occur in the presence of highly reliable (i.e., highly contingent) signals, that is, those for which that difference is larger. However, it seems reasonable to expect that, all other things being equal, the probability with which the outcome follows the signal should also affect preparatory behavior. In the present experiment with humans, we used two signals. They were differentially followed by the outcome, but they were equally (and relatively weakly) reliable. The dependent variable was preparatory behavior in a Martians video game. Participants prepared more for the outcome (a Martians' invasion) when the outcome was most probable. These results indicate that the probability of the outcome can bias preparatory behavior to occur with different intensities despite identical outcome signaling.
... However, although it is true that causal judgments tend to correlate with ∆P, this correlation is far from perfect (Perales & Shanks, 2007). There are a number of studies showing that judgments systematically vary across conditions in which ∆P is held constant (Allan, Siegel, & Tangen, 2005; Blanco, Matute, & Vadillo, 2013; Blanco, Matute, & Vadillo, 2011; Buehner, Cheng, & Clifford, 2003; Lober & Shanks, 2000; López, Almaraz, Fernández, & Shanks, 1999; Matute, 1995, 1996; Matute, Yarritu, & Vadillo, 2011; Hannah & Beneteau, 2009; Yarritu, Matute, & Vadillo, 2014; Wasserman et al., 1993); so it is apparent that ∆P does not provide a complete description of human causal reasoning. As White (2009) pointed out, one problem is that the ∆P rule only indicates the degree of empirical association between a possible cause and an outcome, and not the strength of the cause or the likelihood that it does cause the outcome. ...
Thesis
Full-text available
Causal learning models make different assumptions about how people should combine the influence of different potential causes presented in combination. Based on the linear integration rule, some models propose that the causal impact of a compound should equal the linear sum of each of the causes presented in isolation. Other models such as the Power PC theory are based on a different integration rule, the noisy-OR, suggesting that the rational way of computing the causal impact of a compound involves correcting the sum of the causes by subtracting the overlap between them. The present experiments tested which integration rule people use. Four different cover stories were used to ensure that the participants understood the independence of the causes. The experiments used different sets of probabilities and several formats for presenting information. The results of most experiments do not confirm the predictions of the noisy-OR integration rule. Only one experiment (of ten) supports the predictions of the noisy-OR rule. Despite this mixed evidence, people do not appear to spontaneously use this rule. We discuss the implications of our results and alternative explanations for our pattern of data, including inhibitory mechanisms and an averaging heuristic.
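The two integration rules contrasted in this abstract are easy to state concretely. In the sketch below, p1 and p2 are hypothetical causal strengths of two independent causes; the code illustrates the rules themselves, not the experiments' materials.

```python
# Two candidate rules for combining the strengths of two independent causes.
def linear_sum(p1, p2):
    # Linear integration: the compound's impact is the plain sum.
    return p1 + p2

def noisy_or(p1, p2):
    # Noisy-OR integration (the rule behind Power PC theory): the sum is
    # corrected by subtracting the overlap, i.e., the chance both act at once.
    return p1 + p2 - p1 * p2

p1, p2 = 0.6, 0.5
print(linear_sum(p1, p2))  # 1.1 -- exceeds 1, which a probability cannot do
print(noisy_or(p1, p2))    # 0.8 -- stays within [0, 1]
```

The example also shows why the noisy-OR is considered the rational rule for probabilistic causes: unlike the linear sum, it can never predict an outcome probability above 1.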
... However, systematic departures from normative contingency have also been reported in the literature (Blanco et al., 2011; Matute et al., 2015; Shanks, 1995; Kao & Wasserman, 1993; Ward & Jenkins, 1965). Different factors such as the relative frequency of a potential cause or the outcome have been shown to bias participants' judgments (Musca et al., 2010; Blanco et al., 2013; see Matute et al., 2019, for a recent overview of biasing factors in contingency learning tasks). ...
Article
Prior knowledge has been shown to be an important factor in causal judgments. However, inconsistent patterns have been reported regarding the interaction between prior knowledge and the processing of contingency information. In three studies, we examined the effect of the plausibility of the putative cause on causal judgments, when prior expectations about the rate at which the cause is accompanied by the effect in question are explicitly controlled for. Results clearly show that plausibility has an effect that is independent of contingency information and type of task (passive or active). We also examined the role of strategy use as an individual difference in causal judgments. Specifically, the dual-strategy model suggests that people can either use a Statistical or a Counterexample strategy to process information. Across all three studies, results showed that strategy use has a clear effect on causal judgments that is independent of both plausibility and contingency.
... Instead, a negative contingency was inferred if the cue base rate mismatched the outcome base rate, for instance, if Brand X was more prevalent than Brand Y, but fewer customers were satisfied than dissatisfied. Together, these findings clearly attest to the effect of base rate alignment on contingency judgments, at least if bivariate observations of cue and outcome are impossible (also see Blanco et al., 2013; Eder et al., 2011; Ernst et al., 2019; Fiedler, 2010, for demonstrations of base-rate effects in paradigms with joint cue and outcome observations). ...
Article
Full-text available
Humans are evidently able to learn contingencies from the co-occurrence of cues and outcomes. But how do humans judge contingencies when observations of cue and outcome are learned on different occasions? The pseudocontingency framework proposes that humans rely on base-rate correlations across contexts, that is, whether outcome base rates increase or decrease with cue base rates. Here, we elaborate on an alternative mechanism for pseudocontingencies that exploits base rate information within contexts. In two experiments, cue and outcome base rates varied across four contexts, but the correlation by base rates was kept constant at zero. In some contexts, cue and outcome base rates were aligned (e.g., cue and outcome base rates were both high). In other contexts, cue and outcome base rates were misaligned (e.g., cue base rate was high, but outcome base rate was low). Judged contingencies were more positive for contexts in which cue and outcome base rates were aligned than in contexts in which cue and outcome base rates were misaligned. Our findings indicate that people use the alignment of base rates to infer contingencies conditional on the context. As such, they lend support to the pseudocontingency framework, which predicts that decision makers rely on base rates to approximate contingencies. However, they challenge previous conceptions of pseudocontingencies as a uniform inference from correlated base rates. Instead, they suggest that people possess a repertoire of multiple contingency inferences that differ with regard to informational requirements and areas of applicability.
... The way in which most theories explain causal illusions when actual contingency is zero implies the assignment of different weights to each trial type. It is often reported that participants give more importance to those trial types in which the potential cause and the outcome coincide [23,24], and this so-called differential cell weighting could account for the illusion of causality [24,47]. Interestingly, weighting each type of trial differently is a rational strategy in many causal inference situations, but the specific ranking of trial weights depends on further factors [48,49]. ...
Article
Full-text available
In the reasoning literature, paranormal beliefs have been proposed to be linked to two related phenomena: a biased perception of causality and a biased information-sampling strategy (believers tend to test fewer hypotheses and prefer confirmatory information). In parallel, recent contingency learning studies showed that, when two unrelated events coincide frequently, individuals interpret this ambiguous pattern as evidence of a causal relationship. Moreover, the latter studies indicate that sampling more cause-present cases than cause-absent cases strengthens the illusion. If paranormal believers actually exhibit a biased exposure to the available information, they should also show this bias in the contingency learning task: they would in fact expose themselves to more cause-present cases than cause-absent trials. Thus, by combining the two traditions, we predicted that believers in the paranormal would be more vulnerable to developing causal illusions in the laboratory than nonbelievers because there is a bias in the information they experience. In this study, we found that paranormal beliefs (measured using a questionnaire) correlated with causal illusions (assessed by using contingency judgments). As expected, this correlation was mediated entirely by the believers' tendency to expose themselves to more cause-present cases. The association between paranormal beliefs, biased exposure to information, and causal illusions was only observed for ambiguous materials (i.e., the noncontingent condition). In contrast, the participants' ability to detect causal relationships which did exist (i.e., the contingent condition) was unaffected by their susceptibility to believe in paranormal phenomena.
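The differential cell weighting mentioned in the excerpt above can be illustrated with a weighted variant of ΔP. In the sketch below the weights are hypothetical; only their ordering, with coincidences (cell a) counting most, reflects the tendency the excerpt describes.

```python
# Hedged illustration of differential cell weighting: a weighted analogue
# of a contingency judgment in which cell a (cause present, outcome present)
# carries the most weight. The weight values are invented; only the ordering
# wa > wb >= wc > wd mirrors the reported tendency.
def weighted_judgment(a, b, c, d, wa=1.0, wb=0.6, wc=0.4, wd=0.2):
    confirming = wa * a + wd * d      # cells usually read as supporting a link
    disconfirming = wb * b + wc * c   # cells usually read as refuting it
    return (confirming - disconfirming) / (confirming + disconfirming)

# A null contingency (ΔP = 40/50 - 40/50 = 0) with frequent coincidences
# nevertheless yields a positive judgment under this weighting:
print(weighted_judgment(a=40, b=10, c=40, d=10))  # ~0.31
```

This is one way a zero contingency can be systematically overestimated whenever cause-present, outcome-present trials are frequent.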
... One explanation for why people might perceive a positive contingency where none exists is the tendency for people to give more weight to instances where the putative cause and the target outcome are both present (e.g., taking Echinacea and recovering from a cold) than to instances where either the putative cause or the target outcome is absent (or both are absent). Experimental studies have shown that manipulations that increase cause-outcome coincidences are particularly effective at producing stronger judgements about the causal relationship between the two events, regardless of whether the two events are actually causally associated with each other [28,29]. Similarly, there is considerable evidence that under certain conditions people do not use base-rate information appropriately during causal induction [30]. ...
Article
Full-text available
Beliefs about cause and effect, including health beliefs, are thought to be related to the frequency of the target outcome (e.g., health recovery) occurring when the putative cause is present and when it is absent (treatment administered vs. no treatment); this is known as contingency learning. However, it is unclear whether unvalidated health beliefs, where there is no evidence of cause–effect contingency, are also influenced by the subjective perception of a meaningful contingency between events. In a survey, respondents were asked to judge a range of health beliefs and estimate the probability of the target outcome occurring with and without the putative cause present. Overall, we found evidence that causal beliefs are related to perceived cause–effect contingency. Interestingly, beliefs that were not predicted by perceived contingency were meaningfully related to scores on the paranormal belief scale. These findings suggest heterogeneity in pseudoscientific health beliefs and the need to tailor intervention strategies according to underlying causes.
... For example, the associationist tradition, embodied in behaviorism and connectionism, gives priority to the regular succession of events and the discovery of the potential contingency that exists between such elements (Musca, Vadillo, Blanco, & Matute, 2008). The key point here is that causal understanding emerges from more basic processes, such as the detection of statistical regularity (Blanco et al., 2013). On the other hand, Piaget's (1929) idea of causation as agency (e.g., the capacity to exert control over environmental events through one's own actions) makes the interview the preferred technique for investigating whether or not children regard themselves as causal agents of everyday phenomena, such as the succession of day and night. ...
Article
Full-text available
Like other cognitive skills, the ability to reason causally changes during the course of development from early childhood to adulthood. There is, however, no agreement about how its development occurs. In this paper we propose a theoretical analysis to understand this process, namely, the idea that causal reasoning is a domain-general ability that is gradually enriched by the refinement of metacognitive skills, which allows reasoning independently from the immediate context. This proposal is based on the analysis of evidence of causal reasoning in young children, as well as evidence of integration of these skills during early adolescence with processes of argumentation and explanation. The paper also points out some methodological differences in studies with children and adolescents.
... Moreover, and as has also been predicted by associative models, many experiments have shown that not only the frequency with which the outcome occurs but also the frequency with which the cue occurs are good predictors of the overestimation of contingency (Allan & Jenkins, 1983;Matute et al., 2011;Perales, Catena, Shanks, & González, 2005;Wasserman et al., 1996;Yarritu et al., 2014). Finally, the results of Blanco et al. (2013) showed that a high probability of the outcome was necessary for the development of the illusion and that a high probability of the cue increased the effect. That is, these effects become even stronger when both the probability of the cue and the probability of the outcome are high. ...
Article
Many experiments have shown that humans and other animals can detect contingency between events accurately. This learning is used to make predictions and to infer causal relationships, both of which are critical for survival. Under certain conditions, however, people tend to overestimate a null contingency. We argue that a successful theory of contingency learning should explain both results. The main purpose of the present review is to assess whether cue-outcome associations might provide the common underlying mechanism that would allow us to explain both accurate and biased contingency learning. In addition, we discuss whether associations can also account for causal learning. After providing a brief description of both accurate and biased contingency judgments, we elaborate on the main predictions of associative models and describe some supporting evidence. Then, we discuss a number of findings in the literature that, although conducted with a different purpose and in different areas of research, can also be regarded as supportive of the associative framework. Finally, we discuss some problems with the associative view and discuss some alternative proposals as well as some of the areas of current debate.
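The interaction highlighted in the excerpt above (a high probability of the outcome being necessary for the illusion, and a high probability of the cue amplifying it) has a simple combinatorial side. Under a null contingency the two events are independent, so accidental coincidences accumulate at roughly N × P(C) × P(O). A hypothetical enumeration:

```python
# Expected number of accidental cue-outcome coincidences (cell a) under
# independence, for the four cells of a hypothetical 2x2 factorial design
# crossing P(C) and P(O). N and the probabilities are illustrative.
N = 100  # trials

for p_cue in (0.2, 0.8):
    for p_out in (0.2, 0.8):
        coincidences = N * p_cue * p_out
        print(f"P(C)={p_cue}, P(O)={p_out}: ~{coincidences:.0f} coincidences")
# Output: ~4, ~16, ~16, and ~64. Only when both probabilities are high do
# coincidences dominate the learner's experience, which is where the
# strongest overestimations are reported.
```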
... The psychological research on causality estimation is based primarily on experimental procedures in which human participants are asked to judge a causal relationship between two events, such as using a fictional drug and patients' recovery (Blanco, Matute, and Vadillo, 2013). Participants are usually presented with a series of trials corresponding to the cells in the contingency matrix (figure 3.3). ...
Chapter
In the last decades, cognitive psychology has provided researchers with a powerful background and the rigor of experimental methods to better understand why so many people believe in pseudoscience, paranormal phenomena and superstitions. According to recent evidence, those irrational beliefs could be the unintended result of how the mind evolved to use heuristics and reach conclusions based on scarce and incomplete data. Thus, we present visual illusions as a parallel to the type of fast and frugal cognitive bias that underlies pseudoscientific belief. In particular, we focus on the causal illusion, which consists of people believing that there is a causal link between two events that coincide just by chance. The extant psychological theories that can account for this causal illusion are described, as well as the factors that are able to modulate the bias. We also discuss how causal illusions are adaptive under some circumstances, although they often lead to utterly wrong beliefs. Finally, we mention several debiasing strategies that have proven effective in fighting the causal illusion and preventing some of its consequences, such as pseudoscientific belief.
... In a recent human study, Pérez and colleagues (2016) (Blanco, Matute, & Vadillo, 2013; Dickinson, Shanks, & Evenden, 1984; Hammond & Paynter, 1983; Wasserman, Chatlosh, & Neunaber, 1983) or variations of this metric (Cheng, 1997; Novick & Cheng, 2004). The most plausible explanation for Reed's (2001) results is that participants attribute causal control in line with the correlational properties of the schedules, independently of reward rate or reward probability. ...
Preprint
Full-text available
The higher response rates observed on ratio than on matched interval reward schedules have been attributed to the differential reinforcement of longer inter-response times (IRTs) on the interval contingency. Some data, however, seem to contradict this hypothesis, showing that the difference is still observed when the role of IRT reinforcement is neutralized by using a regulated-probability interval schedule (RPI). Given the mixed evidence for these predictions, we re-examined this hypothesis by training three groups of rats to lever press under ratio, interval and RPI schedules across two phases while matching reward rates within triads. At the end of the first phase, the master ratio and RPI groups responded at similar rates. In the second phase, an interval group yoked to the same master ratio group of the first phase responded at a lower rate than the RPI group. Post-hoc analysis showed comparable reward rates for master and yoked schedules. The experienced response-outcome rate correlations were likewise similar, and approached zero as training progressed. We discuss these results in terms of dual-system theories of instrumental conditioning.
... In this context, it is important to note that contingency in non-contingent actionoutcome paradigms is generally overestimated, an effect that is known as the illusion of control (IoC) (Langer, 1975). Research has shown that increasing the probability of the outcome in non-contingent paradigms increases the level of perceived control over the outcome, thereby enhancing the IoC and thus the perceived contingency (Blanco et al., 2013;Jenkins & Ward, 1965;Matute et al., 2015;Studer et al., 2020;Thompson et al., 2007). ...
Article
Full-text available
The reduction of neural responses to self‐generated stimuli compared to external stimuli is thought to result from the matching of motor‐based sensory predictions and sensory reafferences and to serve the identification of changes in the environment as caused by oneself. The amplitude of the auditory event‐related potential (ERP) component N1 seems to closely reflect this matching process, while the later positive component (P2/P3a) has been associated with judgments of agency, which are also sensitive to contextual top‐down information. In this study, we examined the effect of perceived control over sound production on the processing of self‐generated and external stimuli, as reflected in these components. We used a new version of a classic two‐button choice task to induce different degrees of the illusion of control (IoC) and recorded ERPs for the processing of self‐generated and external sounds in a subsequent task. N1 amplitudes were reduced for self‐generated compared to external sounds, but not significantly affected by IoC. P2/P3a amplitudes were affected by IoC: We found reduced P2/P3a amplitudes after a high compared to a low IoC induction training, but only for self‐generated, not for external sounds. These findings suggest that prior contextual belief information induced by an IoC affects later processing as reflected in the P2/P3a, possibly for the formation of agency judgments, while early processing reflecting motor‐based predictions is not affected.
... For example, the formation of illusory causation between a cue and an outcome when there is zero contingency (i.e., when the probability of the outcome is the same when the cue is present and when the cue is absent) is assumed to be the result of failing to adequately consider the base-rates of the outcome when the cue is absent (e.g., Matute et al., 2015). This illusory causation is also greater when the outcome is experienced frequently (Blanco, Matute & Vadillo, 2013). It is possible that these effects are stronger in trial-by-trial learning conditions than when participants are made explicitly aware of the base-rates. ...
Article
People often fail to use base-rate information appropriately in decision-making. This is evident in the inverse base-rate effect, a phenomenon in which people tend to predict a rare outcome for a new and ambiguous combination of cues. While the effect was first reported in 1988, it has recently seen a renewed interest from researchers concerned with learning, attention and decision-making. However, some researchers have raised concerns that the effect arises in specific circumstances and is unlikely to provide insight into general learning and decision-making processes. In this review, we critically evaluate the evidence for and against the main explanations that have been proposed to explain the effect, and identify where this evidence is currently weak. We argue that concerns about the effect are not well supported by the data. Instead, the evidence supports the conclusion that the effect is a result of general mechanisms and provides a useful opportunity to understand the processes involved in learning and decision making. We discuss gaps in our knowledge and some promising avenues for future research, including the relevance of the effect to models of attentional change in learning, an area where the phenomenon promises to contribute new insights.
... Manipulations that increase cue-outcome coincidences (i.e. trial type a in Table 1) appear to be particularly effective in inflating causal judgment, regardless of whether the two events are actually causally associated with one another (Wasserman, 1990; Blanco, Matute & Vadillo, 2013). Many of the studies on illusory causation have thus explored the effect of cue and outcome frequency on generating a false association. ...
Conference Paper
Full-text available
Illusory causation is a consistent error in human learning in which people perceive two unrelated events as being causally related. Causal illusions are greatly increased when the target outcome occurs frequently rather than rarely, a characteristic known as the outcome density bias. Unlike most experimental designs using binary outcomes, real-world problems to which illusory causation is most applicable (e.g. beliefs about ineffective health therapies) involve continuous and variable consequences that are not readily classifiable as the presence or absence of a salient event. This study used a causal learning task framed as a medical trial to investigate whether outcome density effects emerged when using a continuous and variable outcome that appeared on every trial. Experiment 1 compared the effects of using fixed outcome values (i.e. consistent low and high magnitudes) versus variable outcome values (i.e. low and high magnitudes varying around two means in a bimodal distribution). Experiment 2 compared positively skewed (low density) and negatively skewed (high density) continuous distributions. These conditions yielded comparable outcome density effects, providing empirical support for the relevance of the outcome density bias to real-world situations in which outcomes are not binary but occur to differing degrees.
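As a rough illustration of the outcome manipulations in the first experiment described above (fixed versus variable magnitudes, with outcome density as the proportion of high-magnitude outcomes), consider the sketch below. All magnitudes and proportions are invented for illustration; they are not the study's parameters.

```python
import random

random.seed(0)

def fixed_outcomes(n, high_ratio):
    # Fixed condition: every outcome is exactly the low or the high magnitude
    # (20 vs. 80 in arbitrary units).
    return [80 if random.random() < high_ratio else 20 for _ in range(n)]

def variable_outcomes(n, high_ratio, sd=10):
    # Variable condition: magnitudes vary around the two means (bimodal).
    return [random.gauss(80 if random.random() < high_ratio else 20, sd)
            for _ in range(n)]

# "Outcome density" becomes the proportion of high-magnitude outcomes:
print(sum(fixed_outcomes(100, 0.8)) / 100)     # ~68: high-density condition
print(sum(fixed_outcomes(100, 0.2)) / 100)     # ~32: low-density condition
print(sum(variable_outcomes(100, 0.8)) / 100)  # ~68, but varying trial to trial
```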
... This will eventually bias their judgements. Indeed, even when sampling is not completely biased toward the cause and some type c or type d instances are collected, it has been repeatedly shown that the higher the tendency to sample information about the cause, the higher the probability of overestimating the cause-effect relationship [37][38][39]. ...
Article
Background The internet is a relevant source of health-related information. The huge amount of information available on the internet forces users to engage in an active process of information selection. Previous research conducted in the field of experimental psychology showed that information selection itself may promote the development of erroneous beliefs, even if the information collected does not. Objective The aim of this study was to assess the relationship between information searching strategy (ie, which cues are used to guide information retrieval) and causal inferences about health while controlling for the effect of additional information features. Methods We adapted a standard laboratory task that has previously been used in research on contingency learning to mimic an information searching situation. Participants (N=193) were asked to gather information to determine whether a fictitious drug caused an allergic reaction. They collected individual pieces of evidence in order to support or reject the causal relationship between the two events by inspecting individual cases in which the drug was or was not used or in which the allergic reaction appeared or not. Thus, one group (cause group, n=105) was allowed to sample information based on the potential cause, whereas a second group (effect group, n=88) was allowed to sample information based on the effect. Although participants could select which medical records they wanted to check—cases in which the medicine was used or not (in the cause group) or cases in which the effect appeared or not (in the effect group)—they all received similar evidence that indicated the absence of a causal link between the drug and the reaction. After observing 40 cases, they estimated the drug–allergic reaction causal relationship. Results Participants used different strategies for collecting information. In some cases, participants displayed a biased sampling strategy compatible with positive testing, that is, they required a high proportion of evidence in which the drug was administered (in the cause group) or in which the allergic reaction appeared (in the effect group). Biased strategies produced an overrepresentation of certain pieces of evidence to the detriment of the representation of others, which was associated with the accuracy of causal inferences. Thus, how the information was collected (sampling strategy) demonstrated a significant effect on causal inferences (F(1,185)=32.53, P<.001, ηp²=0.15), suggesting that inferences of the causal relationship between events are related to how the information is gathered. Conclusions Mistaken beliefs about health may arise from accurate pieces of information partially because of the way in which information is collected. Patient or person autonomy in gathering health information through the internet, for instance, may contribute to the development of false beliefs from accurate pieces of information because search strategies can be biased.
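The sampling account in this abstract can be simulated in a few lines. In the hedged sketch below, a "positive tester" mostly inspects cause-present records in an environment where the outcome is actually independent of the cause; all probabilities are invented, and the code is not the study's task.

```python
import random

random.seed(1)

def sample_records(p_inspect_cause_present, n=40, p_outcome=0.7):
    """Collect n records. The inspector chooses cause-present records with
    probability p_inspect_cause_present; the outcome occurs with probability
    p_outcome regardless of the cause (a null contingency)."""
    a = b = c = d = 0
    for _ in range(n):
        cause = random.random() < p_inspect_cause_present  # chosen, not observed
        outcome = random.random() < p_outcome              # independent of cause
        if cause and outcome:
            a += 1
        elif cause:
            b += 1
        elif outcome:
            c += 1
        else:
            d += 1
    return a, b, c, d

print(sample_records(0.5))  # balanced search: rows roughly even
print(sample_records(0.9))  # positive testing: cause-present cells dominate
# If judgments overweight cell a, the biased sample inflates the perceived
# cause-effect relationship even though each record is individually accurate.
```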
Article
The dual strategy model of reasoning suggests that people can either use a Statistical or a Counterexample strategy to process information. Previous studies on contingency learning have shown a sufficiency bias: people give more importance to events where the potential cause is present (sufficiency) rather than events where the potential cause is absent (necessity). We examine the hypothesis that strategy use predicts individual differences in use of sufficiency information in contingency judgements. Study 1 used an active learning contingency task. Results showed that Statistical reasoners were more influenced by sufficiency information than Counterexample reasoners. Study 2 used a passive learning contingency task, where sufficiency was constant and only necessity information (based on outcomes when the potential cause was absent) was varied. Results showed that only Counterexample reasoners were sensitive to necessity information. These results demonstrate that strategy use is correlated with individual differences in information processing in contingency learning.
Article
Studies of people's beliefs about how much they control events have shown that people often overestimate the extent to which the result depends on their own behavior. The purpose of this study was to test the assumption that the illusion of control is reduced by asking a causal question, for both desirable and undesirable results. No influence of the causal question on the size of the illusion of control, as measured by the participants' self-assessments, was found. Keywords: cognitive distortions, illusion of control.
Article
The illusion of control is the belief that our behavior produces an effect that is actually independent from it. This illusion is often at the core of superstitious and pseudoscientific thinking. Although recent research has proposed several evidence-based strategies that can be used to reduce the illusion, the majority of these experiments have involved positive illusions—that is, those in which the potential outcomes are desired (e.g., recovery from illness or earning points). By contrast, many real-life superstitions and pseudosciences are tied to negative illusions—that is, those in which the potential consequences are undesired. Examples are walking under a ladder, breaking a mirror, or sitting in row 13, all of which are supposed to generate bad luck. Thus, the question is whether the available evidence on how to reduce positive illusions would also apply to situations in which the outcomes are undesired. We conducted an experiment in which participants were exposed to undesired outcomes that occurred independently of their behavior. One strategy that has been shown to reduce positive illusions consists of warning people that the outcomes might have alternative causes, other than the participants’ actions, and telling them that the best they can do to find out whether an alternative cause is at work is to act on only about 50 % of the trials. When we gave our participants this information in an experiment in which the outcomes were undesired, their illusion was enhanced rather than reduced, contrary to what happens when the outcome is desired. This suggests that the strategies that reduce positive illusions may work in just the opposite way when the outcome is undesired.
Article
Most previous research on illusions of control focused on generative scenarios, in which participants' actions aim to produce a desired outcome. By contrast, the illusions that may appear in preventive scenarios, in which actions aim to prevent an undesired outcome before it occurs, are less known. In this experiment, we studied two variables that modulate generative illusions of control, the probability with which the action takes place, P(A), and the probability of the outcome, P(O), in two different scenarios: generative and preventive. We found that P(O) affects the illusion in symmetrical, opposite directions in each scenario, while P(A) is positively related to the magnitude of the illusion. Our conclusion is that, in what concerns the illusions of control, the occurrence of a desired outcome is equivalent to the nonoccurrence of an undesired outcome, which explains why the P(O) effect is reversed depending on the scenario.
Article
Full-text available
Teachers sometimes believe in the efficacy of instructional practices that have little empirical support. These beliefs have proven difficult to efface despite strong challenges to their evidentiary basis. Teachers typically develop causal beliefs about the efficacy of instructional practices by inferring their effect on students' academic performance. Here, we evaluate whether causal inferences about instructional practices are susceptible to an outcome density effect using a contingency learning task. In a series of six experiments, participants were ostensibly presented with students' assessment outcomes, some of whom had supposedly received teaching via a novel technique and some of whom supposedly received ordinary instruction. The distribution of assessment outcomes was manipulated to have either frequent positive outcomes (high outcome density condition) or infrequent positive outcomes (low outcome density condition). For both continuous and categorical assessment outcomes, participants in the high outcome density condition rated the novel instructional technique as effective, despite the fact that it either had no effect or had a negative effect on outcomes, while the participants in the low outcome density condition did not. These results suggest that when base rates of performance are high, participants may be particularly susceptible to drawing inaccurate inferences about the efficacy of instructional practices.
Thesis
Full-text available
Studies of people's beliefs about how much they control events have shown that people often overestimate the extent to which the result depends on their own behavior. The purpose of this study was to assess how emotional characteristics and the formulation of the question relate to the illusion of control, depending on whether the result is desirable or undesirable. It was assumed that the illusion of control depends on the amount of effort applied to achieve the result. It was also hypothesized that the illusion of control would be reduced by a causal question, both when the result is desirable and the participant acts to make it appear, and when the result is undesirable and the participant acts to prevent its occurrence. No influence of the cause-effect question or of emotional characteristics on the size of the illusion of control, as measured by the participants' self-assessments, was found. There was also no correlation between the amount of effort and the illusion of control.
Article
The ability to learn cause-effect relations from experience is critical for humans to behave adaptively - to choose causes that bring about desired effects. However, traditional experiments on experience-based learning involve events that are artificially compressed in time so that all learning occurs over the course of minutes. These paradigms therefore exclusively rely upon working memory. In contrast, in real-world situations we need to be able to learn cause-effect relations over days and weeks, which necessitates long-term memory. 413 participants completed a smartphone study, which compared learning a cause-effect relation one trial per day for 24 days versus the traditional paradigm of 24 trials back-to-back. Surprisingly, we found few differences between the short versus long timeframes. Subjects were able to accurately detect generative and preventive causal relations, and they exhibited illusory correlations in both the short and long timeframe tasks. These results provide initial evidence that experience-based learning over long timeframes exhibits similar strengths and weaknesses as in short timeframes. However, learning over long timeframes may become more impaired with more complex tasks.
Article
Causal illusions have been postulated as cognitive mediators of pseudoscientific beliefs, which, in turn, might lead to the use of pseudomedicines. However, while the laboratory tasks aimed to explore causal illusions typically present participants with information regarding the consequences of administering a fictitious treatment versus not administering any treatment, real-life decisions frequently involve choosing between several alternative treatments. In order to mimic these realistic conditions, participants in two experiments received information regarding the rate of recovery when each of two different fictitious remedies were administered. The fictitious remedy that was more frequently administered was given higher effectiveness ratings than the low-frequency one, independent of the absence or presence of information about the spontaneous recovery rate. Crucially, we also introduced a novel dependent variable that involved imagining new occasions in which the ailment was present and asking participants to decide which treatment they would opt for. The inclusion of information about the base rate of recovery significantly influenced participants’ choices. These results imply that the mere prevalence of popular treatments might make them seem particularly effective. It also suggests that effectiveness ratings should be interpreted with caution as they might not accurately reflect real treatment choices. Materials and datasets are available at the Open Science Framework [https://osf.io/fctjs/].
Article
Empirical information available for causal judgment in everyday life tends to take the form of quasi-experimental designs, lacking control groups, more than the form of contingency information that is usually presented in experiments. Stimuli were presented in which values of an outcome variable for a single individual were recorded over six time periods and an intervention was introduced between the fifth and sixth time periods. Participants judged whether and how much the intervention affected the outcome. With numerical stimulus information, judgments were higher for a pre-intervention profile in which all values were the same than for pre-intervention profiles with any other kind of trend. With graphical stimulus information, judgments were more sensitive to trends, tending to be higher when an increase after the intervention was preceded by a decreasing series than when it was preceded by an increasing series ending on the same value at the fifth time period. It is suggested that a feature-analytic model, in which the salience of different features of information varies between presentation formats, may provide the best prospect of explaining the results.
Article
The human cognitive system is fine-tuned to detect patterns in the environment with the aim of predicting important outcomes and, eventually, to optimize behavior. Built under the logic of the least-costly mistake, this system has evolved biases to not overlook any meaningful pattern, even if this means that some false alarms will occur, as in the case of when we detect a causal link between two events that are actually unrelated (i.e., a causal illusion). In this review, we examine the positive and negative implications of causal illusions, emphasizing emotional aspects (i.e., causal illusions are negatively associated with negative mood and depression) and practical, health-related issues (i.e., causal illusions might underlie pseudoscientific beliefs, leading to dangerous decisions). Finally, we describe several ways to obtain control over causal illusions, so that we could be able to produce them when they are beneficial and avoid them when they are harmful.
Article
Background We have previously presented two educational interventions aimed to diminish causal illusions and promote critical thinking. In both cases, these interventions reduced causal illusions developed in response to active contingency learning tasks, in which participants were able to decide whether to introduce the potential cause in each of the learning trials. The reduction of causal judgments appeared to be influenced by differences in the frequency with which the participants decided to apply the potential cause, hence indicating that the intervention affected their information sampling strategies. Objective In the present study, we investigated whether one of these interventions also reduces causal illusions when covariation information is acquired passively. Method Forty-one psychology undergraduates received our debiasing intervention, while 31 students were assigned to a control condition. All participants completed a passive contingency learning task. Results We found weaker causal illusions in students who participated in the debiasing intervention, compared to the control group. Conclusion The intervention affects not only the way the participants look for new evidence, but also the way they interpret given information. Teaching implications Our data extend previous results regarding evidence-based educational interventions aimed at promoting critical thinking to situations in which we act as mere observers.
Article
Causal illusion has been proposed as a cognitive mediator of pseudoscientific beliefs. However, previous studies have only tested the association between this cognitive bias and a closely related but different type of unwarranted beliefs, those related to superstition and paranormal phenomena. Participants (n = 225) responded to a novel questionnaire of pseudoscientific beliefs designed for this study. They also completed a contingency learning task in which a possible cause, infusion intake, and a desired effect, headache remission, were actually non‐contingent. Volunteers with higher scores on the questionnaire also presented stronger causal illusion effects. These results support the hypothesis that causal illusions might play a fundamental role in the endorsement of pseudoscientific beliefs.
Article
Full-text available
The widespread prevalence and persistence of misinformation in contemporary societies, such as the false belief that there is a link between childhood vaccinations and autism, is a matter of public concern. For example, the myths surrounding vaccinations, which prompted some parents to withhold immunization from their children, have led to a marked increase in vaccine-preventable disease, as well as unnecessary public expenditure on research and public-information campaigns aimed at rectifying the situation. We first examine the mechanisms by which such misinformation is disseminated in society, both inadvertently and purposely. Misinformation can originate from rumors but also from works of fiction, governments and politicians, and vested interests. Moreover, changes in the media landscape, including the arrival of the Internet, have fundamentally influenced the ways in which information is communicated and misinformation is spread. We next move to misinformation at the level of the individual, and review the cognitive factors that often render misinformation resistant to correction. We consider how people assess the truth of statements and what makes people believe certain things but not others. We look at people’s memory for misinformation and answer the questions of why retractions of misinformation are so ineffective in memory updating and why efforts to retract misinformation can even backfire and, ironically, increase misbelief. Though ideology and personal worldviews can be major obstacles for debiasing, there nonetheless are a number of effective techniques for reducing the impact of misinformation, and we pay special attention to these factors that aid in debiasing. We conclude by providing specific recommendations for the debunking of misinformation. These recommendations pertain to the ways in which corrections should be designed, structured, and applied in order to maximize their impact. Grounded in cognitive psychological theory, these recommendations may help practitioners—including journalists, health professionals, educators, and science communicators—design effective misinformation retractions, educational tools, and public-information campaigns.
Article
Full-text available
Recent research has shown superstitious behaviour and illusion of control in human subjects exposed to the negative reinforcement conditions that are traditionally assumed to lead to the opposite outcome (i.e. learned helplessness). The experiments reported in this paper test the generality of these effects in two different tasks and under different conditions of percentage (75% vs. 25%) and distribution (random vs. last-trials) of negative reinforcement (escape from uncontrollable noise). All three experiments obtained superstitious behaviour and illusion of control and question the generality of learned helplessness as a consequence of exposing humans to uncontrollable outcomes.
Article
Full-text available
Depressive realism consists of the lower personal control over uncontrollable events perceived by depressed as compared to nondepressed individuals. In this article, we propose that the realism of depressed individuals is caused not by an increased accuracy in perception, but by their more comprehensive exposure to the actual environmental contingencies, which in turn is due to their more passive pattern of responding. To test this hypothesis, dysphoric and nondysphoric participants were exposed to an uncontrollable task and both their probability of responding and their judgment of control were assessed. As was expected, higher levels of depression correlated negatively with probability of responding and with the illusion of control. Implications for a therapy of depression are discussed.
Article
Full-text available
Several classic studies have concluded that the accuracy of identifying uncontrollable situations depends heavily on depressive mood. Nondepressed participants tend to exhibit an optimistic illusion of control, whereas depressed participants tend to better detect a lack of control. Recently, we suggested that the different activity levels (measured as the probability of responding during a contingency learning task) exhibited by depressed and nondepressed individuals is partly responsible for this effect. The two studies presented in this paper provide further support for this mediational hypothesis, in which mood is the distal cause of the illusion of control operating through activity level, the proximal cause. In Study 1, the probability of responding, P(R), was found to be a mediator variable between the depressive symptoms and the judgments of control. In Study 2, we intervened directly on the mediator variable: The P(R) for both depressed and nondepressed participants was manipulated through instructions. Our results confirm that P(R) manipulation produced differences in the participants' perceptions of uncontrollability. Importantly, the intervention on the mediator variable cancelled the effect of the distal cause; the participants' judgments of control were no longer mood dependent when the P(R) was manipulated. This result supports the hypothesis that the so-called depressive realism effect is actually mediated by the probability of responding.
Article
Full-text available
Although normatively irrelevant to the relationship between a cue and an outcome, outcome density (i.e. its base-rate probability) affects people's estimation of causality. By what process causality is incorrectly estimated is of importance to an integrative theory of causal learning. A potential explanation may be that this happens because outcome density induces a judgement bias. An alternative explanation is explored here, following which the incorrect estimation of causality is grounded in the processing of cue–outcome information during learning. A first neural network simulation shows that, in the absence of a deep processing of cue information, cue–outcome relationships are acquired but causality is correctly estimated. The second simulation shows how an incorrect estimation of causality may emerge from the active processing of both cue and outcome information. In an experiment inspired by the simulations, the role of a deep processing of cue information was put to test. In addition to an outcome density manipulation, a shallow cue manipulation was introduced: cue information was either still displayed (concurrent) or no longer displayed (delayed) when outcome information was given. Behavioural and simulation results agree: the outcome-density effect was maximal in the concurrent condition. The results are discussed with respect to the extant explanations of the outcome-density effect within the causal learning framework.
Article
Full-text available
It is well known that certain variables can bias judgements about the perceived contingency between an action and an outcome, making them depart from the normative predictions. For instance, previous studies have proven that the activity level or probability of responding, P(R), is a crucial variable that can affect these judgements in objectively noncontingent situations. A possible account for the P(R) effect is based on the differential exposure to actual contingencies during the training phase, which is in turn presumably produced by individual differences in participants' P(R). The current two experiments replicate the P(R) effect in a free-response paradigm, and show that participants' judgements are better predicted by P(R) than by the actual contingency to which they expose themselves. Besides, both experiments converge with previous empirical data, showing a persistent bias that does not vanish as training proceeds. These findings contrast with the preasymptotic and transitory effect predicted by several theoretical models.
Article
Full-text available
The acquisition of a negative evaluation of a fictitious minority social group in spite of the absence of any objective correlation between group membership and negative behaviours was described by Hamilton and Gifford (1976) as an instance of an illusory correlation. We studied the acquisition and attenuation through time of this correlation learning effect. In two experiments we asked for participants' judgements of two fictitious groups using an online version of a group membership belief paradigm. We tested how judgements of the two groups changed as a function of the amount of training they received. Results suggest that the perception of the illusory correlation effect is initially absent, emerges with intermediate amounts of absolute experience, but diminishes and is eliminated with increased experience. This illusory correlation effect can be considered to reflect incomplete learning rather than a bias due to information loss in judgements or distinctiveness.
Article
Full-text available
Active contingency tasks, such as those used to explore judgments of control, suffer from variability in the actual values of critical variables. The authors debut a new, easily implemented procedure that restores control over these variables to the experimenter simply by telling participants when to respond, and when to withhold responding. This command-performance procedure not only restores control over critical variables such as actual contingency, it also allows response frequency to be manipulated independently of contingency or outcome frequency. This yields the first demonstration, to our knowledge, of the equivalent of a cue density effect in an active contingency task. Judgments of control are biased by response frequency, just as they are also biased by outcome frequency.
Article
Full-text available
In 4 experiments, 144 depressed and 144 nondepressed undergraduates (Beck Depression Inventory) were presented with one of a series of problems varying in the degree of contingency. In each problem, Ss estimated the degree of contingency between their responses (pressing or not pressing a button) and an environmental outcome (onset of a green light). Depressed Ss' judgments of contingency were surprisingly accurate in all 4 experiments. Nondepressed Ss overestimated the degree of contingency between their responses and outcomes when noncontingent outcomes were frequent and/or desired and underestimated the degree of contingency when contingent outcomes were undesired. Thus, predictions derived from social psychology concerning the linkage between subjective and objective contingencies were confirmed for nondepressed but not for depressed Ss. The learned helplessness and self-serving motivational bias hypotheses are evaluated as explanations of the results.
Article
Full-text available
How humans infer causation from covariation has been the subject of a vigorous debate, most recently between the computational causal power account (P. W. Cheng, 1997) and associative learning theorists (e.g., K. Lober & D. R. Shanks, 2000). Whereas most researchers in the subject area agree that causal power as computed by the power PC theory offers a normative account of the inductive process, Lober and Shanks, among others, have questioned the empirical validity of the theory. This article offers a full report and additional analyses of the original study featured in Lober and Shanks's critique (M. J. Buehner & P. W. Cheng, 1997) and reports tests of Lober and Shanks's and other explanations of the pattern of causal judgments. Deviations from normativity, including the outcome-density bias, were found to be misperceptions of the input or other artifacts of the experimental procedures rather than inherent to the process of causal induction.
Article
Full-text available
The perception of the effectiveness of instrumental actions is influenced by depressed mood. Depressive realism (DR) is the claim that depressed people are particularly accurate in evaluating instrumentality. In two experiments, the authors tested the DR hypothesis using an action-outcome contingency judgment task. DR effects were a function of intertrial interval length and outcome density, suggesting that depressed mood is accompanied by reduced contextual processing rather than increased judgment accuracy. The DR effect was observed only when participants were exposed to extended periods in which no actions or outcomes occurred. This implies that DR may result from an impairment in contextual processing rather than accurate but negative expectations. Therefore, DR is consistent with a cognitive distortion view of depression.
Article
Full-text available
There are many psychological tasks that involve the pairing of binary variables. The various tasks used often address different questions and are motivated by different theoretical issues and traditions. Upon closer examination, however, the tasks are remarkably similar in structure. In the present paper, we examine two such tasks, the contingency judgment task and the signal detection task, and we apply a signal detection analysis to contingency judgment data. We suggest that the signal detection analysis provides a novel interpretation of a well-established but poorly understood phenomenon of contingency judgments--the outcome-density effect.
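For readers less familiar with the signal detection vocabulary used here, the generic computation below shows how identical sensitivity (d′) can coexist with different response criteria. The hit and false-alarm rates are hypothetical, and the original paper's mapping of contingency data onto hits and false alarms is more involved than this sketch.

```python
from statistics import NormalDist

z = NormalDist().inv_cdf  # inverse of the standard normal CDF

def sdt(hit_rate, fa_rate):
    d_prime = z(hit_rate) - z(fa_rate)             # sensitivity
    criterion = -0.5 * (z(hit_rate) + z(fa_rate))  # response bias
    return d_prime, criterion

# Same d', different criterion: one reading of the outcome-density effect
# is a shift in response bias rather than a change in sensitivity.
print(sdt(0.80, 0.40))  # d' ~= 1.09, criterion ~= -0.29 (liberal)
print(sdt(0.60, 0.20))  # d' ~= 1.09, criterion ~=  0.29 (conservative)
```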
Article
Full-text available
In three experiments, we show that people respond differently when they make predictions as opposed to when they are asked to estimate the causal or the predictive value of cues: Their response to each of those three questions is based on different sets of information. More specifically, we show that prediction judgments depend on the probability of the outcome given the cue, whereas causal and predictive-value judgments depend on the cue-outcome contingency. Although these results might seem problematic for most associative models in their present form, they can be explained by explicitly assuming the existence of postacquisition processes that modulate participants' responses in a flexible way.
Article
Full-text available
In cause-outcome contingency judgement tasks, judgements often reflect the actual contingency but are also influenced by the overall probability of the outcome, P(O). Action-outcome instrumental learning tasks can foster a pattern in which judgements of positive contingencies become less positive as P(O) increases. Variable contiguity between the action and the outcome may produce this bias. Experiment 1 recorded judgements of positive contingencies that were largely uninfluenced by P(O) using an immediate contiguity procedure. Experiment 2 directly compared variable versus constant contiguity. The predicted interaction between contiguity and P(O) was observed for positive contingencies. These results stress the sensitivity of the causal learning mechanism to temporal contiguity.
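For reference, the "actual contingency" in such tasks is usually formalized as ΔP over the standard 2 × 2 table of trial counts; the sketch below uses the conventional cell labels (a: cue and outcome, b: cue without outcome, c: outcome without cue, d: neither), which are an assumption of ours rather than this paper's notation:

    # Hedged sketch: Delta-P from the conventional 2x2 contingency table.
    def delta_p(a, b, c, d):
        """a: C&O, b: C&~O, c: ~C&O, d: ~C&~O (conventional cell labels)."""
        p_o_given_c = a / (a + b)        # P(O | C)
        p_o_given_not_c = c / (c + d)    # P(O | ~C)
        return p_o_given_c - p_o_given_not_c

    # A null contingency with a high outcome probability, P(O) = 0.75:
    print(delta_p(30, 10, 30, 10))  # 0.0 -- yet often judged positive

The example holds ΔP at zero while P(O) is high (0.75), the combination under which overestimations are typically reported.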
Article
Full-text available
In three experiments we tested how the spacing of trials during acquisition of zero, positive, and negative response-outcome contingencies differentially affected depressed and nondepressed students' judgements. Experiment 1 found that nondepressed participants' judgements of zero contingencies increased with longer intertrial intervals (ITIs) but not with simply longer procedure durations. Depressed groups' judgements were not sensitive to either manipulation, producing the effect known as depressive realism only with long ITIs. Experiments 2 and 3 tested a prediction of Cheng's (1997) Power PC theory and the Rescorla-Wagner (1972) model, namely that the increase in context exposure experienced during the ITI should influence judgements most with negative contingencies and least with positive contingencies. Results suggested that depressed people were less sensitive to differences in contingency and contextual exposure. We propose that a context-processing difference between depressed and nondepressed people removes any objective notion of "realism" of the kind originally employed to explain the depressive realism effect (Alloy & Abramson, 1979).
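For context, the Rescorla-Wagner (1972) model referenced here updates the associative strength V of every cue present on a trial, including the experimental context, by the familiar error-correction rule (stated in its usual form):

    ΔV = αβ(λ − ΣV)

where α and β are learning-rate parameters for the cue and the outcome, λ is the asymptote the outcome supports (commonly 1 when the outcome occurs and 0 otherwise), and ΣV sums the strengths of all cues present on the trial. Because long ITIs amount to context-alone exposure with no outcome, they drive the context's strength down, which is one route by which ITI length can alter contingency judgements under this model.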
Article
Full-text available
When people try to obtain a desired event and this outcome occurs independently of their behavior, they often think that they are controlling its occurrence. This is known as the illusion of control, and it is the basis for most superstitions and pseudosciences. However, most experiments demonstrating this effect were conducted many years ago, almost always in the controlled environment of the psychology laboratory and with psychology students as subjects. Here, we explore the generality of this effect and show that it remains a robust phenomenon today, one that can be observed even in the context of a very simple computer program that users try to control (and believe that they are controlling) over the Internet. Understanding how robust and general this effect is constitutes a first step towards eradicating irrational and pseudoscientific thinking.
Article
Full-text available
Causal judgment is assumed to play a central role in prediction, control, and explanation. Here, we consider the function or functions that map contingency information concerning the relationship between a single cue and a single outcome onto causal judgments. We evaluate normative accounts of causal induction and report the findings of an extensive meta-analysis in which we used a cross-validation model-fitting method and carried out a qualitative analysis of experimental trends in order to compare a number of alternative models. The best model to emerge from this competition is one in which judgments are based on the difference between the amount of confirming and disconfirming evidence. A rational justification for the use of this model is proposed.
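A simple way to formalize the winning rule (our hedged restatement, again using the conventional cell labels a-d defined above) is the difference between confirming cases (cell a, cue with outcome; cell d, neither) and disconfirming cases (cells b and c), often normalized by the total number of observations:

    ΔD = [(a + d) − (b + c)] / (a + b + c + d)

Fitted versions of such rules typically let each cell carry its own weight, consistent with the unequal use of cell information reported elsewhere on this page.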
Article
Wasserman (1990a) has reported an experiment showing that people's attributions of causality in circumstances where they have to make inferences from brief verbal descriptions correspond to the attributions they make in situations where they actually experience causal sequences, and on the basis of that result he has suggested that psychologists should search for general laws of learning that might account for correspondences such as this one. I briefly describe some further similarities that can be obtained between causal judgments in experienced and described situations. However, the suggestion that a common mechanism may be involved in the two cases is questioned: I suggest that the correspondence is little more than a coincidence and does not arise from the operation of a common mechanism. Instead, I argue that people, along with other organisms, possess an associative learning mechanism that operates in experienced situations, but that we also possess domain-specific causal beliefs and metabeliefs about causation. These latter two types of beliefs play little role in experienced causal situations but can readily be applied in making causal attributions from descriptions.
Article
Despite Miller's (1969) now-famous clarion call to "give psychology away" to the general public, scientific psychology has done relatively little to combat festering problems of ideological extremism and both inter- and intragroup conflict. After proposing that ideological extremism is a significant contributor to world conflict and that confirmation bias and several related biases are significant contributors to ideological extremism, we raise a crucial scientific question: Can debiasing the general public against such biases promote human welfare by tempering ideological extremism? We review the knowns and unknowns of debiasing techniques against confirmation bias, examine potential barriers to their real-world efficacy, and delineate future directions for research on debiasing. We argue that research on combating extreme confirmation bias should be among psychological science's most pressing priorities.
Article
This chapter argues that experimental psychology is no longer a unified field of scholarship. The most obvious sign of disintegration is the division of the Journal of Experimental Psychology into specialized periodicals. Many forces propel this fractionation. First, the explosion of interest in many small spheres of inquiry has made it extremely difficult for an individual to master more than one. Second, the recent popularity of interdisciplinary research has lured many workers away from the central issues of experimental psychology. Third, there is a growing division between researchers of human and animal behavior; this division has been primarily driven by contemporary cognitive psychologists, who see little reason to refer to the behavior of animals or to inquire into the generality of behavioral principles. The chapter then considers the study of causal perception, an area that is certainly at the core of experimental psychology. Although recent research in animal cognition has taken the tack of bringing human paradigms into the animal laboratory, the experimental research described here has adopted the reverse strategy of bringing animal paradigms into the human laboratory. A further unfortunate fact is that today's experimental psychologists receive little or no training in the history and philosophy of psychology. This neglect means that investigations of a problem area are often undertaken without a full understanding of the analytical issues that would help guide empirical inquiry.
Article
This chapter discusses associative accounts of causality judgment. The perceptual and cognitive approaches to causal attribution can be contrasted with a more venerable tradition of associationism. The one area of psychology that has offered an associative account of a process sensitive to causality is conditioning. An instrumental or operant conditioning procedure presents a subject with a causal relationship between an action and an outcome, the reinforcer: performing the action either causes the reinforcer to occur under a positive contingency or prevents its occurrence under a negative one, and subjects demonstrate sensitivity to these causal relationships by adjusting their behavior appropriately. Most of these associative theories were developed to explain classical or Pavlovian conditioning rather than the instrumental or operant variety, but there are good reasons for assuming that the two types of conditioning are mediated by a common learning process.
Article
Experiments in which subjects are asked to analytically assess response-outcome relationships have frequently yielded accurate judgments of response-outcome independence, but more naturalistically set experiments in which subjects are instructed to obtain the outcome have frequently yielded illusions of control. The present research tested the hypothesis that a differential probability of responding, p(R), between these two traditions could be at the basis of these different results. Subjects received response-independent outcomes and were instructed either to obtain the outcome (naturalistic condition) or to behave scientifically in order to find out how much control over the outcome was possible (analytic condition). Subjects in the naturalistic condition tended to respond at almost every opportunity and developed a strong illusion of control. Subjects in the analytic condition maintained their p(R) at a point close to .5 and made accurate judgments of control. The illusion of control observed in the naturalistic condition appears to be a collateral effect of a high tendency to respond in subjects who are trying to obtain an outcome; this tendency to respond prevents them from learning that the outcome would have occurred with the same probability if they had not responded.
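The mechanism is easy to see in a short simulation (a hedged sketch with assumed parameters, not the original procedure): when p(R) is high, response-absent trials are rare, so the learner gathers almost no evidence about how often the outcome occurs on its own.

    # Hedged sketch: high p(R) starves the cells needed to learn P(O | no response).
    import random

    def simulate(p_respond, p_outcome=0.7, n_trials=50, seed=0):
        random.seed(seed)
        a = b = c = d = 0  # a: R&O, b: R&~O, c: ~R&O, d: ~R&~O
        for _ in range(n_trials):
            responded = random.random() < p_respond
            outcome = random.random() < p_outcome  # independent of responding
            if responded:
                a, b = a + outcome, b + (not outcome)
            else:
                c, d = c + outcome, d + (not outcome)
        return a, b, c, d

    print(simulate(p_respond=0.95))  # "naturalistic": almost no no-response trials
    print(simulate(p_respond=0.50))  # "analytic": both rows well sampled

With p(R) near .95 the c and d cells stay nearly empty, whereas p(R) near .5 samples both rows of the contingency table about equally.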
Article
Two experiments used a rich and systematic set of noncontingent problems to examine humans' ability to detect the absence of an inter-event relation. Each found that Ss who used nonnormative strategies were quite inaccurate in judging some types of noncontingent problems. Group data indicate that Ss used the 2 × 2 information in the order Cell A > Cell B > Cell C > Cell D; individual data indicate that Ss considered the information in Cell A to be most important, that in Cell D to be least important, and that in Cells B and C to be of intermediate importance. Trial-by-trial presentation led to less accurate contingency judgments and to more uneven use of 2 × 2 cell information than did summary-table presentation. Finally, the judgment processes of about 70% and 80%, respectively, of nonnormative strategy users under trial-by-trial and summary-table procedures could be accounted for by an averaging model.
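One way to express the reported ordering of cell use is a weighted-evidence rule in which each cell contributes in proportion to its weight; the weights below are illustrative placeholders consistent with A > B > C > D, not the fitted values from the study:

    # Hedged sketch: weighted-evidence judgement with unequal cell weights.
    def weighted_judgement(a, b, c, d, w=(1.0, 0.8, 0.6, 0.4)):
        """w reflects the reported importance ordering A > B > C > D
        (illustrative weights, not fitted parameters)."""
        wa, wb, wc, wd = w
        confirming = wa * a + wd * d
        disconfirming = wb * b + wc * c
        return (confirming - disconfirming) / (confirming + disconfirming)

    # The same null-contingency counts used in the Delta-P sketch above:
    print(weighted_judgement(30, 10, 30, 10))  # about 0.13, i.e., positive

With equal weights this example would score exactly 0; the unequal weighting alone pushes the judgement positive, mirroring the overestimations of null contingency described throughout this page.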
Article
Examines the varied measures of contingency that have appeared in the psychological judgment literature concerning binary variables. It is argued that accurate judgments about related variables should not be used to infer that the judgments are based on appropriate information.
Article
Studies concerned with judgments of contingency between binary variables have often ignored what the variables stand for. The two values of a binary variable can be represented as a prevailing state (nonevent) or as an active state (event). Judgments were obtained under the four conditions resulting from the combination of a binary input variable that can be represented as event-nonevent or event-event with an outcome variable that can be represented in the same way. Experiment 1 shows that judgments of data sets exhibiting the same degree of covariation depend upon how the input and output variables are represented. Experiment 2 examines the case where both the input and output variables are represented as event-nonevent. Judgments were higher when the input event was paired with the output event and the input nonevent with the output nonevent than when event was paired with nonevent, suggesting a causal compatibility of event-event pairings and a causal incompatibility of event-nonevent pairings. Experiment 3 demonstrates that judgments of the strength of the relation between binary input and output variables are not based on the appropriate statistical measure, the difference between two conditional probabilities. The overall pattern of judgments in the three experiments is mainly explicable on the basis of two principles: (1) judgments tend to be based on the difference between confirming and disconfirming cases, and (2) causal compatibility in the representation of the input and output variables plays a critical role.
Article
[I]t may be that … reason, self-consciousness and self-control which seem to sever human intellect so sharply from that of all other animals are really but secondary results of the tremendous increase in the number, delicacy and complexity of associations which the human animal can form. It may be that the evolution of intellect has no breaks, that its progress is continuous from its first appearance to its present condition in adult … human beings. If we could prove that what we call ideational life and reasoning were not new and unexplainable species of intellectual life but only the natural consequences of an increase in the number, delicacy, and complexity of associations of the general animal sort, we should have made out an evolution of mind comparable to the evolution of living forms. (p. 286)
Article
Pseudoscience, superstitions, and quackery are serious problems that threaten public health and in which many variables are involved. Psychology, however, has much to say about them, as it is the illusory perceptions of causality of so many people that need to be understood. The proposal we put forward is that these illusions arise from the normal functioning of the cognitive system when trying to associate causes and effects. Thus, we propose to apply basic research and theories on causal learning to reduce the impact of pseudoscience. We review the literature on the illusion of control and the causal learning traditions, and then present an experiment as an illustration of how this approach can provide fruitful ideas to reduce pseudoscientific thinking. The experiment first illustrates the development of a quackery illusion through the testimony of fictitious patients who report feeling better. Two different strategies, derived from the integration of the causal learning and illusion of control domains, are then shown to be effective in reducing this illusion. One is showing the testimony of people who feel better without having followed the treatment. The other is asking participants to think in causal terms rather than in terms of effectiveness.
Article
Many theories of contingency learning assume (either explicitly or implicitly) that predicting whether an outcome will occur should be easier than making a causal judgment. Previous research suggests that outcome predictions would depart from normative standards less often than causal judgments, which is consistent with the idea that the latter are based on more numerous and complex processes. However, only indirect evidence exists for this view. The experiment presented here specifically addresses this issue by allowing for a fair comparison of causal judgments and outcome predictions, both collected at the same stage with identical rating scales. Cue density, a parameter known to affect judgments, is manipulated in a contingency learning paradigm. The results show that, if anything, the cue-density bias is stronger in outcome predictions than in causal judgments. These results contradict key assumptions of many influential theories of contingency learning.
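For comparison with such theories, a minimal Rescorla-Wagner simulation (a hedged sketch with assumed learning rates, a context cue that is always present, and zero programmed contingency) reproduces a pre-asymptotic cue-density bias: the target cue ends training with more strength when P(C) is high.

    # Hedged Rescorla-Wagner sketch of the cue-density bias (assumed parameters).
    import random

    def rw_cue_strength(p_cue, p_outcome=0.75, n_trials=40,
                        alpha=0.3, n_runs=2000, seed=1):
        random.seed(seed)
        total = 0.0
        for _ in range(n_runs):
            v_cue, v_ctx = 0.0, 0.0  # strengths of target cue and context
            for _ in range(n_trials):
                cue = random.random() < p_cue
                lam = 1.0 if random.random() < p_outcome else 0.0  # null contingency
                error = lam - (v_ctx + (v_cue if cue else 0.0))
                v_ctx += alpha * error
                if cue:
                    v_cue += alpha * error
            total += v_cue
        return total / n_runs

    print(rw_cue_strength(p_cue=0.9))  # high cue density: larger mean v_cue
    print(rw_cue_strength(p_cue=0.3))  # low cue density: mean v_cue near 0

Under this model the cue's strength converges to ΔP = 0 at asymptote, so the bias shown here is a finite-training effect; that dependence on training length is one of the assumptions such data put under pressure.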
Article
A Treatise of Human Nature, by David Hume (eBooks @ Adelaide, The University of Adelaide Library).
Article
The impact of the presentation format of covariation information on causal judgements was examined in a fictitious virus-disease causal induction task. Six different judgement conditions were created by crossing two levels of virus-disease covariation …