Article

Abstract

Many theories of contingency learning assume (either explicitly or implicitly) that predicting whether an outcome will occur should be easier than making a causal judgment. Previous research suggests that outcome predictions depart from normative standards less often than causal judgments, which is consistent with the idea that the latter are based on more numerous and complex processes. However, only indirect evidence exists for this view. The experiment presented here specifically addresses this issue by allowing for a fair comparison of causal judgments and outcome predictions, both collected at the same stage with identical rating scales. Cue density, a parameter known to affect judgments, is manipulated in a contingency learning paradigm. The results show that, if anything, the cue-density bias is stronger in outcome predictions than in causal judgments. These results contradict key assumptions of many influential theories of contingency learning.


... Similarly, other things being equal, participants' judgments tend to covary with the marginal probability of the cue, defined as the proportion of trials in which the cue is present; that is, p(cue) = (a + b)/(a + b + c + d). Figure 1F represents an example where the probability of the cue is high, but there is no contingency between cue and outcome. Again, participants tend to overestimate contingency in situations like this (Allan & Jenkins, 1983; Matute et al., 2011; Perales et al., 2005; Vadillo, Musca, Blanco, & Matute, 2011; Wasserman et al., 1996). The biasing effects of the probability of the outcome and the probability of the cue are typically known as the outcome- and cue-density biases. ...
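For concreteness, here is a minimal Python sketch (not from the paper) showing how these quantities follow from the four cell frequencies a, b, c, d of the standard 2x2 contingency table, with contingency computed by the usual ΔP rule:

```python
# a = cue & outcome, b = cue & no outcome,
# c = no cue & outcome, d = no cue & no outcome.
def contingency_stats(a: int, b: int, c: int, d: int) -> dict:
    n = a + b + c + d
    p_cue = (a + b) / n                  # marginal probability of the cue
    p_outcome = (a + c) / n              # marginal probability of the outcome
    delta_p = a / (a + b) - c / (c + d)  # Delta-P contingency rule
    return {"p_cue": p_cue, "p_outcome": p_outcome, "delta_p": delta_p}

# A Figure 1F-style case: frequent cue, zero contingency.
# Cell counts 60/20/15/5 give p(cue) = .80 and Delta-P = .75 - .75 = 0.
print(contingency_stats(60, 20, 15, 5))
```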
... To answer this question, we decided to reanalyze data from our own laboratory using the strategy followed by Allan et al. Specifically, we reanalyzed data from nine experimental conditions exploring the cue-density bias (originally published in Blanco et al., 2013; Matute et al., 2011; Vadillo et al., 2011; Yarritu, Matute, & Vadillo, 2014) and three experimental conditions exploring the outcome-density bias (originally published in Musca et al., 2010; Vadillo, Miller, & Matute, 2005). All these experiments were conducted using the standard experimental paradigm outlined above. ...
... However, in Matute et al. (2011) and Vadillo et al. (2005, 2011) they were asked to provide several judgments. In the case of Matute et al. (2011) and Vadillo et al. (2011), we included in the analyses the judgment that yielded the stronger cue-probability bias in each experiment. If anything, selecting judgments that show strong biases should make it easier to observe any potential dissociation between judgments and trial-by-trial predictions. ...
Article
Full-text available
Decades of research in causal and contingency learning show that people’s estimations of the degree of contingency between two events are easily biased by the relative probabilities of those two events. If two events co-occur frequently, then people tend to overestimate the strength of the contingency between them. Traditionally, these biases have been explained in terms of relatively simple single-process models of learning and reasoning. However, more recently some authors have found that these biases do not appear in all dependent variables and have proposed dual-process models to explain these dissociations between variables. In the present paper we review the evidence for dissociations supporting dual-process models and we point out important shortcomings of this literature. Some dissociations seem to be difficult to replicate or poorly generalizable and others can be attributed to methodological artefacts. Overall, we conclude that support for dual-process models of biased contingency detection is scarce and inconclusive.
... In addition to active vs. passive roles of participants, there are many other variants that can be introduced in this task and that have been shown to affect the participants' estimations of causality. Examples include changing the wording of questions asked at the end of the experiment about the causal relationship (Crocker, 1982; Vadillo et al., 2005, 2011; Collins and Shanks, 2006; De Houwer et al., 2007; Blanco et al., 2010; Shou and Smithson, 2015), the order in which the different trial types are presented (Langer and Roth, 1975; López et al., 1998), the frequency with which judgments are requested (Collins and Shanks, 2002; Matute et al., 2002), the description of the relevant events as causes, predictors, or effects (Waldmann and Holyoak, 1992; Cobos et al., 2002; Pineño et al., 2005), the temporal contiguity between the two events (e.g., Shanks et al., 1989; Wasserman, 1990; Lagnado and Sloman, 2006; Lagnado et al., 2007), and many other variables that fortunately are becoming well known. In the following sections, we will focus on the variables that seem to affect the illusion most critically in cases of null contingency. ...
... With all other things being equal and given the null contingency, the illusion of a cause producing an outcome will be significantly stronger if the cause occurs on 75% of the occasions than if it occurs on 25%. This effect is also called the cause-density or cause-frequency bias and has been shown in many experiments (Allan and Jenkins, 1983; Wasserman et al., 1996; Perales et al., 2005; Matute et al., 2011; Vadillo et al., 2011; Blanco et al., 2013; Yarritu et al., 2014). The effect is particularly strong when the probability of the outcome is high as well, since there will be more opportunities for coincidences. ...
... The results showed that even though participants did not perform the action themselves, observing many vs. few cases in which the cause was present had a significant effect on the illusion that they developed. Those observing fewer patients who followed the treatment reported significantly weaker illusions (e.g., Matute et al., 2011; Vadillo et al., 2011; Yarritu et al., 2014). Thus, there is not only a tendency to use with greater frequency those treatments that seem to be more effective, but also a tendency to perceive more frequently used treatments as more effective. ...
Article
Full-text available
Illusions of causality occur when people develop the belief that there is a causal connection between two events that are actually unrelated. Such illusions have been proposed to underlie pseudoscience and superstitious thinking, sometimes leading to disastrous consequences in relation to critical life areas, such as health, finances, and wellbeing. Like optical illusions, they can occur for anyone under well-known conditions. Scientific thinking is the best possible safeguard against them, but it does not come intuitively and needs to be taught. Teaching how to think scientifically should benefit from better understanding of the illusion of causality. In this article, we review experiments that our group has conducted on the illusion of causality during the last 20 years. We discuss how research on the illusion of causality can contribute to the teaching of scientific thinking and how scientific thinking can reduce illusion.
... This is known as the outcome-density bias and is a key factor in the development of the illusion of causality and of control (Allan & Jenkins, 1983; Alloy & Abramson, 1979; Hannah & Beneteau, 2009; Matute, 1995; Msetfi et al., 2005; Tennen & Sharp, 1983). In addition to the outcome-density bias there is the cue-density bias, which refers to an overestimation of contingency judgments when the probability of the potential cause, p(C), is high (Allan & Jenkins, 1983; Hannah & Beneteau, 2009; Vadillo, Musca, Blanco, & Matute, 2011). While the outcome-density bias has been widely studied, both in situations in which participants are personally involved (i.e., the participants' behavior is the potential cause, see, e.g., Matute, 1995) and in which they are not (i.e., an external event is the potential cause, see, e.g., Allan, Siegel, & Tangen, 2005), the effect of the probability of the cause (i.e., the action) on the illusion of control has received less attention. ...
... Importantly, there is evidence suggesting that being the one who performs the action is not even necessary. The effect of p(C) has been demonstrated in situations in which the potential cause is an external event (e.g., Kutzner, Freytag, Vogel, & Fiedler, 2008; Matute et al., 2011; Perales, Catena, Shanks, & González, 2005; Vadillo et al., 2011). For instance, in a recent experiment by Matute et al. (2011), the contingency between a potential cause (i.e., a fictitious medicine administered by a fictitious agent) and an outcome (recovery from illness) was zero, but p(O) was .80. ...
... This is in line with previous studies in which the influence of p(C) was tested. Indeed, this p(C) effect is often described more generally as the probability-of-the-cue effect, or the cue-density effect, as it occurs with either causes or predictors as cue events (see, e.g., Blanco et al., 2011, 2013; Hannah & Beneteau, 2009; Matute, 1996; Matute et al., 2011; Perales et al., 2005; Vadillo et al., 2011). ...
Article
Full-text available
The illusion of control consists of overestimating the influence that our behavior exerts over uncontrollable outcomes. Available evidence suggests that an important factor in development of this illusion is the personal involvement of participants who are trying to obtain the outcome. The dominant view assumes that this is due to social motivations and self-esteem protection. We propose that this may be due to a bias in contingency detection which occurs when the probability of the action (i.e., of the potential cause) is high. Indeed, personal involvement might have been often confounded with the probability of acting, as participants who are more involved tend to act more frequently than those for whom the outcome is irrelevant and therefore become mere observers. We tested these two variables separately. In two experiments, the outcome was always uncontrollable and we used a yoked design in which the participants of one condition were actively involved in obtaining it and the participants in the other condition observed the adventitious cause-effect pairs. The results support the latter approach: Those acting more often to obtain the outcome developed stronger illusions, and so did their yoked counterparts.
... In particular, the causal illusion is the systematic error of perceiving a causal link between unrelated events that happen to occur in temporal proximity. This cognitive bias could explain why people sometimes judge that completely ineffective treatments cause health benefits (Matute et al., 2011), particularly when both the administration of the treatment (i.e., the cause) and the relief of the symptoms (i.e., the outcome) occur with high frequency (Allan et al., 2005; Hannah and Beneteau, 2009; Musca et al., 2010; Perales et al., 2005; Vadillo et al., 2010). ...
... For that purpose, we designed a procedure in which participants had to judge the effectiveness of two fictitious medicines, one alternative and the other one conventional, so that each of the two scenarios was more or less consistent with the participants' prior beliefs in pseudoscience. First, our results replicate the causal illusion bias that has been found in many previous studies when there is no causal relationship between cause and outcome but the probability of the cause and the probability of the outcome are high (Allan et al., 2005; Hannah and Beneteau, 2009; Musca et al., 2010; Perales et al., 2005; Vadillo et al., 2010). More importantly, we found that those participants who did not believe in alternative medicine (low scores in PES and net trust) tended to show weaker causal illusions in the alternative medicine scenario. ...
Article
Full-text available
Previous research suggests that people may develop stronger causal illusions when the existence of a causal relationship is consistent with their prior beliefs. In the present study, we hypothesized that prior pseudoscientific beliefs will influence judgments about the effectiveness of both alternative medicine and scientific medicine. Participants ( N = 98) were exposed to an adaptation of the standard causal illusion task in which they had to judge whether two fictitious treatments, one described as conventional medicine and the other as alternative medicine, could heal the crises caused by two different syndromes. Since both treatments were completely ineffective, those believing that any of the two medicines worked were exhibiting a causal illusion. Participants also responded to the Pseudoscience Endorsement Scale (PES) and some questions about trust in alternative therapies that were taken from the Survey on the Social Perception of Science and Technology conducted by FECYT. The results replicated the causal illusion effect and extended them by revealing an interaction between the prior pseudoscientific beliefs and the scientific/pseudoscientific status of the fictitious treatment. Individuals reporting stronger pseudoscientific beliefs were more vulnerable to the illusion in both scenarios, whereas participants with low adherence to pseudoscientific beliefs seemed to be more resistant to the illusion in the alternative medicine scenario.
... Likewise, Allan et al. (2005) have argued that the outcome density effect (for a given ΔP value, participants are more likely to perceive a positive contingency between the cue and the outcome if the overall probability of the outcome is higher) is the result of changes in the threshold, rather than in the sensitivity to the contingencies. Similarly, Perales et al. (2005) have made a similar argument concerning the cue density effect (for a given ΔP, participants are more likely to perceive a positive contingency between the cue and the outcome if the overall probability of the cue is higher; Allan & Jenkins, 1983; Matute et al., 2011; Vadillo et al., 2010; Wasserman et al., 1996; White, 2003). ...
... Equation 4 suggests that the rapid and repetitive presentation of X during a stream drove a(X) to a high value, thereby explaining the unusually large cue density effect in the present series. For comparison purposes, the cue density effect observed by Vadillo et al. (2010) corresponds to a Cohen's d of .34, which is quite typical. ...
Article
Full-text available
In a signal detection theory approach to associative learning, the perceived (i.e., subjective) contingency between a cue and an outcome is a random variable drawn from a Gaussian distribution. At the end of the sequence, participants report a positive cue-outcome contingency provided the subjective contingency is above some threshold. Some researchers have suggested that the mean of the subjective contingency distributions and the threshold are controlled by different variables. The present data provide empirical support for this claim. In three experiments, participants were exposed to rapid streams of trials at the end of which they had to indicate whether a target outcome O1 was more likely following a target cue X. Interfering treatments were incorporated in some streams to impede participants' ability to identify the objective X-O1 contingency: interference trials (X was paired with an irrelevant outcome O2), nonreinforced trials (X was presented alone), plus control trials (an irrelevant cue W was paired with O2). Overall, both interference and nonreinforced trials impaired participants' sensitivity to the contingencies as measured by signal detection theory's d', but they also enhanced detection of positive contingencies through a cue density effect, with nonreinforced trials being more susceptible to this effect than interference trials. These results are explicable if one assumes interference and nonreinforced trials impact the mean of the associative strength distribution, while the cue density influences the threshold.
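To make the sensitivity/threshold distinction concrete, here is a small Python sketch of the standard equal-variance signal detection indices invoked in this abstract: d' indexes sensitivity and the criterion c indexes the report threshold. The hit and false-alarm rates below are made-up illustrative values, not data from the study.

```python
from scipy.stats import norm

def sdt_indices(hit_rate: float, fa_rate: float) -> tuple[float, float]:
    """Equal-variance SDT: sensitivity d' and criterion c."""
    z_hit, z_fa = norm.ppf(hit_rate), norm.ppf(fa_rate)
    d_prime = z_hit - z_fa             # separation between the two Gaussians
    criterion = -0.5 * (z_hit + z_fa)  # report threshold (0 = unbiased)
    return d_prime, criterion

# A manipulation that lowers the threshold (e.g., high cue density) raises
# both hits and false alarms, shifting c without necessarily changing d'.
print(sdt_indices(hit_rate=0.80, fa_rate=0.40))
```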
... For instance, for any fixed level of ΔP, participants' judgments tend to increase with the overall probability of the outcome, P(o), an effect known as the outcome-density bias (e.g., Allan and Jenkins, 1983; Allan et al., 2005; Buehner et al., 2003; López et al., 1998; Msetfi et al., 2005; Moreno-Fernández et al., 2017; Musca et al., 2010; Wasserman et al., 1996). Similarly, for any fixed level of ΔP, judgments tend to increase with the overall probability of the cue, P(c), an effect known as the cue-density bias (e.g., Allan and Jenkins, 1983; Matute et al., 2011; Perales et al., 2005; Vadillo et al., 2011; Wasserman et al., 1996). As could be expected, judgments are higher when both P(o) and P(c) are large (Blanco et al., 2013), as the number of cue-outcome coincidences is maximal under these circumstances. ...
... To overcome this problem, we also fitted both models to the numerical judgments of contingency provided by participants at the end of the experiment. These judgments were collected using slightly different procedures across experiments and, consequently, they are noisier than ΔP_pred (Vadillo et al., 2005, 2011). However, they are more likely to capture participants' sensitivity to contingency at the end of the experiment. ...
Article
Full-text available
Our ability to detect statistical dependencies between different events in the environment is strongly biased by the number of coincidences between them. Even when there is no true covariation between a cue and an outcome, if the marginal probability of either of them is high, people tend to perceive some degree of statistical contingency between both events. The present paper explores the ability of the Comparator Hypothesis to explain the general pattern of results observed in this literature. Our simulations show that this model can account for the biasing effects of the marginal probabilities of cues and outcomes. Furthermore, the overall fit of the Comparator Hypothesis to a sample of experimental conditions from previous studies is comparable to that of the popular Rescorla-Wagner model. These results should encourage researchers to further explore and put to the test the predictions of the Comparator Hypothesis in the domain of biased contingency detection.
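The Rescorla-Wagner model mentioned here can be sketched in a few lines of Python. The simulation below is an illustration under assumed parameters (learning rates, trial counts, probabilities are placeholders, not the paper's fitted values): a target cue is accompanied by an always-present context, and cue and outcome are sampled independently, so the programmed contingency is zero. Averaged over runs, the cue's preasymptotic associative strength tends to come out higher when the cue is frequent, mirroring the cue-density bias.

```python
import random

def rescorla_wagner(p_cue: float, p_outcome: float, n_trials: int = 100,
                    alpha_cue: float = 0.3, alpha_ctx: float = 0.1,
                    lam: float = 1.0) -> float:
    """One run with a target cue X plus a context; returns final V(X)."""
    v_cue, v_ctx = 0.0, 0.0
    for _ in range(n_trials):
        cue = random.random() < p_cue
        outcome = random.random() < p_outcome  # independent of the cue
        error = (lam if outcome else 0.0) - (v_ctx + (v_cue if cue else 0.0))
        v_ctx += alpha_ctx * error             # context updated on every trial
        if cue:
            v_cue += alpha_cue * error         # cue updated only when present
    return v_cue

random.seed(1)
for p_c in (0.2, 0.8):  # low vs. high cue density, null contingency
    runs = [rescorla_wagner(p_c, p_outcome=0.75) for _ in range(2000)]
    print(f"P(cue)={p_c}: mean final V(X) = {sum(runs) / len(runs):.3f}")
```

At full asymptote the context absorbs the outcome's base rate and V(X) converges to zero; the bias in this sketch is therefore a preasymptotic effect, which is how such models are usually said to accommodate density biases.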
... Some researchers have argued that this dissociation between prediction ratings and causal ratings indicates the involvement of a different system or psychological process when participants are making predictions rather than causal judgements (Allan, Siegel, & Tangen, 2005; Waldmann, 2001). However, given that this is a single dissociation where the effect is reliable on one measure and inconsistent on the other, it may simply come down to the sensitivity of each individual measure to subtle biases (De Houwer, Vandorpe, & Beckers, 2007; Vadillo et al., 2005; Vadillo, Musca, Blanco, & Matute, 2011). Ultimately, the extent to which aggregate predictions or causal ratings predict actual classroom behaviour is an empirical question requiring further investigation. ...
Article
Full-text available
Teachers sometimes believe in the efficacy of instructional practices that have little empirical support. These beliefs have proven difficult to efface despite strong challenges to their evidentiary basis. Teachers typically develop causal beliefs about the efficacy of instructional practices by inferring their effect on students' academic performance. Here, we evaluate whether causal inferences about instructional practices are susceptible to an outcome density effect using a contingency learning task. In a series of six experiments, participants were ostensibly presented with students' assessment outcomes, some of whom had supposedly received teaching via a novel technique and some of whom supposedly received ordinary instruction. The distribution of the assessment outcomes was manipulated to either have frequent positive outcomes (high outcome density condition) or infrequent positive outcomes (low outcome density condition). For both continuous and categorical assessment outcomes, participants in the high outcome density condition rated the novel instructional technique as effective, despite the fact that it either had no effect or had a negative effect on outcomes, while the participants in the low outcome density condition did not. These results suggest that when base rates of performance are high, participants may be particularly susceptible to drawing inaccurate inferences about the efficacy of instructional practices.
... Similarly, when there is a high probability of the cause occurring (inflating the frequency of a and b trials relative to c and d), participants typically report greater causal judgments than when the cause rarely occurs (Allan & Jenkins, 1983; Vadillo, Musca, Blanco, & Matute, 2011). This is known as the cue density effect (e.g., Allan & Jenkins, 1983; Matute et al., 2011; Wasserman, Kao, Van Hamme, Katagiri, & Young, 1996). ...
Article
Full-text available
Illusory causation refers to a consistent error in human learning in which the learner develops a false belief that two unrelated events are causally associated. Laboratory studies usually demonstrate illusory causation by presenting two events—a cue (e.g., drug treatment) and a discrete outcome (e.g., patient has recovered from illness)—probabilistically across many trials such that the presence of the cue does not alter the probability of the outcome. Illusory causation in these studies is further augmented when the base rate of the outcome is high, a characteristic known as the outcome density effect. Illusory causation and the outcome density effect provide laboratory models of false beliefs that emerge in everyday life. However, unlike laboratory research, the real-world beliefs to which illusory causation is most applicable (e.g., ineffective health therapies) often involve consequences that are not readily classified in a discrete or binary manner. This study used a causal learning task framed as a medical trial to investigate whether similar outcome density effects emerged when using continuous outcomes. Across two experiments, participants observed outcomes that were either likely to be relatively low (low outcome density) or likely to be relatively high (high outcome density) along a numerical scale from 0 (no health improvement) to 100 (full recovery). In Experiment 1, a bimodal distribution of outcome magnitudes, incorporating variance around a high and low modal value, produced illusory causation and outcome density effects equivalent to a condition with two fixed outcome values. In Experiment 2, the outcome density effect was evident when using unimodal skewed distributions of outcomes that contained more ambiguous values around the midpoint of the scale. Together, these findings provide empirical support for the relevance of the outcome density bias to real-world situations in which outcomes are not binary but occur to differing degrees. This has implications for the way in which we apply our understanding of causal illusions in the laboratory to the development of false beliefs in everyday life.
... This suggests that highly superstitious people behaved differently (i.e., they pressed the button more frequently), but that once this difference in behaviour was accounted for, there were no differences in their interpretation of the actual contingency information -the patterns of causes and effects. This hypothesis is supported by the observation that merely seeing the cause occur more frequently has been shown to be sufficient to increase people's ratings of perceived causation, regardless of their superstitious beliefs (the cue density effect; Vadillo, Musca, Blanco & Matute, 2010). ...
Article
Full-text available
Superstitions are common, yet we have little understanding of the cognitive mechanisms that bring them about. This study used a laboratory‐based analogue for superstitious beliefs that involved people monitoring the relationship between undertaking an action (pressing a button) and an outcome occurring (a light illuminating). The task was arranged such that there was no objective contingency between pressing the button and the light illuminating – the light was just as likely to illuminate whether the button was pressed or not. Nevertheless, most people rated the causal relationship between the button press and the light illuminating to be moderately positive, demonstrating an illusion of causality. This study found that the magnitude of this illusion was predicted by people's level of endorsement of common superstitious beliefs (measured using a novel Superstitious Beliefs Questionnaire), but was not associated with mood variables or their self‐rated locus of control. This observation is consistent with a more general individual difference or bias to overweight conjunctive events over disjunctive events during causal reasoning in those with a propensity for superstitious beliefs.
... The second is the cue-density bias, which is similar to the outcome-density bias: even if the contingency between cue and outcome is held constant, changes in the marginal probability of the cue, P(Cue), affect contingency judgments (Blanco, Matute, & Vadillo, 2009, 2012; Hannah & Beneteau, 2009; Matute, 1996; Matute, Yarritu, & Vadillo, 2011; Perales, Catena, Shanks, & González, 2005; Vadillo, Musca, Blanco, & Matute, 2011; Wasserman et al., 1996). The recent literature indicates that the cue-density bias interacts with the outcome-density bias, such that its effect is only visible when P(Outcome) is high. ...
Thesis
The contingency between two events, cue and outcome, is a fundamental clue for inferring causal relationships in most situations. Experimental evidence indicates that both humans and other animals are able to adapt their behavior to contingency, although some biases have been documented. In particular, the outcome-density bias is the overestimation of contingency when the probability of the outcome occurring is high. In the present work, we used an auto-heteroassociative neural network to simulate a typical human contingency learning task. Our simulations show that the network is able to discriminate satisfactorily between different degrees of contingency, although no evidence of the outcome-density bias is observed.
... A significant number of studies have shown that participants' judgments tend to be higher as the probability of the outcome, p(O), increases (e.g., Alloy and Abramson, 1979; Allan and Jenkins, 1983; Matute, 1995; López et al., 1998; Msetfi et al., 2005; Hannah and Beneteau, 2009; Byrom et al., 2015), even when that probability is the same in the presence and in the absence of the potential cause (i.e., zero contingency; e.g., Alloy and Abramson, 1979; Allan and Jenkins, 1983; Matute, 1995; Blanco et al., 2013). Similarly, it has been observed that as the probability of the cause, p(C), increases, participants' judgments also tend to increase (Allan and Jenkins, 1983; Perales and Shanks, 2007; Hannah and Beneteau, 2009; White, 2009; Musca et al., 2010; Vadillo et al., 2011), even when the potential cause and the outcome are non-contingently related (e.g., Hannah and Beneteau, 2009; Blanco et al., 2013; Yarritu et al., 2015). The combination of these two biases increases the overestimation of the causal relationship when the two probabilities, p(C) and p(O), are high (Blanco et al., 2013). ...
Article
Full-text available
It is generally assumed that the way people assess the relationship between a cause and an outcome is closely related to the actual evidence existing about the co-occurrence of these events. However, people’s estimations are often biased, and this usually translates into illusions of causality. Some have suggested that such illusions could be the result of previous knowledge-based expectations. In the present research we explored the role that previous knowledge has in the development of illusions of causality. We propose that previous knowledge influences the assessment of causality by influencing the decisions about responding or not (i.e., presence or absence of the potential cause), which biases the information people are exposed to, and this in turn produces illusions congruent with such biased information. In a non-contingent situation in which participants decided whether the potential cause was present or absent (Experiment 1), the influence of expectations on participants’ judgments was mediated by the probability of occurrence of the potential cause (determined by participants’ responses). However, in an identical situation, except that the participants were not allowed to decide the occurrence of the potential cause, only the probability of the cause was significant, not the expectations or the interaction. Together, these results support our hypothesis that knowledge-based expectations affect the development of causal illusions by the mediation of behavior, which biases the information received.
... and the reward, leading them to be more likely to "stay." One factor that increases perception of contingency is frequent outcomes (sometimes referred to as the outcome-density effect; Blanco, Matute, & Vadillo, 2013; Vallée-Tourangeau, Murphy, & Baker, 2005); another is frequent causal candidates (the cue-density effect; Blanco et al., 2013; Vadillo et al., 2011). People also overattribute contingency when the causal candidate is the subject's own actions (the illusion of control; Langer, 1975; Thompson, 1999), possibly because self-involvement usually increases the frequency of the causal candidate, causing a cue-density effect (Yarritu, Matute, & Vadillo, 2014). ...
Article
Full-text available
Human decision-makers often exhibit the hot-hand phenomenon, a tendency to perceive positive serial autocorrelations in independent sequential events. The term is named after the observation that basketball fans and players tend to perceive streaks of high accuracy shooting when they are demonstrably absent. That is, both observing fans and participating players tend to hold the belief that a player’s chance of hitting a shot are greater following a hit than following a miss. We hypothesize that this bias reflects a strong and stable tendency among primates (including humans) to perceive positive autocorrelations in temporal sequences, that this bias is an adaptation to clumpy foraging environments, and that it may even be ecologically rational. Several studies support this idea in humans, but a stronger test would be to determine whether nonhuman primates also exhibit a hot-hand bias. Here we report behavior of 3 monkeys performing a novel gambling task in which correlation between sequential gambles (i.e., temporal clumpiness) is systematically manipulated. We find that monkeys have better performance (meaning, more optimal behavior) for clumped (positively correlated) than for dispersed (negatively correlated) distributions. These results identify and quantify a new bias in monkeys’ risky decisions, support accounts that specifically incorporate cognitive biases into risky choice, and support the suggestion that the hot-hand phenomenon is an evolutionary ancient bias.
... Furthermore, these predictions are made during the training phase, including therefore many measures from trials in which participants are still learning what the relationship between the cue and the outcome is. In a related experiment, we recently tried to find out whether the cue-density bias could be measured by using a predictive question (Vadillo et al., 2011). Unlike Allan et al. (2005) and Perales et al. (2005), however, participants had to answer this predictive question at the end of the experiment (i.e., once the training phase was over and they presumably had learned the relationship between cue and outcome) and by means of a 0–100 rating scale (i.e., as opposed to yes/no discrete responses). ...
... One of the variables that has been most clearly established to affect the development of the illusion of causality is the probability of the outcome, for instance, the probability with which spontaneous remissions of pain occur (e.g., Alloy and Abramson, 1979; Allan and Jenkins, 1983; Matute, 1995; Wasserman et al., 1996; Buehner et al., 2003; Allan et al., 2005, 2008; Msetfi et al., 2005, 2007; Musca et al., 2010). Another variable that is known to affect this illusion is the probability of responding (or, more generally, the probability with which the potential cause occurs; e.g., Allan and Jenkins, 1983; Matute, 1996; Wasserman et al., 1996; Perales et al., 2005; Hannah and Beneteau, 2009; Matute et al., 2011; Vadillo et al., 2011). The higher these two probabilities, the higher the probability that coincidences will occur between the potential cause and the outcome, and thus, the higher the probability that an illusion of control will develop (see Blanco et al., 2011, 2013; Hannah and Beneteau, 2009). ...
Article
Full-text available
An illusion of control is said to occur when a person believes that he or she controls an outcome that is uncontrollable. Pathological gambling has often been related to an illusion of control, but the assessment of the illusion has generally used introspective methods in domain-specific (i.e., gambling) situations. The illusion of control of pathological gamblers, however, could be a more general problem, affecting other aspects of their daily life. Thus, we tested them using a standard associative learning task which is known to produce illusions of control in most people under certain conditions. The results showed that the illusion was significantly stronger in pathological gamblers than in a control undiagnosed sample. This suggests (1) that the experimental tasks used in basic associative learning research could be used to detect illusions of control in gamblers in a more indirect way, as compared to introspective and domain-specific questionnaires; and (2), that in addition to gambling-specific problems, pathological gamblers may have a higher-than-normal illusion of control in their daily life.
... This phenomenon is sometimes referred to as the outcome-density effect (Allan & Jenkins, 1983; Allan, Siegel, & Tangen, 2005; Buehner, Cheng, & Clifford, 2003; Musca, Vadillo, Blanco, & Matute, 2010; Wasserman, Kao, Van Hamme, Katagiri, & Young, 1996). In a similar vein, some studies have shown an analogous bias when the probability of the cue, P(C), is manipulated (this is known as the cue-density effect; Allan & Jenkins, 1983; Matute et al., 2011; Perales, Catena, Shanks, & González, 2005; Vadillo, Musca, Blanco, & Matute, 2011; Wasserman et al., 1996). In these cases, the higher the probability of the cue, the higher the subjective judgment of the cue-outcome contingency. ...
Article
Overestimations of null contingencies between a cue, C, and an outcome, O, are widely reported effects that can arise for multiple reasons. For instance, a high probability of the cue, P(C), and a high probability of the outcome, P(O), are conditions that promote such overestimations. In two experiments, participants were asked to judge the contingency between a cue and an outcome. Both P(C) and P(O) were given extreme values (high and low) in a factorial design, while maintaining the contingency between the two events at zero. While we were able to observe main effects of the probability of each event, our experiments showed that the cue- and outcome-density biases interacted such that a high probability of the two stimuli enhanced the overestimation beyond the effects observed when only one of the two events was frequent. This evidence can be used to better understand certain societal issues, such as belief in pseudoscience, that can be the result of overestimations of null contingencies in high-P(C) or high-P(O) situations.
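The factorial manipulation described in this abstract is easy to picture with a short Python sketch. Cause and outcome are sampled independently (null contingency) while their marginal probabilities are crossed at low and high levels; the .2/.8 values and 80-trial streams are placeholder assumptions, not the study's exact design.

```python
import random

def null_contingency_stream(p_cause: float, p_outcome: float, n: int = 80):
    """Independent sampling of cause and outcome => Delta-P = 0 in expectation."""
    return [(random.random() < p_cause, random.random() < p_outcome)
            for _ in range(n)]

random.seed(2)
for p_c in (0.2, 0.8):       # probability of the cause, P(C)
    for p_o in (0.2, 0.8):   # probability of the outcome, P(O)
        stream = null_contingency_stream(p_c, p_o)
        coincidences = sum(c and o for c, o in stream)
        print(f"P(C)={p_c}, P(O)={p_o}: {coincidences} cause-outcome coincidences")
```

Cell-a coincidences are most numerous when both probabilities are high, which is precisely the condition under which the two density biases were found to interact.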
... Those participants who decided to respond in most of the trials also developed the strongest overestimation at the end of the training phase (Figure 1). This result, known as the P(R) effect, is consistent with previous studies (Blanco et al., 2009; Hannah & Beneteau, 2009; Matute, 1996), and it is parallel to the cue density bias reported in observational tasks in which the cue or the potential cause, C, is an external event rather than the participant's response (Matute, Yarritu, & Vadillo, 2010; Perales, Catena, Shanks, & González, 2005; Vadillo, Musca, Blanco, & Matute, 2010; Wasserman, Kao, Van Hamme, Katagiri, & Young, 1996). As reported above, we replicated the finding by Matute (1996) that higher levels of P(R) lead to higher judgements of contingency. ...
Article
Full-text available
It is well known that certain variables can bias judgements about the perceived contingency between an action and an outcome, making them depart from the normative predictions. For instance, previous studies have shown that the activity level or probability of responding, P(R), is a crucial variable that can affect these judgements in objectively noncontingent situations. A possible account for the P(R) effect is based on the differential exposure to actual contingencies during the training phase, which is in turn presumably produced by individual differences in participants' P(R). The current two experiments replicate the P(R) effect in a free-response paradigm, and show that participants' judgements are better predicted by P(R) than by the actual contingency to which they expose themselves. In addition, both experiments converge with previous empirical data, showing a persistent bias that does not vanish as training proceeds. These findings contrast with the preasymptotic and transitory effect predicted by several theoretical models.
Conference Paper
Full-text available
Illusions of causality arise when people observe statistically unrelated events and yet form a belief that the events are causally linked. When participants observe a sequence of discrete binary events (e.g., a patient was either administered a treatment or no treatment, and subsequently recovers or does not recover from their illness), the frequency of the putative cause and outcome occurring inflates the illusion of causality. Recently, similar effects have been observed using outcomes of continuous magnitude. Participants are more likely to endorse the causal status of a (completely ineffective) cue if the target outcome (e.g., high magnitude outcomes) occurs frequently. Here, we extended these findings by investigating how predictions and causal judgments for a cue of continuous magnitude were affected by the distribution of cue values presented. Participants observed cue values (dose of a fictitious medicine) sourced from either a continuous distribution or from two discrete values, which were followed by outcomes that were either continuous (Experiment 1) or binary in nature (Experiment 2). Our results show that participants were more likely to assume a linear relationship between drug dose and magnitude of recovery when cue dosages were predominantly high than when they were predominantly low.
Article
Background: We have previously presented two educational interventions aimed at diminishing causal illusions and promoting critical thinking. In both cases, these interventions reduced causal illusions developed in response to active contingency learning tasks, in which participants were able to decide whether to introduce the potential cause in each of the learning trials. The reduction of causal judgments appeared to be influenced by differences in the frequency with which the participants decided to apply the potential cause, hence indicating that the intervention affected their information sampling strategies. Objective: In the present study, we investigated whether one of these interventions also reduces causal illusions when covariation information is acquired passively. Method: Forty-one psychology undergraduates received our debiasing intervention, while 31 students were assigned to a control condition. All participants completed a passive contingency learning task. Results: We found weaker causal illusions in students who participated in the debiasing intervention, compared to the control group. Conclusion: The intervention affects not only the way the participants look for new evidence, but also the way they interpret given information. Teaching implications: Our data extend previous results regarding evidence-based educational interventions aimed at promoting critical thinking to situations in which we act as mere observers.
Article
In many literatures, scholars study summarized attribute preferences: overall evaluative summaries of an attribute (e.g., a person's liking for the attribute “attractive” in a mate). But we know little about how people form these ideas about their likes and dislikes in the first place, in part because of a dearth of paradigms that enable researchers to experimentally change people's attribute preferences. Drawing on theory and methods in covariation detection and social cognition, we developed a paradigm that examines how people infer summarized preferences for novel attributes from functional attribute preferences: the extent to which the attribute predicts an individual's evaluations across multiple targets (e.g., a person's tendency to positively evaluate mates who are more vs. less attractive). In three studies, participants encountered manipulated information about their own functional preference for a novel attribute in a set of targets. They then inferred a summarized preference for the attribute. Summarized preferences corresponded strongly to the functional preference manipulation when targets varied on only one attribute. But additional complexity (in the form of a second novel attribute) caused summarized and functional preferences to diverge, and biases emerged: Participants reported stronger summarized preferences for the attribute when the population of targets possessed more of the attribute on average (regardless of functional preference strength). We also documented some support for a standard-of-comparison mechanism to explain this inferential bias. These studies elucidate factors that may warp the translation process from people's experienced evaluative responses in the world to their overall, summary judgments about their attribute preferences.
Article
Full-text available
Previous research on causal learning has usually made strong claims about the relative complexity and temporal priority of some processes over others based on evidence about dissociations between several types of judgments. In particular, it has been argued that the dissociation between causal judgments and trial-type frequency information is incompatible with the general cognitive architecture proposed by associative models. In contrast with this view, we conduct an associative analysis of this process showing that this need not be the case. We conclude that any attempt to gain a better insight on the cognitive architecture involved in contingency learning cannot rely solely on data about these dissociations.
Chapter
Full-text available
Causal induction has two components: learning about the structure of causal models and learning about causal strength and other quantitative parameters. This chapter argues for several interconnected theses. First, people represent causal knowledge qualitatively, in terms of causal structure; quantitative knowledge is derivative. Second, people use a variety of cues to infer causal structure aside from statistical data (e.g. temporal order, intervention, coherence with prior knowledge). Third, once a structural model is hypothesized, subsequent statistical data are used to confirm, refute, or elaborate the model. Fourth, people are limited in the number and complexity of causal models that they can hold in mind to test, but they can separately learn and then integrate simple models, and revise models by adding and removing single links. Finally, current computational models of learning need further development before they can be applied to human learning.
Article
Full-text available
This study showed that accuracy of the estimated relationship between a fictitious symptom and a disease depends on the interaction between the frequency of judgment and the last trial type. This effect appeared both in positive and zero contingencies (Experiment 1), and judgments were less accurate as frequency increased (Experiment 2). The effect can be explained neither by interference of previous judgments or memory demands (Experiment 3), nor by the perceptual characteristics of the stimuli (Experiments 4 and 5), and instructions intended to alter processing strategies do not produce any reliable effect. The interaction between frequency and trial type on covariation judgment is not predicted by any model (either statistical or associative) currently used to explain performance in covariation detection. The authors propose a belief-revision model to explain this effect as an important response mode variable on covariation learning.
Chapter
Full-text available
This book outlines the recent revolutionary work in cognitive science formulating a “probabilistic model” theory of learning and development. It provides an accessible and clear introduction to the probabilistic modeling in psychology, including causal model, Bayes net, and Bayesian approaches. It also outlines new cognitive and developmental psychological studies of statistical and causal learning, imitation and theory-formation, new philosophical approaches to causation, and new computational approaches to the representation of intuitive concepts and theories. This book brings together research in all of these areas of cognitive science, with chapters by researchers in all these disciplines. Understanding causal structure is a central task of human cognition. Causal learning underpins the development of our concepts and categories, our intuitive theories, and our capacities for planning, imagination, and inference. This new work uses the framework of probabilistic models and interventionist accounts of causation in philosophy in order to provide a rigorous formal basis for “theory theories” of concepts and cognitive development. Moreover, the causal learning mechanisms this interdisciplinary research program has uncovered go dramatically beyond both the traditional mechanisms of nativist theories such as modularity theories, and empiricist ones such as association or connectionism. The chapters cover three topics: the role of intervention and action in causal understanding, the role of causation in categories and concepts, and the relationship between causal learning and intuitive theory formation. Though coming from different disciplines, the chapters converge on showing how we can use our own actions and the evidence we observe in order to accurately learn about the world.
Article
Full-text available
Recent research has shown superstitious behaviour and illusion of control in human subjects exposed to the negative reinforcement conditions that are traditionally assumed to lead to the opposite outcome (i.e. learned helplessness). The experiments reported in this paper test the generality of these effects in two different tasks and under different conditions of percentage (75% vs. 25%) and distribution (random vs. last-trials) of negative reinforcement (escape from uncontrollable noise). All three experiments obtained superstitious behaviour and illusion of control and question the generality of learned helplessness as a consequence of exposing humans to uncontrollable outcomes.
Article
Full-text available
Because causal relations are neither observable nor deducible, they must be induced from observable events. The 2 dominant approaches to the psychology of causal induction, the covariation approach and the causal power approach, are each crippled by fundamental problems. This article proposes an integration of these approaches that overcomes these problems. The proposal is that reasoners innately treat the relation between covariation (a function defined in terms of observable events) and causal power (an unobservable entity) as that between scientists' law or model and their theory explaining the model. This solution is formalized in the power PC theory, a causal power theory of the probabilistic contrast model (P. W. Cheng & L. R. Novick, 1990). The article reviews diverse old and new empirical tests discriminating this theory from previous models, none of which is justified by a theory. The results uniquely support the power PC theory.
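For reference, the central equation of the power PC theory can be stated compactly: for a candidate generative cause c of effect e, the probabilistic contrast is scaled by how much room the base rate leaves for c to act (the preventive case is analogous).

```latex
\Delta P = P(e \mid c) - P(e \mid \neg c), \qquad
p_c = \frac{\Delta P}{1 - P(e \mid \neg c)}
\quad \text{(generative causal power)}
```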
Article
Full-text available
Although normatively irrelevant to the relationship between a cue and an outcome, outcome density (i.e. its base-rate probability) affects people's estimation of causality. By what process causality is incorrectly estimated is of importance to an integrative theory of causal learning. A potential explanation may be that this happens because outcome density induces a judgement bias. An alternative explanation is explored here, according to which the incorrect estimation of causality is grounded in the processing of cue–outcome information during learning. A first neural network simulation shows that, in the absence of a deep processing of cue information, cue–outcome relationships are acquired but causality is correctly estimated. The second simulation shows how an incorrect estimation of causality may emerge from the active processing of both cue and outcome information. In an experiment inspired by the simulations, the role of a deep processing of cue information was put to the test. In addition to an outcome density manipulation, a shallow cue manipulation was introduced: cue information was either still displayed (concurrent) or no longer displayed (delayed) when outcome information was given. Behavioural and simulation results agree: the outcome-density effect was maximal in the concurrent condition. The results are discussed with respect to the extant explanations of the outcome-density effect within the causal learning framework.
Article
Full-text available
It is generally assumed that the function of contingency learning is to predict the occurrence of important events in order to prepare for them. This assumption, however, has scarcely been tested. Moreover, the little evidence that is available suggests just the opposite result. People do not use contingency to prepare for outcomes, nor to predict their occurrence, although they do use it to infer the causal and predictive value of cues. By using both judgmental and behavioral data, we designed the present experiments as a further test for this assumption. The results show that, at least under certain conditions, people do use contingency to prepare for outcomes, even though they would still not use it to predict their occurrence. The functional and adaptive aspects of these results are discussed in the present article.
Article
Full-text available
Active contingency tasks, such as those used to explore judgments of control, suffer from variability in the actual values of critical variables. The authors debut a new, easily implemented procedure that restores control over these variables to the experimenter simply by telling participants when to respond and when to withhold responding. This command-performance procedure not only restores control over critical variables such as actual contingency, it also allows response frequency to be manipulated independently of contingency or outcome frequency. This yields the first demonstration, to our knowledge, of the equivalent of a cue density effect in an active contingency task. Judgments of control are biased by response frequency, just as they are biased by outcome frequency.
Article
Full-text available
In 4 experiments, 144 depressed and 144 nondepressed undergraduates (Beck Depression Inventory) were presented with one of a series of problems varying in the degree of contingency. In each problem, Ss estimated the degree of contingency between their responses (pressing or not pressing a button) and an environmental outcome (onset of a green light). Depressed Ss' judgments of contingency were surprisingly accurate in all 4 experiments. Nondepressed Ss overestimated the degree of contingency between their responses and outcomes when noncontingent outcomes were frequent and/or desired and underestimated the degree of contingency when contingent outcomes were undesired. Thus, predictions derived from social psychology concerning the linkage between subjective and objective contingencies were confirmed for nondepressed but not for depressed Ss. The learned helplessness and self-serving motivational bias hypotheses are evaluated as explanations of the results.
Article
Full-text available
The covariation component of everyday causal inference has been depicted, in both cognitive and social psychology as well as in philosophy, as heterogeneous and prone to biases. The models and biases discussed in these domains are analyzed with respect to focal sets: contextually determined sets of events over which covariation is computed. Moreover, these models are compared to our probabilistic contrast model, which specifies causes as first and higher order contrasts computed over events in a focal set. Contrary to the previous depiction of covariation computation, the present assessment indicates that a single normative mechanism, the computation of probabilistic contrasts, underlies this essential component of natural causal induction both in everyday and in scientific situations.
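As a sketch of the contrasts referred to here (one common way of writing them, not a quotation from the paper): a first-order contrast is computed for a factor i over the events in the focal set, and a second-order (interaction) contrast for two factors i and j takes a difference of differences.

```latex
\Delta P_i = P(e \mid i) - P(e \mid \bar{\imath}), \qquad
\Delta P_{ij} = \bigl[P(e \mid ij) - P(e \mid \bar{\imath}j)\bigr]
              - \bigl[P(e \mid i\bar{\jmath}) - P(e \mid \bar{\imath}\bar{\jmath})\bigr]
```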
Article
Full-text available
Associative and statistical theories of causal and predictive learning make opposite predictions for situations in which the most recent information contradicts the information provided by older trials (e.g., acquisition followed by extinction). Associative theories predict that people will rely on the most recent information to best adapt their behavior to the changing environment. Statistical theories predict that people will integrate what they have learned in the two phases. The results of this study showed one or the other effect as a function of response mode (trial by trial vs. global), type of question (contiguity, causality, or predictiveness), and postacquisition instructions. That is, participants are able to give either an integrative judgment, or a judgment that relies on recent information as a function of test demands. The authors concluded that any model must allow for flexible use of information once it has been acquired.
Article
Full-text available
How humans infer causation from covariation has been the subject of a vigorous debate, most recently between the computational causal power account (P. W. Cheng, 1997) and associative learning theorists (e.g., K. Lober & D. R. Shanks, 2000). Whereas most researchers in the subject area agree that causal power as computed by the power PC theory offers a normative account of the inductive process, Lober and Shanks, among others, have questioned the empirical validity of the theory. This article offers a full report and additional analyses of the original study featured in Lober and Shanks's critique (M. J. Buehner & P. W. Cheng, 1997) and reports tests of Lober and Shanks's and other explanations of the pattern of causal judgments. Deviations from normativity, including the outcome-density bias, were found to be misperceptions of the input or other artifacts of the experimental procedures rather than inherent to the process of causal induction.
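For reference, generative causal power in the power PC theory is standardly written as shown below; this is the textbook formula, not anything specific to the reanalysis reported in the article:

\[ p_c = \frac{\Delta P}{1 - P(e \mid \bar{c})} \]

that is, the observed contrast rescaled by the room the effect has to occur when the candidate cause is absent.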
Article
Full-text available
There are many psychological tasks that involve the pairing of binary variables. The various tasks used often address different questions and are motivated by different theoretical issues and traditions. Upon closer examination, however, the tasks are remarkably similar in structure. In the present paper, we examine two such tasks, the contingency judgment task and the signal detection task, and we apply a signal detection analysis to contingency judgment data. We suggest that the signal detection analysis provides a novel interpretation of a well-established but poorly understood phenomenon of contingency judgments--the outcome-density effect.
Article
Full-text available
In three experiments, we show that people respond differently when they make predictions as opposed to when they are asked to estimate the causal or the predictive value of cues: Their response to each of those three questions is based on different sets of information. More specifically, we show that prediction judgments depend on the probability of the outcome given the cue, whereas causal and predictive-value judgments depend on the cue-outcome contingency. Although these results might seem problematic for most associative models in their present form, they can be explained by explicitly assuming the existence of postacquisition processes that modulate participants' responses in a flexible way.
Article
Full-text available
Studies performed by different researchers have shown that judgements about cue-outcome relationships are systematically influenced by the type of question used to request those judgements. It is now recognized that judgements about the strength of the causal link between a cue and an outcome are mostly determined by the cue-outcome contingency, whereas predictions of the outcome are more influenced by the probability of the outcome given the cue. Although these results make clear that those different types of judgement are mediated by some knowledge of the normative differences between causal estimations and outcome predictions, they do not speak to the underlying processes of these effects. The experiment presented here reveals an interaction between the type of question and the order of trials that challenges standard models of causal and predictive learning that are framed exclusively in associative terms or exclusively in higher order reasoning terms. However, this evidence could be easily explained by assuming the combined intervention of both types of process.
Article
Full-text available
Causal judgment is assumed to play a central role in prediction, control, and explanation. Here, we consider the function or functions that map contingency information concerning the relationship between a single cue and a single outcome onto causal judgments. We evaluate normative accounts of causal induction and report the findings of an extensive meta-analysis in which we used a cross-validation model-fitting method and carried out a qualitative analysis of experimental trends in order to compare a number of alternative models. The best model to emerge from this competition is one in which judgments are based on the difference between the amount of confirming and disconfirming evidence. A rational justification for the use of this model is proposed.
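The winning rule can be illustrated in its simplest, unweighted form; the meta-analysis fits a weighted version, but the bare difference conveys the idea. Using the standard 2x2 cell labels, where a (cue and outcome both present) and d (both absent) count as confirming evidence and b (cue without outcome) and c (outcome without cue) as disconfirming:

\[ \Delta D = (a + d) - (b + c) \]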
Article
Given the task of diagnosing the source of a patient's allergic reaction, college students judged the causal efficacy of common (X) and distinctive (A and B) elements of compound stimuli: AX and BX. As the differential correlation of AX and BX with the occurrence and nonoccurrence of the allergic reaction rose from .00 to 1.00, ratings of the distinctive A and B elements diverged; most importantly, ratings of the common X element fell. These causal judgments of humans closely parallel the conditioned responses of animals in associative learning studies, and clearly disclose that stimuli compete with one another for control over behavior.
Article
Examines the varied measures of contingency that have appeared in the psychological judgment literature concerning binary variables. It is argued that accurate judgments about related variables should not be used to infer that the judgments are based on appropriate information.
Article
Researchers confronting their own data often find those data to be more unruly, ill-mannered, and irascible than the well-behaved, cooperative data found in textbook examples. Irascible data that slap us in the face at least get our attention. More dangerous are those stealthy, sinister observations that can go undetected and yet have a disproportionate and untoward effect on our analyses. This chapter describes techniques for detecting and remedying those nasty data that otherwise could ruin analyses.
Article
Studies concerned with judgments of contingency between binary variables have often ignored what the variables stand for. The two values of a binary variable can be represented as a prevailing state (nonevent) or as an active state (event). Judgments were obtained under the four conditions resulting from combining a binary input variable that can be represented as event-nonevent or event-event with an outcome variable that can be represented in the same way. Experiment 1 shows that judgments of data sets which exhibit the same degree of covariation depend upon how the input and output variables are represented. Experiment 2 examines the case where both the input and output variables are represented as event-nonevent. Judgments were higher when the input event was paired with the output event and the input nonevent with the output nonevent than when event was paired with nonevent, suggesting a causal compatibility of event-event pairings and a causal incompatibility of event-nonevent pairings. Experiment 3 demonstrates that judgments of the strength of the relation between binary input and output variables are not based on the appropriate statistical measure, the difference between two conditional probabilities. The overall pattern of judgments in the three experiments is mainly explicable on the basis of two principles: (1) judgments tend to be based on the difference between confirming and disconfirming cases, and (2) causal compatibility in the representation of the input and output variables plays a critical role.
Article
[I]t may be that … reason, self-consciousness and self-control which seem to sever human intellect so sharply from that of all other animals are really but secondary results of the tremendous increase in the number, delicacy and complexity of associations which the human animal can form. It may be that the evolution of intellect has no breaks, that its progress is continuous from its first appearance to its present condition in adult … human beings. If we could prove that what we call ideational life and reasoning were not new and unexplainable species of intellectual life but only the natural consequences of an increase in the number, delicacy, and complexity of associations of the general animal sort, we should have made out an evolution of mind comparable to the evolution of living forms. (p. 286)
Article
Pseudoscience, superstitions, and quackery are serious problems that threaten public health and in which many variables are involved. Psychology, however, has much to say about them, as it is the illusory perceptions of causality of so many people that need to be understood. The proposal we put forward is that these illusions arise from the normal functioning of the cognitive system when trying to associate causes and effects. Thus, we propose to apply basic research and theories on causal learning to reduce the impact of pseudoscience. We review the literature on the illusion of control and the causal learning traditions, and then present an experiment as an illustration of how this approach can provide fruitful ideas to reduce pseudoscientific thinking. The experiment first illustrates the development of a quackery illusion through the testimony of fictitious patients who report feeling better. Two different predictions arising from the integration of the causal learning and illusion of control domains are then tested and shown to be effective in reducing this illusion. One is showing the testimony of people who feel better without having followed the treatment. The other is asking participants to think in causal terms rather than in terms of effectiveness.
Article
Associative models of causal learning predict recency effects. Judgments at the end of a trial series should be strongly biased by recently presented information. Prior research, however, presents a contrasting picture of human performance. López, Shanks, Almaraz, and Fernández (1998) observed recency, whereas Dennis and Ahn (2001) found the opposite, primacy. Here we replicate both of these effects and provide an explanation for this paradox. Four experiments show that the effect of trial order on judgments is a function of judgment frequency, where incremental judgments lead to recency while single final judgments abolish recency and lead instead to integration of information across trials (i.e., primacy). These results challenge almost all existing accounts of causal judgment. We propose a modified associative account in which participants can base their causal judgments either on current associative strength (momentary strategy) or on the cumulative change in associative strength since the previous judgment (integrative strategy).
Article
3 experiments are reported in which Ss were asked to judge the degree of contingency between responses and outcomes. They were exposed to 60 trials on which a choice between 2 responses was followed by 1 of 2 possible outcomes. Each S judged both contingent and noncontingent problems. Some Ss actually made response choices while others simply viewed the events. Judgments were made by Ss who attempted to produce a single favorable outcome or, on the other hand, to control the occurrence of two neutral outcomes. In all conditions the amount of contingency judged was correlated with the number of successful trials, but was entirely unrelated to the actual degree of contingency. Accuracy of judgment was not improved by pretraining Ss on selected examples, even though it was possible to remove the correlation between judgment and successes by means of an appropriate selection of pretraining problems. The relation between everyday judgments of causal relations and the present experiment is considered.
Article
An eyetracking version of the classic Shepard, Hovland, and Jenkins (1961) experiment was conducted. Forty years of research has assumed that category learning often involves learning to selectively attend to only those stimulus dimensions useful for classification. We confirmed that participants learned to allocate their attention optimally. We also found that learners tend to fixate all stimulus dimensions early in learning. This result obtained despite evidence that participants were also testing one-dimensional rules during this period. Finally, the restriction of eye movements to only relevant dimensions tended to occur only after errors were largely (or completely) eliminated. We interpret these findings as consistent with multiple-systems theories of learning which maximize information input in order to maximize the number of learning modules involved, and which focus solely on relevant information only after one module has solved the learning problem.
Article
A number of studies using trial-by-trial learning tasks have shown that judgments of covariation between a cue c and an outcome o deviate from normative metrics. Parameters based on trial-by-trial predictions were estimated using signal detection theory (SDT) in a standard causal learning task. Results showed that manipulations of P(c) when contingency (ΔP) was held constant did not affect participants' ability to predict the appearance of the outcome (d′) but had a significant effect on the response criterion (c) and on numerical causal judgments. The association between criterion c and judgment was further demonstrated in 2 experiments in which the criterion was directly manipulated by linking payoffs to the predictive responses made by learners. In all cases, the more liberal the criterion c was, the higher judgments were. The results imply that the mechanisms underlying the elaboration of judgments and those involved in the elaboration of predictive responses are partially dissociable.
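A minimal Python sketch of the two SDT indices involved, computed from hit and false-alarm rates; the rates and the two-observer scenario below are invented for illustration:

# A minimal sketch of d' and criterion c as used in SDT analyses of
# trial-by-trial predictions; the rates below are invented for illustration.
from scipy.stats import norm

def sdt_indices(hit_rate: float, fa_rate: float) -> tuple[float, float]:
    """Return (d-prime, criterion c) from hit and false-alarm rates."""
    z_h, z_f = norm.ppf(hit_rate), norm.ppf(fa_rate)
    d_prime = z_h - z_f            # sensitivity: ability to predict the outcome
    criterion = -(z_h + z_f) / 2   # response bias: lower values = more liberal
    return d_prime, criterion

# Two observers with similar sensitivity but different bias: the second
# says "outcome will occur" more often, so its criterion is lower (more
# liberal), the pattern associated with higher causal judgments.
print(sdt_indices(0.80, 0.40))  # d' ~ 1.09, c ~ -0.29
print(sdt_indices(0.95, 0.70))  # d' ~ 1.12, c ~ -1.08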
Article
Previous studies on causal learning showed that judgements about the causal effect of a cue on an outcome depend on the statistical contingency between the presence of the cue and the outcome. We demonstrate that statistical contingency has a different impact on preparation judgements (i.e., judgements about the usefulness of responses that allow one to prepare for the outcome). Our results suggest that preparation judgements primarily reflect information about the outcome in prior situations that are identical to the test situation. These findings also add to previous evidence showing that people can use contingency information in a flexible manner depending on the type of test question.
Wasserman, E. A., Kao, S. F., Van Hamme, L. J., Katagari, M., & Young, M. E. (1996). Causation and association. In D. R. Shanks, K. J. Holyoak, & D. L. Medin (Eds.), The psychology of learning and motivation, Vol. 34: Causal learning (pp. 207–264). San Diego, CA: Academic Press. doi:10.1016/S0079-7421(08)60562-9