Article

Learning mechanisms underlying accurate and biased contingency judgments


Abstract

Many experiments have shown that humans and other animals can detect contingency between events accurately. This learning is used to make predictions and to infer causal relationships, both of which are critical for survival. Under certain conditions, however, people tend to overestimate a null contingency. We argue that a successful theory of contingency learning should explain both results. The main purpose of the present review is to assess whether cue-outcome associations might provide the common underlying mechanism that would allow us to explain both accurate and biased contingency learning. In addition, we discuss whether associations can also account for causal learning. After providing a brief description of both accurate and biased contingency judgments, we elaborate on the main predictions of associative models and describe some supporting evidence. Then, we discuss a number of findings in the literature that, although obtained with a different purpose and in different areas of research, can also be regarded as supportive of the associative framework. Finally, we discuss some problems with the associative view, some alternative proposals, and some of the areas of current debate. (PsycINFO Database Record (c) 2019 APA, all rights reserved).


... For example, a prominent feature of delusional thinking is the abnormal perception of the relationships between events [37]. Patients with schizophrenia often maintain deviant views on cause-effect relationships [38], but these deviations can also be detected in the general population, for example, in the form of causal illusions [27,39]. ...
... Previous research has shown that, although people can use the contingency between cause and effect to infer causality [42-47], under some circumstances they can easily develop a causal illusion, that is, the belief that there is a causal connection between two events that are actually unrelated (i.e., non-contingent on each other). Causal illusions have been described as cognitive biases that appear in the general population and may underlie many relevant societal problems, such as prejudice and pseudoscience [27,39,48-51]. ...
... A prominent theory to explain causal illusions has been developed from associative learning theories that aim to model human and animal learning. From this perspective, causal beliefs emerge because people learn the associations in their environment, and causal illusions are the result of an incomplete or pre-asymptotic learning experience [39]. According to this view, the formation and strengthening of the associations depend on the general mechanisms of Pavlovian and instrumental learning [62,63]. ...
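The pre-asymptotic associative account described in the excerpt above can be sketched in a few lines of code. The following is a minimal illustration under assumed, arbitrary parameter values (not the simulator or parameters used in any of the cited studies): an expected-value Rescorla-Wagner update for a target cue and an always-present context, with a null cue-outcome contingency.

```python
def rw_cue_strength(p_outcome, n_trials, lr=0.2):
    """Expected-value Rescorla-Wagner updates for a target cue plus an
    always-present context. The cue occurs on half of the trials, and the
    outcome occurs with probability p_outcome regardless of the cue, so
    the true cue-outcome contingency is zero. Returns the cue's
    associative strength after n_trials."""
    v_cue, v_ctx = 0.0, 0.0
    for _ in range(n_trials):
        # Expected prediction error on cue+context trials (half the trials)
        err_cue = p_outcome - (v_cue + v_ctx)
        # Expected prediction error on context-alone trials (the other half)
        err_ctx = p_outcome - v_ctx
        v_cue += 0.5 * lr * err_cue
        v_ctx += 0.5 * lr * err_cue + 0.5 * lr * err_ctx
    return v_cue

# Early in training, the ineffective cue carries positive strength,
# and more so when the outcome is frequent (an outcome-density pattern):
high = rw_cue_strength(p_outcome=0.8, n_trials=40)
low = rw_cue_strength(p_outcome=0.2, n_trials=40)

# With extended training, the context absorbs the outcome base rate and
# the cue's strength returns toward zero (the asymptote):
asymptote = rw_cue_strength(p_outcome=0.8, n_trials=5000)
```

The learning rate and trial distribution here are illustrative choices; what matters is the qualitative pattern the pre-asymptotic account predicts: positive cue strength early on (larger with a denser outcome), fading toward zero as learning approaches asymptote.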
Article
Full-text available
Previous research has proposed that cognitive biases contribute to producing and maintaining the symptoms exhibited by deluded patients. Specifically, the tendency to jump to conclusions (i.e., to stop collecting evidence early, before making a decision) has been claimed to contribute to delusion formation. Additionally, deluded patients show an abnormal understanding of cause-effect relationships, often leading to causal illusions (i.e., the belief that two events are causally connected when they are not). Both types of bias appear in psychotic disorders, but also in healthy individuals. In two studies, we test the hypothesis that the two biases (jumping to conclusions and causal illusions) appear in the general population and correlate with each other. The rationale is based on current theories of associative learning that explain causal illusions as the result of a learning bias that tends to wear off as additional information is incorporated. We propose that participants with a higher tendency to jump to conclusions will stop collecting information sooner in a causal learning study than participants with a lower tendency, which means that the former will not reach the learning asymptote, leading to biased judgments. The studies provide evidence that the two biases are correlated, but suggest that the proposed mechanism is not responsible for this association.
... That is, the users' beliefs about the effectiveness of the treatment are causal in nature, i.e., "the treatment causes the symptom remission," or "the treatment prevents me from falling ill." Thus, it is possible to study patients' beliefs of treatment effectiveness through causal learning experiments (see reviews in Matute et al., 2015; Matute et al., 2019). This possibility offers a number of advantages. ...
... Rather, they must form their beliefs of effectiveness on the basis of a more limited comparison: how often symptoms were observed before the treatment started vs. how often they occurred during the treatment, in the same patient (usually, themselves). Most causal learning experiments do not take this limitation into account, and instead provide participants with information about a series of different patients (Blanco et al., 2014; Matute et al., 2019). This is useful to investigate the formation of causal knowledge in general, but it is not realistic when applied to the case of patients' beliefs of effectiveness, as the procedure clearly departs from the actual experience of patients with their own treatments. ...
... In particular, when the probability of the desired outcome is high, judgments tend to be higher even in null contingency conditions (Alloy and Abramson, 1979; Buehner et al., 2003; Blanco et al., 2014; Chow et al., 2019), contributing to what has been called a "causal illusion." This is a bias consisting of the belief in a causal link that is actually nonexistent (Matute et al., 2015; Matute et al., 2019). The causal illusion bias shares some features with other phenomena, like the classical illusory correlation effect (Chapman and Chapman, 1967, 1969) and pseudocontingencies (Fiedler et al., 2009; Kutzner et al., 2011). Despite their different explanations and assumptions, all these phenomena coincide in the importance of event probabilities, such as the probability of the cause and the probability of the outcome, when judging causal relationships. ...
Article
Causal illusions have been postulated as cognitive mediators of pseudoscientific beliefs, which, in turn, might lead to the use of pseudomedicines. However, while the laboratory tasks aimed at exploring causal illusions typically present participants with information regarding the consequences of administering a fictitious treatment versus not administering any treatment, real-life decisions frequently involve choosing between several alternative treatments. In order to mimic these realistic conditions, participants in two experiments received information regarding the rate of recovery when each of two different fictitious remedies was administered. The fictitious remedy that was more frequently administered was given higher effectiveness ratings than the low-frequency one, regardless of the absence or presence of information about the spontaneous recovery rate. Crucially, we also introduced a novel dependent variable that involved imagining new occasions in which the ailment was present and asking participants to decide which treatment they would opt for. The inclusion of information about the base rate of recovery significantly influenced participants' choices. These results imply that the mere prevalence of popular treatments might make them seem particularly effective. They also suggest that effectiveness ratings should be interpreted with caution, as they might not accurately reflect real treatment choices. Materials and datasets are available at the Open Science Framework [https://osf.io/fctjs/].
Article
Full-text available
Patients' beliefs about the effectiveness of their treatments are key to the success of any intervention. However, since these beliefs are usually formed by sequentially accumulating evidence in the form of the covariation between the treatment use and the symptoms, it is not always easy to detect when a treatment is actually working. In Experiments 1 and 2, we presented participants with a contingency learning task in which a fictitious treatment was actually effective to reduce the symptoms of fictitious patients. However, the base-rate of the symptoms was manipulated so that, for half of participants, the symptoms were very frequent before the treatment, whereas for the rest of participants, the symptoms were less frequently observed. Although the treatment was equally effective in all cases according to the objective contingency between the treatment and healings, the participants' beliefs on the effectiveness of the treatment were influenced by the base-rate of the symptoms, so that those who observed frequent symptoms before the treatment tended to produce lower judgments of effectiveness. Experiment 3 showed that participants were probably basing their judgments on an estimate of effectiveness relative to the symptom base-rate, rather than on contingency in absolute terms. Data and materials are publicly available at the Open Science Framework: https://osf.io/emzbj/
... Next, to derive predictions from the Rescorla-Wagner model, we ran a series of simulations using the Rescorla & Wagner Model Simulator Version 5 software (Chung et al., 2018). For the associability parameters, α and β, we used the same values as Matute et al. (2019), who had demonstrated a cue-density effect, an outcome-density effect, and an interaction resulting from a strong incremental effect when cue and outcome density are both high. ...
... And does it depend on Vaccine X among people suffering from a certain virus variant? To address these questions, decision makers do not rely on data alone, but try to integrate them into their prior expectations about causal relations (Matute et al., 2019; Waldmann, 1996). Specifically, tri-variate relations can imply different causal structures: suppression, mediation, confounding, or moderation. ...
Article
Full-text available
Humans are evidently able to learn contingencies from the co-occurrence of cues and outcomes. But how do humans judge contingencies when observations of cue and outcome are learned on different occasions? The pseudocontingency framework proposes that humans rely on base-rate correlations across contexts, that is, whether outcome base rates increase or decrease with cue base rates. Here, we elaborate on an alternative mechanism for pseudocontingencies that exploits base rate information within contexts. In two experiments, cue and outcome base rates varied across four contexts, but the correlation by base rates was kept constant at zero. In some contexts, cue and outcome base rates were aligned (e.g., cue and outcome base rates were both high). In other contexts, cue and outcome base rates were misaligned (e.g., cue base rate was high, but outcome base rate was low). Judged contingencies were more positive for contexts in which cue and outcome base rates were aligned than in contexts in which cue and outcome base rates were misaligned. Our findings indicate that people use the alignment of base rates to infer contingencies conditional on the context. As such, they lend support to the pseudocontingency framework, which predicts that decision makers rely on base rates to approximate contingencies. However, they challenge previous conceptions of pseudocontingencies as a uniform inference from correlated base rates. Instead, they suggest that people possess a repertoire of multiple contingency inferences that differ with regard to informational requirements and areas of applicability.
... Researchers have described two systematic deviations: the influence of the probability of effect occurrence [29-34] (when the effect occurs frequently, the causal relationship tends to be overestimated; Figure 1, panel B) and the influence of the probability of occurrence of the cause [24,35,36] (when the probability of the cause is high, the contingency perceived between cause and effect is also high; Figure 1, panel C). These biases can be detected even if the contingency between the cause and the effect is null, leading to causal illusions [37]. Figure 1 (A) shows the four information types as a function of whether the cause and the effect are present, (B) shows an example with a high probability of the effect with null contingency, and (C) shows an example with a high probability of the cause with null contingency. ...
... This will eventually bias their judgements. Indeed, even when sampling is not completely biased toward the cause and some type c or type d instances are collected, it has been repeatedly shown that the higher the tendency to sample information about the cause, the higher the probability of overestimating the cause-effect relationship [37-39]. ...
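The four information types and the contingency index computed from them can be sketched as follows; this is a minimal illustration, and the cell frequencies are made up for the example:

```python
def delta_p(a, b, c, d):
    """deltaP contingency index from the four information types:
    a: cause present, effect present    b: cause present, effect absent
    c: cause absent, effect present     d: cause absent, effect absent
    deltaP = P(effect | cause) - P(effect | no cause)."""
    return a / (a + b) - c / (c + d)

# Null contingency with a high probability of the effect (cf. panel B):
# the effect occurs 80% of the time with and without the cause, so
# deltaP is zero even though confirmatory a-type cases are frequent.
null_but_dense = delta_p(a=16, b=4, c=16, d=4)

# A genuinely positive contingency, for comparison:
positive = delta_p(a=16, b=4, c=4, d=16)
```

A null deltaP with many a-type cases is exactly the situation in which the outcome-density and cue-density biases discussed above tend to inflate causal judgments.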
Article
Background: The internet is a relevant source of health-related information. The huge amount of information available on the internet forces users to engage in an active process of information selection. Previous research conducted in the field of experimental psychology showed that information selection itself may promote the development of erroneous beliefs, even if the information collected does not.

Objective: The aim of this study was to assess the relationship between information searching strategy (ie, which cues are used to guide information retrieval) and causal inferences about health, while controlling for the effect of additional information features.

Methods: We adapted a standard laboratory task that has previously been used in research on contingency learning to mimic an information searching situation. Participants (N=193) were asked to gather information to determine whether a fictitious drug caused an allergic reaction. They collected individual pieces of evidence in order to support or reject the causal relationship between the two events by inspecting individual cases in which the drug was or was not used or in which the allergic reaction appeared or not. Thus, one group (cause group, n=105) was allowed to sample information based on the potential cause, whereas a second group (effect group, n=88) was allowed to sample information based on the effect. Although participants could select which medical records they wanted to check—cases in which the medicine was used or not (in the cause group) or cases in which the effect appeared or not (in the effect group)—they all received similar evidence that indicated the absence of a causal link between the drug and the reaction. After observing 40 cases, they estimated the drug–allergic reaction causal relationship.

Results: Participants used different strategies for collecting information. In some cases, participants displayed a biased sampling strategy compatible with positive testing; that is, they required a high proportion of evidence in which the drug was administered (in the cause group) or in which the allergic reaction appeared (in the effect group). Biased strategies produced an overrepresentation of certain pieces of evidence to the detriment of others, which was associated with the accuracy of causal inferences. Thus, how the information was collected (sampling strategy) had a significant effect on causal inferences (F(1,185)=32.53, P<.001, η²p=0.15), suggesting that inferences about the causal relationship between events are related to how the information is gathered.

Conclusions: Mistaken beliefs about health may arise from accurate pieces of information, partially because of the way in which information is collected. Patient or person autonomy in gathering health information through the internet, for instance, may contribute to the development of false beliefs from accurate pieces of information, because search strategies can be biased.
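As a rough sketch of why a positive-testing strategy can matter, consider a judge who weights only the evidence cells actually sampled. The cell counts and the a-cell heuristic below are hypothetical illustrations, not the task or model used in the study:

```python
def a_cell_judgment(a, b, c, d):
    """A simple cell-weighting heuristic: the proportion of sampled cases
    that are confirmatory (cause present, effect present). Illustrative
    only; real participants' judgments are more nuanced."""
    return a / (a + b + c + d)

# The drug-reaction relation is null: the reaction occurs 75% of the
# time whether or not the drug was taken.
balanced = a_cell_judgment(a=15, b=5, c=15, d=5)          # even sampling
positive_testing = a_cell_judgment(a=30, b=10, c=0, d=0)  # drug cases only
```

Under this toy heuristic, the very same null relation looks stronger when sampling concentrates on cause-present cases, because confirmatory a-type evidence dominates what the judge actually sees; with no c- or d-type cases at all, the normative contingency is not even computable from the sample.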
... However, systematic departures from normative contingency have also been reported in the literature (Blanco et al., 2011; Matute et al., 2015; Shanks, 1995; Kao & Wasserman, 1993; Ward & Jenkins, 1965). Different factors, such as the relative frequency of a potential cause or the outcome, have been shown to bias participants' judgments (Musca et al., 2010; Blanco et al., 2013; see Matute et al., 2019, for a recent overview of biasing factors in contingency learning tasks). ...
Article
Prior knowledge has been shown to be an important factor in causal judgments. However, inconsistent patterns have been reported regarding the interaction between prior knowledge and the processing of contingency information. In three studies, we examined the effect of the plausibility of the putative cause on causal judgments when prior expectations about the rate at which the cause is accompanied by the effect in question are explicitly controlled for. Results show that plausibility has a clear effect that is independent of contingency information and type of task (passive or active). We also examined the role of strategy use as an individual difference in causal judgments. Specifically, the dual-strategy model suggests that people can use either a statistical or a counterexample strategy to process information. Across all three studies, results showed that strategy use has a clear effect on causal judgments that is independent of both plausibility and contingency.
... For example, we tend not to realise when we push 'placebo buttons' attached to pre-programmed traffic crossings, elevators and office thermostats (Luo, 2004), which may reflect exaggerated expectations about control (Moore, 2016). Moreover, classic studies have demonstrated that participants reliably over-report being in control of a flashing lightbulb, even when the flashes are programmed to occur randomly (Alloy & Abramson, 1979; see also Matute et al., 2019;Vázquez, 1987;Yarritu et al., 2014), and recent work suggests agents often believe their actions can stabilise objectively volatile environments they are interacting with (Weiss et al., 2019). ...
Article
Full-text available
We frequently experience feelings of agency over events we do not objectively influence - so-called 'illusions of control'. These illusions have prompted widespread claims that we can be insensitive to objective relationships between actions and outcomes, and instead rely on grandiose beliefs about our abilities. However, these illusory biases could instead arise if we are highly sensitive to action-outcome correlations, but attribute agency when such correlations emerge simply by chance. We motion-tracked participants while they made agency judgements about a cursor that could be yoked to their actions or follow an independent trajectory. A combination of signal detection analysis, reverse correlation methods and computational modelling indeed demonstrated that 'illusions' of control could emerge solely from sensitivity to spurious action-outcome correlations. Counterintuitively, this suggests that illusions of control could arise because agents have excellent insight into the relationships between actions and outcomes in a world where causal relationships are not perfectly deterministic.
... While humans are generally good at tracking contingencies between events and outcomes (Shanks & Dickinson, 1988;Wasserman, 1990), there is clear evidence that there are robust biases that undermine these inferences (e.g. Don & Livesey, 2017;Hannah & Beneteau, 2009;Matute, Blanco, & Díaz-Lago, 2019). The current study replicates the outcome density effect in a new domain and suggests that outcome frequency may be an important factor in determining teachers' instructional beliefs. ...
Article
Full-text available
Teachers sometimes believe in the efficacy of instructional practices that have little empirical support. These beliefs have proven difficult to efface despite strong challenges to their evidentiary basis. Teachers typically develop causal beliefs about the efficacy of instructional practices by inferring their effect on students' academic performance. Here, we evaluate whether causal inferences about instructional practices are susceptible to an outcome density effect using a contingency learning task. In a series of six experiments, participants were ostensibly presented with students' assessment outcomes, some of whom had supposedly received teaching via a novel technique and some of whom supposedly received ordinary instruction. The distribution of the assessment outcomes was manipulated to have either frequent positive outcomes (high outcome density condition) or infrequent positive outcomes (low outcome density condition). For both continuous and categorical assessment outcomes, participants in the high outcome density condition rated the novel instructional technique as effective, despite the fact that it either had no effect or had a negative effect on outcomes, while participants in the low outcome density condition did not. These results suggest that when base rates of performance are high, participants may be particularly susceptible to drawing inaccurate inferences about the efficacy of instructional practices.
... Perhaps the most widely studied of these biases is the outcome-density effect (OD): given a fixed (usually null) contingency value, judgments will systematically increase above zero when the marginal probability of the outcome, P(O), is high, compared to when it is low (Alloy & Abramson, 1979; Buehner, Cheng & Clifford, 2003; Moreno-Fernández, Blanco & Matute, 2017; Musca, Vadillo, Blanco & Matute, 2010). This is a robust effect that has been replicated, and it could lead to causal illusions (the perception of causal links in situations in which there are none; see Matute, Blanco & Díaz-Lago, 2019, for a review). These mistaken beliefs could, in turn, entail serious consequences, as they could underlie illusions of effectiveness of pseudoscientific medicine (Matute, Yarritu & Vadillo, 2011) and contribute to maintaining social prejudice (Blanco, Gómez-Fortes, & Matute, 2018; Rodríguez-Ferreiro & Barberia, 2017). ...
Article
Full-text available
Judgments of a treatment's effectiveness are usually biased by the probability with which the outcome (e.g., symptom relief) appears: even when the treatment is completely ineffective (i.e., there is a null contingency between cause and outcome), judgments tend to be higher when outcomes appear with high probability. In this research, we present ambiguous stimuli, expecting to find individual differences in the tendency to interpret them as outcomes. In Experiment 1, judgments of effectiveness of a completely ineffective treatment increased with the spontaneous tendency of participants to interpret ambiguous stimuli as outcome occurrences (i.e., healings). In Experiment 2, this interpretation bias was affected by the overall treatment-outcome contingency, suggesting that the tendency to interpret ambiguous stimuli as outcomes is learned and context-dependent. In conclusion, we show that, to understand how judgments of effectiveness are affected by outcome probability, we need to also take into account the variable tendency of people to interpret ambiguous information as outcome occurrences.
Article
Full-text available
The strength of the learned relation between two events, a model for causal perception, has been found to depend on their overall statistical relation, and might be expected to be related to both training trial frequency and trial duration. We report five experiments using a rapid-trial streaming procedure containing Event 1-Event 2 pairings (A trials), Event 1-alone (B trials), Event 2-alone (C trials), and neither event (D trials), in which trial frequencies and durations were independently varied. Judgements of association increased with increasing frequencies of A trials and decreased with increasing frequencies of both B and C trials but showed little effect of frequency of D trials. Across five experiments, a weak but often significant effect of trial duration was also detected, which was always in the same direction as trial frequency. Thus, both frequency and duration of trials influenced learning, but frequency had decidedly stronger effects. Importantly, the benefit of more trials greatly outweighed the observed reduction in effect size caused by a proportional decrease in trial duration. In experiment 5, more trials of proportionately shorter duration enhanced effects on contingency judgments despite a shortening of the training session. We consider the observed 'frequency advantage' with respect to both frequentist models of learning and models based on information. (PsycInfo Database Record (c) 2021 APA, all rights reserved).
Article
Full-text available
Rationale: Self-limited diseases resolve spontaneously without treatment or intervention. From the patient's viewpoint, this means experiencing an improvement of the symptoms with increasing probability over time. Previous studies suggest that the observation of this pattern could foster illusory beliefs of effectiveness, even if the treatment is completely ineffective. Therefore, self-limited diseases could provide an opportunity for pseudotherapies to appear as if they were effective.

Objective: In three computer-based experiments, we investigate how the beliefs of effectiveness of a pseudotherapy form and change when the disease disappears gradually regardless of the intervention.

Methods: Participants played the role of patients suffering from a fictitious disease, who were being treated with a fictitious medicine. The medicine was completely ineffective, because symptom occurrence was uncorrelated with medicine intake. However, in one of the groups the trials were arranged so that symptoms were less likely to appear at the end of the session, mimicking the experience of a self-limited disease. Except for this difference, both groups received similar information concerning treatment effectiveness.

Results: In Experiments 1 and 2, when the disease disappeared progressively during the session, the completely ineffective medicine was judged as more effective than when the same information was presented in a random fashion. Experiment 3 extended this finding to a new situation in which symptom improvement was also observed before the treatment started.

Conclusions: We conclude that self-limited diseases can produce strong overestimations of effectiveness for treatments that actually produce no effect. This has practical implications for preventative and primary health services. The data and materials that support these experiments are freely available at the Open Science Framework (https://bit.ly/2FMPrMi).
Article
Full-text available
How stable and general is behavior once maximum learning is reached? To answer this question and understand post-acquisition behavior and its related individual differences, we propose a psychological principle that naturally extends associative models of Pavlovian conditioning to a dynamical oscillatory model in which subjects have a greater memory capacity than usually postulated, but with greater forecast uncertainty. This results in a greater resistance to learning in the first few sessions, followed by an over-optimal response peak and a sequence of progressively damped response oscillations. We detected the first peak and trough of the new learning curve in our data, but their dispersion was too large to also check for the presence of oscillations with smaller amplitude. We ran an unusually long experiment with 32 rats over 3,960 trials, in which we excluded habituation and other well-known phenomena as sources of variability in the subjects' performance. Using the data of this experiment and of another Pavlovian experiment by Harris et al. (2015), we tested the theory against the basic associative single-cue Rescorla–Wagner (RW) model as an illustration of the principle. We found evidence that the RW model is the best non-linear regression to the data for only a minority of the subjects, whereas its dynamical extension can explain almost all of the data with strong to very strong evidence. Finally, an analysis of short-scale fluctuations of individual responses showed that they are described by random white noise, in contrast with the colored-noise findings in human performance.
Article
Full-text available
Individuals interpret themselves as causal agents when executing an action to achieve an outcome, even when action and outcome are independent. How can the illusion of control be managed? Once established, does it decay? This study aimed to analyze the effects of valence, probability of the outcome [p(O)], and probability of the actions performed by the participant [p(A)] on the magnitude of judgments of control and corresponding associative measures (including the Rescorla–Wagner, Probabilistic Contrast, and Cheng's Power Probabilistic Contrast models). A traffic light was presented on a computer screen to 81 participants, who tried to control the green or red lights by pressing the spacebar, after instructions describing a productive or a preventive scenario. There were 4 blocks of 50 trials, one under each of 4 different p(O)s (0.10, 0.30, 0.70, and 0.90), presented in random order. Judgments were assessed on a bidimensional scale. The 2 × 4 × 4 mixed experimental design was analyzed through General Linear Models, including the factor group (between-subject valence), and block and p(O) (within subjects). There was a small effect of group and a large and direct effect of p(O) on judgments. Illusion was reported by 66% of the sample and was positive in the productive group. The oscillation of p(O) produced stronger illusions; decreasing p(O)s produced nil or negative illusions. Only the Rescorla–Wagner model could model causality properly. The reasons why p(A) and the other models could not generate significant results are discussed. The results help to comprehend the importance of keeping moderate illusions in productive and preventive scenarios.
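The contingency indices named in the abstract above have standard definitions over a 2×2 table of cause/outcome frequencies. A minimal sketch (function names are ours; Cheng's power is shown in its generative form only):

```python
def delta_p(a, b, c, d):
    """Probabilistic Contrast (delta-P) from a 2x2 contingency table:
    a = cause & outcome, b = cause & no outcome,
    c = no cause & outcome, d = no cause & no outcome."""
    return a / (a + b) - c / (c + d)

def causal_power(a, b, c, d):
    """Cheng's generative causal power: delta-P / (1 - P(outcome | no cause))."""
    return delta_p(a, b, c, d) / (1 - c / (c + d))

dp = delta_p(15, 5, 15, 5)  # null contingency despite a frequent outcome
```

Note that in the 15/5/15/5 example the outcome is frequent (75% of trials) yet ΔP is exactly zero, which is the kind of schedule under which judgments nevertheless drift upward.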
Article
Full-text available
Illusory causation refers to a consistent error in human learning in which the learner develops a false belief that two unrelated events are causally associated. Laboratory studies usually demonstrate illusory causation by presenting two events—a cue (e.g., drug treatment) and a discrete outcome (e.g., patient has recovered from illness)—probabilistically across many trials such that the presence of the cue does not alter the probability of the outcome. Illusory causation in these studies is further augmented when the base rate of the outcome is high, a characteristic known as the outcome density effect. Illusory causation and the outcome density effect provide laboratory models of false beliefs that emerge in everyday life. However, unlike laboratory research, the real-world beliefs to which illusory causation is most applicable (e.g., ineffective health therapies) often involve consequences that are not readily classified in a discrete or binary manner. This study used a causal learning task framed as a medical trial to investigate whether similar outcome density effects emerged when using continuous outcomes. Across two experiments, participants observed outcomes that were either likely to be relatively low (low outcome density) or likely to be relatively high (high outcome density) along a numerical scale from 0 (no health improvement) to 100 (full recovery). In Experiment 1, a bimodal distribution of outcome magnitudes, incorporating variance around a high and low modal value, produced illusory causation and outcome density effects equivalent to a condition with two fixed outcome values. In Experiment 2, the outcome density effect was evident when using unimodal skewed distributions of outcomes that contained more ambiguous values around the midpoint of the scale. Together, these findings provide empirical support for the relevance of the outcome density bias to real-world situations in which outcomes are not binary but occur to differing degrees. 
This has implications for the way in which we apply our understanding of causal illusions in the laboratory to the development of false beliefs in everyday life.
Article
Full-text available
We carried out an experiment using a conventional causal learning task but extending the number of learning trials participants were exposed to. Participants in the standard training group were exposed to 48 learning trials before being asked about the potential causal relationship under examination, whereas for participants in the long training group the length of training was extended to 288 trials. In both groups, the event acting as the potential cause had zero correlation with the occurrence of the outcome, but both the outcome density and the cause density were high, therefore providing a breeding ground for the emergence of a causal illusion. In contradiction to the predictions of associative models such as the Rescorla-Wagner model, we found moderate evidence against the hypothesis that extending the learning phase alters the causal illusion. However, assessing causal impressions recurrently did weaken participants’ causal illusions.
Article
Full-text available
Previous studies have provided evidence that selective attention tends to prioritize the processing of stimuli that are good predictors of upcoming events over nonpredictive stimuli. Moreover, studies using eye-tracking to measure attention demonstrate that this attentional bias towards predictive stimuli is at least partially under voluntary control and can be flexibly adapted via instruction. Our experiment took a similar approach to these prior studies, manipulating participants’ experience of the predictiveness of different stimuli over the course of trial-by-trial training; we then provided explicit verbal instructions regarding stimulus predictiveness that were designed to be either consistent or inconsistent with the previously established learned predictiveness. Critically, we measured the effects of training and instruction on attention to stimuli using a dot probe task, which allowed us to assess rapid shifts of attention (unlike the eye-gaze measures used in previous studies). Results revealed a rapid attentional bias towards stimuli experienced as predictive (versus those experienced as nonpredictive), that was completely unaffected by verbal instructions. This was not due to participants’ failure to recall or use instructions appropriately, as revealed by analyses of their learning about stimuli, and their memory for instructions. Overall, these findings suggest that rapid attentional biases such as those measured by the dot probe task are more strongly influenced by our prior experience during training than by our current explicit knowledge acquired via instruction.
Article
Full-text available
Superstitions are common, yet we have little understanding of the cognitive mechanisms that bring them about. This study used a laboratory‐based analogue for superstitious beliefs that involved people monitoring the relationship between undertaking an action (pressing a button) and an outcome occurring (a light illuminating). The task was arranged such that there was no objective contingency between pressing the button and the light illuminating – the light was just as likely to illuminate whether the button was pressed or not. Nevertheless, most people rated the causal relationship between the button press and the light illuminating to be moderately positive, demonstrating an illusion of causality. This study found that the magnitude of this illusion was predicted by people's level of endorsement of common superstitious beliefs (measured using a novel Superstitious Beliefs Questionnaire), but was not associated with mood variables or their self‐rated locus of control. This observation is consistent with a more general individual difference or bias to overweight conjunctive events over disjunctive events during causal reasoning in those with a propensity for superstitious beliefs.
Article
Full-text available
The purpose of this research is to investigate the impact of a foreign language on the causality bias (i.e., the illusion that two events are causally related when they are not). We predict that using a foreign language could reduce the illusions of causality. A total of 36 native English speakers participated in Experiment 1, and 80 native Spanish speakers in Experiment 2. They performed a standard contingency learning task, which can be used to detect causal illusions. Participants who performed the task in their native tongue replicated the illusion of causality effect, whereas those performing the task in their foreign language were more accurate in detecting that the two events were causally unrelated. Our results suggest that presenting the information in a foreign language could be used as a strategy to debias individuals against causal illusions, thereby facilitating more accurate judgements and decisions in non-contingent situations. They also contribute to the debate on the nature and underlying mechanisms of the foreign language effect, given that the illusion of causality is rooted in basic associative processes.
Article
Full-text available
Our ability to detect statistical dependencies between different events in the environment is strongly biased by the number of coincidences between them. Even when there is no true covariation between a cue and an outcome, if the marginal probability of either of them is high, people tend to perceive some degree of statistical contingency between both events. The present paper explores the ability of the Comparator Hypothesis to explain the general pattern of results observed in this literature. Our simulations show that this model can account for the biasing effects of the marginal probabilities of cues and outcomes. Furthermore, the overall fit of the Comparator Hypothesis to a sample of experimental conditions from previous studies is comparable to that of the popular Rescorla-Wagner model. These results should encourage researchers to further explore and test the predictions of the Comparator Hypothesis in the domain of biased contingency detection.
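The comparator mechanism discussed above can be sketched under heavy simplifying assumptions: a single comparator stimulus (e.g., the experimental context), a free weighting constant k, and association strengths taken as given numbers rather than learned. This is our own illustrative reduction, not the full model used in the paper's simulations:

```python
def comparator_response(v_target, v_target_comp, v_comp_outcome, k=1.0):
    """Responding to the target cue reflects its direct association with the
    outcome (v_target) discounted by the indirect pathway through the
    comparator stimulus (target->comparator times comparator->outcome)."""
    return v_target - k * v_target_comp * v_comp_outcome

# A strong comparator (e.g., a context often paired with the outcome)
# pulls down responding to the target cue:
weak_ctx = comparator_response(0.6, 0.8, 0.1)
strong_ctx = comparator_response(0.6, 0.8, 0.7)
```

The sketch shows the core comparison step only; how high marginal probabilities inflate the component associations is where the model's account of biased contingency detection actually lies.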
Article
Full-text available
Cognitive biases such as causal illusions have been related to paranormal and pseudoscientific beliefs and, thus, pose a real threat to the development of adequate critical thinking abilities. We aimed to reduce causal illusions in undergraduates by means of an educational intervention combining training-in-bias and training-in-rules techniques. First, participants directly experienced situations that tend to induce the Barnum effect and the confirmation bias. Thereafter, these effects were explained and examples of their influence over everyday life were provided. Compared to a control group, participants who received the intervention showed diminished causal illusions in a contingency learning task and a decrease in the precognition dimension of a paranormal belief scale. Overall, results suggest that evidence-based educational interventions like the one presented here could be used to significantly improve critical thinking skills in our students.
Article
Full-text available
Emotions are at the core of human nature. There is evidence that emotional reactivity in foreign languages compared to native languages is reduced. We explore whether this emotional distance could modulate fear conditioning, an essential mechanism for the understanding and treatment of anxiety disorders. A group of participants was verbally informed (either in a foreign or in a native language) that two different stimuli cued either the potential presence of a threat stimulus or its absence. We registered pupil size and electrodermal activity and calculated the difference in psychophysiological responses to conditioned and to unconditioned stimuli. Our findings provided evidence that verbal conditioning processes are affected by language context in this paradigm. We report the first experimental evidence regarding how the use of a foreign language may reduce fear conditioning. This observation opens the avenue to the potential use of a foreign language in clinical contexts.
Article
Full-text available
Individual differences in behavior are understood generally as arising from an interaction between genes and environment, omitting a crucial component. The literature on animal and human learning suggests the need to posit principles of learning to explain our differences. One of the challenges for the advancement of the field has been to establish how general principles of learning can explain the almost infinite variation in behavior. We present a case that: (a) individual differences in behavior emerge, in part, from principles of learning; (b) associations provide a descriptive mechanism for understanding the contribution of experience to behavior; and (c) learning theories explain dissociable aspects of behavior. We use 4 examples from the field of learning to illustrate the importance of involving psychology, and associative theory in particular, in the analysis of individual differences: (a) fear learning; (b) behavior directed to cues for outcomes (i.e., sign- and goal-tracking); (c) stimulus learning related to attention; and (d) human causal learning.
Article
Full-text available
Previous research has studied the relationship between political ideology and cognitive biases, such as the tendency of conservatives to form stronger illusory correlations between negative infrequent behaviors and minority groups. We further explored these findings by studying the relation between illusory correlation and moral values. According to the moral foundations theory, liberals and conservatives differ in the relevance they concede to different moral dimensions: Care, Fairness, Loyalty, Authority, and Purity. Whereas liberals consistently endorse the Care and Fairness foundations more than the Loyalty, Authority and Purity foundations, conservatives tend to adhere to the five foundations alike. In the present study, a group of participants took part in a standard illusory correlation task in which they were presented with randomly ordered descriptions of either desirable or undesirable behaviors attributed to individuals belonging to numerically different majority and minority groups. Although the proportion of desirable and undesirable behaviors was the same in the two groups, participants attributed a higher frequency of undesirable behaviors to the minority group, thus showing the expected illusory correlation effect. Moreover, this effect was specifically associated with our participants’ scores in the Loyalty subscale of the Moral Foundations Questionnaire. These results emphasize the role of the Loyalty moral foundation in the formation of attitudes towards minorities among conservatives. Our study points out the moral system as a useful fine-grained framework to explore the complex interaction between basic cognitive processes and ideology.
Article
Full-text available
Causal illusions occur when people perceive a causal relation between two events that are actually unrelated. One factor that has been shown to promote these mistaken beliefs is the outcome probability. Thus, people tend to overestimate the strength of a causal relation when the potential consequence (i.e. the outcome) occurs with a high probability (outcome-density bias). Given that children and adults differ in several important features involved in causal judgment, including prior knowledge and basic cognitive skills, developmental studies can be considered an outstanding approach to detect and further explore the psychological processes and mechanisms underlying this bias. However, the outcome density bias has been mainly explored in adulthood, and no previous evidence for this bias has been reported in children. Thus, the purpose of this study was to extend outcome-density bias research to childhood. In two experiments, children between 6 and 8 years old were exposed to two similar setups, both showing a non-contingent relation between the potential cause and the outcome. These two scenarios differed only in the probability of the outcome, which could either be high or low. Children judged the relation between the two events to be stronger in the high probability of the outcome setting, revealing that, like adults, they develop causal illusions when the outcome is frequent.
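The outcome-density bias described above has a standard associative reading: before learning reaches asymptote, a noncontingent cue sharing trials with an always-present context picks up more associative strength when the outcome is frequent. A minimal simulation sketch, assuming illustrative parameter values and a deterministic trial schedule of our own construction:

```python
def simulate_rw(trials, alpha=0.1, lam=1.0):
    """Rescorla-Wagner with a target cue plus an always-present context cue.
    Each trial is a (cue_present, outcome_present) pair; the prediction is the
    summed strength of the cues present, and all present cues share the error."""
    v_cue, v_ctx = 0.0, 0.0
    for cue, outcome in trials:
        error = (lam if outcome else 0.0) - (v_ctx + (v_cue if cue else 0.0))
        v_ctx += alpha * error
        if cue:
            v_cue += alpha * error
    return v_cue

def null_contingency_trials(outcomes_per_5, blocks=4):
    """Deterministic schedule: the outcome rate is identical on cue-present
    and cue-absent trials, so the true contingency is zero."""
    block = ([(True, True)] * outcomes_per_5 + [(True, False)] * (5 - outcomes_per_5)
             + [(False, True)] * outcomes_per_5 + [(False, False)] * (5 - outcomes_per_5))
    return block * blocks

v_high = simulate_rw(null_contingency_trials(4))  # P(outcome) = 0.8
v_low = simulate_rw(null_contingency_trials(1))   # P(outcome) = 0.2
```

With this incomplete training, the cue ends up with clearly more positive strength in the high-density schedule than in the low-density one, mirroring the bias that both children and adults show in the study.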
Article
Full-text available
Most associative models typically assume that learning can be understood as a gradual change in associative strength that captures the situation into one single parameter, or representational state. We will call this view single-state learning. However, there is ample evidence showing that under many circumstances different relationships that share features can be learned independently, and animals can quickly switch between expressing one or another. We will call this multiple-state learning. Theoretically, it is understudied because it needs a different data analysis approach from those usually employed. In this paper, we present a Bayesian model of the Partial Reinforcement Extinction Effect (PREE) that can test the predictions of the multiple-state view. This implies estimating the moment of change in the responses (from the acquisition to the extinction performance), both at the individual and at the group levels. We used this model to analyze data from a PREE experiment with three levels of reinforcement during acquisition (100%, 75% and 50%). We found differences in the estimated moment of switch between states during extinction, so that it was delayed after leaner partial reinforcement schedules. The finding is compatible with the multiple-state view. It is the first time, to our knowledge, that the predictions from the multiple-state view are tested directly. The paper also aims to show the benefits that Bayesian methods can bring to the associative learning field.
Article
Full-text available
Causal asymmetry is one of the most fundamental features of the physical world: Causes produce effects, but not vice versa. This article is part of a debate between the view that, in principle, people are sensitive to causal directionality during learning (causal-model theory) and the view that learning primarily involves acquiring associations between cues and outcomes irrespective of their causal role (associative theories). Four experiments are presented that use asymmetries of cue competition to discriminate between these views. These experiments show that, contrary to associative accounts, cue competition interacts with causal status and that people are capable of differentiating between predictive and diagnostic inferences. Additional implications of causal-model theory are elaborated and empirically tested against alternative accounts. The results uniformly favor causal-model theory.
Article
Full-text available
Two experiments on human causal induction with multiple candidate causes are reported. Experiment 1 investigated the influence of a perfect preventive cause on the ratings of a less contingent cause. Whereas the Rescorla-Wagner model (RWM) and Cheng's probabilistic contrast model predict that the less contingent cause should be completely discounted, the Pearce model predicts, in most cases, an enhancement of that cause's perceived importance. Results corresponded more closely to the predictions of the Pearce model. The predictions of both the RWM and the Pearce model rely on a constant context cue acquiring associative strength, yet no such cue was explicitly identified in the task scenario employed in Experiment 1. Experiment 2 replicated a number of key conditions of Experiment 1 with a task scenario that afforded ratings of the causal importance of the context in which the effectiveness of the discrete candidate causes was evaluated. In addition, the number of trials was increased to test the possibility that the ratings in Experiment 1 were the product of incomplete learning. The results of the first experiment were replicated and the ratings of the effectiveness of the context cue were anticipated by both the RWM and the Pearce model. Overall, the Pearce model offers a more comprehensive account of the causal inferences recorded in this study.
Article
Full-text available
In the reasoning literature, paranormal beliefs have been proposed to be linked to two related phenomena: a biased perception of causality and a biased information-sampling strategy (believers tend to test fewer hypotheses and prefer confirmatory information). In parallel, recent contingency learning studies showed that, when two unrelated events coincide frequently, individuals interpret this ambiguous pattern as evidence of a causal relationship. Moreover, the latter studies indicate that sampling more cause-present cases than cause-absent cases strengthens the illusion. If paranormal believers actually exhibit a biased exposure to the available information, they should also show this bias in the contingency learning task: they would in fact expose themselves to more cause-present cases than cause-absent trials. Thus, by combining the two traditions, we predicted that believers in the paranormal would be more vulnerable to developing causal illusions in the laboratory than nonbelievers because there is a bias in the information they experience. In this study, we found that paranormal beliefs (measured using a questionnaire) correlated with causal illusions (assessed by using contingency judgments). As expected, this correlation was mediated entirely by the believers' tendency to expose themselves to more cause-present cases. The association between paranormal beliefs, biased exposure to information, and causal illusions was only observed for ambiguous materials (i.e., the noncontingent condition). In contrast, the participants' ability to detect causal relationships which did exist (i.e., the contingent condition) was unaffected by their susceptibility to believe in paranormal phenomena.
Article
Full-text available
Illusions of causality occur when people develop the belief that there is a causal connection between two events that are actually unrelated. Such illusions have been proposed to underlie pseudoscience and superstitious thinking, sometimes leading to disastrous consequences in relation to critical life areas, such as health, finances, and wellbeing. Like optical illusions, they can occur for anyone under well-known conditions. Scientific thinking is the best possible safeguard against them, but it does not come intuitively and needs to be taught. Teaching how to think scientifically should benefit from better understanding of the illusion of causality. In this article, we review experiments that our group has conducted on the illusion of causality during the last 20 years. We discuss how research on the illusion of causality can contribute to the teaching of scientific thinking and how scientific thinking can reduce illusion.
Article
Full-text available
Evaluation of causal reasoning models depends on how well the subjects' causal beliefs are assessed. Elicitation of causal beliefs is determined by the experimental questions put to subjects. We examined the impact of question formats commonly used in causal reasoning research on participants' responses. The results of our experiment (Study 1) demonstrate that both the mean and homogeneity of the responses can be substantially influenced by the type of question (structure induction versus strength estimation versus prediction). Study 2A demonstrates that subjects' responses to a question requiring them to predict the effect of a candidate cause can be significantly lower and more heterogeneous than their responses to a question asking them to diagnose a cause when given an effect. Study 2B suggests that diagnostic reasoning can strongly benefit from cues relating to temporal precedence of the cause in the question. Finally, we evaluated 16 variations of recent computational models and found that model fitting was substantially influenced by the type of questions. Our results show that future research in causal reasoning should place a high priority on disentangling the effects of question formats from the effects of experimental manipulations, because that will enable comparisons between models of causal reasoning uncontaminated by method artifact.
Article
Full-text available
It is generally assumed that the way people assess the relationship between a cause and an outcome is closely related to the actual evidence existing about the co-occurrence of these events. However, people’s estimations are often biased, and this usually translates into illusions of causality. Some have suggested that such illusions could be the result of previous knowledge-based expectations. In the present research we explored the role that previous knowledge has in the development of illusions of causality. We propose that previous knowledge influences the assessment of causality by influencing the decisions about responding or not (i.e., presence or absence of the potential cause), which biases the information people are exposed to, and this in turn produces illusions congruent with such biased information. In a non-contingent situation in which participants decided whether the potential cause was present or absent (Experiment 1), the influence of expectations on participants’ judgments was mediated by the probability of occurrence of the potential cause (determined by participants’ responses). However, in an identical situation, except that the participants were not allowed to decide the occurrence of the potential cause, only the probability of the cause was significant, not the expectations or the interaction. Together, these results support our hypothesis that knowledge-based expectations affect the development of causal illusions by the mediation of behavior, which biases the information received.
Article
Full-text available
Research into language-emotion interactions has revealed intriguing cognitive inhibition effects by emotionally negative words in bilinguals. Here, we turn to the domain of human risk taking and show that the experience of positive recency in games of chance (the "hot hand" effect) is diminished when game outcomes are provided in a second language rather than the native language. We engaged late Chinese-English bilinguals with "play" or "leave" decisions upon presentation of equal-odds bets while manipulating the language of feedback and outcome value. When positive game outcomes were presented in their second language, English, participants subsequently took significantly fewer gambles and responded more slowly compared with trials in which equivalent feedback was provided in Chinese, their native language. Positive feedback was identified as driving the cross-language difference in preference for risk over certainty: feedback for previous winning outcomes presented in Chinese increased subsequent risk taking, whereas in the English context no such effect was observed. Complementing this behavioral effect, event-related brain potentials elicited by feedback words showed an amplified response to Chinese relative to English in the feedback-related negativity window, indicating a stronger impact in the native than in the second language. We also observed a main effect of language on P300 amplitude and found that it correlated with the cross-language difference in risk selections, suggesting that the greater the difference in attention between languages, the greater the difference in risk-taking behavior. These results provide evidence that the hot hand effect is at least attenuated when an individual operates in a non-native language.
Article
Full-text available
The vast majority of published work in the field of associative learning seeks to test the adequacy of various theoretical accounts of the learning process using average data. Of course, averaging hides important information, but individual departures from the average are usually designated "error" and largely ignored. However, from the perspective of an individual differences approach, this error is the data of interest; and when associative models are applied to individual learning curves the error is substantial. To some extent individual differences can be reasonably understood in terms of parametric variations of the underlying model. Unfortunately, in many cases, the data cannot be accommodated in this way and the applicability of the underlying model can be called into question. Indeed, several authors have proposed alternatives to associative models because of the poor fit between data and associative models. In the current paper a novel associative approach to the analysis of individual learning curves is presented. The Memory Environment Cue Array Model (MECAM) is described and applied to two human predictive learning datasets. The MECAM is predicated on the assumption that participants do not parse the trial sequences to which they are exposed into independent episodes, as is often assumed when learning curves are modeled. Instead, the MECAM assumes that learning and responding on a trial may also be influenced by the events of the previous trial. Incorporating non-local information, the MECAM produced better approximations to individual learning curves than did the Rescorla-Wagner Model (RWM), suggesting that further exploration of the approach is warranted.
Article
Full-text available
Researchers have warned that causal illusions are at the root of many superstitious beliefs and fuel many people's faith in pseudoscience, thus generating significant suffering in modern society. Therefore, it is critical that we understand the mechanisms by which these illusions develop and persist. A vast amount of research in psychology has investigated these mechanisms, but little work has been done on the extent to which it is possible to debias individuals against causal illusions. We present an intervention in which a sample of adolescents was introduced to the concept of experimental control, focusing on the need to consider the base rate of the outcome variable in order to determine if a causal relationship exists. The effectiveness of the intervention was measured using a standard contingency learning task that involved fake medicines that typically produce causal illusions. Half of the participants performed the contingency learning task before participating in the educational intervention (the control group), and the other half performed the task after they had completed the intervention (the experimental group). The participants in the experimental group made more realistic causal judgments than did those in the control group, which served as a baseline. To the best of our knowledge, this is the first evidence-based educational intervention that could be easily implemented to reduce causal illusions and the many problems associated with them, such as superstitions and belief in pseudoscience.
Article
Full-text available
The illusion of control consists of overestimating the influence that our behavior exerts over uncontrollable outcomes. Available evidence suggests that an important factor in the development of this illusion is the personal involvement of participants who are trying to obtain the outcome. The dominant view assumes that this is due to social motivations and self-esteem protection. We propose that this may be due to a bias in contingency detection which occurs when the probability of the action (i.e., of the potential cause) is high. Indeed, personal involvement might often have been confounded with the probability of acting, as participants who are more involved tend to act more frequently than those for whom the outcome is irrelevant and therefore become mere observers. We tested these two variables separately. In two experiments, the outcome was always uncontrollable and we used a yoked design in which the participants of one condition were actively involved in obtaining it and the participants in the other condition observed the adventitious cause-effect pairs. The results support the latter approach: Those acting more often to obtain the outcome developed stronger illusions, and so did their yoked counterparts.
Article
Full-text available
Learned helplessness and superstition accounts of uncontrollability predict opposite results for subjects exposed to noncontingent reinforcement. Experiment 1 used the instrumental-cognitive triadic design proposed by Hiroto and Seligman (1975) for the testing of learned helplessness in humans, but eliminated the "failure light" that they introduced in their procedure. Results showed that Yoked subjects tend toward superstitious behavior and illusion of control during exposure to uncontrollable noise. This, in turn, prevents the development of learned helplessness because uncontrollability is not perceived. In Experiment 2, the failure feedback manipulation was added to the Yoked condition. Results of this experiment replicate previous findings of a proactive interference effect in humans—often characterized as learned helplessness. This effect, however, does not support learned helplessness theory because failure feedback is needed for its development. It is argued that conditions of response-independent reinforcement commonly used in human research do not lead to learned helplessness, but to superstitious behavior and illusion of control. Different conditions could lead to learned helplessness, but the limits between superstition and helplessness have not yet been investigated.
Article
Full-text available
Defining cues for instrumental causality are the temporal, spatial and contingency relationships between actions and their effects. In this study, we carried out a series of causal learning experiments that systematically manipulated time and context in positive and negative contingency conditions. In addition, we tested participants categorized as non-dysphoric and mildly dysphoric because depressed mood has been shown to affect the processing of all these causal cues. Findings showed that causal judgements made by non-dysphoric participants were contextualized at baseline and were affected by the temporal spacing of actions and effects only with generative, but not preventative, contingency relationships. Participants categorized as dysphoric made less contextualized causal ratings at baseline but were more sensitive than others to temporal manipulations across the contingencies. These effects were consistent with depression affecting causal learning through the effects of slowed time experience on accrued exposure to the context in which causal events took place. Taken together, these findings are consistent with associative approaches to causal judgement.
Article
Prior research has found that pigeons are indifferent between an option that always provides a signal for reinforcement and an alternative that provides a signal for reinforcement only 50% of the time (and a signal for the absence of reinforcement 50% of the time). This suboptimal choice suggests that the frequency of the signal for reinforcement plays virtually no role and choice depends only on the predictive value of the signal for reinforcement associated with each alternative. In the present research, we tested the hypothesis that if there are two or three signals for reinforcement associated with the suboptimal alternative but each occurs only 25% or 17% of the time, respectively, pigeons would show a greater preference for the suboptimal alternative. Although we found that increasing the number of signals for reinforcement associated with the suboptimal alternative did not increase the preference for the suboptimal alternative (relative to a single signal for reinforcement), extended training on this task resulted in a significant preference for the suboptimal alternative by both groups. This result suggests that contrast between the expected outcome at the time of choice (50% reinforcement) and the value of the signal for reinforcement (100% reinforcement) is also responsible for choice of the suboptimal alternative.
Article
Objective: We tested a novel intervention for reducing demand for ineffective health remedies. The intervention aimed to empower participants to overcome the illusion of causality, which otherwise drives erroneous perceptions regarding remedy efficacy. Design: A laboratory experiment adopted a between-participants design with six conditions that varied the amount of information available to participants (N = 245). The control condition received a basic refutation of multivitamin efficacy, whereas the principal intervention condition received a full contingency table specifying the number of people reporting a benefit vs. no benefit from both the product and placebo, plus an alternate causal explanation for inefficacy over placebo. Main outcome measures: We measured participants’ willingness to pay (WTP) for multivitamin products using two incentivized experimental auctions. General attitudes towards health supplements were assessed as a moderator of WTP. We tested generalization using ratings of the importance of clinical-trial results for making future health purchases. Results: Our principal intervention significantly reduced participants’ WTP for multivitamins (by 23%) and increased their recognition of the importance of clinical-trial results. Conclusion: We found evidence that communicating a simplified full-contingency table and an alternate causal explanation may help reduce demand for ineffective health remedies by countering the illusion of causality.
Article
Associative and statistical theories of causal and predictive learning make opposite predictions for situations in which the most recent information contradicts the information provided by older trials (e.g., acquisition followed by extinction). Associative theories predict that people will rely on the most recent information to best adapt their behavior to the changing environment. Statistical theories predict that people will integrate what they have learned in the two phases. The results of this study showed one or the other effect as a function of response mode (trial by trial vs. global), type of question (contiguity, causality, or predictiveness), and postacquisition instructions. That is, participants are able to give either an integrative judgment, or a judgment that relies on recent information as a function of test demands. The authors concluded that any model must allow for flexible use of information once it has been acquired.
Chapter
In the last decades, cognitive psychology has provided researchers with a powerful background and the rigor of experimental methods to better understand why so many people believe in pseudoscience, paranormal phenomena and superstitions. According to recent evidence, those irrational beliefs could be the unintended result of how the mind evolved to use heuristics and reach conclusions based on scarce and incomplete data. Thus, we present visual illusions as a parallel to the type of fast and frugal cognitive bias that underlies pseudoscientific belief. In particular, we focus on the causal illusion, which consists of people believing that there is a causal link between two events that coincide just by chance. The extant psychological theories that can account for this causal illusion are described, as well as the factors that are able to modulate the bias. We also discuss that causal illusions are adaptive under some circumstances, although they often lead to utterly wrong beliefs. Finally, we mention several debiasing strategies that have proven effective in fighting the causal illusion and preventing some of its consequences, such as pseudoscientific belief.
Article
The objective of this experiment was to study similarities between superstitious behavior and illusion of control. We used different motivational instructions to generate high and low rates of responding and exposed participants to noncontingent reinforcement in order to evaluate superstitious behavior and illusion of control. College students (n = 40) responded over three 10-min sessions in a computer-based free operant procedure that alternated signaled periods of noncontingent presentation of points (VT schedule) and periods in which the points were not presented (extinction, EXT). In one group of participants, points were the only reward; for the other group, instructions stated that points were later exchangeable for photocopy vouchers. We compared rates of responding and estimates of control. Points exchangeable for photocopy vouchers produced higher rates of responding and estimates of control. Frequency of response and estimates of control were positively correlated. It was concluded that motivational instructions influenced both rate of responding and judgment of control. Even when a high rate of responding was extended in time (two more sessions for each participant), judgments of control were biased by noncontingent reinforcement. Through direct comparison between superstitious behavior and illusion of control, we showed that behavioral dynamics can be important in studies of illusion of control.
Article
A growing literature demonstrates that using a foreign language affects choice. This is surprising because if people understand their options, choice should be language independent. Here, we review the impact of using a foreign language on risk, inference, and morality, and discuss potential explanations, including reduced emotion, psychological distance, and increased deliberation.
Article
This article presents a comprehensive survey of research concerning interactions between associative learning and attention in humans. Four main findings are described. First, attention is biased toward stimuli that predict their consequences reliably. This finding is consistent with the approach taken by Mackintosh (1975) in his attentional model of associative learning in nonhuman animals. Second, the strength of this attentional bias is modulated by the value of the outcome. That is, predictors of high-value outcomes receive especially high levels of attention. Third, the related but opposing idea that uncertainty may result in increased attention to stimuli (Pearce & Hall, 1980) receives less support. This suggests that hybrid models of associative learning, incorporating the mechanisms of both the Mackintosh and Pearce-Hall theories, may not be required to explain data from human participants. Rather, a simpler model, in which attention to stimuli is determined by how strongly they are associated with significant outcomes, goes a long way to account for the data on human attentional learning. The last main finding, and an exciting area for future research and theorizing, is that predictiveness and value modulate both deliberate attentional focus and more automatic attentional capture. The automatic influence of learning on attention does not appear to fit the traditional view of attention as being either goal-directed or stimulus-driven. Rather, it suggests a new kind of "derived" attention.
Article
People perceive that they have control over events to the extent that the same events do not occur outside of their control, randomly, in the environment or context. Therefore, perceived control should be enhanced if there is a large contrast between one's own control and the control that the context itself seems to exert over events. Given that depression is associated with low perceived control, we tested the hypothesis that enhanced attentional focus to context will increase perceived control in people with and without depression. A total of 106 non-depressed and mildly depressed participants completed a no-control, zero-contingency task with low and high outcome probability conditions. In the experimental context-focus group, participants were instructed to attend to the context, whereas in the control group, participants were instructed to attend to their thoughts. Irrespective of attentional focus, non-depressed participants displayed illusory control. However, people with mild depression responded strongly to the attention focus manipulation. In the control group, they evidenced low perceived control with classic depressive realism effects. In the experimental group, when asked to focus on the context in which events took place, participants with mild depression displayed enhanced perceived control or illusory control, similar to non-depressed participants. Findings are discussed in relation to whether depression effects on perceived control represent tendencies towards realism or attentional aspects of depressive thoughts.
Chapter
This chapter describes the potential explanatory power of a specific response rule and its implications for models of acquisition. This response rule is called the “comparator hypothesis.” It was originally inspired by Rescorla's contingency theory. Rescorla noted that if the number and frequency of conditioned stimulus–unconditioned stimulus (CS–US) pairings are held constant, unsignaled presentations of the US during training attenuate conditioned responding. This observation complemented the long recognized fact that the delivery of nonreinforced presentations of the CS during training also attenuates conditioned responding. The symmetry of the two findings prompted Rescorla to propose that during training, subjects inferred both the probability of the US in the presence of the CS and the probability of the US in the absence of the CS and they then established a CS–US association based upon a comparison of these quantities. The comparator hypothesis is a qualitative response rule, which, in principle, can complement any model of acquisition.
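The comparison Rescorla proposed can be written as the ΔP rule: the probability of the US given the CS minus its probability in the CS's absence. A minimal sketch in Python (the function name, trial encoding, and particular frequencies are illustrative, not taken from the chapter):

```python
def delta_p(trials):
    """Rescorla's contingency measure over (cs_present, us_present) trials:
    delta_p = P(US | CS) - P(US | no CS)."""
    with_cs = [us for cs, us in trials if cs]
    without_cs = [us for cs, us in trials if not cs]
    return sum(with_cs) / len(with_cs) - sum(without_cs) / len(without_cs)

# Holding the CS-US pairings constant, adding unsignaled US presentations
# lowers the computed contingency, mirroring the attenuated responding
# that motivated the comparator view.
paired = [(True, True)] * 8 + [(True, False)] * 2 + [(False, False)] * 10
extra_us = [(True, True)] * 8 + [(True, False)] * 2 + \
           [(False, True)] * 5 + [(False, False)] * 5
```

Because the comparator hypothesis is a response rule rather than an acquisition model, a computation like this describes what is compared at test, not how the underlying associations are learned.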
Article
Causal Bayes nets have been developed in philosophy, statistics, and computer sciences to provide a formalism to represent causal structures, to induce causal structure from data and to derive predictions (Glymour and Cooper, in Computation, causation, and discovery, 1999; Spirtes et al., in Causation, prediction, and search, 2000). Causal Bayes nets have been used as psychological theories in at least two ways. They were used as rational, computational models of causal reasoning (e.g., Gopnik et al., in Psychol Rev 111:3-32, 2004) and they were used as formal models of mental causal models (e.g., Sloman, in Causal models: how we think about the world and its alternatives, 2005). A crucial assumption made by them is the Markov condition, which informally states that variables are independent of other variables that are not their direct or indirect effects conditional on their immediate causes. Whether people’s inferences conform to the causal Markov and the faithfulness condition has recently been investigated empirically. A review of respective research indicates that inferences frequently violate these conditions. This finding challenges some uses of causal Bayes nets in psychology. They entail that causal Bayes nets may not be appropriate to derive predictions for causal model theories of causal reasoning. They also question whether causal Bayes nets as a rational model are empirically descriptive. They do not challenge, however, causal Bayes nets as normative models and their usage as formal models of causal reasoning.
Article
Many have argued that moral judgment is driven by one of two types of processes. Rationalists argue that reasoned processes are the source of moral judgments, whereas sentimentalists argue that emotional processes are. We provide evidence that both positions are mistaken; there are multiple mental processes involved in moral judgment, and it is possible to manipulate which process is engaged when considering moral dilemmas by presenting them in a non-native language. The Foreign-Language Effect (FLE) is the activation of systematic reasoning processes by thinking in a foreign language. We demonstrate that the FLE extends to moral judgment. This indicates that different types of processes can lead to the formation of differing moral judgments. One implication of the FLE is that it raises the possibility that moral judgments can be made more systematic, and that the type of processing used to form them might be relevant to normative and applied ethics.
Article
Four experiments examined trial sequencing effects in human contingency judgment. In Experiments 1–3, ratings of contingency between a target cue and outcome were affected by the presentation order of a series of trials distributed in 2 distinct blocks and showed a recency bias. Experiment 4 replicated this effect when the trials were partly intermixed. These recency effects are predicted by an associative learning model that computes associative strengths trial by trial and incorporates configural coding of cues but are problematic for probabilistic contrast accounts, which currently have no provision in the contingency computation for the differential weighting of trials as a function of their order of presentation. (PsycINFO Database Record (c) 2012 APA, all rights reserved)
Article
The ability to speak two languages often marvels monolinguals, although bilinguals report no difficulties in achieving this feat. Here, we examine how learning and using two languages affect language acquisition and processing as well as various aspects of cognition. We do so by addressing three main questions. First, how do infants who are exposed to two languages acquire them without apparent difficulty? Second, how does language processing differ between monolingual and bilingual adults? Last, what are the collateral effects of bilingualism on the executive control system across the lifespan? Research in all three areas has not only provided some fascinating insights into bilingualism but also revealed new issues related to brain plasticity and language learning.
Article
Additivity-related assumptions have been proven to modulate blocking in human causal learning. Typically, these assumptions are manipulated by means of pretraining phases (including exposure to different outcome magnitudes), or through explicit instructions. In two experiments, we used a different approach that involved neither pretraining nor instructional manipulations. Instead, we manipulated the causal structure in which the cues were embedded, thereby appealing directly to the participants' prior knowledge about causal relations and how causes would add up to yield stronger outcomes. Specifically, in our "different-system" condition, the participants should assume that the outcomes would add up, whereas in our "same-system" condition, a ceiling effect would prevent such an assumption. Consistent with our predictions, Experiment 1 showed that, when two cues from separate causal systems were combined, the participants did expect a stronger outcome on compound trials, and blocking was found, whereas when the cues belonged to the same causal system, the participants did not expect a stronger outcome on compound trials, and blocking was not observed. The results were partially replicated in Experiment 2, in which this pattern was found when the cues were tested for the second time. This evidence supports the claim that prior knowledge about the nature of causal relations can affect human causal learning. In addition, the fact that we did not manipulate causal assumptions through pretraining renders the results hard to account for with associative theories of learning.
Article
Despite Miller's (1969) now-famous clarion call to "give psychology away" to the general public, scientific psychology has done relatively little to combat festering problems of ideological extremism and both inter- and intragroup conflict. After proposing that ideological extremism is a significant contributor to world conflict and that confirmation bias and several related biases are significant contributors to ideological extremism, we raise a crucial scientific question: Can debiasing the general public against such biases promote human welfare by tempering ideological extremism? We review the knowns and unknowns of debiasing techniques against confirmation bias, examine potential barriers to their real-world efficacy, and delineate future directions for research on debiasing. We argue that research on combating extreme confirmation bias should be among psychological science's most pressing priorities. © 2009 Association for Psychological Science.
Article
This chapter discusses that experimental psychology is no longer a unified field of scholarship. The most obvious sign of disintegration is the division of the Journal of Experimental Psychology into specialized periodicals. Many forces propel this fractionation. First, the explosion of interest in many small spheres of inquiry has made it extremely difficult for an individual to master more than one. Second, the recent popularity of interdisciplinary research has lured many workers away from the central issues of experimental psychology. Third, there is a growing division between researchers of human and animal behavior; this division has been primarily driven by contemporary cognitive psychologists, who see little reason to refer to the behavior of animals or to inquire into the generality of behavioral principles. The chapter considers the study of causal perception. This area is certainly at the core of experimental psychology. Although recent research in animal cognition has taken the tack of bringing human paradigms into the animal laboratory, the experimental research described here has adopted the reverse strategy of bringing animal paradigms into the human laboratory. A further unfortunate fact is that today's experimental psychologists are receiving little or no training in the history and philosophy of psychology. This neglect means that investigations of a problem area are often undertaken without a full understanding of the analytical issues that would help guide empirical inquiry.
Article
This chapter discusses the associative accounts of causality judgment. The perceptual and cognitive approaches to causal attribution can be contrasted with a more venerable tradition of associationism. The only area of psychology that offered an associative account of a process sensitive to causality is that of conditioning. An instrumental or operant conditioning procedure presents a subject with a causal relationship between an action and an outcome, the reinforcer; performing the action either causes the reinforcer to occur under a positive contingency or prevents its occurrence under a negative one, and the subjects demonstrate sensitivity to these causal relationships by adjusting their behavior appropriately. Most of these associative theories are developed to explain classic or Pavlovian conditioning rather than the instrumental or operant variety, but there are good reasons for assuming that the two types of conditioning are mediated by a common learning process.
Article
This chapter argues that nonverbal behavioral assessment of causal judgment is apt to be more veridical than verbal assessment, which is compromised by the demand characteristics and ambiguities of language. Organisms presumably evolved the ability to learn cause-effect relationships in order to prepare for and sometimes influence future events in the real world, not in order to verbally describe these causal relationships. The use of nonverbal behavioral assessment invites direct comparisons between human causal judgment behavior and animal behavior in similar situations. Cues of high biological relevance appear to be relatively invulnerable to cue competition compared to cues of low biological relevance, which are quite susceptible to cue competition. This convergence of findings in the causal judgment and animal learning literatures suggests that the two fields can each benefit by attending to the findings of the other. Another likely finding from studies of cue competition in animals that is profitably examined in causal judgment situations with humans is the learning-performance distinction. There is also some discussion that causal judgments result from those associations that have a forward relationship from one event to another and that are not in competition with other associations that are active at the time the target association is tested.
Article
Experiments in which subjects are asked to analytically assess response-outcome relationships have frequently yielded accurate judgments of response-outcome independence, but more naturalistically set experiments in which subjects are instructed to obtain the outcome have frequently yielded illusions of control. The present research tested the hypothesis that a differential probability of responding, p(R), between these two traditions could be at the basis of these different results. Subjects received response-independent outcomes and were instructed either to obtain the outcome (naturalistic condition) or to behave scientifically in order to find out how much control over the outcome was possible (analytic condition). Subjects in the naturalistic condition tended to respond at almost every opportunity and developed a strong illusion of control. Subjects in the analytic condition maintained their p(R) at a point close to .5 and made accurate judgments of control. The illusion of control observed in the naturalistic condition appears to be a collateral effect of a high tendency to respond in subjects who are trying to obtain an outcome; this tendency to respond prevents them from learning that the outcome would have occurred with the same probability if they had not responded.
Article
In 5 experiments, humans played video games in which 2 events or causes covaried with an outcome. In Experiments 1 and 2, a highly correlated cause (a plane) of an outcome (success at traversing a minefield) reduced judgments of the strength of a weaker cause (camouflaging or painting a tank). In Experiment 3, similar results were found when both causes were negatively correlated with the outcome. In Experiment 4, strong positive or negative contingencies caused the subjects to reduce judgments of contingencies of the opposite polarity. These results can be accounted for by associative or connectionist models from animal learning such as the Rescorla-Wagner model. In Experiment 5, this type of model was contrasted with a representational model in which subjects are claimed to monitor accurately the various contingencies but use a rule in which the presence of a strong contingency causes them to discount weaker contingencies.
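The associative account invoked above can be illustrated with the core Rescorla-Wagner update, in which every cue present on a trial shares a single prediction error, so a strong cue absorbs associative strength at the expense of a redundant one. A brief sketch (the parameter values and trial schedule are illustrative; the cue names follow the abstract's Experiments 1 and 2):

```python
def rescorla_wagner(trials, alpha=0.3, lam=1.0):
    """Trial-by-trial Rescorla-Wagner updating.
    trials: (present_cues, outcome) pairs; each present cue's strength V
    changes by alpha times the shared prediction error."""
    V = {}
    for cues, outcome in trials:
        prediction = sum(V.get(c, 0.0) for c in cues)   # summed prediction
        error = (lam if outcome else 0.0) - prediction  # shared prediction error
        for c in cues:
            V[c] = V.get(c, 0.0) + alpha * error        # update every present cue
    return V

# A highly correlated cause (plane) paired alone and in compound with a
# redundant cause (tank) ends up with most of the associative strength,
# reducing the strength attributed to the weaker cue.
trials = [({"plane", "tank"}, True), ({"plane"}, True)] * 20
V = rescorla_wagner(trials)
```

Because the error term is computed from the summed prediction of all present cues, the model reproduces the cue-competition effects the experiments report without any explicit contingency bookkeeping.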
Article
Overestimations of null contingencies between a cue, C, and an outcome, O, are widely reported effects that can arise for multiple reasons. For instance, a high probability of the cue, P(C), and a high probability of the outcome, P(O), are conditions that promote such overestimations. In two experiments, participants were asked to judge the contingency between a cue and an outcome. Both P(C) and P(O) were given extreme values (high and low) in a factorial design, while maintaining the contingency between the two events at zero. While we were able to observe main effects of the probability of each event, our experiments showed that the cue- and outcome-density biases interacted such that a high probability of the two stimuli enhanced the overestimation beyond the effects observed when only one of the two events was frequent. This evidence can be used to better understand certain societal issues, such as belief in pseudoscience, that can be the result of overestimations of null contingencies in high-P(C) or high-P(O) situations.
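The factorial design described above can be made concrete with the four cell counts of a 2×2 contingency table: ΔP stays at zero whenever P(O|C) equals P(O|¬C), no matter how frequent the cue or the outcome is. A small sketch under that standard definition (the particular cell counts are illustrative, not the experiments' actual frequencies):

```python
def delta_p_from_table(a, b, c, d):
    """delta_p from 2x2 cell counts:
    a = cue & outcome,    b = cue & no outcome,
    c = no cue & outcome, d = no cue & no outcome."""
    return a / (a + b) - c / (c + d)

# Both tables are null contingencies, but the first pairs a high
# P(C) = 0.8 with a high P(O) = 0.75 -- the combination under which
# overestimation was strongest in the experiments.
high_c_high_o = (60, 20, 15, 5)  # P(O|C) = P(O|~C) = 0.75
low_c_low_o = (5, 15, 20, 60)    # P(O|C) = P(O|~C) = 0.25
```

The point of the factorial manipulation is visible here: the objective statistic is identical across conditions, so any difference in judgments must come from the marginal frequencies themselves.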