Abstract

Pseudoscience, superstitions, and quackery are serious problems that threaten public health and involve many variables. Psychology, however, has much to say about them, as it is the illusory perception of causality by so many people that needs to be understood. The proposal we put forward is that these illusions arise from the normal functioning of the cognitive system when it tries to associate causes and effects. Thus, we propose to apply basic research and theories on causal learning to reduce the impact of pseudoscience. We review the literature on the illusion of control and the causal learning traditions, and then present an experiment as an illustration of how this approach can provide fruitful ideas to reduce pseudoscientific thinking. The experiment first illustrates the development of a quackery illusion through the testimony of fictitious patients who report feeling better. Two different predictions arising from the integration of the causal learning and illusion of control domains are then shown to be effective in reducing this illusion. One is showing the testimony of people who feel better without having followed the treatment. The other is asking participants to think in causal terms rather than in terms of effectiveness.


... For these reasons, researchers often take a keen interest in fallacious biases in causal learning. Examples of important real-world phenomena that researchers have argued could be strongly and negatively influenced by learning biases include stereotype formation (Hamilton & Gifford, 1976;Le Pelley et al., 2010), judgment of guilt and voluntariness of confessions in the courtroom (Lassiter, 2002), and the use of potentially ineffective health therapies (Matute, Yarritu, & Vadillo, 2011). ...
... In conditions where the button press had absolutely no effect on the light (i.e., zero contingency), participants were more likely to overestimate the action-outcome relationship when the light frequently turned on (high OD) than when it rarely turned on (low OD). This effect has now been replicated across a wide variety of learning tasks with zero-contingency events using binary outcomes (e.g., Matute et al., 2011). Essentially, a high OD increases the frequency of a and c trials relative to b and d, even though contingency remains zero, and this is sufficient to generate strong illusions of causality. ...
... Similarly, when there is a high probability of the cause occurring (inflating the frequency of a and b trials relative to c and d), participants typically report greater causal judgments than when the cause rarely occurs (Allan & Jenkins, 1983;Vadillo, Musca, Blanco, & Matute, 2011). This is known as the cue density effect (e.g., Allan & Jenkins, 1983;Matute et al., 2011;Wasserman, Kao, Van Hamme, Katagiri, & Young, 1996). ...
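The 2×2 trial structure described in these excerpts (a: cue and outcome, b: cue alone, c: outcome alone, d: neither) can be made concrete with a short sketch. This is an illustrative toy example, not code from any of the cited studies; the cell counts are invented:

```python
from fractions import Fraction

def delta_p(a, b, c, d):
    """DeltaP contingency: P(outcome | cue) - P(outcome | no cue)."""
    return Fraction(a, a + b) - Fraction(c, c + d)

# Zero contingency, low outcome density: outcome on 20% of trials.
low_od = dict(a=8, b=32, c=8, d=32)    # P(O|C) = P(O|~C) = 0.2
# Zero contingency, high outcome density: outcome on 80% of trials.
high_od = dict(a=32, b=8, c=32, d=8)   # P(O|C) = P(O|~C) = 0.8

assert delta_p(**low_od) == 0
assert delta_p(**high_od) == 0
# High OD quadruples the cue->outcome pairings (cell a) relative to low OD,
# even though DeltaP is zero in both tables.
print(high_od["a"] / low_od["a"])  # 4.0
```

Both tables have zero contingency, yet the high-density one contains four times as many cue-outcome pairings: the raw material the outcome and cue density biases feed on.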
Article
Full-text available
Illusory causation refers to a consistent error in human learning in which the learner develops a false belief that two unrelated events are causally associated. Laboratory studies usually demonstrate illusory causation by presenting two events—a cue (e.g., drug treatment) and a discrete outcome (e.g., patient has recovered from illness)—probabilistically across many trials such that the presence of the cue does not alter the probability of the outcome. Illusory causation in these studies is further augmented when the base rate of the outcome is high, a characteristic known as the outcome density effect. Illusory causation and the outcome density effect provide laboratory models of false beliefs that emerge in everyday life. However, unlike laboratory research, the real-world beliefs to which illusory causation is most applicable (e.g., ineffective health therapies) often involve consequences that are not readily classified in a discrete or binary manner. This study used a causal learning task framed as a medical trial to investigate whether similar outcome density effects emerged when using continuous outcomes. Across two experiments, participants observed outcomes that were either likely to be relatively low (low outcome density) or likely to be relatively high (high outcome density) along a numerical scale from 0 (no health improvement) to 100 (full recovery). In Experiment 1, a bimodal distribution of outcome magnitudes, incorporating variance around a high and low modal value, produced illusory causation and outcome density effects equivalent to a condition with two fixed outcome values. In Experiment 2, the outcome density effect was evident when using unimodal skewed distributions of outcomes that contained more ambiguous values around the midpoint of the scale. Together, these findings provide empirical support for the relevance of the outcome density bias to real-world situations in which outcomes are not binary but occur to differing degrees. 
This has implications for the way in which we apply our understanding of causal illusions in the laboratory to the development of false beliefs in everyday life. Electronic supplementary material The online version of this article (10.1186/s41235-018-0149-9) contains supplementary material, which is available to authorized users.
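One way to picture the continuous-outcome design this abstract describes is a null-contingency simulation in which outcome magnitudes are drawn from a bimodal distribution regardless of the cue. This is a hypothetical sketch with invented parameter values, not the authors' materials:

```python
import random

random.seed(1)

def trial_outcome(high_density, p_high=0.8):
    """Health improvement on a 0-100 scale, independent of the cue.
    High-OD condition: values cluster near 90; low-OD: near 10."""
    p = p_high if high_density else 1 - p_high
    mode = 90 if random.random() < p else 10
    return max(0.0, min(100.0, random.gauss(mode, 5)))

# The outcome magnitude never depends on whether the drug was given:
drug = [trial_outcome(high_density=True) for _ in range(1000)]
no_drug = [trial_outcome(high_density=True) for _ in range(1000)]
diff = sum(drug) / len(drug) - sum(no_drug) / len(no_drug)
print(round(diff, 1))  # close to 0: the "treatment" adds nothing
```

Despite the zero treatment effect, the abstract's result is that learners exposed to the high-density stream (mostly near-90 improvements) judge the drug as effective, while learners in the low-density condition do not.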
... Other examples involve the formation of stereotypes and illusory correlations, such as, for instance, when we believe that people from certain minority groups are more likely to commit crimes (Le Pelley et al., 2010;Rodríguez-Ferreiro & Barberia, 2017). An additional, and very different, example involves the preference that some people show toward pseudoscience and against evidence-based medicine just because they (erroneously) judge that certain pseudotherapies are contingent with health benefits, even though they know there is no scientific evidence supporting their belief (MacFarlane, Hurlstone, & Ecker, 2018;Matute et al., 2015;Matute, Yarritu, & Vadillo, 2011). ...
... In addition, recent research has shown that this result is also observed when the outcome is gradual rather than all-or-none (for instance, when we progressively recover from a headache; see Chow, Colagiuri, & Livesey, 2019). Moreover, and as has also been predicted by associative models, many experiments have shown that not only the frequency with which the outcome occurs but also the frequency with which the cue occurs are good predictors of the overestimation of contingency (Allan & Jenkins, 1983; Matute et al., 2011; Perales, Catena, Shanks, & González, 2005; Wasserman et al., 1996; Yarritu et al., 2014). Finally, the results of Blanco et al. (2013) showed that a high probability of the outcome was necessary for the development of the illusion and that a high probability of the cue increased the effect. ...
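The associative prediction mentioned here, that a high outcome base rate inflates the cue-outcome association even at zero contingency, can be sketched with a minimal Rescorla-Wagner learner competing against an always-present context. The parameters and the simulation itself are my own illustrative assumptions, not any cited study's model:

```python
import random

random.seed(0)

def rw_cue_strength(p_outcome, p_cue=0.5, n_trials=40,
                    alpha=0.2, n_runs=500):
    """Mean Rescorla-Wagner strength of a target cue after training on a
    zero-contingency stream (outcome probability identical with and
    without the cue). An always-present context competes with the cue."""
    total = 0.0
    for _ in range(n_runs):
        v_cue, v_ctx = 0.0, 0.0
        for _ in range(n_trials):
            cue = random.random() < p_cue
            lam = 1.0 if random.random() < p_outcome else 0.0
            pred = v_ctx + (v_cue if cue else 0.0)
            error = lam - pred            # prediction error drives learning
            v_ctx += alpha * error
            if cue:
                v_cue += alpha * error
        total += v_cue
    return total / n_runs

v_high = rw_cue_strength(p_outcome=0.8)  # high outcome density
v_low = rw_cue_strength(p_outcome=0.2)   # low outcome density
print(round(v_high, 2), round(v_low, 2))  # cue strength larger under high OD
```

Because the expected cue strength scales with the outcome base rate at any finite point in training, the high-density stream leaves a larger spurious cue-outcome association after the same number of trials.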
... They concluded that personal-involvement effects could be predicted by associative theories, as personal involvement had been confounded with a high p(C) in previous studies. Furthermore, when the cue is an external event and both the cue and the outcome occur frequently, people still develop strong illusions of causality, which suggests that personal involvement is probably not critical (e.g., Matute et al., 2011). ...
Article
Many experiments have shown that humans and other animals can detect contingency between events accurately. This learning is used to make predictions and to infer causal relationships, both of which are critical for survival. Under certain conditions, however, people tend to overestimate a null contingency. We argue that a successful theory of contingency learning should explain both results. The main purpose of the present review is to assess whether cue-outcome associations might provide the common underlying mechanism that would allow us to explain both accurate and biased contingency learning. In addition, we discuss whether associations can also account for causal learning. After providing a brief description on both accurate and biased contingency judgments, we elaborate on the main predictions of associative models and describe some supporting evidence. Then, we discuss a number of findings in the literature that, although conducted with a different purpose and in different areas of research, can also be regarded as supportive of the associative framework. Finally, we discuss some problems with the associative view and discuss some alternative proposals as well as some of the areas of current debate. (PsycINFO Database Record (c) 2019 APA, all rights reserved).
... However, the multivariate regression model showed that users' assessment of the therapies' scientific validity is explained by their confidence in those therapies' usefulness. Among other causes, this confidence in the usefulness of pseudotherapies may be motivated by an intuitive and uncritical acceptance of the correlation between their use and the perception of efficacy (illusion of causality), which is reinforced by the tendency to selectively remember only the results that lead to improvements [20][21][22]. Initially, a positive perception of pseudotherapies' usefulness may arise from exposure to very convincing personal narratives or biased sources such as online forums and social networks, although it can also be derived from the recommendation of medical professionals themselves [6,22,23]. ...
... One of these possible strategies would be to disseminate accurate information on pseudotherapies' effectiveness to both the population and health-related interest groups, based on the "information deficit model" [22]. For these campaigns to be effective, it is important to take into account the psychological factors that induce belief in pseudotherapies' effectiveness, such as the illusion of causality [20,25] and the illusion of control [26] and show the mechanisms by which an ineffective intervention can be correlated with apparent efficacy [27,28]. The cultural factors that help to legitimize pseudotherapies and their use should also be considered, especially narratives about their efficacy [24]. ...
Article
Full-text available
Objectives: To identify how perceptions, attitudes, and beliefs towards pseudotherapies, health, medicine, and the public health system influence pseudotherapy use in Spain. Methods: We carried out a cross-sectional study using the Survey of Social Perception of Science and Technology-2018 (5,200 interviews). Dependent variable: ever use of pseudotherapies. Covariables: attitude towards medicine, health and the public health system; perceived health; assessment of the scientific character of homeopathy/acupuncture. The association was estimated using prevalence ratios (PR) obtained by Poisson regression models. The model was adjusted for age and socioeconomic variables. Results: Pseudotherapy use was higher in women (24.9%) than in men (14.2%) (p < 0.001). The probability of use in men (p < 0.001) and women (p < 0.001) increases with belief in pseudotherapies' usefulness. Among men, a proactive attitude (reference: passive) towards medicine and health (PR: 1.3) and a negative (reference: positive) assessment of the quality of the public health system (PR: 1.2) increased the probability of use. For women, poor perceived health (reference: good) increased the likelihood of use (PR: 1.2). Conclusion: Pseudotherapy use in Spain was associated with confidence in its usefulness irrespective of users' assessment of its scientific validity.
... Cognitive biases have contributed to people's evolutionary development by rewarding effectiveness over knowledge of the truth. Making estimates, even though they are sometimes wrong, does not always imply an evolutionary improvement, yet it does carry a very high cognitive cost (Tversky & Kahneman, 1974; Matute, Yarritu, & Vadillo, 2011; Blanco & Matute, 2018). The cost of accepting some wrong responses to situations that pose no danger would be a far lesser evil than failing to respond to undetected dangerous situations (Matute et al., 2011). Beyond cognitive economy, this process would also be reinforced by social belonging, in which an interpretation of reality is shared through similar sources and biases. ...
Article
Full-text available
ABSTRACT This paper analyses the level of knowledge about today's world held by future teachers enrolled in undergraduate and postgraduate Education programmes at the University of the Basque Country. In a context of hyperconnectivity, where verifying the veracity of information is increasingly complex, it is very important to analyse future teachers' perception of the current state of the world. This knowledge makes it possible to propose measures that help teachers understand how knowledge is acquired and generated through increasingly digitalised media. To this end, we used the questionnaire devised by Rosling et al., consisting of thirteen questions with three possible answers each. The results show a correct-answer rate of 21%, an average below three correct answers per person, and a mode of two correct answers. These figures are very similar to those obtained by people in socioeconomically advanced countries. Likewise, there is substantial agreement on the items with the best and worst accuracy rates. Studying education appears to have no effect on the development of mental schemas that support more accurate views of reality and that could prevent the prevalence of unfounded beliefs and fallacies among both students and teachers.
... This phenomenon is often referred to as the illusion of causality or illusory causation (for a review, see Matute et al., 2015). The illusion of causality has previously been associated with the development and maintenance of pseudomedicine beliefs (Matute, Yarritu, & Vadillo, 2011), as well as judgements of guilt in a criminal setting (Lassiter, Geers, Munhall, Ploutz-Snyder, & Breitenbecher, 2002). We argue that this cognitive bias also presents a problem to educators, as it might result in teachers endorsing teaching practices that are not effective in improving students' academic performance. ...
... implementing the teaching practice regularly) also result in heightened illusory causation relative to when the cue is presented infrequently. This is referred to as the cue density bias (Allan & Jenkins, 1983; Matute et al., 2011). In theory, these event densities may present a cycle of illusory belief that is difficult to break: teachers develop a strong false belief in the efficacy of an ineffectual teaching practice when they have a high-performing cohort (i.e. the outcome density effect), and this results in the persistence of the teaching practice that further strengthens the belief in its efficacy (i.e. the cue density effect), although neither the outcome nor the cue density effect had previously been shown in educationally relevant situations. ...
Article
Full-text available
Teachers sometimes believe in the efficacy of instructional practices that have little empirical support. These beliefs have proven difficult to efface despite strong challenges to their evidentiary basis. Teachers typically develop causal beliefs about the efficacy of instructional practices by inferring their effect on students' academic performance. Here, we evaluate whether causal inferences about instructional practices are susceptible to an outcome density effect using a contingency learning task. In a series of six experiments, participants were ostensibly presented with students' assessment outcomes, some of whom had supposedly received teaching via a novel technique and some of whom supposedly received ordinary instruction. The distribution of the assessment outcomes was manipulated to either have frequent positive outcomes (high outcome density condition) or infrequent positive outcomes (low outcome density condition). For both continuous and categorical assessment outcomes, participants in the high outcome density condition rated the novel instructional technique as effective, despite the fact that it either had no effect or had a negative effect on outcomes, while participants in the low outcome density condition did not. These results suggest that when base rates of performance are high, participants may be particularly susceptible to drawing inaccurate inferences about the efficacy of instructional practices.
... An alternative explanation appeals to learning and behavior-change processes driven by experienced coincidences between behavior and environmental changes (e.g., Blanco, Matute, & Vadillo, 2011; Matute, Yarritu, & Vadillo, 2011). There are several theoretical approaches to this alternative explanation, including theories of learning derived from Pavlovian conditioning (associative learning; see Vadillo, Blanco, Yarritu, & Matute, 2016). ...
... Blanco, F., Matute, H., & Vadillo, M. A. (2011). Making the uncontrollable seem controllable: The role of action in the illusion of control. ...
Chapter
Full-text available
The chapter describes Skinner's (1938) classic punishment experiment and the ongoing debate over the explanation it offers for punishment-induced suppression. Different lines of punishment research derived from these issues are presented. Recommended for young researchers and psychology students (text in Portuguese).
... Similarly, pseudoscientific information (hereafter PI) refers to references and news that promote false scientific content (e.g., Escolà-Gascón et al., 2020;Tsai et al., 2012). False scientific content is not supported by evidence from academic research (for this reason, it is considered "false") (e.g., Matute et al., 2011;Sugavanam and Natarajan, 2020). In summary, there are three main reasons for creating pseudoscientific information: (1) the subjective misinterpretation of an event or an experience, which acquires a meaning that differs from its formal characteristics (e.g., Lange et al., 2017;Mohr et al., 2019). ...
... However, analysis of pseudoscientific beliefs indicates that the differences between the two groups were not significant. This result invites the hypothesis that it is the causal illusions related to pseudoscience (and not the pseudoscientific beliefs themselves) that constitute the variable that truly poses a psychopathological risk to people's mental health (see Matute et al., 2011). Therefore, the "act of believing" or the pseudoscientific belief itself does not represent a health risk; it is the person's use of this belief (in this case, a use based on causal illusions) that proves to be a psychopathological risk. ...
Article
Full-text available
This research aims to analyze the effects of pseudoscientific information (PI) about COVID-19 on the mental well-being of the general population. A total of 782 participants were classified according to the type of municipality in which they lived (rural municipalities and urban municipalities). The participants answered psychometric questionnaires that assessed psychological well-being, pseudoscientific beliefs and the ability to discriminate between scientific and pseudoscientific information about COVID-19. The results indicated the following: the greater the ability to discriminate between false information and true information, the greater the levels of psychological well-being perceived by the participant. The ability to discriminate predicts up to 32% of psychological well-being only for subjects living in rural municipalities. Residents in urban municipalities showed lower levels of well-being than residents in rural municipalities. It is concluded that new social resources are needed to help the general population of urban municipalities discriminate between pseudoscientific and scientific information.
... Examples include pareidolia, the Barnum effect, and affective distortions (e.g., Belloch et al., 1995; Shermer, 2011). Numerous studies have found that these perceptual distortions are common in subjects who believe in the existence of the paranormal (e.g., Matute et al., 2011; Griffiths et al., 2018; Torres et al., 2020). Likewise, in some cases they represent causal attributions or illusions that try to reduce levels of uncertainty in the face of specific problems, so that their psychological function responds to the need to seek control (e.g., Groth-Marnat and Pegden, 1998; Matute et al., 2015). ...
... Taking into account that subjects who believe in the "supernatural" tend to score higher than non-believers on the various scales measuring hallucinations and perceptual distortions (see Matute et al., 2011; Griffiths et al., 2018; Torres et al., 2020; Wright et al., 2020), a possible way to increase the covariation between the macrofactors would be to replicate the CFA for the 5-factor model using only subjects who believe in the existence of the paranormal. It seems likely that the participants in this sample do not believe in the existence of the paranormal with the same intensity as the subjects in the Spanish samples. ...
Article
Full-text available
This paper presents the English adaptation of the Multivariable Multiaxial Suggestibility Inventory-2 (MMSI-2), a questionnaire developed specifically for psychological assessment and prediction of anomalous phenomena. The sample consisted of 613 respondents from England (47.6% were women and 52.4% men). All of them were of legal age (mean = 34.5; standard deviation = 8.15). An exploratory factor analysis was applied, and three confirmatory factor models were adjusted. Omega coefficients and test-retest designs were used for reliability analysis. The MMSI-2 has a valid internal structure consisting of five macrofactors: Clinical Personality Tendencies (CPT), Anomalous Perceived Phenomena (APP), Incoherent Manipulations (IMA), Altered States of Consciousness (ASC), and Openness (OP). Omega coefficients for CPT and OP factors were low but acceptable. Furthermore, test-retest trials were excellent for all scales and factors. The psychological factors CPT, IMA, and ASC predicted 18.3% of the variance of anomalous experiences (APP). The authors concluded the English MMSI-2 was a valid and reliable test for the evaluation of anomalous phenomena but recommend that subsequent research reviews the predictive quality of the underlying model.
... For instance, when a bogus medicine is used to treat a given symptom that disappears spontaneously very often, it is common to mistakenly believe that the remission of the symptom is caused by the medicine. Causal illusions have been typically identified in contingency learning experiments conducted in the laboratory, but they have been proposed to underlie many everyday superstitions and irrational beliefs [2][3][4][5][6], thus giving birth to a fruitful research field that taps into both theoretical and societal issues. Even in laboratory experiments, a relevant amount of evidence has been collected in computer tasks that used meaningful scenarios, such as the typical medicine-evaluation task, in which participants are asked to judge the effectiveness of a medicine in treating a fictitious disease. ...
... After the 50 eggs were presented, the participants were asked to answer several questions. The first one was a causal judgment, with a wording similar to the one we used in previous studies [2,3]: "To what extent do you think that the mutagenic agent was effective to produce aliens with the XG Vulnerability?". This was answered on a scale from 0 (labeled "Not effective at all") to 50 ("Moderately effective") to 100 ("Perfectly effective"). ...
Article
Full-text available
Previous research revealed that people’s judgments of causality between a target cause and an outcome in null contingency settings can be biased by various factors, leading to causal illusions (i.e., incorrectly reporting a causal relationship where there is none). In two experiments, we examined whether this causal illusion is sensitive to prior expectations about base-rates. Thus, we pretrained participants to expect either a high outcome base-rate (Experiment 1) or a low outcome base-rate (Experiment 2). This pretraining was followed by a standard contingency task in which the target cause and the outcome were not contingent with each other (i.e., there was no causal relation between them). Subsequent causal judgments were affected by the pretraining: When the outcome base-rate was expected to be high, the causal illusion was reduced, and the opposite was observed when the outcome base-rate was expected to be low. The results are discussed in the light of several explanatory accounts (associative and computational). A rational account of contingency learning based on the evidential value of information can predict our findings.
... This perspective includes perceptual distortion and cognitive styles [36]. In fact, some studies have concluded that subjects who believe in pseudosciences develop causal illusions more frequently and more strongly than nonbelievers [37]. The psychobiological function of perceptual distortion is grounded in survival: if the cause of a phenomenon is known, the cause itself and the respective phenomenon could be prevented; this would allow anticipating environmental threats and finding answers that would guarantee the survival of the species [5,37,38]. In this area, the most studied perceptual distortions are causal illusions and pareidolia [39], which is also very common in believers in pseudoscience [40]. ...
Article
Full-text available
Background: The health crisis caused by COVID-19 has led many countries to opt for social quarantine of the population. During this quarantine, communication systems have been characterized by disintermediation, the acceleration of digitization and an infodemic (excess and saturation of information). The following debate arises: Do the levels related to the psychotic phenotype and to pseudoscientific beliefs about the interpretation of information vary before and after social quarantine? Objectives: This research aims to examine the psychological effects of social quarantine on the psychotic phenotype and on the pseudoscientific beliefs and experiences of the general nonclinical population. The following hypothesis was posed: social quarantine alters levels of magical thinking, pseudoscientific beliefs and anomalous perceptions. Methods: A pre- and posttest analysis design was applied based on the difference in means, and complementary Bayesian estimation was performed. A total of 174 Spanish subjects responded to different questionnaires that evaluated psychopathological risks based on psychotic phenotypes, pseudoscientific beliefs and experiences before and after quarantine. Results: Significant differences were obtained for the variables positive psychotic symptoms, depressive symptoms, and certain perceptual alterations (e.g., cenesthetic perceptions), and a significant increase in pseudoscientific beliefs was also observed. The perceptual disturbances that increased the most after quarantine were those related to derealization and depersonalization. However, paranoid perceptions showed the highest increase, doubling the initial standard deviation. These high increases could be related to the delimitation of physical space during social quarantine and distrust towards information communicated by the government to the population.
Is it possible that the social alarmism generated by the excess of information and of pseudoscientific information has increased paranoid perceptual alterations? Conclusions: Measures taken after quarantine indicate that perceptual disturbances, subclinical psychotic symptoms and beliefs in pseudoscience have increased.
... In the present study, we adapted the standard laboratory task used in research on causal illusions to an extended learning situation. Most of the previous experiments exploring causal illusions employed training sessions of about 40 trials in the shortest cases (e.g., Barberia et al., 2013;Blanco et al., 2015;Griffiths et al., 2018) to about 100 trials in the longest cases (e.g., Matute et al., 2011;Yarritu et al., 2014). For this study, we decided to greatly extend the amount of training the participants were exposed to and explore if this manipulation diminished the intensity of the illusion developed. ...
... Participants were exposed to a standard causal learning task. Specifically, they were asked to imagine that they were specialists in a strange and dangerous disease called Lindsay Syndrome (e.g., Blanco et al., 2011, 2013; Matute et al., 2011). They were further told that the crises produced by this disease might be overcome with a new experimental drug (Batatrim) whose effectiveness was still to be determined. ...
Article
Full-text available
We carried out an experiment using a conventional causal learning task but extending the number of learning trials participants were exposed to. Participants in the standard training group were exposed to 48 learning trials before being asked about the potential causal relationship under examination, whereas for participants in the long training group the length of training was extended to 288 trials. In both groups, the event acting as the potential cause had zero correlation with the occurrence of the outcome, but both the outcome density and the cause density were high, therefore providing a breeding ground for the emergence of a causal illusion. In contradiction to the predictions of associative models such as the Rescorla-Wagner model, we found moderate evidence against the hypothesis that extending the learning phase alters the causal illusion. However, assessing causal impressions recurrently did weaken participants' causal illusions.
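The Rescorla-Wagner prediction tested in this experiment, namely that extended training should let the context block the target cue and shrink the spurious association, can be sketched as follows. The parameter values are illustrative assumptions, not those of the study:

```python
import random

random.seed(42)

def mean_cue_strength(n_trials, alpha=0.1, p_cue=0.75,
                      p_outcome=0.75, n_runs=500):
    """Rescorla-Wagner target-cue strength after n_trials of a
    zero-contingency stream with high cue and outcome density.
    An always-present context competes for associative strength."""
    total = 0.0
    for _ in range(n_runs):
        v_cue = v_ctx = 0.0
        for _ in range(n_trials):
            cue = random.random() < p_cue
            lam = 1.0 if random.random() < p_outcome else 0.0
            error = lam - (v_ctx + (v_cue if cue else 0.0))
            v_ctx += alpha * error
            if cue:
                v_cue += alpha * error
        total += v_cue
    return total / n_runs

v_short = mean_cue_strength(48)   # standard training length
v_long = mean_cue_strength(288)   # extended training length
print(round(v_short, 2), round(v_long, 2))
```

In this toy model the cue's strength peaks early and then decays toward zero as the context absorbs the outcome's associative strength, so the long group should show a weaker spurious association: exactly the decline the reported data did not show.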
... Spurious learning is generally explained in terms of simple associative mechanisms that overestimate causal relationships between external events or between the animal's actions and outcomes (13)(14)(15)(16). Indeed, simple associative learning models, like the Rescorla-Wagner model, depend heavily on correlations between events and can be easily fooled into strengthening associations based on coincidences (17). ...
... Moreover, choices consistent with subjective ordering remained strong in a PD schedule that actively disrupted them by dynamically increasing reward probability for whichever alternative had been selected least often. In contrast, a Q-learning algorithm failed to show subjective ordering under this schedule, although it replicated it under a PN schedule that did not discourage consistent preferences, capturing the well-known vulnerability of associative models to spurious reward correlations (13)(14)(15)(16)(17). These results are consistent with a wealth of studies showing that pure associative learning is not sufficient to explain TI learning (22)(23)(24)30). ...
Preprint
Survival depends on identifying learnable features of the environment that predict reward, and avoiding others that are random and unlearnable. However, humans and other animals often infer spurious associations among unrelated events, raising the question of how well they can distinguish learnable patterns from unlearnable events. Here, we tasked monkeys with discovering the serial order of two pictorial sets: a “learnable” set in which the stimuli were implicitly ordered and monkeys were rewarded for choosing the higher-rank stimulus, and an “unlearnable” set in which stimuli were unordered and feedback was random regardless of the choice. We replicated prior results that monkeys reliably learned the implicit order of the learnable set. Surprisingly, the monkeys behaved as though some ordering also existed in the unlearnable set, showing consistent choice preference that transferred to novel untrained pairs in this set, even under a preference-discouraging reward schedule that gave rewards more frequently to the stimulus that was selected less often. In simulations, a model-free RL algorithm (Q-learning) displayed a degree of consistent ordering among the unlearnable set but, unlike the monkeys, failed to do so under the preference-discouraging reward schedule. Our results suggest that monkeys infer abstract structures from objectively random events using heuristics that extend beyond stimulus-outcome conditional learning to more cognitive model-based learning mechanisms.
... This tendency to see causal relationships where none exist has been shown in the laboratory on numerous occasions (see Matute, Yarritu & Vadillo, 2011 for a review). In a typical task, people are given a button that they can press (the cue or action) and are shown a light that sometimes turns on (the outcome event). ...
... Under these circumstances, people will often give a modest positive rating for the causal efficacy of the button, even if there is actually zero contingency between pressing the button and the light illuminating (i.e., the light is just as likely to illuminate if the button is pressed as if it is not: Alloy & Abramson, 1979; Matute, 1994, 1996; Pronin, Wegner, McCarthy, & Rodriguez, 2006). This has been termed an illusion of causality (Matute et al., 2011), as people perceive a causal relationship where none exists. ...
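The zero contingency in these button-light tasks is usually quantified as ΔP, computed from the four trial types (a: cue and outcome, b: cue alone, c: outcome alone, d: neither). A small sketch with illustrative counts shows how a high outcome density can coexist with a ΔP of exactly zero:

```python
def delta_p(a, b, c, d):
    """Contingency index: delta P = P(outcome | cause) - P(outcome | no cause).

    a: cause & outcome, b: cause & no outcome,
    c: no cause & outcome, d: no cause & no outcome.
    """
    return a / (a + b) - c / (c + d)

# High outcome density, zero contingency: the light comes on 75% of the
# time whether or not the button is pressed (illustrative counts).
print(delta_p(30, 10, 30, 10))  # 0.0
```

A high outcome density simply makes a and c trials outnumber b and d; the normative index stays at zero, yet this is the condition under which people give the strongest positive causal ratings.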
Article
Full-text available
Superstitions are common, yet we have little understanding of the cognitive mechanisms that bring them about. This study used a laboratory‐based analogue for superstitious beliefs that involved people monitoring the relationship between undertaking an action (pressing a button) and an outcome occurring (a light illuminating). The task was arranged such that there was no objective contingency between pressing the button and the light illuminating – the light was just as likely to illuminate whether the button was pressed or not. Nevertheless, most people rated the causal relationship between the button press and the light illuminating to be moderately positive, demonstrating an illusion of causality. This study found that the magnitude of this illusion was predicted by people's level of endorsement of common superstitious beliefs (measured using a novel Superstitious Beliefs Questionnaire), but was not associated with mood variables or their self‐rated locus of control. This observation is consistent with a more general individual difference or bias to overweight conjunctive events over disjunctive events during causal reasoning in those with a propensity for superstitious beliefs.
... Spurious learning is generally explained in terms of simple associative mechanisms that overestimate causal relationships between external events or between the animal's actions and outcomes (13)(14)(15)(16). Indeed, simple associative learning models, like the Rescorla-Wagner model, depend heavily on correlations between events and can be easily fooled into strengthening associations based on coincidences (17). ...
... Moreover, choices consistent with subjective ordering remained strong in a PD schedule that actively disrupted them by dynamically increasing reward probability for whichever alternative had been selected least often. In contrast, a Q-learning algorithm failed to show subjective ordering under this schedule, although it replicated it under a PN schedule that did not discourage consistent preferences, capturing the well-known vulnerability of associative models to spurious reward correlations (13)(14)(15)(16)(17). These results are consistent with a wealth of studies showing that pure associative learning is not sufficient to explain TI learning (22)(23)(24)30). ...
Article
Humans and other animals often infer spurious associations among unrelated events. However, such superstitious learning is usually accounted for by conditioned associations, raising the question of whether an animal could develop more complex cognitive structures independent of reinforcement. Here, we tasked monkeys with discovering the serial order of two pictorial sets: a “learnable” set in which the stimuli were implicitly ordered and monkeys were rewarded for choosing the higher-rank stimulus, and an “unlearnable” set in which stimuli were unordered and feedback was random regardless of the choice. We replicated prior results that monkeys reliably learned the implicit order of the learnable set. Surprisingly, the monkeys behaved as though some ordering also existed in the unlearnable set, showing consistent choice preference that transferred to novel untrained pairs in this set, even under a preference-discouraging reward schedule that gave rewards more frequently to the stimulus that was selected less often. In simulations, a model-free reinforcement learning algorithm (Q-learning) displayed a degree of consistent ordering among the unlearnable set but, unlike the monkeys, failed to do so under the preference-discouraging reward schedule. Our results suggest that monkeys infer abstract structures from objectively random events using heuristics that extend beyond stimulus–outcome conditional learning to more cognitive model-based learning mechanisms.
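As a rough illustration of the model-free comparison described above (illustrative parameters and a simplified single-pair task, not the authors' actual simulation), a softmax Q-learner fed purely random feedback can still drift toward a consistent preference:

```python
import math
import random

def q_learning_pair(n_trials, alpha=0.2, beta=5.0, seed=0):
    """Softmax Q-learner choosing between two stimuli whose feedback is
    random (p = 0.5) regardless of choice, as in an 'unlearnable' pair.
    Early lucky rewards can pull the Q-values apart, producing a
    consistent preference despite the objectively random feedback."""
    rng = random.Random(seed)
    q = [0.0, 0.0]
    for _ in range(n_trials):
        # Probability of choosing stimulus 0 under a softmax rule.
        p_choose_0 = 1.0 / (1.0 + math.exp(-beta * (q[0] - q[1])))
        choice = 0 if rng.random() < p_choose_0 else 1
        reward = 1.0 if rng.random() < 0.5 else 0.0  # random feedback
        q[choice] += alpha * (reward - q[choice])
    return q

final_q = q_learning_pair(200)
print([round(x, 2) for x in final_q])
```

Under a preference-discouraging schedule (reward probability raised for the less-chosen option), this kind of learner is pulled back toward indifference, which is the dissociation from the monkeys' behavior that the abstract reports.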
... Pseudoscientific beliefs have many definitions (e.g., Fasce et al., 2020; Lilienfeld et al., 2005; Matute et al., 2011). Current research posits that pseudoscientific beliefs arise when content or information is accepted as scientific despite lacking sufficient objective evidence (Fasce & Picó, 2019). ...
... This error can occur in other contexts not limited to personality descriptions. Originally, this bias was studied in the field of horoscopes and pseudoscience (see Matute et al., 2011). Research results suggest that people who do not effectively detect fake news regularly commit the Barnum Effect. ...
Article
Full-text available
Awareness of the potential psychological significance of false news increased during the coronavirus pandemic, however, its impact on psychopathology and individual differences remains unclear. Acknowledging this, the authors investigated the psychological and psychopathological profiles that characterize fake news consumption. A total of 1452 volunteers from the general population with no previous psychiatric history participated. They responded to clinical psychopathology assessment tests. Respondents solved a fake news screening test, which allowed them to be allocated to a quasi-experimental condition: group 1 (non-fake news consumers) or group 2 (fake news consumers). Mean comparison, Bayesian inference, and multiple regression analyses were applied. Participants with a schizotypal, paranoid, and histrionic personality were ineffective at detecting fake news. They were also more vulnerable to suffer its negative effects. Specifically, they displayed higher levels of anxiety and committed more cognitive biases based on suggestibility and the Barnum Effect. No significant effects on psychotic symptomatology or affective mood states were observed. Corresponding to these outcomes, two clinical and therapeutic recommendations related to the reduction of the Barnum Effect and the reinterpretation of digital media sensationalism were made. The impact of fake news and possible ways of prevention are discussed.
... Once we believe something to be true, it can be very difficult to unlearn what we now "know", and that is how myths and pseudoscience take hold. While there are many mechanisms that can reinforce this reluctance to change one's stance, there are two in particular that seem to recur over and over: (1) illusion of causality (seeing causal relationships that do not actually exist); and (2) confirmation bias (consciously or unconsciously seeking out information to reinforce or support what we already believe; Matute et al., 2011; Yarritu et al., 2015). While these cognitive shortcomings can actually be adaptive by preserving self-esteem and well-being (Taylor and Brown, 1988), they can also cause us to repeatedly make poor choices, reinforcing faulty beliefs and assumptions. ...
... Finally, the use of the term "causal illusion" in our study might be subject to discussion. Tasks investigating causal illusions have typically relied on contingency [30] as the normative statistic to which to compare causal impressions, and the terms "causal illusion" or "illusion of causality" have become the norm to denote the phenomenon of medium to high causal ratings in zero contingency contexts (e.g., [15,31,32]). Nevertheless, some authors have suggested that ratings that deviate from the programmed contingency should not necessarily be interpreted as errors or illusions, and have offered a rational explanation for the special importance given to conjunctive trials. ...
Article
Full-text available
The prevalence of pseudoscientific beliefs in our societies negatively influences relevant areas such as health or education. Causal illusions have been proposed as a possible cognitive basis for the development of such beliefs. The aim of our study was to further investigate the specific nature of the association between causal illusion and endorsement of pseudoscientific beliefs through an active contingency detection task. In this task, volunteers are given the opportunity to manipulate the presence or absence of a potential cause in order to explore its possible influence over the outcome. Responses provided are assumed to reflect both the participants’ information interpretation strategies and their information search strategies. Following a previous study investigating the association between causal illusion and the presence of paranormal beliefs, we expected that the association between causal illusion and pseudoscientific beliefs would disappear when controlling for the information search strategy (i.e., the proportion of trials in which the participants decided to present the potential cause). Volunteers with higher pseudoscientific beliefs also developed stronger causal illusions in active contingency detection tasks. This association appeared irrespective of whether the participants with more pseudoscientific beliefs showed (Experiment 2) or did not show (Experiment 1) differential search strategies. Our results suggest that both information interpretation and search strategies could be significantly associated with the development of pseudoscientific (and paranormal) beliefs.
... For instance, in an experiment by Matute, Yarritu, & Vadillo (2011), the contingency between a potential cause (i.e., a fictitious medicine administered by a fictitious agent) and an outcome (recovery) is zero. ...
... Symptom levels have been shown to be related to unhealthy or maladaptive increases in control. For example, high perceived control is sometimes thought of as 'illusory control', which is defined as 'overestimating the influence that one's actions have over uncontrollable events' [15] and is linked to beliefs about the effectiveness of bogus treatments for diseases [16], and excessive risk taking [17]. Indeed, individuals with schizophrenia have been shown to be more susceptible to illusory control than others [18,19], and higher levels of obsessive compulsive disorder symptoms are correlated with higher levels of perceived control [20]. ...
Article
Full-text available
The relationship between the constructs of perceived control and symptoms of mood disorders has been demonstrated. The current study evaluates cultural values both as an individual difference moderating variable and as one of the mechanisms through which the association between perceived control and mood disturbances may operate. The hypotheses were examined with a sample of 615 participants recruited in Saudi Arabia. Participants completed measures of perceived control, individualism and collectivism, and symptoms of depression and bipolar disorder. In general, the results supported a model in which higher levels of perceived control promote a less symptomatic mood state. In most cases, cultural values positively mediated the relationship between perceived control and mood disturbance with lower symptom levels predicted. However, when the components of perceived control were examined separately, high perceived mastery together with highly individualistic values predicted higher levels of bipolar symptoms. In this sample, there was less evidence of cultural values moderating the control-mood disturbance relationship. Only one moderator relationship was identified, which showed low control linking to higher symptom levels only in those who disagreed with individualistic values. Overall, our data are in agreement with the notion that pre-existing cultural values have an important effect on mood disorder symptoms.
... Symptom levels have been shown to be related to unhealthy or maladaptive increases in control. For example, high perceived control is sometimes thought of as 'illusory control', which is defined as 'overestimating the influence that one's actions have over uncontrollable events' [15] and is linked to beliefs about the effectiveness of bogus treatments for diseases [16], and excessive risk taking [17]. Indeed, individuals with schizophrenia have been shown to be more susceptible to illusory control than others [18,19], and higher levels of obsessive compulsive disorder symptoms are correlated with higher levels of perceived control [20]. ...
Article
Full-text available
Video games are a source of entertainment for a wide population and have varied effects on well-being. The purpose of this article is to comprehensively examine game-play research to identify the factors that contribute to these disparate well-being outcomes and to highlight the potential positive effects. On the basis of existing literature, we argue that the effects of gaming on well-being are moderated by other variables, such as motivations for gaming and video-game characteristics. Specifically, the inclusion of social activity can benefit prosocial behaviors and affect the relationship between violent video games and aggression that some studies have demonstrated. Moreover, the research on the relationship between violent video games and aggression depends greatly on individual and sociocontextual variables outside of game play. The inclusion of physical activity in games can provide an improvement in physical health with high levels of enjoyment, potentially increasing adherence rates. Overall, following our review, we determined that the effects of gaming on well-being are moderated by and depend on the motivation for gaming, outside variables, the presence of violence, social interaction, and physical activity. Thus, we argue that there is potential for an “optimal gaming profile” that can be used in the future for both academic- and industry-related research.
... What increases the risk of believing in pseudoscience? Among many cognitive errors, incorrectly inferring causation (Matute, Yarritu, & Vadillo, 2011); wishful thinking and overestimating cognitive abilities (Pasquinelli, 2012); and confirmation bias (Lewandowsky & Oberauer, 2016; Lilienfeld, Ammirati, & Landfield, 2009) all contribute. Trusting gut feelings about complex scientific topics, like genetically modified organisms, may also lead people astray in their thinking (Blancke, Van Breusegem, De Jaeger, Braeckman, & Van Montagu, 2015). ...
Article
Full-text available
Due to the prevalence of pseudoscience, scientific illiteracy, and fake news, scientists are increasingly concerned about pseudoscientific beliefs among individuals without advanced scientific training. We recruited 85 undergraduate participants who read 10 pseudoscientific texts in each of the following conditions: APA-style references, credentialed names, absolute language, probabilistic language, and a control. We collected data on participants’ perceived scientificness, credibility, and belief for each condition to explore potential changes in belief when pseudoscientific texts were disguised as science. Our results for scientificness revealed moderate effects for added references (d = 0.64) and smaller effects for credentialed names (d = 0.29). Results for credibility paralleled those for scientificness, showing a large effect for the reference condition (d = 0.83), and a smaller, though meaningful effect for credentialed names (d = 0.42). Belief in pseudoscience did not change before or after any study condition, implying that beliefs are stable even when pseudoscience appears scientific and credible.
... For instance, in an experiment by Matute, Yarritu, & Vadillo (2011), the contingency between a potential cause (i.e., a fictitious medicine administered by a fictitious agent) and an outcome (recovery) is zero. ...
Preprint
Full-text available
The disconnect between the effectuation literature and the cognitive bias research creates artificial boundaries that inhibit the development of a more integrated understanding of decision-making in entrepreneurship. We analyze the effect of effectuation vis-à-vis the biases of overconfidence and illusion of control. We test the effect in both a field survey with entrepreneurs and an experiment. Unraveling the patterns of relationships between effectuation and biases helps ground the burgeoning effectuation theory in more established cognitive science theories and advance the scholarly understanding of entrepreneurial decision-making.
... Typically, psychological researchers refer to superstition as a phenomenon exerting an imagined influence on an action or an outcome when no real causal relationship is apparent, or indeed present (e.g. Carlson, Mowen, and Fang 2009;Matute, Yarritu, and Vadillo 2011). ...
Article
Despite humans’ capacity for rational thought, they are not immune to superstitions. Superstitions are strongly tied to cultural practices, especially in India. Although 17% of the world’s population resides in India, Indian culture is understudied, and there have not been sufficient attempts to understand Indian superstitions in a scientific manner from a psychometric standpoint. By creating a proper superstition measurement for the Indian population, we can better understand how Indians think and behave. The goal of the present research is to create a superstition measure specific to Indian culture. The results reveal 18 items reflecting Indian superstitions that can be generalised across contemporary India.
... Pseudoscientific claims have been profusely investigated from a cognitive perspective -for example, regarding their close relationship with intuitive cognitive style (Pennycook et al., 2012), causal illusions (Matute et al., 2011), and pseudo-profound bullshit receptivity (Pennycook et al., 2015). In addition, there is a growing corpus of research outcomes on their ideological and political dimensions that has flourished within the 'politically motivated reasoning paradigm' (Kahan, 2016). ...
Article
Full-text available
Recent research highlights the implications of group dynamics in the acceptance and promotion of misconceptions, particularly in relation to the identity-protective attitudes that boost polarisation over scientific information. In this study, we successfully test a mediational model between right-wing authoritarianism and pseudoscientific beliefs. First, we carry out a comprehensive literature review on the socio-political background of pseudoscientific beliefs. Second, we conduct two studies (n = 1189 and n = 1097) to confirm our working hypotheses: H1 – intercorrelation between pseudoscientific beliefs, authoritarianism and three axioms (reward for application, religiosity and fate control); H2 – authoritarianism and social axioms fully explain rightists’ proneness to pseudoscience; and H3 – the association between pseudoscience and authoritarianism is partially mediated by social axioms. Finally, we discuss our results in relation to their external validity regarding paranormal and conspiracy beliefs, as well as to their implications for group polarisation and science communication.
... The vaccination and autism example illustrates quite well how information sampling may become a crucial element for establishing and maintaining mistaken beliefs; however, the biases in sampling strategies can be extended to a wide range of health issues; people interested in assessing the causal relationship between any common behavior and an infrequent disease will find a high proportion of information where the behavior and the disease coincide if they use the effect as a cue in their internet search. Correspondingly, people using the cause to guide their internet search may neglect the base rate of the effect and end up overestimating the causal relationship when the effect is frequent [43]. For example, a recent study [44], which tracked internet-browsing behavior in a controlled setting, showed that when women were required to consult the internet for health information after the hypothetical onset of an unfamiliar breast change (e.g., nipple rash), most participants used rash-related search terms (a cue-guided sampling strategy), and the majority accessed websites containing breast cancer information, with National Health Service Paget disease of the nipple being the most visited site. ...
Article
Background The internet is a relevant source of health-related information. The huge amount of information available on the internet forces users to engage in an active process of information selection. Previous research conducted in the field of experimental psychology showed that information selection itself may promote the development of erroneous beliefs, even if the information collected does not. Objective The aim of this study was to assess the relationship between information searching strategy (ie, which cues are used to guide information retrieval) and causal inferences about health while controlling for the effect of additional information features. Methods We adapted a standard laboratory task that has previously been used in research on contingency learning to mimic an information searching situation. Participants (N=193) were asked to gather information to determine whether a fictitious drug caused an allergic reaction. They collected individual pieces of evidence in order to support or reject the causal relationship between the two events by inspecting individual cases in which the drug was or was not used or in which the allergic reaction appeared or not. Thus, one group (cause group, n=105) was allowed to sample information based on the potential cause, whereas a second group (effect group, n=88) was allowed to sample information based on the effect. Although participants could select which medical records they wanted to check—cases in which the medicine was used or not (in the cause group) or cases in which the effect appeared or not (in the effect group)—they all received similar evidence that indicated the absence of a causal link between the drug and the reaction. After observing 40 cases, they estimated the drug–allergic reaction causal relationship. Results Participants used different strategies for collecting information. 
In some cases, participants displayed a biased sampling strategy compatible with positive testing, that is, they required a high proportion of evidence in which the drug was administered (in the cause group) or in which the allergic reaction appeared (in the effect group). Biased strategies produced an overrepresentation of certain pieces of evidence to the detriment of the representation of others, which was associated with the accuracy of causal inferences. Thus, how the information was collected (sampling strategy) demonstrated a significant effect on causal inferences (F(1, 185) = 32.53, P < .001, ηp² = 0.15), suggesting that inferences of the causal relationship between events are related to how the information is gathered. Conclusions Mistaken beliefs about health may arise from accurate pieces of information partially because of the way in which information is collected. Patient or person autonomy in gathering health information through the internet, for instance, may contribute to the development of false beliefs from accurate pieces of information because search strategies can be biased.
... This is a robust effect that has been replicated, and that could lead to causal illusions (perception of causal links in situations in which there is none; see for review Matute, Blanco & Díaz-Lago, 2019). These mistaken beliefs could, in turn, entail serious consequences, as they could underlie illusions of effectiveness of pseudoscientific medicine (Matute, Yarritu & Vadillo, 2011), and contribute to maintain social prejudice (Blanco, Gómez-Fortes, & Matute, 2018;Rodríguez-Ferreiro & Barberia, 2017). ...
Article
Full-text available
Judgments of a treatment's effectiveness are usually biased by the probability with which the outcome (e.g., symptom relief) appears: even when the treatment is completely ineffective (i.e., there is a null contingency between cause and outcome), judgments tend to be higher when outcomes appear with high probability. In this research, we present ambiguous stimuli, expecting to find individual differences in the tendency to interpret them as outcomes. In Experiment 1, judgments of effectiveness of a completely ineffective treatment increased with the spontaneous tendency of participants to interpret ambiguous stimuli as outcome occurrences (i.e., healings). In Experiment 2, this interpretation bias was affected by the overall treatment-outcome contingency, suggesting that the tendency to interpret ambiguous stimuli as outcomes is learned and context-dependent. In conclusion, we show that, to understand how judgments of effectiveness are affected by outcome probability, we need to also take into account the variable tendency of people to interpret ambiguous information as outcome occurrences.
... Likewise, Allan et al. (2005) have argued that the outcome density effect (for a given ΔP value, participants are more likely to perceive a positive contingency between the cue and the outcome if the overall probability of the outcome is higher) is the result of changes in the threshold, rather than in the sensitivity to the contingencies. Perales et al. (2005) have made a similar argument concerning the cue density effect (for a given ΔP, participants are more likely to perceive a positive contingency between the cue and the outcome if the overall probability of the cue is higher; Allan & Jenkins, 1983; Matute et al., 2011; Vadillo et al., 2010; Wasserman et al., 1996; White, 2003). ...
Article
Full-text available
In a signal detection theory approach to associative learning, the perceived (i.e., subjective) contingency between a cue and an outcome is a random variable drawn from a Gaussian distribution. At the end of the sequence, participants report a positive cue-outcome contingency provided the subjective contingency is above some threshold. Some researchers have suggested that the mean of the subjective contingency distributions and the threshold are controlled by different variables. The present data provide empirical support for this claim. In three experiments, participants were exposed to rapid streams of trials at the end of which they had to indicate whether a target outcome O1 was more likely following a target cue X. Interfering treatments were incorporated in some streams to impede participants' ability to identify the objective X-O1 contingency: interference trials (X was paired with an irrelevant outcome O2), nonreinforced trials (X was presented alone), plus control trials (an irrelevant cue W was paired with O2). Overall, both interference and nonreinforced trials impaired participants' sensitivity to the contingencies as measured by signal detection theory's d', but they also enhanced detection of positive contingencies through a cue density effect, with nonreinforced trials being more susceptible to this effect than interference trials. These results are explicable if one assumes interference and nonreinforced trials impact the mean of the associative strength distribution, while the cue density influences the threshold. (PsycInfo Database Record (c) 2022 APA, all rights reserved).
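The signal detection quantities discussed above, sensitivity d' and a response criterion, can be recovered from hit and false-alarm rates; a minimal sketch using the standard SDT formulas (illustrative rates):

```python
from statistics import NormalDist

def sdt_measures(hit_rate, fa_rate):
    """Sensitivity d' = z(H) - z(F); criterion c = -(z(H) + z(F)) / 2.
    On the account above, a variable that only shifts c (e.g., cue
    density) changes how often a positive contingency is reported
    without changing sensitivity to the contingency itself."""
    z = NormalDist().inv_cdf  # inverse of the standard normal CDF
    return z(hit_rate) - z(fa_rate), -(z(hit_rate) + z(fa_rate)) / 2

d_prime, criterion = sdt_measures(0.8, 0.3)
print(round(d_prime, 2), round(criterion, 2))
```

Separating the two measures is what lets the authors argue that interference and nonreinforcement degrade d' while cue density moves only the threshold.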
... In a recent integrative theoretical framework, Rizeq et al. (2021) suggested considering conspiracy and paranormal beliefs and anti-science attitudes as three components of a higher order psychological factor termed "contaminated mindware". According to this approach, specific cognitive processing styles result in a contaminated mindware, such as a biased perception of probability and causality (e.g., perceiving meaningful patterns or causality in unrelated events), low levels of reality testing and open-minded thinking (e.g., low ability or motivation to critically test the plausibility of one's beliefs), ontological confusions (e.g., believing that lifeless natural objects are animate or that thoughts can be manifested as physical forces), and related to all these aspects, an over-reliance on intuitive-experiential over rational processing in judgments and decision making (e.g., Betsch et al., 2020; Blackmore & Moore, 1994; Blanco et al., 2015; Brugger & Graves, 1997; Čavojová et al., 2020; Denovan et al., 2018, 2020; Drinkwater et al., 2012; Foster & Kokko, 2009; Irwin, 2009; Leonard & Williams, 2019; Lindeman & Aarnio, 2007; Matute et al., 2011; Musch & Ehrenberg, 2002; Pennycook et al., 2012; Rizeq et al., 2021; Ståhl & van Prooijen, 2018; van Prooijen, Douglas, et al., 2018). ...
Article
Full-text available
The global coronavirus (COVID-19) pandemic sparked a great interest in psychological factors that determine or explain peoples' responses to the novel threatening situation and the preventive measures (e.g., wearing masks, social distancing). In this study, we focused on contaminated mindware (conspiracy and paranormal beliefs) and investigated its relationship with both acceptance of and adherence to COVID-19 preventive measures, along with other variables from the domains of emotion (trait anxiety, fear), traditional personality traits (Big 5, locus of control, optimism/pessimism) and motivation (self-control, dispositional regulatory focus). A total of 22 variables were measured in an online survey (N = 374) that took place during the second wave of COVID-19 (Nov. 2020 - March 2021) in Switzerland. Of all variables, the endorsement of specific COVID-19 conspiracy beliefs was most strongly associated with lower acceptance and adherence to the preventive measures, together with mistrust in science and a more right-wing political orientation. In contrast, fear of COVID-19 and prevention regulatory focus were positively associated with acceptance and adherence. Our results therefore highlight the importance of fighting (conspiratorial) misinformation and of increasing the perceived credibility of science in reducing the spread of the coronavirus. Moreover, when acceptance was used as predictor for adherence, agreeableness and dispositional prevention regulatory focus still explained unique variance in adherence, suggesting that such personality and motivational variables play an important role in adhering and regulating preventive behaviour independent from the attitude towards the preventive measures themselves.
... The observation of an association between the jump-to-conclusions bias and causal illusion is interesting for our study because endorsement of pseudoscientific beliefs has also been associated with a greater tendency to develop causal illusions [14]. In fact, causal illusions can be considered a laboratory model of the emergence of pseudoscientific beliefs [15]. ...
Article
Full-text available
Previous studies have proposed that low evidential criteria or proneness to jump to conclusions influences the formation of paranormal beliefs. We investigated whether the low evidential criteria hypothesis for paranormal beliefs extends to a conceptually distinct type of unwarranted beliefs: those related to pseudoscience. We presented individuals varying in their endorsement of pseudoscientific beliefs with two hypothesis testing tasks. In the beads task, the participants were asked to decide from which of two jars containing different proportions of colored beads they were collecting samples. In the mouse trap task, they were asked to guess which rule determined whether a participant-controlled mouse obtained a piece of cheese or was trapped. In both cases, the volunteers were free to decide when to stop collecting evidence before completing the tasks. Our results indicate that, compared to skeptics, individuals presenting stronger endorsement of pseudoscientific beliefs tend to require less evidence before coming to a conclusion in hypothesis testing situations.
... In terms of behavior, it has been shown that an individual's sense of loss of control leads to an increase in stress levels and an irrational search for resources to restore a perceived sense of safety. This idea is important because stress and the irrational search for control can generate causal illusions [32] and states of paranoia [5], which ultimately trigger so-called herd behaviors [3]. This was very common during the early months of the COVID-19 pandemic [33]. ...
Article
Full-text available
International travel and the infrastructures involved are key elements in controlling and predicting the number of infections by an infectious disease (specifically, COVID-19 cases). This research presents the rates of compliance with COVID-19 mitigation measures at several international airports in Europe (Madrid, Dublin, Paris-Charles de Gaulle, Zurich, Barcelona, and Bilbao). A structured survey called the COVID-19 Measures Implementation Rate at Airports (MIRA) was developed. First, the validity and reliability of the measurements obtained by MIRA were analyzed. A total of 1239 volunteers (passengers, cabin crew, and ground crew) participated in the study and answered the MIRA questionnaire. Second, once the validity and reliability of the measurements were assured, the rates of observed compliance with the mitigation measures were calculated. The results indicated that participants perceived a low degree of compliance with sanitary measures during their international travel (proportions ranged from 52.6% to 59%). The airports with the highest compliance with mitigation measures were Dublin (with a rate of 70%) and Zurich (with a rate of 69.1%). In conclusion, the percentages could be low due to the ineffective implementation of some of the mitigation measures, rather than to the health measures themselves. The implications of mitigation measures for containing the transmission of infectious diseases such as COVID-19 are discussed.
... Researchers have argued that these biases in contingency learning may underpin pseudoscientific beliefs in the real world, including pseudoscientific health beliefs [6,7]. Pseudoscientific beliefs are false causal beliefs that appear to be based on facts and evidence but that are not grounded in the scientific method. ...
Article
Full-text available
Beliefs about cause and effect, including health beliefs, are thought to be related to the frequency of the target outcome (e.g., health recovery) occurring when the putative cause is present and when it is absent (treatment administered vs. no treatment); this is known as contingency learning. However, it is unclear whether unvalidated health beliefs, where there is no evidence of cause–effect contingency, are also influenced by the subjective perception of a meaningful contingency between events. In a survey, respondents were asked to judge a range of health beliefs and estimate the probability of the target outcome occurring with and without the putative cause present. Overall, we found evidence that causal beliefs are related to perceived cause–effect contingency. Interestingly, beliefs that were not predicted by perceived contingency were meaningfully related to scores on the paranormal belief scale. These findings suggest heterogeneity in pseudoscientific health beliefs and the need to tailor intervention strategies according to underlying causes.
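The contingency-learning idea in the abstract above is often formalized with the ΔP rule. A minimal sketch in Python, assuming the standard 2×2 labelling of trial types (a-d) used in this literature; the numbers are illustrative, not the study's data:

```python
# Delta-P rule for cause-effect contingency from a 2x2 table of trial counts:
#   a = cause present, outcome present    b = cause present, outcome absent
#   c = cause absent,  outcome present    d = cause absent,  outcome absent

def delta_p(a, b, c, d):
    """Contingency = P(outcome | cause) - P(outcome | no cause)."""
    return a / (a + b) - c / (c + d)

# A zero-contingency situation with a high outcome density: the outcome
# occurs 80% of the time whether or not the putative cause is present,
# yet the many cause-outcome coincidences (a = 64) can drive an illusion.
print(delta_p(64, 16, 16, 4))  # -> 0.0
```

Under this formalization, a high outcome density inflates the a-cell count without changing ΔP, which is the pattern the literature above links to causal illusions.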
... In a recent integrative theoretical framework, Rizeq et al. (2021) suggested considering conspiracy beliefs, paranormal beliefs, and anti-science attitudes as three components of a higher-order psychological factor termed "contaminated mindware". According to this approach, contaminated mindware results from specific cognitive processing styles, such as a biased perception of probability and causality (e.g., perceiving meaningful patterns or causality in unrelated events), low levels of reality testing and open-minded thinking (e.g., low ability or motivation to critically test the plausibility of one's beliefs), ontological confusions (e.g., believing that lifeless natural objects are animate or that thoughts can be manifested as physical forces), and, related to all these aspects, an over-reliance on intuitive-experiential over rational processing in judgments and decision making (e.g., Betsch et al., 2020; Blackmore & Moore, 1994; Blanco et al., 2015; Brugger & Graves, 1997; Čavojová et al., 2020; Denovan et al., 2018, 2020; Drinkwater et al., 2012; Foster & Kokko, 2009; Irwin, 2009; Leonard & Williams, 2019; Lindeman & Aarnio, 2007; Matute et al., 2011; Musch & Ehrenberg, 2002; Pennycook et al., 2012; Rizeq et al., 2021; Ståhl & van Prooijen, 2018; van Prooijen, Douglas, et al., 2018). ...
Preprint
Full-text available
Since the outbreak of the coronavirus disease (COVID-19), there is an exploding interest in psychological factors that determine how people respond to the novel threatening situation and the preventive measures. In the present research, we assessed the role of a comprehensive list of 22 psychological variables from the domain of emotion (trait anxiety, fear of COVID-19, fear of death), cognition (COVID-19 specific and general conspiracy beliefs, paranormal beliefs, mistrust in science, faith in intuition), motivation (self-control, regulatory focus) and more traditional personality traits (Big 5, locus of control, optimism/pessimism) on the acceptance and adherence to the preventive measures. The survey took place during the second wave in Switzerland (Nov. 2020-March 2021; N = 374). Fear of COVID-19, prevention regulatory focus and social norm compliance were positively associated with both acceptance and adherence to the preventive measures, while the opposite was true for COVID-19 specific conspiracy beliefs, mistrust in science, conspiracy mentality, and paranormal beliefs. From these latter variables, mistrust in science was still a significant predictor when COVID-19 specific conspiracy beliefs was considered as mediator. Interestingly, none of the Big 5 variables was associated with acceptance. However, when controlling for acceptance, agreeableness and openness (together with self-control and prevention regulatory focus) were positively associated with adherence. Finally, more right-wing political orientation was associated with lower acceptance and adherence to the preventive measures. Our results highlight the importance of fighting (conspiratorial) misinformation and increasing the perceived credibility of science in reducing the spread of the coronavirus. Furthermore, self-control and prevention regulatory focus seem important motivational aspects for the actual preventive behaviour.
... The illusion of control can be considered one type of causality illusion that lies at the heart of cultural practices usually referred to as pseudoscience or superstition (Matute, Yarritu, & Vadillo, 2011) and is related to social processes of perception and causal attribution (Dela Coleta & Dela Coleta, 2011). The illusion of control generally emerges when people have difficulty perceiving a frequent aspect of social life, the "overlap between skill and luck" (Langer, 1975, p. 311). ...
Article
Full-text available
The notion of superstitious behavior can provide a basic background for understanding such notions as illusions and beliefs. The present study investigated the social mechanism of the transmission of superstitious behavior in an experiment that utilized participant replacement. The sample comprised 38 participants. Participants performed a task on a computer: they could click a colored rectangle using the mouse. When the rectangle was a particular color, the participants received points independently of their behavior (variable-time schedule). When the color of the rectangle changed, no points were presented (extinction). Under an Individual Exposure condition, ten participants worked alone on the task. Other participants were exposed to the same experimental task under a Social Exposure condition, in which each participant first learned by observation and then worked on the task in a participant-replacement (chain) procedure. The first participant in each chain in the Social Exposure condition was a confederate who worked on the task "superstitiously," clicking the rectangle when points were presented. Superstitious responding was transmitted through the confederate's behavior, which also influenced estimates of personal control. These findings suggest that social learning can facilitate the acquisition and maintenance of superstitious behavior and the illusion of control. Our data also suggest that superstitious behavior and the illusion of control may involve similar learning principles.
... In practice, people struggle to distinguish between low- and high-quality scientific evidence, particularly in popular press contexts. Pseudoscientific claims may be especially compelling because of 'illusions of causality,' in which people tend to infer causal relationships where none exist, reflecting a general causality bias (Matute et al., 2011). In a similar vein, people often accept correlational data as evidence of causality in science media reports (e.g., Burrage, 2008;Robinson & Levin, 2019;Rodriguez, Ng, et al., 2016). ...
Article
Full-text available
Today’s citizens are expected to use evidence, frequently presented in the media, to inform decisions about health, behavior, and public policy. However, science misinformation is ubiquitous in the media, making it difficult to apply research appropriately. Across two experiments, we addressed how anecdotes and prior beliefs impact readers’ ability to both identify flawed science and make appropriate decisions based on flawed science in media articles. Each article described the results of flawed research on one of four educational interventions to improve learning (Experiment 1 included articles about having a tidy classroom and exercising while learning; Experiment 2 included articles about using virtual/augmented reality and napping at school). Experiment 1 tested the impact of a single anecdote and found no significant effect on either participants’ evidence evaluations or decisions to implement the learning interventions. However, participants were more likely to adopt the more plausible intervention (tidy classroom) despite identifying that it was unsupported by the evidence, suggesting effects of prior beliefs. In Experiment 2, we tested whether this intervention effect was driven by differences in beliefs about intervention plausibility and included two additional interventions (virtual reality = high plausibility, napping = low plausibility). We again found that participants were more likely to implement high-plausibility than low-plausibility interventions, and that evidence quality was underweighted as a factor in these decisions. Together, these studies suggest that evidence-based decisions are more strongly determined by prior beliefs than beliefs about the quality of evidence itself.
... Given our premise that the supernatural does not exist, it seems reasonable to assume that such believers have misinterpreted certain stimuli and that their interpretation of the event is crucial in shaping their belief. Indeed, a tendency to see causal relationships where there are none is common (Blanco et al., 2015;Griffiths et al., 2019;Matute et al., 2011). However, believers appear to be more biased to interpret random patterns as signals of supernatural causes, presenting more misidentifications or false alarms (events mislabeled as supernatural) than non-believers, and being more confident in their interpretations, responding faster in the face of unclear information, whereas disbelievers are more careful, less confident and slower, making fewer errors (Simmonds-Moore, 2014; Van Elk, 2015). ...
Article
Supernatural fears, although common, are not as well-understood as natural fears and phobias (e.g., social, blood, and animal phobias), which are prepared by evolution, such that they are easily acquired through direct experience and relatively immune to cognitive mediation. In contrast, supernatural fears do not involve direct experience but seem to be related to sensory or cognitive biases in the interpretation of stimuli as well as culturally driven cognitions and beliefs. In this multidisciplinary synthesis and collaborative review, we claim that supernatural beliefs are “super natural.” That is, they occur spontaneously and are easy to acquire, possibly because such beliefs rest on intuitive concepts such as mind-body dualism and animism, and may inspire fear in believers as well as non-believers. As suggested by psychological and neuroscientific evidence, they tap into an evolutionarily prepared fear of potential impending dangers or unknown objects and have their roots in “prepared fears” as well as “cognitively prepared beliefs,” making fear of supernatural agents a fruitful research avenue for social, anthropological, and psychological inquiries.
... Such illusions of causality are well documented. For example, people often overestimate their ability to control random events (Langer, 1975;Presson & Benassi, 1996), or report illusory causality between unrelated events (Blanco, Matute, & Vadillo, 2011;Matute et al., 2015;Matute, Yarritu, & Vadillo, 2011). Wegner's apparent mental causation model suggests that our experience of willing an action simply arises from interpreting our thoughts as the cause of our actions (Wegner, 2002;Wegner & Wheatley, 1999). ...
Article
Full-text available
Forcing techniques allow magicians to subtly influence spectators’ choices and the outcome of their actions, and they provide powerful tools to study decision-making and the illusory sense of agency and freedom over choices we make. We investigated the equivoque force, a technique that exploits semantic ambiguities and people’s failure to notice inconsistencies, to ensure that a spectator ends up with a predetermined outcome. Similar to choice blindness paradigms, the equivoque forces participants to end up with an item they did not choose in the first place. However, here, the subterfuge is accomplished in full view. In 3 experiments, we showed that the equivoque is highly effective in providing participants an illusory sense of agency over the outcome of their actions, even after 2 repetitions of the trick (Experiment 2) and using items for which preexisting preferences can be present (Experiment 3). Across all experiments, participants were oblivious to inconsistencies in the procedure used to guide their decisions, and they were genuinely surprised by the experimenter’s matching prediction. Contrary to our prediction, the equivoque force did not significantly change participants’ preference for the chosen item. We discuss the results with regard to other illusions of agency (e.g., forcing, choice blindness), failures in noticing semantic inconsistencies (e.g., Moses illusion), and issues surrounding choice-induced-preference literature.
Article
The ability to learn cause-effect relations from experience is critical for humans to behave adaptively - to choose causes that bring about desired effects. However, traditional experiments on experience-based learning involve events that are artificially compressed in time so that all learning occurs over the course of minutes. These paradigms therefore exclusively rely upon working memory. In contrast, in real-world situations we need to be able to learn cause-effect relations over days and weeks, which necessitates long-term memory. 413 participants completed a smartphone study, which compared learning a cause-effect relation one trial per day for 24 days versus the traditional paradigm of 24 trials back-to-back. Surprisingly, we found few differences between the short versus long timeframes. Subjects were able to accurately detect generative and preventive causal relations, and they exhibited illusory correlations in both the short and long timeframe tasks. These results provide initial evidence that experience-based learning over long timeframes exhibits similar strengths and weaknesses as in short timeframes. However, learning over long timeframes may become more impaired with more complex tasks.
Article
Persistence of superstitions in the modern era could be justified by considering them as a by-product of the brain's capacity to detect associations and make assumptions about cause-effect relationships. This ability, which supports predictive behaviour, directly relates to associative learning. We tested whether variability in superstitious behaviour reflects individual variability in the efficiency of mechanisms akin to habit learning. Forty-eight individuals performed a Serial Reaction Time Task (SRTT) or an Implicit Cuing Task (ICT). In the SRTT, participants were exposed to a hidden sequence and progressively learnt to optimize responses, a process akin to skill learning. In the ICT participants met with a hidden association, which (if detected) provided a benefit (cf. habit learning). An index of superstitious beliefs was also collected. A correlation emerged between susceptibility to personal superstitions and performance at the ICT only. This novel finding is discussed in view of current ideas on how superstitions are instated.
Article
Full-text available
Studies of people's beliefs about how much they control events have shown that people often overestimate the extent to which the result depends on their own behavior. The purpose of this study is to test the assumption that the illusion of control is reduced by using a causal question, with desirable and undesirable results. No influence of the causal question on the size of the illusion of control, measured by the subjects' self-assessments, was found. Keywords: cognitive distortions, illusion of control.
Thesis
Full-text available
Studies of people's beliefs about how much they control events have shown that people often overestimate the extent to which the result depends on their own behavior. The purpose of this study is to assess the relationship of emotional characteristics and the formulation of the question to the illusion of control, depending on whether the result is desirable or undesirable. The study assumed that the illusion of control depends on the amount of effort applied to achieve the result. It was also suggested that the illusion of control would be reduced by asking a causal question, both in the case where the result is desirable and the participant acts to make that result appear, and in the case where the result is undesirable and the subject acts to prevent it from occurring. No influence of the cause-effect question or of emotional characteristics on the size of the illusion of control, measured by the subjects' self-assessments, was found. There was also no correlation between the amount of effort and the illusion of control.
Article
Background We have previously presented two educational interventions aimed at diminishing causal illusions and promoting critical thinking. In both cases, these interventions reduced causal illusions developed in response to active contingency learning tasks, in which participants were able to decide whether to introduce the potential cause in each of the learning trials. The reduction of causal judgments appeared to be influenced by differences in the frequency with which the participants decided to apply the potential cause, indicating that the intervention affected their information sampling strategies. Objective In the present study, we investigated whether one of these interventions also reduces causal illusions when covariation information is acquired passively. Method Forty-one psychology undergraduates received our debiasing intervention, while 31 students were assigned to a control condition. All participants completed a passive contingency learning task. Results We found weaker causal illusions in students who participated in the debiasing intervention, compared to the control group. Conclusion The intervention affects not only the way participants look for new evidence, but also the way they interpret given information. Teaching implications Our data extend previous results on evidence-based educational interventions aimed at promoting critical thinking to situations in which we act as mere observers.
Article
Causal illusions have been postulated as cognitive mediators of pseudoscientific beliefs, which, in turn, might lead to the use of pseudomedicines. However, while the laboratory tasks aimed to explore causal illusions typically present participants with information regarding the consequences of administering a fictitious treatment versus not administering any treatment, real-life decisions frequently involve choosing between several alternative treatments. In order to mimic these realistic conditions, participants in two experiments received information regarding the rate of recovery when each of two different fictitious remedies were administered. The fictitious remedy that was more frequently administered was given higher effectiveness ratings than the low-frequency one, independent of the absence or presence of information about the spontaneous recovery rate. Crucially, we also introduced a novel dependent variable that involved imagining new occasions in which the ailment was present and asking participants to decide which treatment they would opt for. The inclusion of information about the base rate of recovery significantly influenced participants’ choices. These results imply that the mere prevalence of popular treatments might make them seem particularly effective. It also suggests that effectiveness ratings should be interpreted with caution as they might not accurately reflect real treatment choices. Materials and datasets are available at the Open Science Framework [https://osf.io/fctjs/].
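The abstract above reports that the more frequently administered remedy was judged more effective even when both remedies were equally (non-)effective. A hypothetical illustration in Python of why raw counts can mislead here (the numbers are made up, not the study's stimuli):

```python
# Two fictitious remedies with identical, zero contingency with recovery:
# both leave the recovery rate at its 70% baseline, but remedy A is
# administered on 80 trials and remedy B on only 20.
trials_a, recovered_a = 80, 56   # 56/80 = 0.70
trials_b, recovered_b = 20, 14   # 14/20 = 0.70

p_recovery_a = recovered_a / trials_a
p_recovery_b = recovered_b / trials_b

# The conditional recovery probability is identical for both remedies...
print(p_recovery_a == p_recovery_b)  # -> True
# ...yet remedy A accumulates four times as many "remedy -> recovery"
# coincidences (56 vs. 14), the kind of raw evidence an intuitive
# learner may over-weight when rating effectiveness.
print(recovered_a, recovered_b)  # -> 56 14
```

The normatively relevant quantity (the conditional probability) is the same for both remedies; only the exposure frequency, and hence the coincidence count, differs.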
Thesis
Full-text available
Most research on health interventions aims to find evidence to support better causal inferences about those interventions. However, for decades, a majority of this research has been criticised for inadequate control of bias and overconfident conclusions that do not reflect the uncertainty. Yet, despite the need for improvement, clear signs of progress have not appeared, suggesting the need for new ideas on ways to reduce bias and improve the quality of research. With the aim of understanding why bias has been difficult to reduce, we first explore the concepts of causal inference, bias and uncertainty as they relate to health intervention research. We propose a useful definition of ‘a causal inference’ as: ‘a conclusion that the evidence available supports either the existence, or the non-existence, of a causal effect’. We used this definition in a methodological review that compared the statistical methods used in health intervention cohort studies with the strength of causal language expressed in each study’s conclusions. Studies that used simple instead of multivariable methods, or did not conduct a sensitivity analysis, were more likely to contain overconfident conclusions and potentially mislead readers. The review also examined how the strength of causal language can be judged, including an attempt to create an automatic rating algorithm that we ultimately deemed cannot succeed. This review also found that a third of the articles (94/288) used a propensity score method, highlighting the popularity of a method developed specifically for causal inference. On the other hand, 11% of the articles did not adjust for any confounders, relying on methods such as t-tests and chi-squared tests. This suggests that many researchers still lack an understanding of how likely it is that confounding affects their results. 
Drawing on knowledge from statistics, philosophy, linguistics, cognitive psychology, and all areas of health research, the central importance of how people think and make decisions is examined in relation to bias in research. This reveals the many hard-wired cognitive biases that, aside from confirmation bias, are mostly unknown to statisticians and researchers in health. This is partly because they mostly occur without conscious awareness, yet everyone is susceptible. But while the existence of biases such as overconfidence bias, anchoring, and failure to account for the base rate have been raised in the health research literature, we examine biases that have not been raised in health, or we discuss them from a different perspective. This includes a tendency of people to accept the first explanation that comes to mind (called take-the-first heuristic); how we tend to believe that other people are more susceptible to cognitive biases than we are (bias blind spot); a tendency to seek arguments that defend our beliefs, rather than seeking the objective truth (myside bias); a bias for causal explanations (various names including the causality heuristic); and our desire to avoid cognitive effort (many names including the ‘law of least mental effort’). This knowledge and understanding also suggest methods that might counter these biases and improve the quality of research. This includes any technique that encourages the consideration of alternative explanations of the results. We provide novel arguments for a number of methods that might help, such as the deliberate listing of alternative explanations, but also some novel ideas including a form of adversarial collaboration. Another method that encourages the researcher to consider alternative explanations is causal diagrams. 
However, we introduce them in a way that differs from the more formal presentation that is currently the norm, avoiding most of the terminology to focus instead on their use as an intuitive framework, helping the researcher to understand the biases that may lead to different conclusions. We also present a case study where we analysed the data for a pragmatic randomised controlled trial of a telemonitoring service. Considerable missing data hampered the forming of conclusions; however, this enabled an exploration of methods to better understand, reduce and communicate the uncertainty that remained after the analysis. Methods used included multiple imputation, causal diagrams, a listing of alternative explanations, and the parametric g-formula to handle bias from time-dependent confounding. Finally, we suggest strategies, resources and tools that may overcome some of the barriers to better control of bias and improvements in causal inference, based on the knowledge and ideas presented in this thesis. This includes a proposed online searchable causal diagram database, to make causal diagrams themselves easier to learn and use.
Article
Causal illusion has been proposed as a cognitive mediator of pseudoscientific beliefs. However, previous studies have only tested the association between this cognitive bias and a closely related but different type of unwarranted beliefs, those related to superstition and paranormal phenomena. Participants (n = 225) responded to a novel questionnaire of pseudoscientific beliefs designed for this study. They also completed a contingency learning task in which a possible cause, infusion intake, and a desired effect, headache remission, were actually non-contingent. Volunteers with higher scores on the questionnaire also presented stronger causal illusion effects. These results support the hypothesis that causal illusions might play a fundamental role in the endorsement of pseudoscientific beliefs.
Article
Full-text available
We examined whether individual differences in susceptibility to the illusion of control predicted differential vulnerability to depressive responses after a laboratory failure and naturally occurring life stressors. The illusion of control decreased the likelihood that subjects (N = 145) would (a) show immediate negative mood reactions to the laboratory failure, (b) become discouraged after naturally occurring negative life events, and (c) experience increases in depressive symptoms a month later given the occurrence of a high number of negative life events. In addition, the stress-moderating effect of the illusion of control on later depressive symptoms appeared to be mediated in part by its effect on reducing the discouragement subjects experienced from the occurrence of negative life events. These findings provide support for the hopelessness theory of depression and for the optimistic illusion-mental health link.
Article
Full-text available
Recent research has shown superstitious behaviour and illusion of control in human subjects exposed to the negative reinforcement conditions that are traditionally assumed to lead to the opposite outcome (i.e. learned helplessness). The experiments reported in this paper test the generality of these effects in two different tasks and under different conditions of percentage (75% vs. 25%) and distribution (random vs. last-trials) of negative reinforcement (escape from uncontrollable noise). All three experiments obtained superstitious behaviour and illusion of control and question the generality of learned helplessness as a consequence of exposing humans to uncontrollable outcomes.
Article
Full-text available
Learned helplessness and superstition accounts of uncontrollability predict opposite results for subjects exposed to noncontingent reinforcement. Experiment 1 used the instrumental-cognitive triadic design proposed by Hiroto and Seligman (1975) for the testing of learned helplessness in humans, but eliminated the "failure light" that they introduced in their procedure. Results showed that Yoked subjects tend to superstitious behavior and illusion of control during exposure to uncontrollable noise. This, in turn, prevents the development of learned helplessness because uncontrollability is not perceived. In Experiment 2, the failure feedback manipulation was added to the Yoked condition. Results of this experiment replicate previous findings of a proactive interference effect in humans—often characterized as learned helplessness. This effect, however, does not support learned helplessness theory because failure feedback is needed for its development. It is argued that conditions of response-independent reinforcement commonly used in human research do not lead to learned helplessness, but to superstitious behavior and illusion of control. Different conditions could lead to learned helplessness, but the limits between superstition and helplessness have not yet been investigated.
Article
Full-text available
In fall 1995, the worldwide-accessible Web Experimental Psychology Lab (http://wexlab.eu) opened its doors to Web surfers and Web experimenters. It offers a frequently visited place at which to conduct true experiments over the Internet. Data from 5 years of laboratory running time are presented, along with recommendations for setting up and maintaining a virtual laboratory, including sections on the history of the Web laboratory and of Web experimenting, the laboratory's structure and design, visitor demographics, the Kids' Experimental Psychology Lab, access statistics, administration, software and hardware, marketing, other Web laboratories, data security, and data quality. It is concluded that experimental data collection via the Internet has proven to be an enrichment to science. Consequently, the Web Experimental Psychology Lab will continue and extend its services to the scientific community.
Article
Full-text available
The present research had several objectives: (1) to adapt Tobacyk's (1988) Revised Paranormal Beliefs Scale (RPBS) into Spanish in order to make cross-cultural comparisons possible, (2) to test the reliability and dimensionality of the instrument and check if the previously found dimensions are replicated with Spanish-speaking participants, and (3) to test the hypothesis of the nonequivalence in paranormal beliefs across fields of study groups. The study included 355 students from six university departments, both scientific and nonscientific. The results showed the questionnaire is highly reliable although not additive within our sample. We found a set of conceptually valid first-order empiric dimensions that replicated the findings of two earlier studies, and two second-order factors in line with results of a third study. In addition, differences among students from different fields of study were found, suggesting that training in scientific method produces differences in paranormal beliefs. Cross-cultural research based on this questionnaire is possible.
Article
Full-text available
Depressive realism consists of the lower personal control over uncontrollable events perceived by depressed as compared to nondepressed individuals. In this article, we propose that the realism of depressed individuals is caused not by an increased accuracy in perception, but by their more comprehensive exposure to the actual environmental contingencies, which in turn is due to their more passive pattern of responding. To test this hypothesis, dysphoric and nondysphoric participants were exposed to an uncontrollable task and both their probability of responding and their judgment of control were assessed. As was expected, higher levels of depression correlated negatively with probability of responding and with the illusion of control. Implications for a therapy of depression are discussed.
Article
Full-text available
A nonequivalent control group design was employed to test the effectiveness of an interdisciplinary course on the scientific method in increasing students' skepticism toward the paranormal. The course explored legitimate methods of scientific inquiry and compared them to faulty, and often fraudulent, methods of pseudosciences. Topics included elementary logic, logical fallacies, statistics, probability, the scientific method, characteristics of pseudosciences, and the prevalence and persistence of pseudoscientific theories and beliefs. Students enrolled in a psychology and law class served as a control group for the “Science and Pseudoscience” class (the treatment group). At the start of the term, students in both groups completed the Belief in the Paranormal Scale (Jones, Russell, and Nickel, 1977) and a measure of beliefs in their own psychic powers. At the end of the semester, students completed these same measures. Results demonstrated that while there were no initial differences between the control and treatment groups in their belief in the paranormal, students in the “Science and Pseudoscience” class demonstrated substantially reduced belief in the paranormal relative to the control class. There were no changes in students' beliefs in their own paranormal powers. Implications for science education and research on teaching thinking are discussed.
Article
Full-text available
Tested the hypothesis that the phrasing of a question about the relationship between 2 events can influence what information Ss feel they need to answer the question. 60 college students were presented with 1 of 2 covariation problems (concerning tennis or rainfall) and were asked a question that explicitly mentioned 1 type of instance or a 2nd type of instance (e.g., the effects of practice on winning or on losing a tennis match) or an unbiased question that mentioned all 4 relevant types of instances (e.g., the effects of practice/no practice on winning/losing). As predicted, Ss who were asked a biased question most often requested the frequency of instances mentioned in the question. Ss who were asked an unbiased question most frequently requested positive confirming instances and requested significantly more information to answer the question. The relationship of this study to other studies demonstrating confirmatory hypothesis-testing strategies and implications for conducting research on intuitive judgments about relationships between events are discussed. (13 ref) (PsycINFO Database Record (c) 2010 APA, all rights reserved)
Conference Paper
Full-text available
Abstract: People often believe that they exert control over uncontrollable outcomes, a phenomenon that has been called the illusion of control. Psychologists tend to attribute this illusion to personality variables. However, we present simulations showing that the illusion of control can be explained at a simpler level of analysis. In brief, if a person desires an outcome and tends to act as often as possible in order to get it, this person will never be able to know that the outcome could have occurred with the same probability if he/she had done nothing. Our simulations show that a very high probability of action is usually the best possible strategy if one wants to maximize the likelihood of occurrence of a desired event, but the choice of this strategy gives rise to the illusion of control.
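The argument in the abstract above can be sketched as a small simulation. This is our own illustrative code, not the authors' simulations: the outcome is generated independently of the response, yet a high probability of acting leaves the agent with almost no trials from which to learn what would have happened had it done nothing.

```python
import random

def simulate(p_respond, p_outcome, trials=10000, seed=0):
    """Simulate a noncontingent task: the outcome occurs with a fixed
    probability regardless of whether the agent responds."""
    rng = random.Random(seed)
    # 2x2 cell counts: a = response & outcome, b = response & no outcome,
    # c = no response & outcome, d = no response & no outcome
    a = b = c = d = 0
    for _ in range(trials):
        responded = rng.random() < p_respond
        outcome = rng.random() < p_outcome  # independent of the response
        if responded and outcome:
            a += 1
        elif responded:
            b += 1
        elif outcome:
            c += 1
        else:
            d += 1
    return a, b, c, d

# With a very high probability of responding, cells c and d are rare,
# so the agent gathers almost no evidence about outcomes without action.
a, b, c, d = simulate(p_respond=0.95, p_outcome=0.75)
print(a, b, c, d)
```

Because the no-response cells (c and d) are nearly empty, the experienced stream of trials is dominated by response-outcome pairings even though the true contingency is zero.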
Article
Full-text available
Active contingency tasks, such as those used to explore judgments of control, suffer from variability in the actual values of critical variables. The authors debut a new, easily implemented procedure that restores control over these variables to the experimenter simply by telling participants when to respond, and when to withhold responding. This command-performance procedure not only restores control over critical variables such as actual contingency, it also allows response frequency to be manipulated independently of contingency or outcome frequency. This yields the first demonstration, to our knowledge, of the equivalent of a cue density effect in an active contingency task. Judgments of control are biased by response frequency, just as they are biased by outcome frequency.
Article
Full-text available
In 4 experiments, 144 depressed and 144 nondepressed undergraduates (Beck Depression Inventory) were presented with one of a series of problems varying in the degree of contingency. In each problem, Ss estimated the degree of contingency between their responses (pressing or not pressing a button) and an environmental outcome (onset of a green light). Depressed Ss' judgments of contingency were surprisingly accurate in all 4 experiments. Nondepressed Ss overestimated the degree of contingency between their responses and outcomes when noncontingent outcomes were frequent and/or desired and underestimated the degree of contingency when contingent outcomes were undesired. Thus, predictions derived from social psychology concerning the linkage between subjective and objective contingencies were confirmed for nondepressed but not for depressed Ss. The learned helplessness and self-serving motivational bias hypotheses are evaluated as explanations of the results. (4½ p ref) (PsycINFO Database Record (c) 2006 APA, all rights reserved).
Article
Full-text available
The research reported in this article replicated the well-established phenomenon of competition between causes (C) as well as the more controversial presence and absence of competition between effects (E). The test question was identified as a crucial factor leading to each outcome. Competition between causes was obtained when the test question asked about the probability of E given C, p(E/C), implicitly compared with the probability of E given some alternative cause, p(E/C'). Competition between effects was obtained when the test question asked about p(C/E) implicitly compared with p(C/E'). Under these conditions, effects competed for diagnostic value just as causes competed for predictive value. Additionally, some conditions in which neither causes nor effects competed were identified. These results suggest a bidirectional and noncompetitive learning process, the contents of which can be used in different ways (competitively or noncompetitively and forward or backward) as a function of test demands.
Article
Full-text available
We report three experiments in which we tested asymptotic and dynamic predictions of the Rescorla-Wagner (R-W) model and the asymptotic predictions of Cheng's probabilistic contrast model (PCM) concerning judgments of causality when there are two possible causal candidates. We used a paradigm in which the presence of a causal candidate that is highly correlated with an effect influences judgments of a second, moderately correlated or uncorrelated cause. In Experiment 1, which involved a moderate outcome density, judgments of a moderately positive cause were attenuated when it was paired with either a perfect positive or perfect negative cause. This attenuation was robust over a large set of trials but was greater when the strong predictor was positive. In Experiment 2, in which there was a low overall density of outcomes, judgments of a moderately correlated positive cause were elevated when this cause was paired with a perfect negative causal candidate. This elevation was also quite robust over a large set of trials. In Experiment 3, estimates of the strength of a causal candidate that was uncorrelated with the outcome were reduced when it was paired with a perfect cause. The predictions of three theoretical models of causal judgments are considered. Both the R-W model and Cheng's PCM accounted for some but not all aspects of the data. Pearce's model of stimulus generalization accounts for a greater proportion of the data.
Article
Full-text available
Associative and statistical theories of causal and predictive learning make opposite predictions for situations in which the most recent information contradicts the information provided by older trials (e.g., acquisition followed by extinction). Associative theories predict that people will rely on the most recent information to best adapt their behavior to the changing environment. Statistical theories predict that people will integrate what they have learned in the two phases. The results of this study showed one or the other effect as a function of response mode (trial by trial vs. global), type of question (contiguity, causality, or predictiveness), and postacquisition instructions. That is, participants are able to give either an integrative judgment, or a judgment that relies on recent information as a function of test demands. The authors concluded that any model must allow for flexible use of information once it has been acquired.
Article
Full-text available
How humans infer causation from covariation has been the subject of a vigorous debate, most recently between the computational causal power account (P. W. Cheng, 1997) and associative learning theorists (e.g., K. Lober & D. R. Shanks, 2000). Whereas most researchers in the subject area agree that causal power as computed by the power PC theory offers a normative account of the inductive process, Lober and Shanks, among others, have questioned the empirical validity of the theory. This article offers a full report and additional analyses of the original study featured in Lober and Shanks's critique (M. J. Buehner & P. W. Cheng, 1997) and reports tests of Lober and Shanks's and other explanations of the pattern of causal judgments. Deviations from normativity, including the outcome-density bias, were found to be misperceptions of the input or other artifacts of the experimental procedures rather than inherent to the process of causal induction.
Article
Full-text available
The rapid growth of the Internet provides a wealth of new research opportunities for psychologists. Internet data collection methods, with a focus on self-report questionnaires from self-selected samples, are evaluated and compared with traditional paper-and-pencil methods. Six preconceptions about Internet samples and data quality are evaluated by comparing a new large Internet sample (N = 361,703) with a set of 510 published traditional samples. Internet samples are shown to be relatively diverse with respect to gender, socioeconomic status, geographic region, and age. Moreover, Internet findings generalize across presentation formats, are not adversely affected by nonserious or repeat responders, and are consistent with findings from traditional methods. It is concluded that Internet methods can contribute to many areas of psychology.
Article
Full-text available
The perception of the effectiveness of instrumental actions is influenced by depressed mood. Depressive realism (DR) is the claim that depressed people are particularly accurate in evaluating instrumentality. In two experiments, the authors tested the DR hypothesis using an action-outcome contingency judgment task. DR effects were a function of intertrial interval length and outcome density, suggesting that depressed mood is accompanied by reduced contextual processing rather than increased judgment accuracy. The DR effect was observed only when participants were exposed to extended periods in which no actions or outcomes occurred. This implies that DR may result from an impairment in contextual processing rather than accurate but negative expectations. Therefore, DR is consistent with a cognitive distortion view of depression.
Article
Full-text available
There are many psychological tasks that involve the pairing of binary variables. The various tasks used often address different questions and are motivated by different theoretical issues and traditions. Upon closer examination, however, the tasks are remarkably similar in structure. In the present paper, we examine two such tasks, the contingency judgment task and the signal detection task, and we apply a signal detection analysis to contingency judgment data. We suggest that the signal detection analysis provides a novel interpretation of a well-established but poorly understood phenomenon of contingency judgments--the outcome-density effect.
Article
Full-text available
In three experiments, we show that people respond differently when they make predictions as opposed to when they are asked to estimate the causal or the predictive value of cues: Their response to each of those three questions is based on different sets of information. More specifically, we show that prediction judgments depend on the probability of the outcome given the cue, whereas causal and predictive-value judgments depend on the cue-outcome contingency. Although these results might seem problematic for most associative models in their present form, they can be explained by explicitly assuming the existence of postacquisition processes that modulate participants' responses in a flexible way.
Article
One hundred and fifty participants played a computer task in which points were either gained (reinforcement) or lost (punishment) randomly on 75%, 50%, or 25% of trials. Despite the noncontingent nature of the task, participants frequently suggested superstitious rules by which points were either gained or lost. Rules were more likely to be suggested, and received higher confidence ratings, under conditions of maximal reinforcement or minimal punishment, and participants gaining points tended to express more rules than did those losing points. Superstitious rule generation was in no way related to a person's locus of control, as measured by Rotter's Internal-External Scale. Participants losing points were more accurate in keeping track of their total number of points than were participants gaining points. Results are discussed in terms of reinforcement and punishment's effects on the stimulus control of rule-governed behavior, and comparisons are drawn with the illusion of control and learned helplessness literature.
Article
This chapter discusses that experimental psychology is no longer a unified field of scholarship. The most obvious sign of disintegration is the division of the Journal of Experimental Psychology into specialized periodicals. Many forces propel this fractionation. First, the explosion of interest in many small spheres of inquiry has made it extremely difficult for an individual to master more than one. Second, the recent popularity of interdisciplinary research has lured many workers away from the central issues of experimental psychology. Third, there is a growing division between researchers of human and animal behavior; this division has been primarily driven by contemporary cognitive psychologists, who see little reason to refer to the behavior of animals or to inquire into the generality of behavioral principles. The chapter considers the study of causal perception. This area is certainly at the core of experimental psychology. Although recent research in animal cognition has taken the tack of bringing human paradigms into the animal laboratory, the experimental research described here has adopted the reverse strategy of bringing animal paradigms into the human laboratory. A further unfortunate fact is that today's experimental psychologists are receiving little or no training in the history and philosophy of psychology. This neglect means that investigations of a problem area are often undertaken without a full understanding of the analytical issues that would help guide empirical inquiry.
Article
Experiments in which subjects are asked to analytically assess response-outcome relationships have frequently yielded accurate judgments of response-outcome independence, but more naturalistically set experiments in which subjects are instructed to obtain the outcome have frequently yielded illusions of control. The present research tested the hypothesis that a differential probability of responding, p(R), between these two traditions could be at the basis of these different results. Subjects received response-independent outcomes and were instructed either to obtain the outcome (naturalistic condition) or to behave scientifically in order to find out how much control over the outcome was possible (analytic condition). Subjects in the naturalistic condition tended to respond at almost every opportunity and developed a strong illusion of control. Subjects in the analytic condition maintained their p(R) at a point close to .5 and made accurate judgments of control. The illusion of control observed in the naturalistic condition appears to be a collateral effect of a high tendency to respond in subjects who are trying to obtain an outcome; this tendency to respond prevents them from learning that the outcome would have occurred with the same probability if they had not responded.
Article
Two experiments used a rich and systematic set of noncontingent problems to examine humans' ability to detect the absence of an inter-event relation. Each found that Ss who used nonnormative strategies were quite inaccurate in judging some types of noncontingent problems. Group data indicate that Ss used the 2 × 2 information in the order Cell A > Cell B > Cell C > Cell D; individual data indicate that Ss considered the information in Cell A to be most important, that in Cell D to be least important, and that in Cells B and C to be of intermediate importance. Trial-by-trial presentation led to less accurate contingency judgments and to more uneven use of 2 × 2 cell information than did summary-table presentation. Finally, the judgment processes of about 70% and 80%, respectively, of nonnormative strategy users under trial-by-trial and summary-table procedures could be accounted for by an averaging model. (PsycINFO Database Record (c) 2012 APA, all rights reserved)
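The unequal use of 2 × 2 cell information described above (Cell A > Cell B > Cell C > Cell D) can be illustrated with a weighted confirming-minus-disconfirming rule. This sketch is ours; the weights below merely encode the reported ordering and are not the values estimated in the study:

```python
def weighted_judgment(a, b, c, d, weights=(4, 3, 2, 1)):
    """Weighted difference between confirming (a, d) and disconfirming
    (b, c) cells, normalized to the -1..1 range. The default weights
    encode the ordering Cell A > Cell B > Cell C > Cell D."""
    wa, wb, wc, wd = weights
    num = wa * a + wd * d - wb * b - wc * c
    den = wa * a + wb * b + wc * c + wd * d
    return num / den

# A noncontingent problem (the true contingency is zero) with a high
# outcome density: the heavy weight on cell a pushes the judgment
# above zero, mimicking a nonnormative overestimate.
print(weighted_judgment(75, 25, 75, 25))
```

With equal weights the rule reduces to the plain confirming-minus-disconfirming difference and correctly returns zero for this problem.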
Article
Conducted a series of 6 studies involving 631 adults to elucidate the "illusion of control" phenomenon, defined as an expectancy of a personal success probability inappropriately higher than the objective probability would warrant. It was predicted that factors from skill situations (competition, choice, familiarity, involvement) introduced into chance situations would cause Ss to feel inappropriately confident. In Study 1 Ss cut cards against either a confident or a nervous competitor; in Study 2 lottery participants were or were not given a choice of ticket; in Study 3 lottery participants were or were not given a choice of either familiar or unfamiliar lottery tickets; in Study 4, Ss in a novel chance game either had or did not have practice and responded either by themselves or by proxy; in Study 5 lottery participants at a racetrack were asked their confidence at different times; finally, in Study 6 lottery participants either received a single 3-digit ticket or 1 digit on each of 3 days. Indicators of confidence in all 6 studies supported the prediction. (38 ref) (PsycINFO Database Record (c) 2012 APA, all rights reserved)
Article
In the first of two experiments on the concept of correlation in adult subjects, the subjects' frequency estimates and inferences of relationship were studied relative to five different 2 × 2 distributions, each presented in a fixed sequence. In experiment II, the subjects' spontaneous strategies in subdividing and analyzing one 2 × 2 distribution were studied in a free situation. It is concluded that adult subjects with no statistical training apparently have no adequate concept of correlation (based on the ratio of the two pairs of diagonal frequencies), and that, in so far as they reason statistically at all, they tend to depend exclusively on the frequency of ++ cases in judging relationship. The need for studies involving ordinal scale and fully quantified variates is stressed.
Article
Studies concerned with judgments of contingency between binary variables have often ignored what the variables stand for. The two values of a binary variable can be represented as a prevailing state (nonevent) or as an active state (event). Judgments under the four conditions resulting from the combination of a binary input variable that can be represented as event-nonevent or event-event with an outcome variable that can be represented in the same way were obtained. It is shown in Experiment 1 that judgments of data sets which exhibit the same degree of covariation depend upon how the input and output variables are represented. In Experiment 2 the case where both the input and output variables are represented as event-nonevent is examined. Judgments were higher when the pairing of the input event was with the output event and the input nonevent with the output nonevent than when the pairing was of event with nonevent, suggesting a causal compatibility of event-event pairings and a causal incompatibility of event-nonevent pairings. Experiment 3 demonstrates that judgments of the strength of the relation between binary input and output variables are not based on the appropriate statistical measure, the difference between two conditional probabilities. The overall pattern of judgments in the three experiments is mainly explicable on the basis of two principles: (1) judgments tend to be based on the difference between confirming and disconfirming cases and (2) causal compatibility in the representation of the input and output variables plays a critical role.
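The "appropriate statistical measure" mentioned above, the difference between two conditional probabilities, is commonly written as ΔP. A minimal sketch (the function name is ours):

```python
def delta_p(a, b, c, d):
    """Contingency Delta-P = P(outcome | event) - P(outcome | no event),
    computed from the four cells of a 2x2 table:
    a = event & outcome,    b = event & no outcome,
    c = no event & outcome, d = no event & no outcome."""
    return a / (a + b) - c / (c + d)

# Zero contingency with high outcome density: the outcome is equally
# likely with or without the event, yet a and c trials dominate.
print(delta_p(75, 25, 75, 25))  # → 0.0
```

Such zero-ΔP, high-density problems are exactly the ones on which participants tend to report strong relationships.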
Article
[I]t may be that … reason, self-consciousness and self-control which seem to sever human intellect so sharply from that of all other animals are really but secondary results of the tremendous increase in the number, delicacy and complexity of associations which the human animal can form. It may be that the evolution of intellect has no breaks, that its progress is continuous from its first appearance to its present condition in adult … human beings. If we could prove that what we call ideational life and reasoning were not new and unexplainable species of intellectual life but only the natural consequences of an increase in the number, delicacy, and complexity of associations of the general animal sort, we should have made out an evolution of mind comparable to the evolution of living forms. (p. 286)
Article
Tested the hypothesis that an individual will feel control over an outcome if he causes the outcome and if he knows before causing it what he hopes to obtain. 65 male undergraduates were shown 2 consumer items and told that they would get to win 1 by a chance drawing. 2 marbles of different colors were placed in a can and mixed up. One-third of the Ss were told that the E would pick a marble to determine their prize and were told beforehand which marble stood for which prize. Another third were told to select a marble to determine their prize and were told beforehand which marble stood for which prize. The remaining Ss were told to select a marble to determine their prize but were not told until after they had picked their marble which marble stood for which prize. Ss th