Article

You Can't Not Believe Everything You Read

Authors: Daniel T. Gilbert, Romin W. Tafarodi, Patrick S. Malone

Abstract

Can people comprehend assertions without believing them? Descartes (1644/1984) suggested that people can and should, whereas Spinoza (1677/1982) suggested that people should but cannot. Three experiments support the hypothesis that comprehension includes an initial belief in the information comprehended. Ss were exposed to false information about a criminal defendant (Experiments 1 and 2) or a college student (Experiment 3). Some Ss were exposed to this information while under load (Experiments 1 and 2) or time pressure (Experiment 3). Ss made judgments about the target (sentencing decisions or liking judgments). Both load and time pressure caused Ss to believe the false information and to use it in making consequential decisions about the target. In Spinozan terms, both manipulations prevented Ss from "unbelieving" the false information they automatically believed during comprehension.

... Across the two studies reported below, we relied on a paradigm that has been previously used in the truth-bias literature (Gilbert, Tafarodi & Malone, 1993; Pantazi et al., 2018). We used the four crime reports from Pantazi et al. (2018). ...
... According to a pre-test, the true evidence was rated as equally serious across the two reports (see Pantazi et al. 2018 for the pre-test information). We adopted this strategy from Gilbert et al. (1993) in order to ensure that participants would not infer that negating the false statements would create true statements. ...
... Based on past research, we employed judgments and memory as complementary measures of the truth-bias effects and expected a significant correlation between the percentage of false statements that participants misremembered as true and the difference between their judgments of (falsely) aggravated and mitigated defendants (see Gilbert, Tafarodi & Malone, 1993; Peter & Koch, 2015; Pantazi et al., 2018). ...
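To make the planned analysis concrete, here is a minimal Python sketch (assuming NumPy and SciPy are available) of the kind of correlation these studies describe: the share of flagged-false statements a participant misremembers as true, against the gap between that participant's judgments of the falsely aggravated and falsely mitigated cases. All variable names and values are hypothetical illustrations, not data from the cited papers.

```python
import numpy as np
from scipy.stats import pearsonr

# Hypothetical per-participant measures (illustrative only):
# - proportion of explicitly flagged false statements later misremembered as true
# - difference between judgments of the (falsely) aggravated and the
#   (falsely) mitigated defendant, in arbitrary rating units
misremembered_as_true = np.array([0.10, 0.25, 0.40, 0.05, 0.30, 0.50, 0.20, 0.35])
judgment_difference = np.array([0.5, 1.5, 3.0, 0.2, 2.0, 4.1, 1.0, 2.6])

# A positive correlation would mirror the predicted truth-bias link between
# memory errors and the influence of false evidence on judgments.
r, p = pearsonr(misremembered_as_true, judgment_difference)
print(f"r = {r:.2f}, p = {p:.3f}")
```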
Article
Full-text available
Previous studies have shown that people are truth-biased in that they tend to believe the information they receive, even if it is clearly flagged as false. The truth bias has been recently proposed to be an instance of meta-cognitive myopia, that is, of a generalized human insensitivity towards the quality and correctness of the information available in the environment. In two studies we tested whether meta-cognitive myopia and the ensuing truth bias may operate in a courtroom setting. Based on a well-established paradigm in the truth-bias literature, we asked mock jurors (Study 1) and professional judges (Study 2) to read two crime reports containing aggravating or mitigating information that was explicitly flagged as false. Our findings suggest that jurors and judges are truth-biased, as their decisions and memory about the cases were affected by the false information. We discuss the implications of the potential operation of the truth bias in the courtroom, in the light of the literature on inadmissible and discredited evidence, and make some policy suggestions.
... It is during this evaluative process that one may seek guidance from epistemic norms, which tell us whether to form the belief that p or to reject it. The emerging scientific picture of belief, however, aligns much better with the so-called Spinozan account of belief-formation (Egan 2008; Gilbert 1991; Gilbert, Tafarodi, and Malone 1993; Mandelbaum 2014). On the Spinozan account, one forms beliefs more or less automatically when representational contents are made available through cognitive mechanisms such as perception; such beliefs are not products of a norm-invoking, effortful inference at the person-level. ...
... In addition to this admittedly somewhat suggestive evolutionary consideration, there are important empirical results that provide evidence for the Spinozan hypothesis (Gilbert 1991; Gilbert, Tafarodi, and Malone 1993; Hasson, Simmons, and Todorov 2005; Masip, Garrido, and Herrero 2006; Skurnik et al. 2005; Unkelbach 2007). For example, Gilbert, Tafarodi, and Malone (1993) designed experiments which placed one group of participants under a cognitive load condition (a disabling performance constraint) to make it harder for the participants to go through an evaluative process, and compared these with a control group of participants who were not placed under a cognitive load. Broadly speaking, a Cartesian account of belief-formation would predict that participants under a cognitive load condition are less likely to arrive at a doxastic state compared to the control group and that the cognitive load will affect the forming and the rejection of belief equally. ...
Article
Full-text available
Belief is said to be essentially subject to a norm of truth. This view has been challenged on the ground that the truth norm cannot provide guidance on an intuitive inferentialist model of guidance and thus cannot be genuinely normative. One response to the No Guidance argument is to show how the truth norm can guide belief-formation on the inferentialist model of guidance. In this paper, I argue that this response is inadequate in light of emerging empirical evidence about our system of belief-formation. I will then motivate an alternative response and present, in rough outline, a viable, reason-responsive model of epistemic guidance on which the truth norm can guide.
... In fact, the functioning of democracy relies on the premise that each idea must be permitted in the "marketplace of ideas" (Gilbert et al., 1993). According to a metaphor introduced by John Stuart Mill (1859/1975), abstract ideas can be thought of as being part of an imaginary market from which people can freely "shop" the ones that fit them best. ...
... The truth bias has been consistently documented in experimental settings. In their seminal studies, Gilbert and colleagues presented people with statements that were explicitly tagged as true or false (e.g., accompanied by the words "True" or "False," Gilbert et al., 1990; or displayed in black = "True" vs. red = "False" fonts, Gilbert et al., 1993), under distraction or not. When asked to recall the statements' truth value, distracted participants were more likely to remember false statements as true compared to the undistracted ones. ...
... Research in cognitive psychology has established for many decades now that information overload leads to impaired cognitive performance in a variety of cognitive tasks (Franconeri et al., 2013), and in the past, it was argued that it reduced people's ability to scrutinize information and made them more gullible (Gilbert et al., 1990, 1993). More recent studies have shown that gullibility per se is unlikely to be affected by cognitive load (Pantazi et al., 2018), but the general notion that information or cognitive load impairs cognitive abilities remains uncontested. ...
Article
In the last few years, especially after the Brexit referendum and the 2016 U.S. elections, there has been a surge in academic interest in misinformation and disinformation. Social, cognitive, and political scientists' work on these phenomena has focused on two main aspects:
• Individuals' (and by extension societies') vulnerability to misinformation;
• Factors and interventions that can increase individuals' (and societies') resistance to misinformation.
In this article, we offer a critical review of the psychological research pertaining to these two aspects. Drawing on this review, we highlight an emerging tension in the relevant literature. Indeed, the current state of the art of the political misinformation literature reflects the combined operation of two opposing psychological constructs: excess gullibility on the one hand and excess vigilance on the other. We argue that this conceptualization is important both in advancing theories of individuals' and societies' vulnerability to misinformation and in designing prospective research programs. We conclude by proposing what, in our view, are the most promising avenues for future research in the field.
... First, additive changes may be incrementally easier to process. Any component that can be subtracted must first be understood as part of the artefact before it can be considered as 'not' part of the artefact22. Second, over time, additive changes may come to be viewed more positively than subtractive changes. ...
... In experiments 6 to 8 (n = 1,153) (described in 'Experiment 6' and 'Experiments 7 and 8' in the Methods), we examined whether participants would be less likely to produce a subtractive transformation when they were under cognitive load (a state that is known to increase reliance on cognitive shortcuts4,22,32,33). In an adapted version of experiment 5, participants completed four critical trials with no practice trials (Fig. 1a-d). To induce a higher cognitive load, we used a concurrent head-movement task33 in experiment 6 and a concurrent digit-search task22,32 in experiments 7 and 8. Meta-analysis of the three experiments indicates that participants failed to identify the subtractive transformation for more puzzles in the higher- versus lower-load condition (Hedges' g = 0.18, z = 2.97, P = 0.003) (Table 1; details and a one-trial version are given in Supplementary Information sections 1.7-1.10). When participants had more attentional resources available, they were more likely to identify a superior subtractive transformation. ...
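Since the snippet above reports its load effect as Hedges' g, a compact Python sketch of the conventional computation may help readers parse the statistic; the formula (Cohen's d with a small-sample correction) is standard, but the example data below are invented and are not from these experiments.

```python
import numpy as np

def hedges_g(x, y):
    """Hedges' g: Cohen's d corrected for small-sample bias."""
    nx, ny = len(x), len(y)
    # Pooled standard deviation of the two independent groups
    pooled_sd = np.sqrt(((nx - 1) * np.var(x, ddof=1) +
                         (ny - 1) * np.var(y, ddof=1)) / (nx + ny - 2))
    d = (np.mean(x) - np.mean(y)) / pooled_sd
    j = 1 - 3 / (4 * (nx + ny) - 9)  # small-sample correction factor
    return j * d

# Invented example: puzzles (out of 4) on which participants missed the
# subtractive transformation, under higher vs. lower cognitive load.
higher_load = np.array([3, 2, 4, 3, 2, 3, 4, 2, 3, 3])
lower_load = np.array([2, 2, 3, 1, 2, 3, 2, 2, 1, 2])
print(f"Hedges' g = {hedges_g(higher_load, lower_load):.2f}")
```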
Article
Full-text available
Improving objects, ideas or situations—whether a designer seeks to advance technology, a writer seeks to strengthen an argument or a manager seeks to encourage desired behaviour—requires a mental search for possible changes1–3. We investigated whether people are as likely to consider changes that subtract components from an object, idea or situation as they are to consider changes that add new components. People typically consider a limited number of promising ideas in order to manage the cognitive burden of searching through all possible ideas, but this can lead them to accept adequate solutions without considering potentially superior alternatives4–10. Here we show that people systematically default to searching for additive transformations, and consequently overlook subtractive transformations. Across eight experiments, participants were less likely to identify advantageous subtractive changes when the task did not (versus did) cue them to consider subtraction, when they had only one opportunity (versus several) to recognize the shortcomings of an additive search strategy or when they were under a higher (versus lower) cognitive load. Defaulting to searches for additive changes may be one reason that people struggle to mitigate overburdened schedules11, institutional red tape12 and damaging effects on the planet13,14. Observational and experimental studies of people seeking to improve objects, ideas or situations demonstrate that people default to searching for solutions that add new components rather than for solutions that remove existing components.
... Gilbert's theory is more than an esoteric idea about how the human cognitive system functions. There are empirical results to support it (Gilbert et al., 1990, 1993), and it is of both theoretical and practical consequence in several areas. Through an ingenious experimental design (described below), Gilbert et al. (1990) tackle the question of the very formation of belief, of how people accept entirely novel contents. ...
... Despite the fact that Gilbert's account is invoked in all of these contexts, there has been surprisingly little empirical scrutiny of Gilbert et al.'s (1990) results (the few exceptions are Sperber et al., 2010; Street & Kingstone, 2017; Hasson et al., 2005; Richter et al., 2009; Brashier & Marsh, 2019; Nadarevic & Erdfelder, 2019; see Mercier, 2017, for a review). Our critique of Gilbert et al.'s (1990) findings necessitates a re-evaluation of the evidence and, by extension, of Gilbert and colleagues' theory of belief (e.g., Gilbert, 1991; Gilbert et al., 1993) that is supposed to account for the aforementioned phenomena. Gilbert's (1991) theory will likely have to be replaced by a view in which the relative plausibility of contents, as well as the reliability of sources of information, play a critical role in the formation of belief. ...
Article
Most of the claims we encounter in real life can be assigned some degree of plausibility, even if they are new to us. On Gilbert's (1991) influential account of belief formation, whereby understanding a sentence implies representing it as true, all new propositions are initially accepted, before any assessment of their veracity. As a result, plausibility cannot have any role in initial belief formation on this account. In order to isolate belief formation experimentally, Gilbert, Krull, and Malone (1990) employed a dual-task design: if a secondary task disrupts participants' evaluation of novel claims presented to them, then the initial encoding should be all there is, and if that initial encoding consistently renders claims ‘true’ (even where participants were told in the learning phase that the claims they had seen were false), then Gilbert's account is confirmed. In this pre-registered study, we replicate one of Gilbert et al.'s (1990) seminal studies (“The Hopi Language Experiment”) while additionally introducing a plausibility variable. Our results show that Gilbert's ‘truth bias' does not hold for implausible statements — instead, initial encoding seemingly renders implausible statements ‘false’. As alternative explanations of this finding that would be compatible with Gilbert's account can be ruled out, it questions Gilbert's account.
... Only thereafter, acceptance or unacceptance results from evaluating this content as being true or false (e.g., Zimbardo & Leippe, 1991). However, Gilbert et al. (1990; see also Gilbert, 1991; Gilbert et al., 1993) opposed this intuitive view. Instead, they suggested that acceptance and unacceptance are asymmetrical processes and postulated that "belief is first, easy, and inexorable [whereas]
... Beyond establishing the first cornerstones for theoretical models on the processing of uncertainty cues, our observation of a memory distortion from fact toward speculation (rather than vice versa) arguably also relates to research on negation. This is because the latter commonly shares the 'recollection of facts' as a category of reference with our studies (Gilbert et al., 1990; Gilbert, 1991; Gilbert et al., 1993; Mayo et al., 2004). Specifically, the observation that people rendered facts as mere speculation indicates that the recollection of the former is less stable than is assumed by previous work in which facts were contrasted with negations. ...
Article
Full-text available
Modern media report news remarkably fast, often before the information is confirmed. This general tendency is even more pronounced in times of an increasing demand for information, such as during pressing natural phenomena or the pandemic spreading of diseases. Yet, even if early reports correctly identify their content as speculative (rather than factual), recipients may not adequately consider the preliminary nature of such information. Theories on language processing suggest that understanding a speculation requires its reconstruction as a factual assertion first, which can later be erroneously remembered. This would lead to a bias to remember and treat speculations as if they were factual, rather than falling for the reverse mistake. In six experiments, however, we demonstrate the opposite pattern. Participants read news headlines with explanations for distinct events either in the form of a fact or a speculation (as still being investigated). Both kinds of framings increased participants' belief in the correctness of the respective explanations to an equal extent (relative to receiving no explanation). Importantly, however, this effect was not mainly driven by a neglect of uncertainty cues (as present in speculations). In contrast, our memory experiments (recognition and cued recall) revealed a reverse distortion: a bias to falsely remember and treat a presented "fact" as if it were merely speculative. Based on these surprising results, we outline new theoretical accounts on the processing of (un)certainty cues which incorporate their broader context. Particularly, we propose that facts in the news might be remembered differently once they are presented among speculations.
... Evidence in support of the idea that trust is presupposed in every communicative exchange is found in philosophical accounts (e.g. Reid 1764, Lewis 1969, Davidson 1984), but also stems from psychological accounts (Gilbert et al. 1990, Gilbert et al. 1993) that argue in favour of default human gullibility. However, these accounts are compatible with a principle of epistemic vigilance, as long as epistemic vigilance is conceived as a set of mechanisms that kicks in when circumstances prompt for it. ...
... Daniel Gilbert and his colleagues started, some twenty years ago, to test in a series of experiments the accuracy of the Cartesian idea which dissociates understanding and believing: he showed that individuals whose evaluative mechanisms were perturbed by parasitic stimuli ended up being unable to give up the beliefs that they had acquired just by being exposed to information, that is, just by processing it and understanding it (see Gilbert et al. 1990, Gilbert et al. 1993). ...
Thesis
Full-text available
This dissertation tackles the issue of uncooperative and manipulative communication from a cognitive pragmatic perspective and develops and documents the fundamental hypothesis that its success depends on whether the manipulator manages to keep the manipulative intention concealed. Specifically, manipulative communication is conceived here as a twofold process meant to i) mislead the addressee into processing limited sets of contextual information and ii) prevent her/him from processing contextual information that would be detrimental to the success of the manipulative attempt. The dissertation draws on several fields of research in the Humanities and attempts to interface findings from cognitive psychology (mainly research on cognitive biases) and argumentation theory (research on fallacies) into a consistent cognitive pragmatic account of information-processing. To this end, the so-called Contextual Selection Constraint model (CSC) is presented; this model specifies from a theoretical perspective how certain linguistic and argumentative strategies can be used to constrain the comprehension procedure so that targeted assumptions end up partaking in the derivation of meaning and other unwanted assumptions turn out to be disregarded – or unprocessed altogether. These possibilities are conceived as natural potential consequences of our cognitive system’s inherent fallibility.
... Research has shown that the effects of misinformation are particularly persistent and can influence other belief, attitudinal and behavioral outcomes (Green & Donohue, 2011; Skurnik, Yoon, Park & Schwarz, 2005). Even when corrected, misinformation yields lingering effects on affective attitudes, inferences and judgments regarding the subject, as well as related behaviors (Thorson, 2016; Gilbert, Tafarodi & Malone, 1993; Johnson & Seifert, 1994; Wegner, Coulton & Wenzlaff, 1985). The effects of misinformation are concerning from a theoretical point of view, but misinformation is especially damaging when it pertains to complex issues and may provide a "basis for political and societal decisions that run counter to a society's best interest" (Lewandowsky et al., 2012, p. 107). ...
... Mere exposure to claims, even ones immediately identified as false, increases subsequent acceptance of the claims as true (Begg, Anas, and Farinacci 1992; Gilbert, Krull, and Malone 1990). People tend to process information on the assumption that it is true, and often still believe it despite it being identified as false (Anderson, Lepper, & Ross, 1980; Gilbert, Tafarodi & Malone, 1993; Johnson & Seifert, 1994; Wegner, Coulton, & Wenzlaff, 1985). Once an individual has a mental image of a concept (e.g., a negative view of the welfare system), it may be difficult to re-write that image in the absence of a contrasting personal account (Hasson, Simmons, & Todorov, 2005). ...
Article
Misinformation is a growing concern in the public health realm, as it is persistent and difficult to correct. One strategy recently considered to address misinformation is “inoculation”, which leverages forewarning and refutation to defend against a subsequent persuasive message. Here, I aimed to assess whether inoculation can be harnessed to forestall implicitly arising misinformation such as that from misleading natural cigarette ads, which have been shown to prompt widespread misbeliefs. I conducted three randomized online experiments assessing means of inoculating against misinformation. The first tested inoculation tactics to determine whether particular message formats are more effective (i.e., exemplar, narrative, or exposition), and to assess whether inoculations must refute the exact arguments from the misinformation or can more generally match argument themes. The second study tested an attenuated generic versus a specific refutation, and explored results over time. The final study focused on a particular inoculation strategy–highlighting prior deceptive messaging by the persuasive source. Results indicate that inoculations can successfully defend against misinformation from misleading ads; further, they do not need to match exact arguments or even exact themes from the arguments in order to reduce misbeliefs. In fact, high level, generic refutations successfully reduced misbeliefs both immediately and with a time delay, and, crucially, so too did inoculations that included an explicit forewarning but only an implicit refutation. Furthermore, multiple inoculation message formats were successful, and the effectiveness of inoculations was enhanced, to a limited degree, by identifying prior deceptive messaging by the persuasive source. Finally, findings supported counterarguing as a potential mediator of effects of inoculation messages on misbeliefs. The significance of the results here lies in their support for key inoculation components–forewarning and refutation–as well as the much-hypothesized mechanism of counterarguing, when attempting to combat misinformation. The core contribution of these studies is the consistent finding that we can successfully inoculate against implicit misinformation without directly addressing the exact misinformation claims, which is particularly important with implicitly arising, often difficult-to-anticipate misbeliefs from misleading advertising.
... Taken together, this set of data has long been interpreted as supporting the idea that while acceptance is the spontaneous and automatic attitude towards incoming information, epistemic assessment is optional and cognitively effortful. More recently, Pantazi et al. (2018) have run an auditory adaptation of Gilbert et al.'s (1993) study, in which the statements' truth value (true vs. false) was associated with two distinct voices. The results did not show any effect of the cognitive-load manipulation: in both groups (cognitive load and no cognitive load), participants' guilt judgments were equally affected by false statements, and these were likely to be misremembered as true. ...
Article
Full-text available
Communication is an effective and rich tool to transmit information to others. Different models of communication, though, disagree on how beliefs are acquired via testimony. While the Intentional-Inferential Model conceives communication as a matter of expressing, recognizing and evaluating intentions, the Direct Perception Model views communication as a means for direct belief transfer. What kind of experimental data can be brought to bear on this debate? We argue that developmental research can provide relevant insights for advancing the present debate. Drawing on recent experimental findings on the ontogeny of vigilance and trust, we question the idea that communication involves direct belief transfer and illustrate how children's reliance on communication is the result of smart processes of trust calibration.
... For contemporary defenses of such a belief-default view, rightly traced back to Spinoza, see Gilbert (1991), Gilbert et al. (1993), and Mandelbaum (2014). ...
Article
Full-text available
In the Ethics, Spinoza advances two apparently irreconcilable construals of will [voluntas]. Initially, he presents will as a shorthand way of referring to the volitions that all ideas involve, namely affirmations and negations. But just a few propositions later, he defines it as striving when it is “related only to the mind” (3p9s). It is difficult to see how these two construals can be reconciled, since to affirm or assent to some content is to adopt an attitude with a cognitive (mind-to-world) direction of fit, while to strive to persevere in one’s being would seem to be to adopt an attitude with a conative (world-to-mind) direction of fit. Attempting to achieve consistency by taking striving under the attribute of thought to consist in affirming only pushes the equivocation problem onto the concept of affirmation (Lin 2019). It would seem, then, that Spinoza equivocates on the concepts of will, affirmation, or perhaps both. I defend the univocity of Spinoza’s accounts of will and affirmation, showing that it comports with established accounts of affirmation in early modern philosophy and yields a clear, uniform account of what it means to strive under the attribute of thought, preserving the systematicity of Spinoza’s account of mind in ways that other interpretations do not.
... People are likely to believe the information they have been exposed to (Gilbert et al., 1993), even when it is blatantly false, partly because familiar information is easily accepted (Fazio et al., 2015). Research also suggests that even a single exposure fosters familiarity and increases the perceived accuracy of false information (Pennycook et al., 2018). ...
Article
This study explores the role of the news-finds-me (NFM) perception (the belief that people can be well-informed without actively seeking news due to their social networks) in fostering social media users' inaccurate beliefs about COVID-19. Findings from a US national survey (N = 1003) suggest that NFM perception is positively associated with belief in COVID-19 misinformation and mediates the positive relationship between social media use and false beliefs when NFM is measured as a single-dimensional construct. However, the sub-dimensions of NFM have distinct implications: The reliance on peers and not seeking but feeling informed dimensions work in the same manner as when NFM is treated as a single-dimensional construct, whereas reliance on algorithmic news negatively predicts belief in misinformation and negatively mediates the aforementioned relationship. We also found a mediating role of exposure to misinformation in the relationship between social media use and false beliefs. Implications of these findings are discussed.
... Descartes believed the opposite; Thomas Aquinas in De Veritate 14,9 [314] said that they are not only separable but also mutually exclusive, that one cannot know and believe a thing at the same time. Spinoza's dictum has been demonstrated in Gilbert's experiment [285]. ...
Preprint
Full-text available
In the Sociology of Scientific Knowledge, it is asserted that science is merely another belief system, and should not be accorded any credibility above other belief systems. This assertion shows a complete misunderstanding of how both science and philosophy work. Not only science but all logic-based philosophies become pointless under the belief system hypothesis. Science, formerly known as natural philosophy, is not a set of facts or beliefs, but rather a method for scrutinising ideas. In this it is far closer to a philosophical tool set than to an ideology. Popper's view, widely endorsed by scientists, is that science requires disprovable propositions which can be evaluated using available evidence. Science is therefore not a system of belief, but a system of disbelief, which is a very different thing indeed. This paper reviews the origins of the Sociology of Scientific Knowledge, discusses the numerous flaws in its fundamental premises and revisits the views of Michael Polanyi and Karl Popper, who have been falsely cited as supporters of these premises. Two appendices are included for reference: one on philosophies of science and one on history of scientific methods. A third appendix on ethics and science has been published separately.
... The first element to consider is that the counterarguing reduction stimulated by narrative immersion may have an important effect in making cli-fi consumers more sensitive to climate change concerns. Based on the idea that criticism requires not only some cognitive ability but also the motivation to put it to work (Gilbert et al., 1993), psychologists suggest that transported audiences may be less likely to dispute what is stated in a narrative. Interrupting the narrative flow to counterargue some of its features would indeed diminish the pleasure one takes in it, and therefore the required motivation. ...
Chapter
This chapter addresses fictional narratives as a specific kind of fiction capable of eliciting particular effects on their recipients. The first section of the chapter considers the status of climate fiction (cli-fi) as a literary genre, and identifies a set of standard properties that qualify most works in the category. The second section addresses the specific fictional engagement prompted by cli-fi and discusses its relationship with thought experiments. The third section examines, from a psychological angle, whether and how climate narratives can induce changes in the recipients’ beliefs and attitudes toward environmental issues. The chapter closes by listing some of the research questions raised by cli-fi that still await exploration. In the conclusion, it is tentatively suggested that, although the experience of consuming cli-fi will not change the planet, it might nonetheless heighten recipients’ concerns and willingness to take action against climate change.
... A similar effect was demonstrated by Gilbert and colleagues (Gilbert, Krull, & Malone, 1990;Gilbert, Tafarodi, & Malone, 1993). They presented participants with different statements along with feedback regarding the truth of the statements. ...
Article
Full-text available
Levy (2021) argues that bad beliefs predominately stem from automatic (albeit rational) updating in response to testimonial evidence. To counteract such beliefs, then, we should focus on ridding our epistemic environments of misleading testimony. This paper responds as follows. First, I argue that the suite of automatic processes related to bad beliefs extends well beyond the deference-based processes that Levy identifies. Second, I push back against Levy's claim that bad beliefs stem from wholly rational processes, suggesting that, in many cases, such processes are better characterised as arational. Finally, I note that Levy is too quick to dismiss the role that individuals can play in cleaning up their own epistemic environments, and I suggest one route through which this is possible.
... If people believe only under certain conditions, it is because they are equally predisposed to exercise a certain critical control over information (Gilbert et al., 1993; Shermer, 2011). The notion of epistemic vigilance (Sperber, 2010) has been proposed as a conceptual counterweight to the idea that credulity is the default condition. ...
Chapter
Full-text available
Ideologies, worldviews, or simply personal theories often acquire a distorted and pathological character, and become a factor of alienation rather than an epistemic resource and an aid for personal existence. This paper attempts to better define the limits and characteristics of this experience, which we call distorted intellectual beliefs, or general conceptual beliefs (GB), while trying to highlight both its sometimes dramatic background and its personal and social consequences, which are no less potentially deleterious. We believe that such experiences should not be confused tout court with a broader and more complex phenomenon, such as extremism and politico-religious radicalism, but are a specific typology of that broader and multifaceted fact that is self-deception. We hypothesize that the self-deception implicit in experiences of intellectual distortion produces a cognitive dissonance of which the subject is normally, though with varying intensity, aware (or may become aware through honest introspection). The phenomenon occurs in two extreme forms: one is normal (sporadic), the other is exceptional (systematic). The passage from one to the other is a complex process of escalation and de-escalation, on which multiple external and internal variables act. In its ascending path, it essentially coincides with a process of psychological polarization and cognitive "de-pluralization", while its descending phase marks a return to reality and a "re-pluralization", where the subject returns to being what he basically is, namely, an active and tireless meaning seeker. In the central part of the chapter, similarities and differences between processes of deradicalization and phenomena of religious deconversion are analyzed, with reference, among others, to the case of the Austro-Hungarian writer Arthur Koestler. An abridged version of this text was delivered at the Workshop "Explaining Extreme Belief and Extreme Behavior", September 15-16, 2022, Vrije Universiteit Amsterdam. I am grateful to David Konstan (NYU) for reading and commenting on a first version of this chapter and to Rik Peels (VUA) for his comments and questions during the Workshop. I would also like to thank my bachelor's and master's students at UDG, who discussed these topics at length with me during semester 2022-B.
... What increases the chances of failing to put in the effort of rejecting a belief is cognitive load (Gilbert et al., 1993; Egan, 2008; Porot & Mandelbaum, 2020). Emotions themselves impose a substantial cognitive load (Plass & Kalyuga, 2019). ...
Article
Full-text available
In this paper, I defend the judgementalist theory of emotion against the argument from recalcitrant emotions. Judgementalism holds that a necessary condition for being in an emotional state is that an evaluative belief is formed. Recalcitrant emotions are emotions that contradict endorsed beliefs and judgements. The argument from recalcitrant emotions states that a judgementalist explanation of recalcitrant emotions results in the absurd conclusion that one would hold two contradictory beliefs. I argue that emotion involves a so-called Spinozan belief-forming process: a process which automatically generates beliefs, without taking all available information into account. The generated beliefs might contradict something one already believes, as the so-called Fragmentation of Belief Hypothesis predicts. Thus the judgementalist explanation of recalcitrant emotions does not lead to an absurd conclusion and therefore the argument from recalcitrant emotions is refuted.
... Here is the model of language comprehension underlying Fricker's account: Upon hearing or reading an utterance, receivers (i) form comprehension-based beliefs representing the speaker as asserting certain content (p), and then (ii) either accept or reject p based on the assessment of the speaker's honesty and competence, which leads either to the formation of a corresponding testimony-based belief (that p) or no formation of belief. Since the seminal work of the psychologist Daniel Gilbert and his colleagues (Gilbert 1991; Gilbert et al. 1990; Gilbert et al. 1993), this model is often called the Cartesian model of language comprehension. If one thinks about filtering in terms of the Cartesian model, then it is natural to assume that the filter is 'located at the entrance to our belief box' and that its role is to keep contents of testimony from unreliable (dishonest or incompetent) sources from falling into the box. ...
Article
Full-text available
It is often suggested that we are equipped with a set of cognitive tools that help us to filter out unreliable testimony. But are these tools effective? I answer this question in two steps. Firstly, I argue that they are not real-time effective. The process of filtering, which takes place simultaneously with or right after language comprehension, does not prevent a particular hearer on a particular occasion from forming beliefs based on false testimony. Secondly, I argue that they are long-term effective. Some hearers sometimes detect false testimony, which increases speakers’ incentives for honesty and stabilizes the practice of human communication in which deception is risky and costly. In short, filtering prevents us from forming a large number of beliefs based on false testimony, not by turning each of us into a high-functioning polygraph but by turning the social environment of human communication into one in which such polygraphs are not required. Finally, I argue that these considerations support strong anti-reductionism about testimonial entitlement.
... Empirical research has long found that people automatically believe incoming messages and must exert cognitive effort in order to question the veracity of messages and decide whether to disbelieve something (Gilbert, Tafarodi, and Malone 1993). According to TDT, humans are predisposed to believe each other, and people rarely ponder the possibility that they are being deceived. ...
Article
Full-text available
This study applies truth‐default theory (TDT) to presidential candidates. TDT holds that people tend to passively believe others without consciously considering whether they are being told the truth. But do voters have a truth‐default toward presidential candidates? In an experiment, voters across the United States (N = 294) watched a news interview in which a presidential candidate was either honest or deceptive. Party affiliation was also manipulated. Consistent with TDT, thought‐listing tasks revealed that most voters did not mention deception after exposure to the presidential campaign interview. Voters largely defaulted to the truth even when sustaining outgroup partisan exposure and deception, and when asked about the candidate's demeanor. Filling out closed‐ended scales, though, voters reported distrust, suspicion, and perceiving deceptive messaging. The discussion concerns the implications of voters' perceptions of a presidential candidate's veracity varying based on how voters are prompted.
... Our predictions rest on several related cognitive literatures. First, the initial memory may be stronger in the affirmation case, as people are biased to initially believe information, and tagging something as false is a second step that requires cognitive resources (Gilbert et al., 1993). Second, negative corrections (which tell people to unbelieve something) provide neither explanations for why beliefs are false, nor new positive claims with which to replace them (Lewandowsky et al., 2012). ...
Article
Information changes: science advances, newspapers retract claims, and recommendations shift. Successfully navigating the world requires updating and changing beliefs, a process that is sensitive to a person's motivation to change their beliefs as well as the credibility of the source providing the new information. Here, we report three studies that consistently identify an additional factor influencing belief revision. Specifically, we document an asymmetry in belief revision: people are better able to believe in a claim once thought to be false, as opposed to unbelieving something once believed to be true. We discuss how this finding integrates and extends prior research on social and cognitive contributions to belief revision. This work has implications for understanding the widespread prevalence and persistence of false beliefs in contemporary societies.
... However, at the same time, misinformation repetition might boost misinformation familiarity even under load, thus potentially facilitating familiarity-backfire effects. This is supported by research on negations showing that people sometimes misremember negated information as true, especially when under load (Gilbert et al., 1990, 1993). ...
Article
Full-text available
General Audience Summary Misinformation can continue to influence an individual’s reasoning even after a correction. This is known as the continued influence effect (CIE). It has previously been suggested that this effect occurs (at least partially) due to the familiarity of the misinformation. This has led to recommendations to avoid repeating the misinformation within a correction, as this may increase misinformation familiarity and thus, ironically, false beliefs. However, it has proven difficult to find strong evidence for such familiarity backfire effects. One situation that may produce familiarity-driven backfire is if misinformation is repeated within a correction while participants are distracted—misinformation repetition may automatically boost its familiarity, while the distraction may impede proper processing and integration of the correction. In this study, we investigated how misinformation repetition during distraction affected the CIE. The present study extends the generalizability of traditional misinformation research by asking participants to listen to misinformation and corrections while in a driving simulator. Misinformation familiarity was manipulated through the number of corrections provided that contained the misinformation. Distraction was applied not only through the background task of driving in the simulator, but also manipulated through a secondary math task, which was administered selectively during the correction-encoding phase, and which required manual responses on a cockpit-mounted tablet. As hypothesized, cognitive load reduced the effectiveness of corrections. Furthermore, we found no evidence of familiarity backfire effects, with multiple corrections being more effective in reducing misinformation reliance than a single correction. When participants were distracted, a single correction was entirely ineffective, and multiple corrections were required to achieve a reduction in misinformation reliance. This provides further evidence against familiarity backfire effects under conditions maximally favorable to their emergence and implies that practitioners can debunk misinformation without fear of inducing ironic backfire effects.
... Second, our findings seem to conflict with the idea that facts prove stable in memory, a widespread notion in models and experiments on negation processing (Gilbert, 1991; Gilbert et al., 1990; Gilbert et al., 1993; Mayo et al., 2004). In that work, memories of negations were classically contrasted with those of affirmations (or true statements), whereby the latter were usually found or assumed to be less error prone. ...
Article
Full-text available
Modern media enable rapid reporting that does not refer to facts alone but is often interspersed with unconfirmed speculations. Whereas previous research has concentrated primarily on how unconfirmed contents might propagate, potential memory effects of reporting confirmed facts among speculations have so far been widely disregarded. Across four experiments, we show that the presence of speculative news (indexed by uncertainty cues such as "might") can reduce the remembered certainty of unrelated facts. The participants read headlines with exclusively speculative news, exclusively factual news, or a mixture of both. Our results indicate that uncertainty cues spread onto one's recollection of unrelated facts after having read a mixture of facts and speculations. This tendency persisted when both types of news were presented sequentially (e.g., factual news first), suggesting that the presence of speculative news does not specifically affect encoding, but can overshadow memories of facts in retrospect. Further, the tendency to misremember facts as speculations emerged even when the proportion of speculations among factual news was low (6/24 headlines) but increased linearly with the number of speculations intermingled. Given the widespread dissemination of speculative news, this bias poses a challenge in effectively getting confirmed information across to readers.
... Still, given proper motivation, many individuals are able to overcome these initial tendencies, much like the subjects in Devine's (1989) famous study on how automatic stereotypes can be overridden with conscious effort. Thus, while people may initially believe in false information, they can counteract this tendency, at least if they put in enough cognitive effort (Gilbert et al., 1993). Misperceptions can therefore be partially corrected, especially if they are debunked with detailed information, but the best approach is to not let them be established at all (Chan et al., 2017) because ordinary citizens' motivation to override false information often seems to be lacking on political issues. ...
... Similar effects have been obtained in experiments where participants judged the veracity of smiles (Gilbert et al., 1990) and learned translations of Hopi words (Gilbert et al., 1993). In the former, participants judged whether each of a series of videos depicted real or fake smiles; they received feedback on each guess. ...
Article
Why do consumers sometimes fall for spurious claims – e.g., brain training games that prevent cognitive decline, toning sneakers that sculpt one's body, flower essence that cures depression – and how can consumers protect themselves in the modern world where information is shared quickly and easily? As cognitive scientists, we view this problem through the lens of what we know, more generally, about how people evaluate information for its veracity, and how people update their beliefs. That is, the same processes that support true belief can also encourage people to sometimes believe misleading or false information. Anchoring on the large literature on truth and belief updating allows predictions about consumer behavior; it also highlights possible solutions while casting doubt on other possible responses to misleading communications.
... It is possible that high load, or more generally a state of exploitation, attenuates propositional processing, thus diminishing the ability to critically evaluate and ultimately filter prior misleading information. This account may underlie previous findings showing that high load increases reliance on false information39 and findings showing that threat, which is interpreted as a form of load, leads to greater anchoring towards prior decisions40. We propose that other realms such as marketing, communication, education or even law investigations should be informed that high load, like stress and pressure, may prompt comprehensive memory-related, perceptual, and behavioral biases towards stored associative information. ...
Article
Full-text available
Associative processing is central for human cognition, perception and memory. But while associations often facilitate performance, processing irrelevant associations can interfere with performance, for example when learning new information. The aim of this study was to explore whether associative interference is influenced by contextual factors such as resources availability. Experiments 1–3 show that associative interference increases under high cognitive load. This result generalized to both long-term and short-term memory associations, and to both explicitly learned as well as incidentally learned associations in the linguistic and pictorial domains. Experiment 4 further revealed that attention to associative information can delay one’s perceptual processing when lacking resources. Taken together, when resources diminish, associative interference increases, and additionally, processing novel and ambiguous information is hindered. These findings bear relevance to other domains as well (e.g., social, educational), in which increased load or stress may prompt an undesirable bias towards prior, misleading information.
... (A stream of numbers was scrolling across the bottom of the screen, and they had to press a button every time they saw the number '5.') Participants in the control condition simply read and responded to the dilemmas without having to perform any other task at the same time. Prior studies provided strong evidence that performing this other task should decrease people's ability to respond using reflective cognition (Gilbert, Tafarodi & Malone, 1993). ...
Article
Full-text available
In the early years of experimental philosophy, a number of studies seemed to suggest that people’s philosophical intuitions were unstable. Some studies seemed to suggest that philosophical intuitions were unstable across demographic groups; others seemed to suggest that philosophical intuitions were unstable across situations. Now, approximately two decades into the development of experimental philosophy, we have much more data concerning these questions. The data available now appear to suggest that philosophical intuitions are actually quite stable. In particular, they suggest that philosophical intuitions are surprisingly stable across both demographic groups and situations.
... The existence and perpetuation of environmental misinformation can occur for a number of reasons. Evidence suggests that people tend to remember and believe previously heard misinformation (Gilbert, Tafarodi, & Malone, 1993; Southwell & Thorson, 2015), meaning once misinformation is initially spread it can be difficult to counteract. People also tend to overestimate the extent to which others engage in harmful behaviors ("pluralistic ignorance"; Prentice & Miller, 1993; Taylor, 1982). ...
... Given these circumstances, examining how people process misinformation is of ever-growing importance. Previous research shows that misinformation tends to be directly encoded by readers (Gilbert et al., 1993). Even when they knew the correct answer before reading the false statement, after being confronted with misinformation they reproduce some of it as correct information (Fazio et al., 2013). ...
Conference Paper
Full-text available
Readers’ engagement with false information is a topic of growing importance. In two experiments, we investigated whether the misinformation effect can be reduced by educating participants about it prior to reading. In both experiments (N = 84 and N = 133), no reduction of the misinformation effect through psychoeducation was observed. Participants in both groups (control and psychoeducation) referenced a similar amount of misinformation after reading false information on items they previously answered correctly. Both reading false and reading neutral information did not change the confidence participants had in answers they previously knew, while reading correct information increased confidence.
... confident in one's memory can be associated with more biased recall of thinking that past political attitudes were more similar to current attitudes than they actually were (Grady, 2019); it could be that the feeling of familiarity of a story that leads to more reported (but not actual) memory also leads to reduced effectiveness of the otherwise strong warning, because that familiarity is also associated with truthfulness (Polage, 2012; Whittlesea, 1993). Future studies may want to investigate not just whether people remember seeing the headline before but whether they remember seeing the disputed notice on it, since the memory of the information alone may have been encoded as true simply by reading it, even after the warning (Gilbert et al., 1993). About one-fifth of the sample reported a potential false memory, reporting remembering at least one of the stories from outside of the survey, and those who did were especially likely to have come to believe the story was true later. ...
Article
Full-text available
Politically oriented “fake news”—false stories or headlines created to support or attack a political position or person—is increasingly being shared and believed on social media. Many online platforms have taken steps to address this by adding a warning label to articles identified as false, but past research has shown mixed evidence for the effectiveness of such labels, and many prior studies have looked only at either short-term impacts or non-political information. This study tested three versions of fake news labels with 541 online participants in a two-wave study. A warning that came before a false headline was initially very effective in both discouraging belief in false headlines generally and eliminating a partisan congruency effect (the tendency to believe politically congenial information more readily than politically uncongenial information). In the follow-up survey two weeks later, however, we found both high levels of belief in the articles and the re-emergence of a partisan congruency effect in all warning conditions, even though participants had known just two weeks earlier that the items were false. The new pre-warning before the headline showed some small improvements over other types, but did not stop people from believing the article once seen again without a warning. This finding suggests that warnings do have an important immediate impact and may work well in the short term, though the durability of that protection is limited.
... This initial acceptance allows for mentally testing and analyzing its implications. The process of analytically reviewing early intuitions or new ideas requires effort and deliberation that might not always be available (Gilbert et al., 1993). Sometimes ideas are examined only lightly or not at all, and are not questioned, reconfirmed, modified, deepened, or rejected. ...
Article
Full-text available
Supernatural fears, although common, are not as well-understood as natural fears and phobias (e.g., social, blood, and animal phobias), which are prepared by evolution, such that they are easily acquired through direct experience and relatively immune to cognitive mediation. In contrast, supernatural fears do not involve direct experience but seem to be related to sensory or cognitive biases in the interpretation of stimuli as well as culturally driven cognitions and beliefs. In this multidisciplinary synthesis and collaborative review, we claim that supernatural beliefs are “super natural.” That is, they occur spontaneously and are easy to acquire, possibly because such beliefs rest on intuitive concepts such as mind-body dualism and animism, and may inspire fear in believers as well as non-believers. As suggested by psychological and neuroscientific evidence, they tap into an evolutionarily prepared fear of potential impending dangers or unknown objects and have their roots in “prepared fears” as well as “cognitively prepared beliefs,” making fear of supernatural agents a fruitful research avenue for social, anthropological, and psychological inquiries.
... For people to understand a statement, they must initially believe it. Understanding is believing (Gilbert et al. 1993). People may read or hear something and only afterwards refute it through critical thinking. ...
Chapter
Full-text available
The application of artificial intelligence (AI) algorithms is improving everyday tasks worldwide. But while the internet has transformational benefits, it also has severe drawbacks. Internet infrastructure is extremely expensive and requires large private investment. To profit while giving free access has necessitated the presentation of personalized advertisements. Psychology-based strategies are employed to keep users perpetually engaged, often using emotional or aggressive stimuli that attract attention. Users’ responses and personal data are harvested from multiple sources and analysed through complex statistical algorithms. When hundreds of variables are collected on a person, personality traits, spending patterns, or political beliefs become fairly predictable. This happens because human cognition and emotions evolved for survival in Palaeolithic environments, and certain features are universal. Technology companies sell behaviour prediction models to anyone willing to pay. Depending on the client’s purposes, users can be prodded to spend money or adopt politically motivated beliefs. Furthermore, smartphone beacons and face recognition technology make it possible to track political activists as well as criminals. Through the use of AI, therefore, tech corporations “design minds” to act as directed and socially engineer societies. Large ethical issues arise, including privacy concerns, prediction errors, and the empowerment of transnational corporations to profit from directed human activities. As AI becomes part of everyday lives, the internet, which was intended to bring universal knowledge to the world, is unwittingly throwing us back into the Palaeolithic era. Now more than ever, humans ought to become more peaceful and content rather than be driven by ever-increasing emotion-driven contests. This chapter discusses these important issues, along with the direct and indirect actions that need to be taken to maintain sustainable consumption, world peace, and democratic regimes.
... Results showed that those in a state of high physiological arousal were more likely than those in a lower state of arousal to be persuaded by peripheral cues, such as the status of the presenter of the message. Similarly, increasing arousal through stress, such as by making participants attend to two tasks simultaneously, results in participants producing fewer counter-arguments and being more convinced by the message presented to them than a control group without the distracting task (Gilbert, Tafarodi, & Malone, 1993). As a result of physical and psychological stress, military recruits are more likely to embrace the ideas presented to them. ...
Article
Full-text available
Among rioters storming the Capitol Hill building in Washington, DC, on January 6, 2021, two men carried zip ties, presumably for restraining lawmakers. The Zip-Tie Guys, as the media dubbed the duo, shared little except their path to radicalization through Risk and Status Seeking. This paper analyzes the radicalization of the Zip-Tie Guys in the context of the larger problem: radicalization in the U.S. military and among veterans. Other mechanisms contributing to radicalization in military training are Group Isolation and Threat, Group Polarization, and Slippery Slope. After retiring, many veterans also experience the radicalizing effects of Unfreezing. Tracking and countering radicalization in the active-duty military and among military veterans might be prudent.
... Specifically, when deliberating the healthiness of a high-calorie sandwich and soda meal, those primed to think about why it is not a typical item at a health restaurant corrected for the calorie underestimation stemming from the positive halo of the healthy restaurant name. This finding is consistent with the biased hypothesis testing literature (Gilbert et al., 1993) and provides initial evidence of a selective accessibility-based account underlying the negative health halo effect. If this is indeed the case, then the negative halo effect stemming from unnatural nutritional claims should be driven by negative global evaluations of the product, thereby having downstream effects on spontaneous inferences and consumption. ...
Article
Full-text available
Consumer advocates and regulators champion the view that transparent labeling practices will help consumers make better decisions. However, it is unclear how unnatural nutritional claims (e.g., artificial ingredients, food additives, genetically modified organisms) affect perceptions of packaged food. Many researchers have cautioned that such labels can be commonly misinterpreted and can further stigmatize food produced by conventional processes. Building on the selective accessibility model, we propose that unnatural nutritional claims on front-of-package food labeling may induce a negative health halo effect. Accessibility of information consistent with a target concept (e.g., a claim on a food label) shapes consumer inferences and evaluations of an associated product (e.g., the packaged food) in the same direction. We propose that such nutritional claims can lead to higher calorie estimates and therefore biased food decisions. Furthermore, we examine the moderating effects of dispositional critical thinking, priming opposing beliefs, and activating causal reasoning in helping to mitigate the negative health halo. We test these predictions across five experiments. Together, these findings advance our understanding of the halo effect, inference, and persuasion, and they suggest strategies for helping consumers make more informed health-related judgments and decisions.
... On the empirical side of things, the psychological literature on truth bias, sometimes also called truth-default theory, corroborates the foregoing theoretical insights. For example, Daniel T. Gilbert and his team provide strong evidence for thinking that acceptance of communication coincides with comprehension (Gilbert, Malone, & Krull, 1990; Gilbert, 1991; Gilbert, Tafarodi, & Malone, 1993). That is, subjects do not first comprehend an idea that is presented in communication before then deciding whether to accept or reject it. ...
Article
Full-text available
A prima facie plausible and widely held view in epistemology is that the epistemic standards governing the acquisition of testimonial knowledge are stronger than the epistemic standards governing the acquisition of perceptual knowledge. Conservatives about testimony hold that we need prior justification to take speakers to be reliable but recognise that the corresponding claim about perception is practically a non-starter. The problem for conservatives is how to establish theoretically significant differences between testimony and perception that would support asymmetrical epistemic standards. In this paper I defend theoretical symmetry of testimony and perception on the grounds that there are no good reasons for taking these two belief forming methods to have significant theoretical differences. I identify the four central arguments in defence of asymmetry and show that in each case either they fail to establish the difference that they purport to establish or they establish a difference that is not theoretically significant.
... In an experimental study, subjects recommended prison sentences after reading crime reports printed in black ink with identifiable falsehoods embedded in them (subjects were told that information in red ink, e.g., "robber had a gun", had been erroneously mixed in from another, unrelated case and should be discounted). Distracted subjects recommended that "perpetrators" serve nearly twice as much time when relying on crime-exacerbating information they should have ignored [62]. Such research should not be surprising when viewed in the context of the large body of literature on resource depletion and self-control, wherein repeated acts of inhibiting or "dampening" external or irrelevant information while fixing attention lead to exertion fatigue akin to muscular exertion; logical reasoning and extrapolation abilities diminish when individuals are depleted [63]. ...
Article
Full-text available
The term “Anthropocene Syndrome” describes the wicked interrelated challenges of our time. These include, but are not limited to, unacceptable poverty (of both income and opportunity), grotesque biodiversity losses, climate change, environmental degradation, resource depletion, the global burden of non-communicable diseases (NCDs), health inequalities, social injustices, the spread of ultra-processed foods, consumerism and incivility in tandem with a diminished emphasis on the greater potential of humankind, efforts toward unity, or the value of fulfilment and flourishing of all humankind. Planetary health is a concept that recognizes the interdependent vitality of all natural and anthropogenic ecosystems—social, political and otherwise; it blurs the artificial lines between health at scales of person, place and planet. Promoting planetary health requires addressing the underlying pathology of “Anthropocene Syndrome” and the deeper value systems and power dynamics that promote its various signs and symptoms. Here, we focus on misinformation as a toxin that maintains the syndromic status quo—rapid dissemination of falsehoods and dark conspiracies on social media, fake news, alternative facts and medical misinformation described by the World Health Organization as an “infodemic”. In the context of planetary health, we explore the historical antecedents of this “infodemic” and underscore an urgent need to remediate the misinformation mess. It is our contention that education (especially in early life) emphasizing mindfulness and understanding of the mechanisms by which propaganda is spread (and unhealthy products are marketed) is essential. We expand the discourse on positive social contagion and argue that empowerment through education can help lead to an information transformation with the aim of flourishing along every link in the person, place and planet continuum.
... Experimental psychological research has provided ample evidence that, under certain conditions, people come to believe and be influenced by the contents of assertions that are explicitly tagged as false (e.g. Kissine and Klein 2013; Gilbert, Tafarodi, and Malone 1993). Thus, it seems that although accommodated presuppositions convey information in an indirect way, participants integrate it quite directly. ...
Chapter
Full-text available
Under the standard Lewis–Stalnaker view, accommodation is a pragmatic solution to a coordination problem. Accommodation processes are triggered when a speaker uses an expression that requires that the conversational background contain some hitherto unmentioned information. Accommodation, then, is not automatic; it is a process addressees engage in to adjust to the course of conversation. However, it is not entirely straightforward to predict when or which presuppositions will be accommodated. This issue is complicated by the existence of so‐called informative presuppositions, which carry new and at‐issue information. On the one hand, recent crosslinguistic and experimental research programs suggest that acceptability of presupposition accommodation varies relative to the kind of presupposition trigger involved. On the other hand, there exists a whole tradition in experimental social psychology which suggests that presuppositions are automatically accommodated, even though they are false.
... First, stereotype activation might not lead to manifestations of stereotype acceptance if we find our cognitions might be inappropriately biased (see also Ford and Kruglanski 1995 about correcting for bias). Second, we will likely inhibit tendencies toward overt stereotype endorsement if we are cognizant of the prime, as our awareness might highlight potential bias and motivate controlled processing (Gilbert, Tafarodi, and Malone 1993; see White 2007 about explicit race cues). Third, a prime is only likely to bias our social judgments toward the cue if we interpret that cue as being in agreement with or relevant to the judgment (e.g., Domke, Shah, and Wackman 1998; Herr, Sherman, and Fazio 1983). ...
Chapter
Full-text available
This chapter provides an overview of the theoretical framework of media priming and studies applying this framework to understand effects of media representations of race on people's use of stereotypes and counter-stereotypes in social judgments. The implication of having a memory structure that locates attributes of a minority group within a network of negative concepts is that priming any concept within that network negatively influences the way readers think about a person they categorize as representing the minority group. Although the dual race and negative cues need not be integrated for priming effects to occur, negative evaluations are certainly intensified if the trigger expressly contextualizes race within a negative and stereotype-congruent framework. Priming effects tend to attract a minimal amount of our awareness; readers tend not to recognize when a prime activates a relevant concept in memory. It is under these circumstances that depictions of race, especially negative depictions of race, most influence our judgments. Priming the idea of multiculturalism seems to facilitate favorable attitudes toward targets who embody stereotypes and who thus can be located within their ethnic boundaries.
Article
Full-text available
Trust certification through so-called trust seals is a common strategy to help users ascertain the trustworthiness of a system. In this study, we examined trust seals for AI systems from two perspectives: (1) in a pre-registered online study, we asked whether trust seals can increase user trust in AI systems, and (2) qualitatively, we investigated what participants expect from such AI seals of trust. Our results indicate mixed support for the use of AI seals. While trust seals generally did not affect the participants’ trust, their trust in the AI system increased if they trusted the seal-issuing institution. Moreover, although participants understood verification seals the least, they desired verifications of the AI system the most.
Article
Full-text available
Belief, defined by William James as the mental state or function of cognizing reality, is a core psychological function with strong influence on emotion and behavior. Furthermore, strong and aberrant beliefs about the world and oneself play important roles in mental disorders. The underlying processes of belief have been the matter of a long debate in philosophy and psychology, and modern neuroimaging techniques can provide insight into the underlying neural processes. Here, we conducted a functional magnetic resonance imaging study with N = 30 healthy participants in which we presented statements about facts, politics, religion, conspiracy theories, and superstition. Participants judged whether they considered them as true (belief) or not (disbelief) and reported their certainty in the decision. We found belief‐associated activations in bilateral dorsolateral prefrontal cortex, left superior parietal cortex, and left lateral frontopolar cortex. Disbelief‐associated activations were found in an anterior temporal cluster extending into the amygdala. We found a larger deactivation for disbelief than belief in the ventromedial prefrontal cortex that was most pronounced during decisions, suggesting a role of the vmPFC in belief‐related decision‐making. As a category‐specific effect, we found disbelief‐associated activation in retrosplenial cortex and parahippocampal gyrus for conspiracy theory statements. Exploratory analyses identified networks centered at anterior cingulate cortex for certainty, and dorsomedial prefrontal cortex for uncertainty. The uncertainty effect identifies a neural substrate for Alexander Bain's notion from 1859 of uncertainty as the real opposite of belief. Taken together, our results suggest a two‐factor neural process model of belief with falsehood/veracity and uncertainty/certainty factors.
Article
Full-text available
This study examines how the valence and argument of comments affect viewers’ attitudes toward and validation of a pseudo-scientific claim. We developed and tested a hypothesized model based on the elaboration likelihood model. Participants watched a video that introduced pseudo-scientific claims with others’ comments on the same screen. We assigned participants (n = 646) to a control condition with no message presentation and a message condition, with the message condition divided into four conditions based on a combination of valence and substantiveness of the comments. Structural equation modeling analysis revealed that valence affected both heuristic and systematic thought, while substantiveness influenced systematic thought. The negativity of the comments not only suppressed the positive impressions, which were irrelevant to the content, but also facilitated the examination of the reasoning for the pseudoscience claims in the video. Including substantive content in the comments also led to an examination of the rationale for pseudoscience claims. The model also showed that positive impressions irrelevant to the content increased validity judgments, the final positive attitude, and agreement to the pseudo-scientific claim, while examination of the rationale for the claims decreased them.
Article
Philosophical discussions of free speech often focus on moral considerations such as the harm that certain forms of expression might cause. However, in addition to our moral obligations, we also have a distinct set of epistemic obligations—and even when a false belief doesn't harm anyone, it constitutes an epistemically bad outcome. Moreover, the existing psychological evidence suggests that human beings are vulnerable to the influence of a wide variety of false claims via a wide variety of psychological mechanisms. Taken together, these facts suggest that there is a purely epistemic justification for restricting the distribution of misinformation: Because each of us has an individual epistemic obligation to avoid unnecessary exposure to misinformation, and because avoiding such exposure is simply too difficult when acting alone, we all have a shared epistemic obligation to establish laws or regulations restricting the widespread distribution of misinformation.
Article
Full-text available
This paper problematises political satire in a time when the COVID-19 virus has provoked numerous deaths worldwide, and had dramatic effects on social behaviour, on a scale unknown in western nations since World War II. Most populations have endured lockdown, periods of enforced domestic imprisonment, which led to images of the empty streets of big cities appearing in media, symbols of the drastic changes that the health emergency was making necessary. Yet, from the outset, comic memes began to circulate across (social) media, while in mainstream print media political satirists continued to lampoon official responses to the ongoing crisis. The paper thus aims to explore the connection of political satire and humour, asking two principal research questions: firstly, how to explain the humorous effects of these multimodal artefacts in such depressing circumstances; secondly, from a pragmatic perspective, how to account for their overall socio-political function. The study uses memes taken from various online sources (Facebook, Twitter, Google) during the crisis, analysed according to a mixed approach that blends notions from Humour studies, especially incongruity (Morreall 2016), with insights from linguistic pragmatics (e.g. Kecskes 2014). The findings emphasise the emotional dimension of this form of satire, as the memes work against the backdrop of a range of feelings (anger, bitterness, disappointment, frustration, despair, etc.), many of which have been widely generated by the COVID-19 crisis and political responses to it. In short, to paraphrase Walter Benjamin (2008: 378), man may run out of tears but not of laughter. The findings contribute to our understanding of online satire as an emergent genre, one that uses the affordances of new media to extend the social potentialities of a traditional subversive discourse form.
Thesis
Full-text available
The continued influence effect of misinformation (CIE) is the phenomenon whereby a piece of information, even after it has been retracted and corrected, continues to influence accounts of an event, reasoning, inference, and decisions. This thesis presents an experiment designed to examine to what extent this effect can be reduced using an inoculation procedure, which "vaccinates" recipients against influence, including misinformation, and how the effect may be moderated by the credibility of the corrections. Most of the hypotheses were confirmed. The results showed that the credibility of the correction sources had no effect on how corrections were processed when no inoculation took place; among inoculated participants, however, reliance on misinformation was significantly reduced when the correction came from a highly credible source. In this source condition, inoculation also significantly increased belief in the retraction and decreased belief in the misinformation. Contrary to previous reports, belief in the misinformation, rather than belief in the retraction, turned out to predict reliance on misinformation. These findings matter from a practical perspective, as they reveal the boundary conditions of a widely applicable technique for reducing the influence of misinformation, and from a theoretical perspective, as they offer insight into the mechanisms responsible for the CIE. The results are interpreted both in relation to existing theories of the CIE and within the framework of a model of remembering.
Book
THE PALGRAVE HANDBOOK OF TOLERATION aims to provide a comprehensive presentation of toleration as the foundational idea associated with engagement with diversity. This handbook is intended to provide an authoritative exposition of contemporary accounts of toleration, the central justifications used to advance it, a presentation of the different concepts most commonly associated with it (e.g. respect, recognition), as well as a discussion of the many problems dominating the controversies on toleration at both the theoretical and practical levels. The Palgrave Handbook of Toleration is intended as a resource for a global scholarly audience looking for a detailed presentation of major accounts of toleration, the most important conceptual issues associated with it, and the many problems dividing scholars, policy-makers, and practitioners.
Thesis
It’s widely assumed that intuitions are central to the methods of contemporary analytic philosophy. In particular, it’s thought that philosophers appeal to intuitions as evidence, or as a source of evidence, for their claims. Indeed, this view, which has become known as ‘centrality’, has been put forward explicitly by, for example, Chalmers, Kornblith, Bealer, Baz, Richard, and Liao, to name but a few. Recently, however, this interpretation of philosophical practice has been challenged, most notably by Williamson, Deutsch, Ichikawa, and Cappelen (the ‘anti-centralists’), who argue that intuitions aren’t, after all, central to our arguments. Alongside this debate has come a resurgence of interest in the related question of how philosophers use ‘intuition-talk’, namely words like ‘intuition’ and ‘intuitive’; if this language isn’t citing evidence, then what is its purpose, if anything, and, if it is, then what exactly is being referenced in support of our theories, or, what are intuitions? In this thesis, I make two, primary claims. First, I argue that intuitions do, indeed, play a central role in analytic philosophy (contra the anti-centralists), and help to clarify that role. Specifically, I make the case for a centralist interpretation of the primary argument for epistemic contextualism, identifying, through a conscientious analysis of the most seminal literature, not one but several, specific ways in which contextualists appeal to intuitions in an evidential capacity. Since contextualism is chiefly motivated by said argument, and is having a burgeoning influence on modern epistemology, and considering that epistemology is of ubiquitous philosophical significance, with ties to arguably all other core philosophical topics, such as ethics and metaphysics, I thereby demonstrate that treating intuitions as evidence is profoundly shaping the discipline at large. Second, I develop a novel account of what the relevant philosophical intuitions are. I argue against extant ‘minimalist’ theories, showing that they aren’t reducible to beliefs or credences of any kind (contra Lewis, Parsons, and Kornblith, for example), dispositions to believe (contra Sosa and Lycan, for example), temptations to believe (contra Williamson and Van Inwagen, for example), or facts about ordinary language. Moreover, I argue that existing accounts of intuitions as ‘intellectual seemings’ – advocated by, for instance, Brogaard, Huemer, and Bealer – are too conservative to capture the intuitions in question. In place of these alternatives, I propose a more liberal version of the intellectual seemings thesis. Then, I argue that such seemings, and thus the relevant philosophical intuitions, aren’t sui generis, as many are wont to assume, but are, rather, a sub-category of mental states known in psychology as ‘epistemic feelings’. Epistemic feelings are experiences triggered by metacognitive monitoring and control subsystems, in response to features of a first-order cognitive process and/or its outputs, such as its fluency. This interdisciplinary thesis revolutionises our understanding of philosophical intuitions, and bridges two, hitherto largely segregated academic sub-disciplines. I conclude, overall, that metacognitive experiences profoundly shape philosophy, and briefly consider some of the possible implications of my discovery. In particular, I suggest that we should treat intuitions as higher-, not first-order evidence for their content.
Article
Full-text available
This article explores the role of explicit or implicit argumentation in explaining, and accounting for, the views people form about political events; events of which, necessarily, they generally have only mediated knowledge. The media not only inform people of the events which happen but also play a role in forming opinions about those events. This may occur through selection of what is printed, but also in editorial comments or indirectly through framing strategies, use of evaluative language, and so on. The Skripal/Novichok case in 2018 offers a good opportunity to assess some of these points, since it provoked great press attention and public interest and, moreover, Britain's politicians advanced a specific theory relating to the guilt of the Russian state and Putin's personal involvement. The paper attempts to probe how far people's opinions on the case depend on media exposure, and to explore patterns of evidentiality in the discourse of interviewees about the topic.
Chapter
Storytelling is one of the cornerstones of effective communication between an organization and its stakeholders in the field of public relations. The authors propose the Narrative Persuasion Interactivity (NPI) theoretical model, which harnesses the capabilities of digital technology to enhance the persuasive effects of narrative storytelling by using interactive inputs within character and story design to promote character identification and experience taking. The NPI model provides a dynamic alternative to existing theories, offering a modern application of Kent and Taylor's principles of digital dialogic communication.
Article
Full-text available
As proposed for the emergence of modern languages, we argue that modern uses of languages (pragmatics) also evolved gradually in our species under the effects of human self‐domestication, with three key aspects involved in a complex feedback loop: (a) a reduction in reactive aggression, (b) the sophistication of language structure (with emerging grammars initially facilitating the transition from physical aggression to verbal aggression); and (c) the potentiation of pragmatic principles governing conversation, including, but not limited to, turn‐taking and inferential abilities. Our core hypothesis is that the reduction in reactive aggression, one of the key factors in self‐domestication processes, enabled us to fully exploit our cognitive and interactional potential as applied to linguistic exchanges, and ultimately to evolve a specific form of communication governed by persuasive reciprocity—a trait of human conversation characterized by both competition and cooperation. In turn, both early crude forms of language, well suited for verbal aggression/insult, and later more sophisticated forms of language, well suited for persuasive reciprocity, significantly contributed to the resolution and reduction of (physical) aggression, thus having a return effect on the self‐domestication processes. Supporting evidence for our proposal, as well as grounds for further testing, comes mainly from the consideration of cognitive disorders, which typically simultaneously present abnormal features of self‐domestication (including aggressive behavior) and problems with pragmatics and social functioning. While various approaches to language evolution typically reduce it to a single factor, our approach considers language evolution as a multifactorial process, with each player acting upon the other, engaging in an intense mutually reinforcing feedback loop. Moreover, we see language evolution as a gradual process, continuous with the pre‐linguistic cognitive abilities, which were engaged in a positive feedback loop with linguistic innovations, and where gene‐culture co‐evolution and cultural niche construction were the main driving forces.
Preprint
Full-text available
Complex brain-environment interactions influence the organization of biological systems and early childhood development, affecting local cultural traditions and political behavior. One of the trajectories human development may take involves early childhood adversity (ELA) and the construction of antisocial personality traits consistent with conditional adaptation and autocratic governance. Neuroscientific evidence shows that motivation system processes implicated in essential behavior operate at a preconscious level, which suggests that lasting effects of early life conditions may play a role at least as important as conscious ideological choice in the generation and support of autocratic states.
Book
Frames and Framing in Documentary Comics explores how graphic narratives reframe global crises while also interrogating practices of fact-finding. An analog print phenomenon in an era shaped by digitalization, documentary comics formulates a distinct counterapproach to conventional journalism. In what ways are ‘facts’ being presented and framed? What is documentary honesty in a world of fake news and post-truth politics? How can the stories of marginalized peoples and neglected crises be told? The author investigates documentary comics in its unique relationship to framing: graphic narratives are essentially shaped by a reciprocal relationship between the manifest frames on the page and the attention to the cognitive frames that they generate. To account for both the textuality of comics and its strategic use as rhetoric, the author combines theories of framing analysis and cognitive narratology with comics studies and its attention toward the medium’s visual frames.
Article
Full-text available
This chapter outlines the two basic routes to persuasion. One route is based on the thoughtful consideration of arguments central to the issue, whereas the other is based on the affective associations or simple inferences tied to peripheral cues in the persuasion context. This chapter discusses a wide variety of variables that proved instrumental in affecting the elaboration likelihood, and thus the route to persuasion. One of the basic postulates of the Elaboration Likelihood Model—that variables may affect persuasion by increasing or decreasing scrutiny of message arguments—has been highly useful in accounting for the effects of a seemingly diverse list of variables. Reviewers of the attitude change literature have been disappointed by the many conflicting effects observed, even for ostensibly simple variables. The Elaboration Likelihood Model (ELM) attempts to place these many conflicting results and theories under one conceptual umbrella by specifying the major processes underlying persuasion and indicating the way many of the traditionally studied variables and theories relate to these basic processes. The ELM may prove useful in providing a guiding set of postulates from which to interpret previous work and in suggesting new hypotheses to be explored in future research.
Article
Full-text available
The hypothesis that misleading suggestions can impair recollection was supported in a study inspired by L. L. Jacoby and C. M. Kelley's (unpublished manuscript) "logic of opposition" and D. S. Lindsay and M. K. Johnson's hypotheses about source memory. Tendency to report suggested details was set in opposition to ability to remember their source by telling Ss not to report anything from the narrative. Conditions were manipulated so that in the high- but not the low-discriminability condition it was easy to remember the suggestions and their source. At test, Ss were told (truthfully) that any information in the narrative relevant to the questions was wrong. Suggested details were more often reported on misled than control items in the low- but not the high-discriminability condition, yet suggestions impaired accurate recall of event details in both conditions.
Article
Full-text available
Conducted 3 experiments to examine the effects that incriminating innuendo delivered by media sources have on audience impressions of innuendo targets. A total of 182 undergraduates served as Ss. The 1st study demonstrated innuendo effects by showing that audience impressions of a target were swayed in a negative direction by exposure to a prototypical innuendo headline, the incriminating question. A similar but substantially weaker effect was observed for an incriminating denial. The 2nd study showed that although variations in source credibility affected the persuasiveness of direct incriminating assertions, they had appreciably less impact on the persuasiveness of innuendos. In the 3rd study, the inferences an audience makes about the motives and knowledge of an innuendo source were investigated for their possible mediation of the innuendo effect. Audience inferences about the sensationalistic or muckraking qualities of the source were found to have a negligible influence on acceptance of innuendo from the source. The analysis also revealed that audiences commonly infer that the source is attempting to avoid charges of libel, which can reduce receptiveness to innuendo communication.
Article
Full-text available
Person perception includes three sequential processes: categorization (what is the actor doing?), characterization (what trait does the action imply?), and correction (what situational constraints may have caused the action?). We argue that correction is less automatic (i.e., more easily disrupted) than either categorization or characterization. In Experiment 1, subjects observed a target behave anxiously in an anxiety-provoking situation. In Experiment 2, subjects listened to a target read a political speech that he had been constrained to write. In both experiments, control subjects used information about situational constraints when drawing inferences about the target, but cognitively busy subjects (who performed an additional cognitive task during encoding) did not. The results (a) suggest that person perception is a combination of lower and higher order processes that differ in their susceptibility to disruption and (b) highlight the fundamental differences between active and passive perceivers.
Article
Full-text available
Two experiments were conducted to test competing accounts of the distraction-persuasion relationship, thought disruption and effort justification, and also to show that the relationship is not limited to counterattitudinal communication. Exp I, with 132 undergraduates, varied distraction and employed 2 discrepant messages differing in how easy they were to counterargue. In accord with the thought disruption account, increasing distraction enhanced persuasion for a message that was readily counterarguable, but reduced persuasion for a message that was difficult to counterargue. The effort notion implied no interaction with message counterarguability. Exp II, with 54 undergraduates, again varied distraction but the 2 messages took a nondiscrepant position. One message elicited primarily favorable thoughts, and the effect of distraction was to reduce the number of favorable thoughts generated; the other, less convincing message elicited primarily counterarguments, and the effect of distraction was to reduce counterarguments. A Message * Distraction interaction indicated that distraction tended to enhance persuasion for the counterarguable message but reduce persuasion for the message that elicited primarily favorable thoughts. The experiments together support the principle that distraction works by inhibiting the dominant cognitive response to persuasive communication and, therefore, it can result in either enhanced or reduced acceptance.
Article
Full-text available
B. Spinoza (1677 [1982]) suggested that all information is accepted during comprehension and that false information is then unaccepted. Subjects were presented with true and false linguistic propositions and, on some trials, their processing of that information was interrupted. As Spinoza's model predicted, interruption increased the likelihood that subjects would consider false propositions true but not vice versa (Study 1). This was so even when the proposition was iconic and when its veracity was revealed before its comprehension (Study 2). In fact, merely comprehending a false proposition increased the likelihood that subjects would later consider it true (Study 3). The results suggest that both true and false information are initially represented as true and that people are not easily able to alter this method of representation. Results are discussed in terms of contemporary research on attribution, lie detection, hypothesis testing, and attitude change.
Article
Full-text available
People sometimes spontaneously infer others' traits from their behavior without intending to, or . . . being aware. Trait inferences need not result from causal thinking; both traits and causes may be embedded in the implicit knowledge structures [routinely activated] to understand events. The chapter considers traits as causes, as dispositions that only become apparent under particular conditions, and as summary behavioral frequencies, and discusses their role in self-perception, the perseverance of misperceptions, self-fulfilling prophecies in social interaction, stereotype confirmation and persistence, and vividness effects.
Article
Full-text available
A number of philosophers and psychologists stress the importance of disconfirmation in reasoning and suggest that people are instead prone to a general deleterious "confirmation bias." In particular, it is suggested that people tend to test those cases that have the best chance of verifying current beliefs rather than those that have the best chance of falsifying them. We show, however, that many phenomena labeled "confirmation bias" are better understood in terms of a general positive test strategy. With this strategy, there is a tendency to test cases that are expected (or known) to have the property of interest rather than those expected (or known) to lack that property. We show that the positive test strategy can be a very good heuristic for determining the truth or falsity of a hypothesis under realistic conditions. It can, however, lead to systematic errors or inefficiencies.
Article
Full-text available
Is there a difference between believing and merely understanding an idea? R. Descartes (e.g., 1641 [1984]) thought so. He considered the acceptance and rejection of an idea to be alternative outcomes of an effortful assessment process that occurs subsequent to the automatic comprehension of that idea. This article examined B. Spinoza's (1982) alternative suggestion that (1) the acceptance of an idea is part of the automatic comprehension of that idea and (2) the rejection of an idea occurs subsequent to, and more effortfully than, its acceptance. In this view, the mental representation of abstract ideas is quite similar to the mental representation of physical objects: People believe in the ideas they comprehend, as quickly and automatically as they believe in the objects they see. Research in social and cognitive psychology suggests that Spinoza's model may be a more accurate account of human belief than is that of Descartes.
Article
Full-text available
Conducted 4 experiments with 303 undergraduates to examine the relationship between the rated truth of statements and prior study of parts of those statements. Findings from the 1st 2 experiments show that new details about familiar topics are rated truer than new details about unfamiliar topics. Consequently, recognition of a topic as familiar disposes Ss to accept new details as true. Results from the 3rd and 4th experiments show that statements initially studied under an affirmative bias are rated truer than statements originally studied under a negative bias. However, since even the negatively biased statements are rated truer than new ones, it is contended that Ss are not remembering the bias. Rather, different biases during study affect the probability that details will be encoded into memory. In contrast to differential biases, different study processes affect the likelihood that Ss will remember having studied the statements, but do not affect truth. Results are discussed in terms of the hypothesis that remembered factual details are the criterion of certitude against which tested statements are assessed.
Article
Full-text available
A 2-process theory of human information processing is proposed and applied to detection, search, and attention phenomena. Automatic processing is activation of a learned sequence of elements in long-term memory that is initiated by appropriate inputs and then proceeds automatically--without S control, without stressing the capacity limitations of the system, and without necessarily demanding attention. Controlled processing is a temporary activation of a sequence of elements that can be set up quickly and easily but requires attention, is capacity-limited (usually serial in nature), and is controlled by the S. A series of studies, with approximately 8 Ss, using both reaction time and accuracy measures is presented, which traces these concepts in the form of automatic detection and controlled search through the areas of detection, search, and attention. Results in these areas are shown to arise from common mechanisms. Automatic detection is shown to develop following consistent mapping of stimuli to responses over trials. Controlled search was utilized in varied-mapping paradigms, and in the present studies, it took the form of serial, terminating search.
Article
Full-text available
Investigated in 2 experiments the conditions that promote successful discounting of knowledge in making a judgment. 122 (Exp I) and 155 (Exp II) undergraduate students first learned a set of arguments describing a person. Later, they were told to use a subset of these arguments to judge the person. This was done in 1 of 2 ways. Half of the Ss received instructions specifying the subset of arguments that were actually to be used in the judgment. For the other half, the supplementary subset was specified; that is, they were told which of the arguments were to be ignored. As a result, in the latter condition the to-be-ignored arguments were salient, whereas in the former condition the to-be-used arguments were salient. Results of both experiments indicate that discounting was most successful when the to-be-ignored arguments were salient. Orthogonally to the salience manipulation, the experiments varied the extent to which the arguments were integrated before discounting. Exp II demonstrated that discounting fails when arguments are represented in an integrative rather than a discrete manner. Implications of these findings for theories of discounting are discussed.
Article
Full-text available
Tested, in 2 experiments, predictions of a formal model that decomposes the attribution of personal dispositions into identification and dispositional inference processes. The model assumes that identification processes initially represent the incoming stimulus information in terms of meaningful attribution-relevant categories. The results of the identification process serve as input for dispositional inference processes wherein causal schemata guide the inference of personal dispositions. The 2 illustrative experiments traced the processing of behavioral and situational information at the identification and dispositional inference stages and examined attributions as a joint product of the different stages. Findings and previous relevant research demonstrate that the proposed model can help reconcile conflicting findings in the literature, reveal new attributional phenomena, and improve understanding of the cognitive processes that produce self- and other-attribution. Dispositional attribution calculations are appended.
Article
Full-text available
Five alternative information processing models that relate memory for evidence to judgments based on the evidence are identified in the current social cognition literature: independent processing, availability, biased retrieval, biased encoding, and incongruity-biased encoding. A distinction between 2 types of judgment tasks, memory-based vs online, is introduced and is related to the 5 process models. In 3 experiments, using memory-based tasks where the availability model described Ss' thinking, direct correlations between memory and judgment measures were obtained. In a 4th experiment, using online tasks where any of the remaining 4 process models may apply, prediction of the memory–judgment relationship was equivocal but usually followed the independence model prediction of zero correlation. It is concluded that memory and judgment will be directly related when the judgment was based directly on the retrieval of evidence information in memory-based judgment tasks.
Article
Full-text available
Tested a new conceptualization of the impression-perseverance effect, using 92 undergraduates. As in earlier studies (e.g., Exp II conducted by L. Ross et al.), some actor and observer Ss were given false feedback about the actor–Ss' performance in the experiment and then were informed during debriefing that the feedback had not been genuine. Other Ss, however, received a briefing about the falsity of the feedback before the task performance. These briefed Ss, like the debriefed Ss, subsequently made estimates of the actors' actual performance on the task that were significantly influenced in the direction of the false feedback. The briefed Ss did not, however, follow the debriefed Ss in making ability attributions to the actor in line with their performance estimates. Results cast doubt on the notion that attributional processing of the false information, as observed in the debriefing condition, is a necessary component of the perseverance effect. The idea that denied information and the denial may contribute independently to subsequent impressions is offered as an alternative explanation of briefing and debriefing phenomena.
Article
Full-text available
Using factual information of uncertain truth value as the stimulus material, previous investigators have found that repeated statements are rated more valid than non-repeated statements. Experiments 1 and 1A were designed to determine if this effect would also occur for opinion statements and for statements initially rated either true or false. Subjects were exposed to a 108-statement list one week and a second list of the same length a week later. This second list comprised some of the statements seen earlier plus some statements seen for the first time. Results suggested that all types of repeated statements are rated as more valid than their non-repeated counterparts. Experiment 2 demonstrated that the validity-enhancing effect of repetition does not occur in subject domains about which a person claims not to be knowledgeable. From the results of both studies we concluded that familiarity is a basis for the judged validity of statements. The relation between this phenomenon and the judged validity of decisions and predictions was also discussed.
Article
Full-text available
Subjects rated how certain they were that each of 60 statements was true or false. The statements were sampled from areas of knowledge including politics, sports, and the arts, and were plausible but unlikely to be specifically known by most college students. Subjects gave ratings on three successive occasions at 2-week intervals. Embedded in the list were a critical set of statements that were either repeated across the sessions or were not repeated. For both true and false statements, there was a significant increase in the validity judgments for the repeated statements and no change in the validity judgments for the non-repeated statements. Frequency of occurrence is apparently a criterion used to establish the referential validity of plausible statements.
Article
Full-text available
Prior research has shown that repeating a statement results in an increase in its judged validity. One explanation that has been advanced to account for this finding is that familiarity is used as a basis to assess validity. Another explanation is that when subjects dissociate a statement from its true source, that statement is judged to be more valid. According to this latter explanation, repeated statements tend to be seen as more valid because each presentation is perceived as coming from different sources. Hence repeated statements benefit from perceived convergent validity. Experiment 1 tested these two explanations by presenting 40 statements during one session and repeating 20 of them amid 20 new ones either 1, 3, or 5 weeks later. A causal analysis lent support to both explanations, although source dissociation was found not to be a necessary condition for the validity-enhancing effect of repetition. Experiments 2 and 3 were designed to examine the boundary conditions for the influence of repetition on perceived validity. In Experiment 2 half of the subjects heard sentences about China, whereas the other half of the subjects heard control sentences. A week later (Week 2) one-third of the subjects in each of these two groups read passages about the specific topics covered by the China sentences, one-third read about other topics dealing with China, and one-third read control passages having nothing to do with China. One week later all subjects gave validity ratings to various sentences pertaining to China, including those seen during Week 1. The results indicated that hearing any passage having to do with China during Week 2 caused subjects to increase their judged validity of the China sentences originally seen during Week 1. In Experiment 3 some sentences were repeated each week over a 6-week period. The difference in rated validity between the repeated and nonrepeated statements was manifested by the second week and persisted during subsequent repetitions. The results of the three experiments were compared to findings in the semantic priming literature.
Article
Full-text available
Two experiments demonstrated that self-perceptions and social perceptions may persevere after the initial basis for such perceptions has been completely discredited. In both studies subjects first received false feedback, indicating that they had either succeeded or failed on a novel discrimination task, and then were thoroughly debriefed concerning the predetermined and random nature of this outcome manipulation. In Experiment 2, both the initial outcome manipulation and subsequent debriefing were watched and overheard by observers. Both actors and observers showed substantial perseverance of initial impressions concerning the actors' performance and abilities following a standard "outcome" debriefing. "Process" debriefing, in which explicit discussion of the perseverance process was provided, generally proved sufficient to eliminate erroneous self-perceptions. Biased attribution processes that might underlie perseverance phenomena and the implications of the present data for the ethical conduct of deception research are discussed.
Article
Full-text available
Subjects read a series of behaviors with instructions to form an impression of the person who performed them. In some conditions, subjects were told after reading the behaviors that an administrative error had been made and that certain ones should be disregarded. If the behaviors that subjects were told to disregard were descriptively unrelated to the other behaviors in the series, their influence on trait judgments was greatest when they were presented last. If the behaviors to be disregarded were descriptively inconsistent with the remaining behaviors, their influence was greatest when they were first in the series but subjects were not told to ignore them until after the remaining ones were presented. If the to-be-disregarded behaviors implied the same trait as the remaining ones, they had an influence on trait judgments in both of these conditions. All of these effects were consistently greater when the trait implied by the behaviors to be disregarded was favorable. In fact, when behaviors implied an unfavorable trait, instructions to disregard them often led the behaviors to have a contrast effect on trait judgments. Subjects appeared to base their judgments on implications of the cognitive representations they formed of the person at the time information was first presented. They then adjusted these subjective judgments at the time they reported them to compensate for the influence they perceived the to-be-disregarded information to have, making relatively greater adjustments when this information was unfavorable than when it was favorable. An additional experiment provided further evidence of adjustment processes, but it indicated that subjects also base their judgments on a partial review of the information they have received. A general model of person memory and judgment proposed by Wyer and Unverzagt (1985) provided a satisfactory account of the results of these studies.
Article
Full-text available
Subjects were given descriptions of a person's behavior with instructions to form an impression of the person. The first behaviors in the series had implications for one trait, and the last behaviors had implications for a second trait that differed in favorableness from the first. After receiving the first set of behaviors, some subjects were told that an error had been made and that the behaviors should be disregarded. Other subjects were told instead to disregard the last behaviors presented. To-be-disregarded behaviors that occurred first in the series had little influence on judgements of either the specific trait to which they pertained or judgements of the target's likeableness, although subjects could recall these behaviors quite well. In contrast, to-be-disregarded behaviors at the end of the series did have an influence on specific trait judgements of the target, although they were recalled relatively poorly. These and other results were accounted for in terms of the general model of person memory and social information processing proposed by Wyer and Srull.
Article
Full-text available
3 separate experiments were done at different universities to test the hypothesis that a persuasive communication that argues strongly against an opinion to which the audience is committed will be more effective if the audience is somewhat distracted from the communication so that they cannot adequately counterargue while listening. 2 films were prepared, each containing the same communication arguing strongly against fraternities. One was a normal film of the speaker making a speech. The other film, with the same sound track, had an utterly irrelevant and highly distracting visual presentation. Fraternity men were more influenced by the distracting presentation of the persuasive communication than by the ordinary version. There was no difference between the 2 versions for nonfraternity men. In general, the hypothesis concerning the effect of distraction was supported.
Article
Social interaction imposes a variety of attentional demands on those who attempt it. Such cognitively busy persons often fail to use contextual information to correct the impressions they form of others. The 4 experiments reported here examined the corrigibility of this effect. Although formerly busy perceivers were able to correct their mistaken impressions retroactively (Experiment 1), such retroactive correction was not inevitable (Experiment 2). In addition, when perceivers were able to correct their original impressions retroactively, they were still unable to correct subsequent inferences that had been biased by those original impressions (Experiments 3 and 4). As such, perceivers were occasionally able to overcome the primary, but not the subsidiary, effects of cognitive busyness. The results are discussed in terms of the metastasis of false knowledge.
Article
[Brackets] enclose editorial explanations. Small ·dots· enclose material that has been added, but can be read as though it were part of the original text. Occasional • bullets, and also indenting of passages that are not quotations, are meant as aids to grasping the structure of a sentence or a thought. The basis from which this text was constructed was the translation by John Cottingham (Cambridge University Press), which is strongly recommended. Each four-point ellipsis . . . . indicates the omission of a short passage that seemed to be more trouble than it is worth. Longer omissions are reported between square brackets in normal-sized type.—Descartes wrote this work in Latin. A French translation appeared during his life-time, and he evidently saw and approved some of its departures from or additions to the Latin. A few of these will be incorporated, usually without sign-posting, in the present version.—When a section starts with a hook to something already said, it's a hook to • the thought at the end of the preceding section, not to • its own heading. In the definitive Adam and Tannery edition of Descartes's works, and presumably also in the first printing of the Principles, those items were not headings but marginal summaries.
Article
Lying and lie detection are the two components that, together, make up the exchange called the “communication of deception.” Deception is an act that is intended to foster in another person a belief or understanding that the deceiver considers false. This chapter presents a primarily psychological point of view and a relatively fine-grained analysis of the verbal and nonverbal exchange between the deceiver and the lie detector. The chapter discusses the definition of deception and describes the deceiver's perspective in lie detection, including the strategies of deception and the behaviors associated with lie-telling. The lie detector's perspective is also discussed, covering behaviors associated with judgments of deception and strategies of lie detection. Finally, the chapter discusses the outcomes of the deceptive communication process—that is, the accuracy of lie detection—and explores methodological issues, channel effects in the detection of deception, and other factors affecting the accuracy of lie detection.
Article
Much of the information we encounter every day appears in settings that are clearly marked as fictional (e.g., novels, television, movies). Our studies explore the extent to which information acquired through these fictional worlds is incorporated into real-world knowledge. We used short stories to introduce fictional facts. The first experiment demonstrated that fictional information penetrates into judgments about beliefs, suggesting incorporation. The second experiment demonstrated, nonetheless, that representations of fictional information retain features of compartmentalization. We suggest, accordingly, that readers create hybrid representations of fictional information.
Article
We wanted to bring together under one cover an integrated treatment of a wide range of research, theory, and application in the realm of social influence. The result is a book that covers all the major social influence topics, including persuasion, compliance, conformity, obedience, dissonance and self-attribution, conditioning and social learning, attitude-behavior relations, attitude involvement, prejudice, nonverbal communication, and even subliminal influence. The coverage is wide, but also integrated through the use of the recurring theme of "attitude systems" in which attitudes, cognitions, behaviors, and intentions can all be affected by external agents of influence, and all can be influenced internally by each other. We also devote two full chapters to applications of social influence principles that we see as decided "growth areas" now and in the near future. One applications chapter focuses on influence in the legal system and the other on improving the quality of life (the environment, personal health, and mental well-being). This book is intended primarily for undergraduates.
Article
We define mental contamination as the process whereby a person has an unwanted response because of mental processing that is unconscious or uncontrollable. This type of bias is distinguishable from the failure to know or apply normative rules of inference and can be further divided into the unwanted consequences of automatic processing and source confusion, which is the confusion of 2 or more causes of a response. Mental contamination is difficult to avoid because it results from both fundamental properties of human cognition (e.g., a lack of awareness of mental processes) and faulty lay beliefs about the mind (e.g., incorrect theories about mental biases). People's lay beliefs determine the steps they take (or fail to take) to correct their judgments and thus are an important but neglected source of biased responses. Strategies for avoiding contamination, such as controlling one's exposure to biasing information, are discussed.
Contrast analysis: Focused comparisons in the analysis of variance
  • R Rosenthal
  • R L Rosnow
Rosenthal, R., & Rosnow, R. L. (1985). Contrast analysis: Focused comparisons in the analysis of variance. Cambridge, England: Cambridge University Press.
Psychology of reasoning: Structure and content
  • P C Wason
  • P N Johnson-Laird
Wason, P. C., & Johnson-Laird, P. N. (1972). Psychology of reasoning: Structure and content. Cambridge, MA: Harvard University Press.