Chapter

Philosophical and Linguistic Approaches to Beliefs


Abstract

Beliefs play a central role in our lives. They lie at the heart of what makes us human, they shape the organization and functioning of our minds, they define the boundaries of our culture, and they guide our motivation and behavior. Given their central importance, researchers across a number of disciplines have studied beliefs, leading to results and literatures that do not always interact. The Cognitive Science of Belief aims to integrate these disconnected lines of research to start a broader dialogue on the nature, role, and consequences of beliefs. It tackles timeless questions, as well as applications of beliefs that speak to current social issues. This multidisciplinary approach to beliefs will benefit graduate students and researchers in cognitive science, psychology, philosophy, political science, economics, and religious studies.

Article
A single exposure to statements is typically enough to increase their perceived truth. This Truth-by-Repetition (TBR) effect has long been assumed to occur only with statements whose truth value is unknown to participants. Contrary to this hypothesis, recent research has found that statements contradicting participants' prior knowledge (as established from a first sample of participants) show a TBR effect following their repetition (in a second, independent sample of participants). So far, however, attempts to find a TBR effect for blatantly false (i.e., highly implausible) statements have failed. Here, we reasoned that highly implausible statements such as "Elephants run faster than cheetahs" may show repetition effects, provided a sensitive truth measure is used and statements are repeated more than just once. In a preregistered experiment, participants judged on a 100-point scale the truth of highly implausible statements that were either new to them or had been presented five times before judgment. We observed an effect of repetition: repeated statements were judged more true than new ones, although all statements were judged below the scale midpoint. Exploratory analyses additionally showed that about half the participants exhibited no effect of repetition, or even a reversed one. The results provide the first empirical evidence that repetition can increase perceived truth even for highly implausible statements, although not equally so for all participants and not to the point of making the statements look true.
Book
Are we rational creatures? Do we have free will? Can we ever know ourselves? These and other fundamental questions have been discussed by philosophers over millennia. But recent empirical findings in psychology and neuroscience suggest we should reconsider them. This textbook provides an engrossing overview of contemporary debates in the philosophy of psychology, exploring the ways in which the interaction and collaboration between psychologists and philosophers contribute to a better understanding of the human mind, cognition and behaviour. Miyazono and Bortolotti discuss pivotal studies in cognitive psychology, social psychology, developmental psychology, evolutionary psychology, clinical psychology and neuroscience, and their implications for philosophy. Combining the latest philosophical and psychological research with an accessible style, Philosophy of Psychology is a crucial resource for students from either discipline. It is the most up-to-date text for modules on philosophy of mind, philosophy of psychology, philosophy of mental health and philosophy of cognitive science.
Article
We synthesize a burgeoning literature investigating why people believe and share false or highly misleading news online. Contrary to a common narrative whereby politics drives susceptibility to fake news, people are ‘better’ at discerning truth from falsehood (despite greater overall belief) when evaluating politically concordant news. Instead, poor truth discernment is associated with lack of careful reasoning and relevant knowledge, and the use of heuristics such as familiarity. Furthermore, there is a substantial disconnect between what people believe and what they share on social media. This dissociation is largely driven by inattention, more so than by purposeful sharing of misinformation. Thus, interventions can successfully nudge social media users to focus more on accuracy. Crowdsourced veracity ratings can also be leveraged to improve social media ranking algorithms.
Article
In recent years, there has been a great deal of concern about the proliferation of false and misleading news on social media. Academics and practitioners alike have asked why people share such misinformation, and sought solutions to reduce the sharing of misinformation. Here, we attempt to address both of these questions. First, we find that the veracity of headlines has little effect on sharing intentions, despite having a large effect on judgments of accuracy. This dissociation suggests that sharing does not necessarily indicate belief. Nonetheless, most participants say it is important to share only accurate news. To shed light on this apparent contradiction, we carried out four survey experiments and a field experiment on Twitter; the results show that subtly shifting attention to accuracy increases the quality of news that people subsequently share. Together with additional computational analyses, these findings indicate that people often share misinformation because their attention is focused on factors other than accuracy, and they therefore fail to implement a strongly held preference for accurate sharing. Our results challenge the popular claim that people value partisanship over accuracy, and provide evidence for scalable attention-based interventions that social media platforms could easily implement to counter misinformation online.
Article
Can delusions, in the context of psychosis, enhance a person’s sense of meaningfulness? The case described here suggests that, in some circumstances, they can. This prompts further questions into the complexities of delusion as a lived phenomenon, with important implications for the clinical encounter. While assumptions of meaninglessness are often associated with concepts of ‘disorder’, ‘harm’ and ‘dysfunction’, we suggest that meaning can nonetheless be found within what is commonly taken to be incomprehensible or even meaningless. A phenomenological and value-based approach appears indispensable for clinicians facing the seemingly paradoxical coexistence of harmfulness and meaningfulness.
Article
Associative accounts suggest that implicit (indirectly measured) evaluations are sensitive primarily to co-occurrence information (e.g., pairings of gorges with positive experiences) and are represented associatively (e.g., gorge–nice). By contrast, recent propositional accounts have argued that implicit evaluations are also responsive to relational information (e.g., gorges causing vs. preventing ennui) and are represented propositionally (e.g., "I find gorges fascinating"). In a review of 30 empirical papers involving exposure to contradictory co-occurrence information and relational information, we found overwhelming evidence for the latter dominating the updating of implicit evaluations, supporting the propositional perspective. However, unlike explicit evaluations, implicit evaluations seem recalcitrant in the face of relational information that requires retrospective revaluation of already encoded co-occurrence information. These findings may be jointly explained by a "common currency" hypothesis under which implicit evaluations emerge from compressed summary representations, which are sensitive to relational information but are not fully propositional.
Article
Although conspiracy theories are endorsed by about half the population and occasionally turn out to be true, they are more typically false beliefs that, by definition, have a paranoid theme. Consequently, psychological research to date has focused on determining whether there are traits that account for belief in conspiracy theories (BCT) within a deficit model. Alternatively, a two-component, socio-epistemic model of BCT is proposed that seeks to account for the ubiquity of conspiracy theories, their variance along a continuum, and the inconsistency of research findings likening them to psychopathology. Within this model, epistemic mistrust is the core component underlying conspiracist ideation that manifests as the rejection of authoritative information, focuses the specificity of conspiracy theory beliefs, and can sometimes be understood as a sociocultural response to breaches of trust, inequities of power, and existing racial prejudices. Once voices of authority are negated due to mistrust, the resulting epistemic vacuum can send individuals “down the rabbit hole” looking for answers where they are vulnerable to the biased processing of information and misinformation within an increasingly “post-truth” world. The two-component, socio-epistemic model of BCT argues for mitigation strategies that address both mistrust and misinformation processing, with interventions for individuals, institutions of authority, and society as a whole.
Article
Research on the capacity to understand others' minds has tended to focus on representations of beliefs, which are widely taken to be among the most central and basic theory of mind representations. Representations of knowledge, by contrast, have received comparatively little attention and have often been understood as depending on prior representations of belief. After all, how could one represent someone as knowing something if one doesn't even represent them as believing it? Drawing on a wide range of methods across cognitive science, we ask whether belief or knowledge is the more basic kind of representation. The evidence indicates that non-human primates attribute knowledge but not belief, that knowledge representations arise earlier in human development than belief representations, that the capacity to represent knowledge may remain intact in patient populations even when belief representation is disrupted, that knowledge (but not belief) attributions are likely automatic, and that explicit knowledge attributions are made more quickly than equivalent belief attributions. Critically, the theory of mind representations uncovered by these various methods exhibit a set of signature features clearly indicative of knowledge: they are not modality-specific, they are factive, they are not just true belief, and they allow for representations of egocentric ignorance. We argue that these signature features elucidate the primary function of knowledge representation: facilitating learning from others about the external world. This suggests a new way of understanding theory of mind, one that is focused on understanding others' minds in relation to the actual world, rather than independent from it.
Book
Ideally, we would have beliefs that satisfy norms of truth and rationality, as well as fostering the acquisition, retention and use of other relevant information. In reality, we have limited cognitive capacities and are subject to motivational biases on an everyday basis, and may also experience impairments in perception, memory, learning, and reasoning in the course of our lives. Such limitations and impairments give rise to distorted memory beliefs, confabulated explanations, elaborated delusional beliefs, motivated delusional beliefs, and optimistically biased beliefs. In the book, Bortolotti argues that some irrational beliefs qualify as epistemically innocent, where the notion of epistemic innocence captures the fact that in some contexts the adoption, maintenance or reporting of the beliefs delivers significant epistemic benefits that could not be easily attained otherwise. Epistemic innocence is a weaker notion than epistemic justification, as it does not imply that the epistemic benefits of the irrational belief outweigh its epistemic costs. However, it clarifies the relationship between the epistemic and psychological effects of irrational beliefs on agency. It is misleading to assume that epistemic rationality and psychological adaptiveness always go hand-in-hand, but also that there is a straightforward trade-off between them. Rather, epistemic irrationality can lead to psychological adaptiveness, and psychological adaptiveness in turn can support the attainment of epistemic goals. Recognising the circumstances in which irrational beliefs enhance or restore epistemic performance informs our mutual interactions and enables us to take measures to reduce their irrationality without undermining the conditions for epistemic success.
Article
Implicit evaluations (attitudes) are often described as resistant to change, especially when they were initially formed in a seemingly associative manner, such as via repeated evaluative pairings (REP), and new learning is created via propositional material, such as evaluative statements (ES). The present research (total N = 2,124) tested the responsiveness of implicit evaluations instantiated via REP to updating via different types of ES. In Experiment 1, initial learning was created via repeatedly pairing a novel target with strongly negative stimuli (screams) in an aversive REP (A-REP) task. Subsequent ES of opposing valence providing diagnostic information about the target's behavior substantially updated implicit (IAT) evaluations. In Experiment 2, behavioral ES resulted in successful updating after A-REP whether or not they provided an explanation for the initial A-REP learning. A previously unobtained result emerged in Experiment 3 showing that updating was durable even after 1 day. Finally, in Experiment 4, implicit evaluations were updated via diagnostic behavioral ES, but not via an ES instruction to suppose that different pairings had occurred during A-REP. Taken together, these experiments challenge associative theories of implicit evaluation by demonstrating that diagnostic behavioral statements can durably override the effects of initial learning on implicit evaluations, even if such initial learning is aversive and involves direct experience with stimulus pairings. Moreover, by showing that verbal manipulations based on diagnostic behavior but not a mere supposition instruction had impact, the present project advances theory by starting to identify the nature of learning that can adaptively update social impressions.
Article
When subject to the choice-blindness effect, an agent gives reasons for making choice B, moments after making the alternative choice A. Choice blindness has been studied in a variety of contexts, from consumer choice and aesthetic judgement to moral and political attitudes. The pervasiveness and robustness of the effect is regarded as powerful evidence of self-ignorance. Here we compare two interpretations of choice blindness. On the choice error interpretation, when the agent gives reasons she is in fact wrong about what her choice is. On the choice change interpretation, when the agent gives reasons she is right about what her choice is, but she does not realise that her choice has changed. In this paper, we spell out the implications of the two interpretations of the choice-blindness effect for self-ignorance claims and offer some reasons to prefer choice change to choice error.
Article
Deceptive claims surround us, embedded in fake news, advertisements, political propaganda, and rumors. How do people know what to believe? Truth judgments reflect inferences drawn from three types of information: base rates, feelings, and consistency with information retrieved from memory. First, people exhibit a bias to accept incoming information, because most claims in our environments are true. Second, people interpret feelings, like ease of processing, as evidence of truth. And third, people can (but do not always) consider whether assertions match facts and source information stored in memory. This three-part framework predicts specific illusions (e.g., truthiness, illusory truth), offers ways to correct stubborn misconceptions, and suggests the importance of converging cues in a post-truth world in which falsehoods travel further and faster than the truth.
Article
Most accounts of behavior in nonhuman animals assume that they make choices to maximize expected reward value. However, model-free reinforcement learning based on reward associations cannot account for choice behavior in transitive inference paradigms. We manipulated the amount of reward associated with each item of an ordered list, so that maximizing expected reward value was always in conflict with decision rules based on the implicit list order. Under such a schedule, model-free reinforcement algorithms cannot achieve high levels of accuracy, even after extensive training. Monkeys nevertheless learned to make correct rule-based choices. These results show that monkeys’ performance in transitive inference paradigms is not driven solely by expected reward and that appropriate inferences are made despite discordant reward incentives. We show that their choices can be explained by an abstract, model-based representation of list order, and we provide a method for inferring the contents of such representations from observed data.
Article
When tested immediately, evaluative statements (ES; verbal information about upcoming categories and their positive/negative attributes) surprisingly shift implicit (IAT) attitudes more effectively than repeated evaluative pairings (REP; actual pairing of category members with positive/negative attributes). The present project (total N = 5,317) explored the shared and unique features of these two attitude change modalities by probing (a) commonalities visible in the extent to which propositional inferences created by ES infiltrate REP learning and (b) differences visible in performance of ES and REP learning over time. In REP, the number of stimulus pairings (varied parametrically from 4 to 24) produced no effect (Study 1), but verbally describing stimulus pairings as diagnostic versus nondiagnostic did modulate learning (Study 2), suggesting that even REP give rise to some form of propositional representation. On the other hand, learning from ES decayed quickly, whereas learning from REP remained stable over time both within an immediate session of testing (Study 3) and following a 15-min delay (Study 4), revealing a difference between these two forms of learning. Beyond their theoretical import, these findings may inform interventions designed to produce short- and long-term change in implicit attitudes.
Article
The 2016 U.S. presidential election brought considerable attention to the phenomenon of “fake news”: entirely fabricated and often partisan content that is presented as factual. Here we demonstrate one mechanism that contributes to the believability of fake news: fluency via prior exposure. Using actual fake-news headlines presented as they were seen on Facebook, we show that even a single exposure increases subsequent perceptions of accuracy, both within the same session and after a week. Moreover, this “illusory truth effect” for fake-news headlines occurs despite a low level of overall believability and even when the stories are labeled as contested by fact checkers or are inconsistent with the reader’s political ideology. These results suggest that social media platforms help to incubate belief in blatantly false news stories and that tagging such stories as disputed is not an effective solution to this problem. It is interesting, however, that we also found that prior exposure does not impact entirely implausible statements (e.g., “The earth is a perfect square”). These observations indicate that although extreme implausibility is a boundary condition of the illusory truth effect, only a small degree of potential plausibility is sufficient for repetition to increase perceived accuracy. As a consequence, the scope and impact of repetition on beliefs is greater than has been previously assumed.
Article
Social media sites are often blamed for exacerbating political polarization by creating "echo chambers" that prevent people from being exposed to information that contradicts their preexisting beliefs. We conducted a field experiment that offered a large group of Democrats and Republicans financial compensation to follow bots that retweeted messages by elected officials and opinion leaders with opposing political views. Republican participants expressed substantially more conservative views after following a liberal Twitter bot, whereas Democrats' attitudes became slightly more liberal after following a conservative Twitter bot, although this effect was not statistically significant. Despite several limitations, this study has important implications for the emerging field of computational social science and ongoing efforts to reduce political polarization online.
Article
For two ideally rational agents, does learning a finite amount of shared evidence necessitate agreement? No. But does it at least guard against belief polarization, the case in which their opinions get further apart? No. OK, but are rational agents guaranteed to avoid polarization if they have access to an infinite, increasing stream of shared evidence? No.
Article
Public discussions of political and social issues are often characterized by deep and persistent polarization. In social psychology, it’s standard to treat belief polarization as the product of epistemic irrationality. In contrast, we argue that the persistent disagreement that grounds political and social polarization can be produced by epistemically rational agents, when those agents have limited cognitive resources. Using an agent-based model of group deliberation, we show that groups of deliberating agents using coherence-based strategies for managing their limited resources tend to polarize into different subgroups. We argue that using that strategy is epistemically rational for limited agents. So even though group polarization looks like it must be the product of human irrationality, polarization can be the result of fully rational deliberation with natural human limitations.
Article
It has been said that the development of an understanding of zero by society initiated a major intellectual advance in humans, and we have been thought to be unique in this understanding. Although recent research has shown that some other vertebrates understand the concept of the "empty set," Howard et al. now show that an understanding of this concept is present in untrained honey bees (see the Perspective by Nieder). This finding suggests that such an understanding has evolved independently in distantly related species that deal with complexity in their environments, and that it may be more widespread than previously appreciated. Science, this issue p. 1124; see also p. 1069.
Article
Studies on evaluative conditioning show that a change in liking can occur whenever stimuli are paired. Such instances of attitude change are known to depend on the type of relation established between stimuli (e.g., "Bob is a friend of Mike" vs. "Bob is an enemy of Mike"). Research has so far only compared assimilative and contrastive relational qualifiers (e.g., friend vs. enemy). For the first time, we compared the effect of non-oppositional qualifiers on attitude change in an EC procedure (e.g., "Bob causes Positive Outcomes" vs. "Bob predicts Positive Outcomes"). Differential effects of non-oppositional relational qualifiers were observed on explicit and implicit evaluations. We discuss the implications of our findings for attitude research, theories of attitude change, and for optimizing evaluative conditioning for changing attitudes in applied settings.
Article
It is often assumed that, once established, spontaneous or implicit evaluations are resistant to immediate change. Recent research contradicts this theoretical stance, showing that a person's implicit evaluations of an attitude object can be changed rapidly in the face of new counter-attitudinal information. Importantly, it remains unknown whether such changes can also occur for deep-rooted implicit evaluations of well-known attitude objects. We address this question by examining whether the acquisition of negative information changes implicit evaluations of a well-known positive historical figure: Mahatma Gandhi. We report three experiments showing rapid changes in implicit evaluations of Gandhi as measured with an Affect Misattribution Procedure and Evaluative Priming Task but not with an Implicit Association Test (IAT). These findings suggest that implicit evaluations based on deep-rooted representations are subject to rapid change in the face of expectancy-violating information, while pointing to limitations of the IAT for assessing such changes.
Article
In this paper I discuss the costs and benefits of confabulation, focusing on the type of confabulation people engage in when they offer explanations for their attitudes and choices. What makes confabulation costly? In the philosophical literature confabulation is thought to undermine claims to self-knowledge. I argue that when people confabulate they do not necessarily fail at mental-state self-attributions, but offer ill-grounded explanations which often lead to the adoption of other ill-grounded beliefs. What, if anything, makes confabulation beneficial? As people are unaware of the information that would make their explanations accurate, they are not typically in a position to acknowledge their ignorance or provide better-grounded explanations for their attitudes and choices. In such cases, confabulating can have some advantages over offering no explanation because it makes a distinctive contribution to people’s sense of themselves as competent and largely coherent agents. This role of ill-grounded explanations could not be as easily played by better-grounded explanations should these be available. In the end, I speculate about the implications of this conclusion for attempting to eliminate or reduce confabulation.
Article
Dispositionalism about belief has had a recent resurgence. In this paper we critically evaluate a popular dispositionalist program pursued by Eric Schwitzgebel. Then we present an alternative: a psychofunctional, representational theory of belief. This theory of belief has two main pillars: that beliefs are relations to structured mental representations, and that the relations are determined by the generalizations under which beliefs are acquired, stored, and changed. We end by describing some of the generalizations regarding belief acquisition, storage, and change.
Chapter
Is there an English word that ends in ‘MT’? (If you are stumped, think about it for a moment and then read the last word of this abstract.) Before you figured out (or read) the answer to that question, did you possess the information that the word that is the answer is an English word that ends in ‘MT’? In a sense, yes: the word was in your vocabulary. But in another sense, no: perhaps you weren’t able to immediately answer the puzzle question. For finite agents, this phenomenon is unavoidable. We often possess a piece of information for some purposes (or with respect to some elicitation conditions) but not for other purposes (or conditions). This suggests that a mental state be represented not by a single batch of information, but rather by an ‘access table’—a function from purposes to batches of information. This representation makes clear what happens during certain ‘aha!’ moments in reasoning. It also allows us to model agents who exhibit imperfect recall, confusion, and mental fragmentation. And it sheds light on the difference between propositional knowledge and knowledge-how. The upshot is that representing mental states using access tables is more fruitful than one might have dreamt.
Chapter
Belief storage is often modeled as having the structure of a single, unified web. This model of belief storage is attractive and widely assumed because it appears to provide an explanation of the flexibility of cognition and the complicated dynamics of belief revision. However, when one scrutinizes human cognition, one finds strong evidence against a unified web of belief and for a fragmented model of belief storage. This chapter uses the best available evidence from cognitive science to develop this fragmented model into a nascent theory of the cognitive architecture of belief storage.
Book
Delusions are a common symptom of schizophrenia, dementia and other psychiatric disorders. Though delusion is commonly defined as a false and irrational belief, there is currently a lively debate about whether delusions are really beliefs and indeed, whether they are even irrational. The book is an interdisciplinary exploration of the nature of delusions. It brings together the psychological literature on the aetiology and the behavioural manifestations of delusions, and the philosophical literature on belief ascription and rationality. The thesis of the book is that delusions are continuous with ordinary beliefs, a thesis that could have not only significant theoretical implications for debates in the philosophy of mind and psychology, but also practical implications for psychiatric classification and the clinical treatment of subjects with delusions. Based on recent work in philosophy of mind, cognitive psychology and psychiatry, the book offers a comprehensive review of the philosophical issues raised by the psychology of normal and abnormal cognition, defends the doxastic conception of delusions, and develops a theory about the role of judgements of rationality and self-knowledge in belief ascription.
Article
Explicit (directly measured) evaluations are widely assumed to be sensitive to logical structure. However, whether implicit (indirectly measured) evaluations are uniquely sensitive to co-occurrence information or can also reflect logical structure has been a matter of theoretical debate. To test these competing ideas, participants (N = 3928) completed a learning phase consisting of a series of two-step trials. In step 1, one or more conditional statements (A → B) containing novel targets co-occurring with valenced adjectives (e.g., “if you see a blue square, Ibbonif is sincere”) were presented. In step 2, a disambiguating stimulus, e.g., blue square (A) or gray blob (¬A) was revealed. Co-occurrence information, disambiguating stimuli, or both were varied between conditions to enable investigating the unique and joint effects of each. Across studies, the combination of conditional statements and disambiguating stimuli licensed different normatively accurate inferences. In Study 1, participants were prompted to use modus ponens (inferring B from A → B and A). In Studies 2–4, the information did not license accurate inferences, but some participants made inferential errors: affirming the consequent (inferring A from A → B and B; Study 2) or denying the antecedent (inferring ¬B from A → B and ¬A; Studies 3A, 3B, and 4). Bayesian modeling using ordinal constraints on condition means yielded consistent evidence for the sensitivity of both explicit (self-report) and implicit (IAT and AMP) evaluations to the (correctly or erroneously) inferred truth value of propositions. Together, these data suggest that implicit evaluations, similar to their explicit counterparts, can reflect logical structure.
Book
Propositional attitude reports are sentences built around clause-embedding psychological verbs, like Kim believes that it's raining or Kim wants it to rain. These interact in many intricate ways with a wide variety of semantically relevant grammatical phenomena, and represent one of the most important topics at the interface of linguistics and philosophy, as their study provides insight into foundational questions about meaning. This book provides a bird's-eye overview of the grammar of propositional attitude reports, synthesizing the key facts, theories, and open problems in their analysis. Couched in the theoretical framework of generative grammar and compositional truth-conditional semantics, it places emphasis on points of intersection between propositional attitude reports and other important topics in semantic and syntactic theory. With discussion points, suggestions for further reading and a useful guide to symbols and conventions, it will be welcomed by students and researchers wishing to explore this fertile area of study.
Book
Moral systems, like normative systems more broadly, involve complex mental representations. Rational Rules offers an account of the acquisition of key aspects of normative systems in terms of general-purpose rational learning procedures. In particular, it offers statistical learning accounts of: (1) how people come to think that a rule is act-based, that is, the rule prohibits producing certain consequences but not allowing such consequences to occur or persist; (2) how people come to expect that a new rule will also be act-based; (3) how people come to believe a principle of liberty, according to which whatever is not expressly prohibited is permitted; and (4) how people come to think that some normative claims hold universally while others hold only relative to some group. This provides an empiricist theory of a key part of moral acquisition, since the learning procedures are domain general. It also entails that crucial parts of our moral system enjoy rational credentials since the learning procedures are forms of rational inference. There is another sense in which rules can be rational—they can be effective for achieving our ends, given our ecological settings. Rational Rules argues that at least some central components of our moral systems are indeed ecologically rational: they are good at helping us attain common goals. In addition, the book argues that a basic form of rule representation brings motivation along automatically. Thus, part of the explanation for why we follow moral rules is that we are built to follow rules quite generally.
Article
Implicit impressions are often assumed to be difficult to update in light of new information. Even when an intervention appears to successfully change implicit evaluations, the effects have been found to be fleeting, reverting to baseline just hours or days later. Recent findings, however, show that two properties of new evidence—diagnosticity and believability—can result in very rapid implicit updating. In the current studies, we assessed the long-term effects of evidence possessing these two properties on implicit updating over periods of days, weeks, and months. Three studies assessed the malleability of implicit evaluations after memory consolidation (Study 1; N = 396) as well as the longer-term trajectories of implicit responses after exposure to new evidence about novel targets (Study 2; N = 375) and familiar ones (Study 3; N = 341). In contrast with recent work, our findings suggest that implicit impressions can exhibit both flexibility after consolidation and durability weeks or months later.
Article
I have often heard friends and colleagues ask the question "why do people vote against their own interests?" Implicit in this question is a view that people are gullible: easily persuaded not just by political propaganda but also by hucksters, prone to believing fake news, and vulnerable to any number of scams. That is, that people are too trusting. In his book Not Born Yesterday: The Science of Who We Trust and What We Believe, Hugo Mercier takes the opposite view: the problem is not that humans are too trusting but that we are not trusting enough. Drawing on his own and others' work, Mercier argues that as a hypersocial species humanity has evolved a suite of cognitive mechanisms for weeding out unreliable information from others. In the first part of the book, Mercier considers the evidence for the case that people are utterly gullible: the Asch conformity experiments, the proliferation of fake news, and the phenomenon of flat-earthers, alongside evidence from other experiments and case studies. The second part of the book documents the cognitive mechanisms that enable us to be open to information from others while filtering out unreliable communication. Mercier argues these mechanisms work well in small-scale collectives but have adapted poorly to our contemporary large-scale, complex societies. He rounds out the book with the argument that rather than gullibility being our utmost concern, it is cynicism that holds societies back from enjoying the benefits of cooperation.
Article
The eminent role of processing fluency in judgment and decision-making is undisputed. Not only is fluency affected by sources as diverse as stimulus repetition or visual clarity, but it also has an impact on outcomes as diverse as liking for a stimulus or the subjective validity of a statement. Although several studies indicate that sources and outcomes are widely interchangeable, recent research suggests that judgments are differentially affected by conceptual and perceptual fluency, with stronger effects of conceptual (vs. perceptual) fluency on judgments of truth. Here, we propose a fluency-specificity hypothesis according to which conceptual fluency is more informative for content-related judgments, but perceptual fluency is more informative for judgments related to perception. Two experimental studies in which perceptual and conceptual fluency were manipulated orthogonally show the superiority of content repetition on judgments of truth but the superiority of visual contrast on aesthetic evaluations. The theoretical implications are discussed.
Article
Repetition increases the likelihood that a statement will be judged as true. This illusory truth effect is well established; however, it has been argued that repetition will not affect belief in unambiguous statements. When individuals are faced with obviously true or false statements, repetition should have no impact. We report a simulation study and a preregistered experiment that investigate this idea. Contrary to many intuitions, our results suggest that belief in all statements is increased by repetition. The observed illusory truth effect is largest for ambiguous items, but this can be explained by the psychometric properties of the task, rather than an underlying psychological mechanism that blocks the impact of repetition for implausible items. Our results indicate that the illusory truth effect is highly robust and occurs across all levels of plausibility. Therefore, even highly implausible statements will become more plausible with enough repetition.
Chapter
Optimistically biased beliefs are beliefs about oneself that are more positive than is warranted by the evidence. Optimistically biased beliefs are the result of the influence of cognitive and motivational factors on people’s capacity to acquire, retrieve, and use information about themselves, and they resist counterevidence due to biases in belief updating. From a psychological point of view, optimistically biased beliefs contribute positively to subjective wellbeing, mental health, resilience, motivation, caring behaviour, and productivity. This chapter argues that optimistically biased beliefs also have significant epistemic benefits that could not be easily attained otherwise. In particular, they enhance socialization, leading to both exchanging information with one’s peers and receiving feedback from them, and they support one’s sense of self as that of a competent, largely coherent, and effective agent, helping sustain one’s motivation in the pursuit of one’s goals.
Article
More than a half-century ago, the ‘cognitive revolution’, with the influential tenet ‘cognition is computation’, launched the investigation of the mind through a multidisciplinary endeavour called cognitive science. Despite significant diversity of views regarding its definition and intended scope, this new science, explicitly named in the singular, was meant to have a cohesive subject matter, complementary methods and integrated theories. Multiple signs, however, suggest that over time the prospect of an integrated cohesive science has not materialized. Here we investigate the status of the field in a data-informed manner, focusing on four indicators, two bibliometric and two socio-institutional. These indicators consistently show that the devised multi-disciplinary program failed to transition to a mature inter-disciplinary coherent field. Bibliometrically, the field has been largely subsumed by (cognitive) psychology, and educationally, it exhibits a striking lack of curricular consensus, raising questions about the future of the cognitive science enterprise. Núñez et al. use bibliometric and socio-institutional indicators to show that over the years, cognitive science has failed to transition to a mature, coherent, interdisciplinary field.
Article
Transitive inference (TI) is a form of logical reasoning that involves using known relationships to infer unknown relationships (A > B; B > C; then A > C). TI has been found in a wide range of vertebrates but not in insects. Here, we test whether Polistes dominula and Polistes metricus paper wasps can solve a TI problem. Wasps were trained to discriminate between five elements in series (A+B-, B+C-, C+D-, D+E-), then tested on novel, untrained pairs (B versus D). Consistent with TI, wasps chose B more frequently than D. Wasps organized the trained stimuli into an implicit hierarchy and used TI to choose between untrained pairs. Species that form social hierarchies like Polistes may be predisposed to spontaneously organize information along a common underlying dimension. This work contributes to a growing body of evidence that the miniature nervous system of insects does not limit sophisticated behaviours.
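The core computation attributed to the wasps, chaining trained adjacent pairs into a single ranked series and then comparing untrained, non-adjacent items, can be sketched in a few lines. The function names below are illustrative, not from the study:

```python
# Sketch of transitive inference: chain adjacent "winner beats loser"
# premises into one linear order, then compare untrained pairs.

def build_order(premises):
    """Chain premises given in series order (A>B, B>C, ...) into a ranked list."""
    order = list(premises[0])           # seed with the first pair
    for winner, loser in premises[1:]:
        if order[-1] == winner:         # each premise extends the chain
            order.append(loser)
    return order

def prefer(order, x, y):
    """Return whichever of x and y ranks higher in the inferred hierarchy."""
    return x if order.index(x) < order.index(y) else y

premises = [("A", "B"), ("B", "C"), ("C", "D"), ("D", "E")]
order = build_order(premises)           # ['A', 'B', 'C', 'D', 'E']
print(prefer(order, "B", "D"))          # the untrained test pair: B wins
```

Note that B and D were never paired during training; the preference for B falls out purely from their positions in the inferred linear order.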
Article
One of the most significant departures from conventional inoculation theory is its intentional application for individuals already “infected”—that is, inoculation not as a preemptive strategy to protect existing positions from future challenges, but instead, inoculation as a means to change a position (e.g., from negative to positive) and to protect the changed position against future challenges. The issue is important for persuasion scholarship in general, as theoretical boundary conditions help at each stage of persuasion research development, serving as a guide for literature review, analysis, synthesis, research design, interpretation, theory building, and so on. It is an important issue for inoculation theory and resistance to influence research, specifically, for it gets at the very heart—and name and foundation—of inoculation theory. This article offers a theoretical analysis of inoculation theory used as both prophylactic and therapeutic interventions and concludes with a set of recommendations for inoculation theory scholarship moving forward.
Article
In Call’s (2004) 2-cups task, widely used to explore logical and causal reasoning across species and early human development, a reward is hidden in one of two cups, one is shown to be empty, and successful subjects search for the reward in the other cup. Infants as young as 17 months and some individuals of almost all species tested succeed. Success may reflect logical, propositional thought and working through a disjunctive syllogism (A or B; not A, therefore B). It may also reflect appreciation of the modal concepts “necessity” and “possibility”, and the epistemic concept “certainty”. Mody and Carey’s (2016) results on 2-year-old children with 3- and 4-cups versions of this task converge with studies on apes in undermining this rich interpretation of success. In the 3-cups version, one reward is hidden in a single cup, another in one of two other cups, and the participant is given one choice, thereby tracking the ability to distinguish a certain from an uncertain outcome. In the 4-cups procedure, a reward is hidden in one cup of each pair (e.g., A, C); one cup (e.g., B) is then shown to be empty. Successful subjects should conclude that the reward is 100% likely in A but only 50% likely in either C or D, and accordingly choose A, thereby demonstrating modal and logical concepts in addition to epistemic ones. Children 2 1/2 years of age fail the 4-cups task, and apes fail related tasks tapping the same constructs. Here we tested a Grey parrot (Psittacus erithacus), Griffin, on the 3- and 4-cups procedures. Griffin succeeded on both tasks, outperforming even 5-year-old children. Controls ruled out that his success on the 4-cups task was due to a learned associative strategy of choosing the cup next to the demonstrated empty one. These data show that both the 3- and 4-cups tasks do not require representational abilities unique to humans. We discuss the competences on which these tasks are likely to draw, and what it is about parrots, or Griffin in particular, that explains his better performance than either great apes or linguistically competent preschool children on these and conceptually related tasks.
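The arithmetic behind the 4-cups prediction (100% for A, 50% for C or D) follows directly from the setup: one reward per pair, one cup revealed empty. A minimal sketch, with hypothetical function and variable names:

```python
# Illustrative computation for the 4-cups task: one reward is hidden in
# pair (A, B) and another in pair (C, D); cup B is then shown to be empty.

def reward_probability(cup, empty_cup, pairs):
    """Probability that the reward of a pair sits under `cup`, after
    `empty_cup` (possibly in a different pair) is revealed to be empty."""
    for pair in pairs:
        if cup in pair:
            if empty_cup in pair:
                return 1.0              # the reward must be in the other cup
            return 1.0 / len(pair)      # no information about this pair: 50/50
    raise ValueError("unknown cup")

pairs = [("A", "B"), ("C", "D")]
print(reward_probability("A", "B", pairs))  # 1.0: certain
print(reward_probability("C", "B", pairs))  # 0.5: merely possible
```

Choosing A over C or D on this basis is what the authors take to require the modal distinction between a certain and a merely possible outcome.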
Book
Heroes are often admired for their ability to act without having "one thought too many," as Bernard Williams put it. Likewise, the unhesitating decisions of masterful athletes and artists are part of their fascination. Examples like these make clear that spontaneity can represent an ideal. However, recent literature in empirical psychology has shown how vulnerable our spontaneous inclinations can be to bias, shortsightedness, and irrationality. How can we make sense of these different roles that spontaneity plays in our lives? The central contention of this book is that understanding these two faces of spontaneity, its virtues and vices, requires understanding the "implicit mind." In turn, understanding the implicit mind requires considering three sets of questions. The first set focuses on the architecture of the implicit mind itself. What kinds of mental states make up the implicit mind? Are both the "virtue" and "vice" cases of spontaneity products of one and the same mental system? If so, what kind of cognitive structure do these states have? The second set of questions focuses on the relationship between the implicit mind and the self. How should we relate to our spontaneous inclinations and dispositions? Are they "ours," in the sense that they reflect on our character or identity? Are we responsible for them? The third set focuses on the ethics of spontaneity. What can research on self-regulation teach us about how to improve the ethics of our implicit mind? How can we enjoy the virtues of spontaneity without succumbing to its vices?
Article
A Bayesian mind is, at its core, a rational mind. Bayesianism is thus well‐suited to predict and explain mental processes that best exemplify our ability to be rational. However, evidence from belief acquisition and change appears to show that we do not acquire and update information in a Bayesian way. Instead, the principles of belief acquisition and updating seem grounded in maintaining a psychological immune system rather than in approximating a Bayesian processor.
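The Bayesian benchmark against which the abstract contrasts actual belief updating is just Bayes' rule: the posterior probability of a hypothesis given evidence, computed from the prior and the two likelihoods. A minimal sketch (the numbers are illustrative, not from the article):

```python
# Bayes' rule for a binary hypothesis H:
#   P(H | E) = P(H) * P(E | H) / [P(H) * P(E | H) + P(not-H) * P(E | not-H)]

def bayes_update(prior, p_evidence_given_h, p_evidence_given_not_h):
    """Posterior P(H | E) from the prior P(H) and the two likelihoods."""
    numerator = prior * p_evidence_given_h
    marginal = numerator + (1 - prior) * p_evidence_given_not_h
    return numerator / marginal

# A skeptical prior (0.2) combined with evidence four times more likely
# under H than under not-H yields a posterior of 0.5.
print(bayes_update(0.2, 0.8, 0.2))
```

The "psychological immune system" claim is that real updating departs from this rule in motivated ways, e.g., by discounting the likelihood of unwelcome evidence rather than revising the prior.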
Article
Why do people believe blatantly inaccurate news headlines ("fake news")? Do we use our reasoning abilities to convince ourselves that statements that align with our ideology are true, or does reasoning allow us to effectively differentiate fake from real regardless of political ideology? Here we test these competing accounts in two studies (total N = 3446 Mechanical Turk workers) by using the Cognitive Reflection Test (CRT) as a measure of the propensity to engage in analytical reasoning. We find that CRT performance is negatively correlated with the perceived accuracy of fake news, and positively correlated with the ability to discern fake news from real news - even for headlines that align with individuals' political ideology. Moreover, overall discernment was actually better for ideologically aligned headlines than for misaligned headlines. Finally, a headline-level analysis finds that CRT is negatively correlated with perceived accuracy of relatively implausible (primarily fake) headlines, and positively correlated with perceived accuracy of relatively plausible (primarily real) headlines. In contrast, the correlation between CRT and perceived accuracy is unrelated to how closely the headline aligns with the participant's ideology. Thus, we conclude that analytic thinking is used to assess the plausibility of headlines, regardless of whether the stories are consistent or inconsistent with one's political ideology. Our findings therefore suggest that susceptibility to fake news is driven more by lazy thinking than it is by partisan bias per se - a finding that opens potential avenues for fighting fake news.
Article
Theories of episodic memory have proposed that individual memory traces are linked together by a representation of context that drifts slowly over time. Recent data challenge the notion that contextual drift is always slow and passive. In particular, changes in one's external environment or internal model induce discontinuities in memory that are reflected in sudden changes in neural activity, suggesting that context can shift abruptly. Furthermore, context change effects are sensitive to top-down goals, suggesting that contextual drift may be an active process. These findings call for revising models of the role of context in memory, in order to account for abrupt contextual shifts and the controllable nature of context change.
Article
This paper provides a naturalistic account of inference. We posit that the core of inference is constituted by bare inferential transitions (BITs), transitions between discursive mental representations guided by rules built into the architecture of cognitive systems. In further developing the concept of BITs, we provide an account of what Boghossian [2014] calls ‘taking’—that is, the appreciation of the rule that guides an inferential transition. We argue that BITs are sufficient for implicit taking, and then, to analyse explicit taking, we posit rich inferential transitions (RITs), which are transitions that the subject is disposed to endorse.
Article
During realistic, continuous perception, humans automatically segment experiences into discrete events. Using a novel model of cortical event dynamics, we investigate how cortical structures generate event representations during narrative perception and how these events are stored to and retrieved from memory. Our data-driven approach allows us to detect event boundaries as shifts between stable patterns of brain activity without relying on stimulus annotations and reveals a nested hierarchy from short events in sensory regions to long events in high-order areas (including angular gyrus and posterior medial cortex), which represent abstract, multimodal situation models. High-order event boundaries are coupled to increases in hippocampal activity, which predict pattern reinstatement during later free recall. These areas also show evidence of anticipatory reinstatement as subjects listen to a familiar narrative. Based on these results, we propose that brain activity is naturally structured into nested events, which form the basis of long-term memory representations.