Article

Distinguishing Underlying, Inferred, and Expressed Preferences, Attitudes, and Beliefs: An Absence of (Mental) Flatness?

Authors: G. Brown and L. Walasek

Abstract

People's choices of food and drink, the attitudes they express, and the beliefs that they state are influenced by their political and other identities. At the same time, people's everyday choices depend on the context of available options in ways that are difficult to explain in terms of the choosers’ preferences and beliefs. Such phenomena provoke various questions. Do partisans or conspiracy theorists really believe what they are saying? Given the systematic inconsistency of their choices, in what sense do consumers prefer the items they purchase? More generally, how “flat” is the mind—do we come to decision‐making and choice with pre‐existing preferences, attitudes, and beliefs, or are our explanations for our behavior mere post‐hoc narratives? Here, we argue that several apparently disparate difficulties are rooted in a failure to separate psychologically different types of preferences, attitudes, and beliefs. We distinguish between underlying, inferred, and expressed preferences. These preferences may be expressed in different coordinate spaces and hence support different types of explanatory generalizations. Choices that appear inconsistent according to one type of preference can appear consistent according to another, and whether we can say that a person “really” prefers something depends on which type of preference we mean. We extend the tripartite classification to the case of attitudes and beliefs, and suggest that attributions of attitudes and beliefs may also be ambiguous. We conclude that not all of the mental states and representations that govern our behavior are context‐dependent and constructed, although many are.

... Nick's former PhD student, Neil Stewart, has very successfully extended both the theory and evidential base for this approach (e.g., Noguchi & Stewart, 2018; Stewart, 2009; Stewart & Simpson, 2008). The third paper in this special issue by Brown and Walasek (2025) discusses the extent of context dependence in explaining decision-making behavior in relation to pre-existing beliefs and attitudes. ...
... His 2018 book, The Mind is Flat, is similarly aimed at a broad audience, although it addresses fundamental questions in cognitive science in a radically new way, arguing, for example, for a model of the mind as an improviser, weaving together fragments of information in the moment, rather than guided by stable beliefs and desires. Two of the papers in this special issue address the flat mind hypothesis and its compatibility with human rationality (Oaksford, 2025) and apparently stable beliefs (Brown & Walasek, 2025). Nick's 2022 book, The Language Game, with Morten Christiansen, is also aimed at a general audience, detailing how many aspects of language evolution, acquisition, and use can be explained in terms of collaborative improvisations, as in a game of charades. ...
... It is good to be reminded that research should also be fun and, as with so much in life, this will depend on the people you choose as friends and colleagues. Brown, G. & Walasek, L. (2025). Distinguishing underlying, inferred and expressed preferences, attitudes, and beliefs: An absence of (mental) flatness? ...
Article
This is an introduction to the special issue of Topics in Cognitive Science, honoring Nick Chater's award of the 2023 David E. Rumelhart Prize for Contributions to the Theoretical Foundations of Human Cognition. It provides a condensed overview of his contributions to cognitive science, within which the articles in this special issue are situated, finishing with two short personal recollections by the Editors.
Article
Full-text available
According to many, we live in “posttruth” times, with the pervasiveness of falsehoods being an existential threat to democracy and the functioning of free societies. Why do people believe and propagate falsehoods? Current accounts focus on psychological deficiencies, heuristic errors, self-enhancing motivations, and motivations to sow chaos. Here, we advance a complementary, outwardly (vs. inwardly) oriented, and ultimate (vs. proximate) account that people often believe and spread falsehoods for socially functional reasons. Under this view, falsehoods can serve as rare and valued information with which to rise in prestige, as signals of group commitment and loyalty tests, as ammunition with which to derogate rivals, or as outrages with which to mobilize the group toward shared goals. Thus, although people often generate and defend falsehoods through processes that are epistemically irrational, doing so might be rational from the perspective of the functions falsehoods serve. We discuss the implications of this view for puzzling theoretical phenomena and changing problematic beliefs.
Article
Full-text available
We propose a methodology for normative evaluation when preferences are context-dependent. We offer a precise definition of context-dependence and formulate a normative criterion of self-determination, according to which one situation is better than another if individuals are aware of more potential contexts of a choice problem. We provide two interpretations of our normative approach: an extension of Sugden's opportunity criterion and an application of Sen's positional views in his theory of justice. Our proposition is consistent with Muldoon's and Gaus' approaches to public reason in social contract theory, which account for the diversity of perspectives in non-ideal worlds.
Article
Full-text available
When an attitude changes from A1 to A2, what happens to A1? Most theories assume, at least implicitly, that the new attitude replaces the former one. The authors argue that a new attitude can override, but not replace, the old one, resulting in dual attitudes. Dual attitudes are defined as different evaluations of the same attitude object: an automatic, implicit attitude and an explicit attitude. The attitude that people endorse depends on whether they have the cognitive capacity to retrieve the explicit attitude and whether this overrides their implicit attitude. A number of literatures consistent with these hypotheses are reviewed, and the implications of the dual-attitude model for attitude theory and measurement are discussed. For example, by including only explicit measures, previous studies may have exaggerated the ease with which people change their attitudes. Even if an explicit attitude changes, an implicit attitude can remain the same.
Article
Full-text available
The spread of online misinformation on social media is increasingly perceived as a problem for societal cohesion and democracy. The role of political leaders in this process has attracted less research attention, even though politicians who ‘speak their mind’ are perceived by segments of the public as authentic and honest even if their statements are unsupported by evidence. By analysing communications by members of the US Congress on Twitter between 2011 and 2022, we show that politicians’ conception of honesty has undergone a distinct shift, with authentic belief speaking that may be decoupled from evidence becoming more prominent and more differentiated from explicitly evidence-based fact speaking. We show that for Republicans—but not Democrats—an increase in belief speaking of 10% is associated with a decrease of 12.8 points of quality (NewsGuard scoring system) in the sources shared in a tweet. In contrast, an increase in fact-speaking language is associated with an increase in quality of sources for both parties. Our study is observational and cannot support causal inferences. However, our results are consistent with the hypothesis that the current dissemination of misinformation in political discourse is linked to an alternative understanding of truth and honesty that emphasizes invocation of subjective belief at the expense of reliance on evidence.
Article
Full-text available
This article summarises the theoretical foundations, main approaches and current trends in the field of behavioural normative economics. It identifies bounded rationality and bounded willpower as the two core concepts that have motivated the field. Since the concepts allow for individual preferences to be context‐dependent and time‐inconsistent, they pose an intricate problem for standard welfare analysis. The article discusses the ways in which two prominent approaches – the preference purification approach and the opportunity approach – have tackled the problem. It argues that shortcomings in each of these approaches motivate an agency‐centric perspective. The article presents two concrete policy proposals of the agency‐centric approach. While this approach is promising, the article argues for pluralism in normative economics since an exclusive focus on agency can likely not do justice to the multifarious concerns that citizens hold.
Article
Full-text available
While people across the world value honesty, it is undeniable that it can sometimes pay to be dishonest. This tension leads people to engage in complex behaviors that stretch the boundaries of honesty. Such behaviors include strategically avoiding information, dodging questions, omitting information, and making true but misleading statements. Though not lies per se, these are nonetheless deviations from honesty that have serious interpersonal, organizational, and societal costs. Based on a systematic review of 169 empirical research articles in the fields of management, organizational behavior, applied psychology, and business ethics, we develop a new multidimensional framework of honesty that highlights how honesty encompasses more than the absence of lies—it has relational elements (e.g., fostering an accurate understanding in others through what we disclose and how we communicate) and intellectual elements (e.g., evaluating information for accuracy, searching for accurate information, and updating our beliefs accordingly). By acknowledging that honesty is not limited to the moment when a person utters a clear lie or a full truth, and that there are multiple stages to enacting honesty, we emphasize the shared responsibility that all parties involved in communication have for seeking out and communicating truthful information.
Article
Full-text available
Jesteadt et al. (Jesteadt, W., Luce, R. D., & Green, D. M., Sequential effects in judgments of loudness. Journal of Experimental Psychology: Human Perception and Performance, 3, 92–104.) discovered a remarkable pattern of autocorrelation in log estimates of loudness. Responses to repeated stimuli correlated about +0.7, but that correlation was much reduced (0.1) following large differences between successive stimuli. The experiment reported here demonstrates the same pattern in absolute identification without feedback; if feedback is supplied, the pattern is much muted. A model is proposed for this pattern of autocorrelation, based on the premise: "There is no absolute judgment of sensory magnitudes; nor is there any absolute judgment of differences/ratios between sensory magnitudes". Each stimulus in an experiment is compared to its predecessor, greater, less than, or about the same. The variability of that comparison increases with the difference in magnitude between the stimuli, so the assessment of a stimulus far removed from its predecessor is very uncertain. The model provides explanations for the apparent normal variability of sensory stimuli, for the 'bow' effect, and for the widely reported pattern of sequential effects. It has applications to the effects of stimulus range, to the difficulty of identifying more than five stimuli on a single continuum without error, and to inspection tasks in general, notably medical screening and the marking of examination scripts. Keywords: autocorrelation, category judgment, information transmitted, magnitude estimation, relativity of judgment.
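For readers who want to see what the conditional autocorrelation measure amounts to, here is a minimal Python sketch: the lag-1 correlation of log magnitude estimates, computed separately for trials on which the stimulus repeats and trials on which it jumps by a large amount. The jump threshold and the helper name are illustrative assumptions, not the paper's analysis code.

```python
import numpy as np

def conditional_lag1_autocorrelation(log_responses, stimulus_levels, large_jump=2):
    """Split successive-trial pairs by whether the stimulus repeated or jumped by
    at least `large_jump` levels, and return the lag-1 correlation of log responses
    within each subset. The jump threshold is an illustrative assumption."""
    r = np.asarray(log_responses, dtype=float)
    s = np.asarray(stimulus_levels, dtype=float)
    prev, curr = r[:-1], r[1:]              # successive response pairs
    jump = np.abs(np.diff(s))               # size of the stimulus change on each trial

    def corr(mask):
        return np.nan if mask.sum() < 3 else np.corrcoef(prev[mask], curr[mask])[0, 1]

    return {"repeated": corr(jump == 0), "large_jump": corr(jump >= large_jump)}
```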
Article
Full-text available
This paper investigates whether the polarization of political ideology extends to consumers’ preferences, intentions, and purchases.
Article
Full-text available
No one likes to be wrong. Previous research has shown that participants may underweight information incompatible with previous choices, a phenomenon called confirmation bias. In this paper we argue that a similar bias exists in the way information is actively sought. We investigate how choice influences information gathering using a perceptual choice task and find that participants sample more information from a previously chosen alternative. Furthermore, the higher the confidence in the initial choice, the more biased information sampling becomes. As a consequence, when faced with the possibility of revising an earlier decision, participants are more likely to stick with their original choice, even when incorrect. Critically, we show that agency controls this phenomenon. The effect disappears in a fixed sampling condition where presentation of evidence is controlled by the experimenter, suggesting that the way in which confirmatory evidence is acquired critically impacts the decision process. These results suggest active information acquisition plays a critical role in the propagation of strongly held beliefs over time.
Article
Full-text available
A cognitive model of social influence (Social Sampling Theory [SST]) is developed and applied to several social network phenomena including polarization and contagion effects. Social norms and individuals' private attitudes are represented as distributions rather than the single points used in most models. SST is explored using agent-based modeling to link individual-level and network-level effects. People are assumed to observe the behavior of their social network neighbors and thereby infer the social distribution of particular attitudes and behaviors. It is assumed that (a) people dislike behaving in ways that are extreme within their neighborhood social norm (social extremeness aversion assumption), and hence tend to conform and (b) people prefer to behave consistently with their own underlying attitudes (authenticity preference assumption) hence minimizing dissonance. Expressed attitudes and behavior reflect a utility-maximizing compromise between these opposing principles. SST is applied to a number of social phenomena including (a) homophily and the development of segregated neighborhoods, (b) polarization, (c) effects of norm homogeneity on social conformity, (d) pluralistic ignorance and false consensus effects, (e) backfire effects, (f) interactions between world view and social norm effects, and (g) the opposing effects on subjective well-being of authentic behavior and high levels of social comparison. More generally, it is argued that explanations of social comparison require the variance, not just the central tendency, of both attitudes and beliefs about social norms to be accommodated.
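A minimal sketch of the utility trade-off described in this abstract, assuming a quadratic authenticity cost and a rank-based extremeness cost over a grid of candidate expressed attitudes; the weight `w`, the cost functions, and all parameter values are illustrative assumptions rather than the published SST specification.

```python
import numpy as np

def expressed_attitude(own_attitude, neighbour_expressions, w=0.5, grid=None):
    """Pick the expressed attitude that trades off authenticity against
    social extremeness, in the spirit of Social Sampling Theory.

    own_attitude: the agent's private attitude (scalar).
    neighbour_expressions: attitudes observed in the agent's network neighbourhood.
    w: weight on social extremeness aversion (illustrative assumption).
    """
    if grid is None:
        grid = np.linspace(-1.0, 1.0, 201)           # candidate expressed attitudes
    neighbours = np.asarray(neighbour_expressions, dtype=float)

    # Authenticity cost: squared distance from the agent's own attitude.
    authenticity_cost = (grid - own_attitude) ** 2

    # Extremeness cost: how far a candidate sits in the tails of the
    # neighbourhood distribution (distance of its rank from the median rank).
    ranks = np.array([(neighbours < g).mean() for g in grid])
    extremeness_cost = np.abs(ranks - 0.5) * 2.0     # 0 at the local median, 1 in the tails

    total_cost = (1 - w) * authenticity_cost + w * extremeness_cost
    return grid[np.argmin(total_cost)]

# Example: a privately extreme agent in a moderate neighbourhood.
rng = np.random.default_rng(0)
neighbourhood = rng.normal(0.0, 0.2, size=50)        # moderate local norm
print(expressed_attitude(own_attitude=0.9, neighbour_expressions=neighbourhood, w=0.6))
# The expressed attitude falls between the private attitude and the local norm.
```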
Article
Full-text available
Recent work in economics has rediscovered the importance of belief-based utility for understanding human behaviour. Belief ‘choice’ is subject to an important constraint, however: people can only bring themselves to believe things for which they can find rationalizations. When preferences for similar beliefs are widespread, this constraint generates rationalization markets, social structures in which agents compete to produce rationalizations in exchange for money and social rewards. I explore the nature of such markets, I draw on political media to illustrate their characteristics and behaviour, and I highlight their implications for understanding motivated cognition and misinformation.
Article
Full-text available
People who strongly endorse conspiracy theories typically exhibit biases in domain-general reasoning. We describe an overfitting hypothesis, according to which (a) such theories overfit conspiracy-related data at the expense of wider generalisability, and (b) reasoning biases reflect, at least in part, the need to reduce the resulting dissonance between the conspiracy theory and wider data. This hypothesis implies that reasoning biases should be more closely associated with belief in implausible conspiracy theories (e.g., the moon landing was faked) than with more plausible ones (e.g., the Russian Federation orchestrated the attack on Sergei Skripal). In two pre-registered studies, we found that endorsement of implausible conspiracy theories, but not plausible ones, was associated with reduced information sampling in an information-foraging task and with less reflective reasoning. Thus, the relationship between belief in conspiracy theories and reasoning is not homogeneous, and reasoning is not linked specifically to the “conspiracy” aspect of conspiracy theories. Instead, it may reflect an adaptive response to the tension between implausible theories and other beliefs and data.
Article
Full-text available
In five experiments, people repeatedly judged individual options with respect to both overall value and attribute values. When required to choose between two snacks, each differing in two attributes (pleasure and nutrition), people’s assessments of value shifted from pre- to post-choice in the direction that spread the alternatives further apart so as to favor the winner, thereby increasing confidence in the choice. This shift was observed not only for ratings of overall value, but also for each of the two individual attributes. The magnitude of the coherence shift increased with choice difficulty as measured by the difference in initial ratings of overall value for the two options, as well as with a measure of attribute disparity (the degree to which individual attributes “disagree” with one another as to which option is superior). In Experiments 2–5, tasks other than explicit choice generated the same qualitative pattern of value changes, confidence, and response time (RT). These findings support the hypothesis that active consideration of options, whether or not explicitly related to value, automatically refines the mental value representations for the options, which in turn allows them to be more precisely distinguished when later included in a value-based choice set.
Article
Full-text available
One of the prominent, by now seminal, paradigms in the research tradition of cognitive dissonance (Festinger, 1957) is the free-choice paradigm developed by Brehm (1956) to measure choice-induced preference change. Some 50 years after Brehm introduced the paradigm, Chen and Risen (2010) published an influential critique arguing that what the paradigm measures is not necessarily a choice-induced preference change, but possibly an artifact of the choice revealing existing preferences. They showed that once the artifact is experimentally controlled for, there is either no or very little evidence for choice-induced preference change. Given the prominence of the paradigm, this critique meant that much of what we thought we knew about the psychological process of cognitive dissonance might not be true. Following the critique, research using the paradigm applied various corrections to overcome the artifact. The present research examined whether choice truly changes preferences, or rather merely reflects them. We conducted a meta-analysis on 43 studies (N = 2,191), all using an artifact-free free-choice paradigm. Using different meta-analytical methods, and conceptually different analyses, including a Bayesian one, we found an overall effect size of Cohen's d = 0.40, 95% confidence interval (CI) [0.32, 0.49]. Furthermore, we found no evidence for publication bias as an alternative explanation for the choice-induced preference change effect. These results support the existence of true preference change created by choice.
Article
Full-text available
When the costs of acquiring knowledge outweigh the benefits of possessing it, ignorance is rational. In this paper I clarify and explore a related but more neglected phenomenon: cases in which ignorance is motivated by the anticipated costs of possessing knowledge, not acquiring it. The paper has four aims. First, I describe the psychological and social factors underlying this phenomenon of motivated ignorance. Second, I describe those conditions in which it is instrumentally rational. Third, I draw on evidence from the social sciences to argue that this phenomenon of rational motivated ignorance plays an important but often unappreciated role in one of the most socially harmful forms of ignorance today: voter ignorance of societal risks such as climate change. Finally, I consider how to address the high social costs associated with rational motivated ignorance.
Article
Full-text available
Risk and time preferences have often been viewed as reflecting inherent traits such as impatience and self-control. Here, we offer an alternative perspective, arguing that they are flexible and environmentally informed. In Study 1, we investigated risk and time preferences among children in the United States, India, and Argentina, as well as forager-horticulturalist Shuar children in Amazonian Ecuador. We find striking cross-cultural differences in behavior: children in India, the United States, and Argentina are more risk-seeking and future-oriented, whereas Shuar children are more risk-averse and exhibit more heterogeneous time preferences, on average preferring more 'today' choices. To explore one of the socioecological forces that may be shaping these preferences, in Study 2, we compared the behavior of more and less market-integrated Shuar children, finding that those in market-integrated regions are more future-oriented and risk-seeking. These findings indicate that cross-cultural differences in risk and time preferences can be traced into childhood and may be influenced by the local environment. More broadly, our results contribute to a growing understanding of plasticity and variation in the development of behavior.
Article
Full-text available
Most choices people make are about ‘matters of taste’, on which there is no universal, objective truth. Nevertheless, people can learn from the experiences of individuals with similar tastes who have already evaluated the available options—a potential harnessed by recommender systems. We mapped recommender system algorithms to models of human judgement and decision-making about ‘matters of fact’ and recast the latter as social learning strategies for matters of taste. Using computer simulations on a large-scale, empirical dataset, we studied how people could leverage the experiences of others to make better decisions. Our simulations showed that experienced individuals can benefit from relying mostly on the opinions of seemingly similar people; by contrast, inexperienced individuals cannot reliably estimate similarity and are better off picking the mainstream option despite differences in taste. Crucially, the level of experience beyond which people should switch to similarity-heavy strategies varies substantially across individuals and depends on how mainstream (or alternative) an individual’s tastes are and the level of dispersion in taste similarity with the other people in the group.
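The contrast between the 'mainstream' and 'similarity-heavy' strategies can be made concrete with a small sketch. The correlation-based similarity weights, the minimum-overlap rule, and the fallback to the crowd average are illustrative assumptions, not the simulation code used in the paper.

```python
import numpy as np

def predict_rating(own_ratings, others_ratings, option, min_overlap=3):
    """Predict the value of an unrated option either from the crowd average
    ('mainstream' strategy) or from taste-similar others ('similarity' strategy).

    own_ratings: dict {option_index: rating} for options already experienced.
    others_ratings: 2D array, rows = other people, columns = options (NaN = unrated).
    option: index of the option to predict.
    """
    others = np.asarray(others_ratings, dtype=float)
    col = others[:, option]

    # Mainstream strategy: average rating of everyone who rated the option.
    mainstream = np.nanmean(col)

    # Similarity strategy: weight others by correlation with own past ratings.
    rated = np.array(sorted(own_ratings))
    own_vec = np.array([own_ratings[i] for i in rated])
    weights = []
    for person in others:
        mask = ~np.isnan(person[rated])
        if mask.sum() < min_overlap or np.isnan(person[option]):
            weights.append(0.0)
            continue
        r = np.corrcoef(own_vec[mask], person[rated][mask])[0, 1]
        weights.append(max(r, 0.0) if np.isfinite(r) else 0.0)
    weights = np.array(weights)

    if weights.sum() == 0:                  # too little experience: fall back to the crowd
        return mainstream
    return np.nansum(weights * np.nan_to_num(col)) / weights.sum()

# Toy example: 4 other people, 5 options (np.nan = unrated).
others = np.array([
    [5, 4, np.nan, 2, 1],
    [1, 2, 3, 4, 5],
    [5, 5, 4, 1, np.nan],
    [3, 3, 3, 3, 3],
])
me = {0: 5, 1: 4, 3: 1}                     # my ratings so far
print(predict_rating(me, others, option=2)) # predict my rating for option 2
```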
Article
Full-text available
In 1997, Robert Axelrod wondered in a highly influential paper: "If people tend to become more alike in their beliefs, attitudes, and behavior when they interact, why do not all such differences eventually disappear?" Axelrod's question highlighted an ongoing quest for formal theoretical answers joined by researchers from a wide range of disciplines. Numerous models have been developed to understand why and under what conditions diversity in beliefs, attitudes and behavior can co-exist with the fact that very often in interactions, social influence reduces differences between people. Reviewing three prominent approaches, we discuss the theoretical ingredients that researchers added to classic models of social influence as well as their implications. Then, we propose two main frontiers for future research. First, there is an urgent need for more theoretical work comparing, relating and integrating alternative models. Second, the field suffers from a strong imbalance between a proliferation of theoretical studies and a dearth of empirical work. More empirical work is needed testing and underpinning micro-level assumptions about social influence as well as macro-level predictions. In conclusion, we discuss major roadblocks that need to be overcome to achieve progress on each frontier. We also propose that a new generation of empirically-based computational social influence models can make unique contributions for understanding key societal challenges, like the possible effects of social media on societal polarization.
Article
Full-text available
This review covers research on attitudes and attitude change published between 2010 and 2017. We characterize this period as one of significant progress toward an understanding of how attitudes form and change in three critical contexts. The first context is the person, as attitudes change in connection to values, general goals, language, emotions, and human development. The second context is social relationships, which link attitude change to the communicator of persuasive messages, the social media, and culture. The third context is sociohistorical and highlights the influence of unique events, including sociopolitical, economic, and climatic occurrences. In conclusion, many important recent findings reflect the fact that holism, with a focus on situating attitudes within their personal, social, and historical contexts, has become the zeitgeist of attitude research during this period.
Article
Full-text available
We commonly think of information as a means to an end. However, a growing theoretical and experimental literature suggests that information may directly enter the agent's utility function. This can create an incentive to avoid information, even when it is useful, free, and independent of strategic considerations. We review research documenting the occurrence of information avoidance, as well as theoretical and empirical research on reasons why people avoid information, drawing from economics, psychology, and other disciplines. The review concludes with a discussion of some of the diverse (and often costly) individual and societal consequences of information avoidance. (JEL D82, D83).
Article
Full-text available
In uncertain environments, effective decision makers balance exploiting options that are currently preferred against exploring alternative options that may prove superior. For example, a honeybee foraging for nectar must decide whether to continue exploiting the current patch or move to a new location. When the relative reward of options changes over time, humans explore in a normatively correct fashion, exploring more often when they are uncertain about the relative value of competing options. However, rewards in these laboratory studies were objective (for example, monetary payoff), whereas many real-world decision environments involve subjective evaluations of reward (for example, satisfaction with food choice). In such cases, rather than choices following preferences, preferences may follow choices with subjective reward (that is, value) to maximize coherency between preferences and behaviour. If so, increasing coherency would lessen the tendency to explore while uncertainty increases, contrary to previous findings. To evaluate this possibility, we examined the exploratory choices of more than 280,000 anonymized individuals in supermarkets over several years. Consumers' patterns of exploratory choice ran counter to normative models for objective rewards — the longer the exploitation streak for a product, the less likely people were to explore an alternative. Furthermore, customers preferred coupons to explore alternative products when they had recently started an exploitation streak. These findings suggest interventions to promote healthy lifestyle choices.
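For comparison, the kind of normative baseline the abstract has in mind can be sketched with a standard upper-confidence-bound rule, under which the longer an option goes unsampled the more likely it is to be explored; the bonus term and parameter values are assumptions for illustration only.

```python
import math
import random

def ucb_choice(counts, means, t, c=1.0):
    """Choose among options using an upper-confidence-bound rule: value estimate
    plus an uncertainty bonus that grows the longer an option has gone unsampled.
    `c` scales the exploration bonus (illustrative assumption)."""
    def bonus(n):
        return float("inf") if n == 0 else c * math.sqrt(math.log(t + 1) / n)
    scores = [m + bonus(n) for m, n in zip(means, counts)]
    return scores.index(max(scores))

# Toy simulation: two products with noisy subjective reward.
random.seed(1)
counts, means = [0, 0], [0.0, 0.0]
for t in range(200):
    choice = ucb_choice(counts, means, t)
    reward = random.gauss(0.5 if choice == 0 else 0.55, 0.1)
    counts[choice] += 1
    means[choice] += (reward - means[choice]) / counts[choice]   # incremental mean update
# Under this normative rule the unchosen option's bonus keeps growing, so long
# exploitation streaks make exploration MORE likely -- the opposite of the
# streak-dependent pattern reported for supermarket shoppers.
```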
Article
Over the past 50 years, consumer researchers have presented extensive evidence that consumer preference can be swayed by the decision context, particularly the configuration of the choice set. Importantly, behavioral research on context effects has inspired prominent quantitative research on multialternative decision-making published in leading psychology, management, economics, and general interest journals. While both streams of research seem to agree that context effects are an important research area, there has been relatively limited interaction, communication, and collaboration between the two research camps. In this article, we seek to initiate an active dialogue between the two sides. We begin by providing a critical overview of the two literatures on context effects, discussing both their strengths and weaknesses, as well as disparities and complementarities. Here, we place particular emphasis on deepening consumer researchers’ understanding of context effects by drawing on prominent quantitative research published in non-marketing journals over the last decades. Importantly, we provide a roadmap for the future that can inspire further research and potential collaborations between the two camps, overcoming silos in knowledge creation.
Chapter
Sampling approaches to judgment and decision making are distinct from traditional accounts in psychology and neuroscience. While these traditional accounts focus on limitations of the human mind as a major source of bounded rationality, the sampling approach originates in a broader cognitive-ecological perspective. It starts from the fundamental assumption that in order to understand intra-psychic cognitive processes one first has to understand the distributions of, and the biases built into, the environmental information that provides input to all cognitive processes. Both the biases and restrictions, but also the assets and capacities, of the human mind often reflect, to a considerable degree, the irrational and rational features of the information environment and its manifestations in the literature, the Internet, and collective memory. Sampling approaches to judgment and decision making constitute a prime example of theory-driven research that promises to help behavioral scientists cope with the challenges of replicability and practical usefulness.
Article
In reinforcement learning tasks, people learn the values of options relative to other options in the local context. Prior research suggests that relative value learning is enhanced when choice contexts are temporally clustered in a blocked sequence compared to a randomly interleaved sequence. The present study was aimed at further investigating the effects of blocked versus interleaved training using a choice task that distinguishes among different contextual encoding models. Our results showed that the presentation format in which contexts are experienced can lead to qualitatively distinct forms of relative value learning. This conclusion was supported by a combination of model-free and model-based analyses. In the blocked condition, choice behavior was most consistent with a reference point model in which outcomes are encoded relative to a dynamic estimate of the contextual average reward. In contrast, the interleaved condition was best described by a range-frequency encoding model. We propose that blocked training makes it easier to track contextual outcome statistics, such as the average reward, which may then be used to relativize the values of experienced outcomes. When contexts are interleaved, range-frequency encoding may serve as a more efficient means of storing option values in memory for later retrieval.
Article
Previous studies of reinforcement learning (RL) have established that choice outcomes are encoded in a context-dependent fashion. Several computational models have been proposed to explain context-dependent encoding, including reference point centering and range adaptation models. The former assumes that outcomes are centered around a running estimate of the average reward in each choice context, while the latter assumes that outcomes are compared to the minimum reward and then scaled by an estimate of the range of outcomes in each choice context. However, there are other computational mechanisms that can explain context dependence in RL. In the present study, a frequency encoding model is introduced that assumes outcomes are evaluated based on their proportional rank within a sample of recently experienced outcomes from the local context. A range-frequency model is also considered that combines the range adaptation and frequency encoding mechanisms. We conducted two fully incentivized behavioral experiments using choice tasks for which the candidate models make divergent predictions. The results were most consistent with models that incorporate frequency or rank-based encoding. The findings from these experiments deepen our understanding of the underlying computational processes mediating context-dependent outcome encoding in human RL.
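The candidate encoding rules named in this abstract (reference point centering, range adaptation, frequency or rank encoding, and their range-frequency combination) can be summarized in a short sketch; the running-statistics implementation, the 50/50 range-frequency weighting, and the neutral default are illustrative assumptions.

```python
import numpy as np

def encode_outcome(r, context_history, rule="range_frequency", w=0.5):
    """Encode a raw outcome `r` relative to outcomes previously seen in the same
    choice context, under the candidate rules discussed above.
    `w` is the weight on range vs. frequency in the combined rule (assumption)."""
    h = np.asarray(context_history, dtype=float)
    if h.size == 0:
        return 0.5                                     # neutral default before any experience

    if rule == "reference_point":                      # centre on the contextual average
        return r - h.mean()
    if rule == "range":                                # rescale by the contextual range
        lo, hi = h.min(), h.max()
        return 0.5 if hi == lo else (r - lo) / (hi - lo)
    if rule == "frequency":                            # proportional rank among past outcomes
        return (h < r).mean()
    if rule == "range_frequency":                      # Parducci-style mix of range and rank
        return w * encode_outcome(r, h, "range") + (1 - w) * encode_outcome(r, h, "frequency")
    raise ValueError(f"unknown rule: {rule}")

# Example: the same 6-point outcome is encoded differently in a lean vs. rich context.
lean, rich = [1, 2, 3, 4], [5, 7, 8, 9]
print(encode_outcome(6, lean), encode_outcome(6, rich))
```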
Book
Why we don't live in a post-truth society but rather a myside society: what science tells us about the bias that poisons our politics. In The Bias That Divides Us, psychologist Keith Stanovich argues provocatively that we don't live in a post-truth society, as has been claimed, but rather a myside society. Our problem is not that we are unable to value and respect truth and facts, but that we are unable to agree on commonly accepted truth and facts. We believe that our side knows the truth. Post-truth? That describes the other side. The inevitable result is political polarization. Stanovich shows what science can tell us about myside bias: how common it is, how to avoid it, and what purposes it serves. Stanovich explains that although myside bias is ubiquitous, it is an outlier among cognitive biases. It is unpredictable. Intelligence does not inoculate against it, and myside bias in one domain is not a good indicator of bias shown in any other domain. Stanovich argues that because of its outlier status, myside bias creates a true blind spot among the cognitive elite—those who are high in intelligence, executive functioning, or other valued psychological dispositions. They may consider themselves unbiased and purely rational in their thinking, but in fact they are just as biased as everyone else. Stanovich investigates how this bias blind spot contributes to our current ideologically polarized politics, connecting it to another recent trend: the decline of trust in university research as a disinterested arbiter.
Book
What is the role of consciousness in our mental lives? This book argues that consciousness plays an essential role in explaining how we can acquire knowledge and epistemically justified belief about ourselves and our surroundings. On this view, our mental lives cannot be preserved in unconscious creatures—zombies—who behave just as we do. Only conscious creatures have epistemic justification to form beliefs about the world. Zombies cannot know anything about the world, since they have no epistemic justification to believe anything. On this view, all epistemic justification depends ultimately on consciousness. This book builds a sustained argument for the epistemic role of phenomenal consciousness, which draws on a range of considerations in epistemology and the philosophy of mind. The book is divided into two parts, which approach the theory of epistemic justification from opposite directions. Part I argues from the bottom up by drawing on considerations in the philosophy of mind about the role of consciousness in mental representation, perception, cognition, and introspection. Part II argues from the top down by arguing from general principles in epistemology about the nature of epistemic justification. These mutually reinforcing arguments form the basis for a unified theory of the epistemic role of phenomenal consciousness, one that bridges the gap between epistemology and the philosophy of mind.
Article
Recent debates on the nature of preferences in economics have typically assumed that they are to be interpreted either as behavioural regularities or as mental states. In this paper I challenge this dichotomy and argue that neither interpretation is consistent with scientific practice in choice theory and behavioural economics. Preferences are belief-dependent dispositions with a multiply realizable causal basis, which explains why economists are reluctant to make a commitment about their interpretation.
Book
An anniversary edition of a classic in cognitive science, with a new introduction by the author. When Brainstorms was published in 1978, the interdisciplinary field of cognitive science was just emerging. Daniel Dennett was a young scholar who wanted to get philosophers out of their armchairs—and into conversations with psychologists, linguists, computer scientists. This collection of seventeen essays by Dennett offers a comprehensive theory of mind, encompassing traditional issues of consciousness and free will. Using careful arguments and ingenious thought experiments, the author exposes familiar preconceptions and hobbling intuitions. The essays are grouped into four sections: “Intentional Explanation and Attributions of Mentality”; “The Nature of Theory in Psychology”; “Objects of Consciousness and the Nature of Experience”; and “Free Will and Personhood.” This anniversary edition includes a new introduction by Dennett, “Reflections on Brainstorms after Forty Years,” in which he recalls the book's original publication by Harry and Betty Stanton of Bradford Books and considers the influence and afterlife of some of the essays. For example, “Mechanism and Responsibility” was Dennett's first articulation of his concept of the intentional stance; “Are Dreams Experiences?” anticipates the major ideas in his 1991 book Consciousness Explained; and “Where Am I?” has been variously represented in a BBC documentary, a student's Javanese shadow puppet play, and a feature-length film made in the Netherlands, Victim of the Brain.
Article
Consumer choice is often influenced by the context, defined by the set of alternatives under consideration. Two hypotheses about the effect of context on choice are proposed. The first hypothesis, tradeoff contrast, states that the tendency to prefer an alternative is enhanced or hindered depending on whether the tradeoffs within the set under consideration are favorable or unfavorable to that option. The second hypothesis, extremeness aversion, states that the attractiveness of an option is enhanced if it is an intermediate option in the choice set and is diminished if it is an extreme option. These hypotheses can explain previous findings (e.g., attraction and compromise effects) and predict some new effects, demonstrated in a series of studies with consumer products as choice alternatives. Theoretical and practical implications of the findings are discussed.
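A toy sketch of how an extremeness-aversion penalty can produce the compromise effect described above: an option's utility is its summed attribute value minus a fixed penalty for being best or worst on any attribute within the offered set, fed through a softmax choice rule. The functional forms and parameter values are illustrative assumptions, not the authors' model.

```python
import math

def choice_shares(options, extremeness_penalty=1.5, temperature=1.0):
    """Softmax choice shares when an option's utility is its summed attribute
    value minus a penalty for being extreme (best or worst) on any attribute
    within the offered set. Functional forms are illustrative assumptions."""
    n_attrs = len(options[0])
    utilities = []
    for opt in options:
        u = sum(opt)
        for a in range(n_attrs):
            column = [o[a] for o in options]
            if opt[a] == max(column) or opt[a] == min(column):
                u -= extremeness_penalty             # extremeness aversion
        utilities.append(u / temperature)
    z = sum(math.exp(u) for u in utilities)
    return [math.exp(u) / z for u in utilities]

# Two attributes (e.g., quality and economy) that trade off against each other.
A, B, C = (1.0, 5.0), (3.0, 3.0), (5.0, 1.0)
print(choice_shares([A, C]))       # two-option set: A and C split the shares evenly
print(choice_shares([A, B, C]))    # adding B: the intermediate option gains share
```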
Book
Political Self-Deception, by Anna Elisabetta Galeotti (Cambridge University Press).
Article
Democracies assume accurate knowledge by the populace, but the human attraction to fake and untrustworthy news poses a serious problem for healthy democratic functioning. We articulate why and how identification with political parties – known as partisanship – can bias information processing in the human brain. There is extensive evidence that people engage in motivated political reasoning, but recent research suggests that partisanship can alter memory, implicit evaluation, and even perceptual judgments. We propose an identity-based model of belief for understanding the influence of partisanship on these cognitive processes. This framework helps to explain why people place party loyalty over policy, and even over truth. Finally, we discuss strategies for de-biasing information processing to help to create a shared reality across partisan divides.
Article
Large numbers of Americans endorse political rumors on surveys. But do they truly believe what they say? In this paper, I assess the extent to which subscription to political rumors represents genuine beliefs as opposed to expressive responses—rumor endorsements designed to express opposition to politicians and policies rather than genuine belief in false information. I ran several experiments, each designed to reduce expressive responding on two topics: among Republicans on the question of whether Barack Obama is a Muslim and among Democrats on whether members of the federal government had advance knowledge about 9/11. The null results of all experiments lead to the same conclusion: the incidence of expressive responding is very small, though somewhat larger for Democrats than Republicans. These results suggest that survey responses serve as a window into the underlying beliefs and true preferences of the mass public.
Article
This paper examines a model where the set of available outcomes from which a decision maker must choose alters his perception of uncertainty. Specifically, this paper proposes a set of axioms such that each menu induces a subjective belief over an objective state space. The decision maker’s preferences are dependent on the realization of the state. The resulting representation is analogous to state-dependent expected utility within each menu; the beliefs are menu dependent and the utility index is not. Under the interpretation that a menu acts as an informative signal regarding the true state, the paper examines the behavioral restrictions that coincide with different signal structures: elemental (where each element of a menu is a conditionally independent signal) and partitional (where the induced beliefs form a partition of the state space).
Article
This commentary uses the dynamic of identity-protective cognition to pose a friendly challenge to Jussim (2012). Like other forms of information processing, this one is too readily characterized as a bias. It is no mistake, however, to view identity-protective cognition as generating inaccurate perceptions. The “bounded rationality” paradigm incorrectly equates rationality with forming accurate beliefs. But so does Jussim's critique.
Article
We consider the determinants and consequences of a source of utility that has received limited attention from economists: people's desire for the beliefs of other people to align with their own. We relate this 'preference for belief consonance' to a variety of other constructs that have been explored by economists, including identity, ideology, homophily, and fellow-feeling. We review different possible explanations for why people care about others' beliefs and propose that the preference for belief consonance leads to a range of disparate phenomena, including motivated belief-formation, proselytizing, selective exposure to media, avoidance of conversational minefields, pluralistic ignorance, belief-driven clustering, intergroup belief polarization, and conflict. We also discuss an explanation for why disputes are often so intense between groups whose beliefs are, by external observers' standards, highly similar to one another.
Article
Western history of thought abounds with claims that knowledge is valued and sought. Yet people often choose not to know. We call the conscious choice not to seek or use knowledge (or information) deliberate ignorance. Using examples from a wide range of domains, we demonstrate that deliberate ignorance has important functions. We systematize types of deliberate ignorance, describe their functions, discuss their normative desirability, and consider how they can be modeled. To date, psychologists have paid relatively little attention to the study of ignorance, let alone the deliberate kind. Yet the desire not to know is no anomaly. It is a choice to seek rather than reduce uncertainty whose reasons require nuanced cognitive and economic theories and whose consequences—for the individual and for society—require analyses of both actor and environment.
Article
This book presents an account of the foundation of practical reason and moral obligation. Moral philosophy aspires to understand the fact that human actions, unlike the actions of the other animals, can be morally good or bad. Few moral philosophers, however, have exploited the idea that actions might be morally good or bad in virtue of being good or bad of their kind - good or bad as actions. Just as we need to know that it is the function of the heart to pump blood to know that a good heart is one that pumps blood successfully, so we need to know what the function of action is in order to know what counts as a good or bad action. Drawing on the work of Plato, Aristotle, and Kant, the book proposes that the function of an action is to constitute the agency and therefore the identity of the person who does it. A good action is one that constitutes its agent as the autonomous and efficacious cause of her own movements. These properties correspond, respectively, to Kant's two imperatives of practical reason. Conformity to the categorical imperative renders us autonomous, and conformity to the hypothetical imperative renders us efficacious. And in determining what effects we will have in the world, we are at the same time determining our own identities. The principles of practical reason, especially the categorical imperative, are therefore the laws of self-constitution.
Article
It is widely believed in philosophy that people have privileged and authoritative access to their own thoughts, and many theories have been proposed to explain this supposed fact. This book challenges the consensus view and subjects the theories in question to critical scrutiny, while showing that they are not protected against the findings of cognitive science by belonging to a separate "explanatory space". The book argues that our access to our own thoughts is almost always interpretive, grounded in perceptual awareness of our own circumstances and behavior, together with our own sensory imagery (including inner speech). In fact our access to our own thoughts is no different in principle from our access to the thoughts of other people, utilizing the conceptual and inferential resources of the same "mindreading" faculty, and relying on many of the same sources of evidence. The book proposes and defends the Interpretive Sensory-Access (ISA) theory of self-knowledge. This is supported through comprehensive examination of many different types of evidence from across cognitive science, integrating a diverse set of findings into a single well-articulated theory. One outcome is that there are hardly any kinds of conscious thought. Another is that there is no such thing as conscious agency.
Article
Popular accounts of “lifestyle politics” and “culture wars” suggest that political and ideological divisions extend also to leisure activities, consumption, aesthetic taste, and personal morality. Drawing on a total of 22,572 pairwise correlations from the General Social Survey (1972–2010), the authors provide comprehensive empirical support for the anecdotal accounts. Moreover, most ideological differences in lifestyle cannot be explained by demographic covariates alone. The authors propose a surprisingly simple solution to the puzzle of lifestyle politics. Computational experiments show how the self-reinforcing dynamics of homophily and influence dramatically amplify even very small elective affinities between lifestyle and ideology, producing a stereotypical world of “latte liberals” and “bird-hunting conservatives” much like the one in which we live.
Article
Neoclassical economics assumes that individuals have stable and context-independent preferences, and uses preference-satisfaction as a normative criterion. By calling this assumption into question, behavioural findings cause fundamental problems for normative economics. A common response to these problems is to treat deviations from conventional rational-choice theory as mistakes, and to try to reconstruct the preferences that individuals would have acted on, had they reasoned correctly. We argue that this preference purification approach implicitly uses a dualistic model of the human being, in which an inner rational agent is trapped in an outer psychological shell. This model is psychologically and philosophically problematic.
Article
Consumers’ social identities stem from comparisons between themselves and others. These identities help determine consumption decisions. Unfortunately, perceptions of comparative traits and characteristics are frequently biased, which can lead to similarly biased consumption decisions. Five studies show that two incidental but commonplace marketing decisions can influence consumers’ estimates of their relative standing and thus their social identities by influencing estimates of how other consumers are distributed.
Article
Significance: Risk aversion is one of the most widely observed behaviors in the animal kingdom; hence, it must confer certain evolutionary advantages. We confirm this intuition analytically in a binary-choice model of decision-making—risk aversion emerges from mindless decision-making as the evolutionarily dominant behavior in stochastic environments with correlated reproductive risk across the population. The simplicity of our framework suggests that our results are likely to apply across species. From a policy perspective, our results underscore the importance of addressing systematic risk through insurance markets, capital markets, and government policy. However, our results also highlight the potential dangers of sustained government intervention, which can become a source of systematic risk in its own right.
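A minimal simulation in the spirit of the binary-choice framework described here: a lineage facing correlated (systematic) reproductive risk compares a sure reproductive outcome with a gamble of equal expected value. The offspring numbers, horizon, and strategy names are illustrative assumptions, not the paper's calibration.

```python
import math
import random

def avg_log_growth(strategy, generations=10_000, seed=0):
    """Average per-generation log growth of a lineage facing correlated
    (systematic) reproductive risk. 'safe' always yields 2 offspring; 'risky'
    yields 0 or 4 with equal probability -- the same expected value.
    All numbers are illustrative assumptions."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(generations):
        offspring = 2 if strategy == "safe" else (4 if rng.random() < 0.5 else 0)
        if offspring == 0:
            return float("-inf")       # one bad systematic shock ends the whole lineage
        total += math.log(offspring)
    return total / generations

print(avg_log_growth("safe"), avg_log_growth("risky"))
# The risk-averse lineage compounds at log(2) per generation, while the
# risk-neutral lineage is eventually wiped out: under systematic risk,
# risk aversion is the evolutionarily dominant behaviour.
```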