Truth and Loyalty


Abstract

This paper explores the relationship between truth and loyalty as it pertains to epistemic issues within contemporary Western politics. One now familiar concern is that an increasing number of people determine their beliefs according to what demonstrating loyalty to their group requires, rather than according to the facts of an independent and objective reality, as a proper concern for truthfulness demands. Whereas “they” base their beliefs on what loyalty to their group requires, “our” beliefs are justified by facts and evidence. Such contrasts cast loyalty and truth as necessarily antagonistic. This paper gives us further reason for thinking that pitting loyalty against truthfulness at some very general or conceptual level is deeply misguided. More significantly, it seeks to show that the more helpful contrast is between those who are loyal to identities that value truthfulness in such a way that every other part of that identity is revisable if it comes into conflict with truth, and those who are loyal to identities that subordinate truth to other ends or goals. Acknowledging this allows us to better appreciate various aspects of how the relationship between truth and loyalty is playing out in contemporary politics. Chief among these is how our own commitment to truthfulness is itself embedded in a particular identity, an identity that we not only often fail to acknowledge as such but which requires us to think harder about the ways in which it might itself sustain the dynamics of conflict and contestation, antagonizing those who do not share it and driving them farther from the truthfulness we extol.


Article
Everyday public denial of anthropogenically caused climate change (ACC) has complex antecedents and exists on both individual and institutional levels. Earlier research has linked ACC denial to opposition to formal science and elites, perceived threats to the industrialist capitalist order, and existing system properties. Research also suggests that trust in public organizations is a key factor in determining support for or opposition to climate change policies. In this paper, we explore the possibility that right-wing populism and anti-elitist attitudes fuel both ACC denial and low trust in environmental institutions. We surveyed a representative sample of Norwegians (N = 3,032) to measure ACC denial and how denial is linked to socio-demographic characteristics, trust in environmental institutions, attitudes toward elites and immigration, and environmental attitude orientations. Results show that lack of trust in environmental institutions is strongly associated with ACC denial, and furthermore that the degree of trust—or lack thereof—is partly a function of anti-elitist attitudes, opposition to migration, and views of nature.
Article
Scholars have maintained that public attitudes often diverge from expert consensus due to ideology-driven motivated reasoning. However, this is not a sufficient explanation for less salient and politically charged questions. More attention needs to be given to anti-intellectualism—the generalized mistrust of intellectuals and experts. Using data from the General Social Survey and a survey of 3,600 Americans on Amazon Mechanical Turk, I provide evidence of a strong association between anti-intellectualism and opposition to scientific positions on climate change, nuclear power, GMOs, and water fluoridation, particularly for respondents with higher levels of political interest. Second, a survey experiment shows that anti-intellectualism moderates the acceptance of expert consensus cues such that respondents with high levels of anti-intellectualism actually increase their opposition to these positions in response. Third, evidence shows anti-intellectualism is connected to populism, a worldview that sees political conflict as primarily between ordinary citizens and a privileged societal elite. Exposure to randomly assigned populist rhetoric, even that which does not pertain to experts directly, primes anti-intellectual predispositions among respondents in the processing of expert consensus cues. These findings suggest that rising anti-elite rhetoric may make anti-intellectual sentiment more salient in information processing.
Article
Recently, Americans have become increasingly likely to hold anti-intellectual attitudes (i.e., negative affect toward scientists and other experts). However, few have investigated the political implications of anti-intellectualism, and much empirical uncertainty surrounds whether or not these attitudes can be mitigated. Drawing on cross-sectional General Social Survey (GSS) data and a national election panel in 2016, I find that anti-intellectualism is associated with not only the rejection of policy-relevant matters of scientific consensus but support for political movements (e.g., “Brexit”) and politicians (e.g., George Wallace, Donald Trump) who are skeptical of experts. Critically, though, I show that these effects can be mitigated. Verbal intelligence plays a strong role in mitigating anti-intellectual sympathies, compared with previously studied potential mitigators. I conclude by discussing how scholars might build on this research to study the political consequences of anti-intellectualism in the future.
Article
Ideologically committed people are similarly motivated to avoid ideologically crosscutting information. Although some previous research has found that political conservatives may be more prone to selective exposure than liberals are, we find similar selective exposure motives on the political left and right across a variety of issues. The majority of people on both sides of the same-sex marriage debate willingly gave up a chance to win money to avoid hearing from the other side (Study 1). When thinking back to the 2012 U.S. Presidential election (Study 2), ahead to upcoming elections in the U.S. and Canada (Study 3), and about a range of other Culture War issues (Study 4), liberals and conservatives reported similar aversion toward learning about the views of their ideological opponents. Their lack of interest was not due to already being informed about the other side, nor was it attributable to election fatigue. Rather, people on both sides indicated that they anticipated that hearing from the other side would induce cognitive dissonance (e.g., require effort, cause frustration) and undermine a sense of shared reality with the person expressing disparate views (e.g., damage the relationship; Study 5). A high-powered meta-analysis of our data sets (N = 2,417) did not detect a difference in the intensity of liberals' (d = 0.63) and conservatives' (d = 0.58) desires to remain in their respective ideological bubbles.
Article
Decision scientists have identified various plausible sources of ideological polarization over climate change, gun violence, national security, and like issues that turn on empirical evidence. This paper describes a study of three of them: the predominance of heuristic-driven information processing by members of the public; ideologically motivated reasoning; and the cognitive-style correlates of political conservativism. The study generated both observational and experimental data inconsistent with the hypothesis that political conservatism is distinctively associated with either un-reflective thinking or motivated reasoning. Conservatives did no better or worse than liberals on the Cognitive Reflection Test (Frederick, 2005), an objective measure of information-processing dispositions associated with cognitive biases. In addition, the study found that ideologically motivated reasoning is not a consequence of over-reliance on heuristic or intuitive forms of reasoning generally. On the contrary, subjects who scored highest in cognitive reflection were the most likely to display ideologically motivated cognition. These findings corroborated an alternative hypothesis, which identifies ideologically motivated cognition as a form of information processing that promotes individuals' interests in forming and maintaining beliefs that signify their loyalty to important affinity groups. The paper discusses the practical significance of these findings, including the need to develop science communication strategies that shield policy-relevant facts from the influences that turn them into divisive symbols of political identity.
Article
According to a traditional view of self-deception, the phenomenon is an intrapersonal analogue of stereotypical interpersonal deception. In the latter case, deceivers intentionally deceive others into believing something, p, and there is a time at which the deceivers believe that p is false while their victims falsely believe that p is true. If self-deception is properly understood on this model, self-deceivers intentionally deceive themselves into believing something, p, and there is a time at which they believe that p is false while also believing that p is true. Elsewhere (most recently in Mele, 2001), I have criticized the traditional conception of self-deception and defended an alternative, deflationary view according to which self-deception does not entail any of the following: intentionally deceiving oneself; intending (or trying) to deceive oneself, or to make it easier for oneself to believe something; concurrently believing each of two contradictory propositions. Indeed, I have argued that garden-variety instances of self-deception do not include any of these things. On my view, to put it simply, people enter self-deception in acquiring a belief that p if and only if p is false and they acquire the belief in a suitably biased way. Obviously, this saddles me with the burden of showing what suitable bias amounts to, and I have had a lot to say about that. The suitability at issue is a matter of kind of bias, degree of bias, and the nondeviance of causal connections between biasing processes (or events) and the acquisition of the belief that p. In Mele, 2001 (pp. 106-12), I suggested a test for relevant bias. I called it ‘the impartial observer test,’ and I argued that its appropriateness is underwritten by…
Article
Reasoning is generally seen as a means to improve knowledge and make better decisions. However, much evidence shows that reasoning often leads to epistemic distortions and poor decisions. This suggests that the function of reasoning should be rethought. Our hypothesis is that the function of reasoning is argumentative. It is to devise and evaluate arguments intended to persuade. Reasoning so conceived is adaptive given the exceptional dependence of humans on communication and their vulnerability to misinformation. A wide range of evidence in the psychology of reasoning and decision making can be reinterpreted and better explained in the light of this hypothesis. Poor performance in standard reasoning tasks is explained by the lack of argumentative context. When the same problems are placed in a proper argumentative setting, people turn out to be skilled arguers. Skilled arguers, however, are not after the truth but after arguments supporting their views. This explains the notorious confirmation bias. This bias is apparent not only when people are actually arguing, but also when they are reasoning proactively from the perspective of having to defend their opinions. Reasoning so motivated can distort evaluations and attitudes and allow erroneous beliefs to persist. Proactively used reasoning also favors decisions that are easy to justify but not necessarily better. In all these instances traditionally described as failures or flaws, reasoning does exactly what can be expected of an argumentative device: Look for arguments that support a given conclusion, and, ceteris paribus, favor conclusions for which arguments can be found.
Article
A meta-analysis assessed whether exposure to information is guided by defense or accuracy motives. The studies examined information preferences in relation to attitudes, beliefs, and behaviors in situations that provided choices between congenial information, which supported participants' pre-existing attitudes, beliefs, or behaviors, and uncongenial information, which challenged these tendencies. Analyses indicated a moderate preference for congenial over uncongenial information (d = 0.36). As predicted, this congeniality bias was moderated by variables that affect the strength of participants' defense motivation and accuracy motivation. In support of the importance of defense motivation, the congeniality bias was weaker when participants' attitudes, beliefs, or behaviors were supported prior to information selection; when participants' attitudes, beliefs, or behaviors were not relevant to their values or not held with conviction; when the available information was low in quality; when participants' closed-mindedness was low; and when their confidence in the attitude, belief, or behavior was high. In support of the importance of accuracy motivation, an uncongeniality bias emerged when uncongenial information was relevant to accomplishing a current goal.
Article
It is proposed that motivation may affect reasoning through reliance on a biased set of cognitive processes--that is, strategies for accessing, constructing, and evaluating beliefs. The motivation to be accurate enhances use of those beliefs and strategies that are considered most appropriate, whereas the motivation to arrive at particular conclusions enhances use of those that are considered most likely to yield the desired conclusion. There is considerable evidence that people are more likely to arrive at conclusions that they want to arrive at, but their ability to do so is constrained by their ability to construct seemingly reasonable justifications for these conclusions. These ideas can account for a wide variety of research concerned with motivated reasoning.
Article
Four studies demonstrated both the power of group influence in persuasion and people's blindness to it. Even under conditions of effortful processing, attitudes toward a social policy depended almost exclusively upon the stated position of one's political party. This effect overwhelmed the impact of both the policy's objective content and participants' ideological beliefs (Studies 1-3), and it was driven by a shift in the assumed factual qualities of the policy and in its perceived moral connotations (Study 4). Nevertheless, participants denied having been influenced by their political group, although they believed that other individuals, especially their ideological adversaries, would be so influenced. The underappreciated role of social identity in persuasion is discussed.
Article
As current events around the world have illustrated, epistemological issues are at the center of our political lives. It has become increasingly difficult to discern legitimate sources of evidence, misinformation spreads faster than ever, and the role of truth in politics has allegedly decayed in recent years. It is therefore no coincidence that political discourse is currently saturated with epistemic notions like “post-truth,” “fake news,” “truth decay,” “echo chambers,” and “alternative facts.” This book brings together leading political philosophers and epistemologists to explore ways in which the analytic and conceptual tools of epistemology bear on political philosophy, and vice versa. It is organized around three broad themes: truth and knowledge in politics; epistemic problems for democracy; and disagreement and polarization. This book investigates topics such as: the extent and implications of political ignorance, the value of democratic deliberation, the significance of epistemic considerations for political legitimacy, the epistemology of political disagreement, identity politics, political bullshit, and weaponized skepticism. A premise underlying the development of political epistemology is that, beyond a certain point, progress on certain foundational issues in both political philosophy and epistemology cannot be achieved without sharing insights across fields. By bringing political philosophers into conversation with epistemologists, this volume promotes more cross-pollination of ideas while also highlighting the richness and diversity of political epistemology as a newly emerging field.
Article
Contemporary political discourse is awash with concerns about truthfulness, understood as the virtue of making sure that our beliefs are true, in political life. The central argument of this paper is that it is not only possible for us to be self‐deceived as to our own truthfulness but that there is good reason to suspect certain aspects of the way we understand and value truthfulness make it something which we may be particularly prone to being self‐deceived about. If that is correct, then not only do we have further reason for thinking that self‐deception in politics may be more common than we might like to think, it also (a) helps us understand why claims about truthfulness seem more likely to perpetuate and intensify conflicts in politics; (b) suggests that the possibility of our being self‐deceived about our truthfulness stands sufficiently independent of our first‐order beliefs, be they true or false, such that it is likely to appear across the various political divides rather than being exclusive to one group; and (c) requires us to reconsider the problem represented by “post‐truth” politics and the responses that might be appropriate to it.
Article
The decline in trust in the scientific community in the United States among political conservatives has been well established. But this observation is complicated by remarkably positive and stable attitudes toward scientific research itself. What explains the persistence of positive belief in science in the midst of such dramatic change? By leveraging research on the performativity of conservative identity, we argue that conservative scientific institutions have manufactured a scientific cultural repertoire that enables participation in this highly valued epistemological space while undermining scientific authority perceived as politically biased. We test our hypothesized link between conservative identity and scientific perceptions using panel data from the General Social Survey. We find that those with stable conservative identities hold more positive attitudes toward scientific research while simultaneously holding more negative attitudes towards the scientific community compared to those who switch to and from conservative political identities. These findings support a theory of a conservative scientific repertoire that is learned over time and that helps orient political conservatives in scientific debates that have political repercussions. Implications of these findings are discussed for researchers interested in the cultural differentiation of scientific authority and for stakeholders in scientific communication and its public policy.
Book
In this 1989 book Rorty argues that thinkers such as Nietzsche, Freud, and Wittgenstein have enabled societies to see themselves as historical contingencies, rather than as expressions of underlying, ahistorical human nature or as realizations of suprahistorical goals. This ironic perspective on the human condition is valuable on a private level, although it cannot advance the social or political goals of liberalism. In fact Rorty believes that it is literature not philosophy that can do this, by promoting a genuine sense of human solidarity. A truly liberal culture, acutely aware of its own historical contingency, would fuse the private, individual freedom of the ironic, philosophical perspective with the public project of human solidarity as it is engendered through the insights and sensibilities of great writers. The book has a characteristically wide range of reference from philosophy through social theory to literary criticism. It confirms Rorty's status as a uniquely subtle theorist, whose writing will prove absorbing to academic and nonacademic readers alike.
Article
Many philosophers have claimed that relying on the testimony of others in normative questions is in some way problematic. In this paper, I consider whether we should be troubled by deference in democratic politics. I argue that (i) deference is less problematic in impure cases of political deference, and (ii) most non-ideal cases of political deference are impure. To establish the second point, I rely on empirical research from political psychology. I also outline two principled reasons why we should expect political deference to be untroubling: political problems are difficult and require a division of epistemic labour; furthermore, there is value in exercising epistemic solidarity with those one shares an identity or interests with.
Book
Political Self-Deception, by Anna Elisabetta Galeotti (Cambridge Core: Political Philosophy).
Book
No part of philosophy is as disconnected from its history as is epistemology. After Certainty offers a reconstruction of that history as the story of an epistemic ideal first formulated by Plato and Aristotle, later developed throughout the Middle Ages, and then dramatically reformulated in the seventeenth century. In watching these debates unfold over the centuries, we come to understand why epistemology has traditionally been embedded within a much wider sphere of concerns about human nature and the reality of the world we live in. We also come to see why epistemology has become today a much narrower and specialized field, concerned with the conditions under which it is true to say, in English, that someone knows something. Looking back to earlier days, this study makes its way through the various and changing ideals of inquiry that have been pursued over the centuries, from the expectations of certainty and explanatory depth to the rising concern over evidence and precision, as famously manifested in the new science. At both the sensory and the intellectual levels, the initial expectation of infallibility is seen to give way to mere subjective indubitability, and in the end it is unclear whether anything remains of the epistemic ideals that philosophy has long pursued. All we may ultimately be left with is hope.
Article
Americans’ attitudes toward scientists have become more negative in recent years. Although researchers have considered several individual-level factors that might explain this change, little attention has been given to the political actions of scientists themselves. This article considers how March for Science rallies that took place across the United States in late April 2017 influenced Americans’ attitudes toward scientists and the research they produce. An online panel study surveying respondents three days before and two days after the March found that liberals’ and conservatives’ attitudes toward scientists polarized following the March. Liberals’ attitudes toward scientists became more positive whereas conservatives’ attitudes became more negative. However, the March appears to have had little effect on the public’s attitudes about scientific research. In addition to answering questions about the March’s political impact, this research calls attention to the possibility that the political actions of scientists can shape public opinion about them.
Article
The primary goal of this paper is to propose a working analysis of the disposition of closed-mindedness. I argue that closed-mindedness (CM) is an unwillingness or inability to engage (seriously) with relevant intellectual options. Dogmatism (DG) is one kind of closed-mindedness: it is an unwillingness to engage seriously with relevant alternatives to the beliefs one already holds. I do not assume that the disposition of closed-mindedness is always an intellectual vice; rather I treat the analysis of the disposition, and its status as an intellectual vice, as separate questions. The concluding section develops a framework for determining the conditions under which closed-mindedness will be an intellectual vice.
Article
Skepticism about citizen competence is a core component of Christopher H. Achen and Larry M. Bartels’s call, in Democracy for Realists, for rethinking our model of democracy. In this paper I suggest that the evidence for citizen incompetence is not as clear as we might think; important research shows that we are good group problem solvers even if we are poor solitary truth seekers. I argue that deliberative democracy theory has a better handle on this fundamental fact of human cognition and therefore has a more realistic view of the conditions that might improve citizen competence.
Article
This research note presents evidence that political polarization over the reality of human-caused climate change increases in tandem with individuals’ scores on a standard measure of actively open-minded thinking. This finding is at odds with the position that attributes political conflict over facts to a personality trait of closed-mindedness associated with political conservatism.
Article
We prize loyalty in our friends, lovers and colleagues, but loyalty raises difficult questions. What is the point of loyalty? Should we be loyal to country, just as we are loyal to friends and family? Can the requirements of loyalty conflict with the requirements of morality? In this book, originally published in 2007, Simon Keller explores the varieties of loyalty and their psychological and ethical differences, and concludes that loyalty is an essential but fallible part of human life. He argues that grown children can be obliged to be loyal to their parents, that good friendship can sometimes conflict with moral and epistemic standards, and that patriotism is intimately linked with certain dangers and delusions. He goes on to build an approach to the ethics of loyalty that differs from standard communitarian and universalist accounts. His book will interest a wide range of readers in ethics and political philosophy.
Article
We propose a model of motivated skepticism that helps explain when and why citizens are biased information processors. Two experimental studies explore how citizens evaluate arguments about affirmative action and gun control, finding strong evidence of a prior attitude effect such that attitudinally congruent arguments are evaluated as stronger than attitudinally incongruent arguments. When reading pro and con arguments, participants (Ps) counterargue the contrary arguments and uncritically accept supporting arguments, evidence of a disconfirmation bias. We also find a confirmation bias—the seeking out of confirmatory evidence—when Ps are free to self-select the source of the arguments they read. Both the confirmation and disconfirmation biases lead to attitude polarization—the strengthening of attitudes from time 1 to time 2—especially among those with the strongest priors and highest levels of political sophistication. We conclude with a discussion of the normative implications of these findings for rational behavior in a democracy.
Chapter
This chapter provides an overview of self-affirmation theory. Self-affirmation theory asserts that the overall goal of the self-system is to protect an image of its self-integrity, of its moral and adaptive adequacy. When this image of self-integrity is threatened, people respond in such a way as to restore self-worth. The chapter illustrates how self-affirmation affects not only people's cognitive responses to threatening information and events, but also their physiological adaptations and actual behavior. It examines the ways in which self-affirmations reduce threats to the self at the collective level, such as when people confront threatening information about their groups. It reviews factors that qualify or limit the effectiveness of self-affirmations, including situations where affirmations backfire and lead to greater defensiveness and discrimination. The chapter discusses the connection of self-affirmation theory to other motivational theories of self-defense and reviews relevant theoretical and empirical advances. It concludes with a discussion of the implications of self-affirmation theory for interpersonal relationships and coping.
Article
We report the results of three experimental tests of the “hot cognition” hypothesis, which posits that all sociopolitical concepts that have been evaluated in the past are affectively charged and that this affective charge is automatically activated within milliseconds on mere exposure to the concept, appreciably faster than conscious appraisal of the object. We find support for the automaticity of affect toward political leaders, groups, and issues. We conclude with a discussion of the “so what?” question—the conceptual, substantive, and normative implications of hot cognition for political judgments, evaluations, and choice. One clear expectation, given that affect appears to be activated automatically on mere exposure to sociopolitical concepts, is that most citizens, but especially those sophisticates with strong political attitudes, will be biased information processors.
Article
Self-deception is made unnecessarily puzzling by the assumption that it is an intrapersonal analog of ordinary interpersonal deception. In paradigmatic cases, interpersonal deception is intentional and involves some time at which the deceiver disbelieves what the deceived believes. The assumption that self-deception is intentional and that the self-deceiver believes that some proposition is true while also believing that it is false produces interesting conceptual puzzles, but it also produces a fundamentally mistaken view of the dynamics of self-deception. This target article challenges the assumption and presents an alternative view of the nature and etiology of self-deception. Drawing upon empirical studies of cognitive biases, it resolves familiar "paradoxes" about the dynamics of self-deception and the condition of being self-deceived. Conceptually sufficient conditions for self-deception are offered and putative empirical demonstrations of a kind of self-deception in which a subject believes that a proposition is true while also believing that it is false are criticized. Self-deception is neither irresolvably paradoxical nor mysterious, and it is explicable without the assistance of mental exotica. The key to understanding its dynamics is a proper appreciation of our capacity for acquiring and retaining motivationally biased beliefs.
Article
When making judgments, one may encounter not only justifiable factors, i.e., attributes which the judge thinks that he/she should take into consideration, but also unjustifiable factors, i.e., attributes which the judge wants to take into consideration but knows he/she should not. It is proposed that the influence of an unjustifiable factor on one's judgment depends on the presence of elasticity (ambiguity) in justifiable factors; the influence will be greater if there is elasticity than if there is not. Two studies involving different contexts demonstrated the proposed elasticity effect and suggested that the effect could be a result of a self-oriented justification process. Implications of this research for decisions involving a should-vs-want conflict are discussed.
Article
Roberts, David. “Donald Trump and the Rise of Tribal Epistemology.” Vox (blog).
Article
Kennan, George. George Kennan’s “Long Telegram.”