Figure 7 - uploaded by Kurt Gray
Source publication
Liberals and conservatives disagree about morality, but explaining this disagreement does not require different moral foundations. All people share a common harm-based mind, making moral judgments based on what seems to cause harm—but people make different assumptions about who or what is especially vulnerable to harm. Liberals and conservatives empha...
Context in source publication
Context 1
... the first four factors had eigenvalues > 1.00 (ranging from 1.27 to 4.08), cumulatively explaining 80% of the variance in these items. Figure 7 shows the fit measures and BIC for exploratory analyses ranging from one to six factors. The RMSEA and TLI only reach adequate levels at four factors and beyond, suggesting that simpler solutions are not acceptable. ...
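The context above combines a Kaiser-criterion screen (eigenvalues > 1.00, with cumulative variance explained) with model fit indices (RMSEA, TLI, BIC) compared across one- to six-factor solutions. As a minimal sketch of the eigenvalue screen only, using simulated data (the item count, loading pattern, and noise level below are hypothetical and not taken from the study), one might compute:

```python
import numpy as np

rng = np.random.default_rng(0)
n_obs, n_items, n_factors = 500, 12, 4

# Hypothetical structure: each block of 3 items loads 0.7 on one latent factor.
loadings = np.zeros((n_items, n_factors))
for j in range(n_factors):
    loadings[3 * j:3 * (j + 1), j] = 0.7

factor_scores = rng.standard_normal((n_obs, n_factors))
items = factor_scores @ loadings.T + 0.5 * rng.standard_normal((n_obs, n_items))

corr = np.corrcoef(items, rowvar=False)            # item correlation matrix
eigvals = np.sort(np.linalg.eigvalsh(corr))[::-1]  # eigenvalues, descending

n_retained = int(np.sum(eigvals > 1.0))            # Kaiser criterion: eigenvalue > 1.00
cum_var = np.cumsum(eigvals) / eigvals.sum()       # cumulative variance explained

print("eigenvalues > 1.00:", n_retained)
print(f"cumulative variance at {n_retained} factors: {cum_var[n_retained - 1]:.0%}")
```

Note that RMSEA, TLI, and BIC are computed from fitted factor models rather than from the raw eigenvalue decomposition, so they are not reproduced in this sketch.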
Citations
... This approach might ensure that participants engage with reliable sources, and minimize the risk of misinformation shaping their moral reasoning. For instance, prompting participants to challenge their assumptions about which agents are more vulnerable in each bioethical controversy could potentially resolve disagreements (Womick et al., 2024). ...
Many bioliberals endorse broadly consequentialist frameworks in normative ethics, implying that a progressive stance on matters of bioethical controversy could stem from outcome‐based reasoning. This raises an intriguing empirical prediction: encouraging outcome‐based reflection could yield a shift toward bioliberal views among nonexperts as well. To evaluate this hypothesis, we identified empirical premises that underlie moral disagreements on seven divisive issues (e.g., vaccines, abortion, or genetically modified organisms). In exploratory and confirmatory experiments, we assessed whether people spontaneously engage in outcome‐based reasoning by asking how their moral views change after momentarily reflecting on the underlying empirical questions. Our findings indicate that momentary reflection had no overall treatment effect on the central tendency or the dispersion in moral attitudes when compared to prereflection measures collected 1 week prior. Autoregressive models provided evidence that participants engaged in consequentialist moral reasoning, but this self‐guided reflection produced neither moral “progress” (shifts in the distributions’ central tendency) nor moral “consensus” (reductions in their dispersion). These results imply that flexibility in people's search for empirical answers may limit the potential for outcome‐based reflection to foster moral consensus.
... This is evident in the examples that we began with: Tibetan Buddhists, more than Americans, perceive gossip as causing sickness in communities, and Ugandans, more than Europeans, perceive witchcraft as a viable way to harm others. Even within the United States, people disagree widely about how much fetuses, the environment, or sacred books can truly suffer, and these different perceptions drive moral disagreements (Womick et al. 2024). ...
... MFT is influential because it provides intuitive language to describe some moral differences across politics (Graham et al. 2009). However, the primary instrument for measuring moral foundations, the Moral Foundations Questionnaire, has been criticized for political bias in item wording (Womick et al. 2024) and for narrowly operationalizing these broad constructs (Gray et al. 2022a) in ways that favor conservatives. For example, items assessing purity ask whether "chastity is an important and valuable virtue," and items assessing authority ask whether "men and women have different roles to play in society." ...
... A harm-based model of the moral mind is a constructionist theory (Cameron et al. 2015), because it suggests that harm is constructed from basic psychological ingredients (e.g., intention, causation, suffering) and is perceived based on cultural scripts and social understandings. Cultures have different ideas about which entities are intentional moral agents (Gray & Wegner 2010), who is vulnerable to harm (Womick et al. 2024), and which acts cause suffering (Shweder et al. 1997), and this allows for pluralism in the perception of harm, and thus of immorality. ...
Moral judgments differ across cultures and politics, but they share a common theme in our minds: perceptions of harm. Both cultural ethnographies on moral values and psychological research on moral cognition highlight this shared focus on harm. Perceptions of harm are constructed from universal cognitive elements—including intention, causation, and suffering—but depend on the cultural context, allowing many values to arise from a common moral mind. This review traces the concept of harm across philosophy, cultural anthropology, and psychology, then discusses how different values (e.g., purity) across various taxonomies are grounded in perceived harm. We then explore two theories connecting culture to cognition—modularity and constructionism—before outlining how pluralism across human moral judgment is explained by the constructed nature of perceived harm. We conclude by showing how different perceptions of harm help drive political disagreements and reveal how sharing stories of harm can help bridge moral divides.
Whether and when to censor hate speech are long-standing points of contention in the US. The latest iteration of these debates entails grappling with content regulation on social media in an age of intense partisan polarization. But do partisans disagree about what types of hate speech to censor on social media or do they merely differ on how much hate speech to censor? And do they understand out-party censorship preferences? We examine these questions in a nationally representative conjoint survey experiment (participant N = 3,357; decision N = 40,284). We find that, although Democrats support more censorship than Republicans, partisans generally agree on what types of hate speech are most deserving of censorship in terms of the speech’s target, source, and severity. Despite this substantial cross-party agreement, partisans mistakenly believe that members of the other party prioritize protecting different targets of hate speech. For example, a major disconnect between the two parties is that Democrats overestimate and Republicans underestimate the other party’s willingness to censor speech targeting Whites. We conclude that partisan differences on censoring hate speech are largely based on free speech values and misperceptions rather than identity-based social divisions.