Patterns of Moral Judgment Derive From Nonmoral Psychological Representations

Department of Psychology, Harvard University; Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology.
Cognitive Science: A Multidisciplinary Journal (Impact Factor: 2.59). 08/2011; 35(6):1052-75. DOI: 10.1111/j.1551-6709.2010.01167.x
Source: PubMed


Ordinary people often make moral judgments that are consistent with philosophical principles and legal distinctions. For example, they judge killing as worse than letting die, and harm caused as a necessary means to a greater good as worse than harm caused as a side-effect (Cushman, Young, & Hauser, 2006). Are these patterns of judgment produced by mechanisms specific to the moral domain, or do they derive from other psychological domains? We show that the action/omission and means/side-effect distinctions affect nonmoral representations and provide evidence that their role in moral judgment is mediated by these nonmoral psychological representations. Specifically, the action/omission distinction affects moral judgment primarily via causal attribution, while the means/side-effect distinction affects moral judgment via intentional attribution. We suggest that many of the specific patterns evident in our moral judgments in fact derive from nonmoral psychological mechanisms, and especially from the processes of causal and intentional attribution.
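
The mediation claim in the abstract is at bottom a statistical one: the effect of, say, the action/omission manipulation on moral judgment should shrink toward zero once causal-attribution ratings are statistically controlled. As a rough illustration of that regression logic only (not the authors' actual data or analysis code), here is a minimal mediation sketch on simulated data; the variable names action, cause_rating, and moral_wrongness are invented for this example.

```python
# Hypothetical sketch of regression-based mediation (Baron & Kenny-style
# steps) on simulated data. By construction, the action/omission factor
# influences moral judgment only through the causal-attribution mediator.
import numpy as np

rng = np.random.default_rng(0)
n = 200

# X: 0 = harm by omission, 1 = harm by action
action = rng.integers(0, 2, size=n).astype(float)
# M: causal-attribution rating, assumed here to rise with action
cause_rating = 1.5 * action + rng.normal(0.0, 1.0, size=n)
# Y: moral-judgment severity, driven (by construction) entirely through M
moral_wrongness = 2.0 * cause_rating + rng.normal(0.0, 1.0, size=n)

def ols(y, *predictors):
    """Least-squares fit with an intercept; returns the coefficient vector."""
    X = np.column_stack([np.ones_like(y)] + list(predictors))
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

c_total = ols(moral_wrongness, action)[1]        # total effect of X on Y
a_path = ols(cause_rating, action)[1]            # X -> M path
beta = ols(moral_wrongness, action, cause_rating)
c_direct, b_path = beta[1], beta[2]              # X -> Y controlling M; M -> Y

print(f"total effect (c):   {c_total:.2f}")
print(f"indirect (a*b):     {a_path * b_path:.2f}")
print(f"direct effect (c'): {c_direct:.2f}  # near zero: consistent with mediation")
```

In this simulation the direct effect c' is close to zero while the indirect path a*b accounts for nearly all of the total effect, which is the signature pattern the mediation argument in the abstract turns on.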

    • "We examined moral dilemmas based on violations of principles of harm and fairness that varied on diverse dimensions (adapted from Greene et al., 2004). The defining characteristics of a " personal " or " impersonal " dilemma are disputed, for example, the set of 64 dilemmas (Greene et al., 2001) upon which many subsequent experimental studies have drawn contains dilemmas that vary on several important dimensions (e.g., Cushman & Young, 2011; Cushman et al., 2006; Moore et al., 2011; Nakamura, 2013; Royzman & Baron, 2002). The dilemmas in this earlier set differ in the moral principle violated—harm, fairness, honesty (e.g., killing, stealing, lying, corrupting), and in their outcomes being accidental or intentional as a means or a sideeffect ; they differ in the severity of the outcome, in whether it contains a benefit to oneself or to one other person, or to many other people, and in whether the protagonist has some responsibility or involvement in the unfolding events or is a bystander (e.g., Moore et al., 2008; Nakamura, 2013; Tr emoli ere & De Neys, 2013). "
    ABSTRACT: We report the results of two experiments that show that participants rely on both emotion and reason in moral judgments. Experiment 1 showed that when participants were primed to communicate feelings, they provided emotive justifications not only for personal dilemmas, e.g., pushing a man from a bridge that will result in his death but save the lives of five others, but also for impersonal dilemmas, e.g., hitting a switch on a runaway train that will result in the death of one man but save the lives of five others; when they were primed to communicate thoughts, they provided non-emotive justifications for both personal and impersonal dilemmas. Experiment 2 showed that participants read about a protagonist's emotions more quickly when the protagonist was faced with a personal dilemma than an impersonal one, but they read about the protagonist's decision to act or not act equally quickly for personal and impersonal dilemmas.
    Thinking & Reasoning 10/2014; 20(2):245-268. DOI: 10.1080/13546783.2013.877400 · 1.12 Impact Factor
    • "We suggest that this research is theoretically valuable because it contributes to a growing interest to explain moral judgments in terms of more fundamental psychological processes . For example , Cushman and Young ( 2011 ) write , " Research in moral psychology faces the parallel chal - lenge of distinguishing domain - specific moral computations from the effects of other domains on moral judgment . This is a challenge that echoes throughout the cognitive sci - ences as we discover how many ' higher ' mental functions depend upon a core set of con - ceptual primitives , and thereby reflect their idiosyncratic structure " ( p . "
    ABSTRACT: Past research has identified a number of asymmetries based on moral judgments. Beliefs about (a) what a person values, (b) whether a person is happy, (c) whether a person has shown weakness of will, and (d) whether a person deserves praise or blame seem to depend critically on whether participants themselves find the agent's behavior to be morally good or bad. To date, however, the origins of these asymmetries remain unknown. The present studies examine whether beliefs about an agent's “true self” explain these observed asymmetries based on moral judgment. Using the identical materials from previous studies in this area, a series of five experiments indicate that people show a general tendency to conclude that deep inside every individual there is a “true self” calling him or her to behave in ways that are morally virtuous. In turn, this belief causes people to hold different intuitions about what the agent values, whether the agent is happy, whether he or she has shown weakness of will, and whether he or she deserves praise or blame. These results not only help to answer important questions about how people attribute various mental states to others; they also contribute to important theoretical debates regarding how moral values may shape our beliefs about phenomena that, on the surface, appear to be decidedly non-moral in nature.
    Cognitive Science: A Multidisciplinary Journal 07/2014; 39(1). DOI: 10.1111/cogs.12134 · 2.59 Impact Factor
    • "To arrive at agent judgments, people search for causes of the detected norm-violating event; if the causes involve an agent, they wonder whether the agent acted intentionally; if she acted intentionally, what reasons she had; and if the event was not intentional, whether the agent could and should have prevented it [38]. The core elements here are causal and counterfactual reasoning and social cognition, and that is why a number of researchers suggest that moral cognition is no unique " module " or " engine " but derives from ordinary cognition [39], [40] "
    ABSTRACT: We propose that any robots that collaborate with, look after, or help humans - in short, social robots - must have moral competence. But what does moral competence consist of? We offer a framework for moral competence that attempts to be comprehensive in capturing capacities that make humans morally competent and that therefore represent candidates for a morally competent robot. We posit that human moral competence consists of four broad components: (1) A system of norms and the language and concepts needed to communicate about these norms; (2) moral cognition and affect; (3) moral decision making and action; and (4) moral communication. We sketch what we know and don't know about these four elements of moral competence in humans and, for each component, ask how we could equip an artificial agent with these capacities.
    Proceedings of IEEE International Symposium on Ethics in Engineering, Science, and Technology (ETHICS); 05/2014