Article

Patterns of Moral Judgment Derive From Nonmoral Psychological Representations


Abstract

Ordinary people often make moral judgments that are consistent with philosophical principles and legal distinctions. For example, they judge killing as worse than letting die, and harm caused as a necessary means to a greater good as worse than harm caused as a side-effect (Cushman, Young, & Hauser, 2006). Are these patterns of judgment produced by mechanisms specific to the moral domain, or do they derive from other psychological domains? We show that the action/omission and means/side-effect distinctions affect nonmoral representations and provide evidence that their role in moral judgment is mediated by these nonmoral psychological representations. Specifically, the action/omission distinction affects moral judgment primarily via causal attribution, while the means/side-effect distinction affects moral judgment via intentional attribution. We suggest that many of the specific patterns evident in our moral judgments in fact derive from nonmoral psychological mechanisms, and especially from the processes of causal and intentional attribution.
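The abstract's central claim is a mediation pattern: the effect of a scenario feature (e.g., action vs. omission) on moral judgment runs through a nonmoral attribution. As a rough illustration, here is a minimal regression-based mediation decomposition on simulated data; the variable names, effect sizes, and the simulation itself are illustrative assumptions, not the authors' materials or analysis.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
X = rng.integers(0, 2, n).astype(float)      # scenario feature: omission (0) vs. action (1)
M = 0.8 * X + rng.normal(0, 1, n)            # mediator: causal-attribution rating
Y = 0.1 * X + 0.6 * M + rng.normal(0, 1, n)  # outcome: moral-wrongness rating

def slope(x, y):
    """OLS slope of y on x (with an intercept)."""
    design = np.column_stack([np.ones_like(x), x])
    return np.linalg.lstsq(design, y, rcond=None)[0][1]

def residualize(y, x):
    """Remove the (intercept + x) component from y."""
    design = np.column_stack([np.ones_like(x), x])
    beta = np.linalg.lstsq(design, y, rcond=None)[0]
    return y - design @ beta

c_total = slope(X, Y)                            # total effect of X on Y
a = slope(X, M)                                  # path a: X -> M
b = slope(residualize(M, X), residualize(Y, X))  # path b: M -> Y, controlling for X
indirect = a * b                                 # mediated (indirect) effect
direct = c_total - indirect                      # unmediated (direct) effect

print(f"total={c_total:.2f}  indirect={indirect:.2f}  direct={direct:.2f}")
```

In this simulation most of the total effect is carried by the mediator, which is the qualitative pattern the abstract reports for the action/omission distinction and causal attribution.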


... 8 This doubt has been propelled by (among other things) the finding that when people make moral judgments (i.e., judgments about rightness and wrongness), these judgments appear to draw on an array of brain functions and processes that are not purely, or even primarily, used for making moral judgments. For example, moral judgments typically employ areas of the brain believed to also be responsible for understanding the minds of others and the attribution of intentions, yet this is not a brain function whose primary or only purpose is to play a role in moral judgment (e.g., Borg et al., 2011; Cushman & Young, 2011; Greene, 2015b; Young & Dungan, 2012). ...
... 20 Since Greene's work, significantly more evidence has been adduced in favor of the view that moral judgments do not arise from a single system devoted to moral judgment formation at the level of brain processes. For example, Cushman and Young (2011) have found that our patterns of moral judgment can be attributed in part to regions of the brain responsible for the attribution of intentions and causation, general reasoning processes that might be engaged in a variety of judgment types. They conclude that our moral judgments are "derived" from more general judgment-forming processes (Cushman & Young, 2011, p. 1053). ...
... For example, Cushman and Young (2011) have found that our patterns of moral judgment can be attributed in part to regions of the brain responsible for the attribution of intentions and causation, general reasoning processes that might be engaged in a variety of judgment types. They conclude that our moral judgments are "derived" from more general judgment-forming processes (Cushman & Young, 2011, p. 1053). Borg et al. reach a similar conclusion, stating that when we judge an act to be "morally wrong" we are making use of brain regions that play "a general role [. . ...
Article
We argue that there is significant evidence for reconsidering the possibility that moral judgment constitutes a distinctive category of judgment. We begin by reviewing evidence and arguments from neuroscience and philosophy that seem to indicate that a diversity of brain processes result in verdicts that we ordinarily consider “moral judgments”. We argue that if these findings are correct, this is plausible reason for doubting that all moral judgments necessarily share common features: if diverse brain processes give rise to what we refer to as “moral judgments”, then we have reason to suspect that these judgments may have different features. After advancing this argument, we show that giving up the unity of moral judgment seems to effectively dissolve the internalism/externalism debate concerning motivation within the field of metaethics.
... Many researchers assume that the action effect for causal judgments is related to the action effect for judgments of blame (e.g., Cushman & Young, 2011; Bostyn & Roets, 2016; Siegel et al., 2017). Some studies, in fact, have found that people judge actions as more blameworthy than inactions, and they have investigated the relationship between causal judgments and judgments of blame (e.g., Bostyn & Roets, 2016). ...
... For both blameworthy and praiseworthy choices, participants judged that actions were more blameworthy and praiseworthy than inactions (Siegel et al., 2017). Consistent with much earlier work (Cushman & Young, 2011), the authors concluded that people's moral judgments for the action effect for blame (and for praise) were influenced by people's causal judgments (Siegel et al., 2017). Emphasizing victims' roles in counterfactual events by making their pre-victimization behaviors salient intensifies moral judgment (e.g., Branscombe et al., 1996; Roese, 1997). Furthermore, in line with research on normality and causal judgments (Hitchcock & Knobe, 2007), the normality of victim-related counterfactuals seems to matter for blame. ...
... Researchers have found the action effect for causal judgment in many domains (Cushman & Young, 2011; Henne et al., 2019; Jamison et al., 2020; Spranca et al., 1991; Walsh & Sloman, 2011; Willemsen & Reuter, 2016), yet we did not find it for ratings of victims' causal contributions, nor for ratings of victims' blameworthiness, in a range of crimes. ...
... 1. Causation. An action that leads to a particular outcome is judged to be in some sense more causally responsible for that outcome than an omission that leads to the same or an equivalent outcome; for instance, actions can be physically connected to outcomes in a way that omissions cannot (Dowe, 2004), and people prefer to punish those whom they hold more causally responsible (see e.g., Baron & Ritov, 2009; Cushman & Young, 2011; Greene et al., 2009; Jamison, Yay & Feldman, 2020; Royzman & Baron, 2002; Spranca et al., 1991). ...
... 2. Intention. People judge harmful outcomes that come about via omission to be less intentional than equivalently harmful actions (Hayashi, 2015), and prefer to punish those who are judged to have manifestly harmful intentions (Cushman & Young, 2011; Jamison et al., 2020; but see Ritov & Baron, 1990; Royzman & Baron, 2002). ...
... Our failure to find a significant association between causal perception and punishment behavior may of course be due to a lack of power, but it is prima facie at odds with several findings suggesting that the preference to commit harm by omission is at least partly explained by differences in causal perception of acts versus omissions (Baron & Ritov, 2009; Cushman & Young, 2011; Greene et al., 2009). There are several possible explanations for why we failed to find this effect. ...
Article
Harmful acts are punished more often and more harshly than harmful omissions. This asymmetry has variously been ascribed to differences in how individuals perceive the causal responsibility of acts versus omissions and to social norms that tend to proscribe acts more frequently than omissions. This paper examines both of these hypotheses, in conjunction with a new hypothesis: that acts are punished more than omissions because it is usually more efficient to do so. In typical settings, harms occur as a result of relatively few harmful actions, but many individuals may have had the opportunity to prevent or rectify the harm. Penalising actors therefore requires relatively few punishment events compared to punishing omitters. We employ a novel group paradigm in which harm occurs only if both actors and omitters contribute to the harm. Subjects play a repeated economic game in fixed groups involving a social dilemma (total N = 580): on each round self-interest favours harmful actions (taking from another) and harmful omissions (failing to repair the victim’s loss), but the group payoff is maximized if individuals refrain from these behaviors. In one treatment harm occurs as a result of one action and two omissions; in the other, it is the result of two actions and one omission. In the second treatment, the more efficient strategy to maximize group benefit is to punish omissions. We find that subjects continue to prefer to punish acts rather than omissions, with two important caveats. There is still a substantial level of punishment of omissions, and there is also evidence of some responsiveness to the opportunity to enforce a more efficient rule. Further analysis addresses whether the omission effect is associated with asymmetric norm-based attitudes: a substantial proportion of subjects regard it as equally fair to punish harmful acts and omissions, while another portion endorse an asymmetry; and punishment behavior correlates with these attitudes in both groups.
... And given our goal is to move beyond an examination of moral transgressions to emotionally impactful social actions more generally, it will be particularly useful to identify nonmoral features of actions that fit this bill. Fortunately, Cushman and Young (2011) have already done much of this heavy lifting by identifying two such components that, in tandem, compose responsibility. One component is intent. ...
... One component is intent. Judgments of intentions are what bridge certain features of moral actions (e.g., causing a harm as a means to an end as opposed to a side effect of another action; Cushman & Young, 2011) and a desire to blame (Ames & Fiske, 2015). Coming full circle to our own interest in whether responsibility amplifies empathic forecasts, such desires may lead people to claim that actors actually did more harm and to misremember that actors exacted more damage than the objective evidence reflected (Ames & Fiske, 2013). ...
... For example, when one brings about a harm through a direct action (a commission) instead of a failure to act (an omission), one is seen to be more clearly the cause of that action. As a result, more blame is offered (Cushman & Young, 2011). According to dyadic completion, such elevated blame is itself a cue to elevated emotional impact. ...
Article
Inspired by theoretical and empirical work on emotion, psychological distance, moral psychology, and people's tendency to overgeneralize ecologically valid relationships, 3 studies explore whether, why, and for whom responsibility amplifies empathic forecasts (RAEF)-the perception that an intentional agent's social actions will produce stronger affective responses in others than if those same outcomes were to occur randomly or unintentionally. In Study 1, participants thought that pleasant or aversive videos would elicit stronger reactions when participants themselves (instead of the random determination of a computer) would select the video another would watch. This was explained by responsible agents' own stronger reactions to the stimuli. Study 2 identified what about agents' responsibility amplifies empathic forecasts: the combination of clearly causing and intending the other's outcome. Study 3 demonstrated that RAEF need not extend to all responsible agents equally. Participants considered how to divide (vs. how another participant would divide or how a computer would randomly split) $10 with a recipient. In this context, we found the weight of causal responsibility looms larger in the self's mind when the self is responsible for the recipient's fate than when another responsible agent is. Furthermore, the self thought that the recipient's emotional reaction would be more strongly influenced by the size of the self's own (compared to another's or a computer's) allocation decision. The Discussion focuses on how RAEF relates to other models connecting agency and experience, provides initial evidence that RAEF need not be egocentric, and identifies open questions that remain for future research. (PsycInfo Database Record (c) 2020 APA, all rights reserved).
... Psychological studies (e.g., Baron and Ritov, 2004; Cushman et al., 2006; Hauser et al., 2009; Arutyunova et al., 2013, 2016) demonstrated that harmful actions are consistently judged by individuals more harshly than equivalently harmful omissions (the omission bias, or the "action principle"); harms intended as the means to a goal are usually judged to be less permissible than harms foreseen as the side effect of a goal (the doctrine of double effect, or the "means principle"); and harms inflicted via physical contact are usually viewed as less acceptable compared with harms caused without physical contact (the "contact principle"). The omission bias is considered to be rooted in the processes of causal attribution (Cushman and Young, 2011): when harm occurs as a result of an omission, its cause is less obvious than when the same harm occurs as a result of an action, or, in other words, this distinction is accounted for by the difference between direct and indirect causation (Baron and Ritov, 2004). In contrast, the means distinction was shown to be primarily based on differences in the attribution of intentions (Cushman and Young, 2011). ...
... The omission bias is considered to be rooted in the processes of causal attribution (Cushman and Young, 2011): when harm occurs as a result of an omission, its cause is less obvious than when the same harm occurs as a result of an action, or, in other words, this distinction is accounted for by the difference between direct and indirect causation (Baron and Ritov, 2004). In contrast, the means distinction was shown to be primarily based on differences in the attribution of intentions (Cushman and Young, 2011). Thus, in our study we hypothesized that moral judgement about harmful omissions, as a more complex task due to the difficulty of causal attribution (Baron and Ritov, 2004; Cushman and Young, 2011), would be accompanied by higher complexity of HRV, as measured by PE, than moral judgement about harmful actions. ...
... In contrast, the means distinction was shown to be primarily based on differences in the attribution of intentions (Cushman and Young, 2011). Thus, in our study we hypothesized that moral judgement about harmful omissions, as a more complex task due to the difficulty of causal attribution (Baron and Ritov, 2004; Cushman and Young, 2011), would be accompanied by higher complexity of HRV, as measured by PE, than moral judgement about harmful actions. Additionally, we tested whether PE values during moral judgements of harms foreseen as the side effect of a goal are different from PE values during moral judgement of harms intended as the means to a goal, and during moral judgement of contactless harms in contrast to harms caused via physical contact. ...
Article
Recent research strongly supports the idea that cardiac activity is involved in the organisation of behaviour, including social behaviour and social cognition. The aim of this work was to explore the complexity of heart rate variability, as measured by permutation entropy, while individuals were making moral judgements about harmful actions and omissions. Participants (N = 58, 50% women, aged 21-52 years) were presented with a set of moral dilemmas describing situations in which sacrificing one person resulted in saving five other people. In line with previous studies, our participants consistently judged harmful actions as less permissible than equivalently harmful omissions (a phenomenon known as the "omission bias"). Importantly, the response times were significantly longer and permutation entropy of the heart rate was higher when participants were evaluating harmful omissions, as compared to harmful actions. These results may be viewed as a psychophysiological manifestation of differences in causal attribution between actions and omissions. We discuss the obtained results from the perspective of system-evolutionary theory and propose that heart rate variability reflects the complexity of the dynamics of neurovisceral activity within organism-environment interactions, including their social aspects. This complexity can be described in terms of entropy, and our work demonstrates the potential of permutation entropy as a tool for analyzing heart rate variability in relation to current behaviour and observed cognitive processes.
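The measure used in this abstract, permutation entropy, has a compact standard definition (Bandt & Pompe, 2002): reduce each embedded window of the series to its ordinal pattern and take the normalized Shannon entropy of the pattern distribution. The sketch below is a generic implementation; the default order and delay are illustrative, not the study's settings.

```python
from collections import Counter
from math import log, factorial

def permutation_entropy(series, order=3, delay=1):
    """Normalized permutation entropy of a 1-D series.

    Each length-`order` embedded window is reduced to its ordinal pattern
    (the argsort of its values); the Shannon entropy of the resulting
    pattern distribution is normalized by log(order!) to lie in [0, 1].
    """
    patterns = Counter()
    for i in range(len(series) - (order - 1) * delay):
        window = series[i:i + order * delay:delay]
        patterns[tuple(sorted(range(order), key=window.__getitem__))] += 1
    total = sum(patterns.values())
    entropy = -sum((c / total) * log(c / total) for c in patterns.values())
    return entropy / log(factorial(order))

# A monotone series uses a single ordinal pattern (entropy 0);
# an irregular series spreads over many patterns (entropy nearer 1).
print(permutation_entropy(list(range(10))))
print(permutation_entropy([4, 7, 9, 10, 6, 11, 3, 8, 2, 5]))
```

Higher values indicate a more complex, less predictable signal, which is the sense in which the study links higher heart-rate entropy to the harder causal-attribution task.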
... Indeed, causal responsibility is a necessary condition for the ascription of legal responsibility (Hart and Honoré, 1959; Tadros, 2005). Research in moral psychology has identified general cognitive processes such as causal and intentional attributions to explain patterns of responsibility judgments in both moral and non-moral domains (Cushman and Young, 2011; see also Spranca et al., 1991; Royzman and Baron, 2002; Mikhail, 2007; Waldmann and Dieterich, 2007; Lagnado and Channon, 2008; Baron and Ritov, 2009; Greene et al., 2009). For example, Cushman and Young (2011) show that action versus omission and means versus side-effect differences in moral judgments are mediated by their effects on non-moral representations of causal and intentional attributions. ...
... Research in moral psychology has identified general cognitive processes such as causal and intentional attributions to explain patterns of responsibility judgments in both moral and non-moral domains (Cushman and Young, 2011; see also Spranca et al., 1991; Royzman and Baron, 2002; Mikhail, 2007; Waldmann and Dieterich, 2007; Lagnado and Channon, 2008; Baron and Ritov, 2009; Greene et al., 2009). For example, Cushman and Young (2011) show that action versus omission and means versus side-effect differences in moral judgments are mediated by their effects on non-moral representations of causal and intentional attributions.1 Similarly, Mikhail (2007) accounts for the means versus side-effect distinction in terms of action plans which specify generic rather than morally specific reasoning (see also Kleiman-Weiner et al., 2015). ...
... For example, Kominsky et al. (2015) show that people are more likely to endorse an event (Alex's coin-flip coming up heads) as causally responsible for another event (Alex wins the game) when the contingency between the two is high (Alex wins if both the coin ...).
1 These two causal responsibility patterns are as follows: (i) Harm brought about by an action is deemed morally worse than harm brought about by an omission. (ii) People judge harm used as the necessary means to a goal to be worse than harm produced as the foreseen side-effect of a goal (Cushman and Young, 2011).
2 The executive's widow sued for compensation, but it was ruled that the negligence of lighting the match was not a cause of his death. ...
Article
How do people judge the degree of causal responsibility that an agent has for the outcomes of her actions? We show that a relatively unexplored factor – the robustness (or stability) of the causal chain linking the agent's action and the outcome – influences judgments of the agent's causal responsibility. In three experiments, we vary robustness by manipulating the number of background circumstances under which the action causes the effect, and find that causal responsibility judgments increase with robustness. In the first experiment, the robustness manipulation also raises the probability of the effect given the action. Experiments 2 and 3 control for probability-raising, and show that robustness still affects judgments of causal responsibility. In particular, Experiment 3 introduces an Ellsberg-type scenario to manipulate robustness, while keeping the conditional probability and the skill deployed in the action fixed. Experiment 4 replicates the results of Experiment 3, while contrasting judgments of causal strength and of causal responsibility. The results show that in all cases, the perceived degree of responsibility (but not of causal strength) increases with the robustness of the action-outcome causal chain.
... The conclusions of the review articles by Nadal (2013) and Brown and colleagues (2011) mirror those being reached in the neuroscientific investigation of moral judgment: cognitive scientists and moral psychologists have recently concluded that there does not seem to be a 'moral judgment' area of the brain, or specific process or set of neural processes, devoted to forming moral judgments (e.g., Borg et al., 2011; Cushman and Young, 2011; Decety and Cowell, 2014; Greene, 2015a, 2015b; Young and Dungan, 2012). Based on this neuroscientific evidence, as well as philosophical argumentation, some philosophers have similarly begun to conclude that 'moral judgment' is not a unified concept and that some of the philosophical positions on moral judgment that depend on the assumption that moral judgments are unified in some fashion need to be re-examined (e.g., McHugh et al., 2021; Railton, 2017; Thalia, 2012, 2014; Stich, 2006; Sackris, 2021; Sackris and Larsen, 2022; Cf. ...
Article
In philosophy of aesthetics, scholars commonly express a commitment to the premise that there is a distinctive type of judgment that can be meaningfully labeled “aesthetic”, and that these judgments are distinctively different from other types of judgments. We argue that, within an Aristotelian framework, there is no clear avenue for meaningfully differentiating “aesthetic” judgment from other types of judgment, and, as such, we aim to question the assumption that aesthetic judgment does in fact constitute a distinctive kind of judgment that is in need of, or can be subject to, distinctive theorizing. We advance our argument primarily through demonstrating that leading contemporary accounts of aesthetic judgment do not successfully distinguish a type of judgment in that they do not tell us how making an aesthetic judgment differs substantially from judging that 2 + 3 = 5, that football is entertaining, or that today is Tuesday.
... in making what we would classify as "moral" judgments that indicates that such judgments are the result of diverse brain processes. The evidence currently indicates that it is more likely that we have a general system for making judgments that we apply to contexts that we would label "moral" than that we have a specific moral judgment system (e.g., Greene, 2015; Cushman and Young, 2011; Borg et al., 2011; Young and Dungan, 2012; Decety and Cowell, 2014; Bzdok et al., 2012). 16 As an example of this domain-general approach to judgment formation, consider Joshua Greene's account. ...
Article
Since the 18th century, one of the key features of diagnosed psychopaths has been “moral colorblindness” or an inability to form moral judgments. However, attempts at experimentally verifying this moral incapacity have been largely unsuccessful. After reviewing the centrality of “moral colorblindness” to the study and diagnosis of psychopathy, I argue that the reason that researchers have been unable to verify that diagnosed psychopaths have an inability to make moral judgments is because their research is premised on the assumption that there is a specific moral faculty of the brain, or specific “moral” emotions, and that this faculty or set of emotions can become “impaired”. I review recent research and argue that we have good reason to think that there is no such distinct capacity for moral judgment, and that, as a result, it is impossible for someone’s “moral judgment faculty” to become selectively disabled. I then discuss the implications of such a position on psychopathy research, the coherence of the disorder, and the moral responsibility of psychopaths.
... When people make moral decisions, they often consider how others would judge them for behaving selfishly177,178. Harmful actions are judged more harshly than harmful inactions179,180, and causing harm by deviating from the status quo is blamed more than harming by default181,182. Therefore, reframing decisions to carry on with 'business as usual' during a pandemic as active decisions, rather than passive or default decisions, may make such behaviours less acceptable. ...
... The causal attribution process corresponds to the inference of the agent's causal role in the outcome of the action, grounded in Heider's (1958) theory of attribution (see also Cushman, 2008; Cushman & Young, 2011). The outcome is either directly caused by the agent's action or caused by another source. ...
Article
Over the past decade, moral judgments and their underlying decision processes have more frequently been considered from a dynamic and multi-factorial perspective rather than a binary approach (e.g., dual-system processes). The agent's intent and his or her causal role in the outcome-as well as the outcome importance-are key psychological factors that influence moral decisions, especially judgments of punishment. The current research aimed to study the influence of intent, outcome, and causality variations on moral decisions, and to identify their interaction during the decision process by embedding the moral scenarios within an adapted mouse-tracking paradigm. Findings of the preregistered study (final n = 80) revealed main effects for intent, outcome, and causality on judgments of punishment, and an interaction between the effects of intent and causality. We furthermore explored the dynamics of these effects during the decision process via the analysis of mouse trajectories in the course of time. It allowed detecting when these factors intervened during the trial time course. The present findings thus both replicate and extend previous research on moral judgment, and evidence that, despite some ongoing challenges, mouse-tracking represents a promising tool to investigate moral decision-making.
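Mouse-tracking studies like this one commonly quantify decision conflict as the trajectory's maximum deviation from an idealized straight-line response. Here is a minimal sketch of that index; the function name and coordinate format are assumptions for illustration, not the authors' pipeline.

```python
import numpy as np

def max_abs_deviation(xs, ys):
    """Maximum perpendicular distance of a mouse trajectory from the
    straight line joining its first and last recorded points, a common
    mouse-tracking index of attraction toward the competing response."""
    p0 = np.array([xs[0], ys[0]], dtype=float)
    p1 = np.array([xs[-1], ys[-1]], dtype=float)
    direction = (p1 - p0) / np.linalg.norm(p1 - p0)
    rel = np.column_stack([xs, ys]).astype(float) - p0
    # Signed perpendicular distance via the 2-D cross product
    perp = rel[:, 0] * direction[1] - rel[:, 1] * direction[0]
    return float(np.abs(perp).max())

# A perfectly straight movement deviates by 0; a curved one does not.
print(max_abs_deviation([0, 1, 2], [0, 1, 2]))   # 0.0
print(max_abs_deviation([0, 1, 1], [0, 0, 1]))   # ≈ 0.707 (1/sqrt(2))
```

Larger deviations are read as greater pull toward the unchosen option, which is the kind of time-resolved signal the trajectory analyses in this study exploit.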
... People may expect others to behave in ways that are moral and reasonable, and so these default possibilities may be readily available (e.g., Cushman, 2020; Phillips et al., 2015, 2019; Phillips & Cushman, 2017). The finding is consistent with the idea that moral cognition relies on the same sorts of cognitive processes that underpin reasoning about non-moral matters (Bucciarelli et al., 2008; Cushman & Young, 2011; Knobe, 2018; Rai & Holyoak, 2010; Uttich & Lombrozo, 2010; see also Haidt, 2012; Young & Saxe, 2011). The potential sorts of cognitive processes that are implicated by these discoveries are sketched in Table 2. ...
Article
How do people come to consider a morally unacceptable action, such as “a passenger in an airplane does not want to sit next to a Muslim passenger and so he tells the stewardess the passenger must be moved to another seat”, to be less unacceptable? We propose they tend to imagine counterfactual alternatives about how things could have been different that transform the unacceptable action to be less unacceptable. Five experiments identify the cognitive processes underlying this imaginative moral shift: an action is judged less unacceptable when people imagine circumstances in which it would have been moral. The effect occurs for immediate counterfactuals and reflective ones, but is greater when participants create an immediate counterfactual first, and diminished when they create a reflective one first. The effect also occurs for unreasonable actions. We discuss the implications for alternative theories of the mental representations and cognitive processes underlying moral judgments.
... However, they differ in what process they consider primary. Moral judgement should be based on conscious reasoning (Cushman, Young, & Hauser, 2006; Johnston, 2011; Cushman & Young, 2011). In Figure 1, for example, it is easy for a teacher to lose control of his students. ...
Article
This study aims to analyze the issue of morality in a teaching and learning setting. After discussing and answering the question "Is it ever the case that teachers hold students morally blameworthy or praiseworthy for factors that are known to be beyond their control?", the study concludes that teachers hold students to be morally blameworthy or praiseworthy for factors that are beyond their control, because they do not fully comprehend the students' lack of control over their situation. The study also found that most teachers do not have clear cross-cultural knowledge of minority students' backgrounds, causing a moral judgement dilemma regarding students' behaviours and actions. A critical look at other variables that may affect students' learning is recommended by this study.
Keywords: minority students, blameworthy, praiseworthy, knowledge, moral judgement
... "Goal attainment" occurs when the "superordinate goal" is achieved and produces the "expected outcome." Lastly, a "side effect" is then an outcome generated in this process that does not serve to achieve the superordinate goal (Cushman and Young, 2011; Cushman, 2013, pp. 275, 281-3; Vila-Henninger, 2020b, 2021b). ...
Article
Synthesizing theory and findings from across neoclassical economics, behavioral economics, economic sociology, culture and cognition, and neuroscience, I create a dual‐process model of economic behavior that explains both self‐interested and moral decision‐making. The core of this model is from neuroscience (Crockett 2013; Cushman 2013; Greene 2017; Lockwood et al. 2020). In my model, Type I and Type II processes exist relative to socially learned outcomes and causal models for achieving these outcomes. For Type I cognition, nondeclarative strategies for how to achieve an outcome are learned through life experience and automatically generate “gut feelings” about how to accomplish a given outcome. Type II cognition is deliberation based on declarative memory of the process through which one must act to obtain an outcome. Type II cognition also deliberates between Type I and Type II inputs for how to reach an outcome. This model allows for goals, and strategies to achieve goals, to be learned socially—therefore allowing for actors to learn moral and/or self‐interested economic goals and strategies through social interaction. This model can be used across economic sociology, cultural sociology, and the sociology of morality because it advances current models of self‐interested decision‐making and moral judgment. Furthermore, this article is a bridge between behavioral economics and sociology, as well as between cultural sociology and economic sociology that can help foster collaboration by providing a common model of behavior.
... Moreover, there is ample research on belief updating, both moral and non-moral (Holyoak and Powell, 2016; Horne et al., 2013). There may or may not be differences between moral and non-moral cognition (Cushman and Young, 2011; Hauser and Young, 2008), yet we need to bracket this debate here. Relevant to our current study, a key role in belief updating has emerged for social influences. ...
Article
The so-called "conciliatory" norm in epistemology and meta-ethics requires that an agent, upon encountering peer disagreement with her judgment, lower her confidence about that judgment. But whether agents actually abide by this norm is unclear. Although confidence is extensively researched in the empirical sciences, possible effects of disagreement on confidence have been understudied. Here, we target this lacuna, reporting a study that measured confidence about moral beliefs before and after exposure to moral discourse about a controversial issue. Our findings indicate that participants do not abide by the conciliatory norm. Neither do they conform to a rival "steadfast" norm that demands their confidence remain the same. Instead, moral discourse seems to boost confidence. Interestingly, we also find a confidence boost for factual beliefs, and a correlation between the extremity of moral views and confidence. One possible explanation of our findings is that when engaging in moral discourse participants become more extreme in their opinions, which leads them to become more confident about them, or vice versa: they become more confident and in turn more extreme. Although our work provides initial evidence for the former mechanism, further research is needed for a better understanding of confidence and moral discourse.
... Haidt (2001) argues that sometimes an individual's moral reasoning is not necessarily reasonable, and feelings serve as the primary motivator for discrimination. However, Cushman and Young (2011) have noted that there are 'rational cognitive processes in play when people discriminate against other groups'. This is despite there being a natural and socially learned intuition of what is right or wrong in moral situations. ...
Thesis
Public attitudes have an impact on social and personal experience, often affecting the way in which individuals act or behave towards other people in particular situations. This study therefore examines, cross-culturally, how people’s judgements of people with disabilities contribute to the stigmatisation of people with disabilities in the UK and Chilean workplace. Using a two-phase sequential mixed-methods design, a first, qualitative phase constructed a model of disabling attitudes, and a second, quantitative phase tested the resulting FIC model. Results confirm that people’s attitudes arise from a circle of permanent stigmatisation, which is activated by factors (activators) categorised in three dimensions according to the ABC model of attitude structure: Functional prejudice, Institutional discrimination and Cultural stereotypes. In addition, the activators included in the FIC model confirm that UK and Chilean people hold strong prejudices about the functionality of people with disabilities; that is, people in both countries draw a strong connection between having an impairment and the ability to be productive at work. Likewise, in both countries, cultural stereotypes have shaped people’s attitudes through negative and positive emotions that do not necessarily point in the expected direction if the aim is to include people with impairments at work. Functional prejudices and cultural stereotypes create a type of discrimination that most people believe to be positive. However, positive discrimination, rather than increasing the inclusion of people with disabilities at work, creates a permanent circle of stigmatisation in which people with disabilities have a great deal to say about changing people’s attitudes.
... We further believe that the specific aspects of causation that the CSM postulates may be useful for illuminating what factors influence people's causal judgments beyond the physical domain. For example, people who emphasize how-causation might differentiate more strongly between acts of omission versus commission (Gerstenberg & Stephan, 2020; Livengood & Machery, 2007; McGrath, 2005; Spranca, Minsk, & Baron, 1991; Stephan, Willemsen, & Gerstenberg, 2017), or pay particular attention to the role of force in harmful events when judging the morality of actions (Cushman & Young, 2011; Greene et al., 2009; Henne, Niemi, Pinillos, De Brigard, & Knobe, 2019; Iliev, Sachdeva, & Medin, 2012; Mikhail, 2007). The aspect of robust-causation is related to the role that intentions play in how actions are evaluated (Heider, 1958; Lombrozo, 2010; Woodward, 2006). ...
Article
How do people make causal judgments about physical events? We introduce the counterfactual simulation model (CSM) which predicts causal judgments in physical settings by comparing what actually happened with what would have happened in relevant counterfactual situations. The CSM postulates different aspects of causation that capture the extent to which a cause made a difference to whether and how the outcome occurred, and whether the cause was sufficient and robust. We test the CSM in several experiments in which participants make causal judgments about dynamic collision events. A preliminary study establishes a very close quantitative mapping between causal and counterfactual judgments. Experiment 1 demonstrates that counterfactuals are necessary for explaining causal judgments. Participants' judgments differed dramatically between pairs of situations in which what actually happened was identical, but where what would have happened differed. Experiment 2 features multiple candidate causes and shows that participants' judgments are sensitive to different aspects of causation. The CSM provides a better fit to participants' judgments than a heuristic model which uses features based on what actually happened. We discuss how the CSM can be used to model the semantics of different causal verbs, how it captures related concepts such as physical support, and how its predictions extend beyond the physical domain.
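The comparison at the heart of the CSM (what actually happened versus what would have happened without the candidate cause) can be illustrated with a toy simulation. This is a minimal sketch, not the CSM itself: the two-ball world, the probabilities, the noise model, and all function names are illustrative assumptions.

```python
import random

def simulate(cause_present: bool, noise: float, rng: random.Random) -> bool:
    """Toy physical world: the target ball reaches the goal mostly when the
    candidate cause ball deflects it. 'noise' perturbs counterfactual rollouts.
    The 0.9/0.1 base rates are hypothetical, not fitted to any experiment."""
    base = 0.9 if cause_present else 0.1
    return rng.random() < base + rng.uniform(-noise, noise)

def counterfactual_cause_judgment(n_samples: int = 1000, noise: float = 0.05,
                                  seed: int = 0) -> float:
    """CSM-style 'whether' judgment sketch: estimate the probability that the
    outcome occurred with the cause but would not have occurred without it,
    by comparing paired noisy simulations."""
    rng = random.Random(seed)
    flips = 0
    for _ in range(n_samples):
        actual = simulate(True, noise, rng)           # world as it happened
        counterfactual = simulate(False, noise, rng)  # cause removed
        flips += (actual and not counterfactual)
    return flips / n_samples
```

Under these toy base rates the score comes out high, mirroring the CSM's prediction that a cause which clearly made a difference to whether the outcome occurred attracts a strong causal judgment.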
... Some cognitive science research suggests that people may use general moral principles to make and change their judgments and decisions with some reliability (Cushman et al., 2006; Horne et al., 2015). For example, people commonly appeal to the action principle (i.e., that it is worse to cause harm by direct action than to cause equivalent harms by omission) to justify their moral judgments across diverse scenarios (Cushman & Young, 2011; Cushman et al., 2006). Furthermore, whether people agree with a utilitarian moral principle reliably predicts the judgments they make in moral dilemmas (Lombrozo, 2009), and reminding people of an endorsed utilitarian principle induces judgment change in moral dilemmas in a way that accords with the principle (Horne et al., 2015). ...
Article
Normative ethical theories and religious traditions offer general moral principles for people to follow. These moral principles are typically meant to be fixed and rigid, offering reliable guides for moral judgment and decision-making. In two preregistered studies, we found consistent evidence that agreement with general moral principles shifted depending upon events recently accessed in memory. After recalling their own personal violations of moral principles, participants agreed less strongly with those very principles, relative to participants who recalled events in which other people violated the principles. This shift in agreement was explained, in part, by people’s willingness to excuse their own moral transgressions, but not the transgressions of others. These results have important implications for understanding the roles of memory and personal identity in moral judgment. People’s commitment to moral principles may be maintained when they recall others’ past violations, but their commitment may wane when they recall their own violations.
... This scenario, due to moral philosopher Judith Thomson (1976), can be referred to as the "Footbridge" problem, in order to distinguish it from the classical "Bystander" problem. One may point to various differences between the two scenarios, since it is known that moral judgment and reasoning is sensitive to the difference between "killing" and "letting die" (Cushman 2011), as well as to the difference between causing harm "directly" or "indirectly" (Royzman & Baron 2002). According to Greene (2013), the Bystander problem is an "impersonal" dilemma, which only requires the participant to perform a "cold" action like operating the lever; in such a situation, moral reasoning is appropriately performed at the level of system 2 and the rational, consequentialist choice follows. In contrast, the Footbridge problem represents a "personal" or "hot" problem, requiring the violent action of physically pushing the fellow man. ...
Preprint
The global emergency related to the spread of COVID-19 raises critical challenges for decision makers, individuals, and entire communities, on many different levels. Among the urgent questions facing politicians, scientists, physicians and other professionals, some concern the ethical and cognitive aspects of the relevant choices and decisions. Philosophers and cognitive scientists have long analyzed and discussed such issues. As an example, the debate on moral decision making in imaginary scenarios, like the famous "Trolley problem", becomes dramatically concrete in the current crisis. Focusing on the Italian case, we discuss the clinical ethical guidelines proposed by the Italian Society of Anesthesiology, Analgesia, Resuscitation and Intensive Care (SIAARTI), highlighting some crucial ethical and cognitive issues surrounding emergency decision making in the current situation.
... Reasoning, whether about morality, conventions, or matters of fact, is based on mental models of the world rather than logic (e.g., Johnson-Laird, 1983). Degrees of belief in assertions are subjective probabilities (de Finetti, 1937/1964; Ramsey, 1926/1990), which are inferred from models of pertinent evidence, including both those representing facts and those representing deontic principles (Khemlani, Lotstein, & Johnson-Laird, 2012, 2015; see also Cushman & Young, 2011). The theory implies the same broad basis for judgments of propositions about morals and about social conventions. ...
Article
Deontic assertions concern what people should and shouldn't do. One sort concern moral principles, such as: People should care for the environment; and another sort concern social conventions, such as: People should knock before entering an office. The present research examined such deontic assertions and their corresponding factual assertions, such as: People care for the environment and People knock before entering an office. Experiment 1 showed a correlation between emotions and beliefs for both sorts of deontic assertion, but not for their factual counterparts in which the word "should" had been deleted (as in the preceding examples). Experiment 2 showed that changing the pleasantness of participants' emotions about social conventions changed their strength of belief in them. Experiment 3 showed conversely that changing the participants' strength of belief in social conventions changed the pleasantness of their emotions about them. These results corroborate the mental model theory of deontic assertions, which postulates that emotions and beliefs about deontics depend on parallel systems that interact with one another.
... This outcome is defined by the "superordinate goal," which is the "end" that the actor is attempting to achieve. A "subordinate goal" is a more immediate end generated so that one may execute an immediate means that ultimately serves to accomplish the superordinate goal (for a review of the psychological literature on this process see Cushman & Young, 2011). "Goal attainment" is the achievement of the "superordinate goal" in order to produce the "expected outcome." ...
Article
How do everyday people (actors who do not occupy positions of political authority) legitimate political systems? Responding to this question, I use work from sociology, political science, and cognitive science to build a theory of "Popular Political Legitimation" (PPL), defined as everyday people's legitimation of a political system. To answer how PPL happens, we must answer two sub-questions that address legitimacy as a normative phenomenon: 1) What are the processes of socialization through which individuals learn the norms, widely held beliefs, and values that legitimate a political system? 2) How do individuals subsequently use these norms, widely held beliefs, and/or values in their own legitimations of a political system? Thus, we see that a model of socialization is central to understanding how PPL happens. I proceed in four steps. First, I review the literature on political legitimation. Next, I review the literature on political socialization. Third, to address gaps in the two aforementioned literatures concerning a model of socialization that explains legitimation, I turn to neuroscience (for reviews see Greene, 2017; Cushman, 2020) and psychology to review models of socialization and rationalization. Finally, I synthesize these literatures to develop a theory of political socialization and how it generates PPL. J Theory Soc Behav. 2020;1-26. wileyonlinelibrary.com/journal/jtsb
... In many studies, lay people clearly rely on social cognitive inferences of intentionality when judging everyday moral actions (Lagnado & Channon, 2008) and when mastering fine distinctions between willingly, knowingly, intentionally, and purposefully violating a norm (Guglielmo & Malle, 2010)-distinctions that also inform legal classifications of negligence and recklessness. Likewise, lay people judge goal-directed harm as less permissible and more often as wrong than they judge harm as a side effect (Cushman & Young, 2011). Thus moral and legal distinctions overlap with (and perhaps derive from) more general purpose social cognitive judgments. ...
Chapter
Our theoretical framework tries to elucidate the processes of moral cognition by showing their connections to both social cognition and social regulation. We argue that a hierarchy of social cognitive tools ground moral cognition and that social and moral cognition together guide the social regulation of behavior. The practice of social-moral regulation, in turn, puts pressure on community members to engage in reasonably fair and evidence-based moral criticism. With the help of these cognitive adaptations and social practices, people are able to navigate the complex terrain of morality.
... Along the same lines, control, understood as the capacity to choose a course of action, turns out to be a relevant factor in the assignment of blame and punishment, since a person who acts under some form of coercion is judged less culpable and deserving of a lesser punishment (Martin & Cushman, 2016). Other factors, such as knowing that a negative outcome may occur and doing nothing to prevent it, lead to harsher evaluations of causality and responsibility (Gilbert, Tenney, Holland, & Spellman, 2015; Cushman & Young, 2011). ...
Article
Full-text available
The study of moral cognition is marked by two traditions: one centered on the study of how information regarding causality and intentionality is processed, and the other, derived from socio-cognitive positions, privileging moral agency and behavior regulation beyond processing. Thus, there seems to be a gap between the study of reasoning and the study of conduct, when speaking of morals. The article proposes an interaction between a path model of blame, centered on the processing of information, and moral disengagement (MD) as a set of justifications of immoral conduct. While the integration is not complete, it does contribute to a view of moral cognition focused on social regulation and the interactions between the judgments and responses evident in social interactions. Additionally, as a product of that interaction, the study provides a methodological proposal to inquire into the origin of MD in development.
... When people make moral decisions, they often consider how others would judge them for behaving selfishly 177,178 . Harmful actions are judged more harshly than harmful inactions 179,180 , and causing harm by deviating from the status quo is blamed more than harming by default 181,182 . Therefore, reframing decisions to carry on with 'business as usual' during a pandemic as active decisions, rather than passive or default decisions, may make such behaviours less acceptable. ...
Article
Full-text available
The COVID-19 pandemic represents a massive global health crisis. Because the crisis requires large-scale behaviour change and places significant psychological burdens on individuals, insights from the social and behavioural sciences can be used to help align human behaviour with the recommendations of epidemiologists and public health experts. Here we discuss evidence from a selection of research topics relevant to pandemics, including work on navigating threats, social and cultural influences on behaviour, science communication, moral decision-making, leadership, and stress and coping. In each section, we note the nature and quality of prior research, including uncertainty and unsettled issues. We identify several insights for effective response to the COVID-19 pandemic and highlight important gaps researchers should move quickly to fill in the coming weeks and months.
... Along the same lines, control, understood as the capacity to choose a course of action, turns out to be a relevant factor in the assignment of blame and punishment, since a person who acts under some form of coercion is judged less culpable and deserving of a lesser punishment (Martin & Cushman, 2016). Other factors, such as knowing that a negative outcome may occur and doing nothing to prevent it, lead to harsher evaluations of causality and responsibility (Gilbert, Tenney, Holland, & Spellman, 2015; Cushman & Young, 2011). ...
Article
Full-text available
The study of moral cognition appears to be marked by two traditions: one centered on the study of how information concerning causality and intentionality is processed, and another which, originating in socio-cognitive positions, privileges moral agency and behavioral regulation beyond processing. Thus, there seems to be a gap between the study of reasoning and the study of conduct when speaking of morality. The present proposal shows an interaction between a path model of blame, centered on information processing, and Moral Disengagement (MD) as a set of mechanisms that arise to justify immoral conduct. Although the integration is not complete, it contributes to a view of moral cognition centered on social regulation and on the interactions between judgments and responses evident in social interactions. Additionally, as a product of that interaction, a methodological proposal is presented to inquire into the origin of MD in development.
... When people make moral decisions, they often consider how others would judge them for behaving selfishly 177,178 . Harmful actions are judged more harshly than harmful inactions 179,180 , and causing harm by deviating from the status quo is blamed more than harming by default 181,182 . Therefore, reframing decisions to carry on with 'business as usual' during a pandemic as active decisions, rather than passive or default decisions, may make such behaviours less acceptable. ...
Preprint
Full-text available
The COVID-19 pandemic represents a massive, global health crisis. Because the crisis requires large-scale behavior change and poses significant psychological burdens on individuals, insights from the social and behavioural sciences are critical for optimizing pandemic response. Here we review relevant research from a diversity of research areas relevant to different dimensions of pandemic response. We review foundational work on navigating threats, social and cultural factors, science communication, moral decision-making, leadership, and stress and coping that is relevant to pandemics. In each section, we outline implications for solving public health issues related to COVID-19. This interdisciplinary review points to several ways in which research can be immediately applied to optimize response to this pandemic, but also points to several important gaps that researchers should move quickly to fill in the coming weeks and months.
... More recently, the Trolley Problem has been used extensively in moral psychology and neuroscience to explore not how humans ought to make ethical decisions, but how they actually do so. This literature has delivered deep insights into moral cognition, as well as into the contextual factors that influence moral judgment [8,9,15]. There is one person standing on a side track that doesn't rejoin the main track. ...
Article
A platform for creating a crowdsourced picture of human opinions on how machines should handle moral dilemmas.
... cultural contexts (6). The larger and the more diverse the set of countries in which the findings hold, though, the greater the appeal of theories based on basic cognitive processes or universal moral grammars (7)(8)(9). Conversely, when researchers attempt to develop theory on the basis of findings that differ in a handful of countries, they can find it challenging to pinpoint the exact cultural features that may explain these differences, since any two countries can differ on many cultural traits. The larger and more diverse the set of countries, the easier the task becomes. ...
Article
Full-text available
When do people find it acceptable to sacrifice one life to save many? Cross-cultural studies suggested a complex pattern of universals and variations in the way people approach this question, but data were often based on small samples from a small number of countries outside of the Western world. Here we analyze responses to three sacrificial dilemmas by 70,000 participants in 10 languages and 42 countries. In every country, the three dilemmas displayed the same qualitative ordering of sacrifice acceptability, suggesting that this ordering is best explained by basic cognitive processes, rather than cultural norms. The quantitative acceptability of each sacrifice, though, showed substantial country-level variations. We show that low relational mobility (where people are more cautious about not alienating their current social partners) is strongly associated with the rejection of sacrifices for the greater good (especially for Eastern countries), which may be explained by the signaling value of this rejection. We make our dataset fully available as a public resource for researchers studying universals and variations in human morality: all the data and code used in this article can be downloaded at https://bit.ly/2Y7Brr9.
... Scenarios are used in the field of cognitive science as a means to study moral judgement in randomized controlled experiments [65], [66]. We created a scenario that describes a military operation in which a convoy is delivering supplies in a conflict area (see Appendix B). ...
Article
Engineers, during the design phase of an infrastructure project, commonly take into account various considerations about the occupants and users, including (but not limited to): the number of occupants, the predicted demographic (and associated mobility), and any possible future changes. The purpose of this is to ensure a minimum quality of service, making the project easy for occupants to navigate, use, and exit. These considerations generally lead to rule-of-thumb calculations for design purposes [1], ensuring that the project is deemed fit for purpose. However, these calculations are by necessity high-level and fail to take account of specific human behaviors, or they may miss some of the variance in these behaviors.
Article
The processing of moral decision-making is influenced by both cognitive and emotional systems, making it worth exploring exactly how each plays a role in the process of individual moral decision-making. In this study, 160 participants with either high or low empathy traits (80 each, as determined by the Interpersonal Response Index Inventory) completed a moral decision-making task regarding whether to help others (stereotyped as high warmth-high competence, high warmth-low competence, low warmth-high competence, low warmth-low competence) at the expense of themselves. The intent was to explore the influence of stereotypes and empathy traits on moral decision-making. The results showed that: (1) participants were more willing to help individuals with high warmth than those with high competence, showing a clear “primacy of warmth effect”; (2) this effect was weakened in participants with high empathy traits in comparison to those with low empathy traits, as their willingness to help individuals with low warmth was significantly higher than that of participants with low empathy traits. The results suggest that stereotypes about warmth and competence moderate altruistic tendencies in moral decision-making and that this moderation is more pronounced in individuals with low empathy traits than in those with high empathy traits.
Article
Three experiments tested the hypothesis that power elicits moral judgments in line with active goals, and moral flexibility across different contexts. Power and goals emanating from the mission associated with power were experimentally manipulated: person‐centered mission, which benefits from outcome‐focus, or regulation‐centered mission, which benefits from rule‐based focus. Power consistently elicited rule‐based (deontological) moral reasoning under regulation‐centered goals. However, power triggered outcome‐based (utilitarian) moral reasoning under person‐centered goals. Power enhanced goal serving morality due to greater goal commitment, with focal goal commitment mediating the interactive effects of power and focal goal on moral judgments. These findings show that the links between power and morality are context sensitive, flexible, and mediated by a greater commitment to active goals.
Article
The empirical study of virtue is plagued by imprecise definitions and assessment. Here we propose a three-stage, data-driven (‘bottom-up’) method to differentiate lay perceptions of virtues. Employing two virtues – generosity (as cooperation) and fairness (as impartiality) – as a case study, we present findings utilizing data from three studies (total N = 2,667). First, natural language processing of free-response data indicated that participants used different ‘topics’ (i.e. clusters of words) to describe behaviours representing generosity (topics: ‘charity’ and ‘kindness’) and fairness (‘equality’). Second, participants in a survey experiment rated behaviours expressing generosity and fairness differently across 6 out of 9 underlying features measured. Third, participants perceive that actors in vignette-based experiments engaging in behaviours expressing generosity versus fairness were motivated differently on 5 out of 6 motivations measured. Our findings support the distinction of the virtues of generosity (as cooperation) and fairness (as impartiality) and indicate the utility of our bottom-up method for assessing and distinguishing virtues.
Article
Interdisciplinary research has proposed a multifaceted view of human cognition and morality, establishing that inputs from multiple cognitive and affective processes guide moral decisions. However, extant work on moral cognition has largely overlooked the contributions of episodic representation. The ability to remember or imagine a specific moment in time plays a broadly influential role in cognition and behavior. Yet, existing research has only begun exploring the influence of episodic representation on moral cognition. Here, we evaluate the theoretical connections between episodic representation and moral cognition, review emerging empirical work revealing how episodic representation affects moral decision-making, and conclude by highlighting gaps in the literature and open questions. We argue that a comprehensive model of moral cognition will require including the episodic memory system, further delineating its direct influence on moral thought, and better understanding its interactions with other mental processes to fundamentally shape our sense of right and wrong.
Preprint
The purpose of this article is to investigate the role of different thinking preferences when employees make judgments about the credibility and accuracy of information privacy issues at work. We investigate the relationship between people's thinking processes and their perceived credibility of information with respect to the collection and processing of personal data, in the context of workplaces in Slovakia. We test the most well-known concepts from intuition research and practice simultaneously and contribute to the applied literature on domain-specific preferences for intuition and deliberation in decision-making. The findings of this study can help managers and data controllers in small- and medium-sized enterprises (SMEs) reflect on the way in which people employ different thinking processes for decision-making about data policy in their organizations.
Article
Reasoning about underlying causal relations drives responsibility judgments: agents are held responsible for the outcomes they cause through their behaviors. Two main causal reasoning approaches exist: dependence theories emphasize statistical relations between causes and effects, while transference theories emphasize mechanical transmission of energy. Recently, pluralistic or hybrid models, combining both approaches, have emerged as promising psychological frameworks. In this paper, we focus on causal reasoning as involved in third-party judgments of responsibility and on related judgments of intention and control. In particular, we used a novel visual paradigm to investigate the combined effects of two well-known causal manipulations, namely omission and preemption, on these evaluations. Our findings support the view that people apply a pluralistic causal reasoning when evaluating individual responsibility for negative outcomes. In particular, we observed diminished responsibility when dependence, transference, or both fail, compared to when these mechanisms are upheld. Responsibility judgment involves a cognitive hybrid of multiple aspects of causal reasoning. However, important differences exist at the interindividual level, with most people weighting transference more than dependence.
Book
How does culture affect action? This question has long been framed in terms of a means vs ends debate—in other words, do cultural ends or cultural means play a primary causal role in human behavior? However, the role of socialization has been largely overlooked in this debate. In this book, Vila-Henninger develops a model of how culture affects action called “The Sociological Dual-Process Model of Outcomes” that incorporates socialization. This book contributes to the debate by first providing a critical overview of the literature that explains the limitations of the sociological dual-process model and subsequent scholarship—and especially work in sociology on “schemas”. It then develops a sociological dual-process model of moral judgment that formally explains Type I processes, Type II processes, and the interaction between Type I and Type II processes. The book also expands sociological dual-process models to include a temporal dimension—the "Sociological Dual-Process Model of Outcomes". Finally, the book integrates a theory of socialization into the sociological dual-process model and creates empirical indicators that confirm Vila-Henninger’s theorization and contribute to the literature on measures of dual-process models. Luis Antonio Vila-Henninger is Postdoctoral Fellow for the REACTOR grant in the Political Science department at Aarhus University, Denmark, and Scientific Collaborator with UCLouvain, Belgium. Luis’s research areas include the sociology of culture, sociological theory, economic sociology, political sociology, and qualitative methods. Luis’s work has appeared in The British Journal of Sociology, The Journal for the Theory of Social Behaviour, Sociology Compass, Sociological Inquiry, Sociological Perspectives, The Sociological Quarterly, and The Bulletin of Sociological Methodology. 
His first book, Social Justification and Political Legitimacy: How Voters Rationalize Direct Democratic Economic Policy in America, was published by Palgrave Macmillan in 2020.
Chapter
I argue that Strong Practice Theory and the subsequent literature have two key limitations: Dichotomy and Data. First, beyond an analytic dualism (Vaisey and Frye 2019), this literature establishes a dichotomy between Type I and Type II processes in which the two are essentially mutually exclusive. I therefore use relevant findings from cognitive science to evaluate the foundations of the sociological dual-process model: Bourdieu’s concept of the “habitus” and Vaisey’s (Vaisey 2009; Vaisey and Lizardo 2010) sociological “dual-process model” of moral judgment. I review this literature and then discuss subsequent work that acknowledges that there is a possibility for interaction between the Type I and Type II processes but does not offer a model of how this interaction takes place (Lizardo 2017). Second, I review the data that this literature uses to measure Type I processes. As work has highlighted (Vila-Henninger 2015; Miles et al. 2019; Miles 2019), the literature has used data for Type I processes that do not solely measure these processes and therefore has issues with construct validity. I then expand my review of this problem and focus on the issues with construct validity for subsequent work in this literature that analyzes “schemas”—which are widely accepted as Type I processes.
Chapter
This chapter presents the theoretical core of this book, the Sociological Dual-Process Model of Outcomes (DPMO), which is based on the work of Joshua Greene and colleagues in neuroscience (e.g. Greene 2017). This model applies to moral judgment and its role in human behavior and contextualizes moral judgment and behavior relative to their consequences (or “outcomes”). In this dual-process model of moral judgment, the Type I automatic process is called Model-Free Reinforcement Learning (MFRL) and the Type II deliberative process is called Model-Based Learning (MBL). Furthermore, Type II cognition also deliberates between, and subsequently integrates, competing MBL and MFRL inputs. MFRL (Type I) internalizes life experience with positive and negative associations relative to an outcome and stores these associations in nondeclarative memory, which then automatically generates “gut feelings” about a particular behavior in the context of achieving a corresponding outcome. Conversely, MBL (Type II) consists of reasoning and planning. MBL is based on expectations and beliefs about how one achieves a given outcome. MBL then “builds” a causal map stored in declarative memory of how the actor can achieve an outcome and reasons using this causal map to achieve that outcome. Finally, in the Sociological DPMO, Type II cognition has a dual function. Not only is it active in MBL, but it also deliberates between, and subsequently integrates, competing inputs from MBL and MFRL in order to decide how to achieve an outcome in difficult moral situations. I also argue that we must understand moral judgment within a broader temporal context of a five-part process: socialization, stimulus/context, response, outcome, and justification. The Sociological DPMO then contextualizes moral judgment and its consequences (“outcomes”) in this five-part process. Moral judgment is thus an element of “Response.”
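The contrast between a cached "gut feeling" system and a causal-map planner can be illustrated with a toy reinforcement-learning sketch. This is not the book's model: the actions, outcome utilities, transition probabilities, and the simple weighted integration rule are all illustrative assumptions.

```python
import random

random.seed(0)

# Illustrative toy environment (names and values are assumptions):
# each action probabilistically produces an outcome with a known utility.
UTILITY = {"good_outcome": 1.0, "bad_outcome": -1.0}
CAUSAL_MAP = {  # the Type II "causal map" held in declarative memory
    "act_a": {"good_outcome": 0.8, "bad_outcome": 0.2},
    "act_b": {"good_outcome": 0.3, "bad_outcome": 0.7},
}

def mfrl_update(q, action, reward, alpha=0.1):
    """Type I (model-free): cache a running value ('gut feeling') per action."""
    q[action] += alpha * (reward - q[action])

def mbl_value(action):
    """Type II (model-based): expected utility computed over the causal map."""
    return sum(p * UTILITY[o] for o, p in CAUSAL_MAP[action].items())

def integrate(q, action, w=0.5):
    """Type II integration: weigh competing MBL and MFRL inputs (assumed rule)."""
    return w * mbl_value(action) + (1 - w) * q[action]

# Train the model-free cache from simulated experience.
q = {a: 0.0 for a in CAUSAL_MAP}
for _ in range(500):
    for a in CAUSAL_MAP:
        outcome = random.choices(list(CAUSAL_MAP[a]),
                                 weights=CAUSAL_MAP[a].values())[0]
        mfrl_update(q, a, UTILITY[outcome])

# "act_a" has the higher expected utility under both systems here.
best = max(CAUSAL_MAP, key=lambda a: integrate(q, a))
print(best)
```

The point of the sketch is structural: the model-free values are learned slowly from raw experience, while the model-based values are computed on demand from beliefs, and a second Type II step arbitrates between the two.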
Chapter
In this analysis, I build on Chapters 1, 2, 3, 4, and 5 by testing my Sociological Dual-Process Model of Outcomes empirically. To do so, I perform a case study of the link between youth moral and religious socialization, moral judgment, and incarceration using the National Study of Youth and Religion’s (NSYR’s) longitudinal data (Waves 1 and 4). In particular, I build on Vaisey’s (2009) landmark empirical investigation of the association between teenagers’ moral schemas (NSYR Wave 1) and their deviant behavior three years later (NSYR Wave 2). My analysis develops empirical indicators of all three psychological processes in my Sociological Dual-Process Model of Outcomes (Type II Model-Based Learning, Type I Model-Free Reinforcement Learning, and Type II Integrative Processes/Deliberation Between Competing MBL and MFRL Inputs) in the same NSYR Wave 1 data used by Vaisey (2009) and the subsequent sociological dual-process model literature that uses forced-choice self-report survey data (Vaisey and Lizardo 2010; Hoffmann 2014). I then report associations I found between these measures in Wave 1 and a key potential outcome of deviant behavior for the same respondents ten years later (NSYR Wave 4): incarceration. An important contribution here is my development of operationalizations of Model-Based Learning (Type II), Model-Free Reinforcement Learning (Type I), and Integrative Processes/Deliberation Between Competing MBL and MFRL Inputs (Type II) using particular types of forced-choice self-report survey data that are supported by findings and practices from cognitive science. Crucially, in this analysis of Type I Model-Free Reinforcement Learning, I mobilize literature from the psychology of habit to develop a measure of procedural memory in forced-choice self-report survey data using respondents' reports of behavior frequency.
Chapter
Full-text available
A growing literature in sociology on cultural schemas claims that schemas are nondeclarative (Type I) but does not use methods that are designed to measure nondeclarative memory. In this chapter, I perform an experiment using Kenneth Forster’s Rapid Serial Visual Presentation (RSVP) to examine the effects of nondeclarative (subconscious) moral primes on participants’ moral decision-making reaction times and choices in order to detect the presence of nondeclarative moral schemas. This research contributes in three primary ways: (1) by testing dual-process model causal claims about the effects of Type I subconscious moral schemas on moral decision-making, (2) by providing evidence that Type II deliberation affects moral decision-making, and (3) by utilizing methods designed to detect the presence of Type I subconscious schemas. Contrary to the claims of dual-process models of moral judgment and subsequent literature, I did not find evidence of the existence of nondeclarative moral schemas in respondents’ moral judgments. Conversely, I found evidence that Type II deliberation affects moral decision-making.
Article
Full-text available
When is it permissible to carry out an action that saves lives but leads to the loss of others? While a minority of people may deny the permissibility of such actions categorically, most will probably say that the answer depends, among other factors, on the number of lives saved versus lives lost. Theories of moral reasoning acknowledge the importance of outcome trade-offs for moral judgments, but remain silent on the precise functional form of the psychological mechanism that determines their moral permissibility. An exception is Cohen and Ahn's (2016) subjective-utilitarian theory of moral judgment, but their model is currently limited to decisions in two-option life-and-death dilemmas. Our goal is to study other types of moral judgments in a larger set of cases. We propose a computational model based on sampling and integrating subjective utilities. Our model captures moral permissibility judgments about actions with multiple effects across a range of scenarios involving humans, animals, and plants, and is able to account for some response patterns that might otherwise be associated with deontological ethics. While our model can be embedded in a number of competing contemporary theories of moral reasoning, we argue that it would most fruitfully be combined with a causal model theory.
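The core idea of a sampling-and-integration account can be sketched in a few lines. This is a schematic illustration only, not the authors' actual model: the Gaussian noise, the utility values assigned to each effect, and the proportion-positive decision rule are all assumptions.

```python
import random

random.seed(1)

def permissibility(effects, n_samples=1000, noise=0.5):
    """Judge an action by repeatedly sampling noisy subjective utilities for
    each of its effects, integrating (summing) them, and returning the
    proportion of samples in which the total is positive -- a graded
    permissibility judgment. `effects` maps each affected party to a mean
    subjective utility; all names and numbers are illustrative assumptions."""
    positive = 0
    for _ in range(n_samples):
        total = sum(random.gauss(mu, noise) for mu in effects.values())
        if total > 0:
            positive += 1
    return positive / n_samples

# Classic trade-off: saving five (+5) at the cost of one (-1) comes out as
# permissible on nearly every sample; the reverse trade-off does not.
save_five = {"five_saved": 5.0, "one_lost": -1.0}
save_one = {"one_saved": 1.0, "five_lost": -5.0}
print(permissibility(save_five), permissibility(save_one))
```

Because the judgment is a proportion over noisy samples rather than a hard threshold, the same mechanism naturally yields graded rather than all-or-nothing permissibility ratings.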
Article
When holding others morally responsible, we care about what they did, and what they thought. Traditionally, research in moral psychology has relied on vignette studies, in which a protagonist's actions and thoughts are explicitly communicated. While this research has revealed what variables are important for moral judgment, such as actions and intentions, it is limited in providing a more detailed understanding of exactly how these variables affect moral judgment. Using dynamic visual stimuli that allow for a more fine-grained experimental control, recent studies have proposed a direct mapping from visual features to moral judgments. We embrace the use of visual stimuli in moral psychology, but question the plausibility of a feature-based theory of moral judgment. We propose that the connection from visual features to moral judgments is mediated by an inference about what the observed action reveals about the agent's mental states, and what causal role the agent's action played in bringing about the outcome. We present a computational model that formalizes moral judgments of agents in visual scenes as computations over an intuitive theory of physics combined with an intuitive theory of mind. We test the model's quantitative predictions in three experiments across a wide variety of dynamic interactions.
Article
Full-text available
Omission bias is people's tendency to evaluate harm done through omission as less morally wrong and less blameworthy than commission when there is harm. However, findings are inconsistent. We conducted a pre-registered meta-analysis, with 21 samples (13 articles, 49 effects) on omission-commission asymmetries in judgments and decisions. We found an overall effect of g = 0.45 [0.14, 0.77], with stronger effects for morality and blame than for decisions. Publication bias tests produced mixed results with some indication of publication bias, though effects persisted even after most publication-bias adjustments. The small sample of studies included limited our ability to draw definite conclusions regarding moderators, with inconclusive findings when applying different models. After compensating for low power, we found indications of moderation by role responsibility, perspective (self-versus-other), outcome type, and study design. We hope this meta-analysis will inspire research on this phenomenon and applications to real life, especially given the raging pandemic. Materials, data, and code are available on https://osf.io/9fcqm/
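For readers unfamiliar with how an overall effect like the one above is pooled, here is a minimal inverse-variance pooling sketch. It uses the simpler fixed-effect formula with made-up study effects; the meta-analysis itself used a random-effects model with publication-bias adjustments that this does not reproduce.

```python
import math

def pooled_effect(effects, variances):
    """Fixed-effect inverse-variance pooling of standardized effect sizes
    (e.g. Hedges' g): each study is weighted by the inverse of its sampling
    variance, and a 95% interval is built from the pooled standard error."""
    weights = [1 / v for v in variances]
    g = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    se = math.sqrt(1 / sum(weights))
    return g, (g - 1.96 * se, g + 1.96 * se)

# Illustrative made-up study effects and variances, not the paper's data.
g, ci = pooled_effect([0.3, 0.5, 0.6], [0.02, 0.04, 0.03])
print(g, ci)
```

A random-effects model would add a between-study variance component to each weight, widening the interval when studies disagree more than sampling error alone predicts.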
Chapter
How might artificial systems become sensitive to ethically relevant considerations? This chapter argues that the question we should ask is not “How can we build ethics into robots?” but rather “How can we build robots with a capacity for ethical learning?” It is evident from the continuing disagreement over overarching ethical theories that we do not know a set of consensus ethical principles with sufficient definiteness to “program” ethics into a machine. Recent research in artificial intelligence has shown the power of general-purpose learning to acquire autonomous human-level competencies. Intriguingly, recent research in developmental psychology suggests that children may also acquire independent social and ethical competence via similar mechanisms. Instead of programing ethics into robots, this chapter proposes that artificial systems could gain ethical competence through general-purpose learning, comparable to infants learning human ethics in part by observing adult behavior.
Chapter
Full-text available
Article
Full-text available
Omission bias is the preference for harm caused through omissions over harm caused through commissions. In a pre-registered experiment (N = 313), we successfully replicated an experiment from Spranca, Minsk, and Baron (1991), considered a classic demonstration of the omission bias, examining generalizability to a between-subject design with extensions examining causality, intent, and regret. Participants in the harm through commission condition(s) rated harm as more immoral and attributed higher responsibility compared to participants in the harm through omission condition (d = 0.45 to 0.47 and d = 0.40 to 0.53). An omission-commission asymmetry was also found for perceptions of causality and intent, in that commissions were attributed stronger action-outcome links and higher intentionality (d = 0.21 to 0.58). The effect for regret was opposite from the classic findings on the action-effect, with higher regret for inaction over action (d = -0.26 to -0.19). Overall, higher perceived causality and intent were associated with higher attributed immorality and responsibility, and with lower perceived regret. All materials are available on: https://osf.io/9gsqe/
Article
People evaluate the moral character of others not only based on what they do, but also on what leads them to do it. Because an agent's state of mind is not directly observable, people typically engage in mindreading—attempts at inferring mental states—when forming moral evaluations. The present paper identifies a general target of such mental state inference, mental occurrents—a catchall term for the thoughts, beliefs, principles, feelings, concerns, and rules accessible in an agent's mind when confronting a morally relevant decision. Moral mental occurrents are those that can provide a moral justification for a particular course of action. Whereas previous mindreading research has examined how people reason back to make sense of an agent's behavior, we instead ask how inferred moral mental occurrents (MOs) constrain moral evaluations for an agent's subsequent actions. Our studies distinguish three accounts of how inferred MOs influence moral evaluations, show that people rely on inferred MOs spontaneously (instead of merely when experimental measures draw attention to them), and identify non-moral contextual cues (e.g., whether the situation demands a quick decision) that guide inferences about MOs. Implications for theory of mind, moral psychology, and social cognition are discussed.
Book
Full-text available
Responsibility is an important feature of human behavior. Is responsibility important in human life? What is social responsibility? Moral responsibility? Shared responsibility? Epidemiological responsibility? Judgment responsibility? How can leadership responsibility be defined? What are family obligations? What is the responsibility of the public sector? What is personal responsibility? What is medical professionalism and does it matter to the patients? Biblical verses dealing with responsibility were studied from a contemporary viewpoint.
Article
Full-text available
The present study expands the literature on the Foreign Language Effect by investigating differences in moral judgment for 280 English-Spanish late bilinguals when processing the button and bridge moral scenarios of the canonical trolley dilemma (Thomson, 1985) in an online questionnaire in either a native (NL), foreign (FL), or code-switched (CS) language environment. The study furthermore examines the effects of emotion on moral standards across these three language contexts, analysing self-reports of individuals' emotions following their moral decisions. Overall, moral judgments in the CS and NL conditions patterned similarly for both dilemmas, while, in line with previous studies, the FL condition elicited an increased percentage of utilitarian decisions in the high-conflict bridge scenario. Unique emotions did not vary significantly across language contexts in either scenario, and no reduction in emotion was seen in participants' FL. However, an interaction between language condition and emotion in the high-conflict dilemma suggests that the ratio and relative ranking of various emotions, and not just the degree of emotionality, may have an influence on moral evaluations. The present study elucidates the previously neglected variable of moral decision processing in the context of code-switching and discusses cognitive and emotional explanations for the Foreign Language Effect.
Article
Full-text available
In this article, we attempt to distinguish between the properties of moderator and mediator variables at a number of levels. First, we seek to make theorists and researchers aware of the importance of not using the terms moderator and mediator interchangeably by carefully elaborating, both conceptually and strategically, the many ways in which moderators and mediators differ. We then go beyond this largely pedagogical function and delineate the conceptual and strategic implications of making use of such distinctions with regard to a wide range of phenomena, including control and stress, attitudes, and personality traits. We also provide a specific compendium of analytic procedures appropriate for making the most effective use of the moderator and mediator distinction, both separately and in terms of a broader causal system that includes both moderators and mediators. (46 ref) (PsycINFO Database Record (c) 2012 APA, all rights reserved)
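The mediator logic described above (often summarized as the Baron and Kenny regression steps) can be illustrated on simulated data. The variable names, coefficients, and full-mediation setup below are assumptions for illustration only: X affects a mediator M, which in turn affects Y, with no direct X-to-Y path.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated data in which X affects Y entirely through the mediator M.
n = 2000
x = rng.normal(size=n)
m = 0.8 * x + rng.normal(scale=0.5, size=n)   # path a: X -> M
y = 0.7 * m + rng.normal(scale=0.5, size=n)   # path b: M -> Y (no direct path)

def ols(predictors, outcome):
    """Least-squares coefficients [intercept, slopes...] for the predictors."""
    X = np.column_stack([np.ones(len(outcome))] + list(predictors))
    return np.linalg.lstsq(X, outcome, rcond=None)[0]

total = ols([x], y)[1]        # step 1: total effect of X on Y
a = ols([x], m)[1]            # step 2: effect of X on M
step3 = ols([m, x], y)        # step 3: Y on M and X together
b, direct = step3[1], step3[2]

# In linear OLS the decomposition is exact: total = direct + a*b.
# Here the direct effect is near zero, indicating full mediation.
print(round(total, 2), round(a * b, 2), round(direct, 2))
```

A moderator, by contrast, would enter as an interaction term (X multiplied by the moderator) rather than as an intermediate outcome, which is the strategic distinction the article elaborates.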
Article
Full-text available
"Culpable causation" refers to the influence of the perceived blameworthiness of an action on judgments of its causal impact on a harmful outcome. Four studies were conducted to show that when multiple forces contribute to an unfortunate outcome, people select the most blameworthy act as the prepotent causal factor. In Study 1, an actor was cited more frequently as the primary cause of an accident when his reason for speeding was to hide a vial of cocaine than when it was to hide his parents' anniversary gift. In Study 2, of the 4 acts that produced an unfortunate outcome, the most blameworthy act was cited as the factor with the greatest causal impact. Study 3 found that greater causal influence was perceived throughout a causal chain when the act that engaged the chain was positive rather than negative. Finally, Study 4 found that both traditional causal factors (i.e., necessity and sufficiency) and culpable factors influenced perceived causation. (PsycINFO Database Record (c) 2012 APA, all rights reserved)
Article
Full-text available
A model is presented to account for the natural selection of what is termed reciprocally altruistic behavior. The model shows how selection can operate against the cheater (non-reciprocator) in the system. Three instances of altruistic behavior are discussed, the evolution of which the model can explain: (1) behavior involved in cleaning symbioses; (2) warning cries in birds; and (3) human reciprocal altruism. Regarding human reciprocal altruism, it is shown that the details of the psychological system that regulates this altruism can be explained by the model. Specifically, friendship, dislike, moralistic aggression, gratitude, sympathy, trust, suspicion, trustworthiness, aspects of guilt, and some forms of dishonesty and hypocrisy can be explained as important adaptations to regulate the altruistic system. Each individual human is seen as possessing altruistic and cheating tendencies, the expression of which is sensitive to developmental variables that were selected to set the tendencies at a balance ap...
Article
Full-text available
Reviews evidence which suggests that there may be little or no direct introspective access to higher order cognitive processes. Ss are sometimes (a) unaware of the existence of a stimulus that importantly influenced a response, (b) unaware of the existence of the response, and (c) unaware that the stimulus has affected the response. It is proposed that when people attempt to report on their cognitive processes, that is, on the processes mediating the effects of a stimulus on a response, they do not do so on the basis of any true introspection. Instead, their reports are based on a priori, implicit causal theories, or judgments about the extent to which a particular stimulus is a plausible cause of a given response. This suggests that though people may not be able to observe directly their cognitive processes, they will sometimes be able to report accurately about them. Accurate reports will occur when influential stimuli are salient and are plausible causes of the responses they produce, and will not occur when stimuli are not salient or are not plausible causes. (86 ref)
Article
Full-text available
Political conservatives and liberals were interviewed about 3 kinds of sexual acts: homosexual sex, unusual forms of masturbation, and consensual incest between an adult brother and sister. Conservatives were more likely to moralize and to condemn these acts, but the differences were concentrated in the homosexual scenarios and were minimal in the incest scenarios. Content analyses reveal that liberals had a narrow moral domain, largely limited to the “ethics of autonomy” (Shweder, Much, Mahapatra, & Park, 1997) while conservatives had a broader and more multifaceted moral domain. Regression analyses show that, for both groups, moral judgments were best predicted by affective reactions, and were not predicted by perceptions of harmfulness. Suggestions for calming the culture wars over homosexuality are discussed.
Article
Full-text available
It is widely believed that the primary function of folk psychology lies in the prediction, explanation and control of behavior. A question arises, however, as to whether folk psychology has also been shaped in fundamental ways by the various other roles it plays in people’s lives. Here I approach that question by considering one particular aspect of folk psychology – the distinction between intentional and unintentional behaviors. The aim is to determine whether this distinction is best understood as a tool used in prediction, explanation and control or whether it has been shaped in fundamental ways by some other aspect of its use.
Article
Full-text available
Protected values (PVs) are those that people think should not be traded off. Baron and Spranca (1997) proposed that such values result from rules concerning actions (as opposed to values for outcomes). This proposal implies that PVs should show a particularly large bias against harmful acts that undermine the value in question, as opposed to harmful omissions (omission bias). We found this correlation between PVs and omission bias in 3 experiments, using stimuli of the sort that we used before to demonstrate omission bias. In 2 experiments, we also found a weak tendency for PVs to be associated with lack of concern for the number of acts involved, which is analogous to earlier results showing an association with lack of concern for the quantity of outcomes. Finally, 1 experiment showed that some people are willing to sacrifice values to prevent losses more than they are willing to sacrifice these values for gains.
Article
Full-text available
Subjects read scenarios concerning pairs of options. One option was an omission, the other, a commission. Intentions, motives, and consequences were held constant. Subjects either judged the morality of actors by their choices or rated the goodness of decision options. Subjects often rated harmful omissions as less immoral, or less bad as decisions, than harmful commissions. Such ratings were associated with judgments that omissions do not cause outcomes. The effect of commission is not simply an exaggerated response to commissions: a reverse effect for good outcomes was not found, and a few subjects were even willing to accept greater harm in order to avoid action. The “omission bias” revealed in these experiments can be described as an overgeneralization of a useful heuristic to cases in which it is not justified. Additional experiments indicated that subjects' judgments about the immorality of omissions and commissions are dependent on several factors that ordinarily distinguish omissions and commissions: physical movement in commissions, the presence of salient alternative causes in omissions, and the fact that the consequences of omissions would occur if the actor were absent or ignorant of the effects of not acting.
Article
Full-text available
The common wisdom among criminal law theorists and policy makers is that the notion of desert is vague and subject to wide disagreement. Yet the empirical evidence in available studies, including new studies reported here, paints a dramatically different picture. While moral philosophers may disagree on some aspects of moral blameworthiness, people's intuitions of justice are commonly specific, nuanced, and widely shared. Indeed, with regard to the core harms and evils to which criminal law addresses itself (physical aggression, takings without consent, and deception in transactions) people's shared intuitions cut across demographics and cultures. The findings raise interesting questions, such as what could explain this striking result, and hint at intriguing implications for criminal law and criminal justice policy. Available for download at http://ssrn.com/abstract=932067
Article
Full-text available
Four experiments examined people’s folk-psychological concept of intentional action. The chief question was whether or not evaluative considerations — considerations of good and bad, right and wrong, praise and blame — played any role in that concept. The results indicated that the moral qualities of a behavior strongly influence people’s judgements as to whether or not that behavior should be considered ‘intentional.’ After eliminating a number of alternative explanations, the author concludes that this effect is best explained by the hypothesis that evaluative considerations do play some role in people’s concept of intentional action.
Article
Full-text available
How do people respond to others' accidental behaviors? Reward and punishment for an accident might depend on the actor's intentions, or instead on the unintended outcomes she brings about. Yet, existing paradigms in experimental economics do not include the possibility of accidental monetary allocations. We explore the balance of outcomes and intentions in a two-player economic game where monetary allocations are made with a "trembling hand": that is, intentions and outcomes are sometimes mismatched. Player 1 allocates $10 between herself and Player 2 by rolling one of three dice. One die has a high probability of a selfish outcome, another has a high probability of a fair outcome, and the third has a high probability of a generous outcome. Based on Player 1's choice of die, Player 2 can infer her intentions. However, any of the three dice can yield any of the three possible outcomes. Player 2 is given the opportunity to respond to Player 1's allocation by adding to or subtracting from Player 1's payoff. We find that Player 2's responses are influenced substantially by the accidental outcome of Player 1's roll of the die. Comparison to control conditions suggests that in contexts where the allocation is at least partially under the control of Player 1, Player 2 will hold Player 1 accountable for unintentional negative outcomes. In addition, Player 2's responses are influenced by Player 1's intention. However, Player 2 tends to modulate his responses substantially more for selfish intentions than for generous intentions. This novel economic game provides new insight into the psychological mechanisms underlying social preferences for fairness and retribution.
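The structure of the trembling-hand game can be sketched as a small simulation. The dice probabilities and Player 2's weighted response rule below are illustrative assumptions, not the parameters or the model estimated in the study.

```python
import random

random.seed(2)

# Assumed dice: each mostly yields its named allocation to Player 2 out of $10.
DICE = {
    "selfish":  {0: 0.8, 5: 0.1, 10: 0.1},   # Player 2 usually gets $0
    "fair":     {0: 0.1, 5: 0.8, 10: 0.1},
    "generous": {0: 0.1, 5: 0.1, 10: 0.8},
}

def roll(die):
    """Sample Player 2's allocation from the chosen die: the die encodes the
    intention, but the roll can mismatch it -- the 'trembling hand'."""
    outcomes, probs = zip(*DICE[die].items())
    return random.choices(outcomes, weights=probs)[0]

def respond(die, outcome, w_intent=0.5):
    """Toy response rule: Player 2 rewards or punishes based on a weighted
    blend of inferred intention (the die's expected allocation) and the
    realized outcome; positive means reward, negative means punish."""
    expected = sum(o * p for o, p in DICE[die].items())
    blended = w_intent * expected + (1 - w_intent) * outcome
    return blended - 5  # centered on an even $5/$5 split

print(respond("selfish", 0), respond("generous", 10))
```

Because the response blends both signals, a selfish die that accidentally pays out fairly is still punished relative to a fair die with the same outcome, which is the intention effect the game is designed to isolate.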
Article
Research on moral judgment has been dominated by rationalist models, in which moral judgment is thought to be caused by moral reasoning. The author gives 4 reasons for considering the hypothesis that moral reasoning does not cause moral judgment; rather, moral reasoning is usually a post hoc construction, generated after a judgment has been reached. The social intuitionist model is presented as an alternative to rationalist models. The model is a social model in that it deemphasizes the private reasoning done by individuals and emphasizes instead the importance of social and cultural influences. The model is an intuitionist model in that it states that moral judgment is generally the result of quick, automatic evaluations (intuitions). The model is more consistent than rationalist models with recent findings in social, cultural, evolutionary, and biological psychology, as well as in anthropology and primatology.
Article
What was noted by E. J. Langer (1978) remains true today; that much of contemporary psychological research is based on the assumption that people are consciously and systematically processing incoming information in order to construe and interpret their world and to plan and engage in courses of action. As did E. J. Langer, the authors question this assumption. First, they review evidence that the ability to exercise such conscious, intentional control is actually quite limited, so that most of moment-to-moment psychological life must occur through nonconscious means if it is to occur at all. The authors then describe the different possible mechanisms that produce automatic, environmental control over these various phenomena and review evidence establishing both the existence of these mechanisms as well as their consequences for judgments, emotions, and behavior. Three major forms of automatic self-regulation are identified: an automatic effect of perception on action, automatic goal pursuit, and a continual automatic evaluation of one's experience. From the accumulating evidence, the authors conclude that these various nonconscious mental systems perform the lion's share of the self-regulatory burden, beneficently keeping the individual grounded in his or her current environment.
Article
To what extent do moral judgments depend on conscious reasoning from explicitly understood principles? We address this question by investigating one particular moral principle, the principle of the double effect. Using web-based technology, we collected a large data set on individuals' responses to a series of moral dilemmas, asking when harm to innocent others is permissible. Each moral dilemma presented a choice between action and inaction, both resulting in lives saved and lives lost. Results showed that: (1) patterns of moral judgments were consistent with the principle of double effect and showed little variation across differences in gender, age, educational level, ethnicity, religion or national affiliation (within the limited range of our sample population) and (2) a majority of subjects failed to provide justifications that could account for their judgments. These results indicate that the principle of the double effect may be operative in our moral judgments but not open to conscious introspection. We discuss these results in light of current psychological theories of moral cognition, emphasizing the need to consider the unconscious appraisal system that mentally represents the causal and intentional properties of human action.
Article
Article
A theory of the assignment of moral responsibility and punishment for harm was tested with children 5-11 years of age. The results indicated a fairly sophisticated use of a variety of moral concepts by children from 5 years of age. They showed evidence of knowing that judgments of moral responsibility are presupposed by judgments of punishment and that causal judgments are presupposed by moral responsibility judgments. They also used information on intention and negligence to assign moral responsibility and information on restitution to assign punishment. Developmental trends included an increasing sensitivity to these concepts, greater tolerance for harm doing, and more emphasis on restitution rather than punishment with increasing age.
Article
As the title suggests, this book examines the psychology of interpersonal relations. In the context of this book, the term "interpersonal relations" denotes relations between a few, usually between two, people. How one person thinks and feels about another person, how he perceives him and what he does to him, what he expects him to do or think, how he reacts to the actions of the other--these are some of the phenomena that will be treated. Our concern will be with "surface" matters, the events that occur in everyday life on a conscious level, rather than with the unconscious processes studied by psychoanalysis in "depth" psychology. These intuitively understood and "obvious" human relations can, as we shall see, be just as challenging and psychologically significant as the deeper and stranger phenomena. The discussion will center on the person as the basic unit to be investigated. That is to say, the two-person group and its properties as a superindividual unit will not be the focus of attention. Of course, in dealing with the person as a member of a dyad, he cannot be described as a lone subject in an impersonal environment, but must be represented as standing in relation to and interacting with another person. The chapter topics included in this book include: Perceiving the Other Person; The Other Person as Perceiver; The Naive Analysis of Action; Desire and Pleasure; Environmental Effects; Sentiment; Ought and Value; Request and Command; Benefit and Harm; and Reaction to the Lot of the Other Person. (PsycINFO Database Record (c) 2012 APA, all rights reserved)
Article
Examined the relationship between attribution of causation and attribution of moral responsibility. 60 college students completed questionnaires on hypothetical cases in 2 experiments that examined the necessary and sufficient conditions of causation, and the mitigating effects of voluntariness, foreseeability, and intervening cause. If an actor's behavior was presented as a necessary condition for harm, he/she was more likely to be judged as the cause of the harm, as morally responsible for the harm, and as deserving of punishment than if his/her behavior was presented as not necessary for harm. Information on whether an actor's behavior constituted a sufficient condition for harm marginally affected punishment judgments. In Exp II, where an actor's behavior was considered to be a necessary condition for harm, it was found that if the omission was less than voluntary, the actor was rated as less the cause, as less morally responsible, and as less deserving of punishment than if the omission was fully voluntary. If the harm was not foreseeable, judgments of moral responsibility, but not causation and punishment, were somewhat diminished as compared to cases of foreseeable harm. Path analyses confirmed that relations between judgments of causation and punishment were more remote than relations between judgments of either causation and responsibility or responsibility and punishment. (44 ref) (PsycINFO Database Record (c) 2012 APA, all rights reserved)
Article
What is the nature of human thought? A long dominant view holds that the mind is a general problem-solving device that approaches all questions in much the same way. Chomsky's theory of language, which revolutionised linguistics, challenged this claim, contending that children are primed to acquire some skills, like language, in a manner largely independent of their ability to solve other sorts of apparently similar mental problems. In recent years researchers in anthropology, psychology, linguistics and neuroscience have examined whether other mental skills are similarly independent. Many have concluded that much of human thought is 'domain-specific'. Thus, the mind is better viewed as a collection of cognitive abilities specialised to handle specific tasks than as a general problem solver. This volume introduces a general audience to a domain-specificity perspective, by compiling a collection of essays exploring how several of these cognitive abilities are organised.
Article
What was noted by E. J. Langer (1978) remains true today: that much of contemporary psychological research is based on the assumption that people are consciously and systematically processing incoming information in order to construe and interpret their world and to plan and engage in courses of action. As did Langer, the authors question this assumption. First, they review evidence that the ability to exercise such conscious, intentional control is actually quite limited, so that most of moment-to-moment psychological life must occur through nonconscious means if it is to occur at all. The authors then describe the different possible mechanisms that produce automatic, environmental control over these various phenomena and review evidence establishing both the existence of these mechanisms as well as their consequences for judgments, emotions, and behavior. Three major forms of automatic self-regulation are identified: an automatic effect of perception on action, automatic goal pursuit, and a continual automatic evaluation of one's experience. From the accumulating evidence, the authors conclude that these various nonconscious mental systems perform the lion's share of the self-regulatory burden, beneficently keeping the individual grounded in his or her current environment. (PsycINFO Database Record (c) 2012 APA, all rights reserved)
Article
240 Ss from 5 age groups (6–7.5, 8–9.5, 10–11.5, 12–13.5 yrs and adult graduate and undergraduate students) were administered stories representing F. Heider's (1958) criteria for responsibility attribution under 1 of 2 conditions—the actor was either a hypothetical other or the self. As predicted, an Age × Stimulus Level interaction was found, although its nature differed for attribution of blame and causality. In relation to the moral judgment measure, a further interaction of Story Character × Age was found, and response patterns formed a Guttman scalogram. However, scale types were not clearly age linked. Although these data confirm the utility of Heider's responsibility-attribution criteria, no strong evidence was obtained to support a developmental interpretation of his theory. Results are also discussed in terms of Piagetian research and an extension of Heider's schema. (22 ref) (PsycINFO Database Record (c) 2012 APA, all rights reserved)
Article
Philosophical theories summarized here include regularity and necessity theories from D. Hume (1739 [1978], 1740 [1978]) to the present; manipulability theory; the theory of powerful particulars; causation as connected changes within a defined state of affairs; departures from "normal" events or from some standard for comparison; causation as a transfer of something between objects; and causal propagation and production. Issues found in this literature and of relevance for psychology include whether actual causal relations can be perceived or known; what sorts of things people believe can be causes; different levels of causal analysis; the distinction between the causal relation itself and cues to causal relations; causal frames or fields; internal and external causes; and understanding of causation in different realms of the world, such as the natural and artificial realms. A full theory of causal inference by laypeople should address all of these issues. (PsycINFO Database Record (c) 2012 APA, all rights reserved)
Article
Subjects are reluctant to vaccinate a (hypothetical) child when the vaccination itself can cause death, even when this is much less likely than death from the disease prevented. This effect is even greater when there is a ‘risk group’ for death (with its overall probability held constant), even though the test for membership in the risk group is unavailable. This effect cannot be explained in terms of a tendency to assume that the child is in the risk group. A risk group for death from the disease has no effect on reluctance to vaccinate. The reluctance is an example of omission bias (Spranca, Minsk & Baron, in press), an overgeneralization of a distinction between commissions and omissions to a case in which it is irrelevant. Likewise, it would ordinarily be prudent to find out whether a child is in a risk group before acting, but in this case it is impossible, so knowledge of the existence of the risk group is irrelevant. The risk-group effect is consistent with Frisch & Baron's (1988) interpretation of ambiguity.
Article
The aim of the dissertation is to formulate a research program in moral cognition modeled on aspects of Universal Grammar and organized around three classic problems in moral epistemology: (1) What constitutes moral knowledge? (2) How is moral knowledge acquired? (3) How is moral knowledge put to use? Drawing on the work of Rawls and Chomsky, a framework for investigating (1)-(3) is proposed. The framework is defended against a range of philosophical objections and contrasted with the approach of developmental psychologists like Piaget and Kohlberg.One chapter consists of an interpretation of the analogy Rawls draws in A Theory of Justice between moral theory and generative linguistics. A second chapter clarifies the empirical significance of Rawls' linguistic analogy by formulating a solution to the problem of descriptive adequacy with respect to a class of commonsense moral intuitions, including those discussed in the trolley problem literature originating in the work of Foot and Thomson. Three remaining chapters defend Rawls' linguistic analogy against its critics. In response to Hare's objection that Rawls' conception of moral theory is too empirical and insufficiently normative, it is argued that Hare fails to acknowledge both the centrality of the problem of empirical adequacy in the history of moral philosophy and the complexity of Rawls' approach to the problem of normative adequacy. In response to Nagel's claim that the analogy between moral theory and linguistics is false because whatever native speakers agree on is English, but whatever ordinary individuals agree in condemning is not necessarily wrong, it is argued that the criticism ignores both Rawls' use of the competence-performance distinction and the theory-dependence of the corresponding distinction in linguistics. 
In response to Dworkin's claim that Rawls' conception of moral theory is incompatible with naturalism and presupposes constructivism, it is argued that Dworkin's distinction between naturalism and constructivism represents a false antithesis; neither is an accurate interpretation of the model of moral theory Rawls describes in 'A Theory of Justice.' The thesis concludes by situating Rawls' linguistic analogy within the context of broader debates in moral philosophy, metaethics, natural law theory, the theory of moral development, and the cognitive and brain sciences.
Article
Are current theories of moral responsibility missing a factor in the attribution of blame and praise? Four studies demonstrated that even when cause, intention, and outcome (factors generally assumed to be sufficient for the ascription of moral responsibility) are all present, blame and praise are discounted when the factors are not linked together in the usual manner (i.e., cases of “causal deviance”). Experiment 4 further demonstrates that this effect of causal deviance is driven by intuitive gut feelings of right and wrong, not logical deliberation.
Article
Omission bias is the preference for harm caused by omissions over equal or lesser harm caused by acts. Recent articles (Connolly & Reb, 2003; Patt & Zeckhauser, 2000; Tanner & Medin, in press) have raised questions about the generality of this phenomenon and have suggested that the opposite bias (action bias) sometimes exists. Prentice and Koehler (2003) have suggested that omission bias is sometimes confounded with a bias toward what is normal, a bias they find. We review this literature and report new data showing omission bias with appropriate methods, as well as a small normality bias that cannot explain the omission bias. The data suggest that the bias is largely based on the distinction between direct and indirect causation, rather than that between action and inaction as such. We report substantial individual differences: some subjects show action bias. We argue, though, that concern about omission bias is justified if only a substantial minority of people show it.
Article
We review the major findings concerning omission bias and protected values (PVs). PVs are values that are absolute, hence protected from trade-offs with other values. We argue that PVs against omissions are relatively rare, since a prohibited omission would be an injunction to act, which could create infinite obligations, and we provide support for this argument. We also replicate and extend earlier findings of a correlation between PVs and omission bias, the bias to favor harms of omission over equal or greater harms from action. We discuss the nature of omission bias and its relation to other biases. We also find that, although emotional responses are correlated with biases, we can manipulate apparent emotional responses without affecting PVs or omission bias. We thus argue that emotions are not the only cause of biases.
Article
We created paired moral dilemmas with minimal contrasts in wording, a research strategy that has been advocated as a way to empirically establish principles operative in a domain-specific moral psychology. However, the candidate "principles" we tested were not derived from work in moral philosophy, but rather from work in the areas of consumer choice and risk perception. Participants were paradoxically less likely to choose an action that sacrifices one life to save others when they were asked to provide more reasons for doing so (Experiment 1), and their willingness to sacrifice lives depended not only on how many lives would be saved, but on the number of lives at risk (Experiment 2). The latter effect was also found in a within-subjects design (Experiment 3). These findings suggest caution in the use of artificial dilemmas as a key testbed for revealing principled bases for moral judgment.
Article
Many important moral decisions, particularly at the policy level, require the evaluation of choices involving outcomes of variable magnitude and probability. Many economic decisions involve the same problem. It is not known whether and to what extent these structurally isomorphic decisions rely on common neural mechanisms. Subjects undergoing fMRI evaluated the moral acceptability of sacrificing a single life to save a larger group of variable size and probability of dying without action. Paralleling research on economic decision making, the ventromedial prefrontal cortex and ventral striatum were specifically sensitive to the "expected moral value" of actions, i.e., the expected number of lives lost/saved. Likewise, the right anterior insula was specifically sensitive to outcome probability. Other regions tracked outcome certainty and individual differences in utilitarian tendency. The present results suggest that complex life-and-death moral decisions that affect others depend on neural circuitry adapted for more basic, self-interested decision making involving material rewards.
Article
A case study of cooperation in a group of Iraqi Christian immigrants who have settled in parts of Michigan, USA. Their community has maintained the customs of its place of origin alongside the local culture; the study examines the group's cultural evolution, cooperation, and altruism.
Article
In some cases people judge it morally acceptable to sacrifice one person's life in order to save several other lives, while in other similar cases they make the opposite judgment. Researchers have identified two general factors that may explain this phenomenon at the stimulus level: (1) the agent's intention (i.e. whether the harmful event is intended as a means or merely foreseen as a side-effect) and (2) whether the agent harms the victim in a manner that is relatively "direct" or "personal". Here we integrate these two classes of findings. Two experiments examine a novel personalness/directness factor that we call personal force, present when the force that directly impacts the victim is generated by the agent's muscles (e.g., in pushing). Experiments 1a and b demonstrate the influence of personal force on moral judgment, distinguishing it from physical contact and spatial proximity. Experiments 2a and b demonstrate an interaction between personal force and intention, whereby the effect of personal force depends entirely on intention. These studies also introduce a method for controlling for people's real-world expectations in decisions involving potentially unrealistic hypothetical dilemmas.
Article
The distinction between active and passive euthanasia is thought to be crucial for medical ethics. The idea is that it is permissible, at least in some cases, to withhold treatment and allow a patient to die, but it is never permissible to take any direct action designed to kill the patient. This doctrine seems to be accepted by most doctors, and is endorsed in a statement adopted by the House of Delegates of the American Medical Association on December 4, 1973: The intentional termination of the life of one human being by another—mercy killing—is contrary to that for which the medical profession stands and is contrary to the policy of the American Medical Association. The cessation of the employment of extraordinary means to prolong the life of the body when there is irrefutable evidence that biological death is imminent is the decision of the patient and/or his immediate family. The advice and judgment of the physician should be freely available to the patient and/or his immediate family.