
Abstract

How does the mind make moral judgments when the only way to satisfy one moral value is to neglect another? Moral dilemmas posed a recurrent adaptive problem for ancestral hominins, whose cooperative social life created multiple responsibilities to others. For many dilemmas, striking a balance between two conflicting values (a compromise judgment) would have promoted fitness better than neglecting one value to fully satisfy the other (an extreme judgment). We propose that natural selection favored the evolution of a cognitive system designed for making trade-offs between conflicting moral values. Its nonconscious computations respond to dilemmas by constructing "rightness functions": temporary representations specific to the situation at hand. A rightness function represents, in compact form, an ordering of all the solutions that the mind can conceive of (whether feasible or not) in terms of moral rightness. An optimizing algorithm selects, among the feasible solutions, one with the highest level of rightness. The moral trade-off system hypothesis makes various novel predictions: People make compromise judgments, judgments respond to incentives, judgments respect the axioms of rational choice, and judgments respond coherently to morally relevant variables (such as willingness, fairness, and reciprocity). We successfully tested these predictions using a new trolley-like dilemma. This dilemma has two original features: It admits both extreme and compromise judgments, and it allows incentives (in this case, the human cost of saving lives) to be varied systematically. No other existing model predicts the experimental results, which contradict an influential dual-process model.
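
In schematic form, the computation hypothesized in the abstract can be summarized as a constrained maximization. The notation below is illustrative shorthand added here, not the paper's own formalism:

```latex
% X : all resolutions of the dilemma the mind can conceive of (feasible or not)
% F : the subset of X that is feasible in the situation at hand
% R : X -> reals, the situation-specific rightness function (it orders all of X)
% The intuitive judgment x* is a feasible resolution of maximal rightness:
\[
  x^{*} \in \operatorname*{arg\,max}_{x \in F} R(x).
\]
```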
A moral trade-off system produces intuitive judgments that are rational and coherent and strike a balance between conflicting moral values
Ricardo Andrés Guzmán, María Teresa Barbato, Daniel Sznycer, Leda Cosmides
Proceedings of the National Academy of Sciences, October 2022
DOI: 10.1073/pnas.2214005119
Making moral tradeoffs when facing a dilemma
What is it about?
Intuitions about right and wrong clash in moral dilemmas. This paper is about how our mind makes judgments when that happens. It shows we have a cognitive system that handles dilemmas very well, producing intuitive judgments that are rational *and* coherent *and* strike a balance between conflicting moral values.
We reported evidence that dilemmas activate a moral trade-off system: a cognitive system that is well designed for making trade-offs between conflicting moral values. When asked which option for resolving a dilemma is morally right, many people made compromise judgments, which strike a balance between conflicting moral values by partially satisfying both. Furthermore, their moral judgments satisfied a demanding standard of rational choice: the Generalized Axiom of Revealed Preference. Deliberative reasoning cannot explain these results, nor can a tug-of-war between emotion and reason. The results are the signature of a cognitive system that weighs competing moral considerations and chooses the solution that maximizes rightness.
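
To make the idea concrete, here is a minimal sketch of what "choosing the solution that maximizes rightness" could look like computationally. The functional form, the weights, and the options are hypothetical, invented only to show how a compromise option can outrank both extreme options once competing values are weighed; they are not the dilemma or the estimation procedure used in the paper.

```python
# Minimal sketch of rightness maximization (hypothetical form, weights, and options).
# A dilemma in which saving more lives requires harming more innocent people.
# Each option is (lives_saved, people_harmed); the first and last are the extremes.
options = [
    (0, 0),  # extreme: harm no one, save no one
    (2, 1),  # compromise
    (4, 2),  # compromise
    (6, 3),  # extreme: save everyone, harm the most people
]

def rightness(lives_saved, people_harmed, w_save=1.0, w_harm=0.5):
    """Hypothetical rightness function: concern for saving lives (with
    diminishing returns) weighed against concern for harming innocents.
    The weights stand in for one person's moral values."""
    return w_save * lives_saved ** 0.5 - w_harm * people_harmed

# The optimizing step: among the feasible options, pick the one with the most rightness.
judgment = max(options, key=lambda opt: rightness(*opt))
for opt in options:
    print(opt, round(rightness(*opt), 3))
print("judged most right:", judgment)  # (4, 2) with these weights: a compromise
```

With different weights, the same machinery returns an extreme judgment instead (for example, raising w_harm far enough makes (0, 0) win), which is the sense in which what counts as "most right" can vary from person to person.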
Why is it important?
Many psychologists underestimate the sophistication and rationality of human cognition. They argue that we can reason deliberatively (which is obviously true), but that evolved emotions, heuristics, or inferences pre-empt, interfere with, or bias our reasoning. A case in point involves sacrificial moral dilemmas (in which saving the most lives requires harming innocents, as in trolley problems). According to an influential theory, these moral dilemmas elicit a tug-of-war between emotions and reasoning, which prevents us from making judgments that strike a balance between conflicting moral values by partially satisfying both. But this makes no sense from an evolutionary perspective. Moral dilemmas are, and always have been, part of the human condition. Our ancestors lived in dense, interdependent social groups, and had obligations and duties to children, parents, siblings, friends, neighbors, coalitional allies, and others. In many (most?) cases, it was impossible to fully satisfy all of these obligations. Sometimes the best you can do is partially satisfy several of them. This suggests that natural selection would have built computational machinery that is good at weighing various obligations and producing compromise judgments (ones that partially satisfy two or more conflicting values). Our research provides evidence of a cognitive system that does just that, producing intuitive judgments systematically and rationally, while maintaining moral coherence (i.e., the judgments vary sensibly with changing conditions).
Across situations, people consistently chose the resolution to a dilemma that was *most right*, given how their mind weighed the competing moral values. In fact, their judgments satisfied a standard of rationality from microeconomics that implies the existence of an optimizing algorithm.
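
The microeconomic standard referred to here is the Generalized Axiom of Revealed Preference (GARP), mentioned above. As a rough illustration of what testing it involves, the sketch below checks GARP for generic price-and-choice data; the prices, budgets, and bundles are made up, and the paper's actual dilemma, data, and test procedure differ in detail.

```python
# Illustrative GARP consistency check for revealed-preference data.
# Each observation is (prices, chosen_bundle); all numbers below are made up.

def dot(p, x):
    return sum(pi * xi for pi, xi in zip(p, x))

def satisfies_garp(observations):
    """Return True if the observed choices are consistent with maximizing
    some stable preference ordering (Generalized Axiom of Revealed Preference)."""
    n = len(observations)
    # Direct revealed preference: bundle i is revealed at least as good as
    # bundle j if j was affordable when i was chosen, at i's prices.
    R = [[dot(observations[i][0], observations[i][1]) >=
          dot(observations[i][0], observations[j][1]) for j in range(n)]
         for i in range(n)]
    # Transitive closure of the relation (Warshall's algorithm).
    for k in range(n):
        for i in range(n):
            for j in range(n):
                R[i][j] = R[i][j] or (R[i][k] and R[k][j])
    # GARP is violated if i is (possibly indirectly) revealed preferred to j,
    # yet bundle i was strictly cheaper than the chosen bundle at j's prices.
    for i in range(n):
        for j in range(n):
            if R[i][j]:
                exp_j = dot(observations[j][0], observations[j][1])
                cost_of_i_at_j = dot(observations[j][0], observations[i][1])
                if exp_j > cost_of_i_at_j:
                    return False
    return True

# Hypothetical data: two "goods" (say, lives saved and harm avoided), three choices.
data = [
    ([1.0, 2.0], [4.0, 1.0]),
    ([2.0, 1.0], [1.0, 4.0]),
    ([1.0, 1.0], [2.5, 2.5]),
]
print(satisfies_garp(data))  # True: this made-up data set is GARP-consistent
```

In revealed-preference theory, passing a check like this is what licenses the inference that the choices could have been generated by maximizing some underlying function, which is why the result above "implies the existence of an optimizing algorithm."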
Perspectives
Leda Cosmides
University of California, Santa Barbara
My personal perspective:
Strangely, I found our results (and the evolutionary analysis accompanying them) comforting. For years, I suffered because I could not figure out how to be a good mother and a good professor at the same time. I always felt I was failing someone. If I was fully satisfying my obligations to my child, I was failing at some of my duties in the department, to my colleagues or students. If I was doing right by my grad students, I felt like I was failing my child. Working on this paper brought home that doing it all was literally impossible. Often, the best choice--the one that is most right--is to satisfy most of your obligations partially: a compromise judgment. What counts as "most right" will vary from person to person, of course, because it depends on how heavily you weigh your competing obligations to family, students, colleagues, etc. (a "rightness function," as discussed in the paper, expresses those weights). But it is comforting to know that we have a cognitive system that takes all that information into account, does a nonconscious computation that determines which available option maximizes rightness, given your values--and that *that* is the option that will feel most right (an intuitive judgment). And to me, it is interesting to know that the option that feels most right is sometimes a compromise judgment. I was always going all-out, exhausted most of the time, yet feeling like I was failing at all my obligations. Now I realize that it was my expectations that were nuts, and what I saw as failures were compromise moral judgments--my mind was striking a balance between all these obligations. And that is just part of the human condition.
A robust finding in the welfare state literature is that public support for the welfare state differs widely across countries. Yet recent research on the psychology of welfare support suggests that people everywhere form welfare opinions using psychological predispositions designed to regulate interpersonal help giving using cues regarding recipient effort. We argue that this implies that cross-national differences in welfare support emerge from mutable differences in stereotypes about recipient efforts rather than deep differences in psychological predispositions. Using free-association tasks and experiments embedded in large-scale, nationally representative surveys collected in the United States and Denmark, we test this argument by investigating the stability of opinion differences when faced with the presence and absence of cues about the deservingness of specific welfare recipients. Despite decades of exposure to different cultures and welfare institutions, two sentences of information can make welfare support across the U. S. and Scandinavian samples substantially and statistically indistinguishable.