Moral Frames Are Persuasive and Moralize Attitudes; Nonmoral Frames Are Persuasive and De-Moralize Attitudes

Authors: Rabia I. Kodapanakkal, Mark J. Brandt, Christoph Kogler, and Ilja van Beest
Affiliations: Department of Social Psychology, Tilburg University (Kodapanakkal, Kogler, van Beest); Department of Psychology, Michigan State University (Brandt)
Abstract

Moral framing and reframing strategies persuade people holding moralized attitudes (i.e., attitudes having a moral basis). However, these strategies may have unintended side effects: They have the potential to moralize people's attitudes further and as a consequence lower their willingness to compromise on issues. Across three experimental studies with adult U.S. participants (Study 1: N = 2,151, Study 2: N = 1,590, Study 3: N = 1,015), we used persuasion messages (moral, nonmoral, and control) that opposed new big-data technologies (crime-surveillance technologies and hiring algorithms). We consistently found that moral frames were persuasive and moralized people's attitudes, whereas nonmoral frames were persuasive and de-moralized people's attitudes. Moral frames also lowered people's willingness to compromise and reduced behavioral indicators of compromise. Exploratory analyses suggest that feelings of anger and disgust may drive moralization, whereas perceiving the technologies to be financially costly may drive de-moralization. The findings imply that use of moral frames can increase and entrench moral divides rather than bridge them.
Psychological Science, 2022, 1–17. © The Author(s) 2022. https://doi.org/10.1177/09567976211040803
Keywords: persuasion, moral conviction, moralization, de-moralization, compromise, open data, open materials, preregistered
Received 3/26/21; Revision accepted 8/2/21
Corresponding author: Rabia I. Kodapanakkal, Department of Social Psychology, Tilburg University. Email: r.i.kodapanakkal@tilburguniversity.edu
Research Article
Moralized attitudes are attitudes that are embedded in
people’s core beliefs and convictions and are related
to what people believe to be fundamentally right or
wrong (Skitka et al., 2005; Skitka & Morgan, 2014).
People who hold moralized attitudes are generally
harder to persuade (e.g., Aramovich et al., 2012) and
unwilling to compromise on their positions (e.g., Ryan,
2017), leading to moral and political divides. However,
some strategies can persuade individuals who hold
moralized attitudes. These strategies are designed to
counter moralized attitudes by casting persuasive mes-
sages in a new moral light, either by highlighting how
a position on a moralized attitude may in fact be
immoral (moral framing; Andrews et al., 2017; Hoover
et al., 2018; Luttrell et al., 2019; Van Zant & Moore,
2015) or by highlighting how a position on an attitude
may be rooted in a favored moral value (moral refram-
ing; Feinberg & Willer, 2013, 2015, 2019; Kidwell et al.,
2013; Voelkel et al., 2020; Voelkel & Feinberg, 2018).
Using these approaches, scholars have changed moral-
ized opinions on recycling by highlighting how recy-
cling can be “harmful and immoral” (Luttrell et al., 2019,
p. 1139) and altered the opinions of U.S. conservatives
on same-sex marriage by highlighting how same-sex
couples are “proud and patriotic Americans” (Feinberg
& Willer, 2015, p. 1673). This work suggests that moral
framing and reframing are useful tools for attitude
change that can bridge moral and political divides.
Given its benefits, it has been further recommended for
use in the field, for example, in addressing the COVID-
19 pandemic (Van Bavel et al., 2020).
This may be premature. Moral framing and reframing
strategies could have unintended side effects that limit
their potential to bridge divides. We consider two. First,
these strategies could increase the moral relevance
people attach to an attitude, leaving people persuaded
and with their attitudes moralized. Second, these strate-
gies could decrease people’s willingness to compro-
mise, leaving people persuaded but with their attitudes
entrenched. For example, an individual who thinks the
use of hiring algorithms is morally right because they
are fairer and more accurate could be persuaded with
moral arguments highlighting that these technologies
can be biased and unfair. However, this
may lead to moralization of the attitude as well as a
decreased willingness to compromise on the issue. Per-
suading and entrenching people may be a viable goal
if one considers the changed attitude to be the morally
correct one, but if moral framing and reframing are to
be used to bridge political divides, such side effects are
antithetical to the approach.
Potential Side Effects
In both moral framing and reframing strategies, the
moral arguments used for persuasion could also induce
change in people’s moral convictions. The content of
the moral arguments is purposefully similar to factors
that drive the process of moralization. For example,
research suggests that moralization is based on the
intuitive perception of harm (e.g., Schein & Gray, 2018),
strong emotional reactions (Brandt et al., 2015; Wisneski
& Skitka, 2017), or the linking of an attitude with a
broader moral principle (Feinberg et al., 2019; Rozin,
1999). Moral arguments often contain all of these ele-
ments, tapping into people’s moral emotions, percep-
tions of harm, and their broader moral principles (e.g.,
Feinberg & Willer, 2015; Luttrell et al., 2016, 2019).
These elements likely make the argument persuasive
(Feinberg & Willer, 2015), but they could also moralize
the target attitude.
A secondary moralization effect may not be worri-
some. Moralized attitudes can be constructive because
they can increase people’s political engagement and
lead to more collective action and greater civic partici-
pation (Mazzoni et al., 2015; Skitka & Bauman, 2008;
van Zomeren et al., 2011). However, moralized attitudes
are a double-edged sword and can also have effects
that may be less constructive (at least in certain situa-
tions) because people who hold moralized attitudes are
less willing to compromise (e.g., Delton et al., 2020),
show more anger (e.g., Mullen & Skitka, 2006), and are
intolerant toward those with whom they disagree (e.g.,
Garrett & Bankert, 2020).
We focus on a side effect that is particularly relevant
for efforts at bridging moral and political divides: the
willingness to compromise. Willingness to compromise
in a democratic system recognizes pluralistic values and
acts as an instrument to achieve mutual respect and
stability. Resisting compromise and strongly favoring
only one outcome can lead to a stalemate in govern-
ments in which problems go unresolved (see Ryan,
2017). People who hold strong moral convictions about
their attitudes are less likely to compromise (Clifford,
2019; Delton etal., 2020; Ryan, 2017) and are even less
likely to identify procedures for resolving issues (Skitka
etal., 2005). This is because moralized attitudes are
particularly strong attitudes, connected to right and wrong,
and are often viewed like objective facts (Goodwin &
Darley, 2008; Skitka etal., 2021). If one perceives the
other side as holding an objectively wrong position, it
does not make sense to compromise. For people who
hold truly strong moral convictions, it would be akin
to compromising on the answer to 2 + 2. Notably, if
moral framing and reframing strategies induce an
unwillingness to compromise, their utility in bridging
divides will be curtailed.
Statement of Relevance
Societies are divided over moral issues. One set
of strategies to bridge these divides is to frame
persuasive arguments in moral terms (e.g., “new
technologies can cause harm and be used to dis-
criminate against people”) or use alternative
moral values. These strategies have unintended
side effects that reduce the possibility that they
can bridge moral divides. We found two such side
effects. The first is that moral frames increase
moralization (one’s attitude having a moral basis),
and the second is that moral frames lower peo-
ple’s willingness to compromise. These results
imply that current moral-persuasion strategies
designed to bridge moral divides by changing
attitudes could unintentionally increase those
divides by further moralizing and entrenching
people’s attitudes. Scholars and practitioners
should use these strategies cautiously and test
for potential side effects in the domains in
which they plan to use them. We also found
that nonmoral frames were persuasive and de-
moralized people’s attitudes. This strategy has
the potential to persuade people but could also
reduce the moral stakes by reducing levels of
moralization.
There is some initial evidence for this curtailing. One
study found that people exposed to moral rhetoric
(compared with pragmatic rhetoric) used more absolut-
ist reasoning and expressed more intense political atti-
tudes (at least for two of the attitudes considered;
Marietta, 2008). This study, however, was underpow-
ered, did not directly measure moralization or compro-
mise, and did not include a control condition. The latter
omission is important because without a control condi-
tion, one cannot determine whether moral framing
increases moralization or whether pragmatic framing
decreases moralization. Another study (Van Zant &
Moore, 2015) that included a moral, ambiguous, and
pragmatic frame did not find any differences in moral-
ization across the frames. However, very brief frames
were used, which may not be sufficient to affect mor-
alization. Nonmoral messages that contain pragmatic
arguments highlighting economic and feasibility con-
cerns can be persuasive for people who hold nonmoral
attitudes and unpersuasive for those who hold moral-
ized attitudes (Luttrell et al., 2019, Study 1). However,
how these messages might affect moralization and the
willingness to compromise is not known. Some research
suggests that the consideration of financial costs can
reduce the influence of moralization (Bastian et al.,
2015), and others hint at using emotional de-escalation
to reduce moralization (Clifford, 2019; Skitka et al.,
2021). For example, emotional frames lead to greater
attitude moralization compared with a control frame
(Clifford, 2019), but whether nonemotional frames do
the opposite is an empirical question yet to be tested.
Nonmoral messages devoid of emotional content and
containing economic concerns could potentially result
in de-moralized attitudes and a greater willingness to
compromise. By including moral, nonmoral, and con-
trol conditions, it is possible to test for unintended side
effects of moral framing and reframing strategies.
The Current Research
We assessed whether moral and nonmoral frames
affected people’s moral convictions (Studies 1–3) and
their willingness to compromise (Study 3) on their posi-
tion. We also tested whether the frames were persuasive
to ensure that any differences in moral convictions or
compromise were not due to differential effectiveness
at changing attitudes (Studies 1–3). We also explored
potential mechanisms (e.g., emotions, perceptions of
harm) driving changes in moral convictions (Studies 2
and 3). All the studies focused on persuading people
to oppose new big-data technologies because these
issues involve relatively new attitudes that are often
discussed using moral language (Corlett, 2002;
Kleinberg et al., 2018) and because they have the
potential to moralize those attitudes (Kodapanakkal
etal., 2021).
Method
We describe the method of all studies in parallel, high-
lighting the similarities and differences. These are sum-
marized in Table 1. We used a pretest/posttest design
with two time points for Studies 1 and 2. Study 3 had
only one time point. In all studies, we randomly
assigned participants to at least one moral frame, one
nonmoral frame, or a control condition.
In Study 1, we assessed whether moral and nonmoral
frames were persuasive and whether they affected
moral conviction. These frames presented arguments
opposing crime-surveillance technologies. The primary
analyses in Study 1 were exploratory.1 We found that
the moral frames were persuasive and moralized peo-
ple’s attitudes, whereas nonmoral frames were persua-
sive but (marginally) de-moralized their attitudes.
We had three aims in Study 2. First, we wanted to
replicate the moralization and de-moralization findings
of Study 1. We predicted that the results would be the
same as in Study 1 (the preregistration can be viewed
at https://osf.io/7rzx8/). Second, we wanted to explore
possible cognitive and affective mechanisms that could
drive the effects of moralization and de-moralization.
Third, we wanted to see whether the findings of Study
1 would replicate in a different technology setting—
hiring algorithms.
We had three aims in Study 3. First, we aimed to
replicate the moralization and de-moralization effects
of Studies 1 and 2. Second, we aimed to further explore
mechanisms of the de-moralization process intended
to tap into a pragmatic reasoning style that might tem-
per moralization. Third, we aimed to assess a second
possible side effect: people’s willingness to compro-
mise. We expected that people in the moral condition
would be less willing to compromise, whereas people
in the nonmoral condition would be more willing to
compromise. These predictions were preregistered
(https://osf.io/sqa9w/). The studies were reviewed and
approved by the ethics review board of Tilburg Univer-
sity School of Social and Behavioral Sciences.
Participants
All studies were conducted online on Prolific (www.prolific.co)
with participants from the United States. Given
the similarities in all the studies, participants who partici-
pated in one study were excluded from the partici-
pant pool of subsequent studies. In Studies 1 and 2, we
conducted power analyses with the R package DeclareDesign
(Version 0.30.0; Blair et al., 2022), which indicated
that a minimum sample of 500 participants per condition
would be needed to achieve a standardized effect size
(Cohen’s d) of 0.18 (based on Voelkel et al., 2020) with
an α of .05 and 80% power. The effect size considered
was for an interaction effect (i.e., the moral-reframing
hypotheses; see Note 1). We estimated a minimum sam-
ple size of 2,000 participants for Study 1 (four between-
subject conditions) and 1,500 participants for Study 2
(three between-subject conditions). We aimed to recruit
an additional 10% to account for attrition. The actual
number of participants recruited at both Time 1 and Time
2 is shown in Table 1. For both studies, participants
received £0.50 for completing the measures at the first
time point (~4 min) and £0.40 for completing the mea-
sures at the second time point (~3 min). In both studies,
there was a 1-week gap between the two time points,
and the survey at Time 2 remained open for 1 week.
For Study 3, we calculated that a minimum sample
size of 950 would be required to achieve a standard-
ized effect size (Cohen’s d) of 0.20 with an α of .05
and 80% power. The effect size was based on what we
found in Study 2 for similar manipulations. We aimed
for a higher sample size (at least 1,000) to account for
participants who might not complete the study. Par-
ticipants received £1 to complete the study, which was
8 min long. See Table 1 for demographic statistics of
all studies.
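The per-condition targets above can be cross-checked outside of DeclareDesign. The following is a minimal sketch in base R, not the authors' simulation code: a simple two-sample t-test calculation with d = 0.18, α = .05, and 80% power implies roughly 485 participants per condition, consistent with the minimum of 500 reported for Studies 1 and 2.

```r
# Minimal base-R approximation (not the authors' DeclareDesign simulation):
# per-condition n for a two-sample t test detecting d = 0.18 at alpha = .05
# with 80% power. Returns roughly 485 participants per group.
power.t.test(delta = 0.18, sd = 1, sig.level = .05, power = .80,
             type = "two.sample", alternative = "two.sided")
```

Because the authors' DeclareDesign analysis simulated the full design (including the interaction contrast), this t-test calculation is only a rough sanity check on the reported sample-size targets.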
Design and procedure
In Studies 1 and 2, participants first read a neutral
description of the technology under consideration at
Time 1. This description included factual information
about who uses the technology and what the technol-
ogy does. The wording was as neutral as possible with-
out any persuasive arguments for or against the
technology, and it did not mention any benefits or
downsides of the technology. Participants read about a
crime-surveillance technology in Study 1 and a hiring
algorithm in Study 2. We used hiring algorithms in Study
2 because they differed from crime-surveillance tech-
nologies in two ways: Hiring algorithms are used mostly
by private companies (not the government) and have
the potential for discrimination instead of privacy viola-
tions, which are more problematic in crime-surveillance
technologies (see Kodapanakkal et al., 2020, for a jus-
tification of various technology domains).
After reading the descriptions, participants reported their support for the technology and the degree to which they felt that their attitude was based on a moral conviction. They also reported the extent to which their attitudes were grounded in specific moral foundations (see Note 1). Finally, they answered demographic questions related to age, gender, and political ideology. (See Tables S1 and S2 in the Supplemental Material available online for full descriptions of the technologies.)

Table 1. Design of Studies 1 to 3 and Demographic Statistics of Participants

Variable                            | Study 1                       | Study 2          | Study 3
Number of time points               | 2                             | 2                | 1
Technology                          | Crime-surveillance technology | Hiring algorithm | Hiring algorithm
Number of experimental conditions   | 4                             | 3                | 3
Measures                            |                               |                  |
  Attitude support                  | Yes                           | Yes              | Yes
  Moral conviction                  | Yes                           | Yes              | Yes
  Willingness to compromise         |                               |                  | Yes
  Compromise behavior               |                               |                  | Yes
  Perception of risks and benefits  |                               | Yes              | Yes
  Emotional reactions               |                               | Yes              | Yes
  Weighing costs and benefits       |                               |                  | Yes
Time 1 sample size                  | 2,229                         | 1,654            | 1,015 (a)
Time 2 sample size                  | 2,151                         | 1,590            |
Platform                            | Prolific                      | Prolific         | Prolific
Participant nation                  | United States                 | United States    | United States
Women in sample (%)                 | 49.40                         | 43.70            | 48.90
Participant age (years): M          | 34.5                          | 34.3             | 32.4
Participant age (years): SD         | 12.8                          | 11.6             | 11.5
Participant age (years): Range      | 18–82                         | 18–77            | 18–78

(a) There was only one time point in Study 3.
At Time 2, participants in Study 1 were randomly
assigned to four conditions (harm-based moral, liberty-
based moral, nonmoral, and control) and participants
in Study 2 were randomly assigned to three conditions
(harm or fairness based, nonmoral, and control). The
control message was the same as the neutral description
presented at Time 1 for each study. The first part of all
the other messages was the same as the control mes-
sage. The second part of the messages included the
potential disadvantages of the respective technology,
and the third part presented a factual example of the
disadvantage. In Study 1, the harm-based moral mes-
sage included arguments that used keywords such as
harm, misuse, and damage. The liberty-based moral
message included keywords such as intrusive, violating
freedom, and liberty. The nonmoral message was prag-
matic and included arguments related to financial cost
and the inefficiency of the technology; it contained
keywords such as costly, unfeasible, and monetary costs.
(For results of analyses testing the effectiveness of the
materials, see Figs. S1, S2, and S3 in the Supplemental
Material.) In Study 2, the nonmoral message had prag-
matic arguments similar to those in Study 1. The moral
message in Study 2 included harm- and fairness-based
arguments that contained keywords such as immoral,
harmful, bias, and consequences.
At Time 2, after reading the different messages, par-
ticipants reported their support for the technology and
the degree to which they felt that their attitude was
based on a moral conviction. In Study 2, we addition-
ally assessed potential mechanisms of moralization and
de-moralization. Participants reported perceived risks
and benefits of the technology and emotional reactions
of anger, disgust, fear, feeling creeped out, and grateful-
ness toward the technology.
The procedure for Study 3 was exactly the same as
in Time 2 of Study 2, in which participants were assigned
to the three conditions that were used in Study 2. Next,
they reported their attitude toward the technology in
the study (attitude support) and moral conviction. Par-
ticipants in Study 3 also reported other dimensions of
attitude strength, such as how certain, central, and
important their position was to them. This helped us
understand whether moral conviction is affected like
other dimensions of attitude strength are or whether it
is affected in a unique way (cf. Skitka et al., 2005). After
that, we assessed people’s willingness to compromise
using three measures: support for a political candidate,
willingness to work with a manager, and willingness to
compromise in an incentivized compromise game. In
Study 3, we also assessed the same potential mecha-
nisms for moralization and de-moralization measured
in Study 2. To test for additional mechanisms of de-
moralization, we additionally measured the extent to
which people weigh costs and benefits and how finan-
cially costly they find the technology.
Measures
Attitude support. Participants rated their attitude toward
the respective technology with the following item on a
7-point Likert scale (1 = strongly oppose, 7 = strongly sup-
port): “To what extent do you support or oppose the use
of the above technology?”
Moral conviction. We assessed participants’ moral con-
viction with a two-item moral-conviction scale (e.g.,
Skitka etal., 2005): “How much is your position on the
use of this technology connected to your core moral
beliefs and convictions?” and “How much is your position
on the use of this technology connected to your beliefs
about fundamental right or wrong?” Participants responded
to the items on a 7-point Likert scale (1 = not at all, 7 =
very much; Study 1, Time 1: r = .69; Study 1, Time 2: r =
.73; Study 2, Time 1: r = .73; Study 2, Time 2: r = .79;
Study 3: r = .79).
Potential mechanisms. Participants reported their per-
ception of the risks of the technology by responding to
questions such as, “This technology would be risky for
people” (Study 2: α = .82; Study 3: α = .82; 1 = strongly
disagree, 7 = strongly agree). They reported their percep-
tion of the benefits of the technology by responding to ques-
tions such as, “This technology will help people obtain
services they want” (Study 2: α = .93; Study 3: α = .89;
1 = strongly disagree, 7 = strongly agree). They also reported
emotional reactions (anger, fear, disgust, creeped out,
and gratefulness) toward the technology—for example,
“Please indicate to what extent this technology makes
you feel angry” (1 = not at all, 7 = very much). In Study
3, there were two additional measures: the extent to
which participants weigh costs and benefits—“To what
extent did you think about costs and benefits related to
this hiring algorithm when deciding whether you support
or oppose this algorithm?”—and the extent of financial
cost—“To what extent did you think about how finan-
cially costly this hiring algorithm is when deciding
whether you support or oppose this algorithm?” (1 = not
at all, 7 = very much). These two measures were treated
as separate constructs.
Willingness to compromise.
Support for compromising and uncompromising polit-
ical candidates. In Study 3, participants reported their
likelihood of supporting two candidates who were com-
peting for a mayoral nomination. The description was
written such that without its mentioning “oppose” or
“support,” the candidates were portrayed as agreeing
with the participant’s position. Participants read the fol-
lowing:
Both candidates agree with your position on the
use of this hiring algorithm. Candidate A is
uncompromising and will vote against any
proposal that does not support your position.
Candidate B will dislike proposals that do not
support your position, but will be willing to
negotiate and make concessions in this area if it
leads to a gain in other areas that are important
to you.
Participants reported their support for the uncom-
promising and compromising candidates by answering
the question, “How likely are you to support Candidate
[A/B] for the nomination?” using a 7-point Likert scale
(1 = not at all, 7 = very likely). This question was asked
separately for each candidate. The order of the descrip-
tion for each candidate was randomized.
Willingness to work with compromising and uncom-
promising managers. In the second measure of compro-
mise, participants reported their willingness to work with
two managers who had the power to decide whether
they would use the hiring algorithm or not. Again, the
description was written such that, without its mentioning
“oppose” or “support,” the candidates were portrayed as
agreeing with the participant’s position. Participants read
the following:
Both managers agree with your position on this
algorithm. Manager A is uncompromising and is
not open to views on this algorithm that do not
support your position. Manager B will dislike
views that do not support your position, but is
willing to negotiate and make concessions if it
leads to a gain in other areas of the company that
are important to you.
Participants reported their willingness to work with
the uncompromising and compromising managers by
answering the question, “How likely are you to work
with Manager [A/B]?” using a 7-point Likert scale (1 =
not at all, 7 = very likely). This question was asked
separately for each manager. The order of the descrip-
tion for each manager was randomized.
Incentivized compromise game. The third measure
of compromise was in the form of a fully incentivized
economic game based on a modified version used in
Delton et al. (2020). In this game, participants were pre-
sented with six different policies that ranged from fully
implementing the technology to not implementing the
technology at all. On the basis of their reported attitude,
we told participants that they would be paired with a
participant who had the opposite attitude. If partici-
pants selected the midpoint of the scale, they reported
in a follow-up question whether they would support or
oppose the algorithm if they really had to choose one
side. Participants who supported the algorithm saw this
description: “You said you SUPPORT the implementation
of this algorithm. The other participant in this negotiation
OPPOSES the implementation of this algorithm.” Simi-
larly, participants who opposed the algorithm saw this
description: “You said you OPPOSE the implementation
of this algorithm. The other participant in this negotia-
tion SUPPORTS the implementation of this algorithm.”
Participants could choose policies that corresponded to
different levels of compromise, and there would be a
deal only if both participants picked the same policy. We
operationalized compromise as the proportion of payoff
to the opponent. The value of the proportion of payoff
could be 0, .2, .4, .6, .8, and 1, depending on the policy
they chose. A higher payoff for the opponent indicated
higher compromise. For more details on the game, see
“Details of Willingness to Compromise Measures” in the
Supplemental Material.
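To make the scoring concrete, here is a minimal sketch of the compromise measure, under the assumption that the six policy options are ordered from fully favoring one's own position to fully favoring the opponent's; the policy labels and exact payoff rules here are assumptions, and the actual game rules are in the Supplemental Material.

```r
# Hedged sketch of the compromise score: the opponent's payoff proportion
# implied by each of the six policy options (assumed ordering, with 1 = no
# compromise and 6 = full concession to the opponent).
opponent_payoff <- function(policy_choice) {
  proportions <- c(0, .2, .4, .6, .8, 1)
  proportions[policy_choice]
}

opponent_payoff(4)  # 0.6: the participant concedes more than half of the payoff
# A deal (and any payoff) occurs only if both negotiators pick the same policy;
# higher proportions to the opponent indicate greater compromise.
```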
Results
The means of the baseline attitudes and moral-convic-
tion measures are shown in Table S4 in the Supplemen-
tal Material. Results were output into Word using the R
package tidystats (Version 0.5; Sleegers, 2020).
Effect of condition on attitude support
We first tested whether the persuasive conditions were
effective at persuading participants. To test this, we
dummy-coded the condition variable (reference: con-
trol condition) in all three studies. In Studies 1 and 2,
we regressed attitude support at Time 2 on dummy-
coded condition and attitude support at Time 1, so that
the effects of condition indicated changes in attitude
support between Time 1 and Time 2. In Study 3, we
regressed attitude support on dummy-coded condition.
Results are shown in Table 2 and Figure 1. Across all
three studies, we found that compared with messages
in the control condition, messages in both the moral
and nonmoral conditions significantly persuaded par-
ticipants to oppose the technology (ds = 0.78 to 0.42).
These results show that all of the messages (moral or
nonmoral) were persuasive to a similar degree.
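As an illustration, the analysis just described could be run as follows. This is a hedged sketch with hypothetical column names (support_t1, support_t2, condition), not the authors' actual script, which is available on OSF.

```r
# Hedged sketch of the persuasion analysis in Studies 1 and 2, assuming a data
# frame `dat` with hypothetical columns; standardizing the variables puts the
# coefficients on roughly the same scale as the betas reported in Table 2.
dat$condition <- relevel(factor(dat$condition), ref = "control")  # dummy coding

dat$support_t1_z <- as.numeric(scale(dat$support_t1))
dat$support_t2_z <- as.numeric(scale(dat$support_t2))

# Time 2 support regressed on condition plus the Time 1 baseline, so the
# condition coefficients reflect change in support between Time 1 and Time 2.
fit <- lm(support_t2_z ~ condition + support_t1_z, data = dat)
summary(fit)

# Study 3 had a single time point, so the baseline covariate is dropped:
# fit3 <- lm(scale(support) ~ condition, data = dat3)
```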
Side Effect 1: effect of condition on
moral conviction
Results showed that all persuasive conditions were per-
suasive as intended, but was there a side effect of moral
conviction? To test this, we dummy-coded the condition
variable (reference: control condition) in all three stud-
ies. In Studies 1 and 2, we regressed moral conviction
at Time 2 on dummy-coded condition and moral con-
viction at Time 1, so that the effects of condition indi-
cated changes in moral conviction between Time 1 and
Time 2. In Study 3, we regressed moral conviction on
dummy-coded condition. Results are shown in Table 2
and Figure 2. Across all three studies, we found that,
compared with participants in the control condition,
participants’ attitudes in the moral conditions were sig-
nificantly more moralized (ds = 0.16–0.55). In Study 1,
participants’ attitudes in the nonmoral condition were
marginally de-moralized compared with those of par-
ticipants in the control condition (d = 0.10). In Studies
2 and 3, participants’ attitudes in the nonmoral condi-
tion were significantly de-moralized compared with
those of participants in the control condition (ds =
0.15 to 0.20). Overall, moral messages moralized par-
ticipants’ attitudes, whereas nonmoral messages de-
moralized participants’ attitudes.
In Study 3, we also tested whether the conditions simi-
larly affected other dimensions of attitude strength (for
full details, see Table S5 and Fig. S6 in the Supplemental
Material). Moral frames increased all other dimensions of
attitude strength (ds = 0.32–0.55). However, nonmoral
frames did not affect all other dimensions of attitude
strength. They increased certainty (d = 0.20) and extremity
(d = 0.44) but did not have a significant effect on impor-
tance and centrality. This contrasts with moral conviction,
which the nonmoral frame significantly decreased,
suggesting that moral conviction responds differently to
this framing and providing experimental evidence that
moral conviction is a distinct dimension of attitude
strength (cf. Skitka et al., 2005).
Side Effect 2: effect of condition on
willingness to compromise
We now turn to willingness to compromise, which was
assessed only in Study 3 using two self-report measures
and one behavioral measure. Each section below pres-
ents results for each variable. Results for all the vari-
ables are shown in Table 3 and Figure 3. For details
regarding the association between moral conviction and
willingness to compromise, see Table S6 and Figure S7
in the Supplemental Material.
Table 2. Effect of Condition on Attitude Support and Moral Conviction in Studies 1 to 3

Dependent variable and predictor  | Study 1: β (SE), p, Cohen's d      | Study 2: β (SE), p, Cohen's d      | Study 3: β (SE), p, Cohen's d
Attitude support                  |                                    |                                    |
  Attitude support at Time 1      | 0.66 (0.019), p < .001             | 0.53 (0.021), p < .001             |
  Moral condition (harm based)    | 0.19 (0.019), p < .001, d = 0.44   | 0.21 (0.024), p < .001, d = 0.44   | 0.37 (0.034), p < .001, d = 0.78
  Moral condition (liberty based) | 0.18 (0.019), p < .001, d = 0.42   |                                    |
  Nonmoral condition              | 0.19 (0.019), p < .001, d = 0.43   | 0.22 (0.024), p < .001, d = 0.46   | 0.32 (0.034), p < .001, d = 0.67
Moral conviction                  |                                    |                                    |
  Moral conviction at Time 1      | 0.40 (0.024), p < .001             | 0.42 (0.023), p < .001             |
  Moral condition (harm based)    | 0.11 (0.024), p < .001, d = 0.24   | 0.10 (0.026), p < .001, d = 0.22   | 0.26 (0.034), p < .001, d = 0.55
  Moral condition (liberty based) | 0.07 (0.024), p = .003, d = 0.16   |                                    |
  Nonmoral condition              | 0.04 (0.024), p = .08, d = 0.10    | 0.07 (0.026), p = .007, d = 0.15   | 0.10 (0.034), p = .005, d = 0.20

Note: Standard errors are given in parentheses. The reference group for the dummy-coded conditions is the control condition. In Studies 2 and 3, the moral condition included both harm- and fairness-based arguments, and there was no liberty-based moral condition. Attitude support refers to participants’ attitude toward the technology in the study.
Fig. 1. Effect of condition on support for the technology at Time 2 in Studies 1 and 2 and effect of condition on support for the technology at Time 1 in Study 3. Colored dots represent observed data for each participant in each condition, and the accompanying distributions represent the density of the data. Black dots represent estimated means (controlling for Time 1 attitude support in Study 1 and Study 2). Error bars around estimated means denote 95% confidence intervals.
Fig. 2. Effect of condition on moral conviction for the technology at Time 2 in Studies 1 and 2 and effect of condition on moral conviction at Time 1 in Study 3. Colored dots represent observed data for each participant in each condition, and the accompanying distributions represent the density of the data. Black dots represent estimated means (controlling for Time 1 attitude support in Study 1 and Study 2). Error bars around estimated means denote 95% confidence intervals.
Self-reported willingness to compromise. To assess
the effect of the condition on support for the uncompro-
mising candidate and uncompromising manager, we
regressed support for the uncompromising candidate or
manager on dummy-coded condition (reference: moral
condition). We used the moral condition as the reference
group for these analyses because our hypothesis pre-
dicted a difference between the moral condition and the
other two conditions. As predicted, we found that people
were more likely to support the uncompromising candi-
date in the moral condition compared with both the con-
trol and nonmoral conditions (ds = 0.22 to 0.16). We
found mixed results for the uncompromising manager.
People were more likely to work with the uncompromis-
ing manager in the moral condition compared with the
control condition but not compared with the nonmoral
conditions (although the effect sizes were very similar:
ds = 0.15 to 0.14).
To assess the effect of the condition on support for
the compromising candidate or compromising manager,
we regressed support for the compromising candidate
or manager on dummy-coded conditions (reference:
nonmoral condition). We used the nonmoral condition
as the reference group for these analyses because our
hypothesis predicted a difference between the nonmoral
condition and the other two conditions. The results for
the candidate were not in line with our predictions. We
found that people did not differ in their support for the
compromising candidate in the nonmoral condition
compared with the control condition or the moral condi-
tion (ds = 0.07 to 0.11). We found mixed results for the
compromising manager. People did not differ in their
willingness to work with the compromising manager in
the nonmoral condition compared with the control con-
dition, but there was a significant difference in willing-
ness between the nonmoral and moral conditions (ds =
0.04 to 0.25). In short, our hypotheses regarding support
for the uncompromising candidate and willingness to work
with the uncompromising manager were largely supported,
but our hypotheses regarding the compromising candidate
and manager received mixed support at best.
Incentivized compromise game. Next, using an incenti-
vized compromise game, we assessed whether there was
an effect of condition on whether people were more will-
ing to pick policies that represented a compromise of
their position. To test this, we regressed the payoff for the
opponent (indicating more compromise of the partici-
pant’s position) on dummy-coded conditions (reference:
control condition). As predicted, we found that people in
the moral condition were less likely to compromise than people in the control condition (d = 0.16). However, there was no significant difference in compromise between people in the nonmoral and control conditions (d = 0.06). Not only did the moral frame increase people’s self-reported willingness to support uncompromising candidates and managers, but this frame also increased the intransigence of people’s decisions.

Table 3. Effect of Condition and Moral Conviction on Willingness to Compromise in Study 3

Dependent variable and condition                                               | β (SE)       | p      | Cohen's d
Support for uncompromising candidate (reference: moral condition)              |              |        |
  Control condition                                                            | 0.10 (0.036) | .005   | 0.22
  Nonmoral condition                                                           | 0.07 (0.036) | .043   | 0.16
Support for compromising candidate (reference: nonmoral condition)             |              |        |
  Control condition                                                            | 0.03 (0.036) | .384   | 0.07
  Moral condition                                                              | 0.05 (0.036) | .138   | 0.11
Willingness to work with uncompromising manager (reference: moral condition)   |              |        |
  Control condition                                                            | 0.07 (0.036) | .049   | 0.15
  Nonmoral condition                                                           | 0.06 (0.036) | .078   | 0.14
Willingness to work with compromising manager (reference: nonmoral condition)  |              |        |
  Control condition                                                            | 0.02 (0.036) | .582   | 0.04
  Moral condition                                                              | 0.12 (0.036) | < .001 | 0.25
Willingness to compromise in the incentivized compromise game (reference: control condition) | |      |
  Moral condition                                                              | 0.08 (0.036) | .038   | 0.16
  Nonmoral condition                                                           | 0.03 (0.036) | .438   | 0.06

Note: Standard errors are given in parentheses.

Fig. 3. Effect of the conditions on willingness to compromise in Study 3. Results are shown separately for support for the uncompromising and compromising political candidate (top row), willingness to work with an uncompromising and a compromising manager (middle row), and the proportion of payoff that the matched partner received in the incentivized compromise game (bottom row). Colored dots represent observed data for each participant in each condition, and the accompanying distributions represent the density of the data. Black dots represent estimated means. Error bars around estimated means denote 95% confidence intervals.
Potential mechanisms of moralization
and de-moralization
Across the three studies, we found that moral frames
and nonmoral frames were equally persuasive but that
moral frames increased the strength of people’s moral
convictions and made them less willing to compromise,
whereas nonmoral frames decreased the strength of
people’s moral convictions. This shows that moral
frames can be effective persuasive tools and at the same
time cause side effects. It is less clear why the moral
frames have these effects. That is, what about the
frames might cause the moralization and de-moraliza-
tion effects we observed? In Studies 2 and 3, we explored
whether emotional reactions and perceptions of risks
and benefits are impacted by the experimental condi-
tions and correlated with moralization. In Study 3, we
additionally explored the impact of condition on find-
ing the technology financially costly and on weighing
costs and benefits. If these are candidates for mecha-
nisms, they should be differentially affected by the two
persuasive conditions. The factors that are higher in the
moral condition should also be positively correlated
with moral conviction, and the factors that are higher
in the nonmoral condition should be negatively cor-
related with moral conviction (for Study 2, this is moral
conviction at Time 2). We conducted separate regres-
sion analyses with each of the possible mechanism
variables as the dependent variable. Condition was
dummy coded (reference: control condition). The main
results are shown in Figures 4, 5, and 6. (More details are
available in Fig. S8 and Table S7 in the Supplemental
Material.)
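A hedged sketch of these two checks, using anger as an example candidate mechanism and hypothetical column names (anger, moral_conviction, condition); the authors' analysis scripts are on OSF.

```r
# 1) Does the candidate mechanism differ by condition? Regress it on the
#    dummy-coded condition variable (control as the reference group).
dat$condition <- relevel(factor(dat$condition), ref = "control")
summary(lm(scale(anger) ~ condition, data = dat))

# 2) Is the candidate mechanism associated with moral conviction?
#    (For Study 2 this would be moral conviction measured at Time 2.)
cor.test(dat$anger, dat$moral_conviction)
```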
For the sake of brevity, we focus only on the results
that provide some evidence that the variable is a poten-
tial mechanism. These variables were anger, disgust,
and perceptions of financial cost. In both Study 2 and
Study 3, participants reported significantly more anger
(Study 2: β = 0.15, SE = 0.029, p < .001, d = 0.32; Study
3: β = 0.17, SE = 0.036, p < .001, d = 0.37) and disgust
(Study 2: β = 0.12, SE = 0.029, p < .001, d = 0.23; Study
3: β = 0.20, SE = 0.036, p < .001, d = 0.42) in the moral
condition than the control condition, but there were no
differences between the nonmoral and control condi-
tions. In both studies, disgust was positively correlated
with moral conviction, whereas anger was correlated
with moral conviction only in Study 2. This suggests
that feelings of anger and disgust may help explain the
differences in moralization between the moral-frame
condition and the other two conditions. In Study 3, the
extent to which participants found the technology
financially costly was significantly higher in the non-
moral condition than in the control condition (β = 0.26,
SE = 0.035, p < .001, d = 0.55), but there were no
differences between the moral and control conditions.
This factor was also associated negatively with moral
conviction. This suggests that perceptions of financial
cost may help explain the differences in moralization
between the nonmoral-frame condition and the other
two conditions.
Notably, as detailed in the Supplemental Material
(see Fig. S8 and Table S7), other potential mechanisms
did differ by condition or were correlated with moral
conviction. We do not think that they represent likely
mechanisms because either both the moral and nonmoral
frames affected the measure in the same way (e.g., both
increased perceived risks) or the measure was not cor-
related with moral conviction (e.g., fear was unassoci-
ated with moral conviction).
General Discussion
We tested for side effects of moral framing and refram-
ing strategies on people’s moral convictions and will-
ingness to compromise. We found that moral frames
are persuasive and moralize people’s attitudes, whereas
nonmoral frames are persuasive and de-moralize peo-
ple’s attitudes. People who read moral frames are more
likely to support uncompromising individuals and less
willing to compromise themselves. We also found that
anger and disgust potentially drive moralization and
that considering how financially costly a technology is
potentially drives de-moralization.
Theoretical and practical implications
We indeed found moralization and compromise side
effects of moral framing and reframing strategies.
Whether these side effects are an unexpected benefit
or harm depends on the goals of the persuader. If the
changed attitude is considered the morally correct one,
these side effects may be beneficial. However,
if the goal is to bridge divides, these side effects may
be detrimental because they could entrench rather than
bridge divides. For example, less willingness to com-
promise can delay policymakers from coming to a solu-
tion and cause a stalemate. Before we use these framing
strategies to address delicate situations (e.g., the
COVID-19 pandemic; Van Bavel et al., 2020), they
should be tested in the specific context with careful
attention paid to their side effects.
Our results confirm that moral frames are associated
with moral emotions of anger and disgust, as shown
previously (e.g., Feinberg et al., 2019; Wisneski &
Skitka, 2017). We additionally found that they are spe-
cifically associated with moral frames and not with
nonmoral frames, which further supports their associa-
tion with moralization.
Fig. 4. Effect of condition on perceived risks (top row) and perceived benefits (bottom row) in Study 2 (left) and Study 3 (right). Colored dots represent observed data for each participant in each condition, and the accompanying distributions represent the density of the data. Black dots represent estimated means. Error bars around estimated means denote 95% confidence intervals.
Importantly, moralization is not the only possible
outcome. We found a de-moralization effect that occurs
when people read the nonmoral frames. Previous stud-
ies have examined differences between moral and prag-
matic rhetoric, but either they did not find an effect
(Van Zant & Moore, 2015) or it was unclear whether
moralization or de-moralization occurs because there
was no control condition (Marietta, 2008). In contrast,
we directly examined de-moralization and found
that nonmoral frames reduce moralization compared
with a control condition. We also found initial evidence
for why de-moralization occurs. People consider the
technology more financially costly, specifically in the nonmoral frame, and this is negatively associated with moral conviction. This is in line with the findings of Bastian et al. (2015), who showed that monetary costs diminished the negative effect of moral conviction on the acceptance of mining. Nonmoral frames also increased certainty and extremity, even as they reduced the strength of moral convictions, providing further evidence that moral conviction is a unique dimension of attitude strength.

Fig. 5. Effect of condition on each of five perceived emotions (anger, disgust, fear, feeling creeped out, and gratefulness) in Study 2 (left) and Study 3 (right). Colored dots represent observed data for each participant in each condition, and the accompanying distributions represent the density of the data. Black dots represent estimated means. Error bars around estimated means denote 95% confidence intervals.

Fig. 6. Effect of condition on the extent of weighing costs and benefits (left) and perceived financial cost (right) in Study 3. Colored dots represent observed data for each participant in each condition, and the accompanying distributions represent the density of the data. Black dots represent estimated means. Error bars around estimated means denote 95% confidence intervals.
Strengths and limitations
Our study had several strengths. First, the pretest/post-
test design in Studies 1 and 2 measured change in
people’s moral convictions. Second, multiple measures
of willingness to compromise, including a behavioral
measure, more comprehensively assessed situations in
which people do or do not compromise. Third, we
compared moral and nonmoral frames with a neutral
control condition, providing differential evidence for
moralization and de-moralization and teasing out mech-
anisms specific to each of these processes.
There are, however, constraints on the generaliz-
ability of the findings. First, new big-data technologies
may not be politicized in the same way as other issues
that have been studied. Although the baseline measures
for moral conviction (Ms = 4.65–5) show that people’s
attitudes about big-data technologies are moralized,
people may not think of big-data issues as centrally or
as often as other politicized issues, such as abortion
rights, immigration, or the minimum wage. Thus, it is
an open question whether our findings generalize only
to issues with similar levels of politicization or whether
they also generalize to more polarized and politicized
issues. Regardless, it is worthwhile to test for side
effects of framing and reframing strategies in any spe-
cific context before such strategies are used as a per-
suasion tool.
Although the effects related to de-moralization and
compromise are small in magnitude, they are similar to
effect sizes found in the modern persuasion literature
(e.g., reducing prejudice; Broockman & Kalla, 2021;
Paluck etal., 2021). Effect sizes might be increased by
using reinforcing persuasive messages at various time
intervals with multiple exposures to persuasion. Future
research could test this.
Finally, our study relied on U.S. participants recruited
through Prolific. This was to maintain comparability with
prior studies in moral framing and reframing (Feinberg
& Willer, 2013; Luttrell et al., 2019); however, testing in
other contexts is necessary.
Conclusion
Moral frames are persuasive and moralize people’s atti-
tudes, whereas nonmoral frames are persuasive and
de-moralize people’s attitudes. Moral frames also reduce
compromise. The use of moral frames as a persuasion
tool should be considered cautiously and assessed for
potential side effects; otherwise, attempts to bridge
moral divides with these tools may backfire.
Transparency
Action Editor: Andrew Luttrell
Editor: Patricia J. Bauer
Author Contributions
All authors developed the study concept and contributed
to the study design. Testing, data collection, and data
analysis were performed by R. I. Kodapanakkal under the
supervision of M. J. Brandt, C. Kogler, and I. van Beest.
R. I. Kodapanakkal drafted the manuscript, and M. J.
Brandt, C. Kogler, and I. van Beest provided critical revi-
sions. All authors approved the final manuscript for
submission.
Declaration of Conflicting Interests
The author(s) declared that there were no conflicts of
interest with respect to the authorship or the publication
of this article.
Open Practices
All data, analysis code, and materials for Studies 1 to 3
have been made publicly available via OSF and can be
accessed at https://osf.io/3vfas/. The design and analysis
plans for the studies were preregistered—Study 1: https://
osf.io/sf2pq/, Study 2: https://osf.io/7rzx8/, Study 3:
https://osf.io/sqa9w/. This article has received the badges
for Open Data, Open Materials, and Preregistration. More
information about the Open Practices badges can be found
at http://www.psychologicalscience.org/publications/
badges.
ORCID iDs
Rabia I. Kodapanakkal https://orcid.org/0000-0002-
3113-332X
Mark J. Brandt https://orcid.org/0000-0002-7185-7031
Supplemental Material
Additional supporting information can be found at http://
journals.sagepub.com/doi/suppl/10.1177/09567976211040803
Note
1. Our preregistered hypothesis (https://osf.io/sf2pq/) was that
people with attitudes based in harm or liberty concerns would
be persuaded by corresponding moral frames of harm and
liberty. However, we found that all messages were persuasive
and did not find evidence for a moral-reframing effect. For full
details, see the Supplemental Material. The primary analyses we
report in Study 1 should be treated as exploratory.
References
Andrews, A. C., Clawson, R. A., Gramig, B. M., & Raymond, L.
(2017). Finding the right value: Framing effects on domain
experts. Political Psychology, 38(2), 261–278. https://doi
.org/10.1111/pops.12339
Aramovich, N. P., Lytle, B. L., & Skitka, L. J. (2012). Opposing
torture: Moral conviction and resistance to majority influ-
ence. Social Influence, 7(1), 21–34. https://doi.org/10.10
80/15534510.2011.640199
Bastian, B., Zhang, A., & Moffat, K. (2015). The interaction
of economic rewards and moral convictions in predicting
attitudes toward resource use. PLOS ONE, 10(8), Article
e0134863. https://doi.org/10.1371/journal.pone.0134863
Blair, G., Cooper, J., Coppock, A., Humphreys, M., & Fultz, N.
(2022). DeclareDesign: Declare and diagnose research
designs (Version 0.30.0) [Computer software]. https://cran
.r-project.org/web/packages/DeclareDesign/index.html
Brandt, M. J., Wisneski, D. C., & Skitka, L. J. (2015). Moral-
ization and the 2012 U.S. presidential election campaign.
Journal of Social and Political Psychology, 3(2), 211–237.
https://doi.org/10.5964/jspp.v3i2.434
Broockman, D., & Kalla, J. (2021, September 7). When and
why are campaigns’ persuasive effects small? Evidence
from the 2020 US presidential election. OSF Preprints.
https://doi.org/10.31219/osf.io/m7326
Clifford, S. (2019). How emotional frames moralize and polar-
ize political attitudes. Political Psychology, 40(1), 75–91.
https://doi.org/10.1111/pops.12507
Corlett, J. (2002). The nature and value of the moral right to
privacy. Public Affairs Quarterly, 16(4), 329–350.
Delton, A. W., DeScioli, P., & Ryan, T. J. (2020). Moral obsti-
nacy in political negotiations. Political Psychology, 41(1),
3–20. https://doi.org/10.1111/pops.12612
Feinberg, M., Kovacheff, C., Teper, R., & Inbar, Y. (2019).
Understanding the process of moralization: How eating
meat becomes a moral issue. Journal of Personality and
Social Psychology, 117(1), 50–72. https://doi.org/10.1037/
pspa0000149
Feinberg, M., & Willer, R. (2013). The moral roots of envi-
ronmental attitudes. Psychological Science, 24(1), 56–62.
https://doi.org/10.1177/0956797612449177
Feinberg, M., & Willer, R. (2015). From gulf to bridge:
When do moral arguments facilitate political influence?
Personality and Social Psychology Bulletin, 41(12), 1665–
1681. https://doi.org/10.1177/0146167215607842
Feinberg, M., & Willer, R. (2019). Moral reframing: A tech-
nique for effective and persuasive communication across
political divides. Social and Personality Psychology
Compass, 13(12), Article e12501. https://doi.org/10.1111/
spc3.12501
Garrett, K. N., & Bankert, A. (2020). The moral roots of par-
tisan division: How moral conviction heightens affective
polarization. British Journal of Political Science, 50(2),
621–640. https://doi.org/10.1017/S000712341700059X
Goodwin, G. P., & Darley, J. M. (2008). The psychology of meta-ethics: Exploring objectivism. Cognition, 106(3), 1339–1366. https://doi.org/10.1016/j.cognition.2007.06.007
Hoover, J., Johnson, K., Boghrati, R., Graham, J., & Dehghani, M. (2018). Moral framing and charitable donation: Integrating exploratory social media analyses and confirmatory experimentation. Collabra: Psychology, 4(1), Article 9. https://doi.org/10.1525/collabra.129
Kidwell, B., Farmer, A., & Hardesty, D. M. (2013). Getting liberals and conservatives to go green: Political ideology and congruent appeals. Journal of Consumer Research, 40(2), 350–367. https://doi.org/10.1086/670610
Kleinberg, J., Ludwig, J., Mullainathan, S., & Sunstein, C. R. (2018). Discrimination in the age of algorithms. Journal of Legal Analysis, 10, 113–174. https://doi.org/10.1093/jla/laz001
Kodapanakkal, R. I., Brandt, M. J., Kogler, C., & van Beest, I. (2020). Self-interest and data protection drive the adoption and moral acceptability of big data technologies: A conjoint analysis approach. Computers in Human Behavior, 108, Article 106303. https://doi.org/10.1016/j.chb.2020.106303
Kodapanakkal, R. I., Brandt, M. J., Kogler, C., & van Beest, I. (2021). Moral relevance varies due to inter-individual and intra-individual differences across big data technology domains. European Journal of Social Psychology. Advance online publication. https://doi.org/10.1002/ejsp.2814
Luttrell, A., Petty, R. E., Briñol, P., & Wagner, B. C. (2016). Making it moral: Merely labeling an attitude as moral increases its strength. Journal of Experimental Social Psychology, 65, 82–93. https://doi.org/10.1016/j.jesp.2016.04.003
Luttrell, A., Philipp-Muller, A., & Petty, R. E. (2019). Challenging moral attitudes with moral messages. Psychological Science, 30(8), 1136–1150. https://doi.org/10.1177/0956797619854706
Marietta, M. (2008). From my cold, dead hands: Democratic consequences of sacred rhetoric. The Journal of Politics, 70(3), 767–779. https://doi.org/10.1017/S0022381608080742
Mazzoni, D., van Zomeren, M., & Cicognani, E. (2015). The motivating role of perceived right violation and efficacy beliefs in identification with the Italian water movement. Political Psychology, 36(3), 315–330. https://doi.org/10.1111/pops.12101
Mullen, E., & Skitka, L. J. (2006). Exploring the psychological underpinnings of the moral mandate effect: Motivated reasoning, group differentiation, or anger? Journal of Personality and Social Psychology, 90(4), 629–643. https://doi.org/10.1037/0022-3514.90.4.629
Paluck, E. L., Porat, R., Clark, C. S., & Green, D. P. (2021). Prejudice reduction: Progress and challenges. Annual Review of Psychology, 72, 533–560.
Rozin, P. (1999). The process of moralization. Psychological Science, 10(3), 218–221. https://doi.org/10.1111/1467-9280.00139
Ryan, T. J. (2017). No compromise: Political consequences of moralized attitudes. American Journal of Political Science, 61(2), 409–423. https://doi.org/10.1111/ajps.12248
Schein, C., & Gray, K. (2018). The theory of dyadic morality: Reinventing moral judgment by redefining harm. Personality and Social Psychology Review, 22(1), 32–70. https://doi.org/10.1177/1088868317698288
Skitka, L. J., & Bauman, C. W. (2008). Moral conviction and political engagement. Political Psychology, 29(1), 29–54. https://doi.org/10.1111/j.1467-9221.2007.00611.x
Skitka, L. J., Bauman, C. W., & Sargis, E. G. (2005). Moral conviction: Another contributor to attitude strength or something more? Journal of Personality and Social Psychology, 88(6), 895–917. https://doi.org/10.1037/0022-3514.88.6.895
Skitka, L. J., Hanson, B. E., Morgan, G. S., & Wisneski, D. C. (2021). The psychology of moral conviction. Annual Review of Psychology, 72, 347–366. https://doi.org/10.1146/annurev-psych-063020-030612
Skitka, L. J., & Morgan, G. S. (2014). The social and political implications of moral conviction. Political Psychology, 35, 95–110. https://doi.org/10.1111/pops.12166
Sleegers, W. W. A. (2020). tidystats: Save output of statistical tests (Version 0.5) [Computer software]. https://doi.org/10.5281/zenodo.4041859
Van Bavel, J. J., Baicker, K., Boggio, P. S., Capraro, V., Cichocka, A., Cikara, M., Crockett, M. J., Crum, A. J., Douglas, K. M., Druckman, J. N., Drury, J., Dube, O., Ellemers, N., Finkel, E. J., Fowler, J. H., Gelfand, M., Han, S., Haslam, S. A., Jetten, J., . . . Willer, R. (2020). Using social and behavioural science to support COVID-19 pandemic response. Nature Human Behaviour, 4(5), 460–471. https://doi.org/10.1038/s41562-020-0884-z
Van Zant, A. B., & Moore, D. A. (2015). Leaders’ use of moral justifications increases policy support. Psychological Science, 26(6), 934–943. https://doi.org/10.1177/0956797615572909
van Zomeren, M., Postmes, T., Spears, R., & Bettache, K. (2011). Can moral convictions motivate the advantaged to challenge social inequality? Extending the social identity model of collective action. Group Processes & Intergroup Relations, 14(5), 735–753. https://doi.org/10.1177/1368430210395637
Voelkel, J. G., & Feinberg, M. (2018). Morally reframed arguments can affect support for political candidates. Social Psychological and Personality Science, 9(8), 917–924. https://doi.org/10.1177/1948550617729408
Voelkel, J. G., Mernyk, J., & Willer, R. (2020). Resolving the progressive paradox: The effects of moral reframing on support for economically progressive candidates. PsyArXiv. https://doi.org/10.31234/osf.io/mtfjn
Wisneski, D. C., & Skitka, L. J. (2017). Moralization through moral shock: Exploring emotional antecedents to moral conviction. Personality and Social Psychology Bulletin, 43(2), 139–150. https://doi.org/10.1177/0146167216676479
While progressive economic policies are popular, economically progressive candidates rarely win elections in the U.S., a pattern we call the “progressive paradox.” In the current paper, we examine whether the electoral disadvantage of economically progressive candidates results in part from the moral rhetoric these candidates commonly use to frame their policy platforms. Using a Moral Foundations Theory perspective, we combine previously validated machine learning based measures of economic ideology and new text-based measures of candidates’ moral rhetoric to analyze transcripts of 137 primary and general election presidential debates since 2000. We find economically progressive candidates, compared to economically conservative candidates, rely less on “binding” moral foundations (loyalty, respect for authority, and purity) relative to “individualizing” foundations (care and fairness). In addition, we conducted two experiments (total n = 4,138), including one nationally representative, pre-registered experiment, to test whether economically progressive candidates can build support beyond their liberal base by framing their economic policy platform in terms of binding moral values. Results show that a presidential candidate who used binding framing for his progressive economic platform as opposed to individualizing or a neutral framing, was supported significantly more by conservatives and, unexpectedly, by moderates as well. These results suggest that moral reframing offers an under-utilized solution to the longstanding puzzle regarding the gap between support for economically progressive policies and candidates.