Does Momentary Outcome-Based Reflection Shape Bioethical Views?
A Pre-Post Intervention Design
Carme Isern-Mas 1
Piotr Bystranowski 2 3 *
Jon Rueda 4
Ivar R. Hannikainen 5
1 Department of Philosophy and Social Work, University of the Balearic Islands, Palma, Spain
2 Interdisciplinary Center for Ethics, Jagiellonian University, Krakow, Poland
3 Max Planck Institute for Research on Collective Goods, Bonn, Germany
4 University of Basque Country, Leioa, Spain
5 Departamento de Filosofía I, Universidad de Granada, Granada, Spain
* Correspondence concerning this article should be addressed to Piotr Bystranowski, Max Planck
Institute for Research on Collective Goods, Kurt-Schumacher-Straße 10, 53113 Bonn,
Germany (bystranowski@coll.mpg.de).
Acknowledgments
This project was funded by an ERC Starting Grant (BIOUNCERTAINTY) and by the Spanish
Ministry of Science and Education (RYC2020-029280-I). We thank Tomasz Żuradzki and
anonymous reviewers as well as the audience at the 2022 Yale-Oxford Bioxphi Summit in Oxford
for comments on earlier drafts.
Abstract
Many bioliberals endorse broadly consequentialist frameworks in normative ethics, implying that
a progressive stance on matters of bioethical controversy could stem from outcome-based
reasoning. This raises an intriguing empirical prediction: encouraging outcome-based reflection
could yield a shift toward bioliberal views among non-experts as well. To evaluate this hypothesis,
we identified empirical premises that underlie moral disagreements on seven divisive issues (e.g.,
vaccines, abortion, or GMOs). In exploratory and confirmatory experiments, we assessed whether
people spontaneously engage in outcome-based reasoning by asking how their moral views change
after momentarily reflecting on the underlying empirical questions. Our findings indicate that
momentary reflection had no overall treatment effect on the central tendency or the dispersion in
moral attitudes when compared to pre-reflection measures collected one week prior.
Autoregressive models provided evidence that participants engaged in consequentialist moral
reasoning, but this self-guided reflection produced neither moral ‘progress’ (shifts in the
distributions’ central tendency) nor moral ‘consensus’ (reductions in their dispersion). These
results imply that flexibility in people’s search for empirical answers may limit the potential for
outcome-based reflection to foster moral consensus.
Keywords: consequentialism, moral consensus, moral disagreement, moral progress, outcome-
based reasoning
Word count: 4989
Does Momentary Outcome-Based Reflection Shape Bioethical Views?
A Pre-Post Intervention Design
Scholars who advocate progressive views on current debates in bioethics (or, as they are sometimes labeled, bioliberals) often endorse broadly consequentialist frameworks in normative ethics. According
to this ethical theory, the moral status of an action or policy should be based on a calculus of its
positive and negative consequences. A central tenet of consequentialist ethics is that empirical
beliefs play a pivotal role in shaping moral judgments (e.g., Mill, 1863). Accordingly, empirical
beliefs about the consequences of an action should inform one’s moral attitudes toward the action
in question. These empirical beliefs might be about whether the action is deemed harmful (Gray
& Schein, 2016) or has the potential to save numerous lives (Engelmann & Waldmann, 2022;
Shenhav & Greene, 2010; Hannikainen et al., 2017). This principle contrasts with other ethical frameworks, such as deontological or virtue ethics, in which the role of consequentialist information is less central and moral judgment is guided
primarily by inflexible principles, such as respect for autonomy (Demaree-Cotton & Sommers,
2022), as well as the action’s causal and intentional properties (Rodríguez-Arias et al., 2020).
The relation between reflection and consequentialist moral judgment
Numerous empirical investigations have uncovered a relationship between reflection and
consequentialist moral judgments about hypothetical thought experiments. For instance, a classic
experiment by Paxton and colleagues (2011) found participants were more permissive, and
consequentialist, in their moral judgments about an innocuous case of consensual incest after
reflecting on the evolutionary origins of the incest taboo (but see Herec et al., 2022). Relatedly,
Pennycook and colleagues (2014) found that individuals with an analytic cognitive style, compared
to those with a more intuitive style, were less likely to condemn disgusting, yet victimless, crimes
—a pattern of response in line with consequentialism. Finally, Royzman and colleagues (2014)
found that reasoning not only influences the intensity of moral judgments, but also plays a role in
distinguishing between moral and conventional transgressions: more reflective participants tended to moralize selectively, treating only clearly harmful acts as moral transgressions and contingently harmful transgressions as arbitrary societal constructions.
Could these experimental demonstrations of the relation between reasoning and
consequentialist moral judgment help to explain moral disagreement on matters of real-world
ethical controversy? A traditional literature in the domain of political psychology, on the existence
of a ‘backfire effect’ (Nyhan & Reifler, 2010), speaks against this prediction. Early research
indicated that encounters with uncongenial evidence can lead partisans to intensify their pre-
existing worldview, instead of conceding or adjusting their beliefs toward the evidence (Lord et
al., 1979; Nyhan & Reifler, 2010; Taber & Lodge, 2006). For instance, when provided corrective
information on controversial political issues in American politics, such as the invasion of Iraq, tax
cuts, and stem cell research, participants tended to strengthen their misperceptions about such facts
(Nyhan & Reifler, 2010). This line of research raises the possibility that attitudes toward real-world issues might be less malleable by reflection than attitudes toward hypothetical ones, though some
recent studies have failed to replicate the backfire effect (Tappin et al., 2020; Wood & Porter,
2018).
The relation between reasoning and consequentialist moral judgment has proved to be
particularly significant when such reasoning concerns consequences, or outcomes. Hannikainen
and Rosas (2019) replicated the relation between reflection and the consequentialist tendency for
selective moralization by directing participants to reason about the consequences of a set of moral
transgressions: Colombian and British participants evaluated both victimless crimes, such as
cutting up the national flag and using it to clean one’s toilet (Haidt et al., 1993), and harmful or
unfair acts, such as food hoarding. A brief reflection on the consequences of harmful behavior,
such as hoarding, elevated moral condemnation of such behavior in both cultural groups. The
corresponding effect of reflecting on the consequences of ‘impure’, yet innocuous, behavior was
absent in Colombia and negative in the United Kingdom—replicating the tendency for
consequentialist reasoning to propel demoralization of victimless taboos.
Other studies in experimental ethics have successfully documented the persuasive effects
of consequentialist arguments on the moral imperative to eradicate global poverty (e.g., Buckland
et al., 2021) or to reduce meat consumption (e.g., Schwitzgebel et al., 2020). Together, these
studies reveal that reflection on empirical facts about consequences (such as others' suffering), that is, outcome-based reasoning, can shape people's moral outlook. Taken together, then, there is mixed evidence as to whether reflection, and particularly reflection on consequences, relates to consequentialist moral judgment about real-world controversies.
Moral progress and moral consensus
It may be useful to consider two variants of the hypothesis that outcome-based reasoning shapes moral judgment, each with a distinct predicted outcome. One possibility is a shift in the central tendency of the distribution of moral attitudes (see Figure 1a).
For instance, as some utilitarian theorists have suggested (e.g., Singer, 1981), various elements of
outcome-based reasoning, e.g., the adoption of an impartial point of view, might propel moral
‘progress’ on a variety of issues. Elements of this prediction can be seen in the work of Peter Singer
(1981), who argues that adopting a consequentialist approach to ethics yields an expansion of the
moral circle–a characteristic quality of a progressive worldview. Some empirical work has
vindicated this prediction, showing that political liberals reveal a greater tendency toward
consequentialist moral reasoning (Hannikainen et al., 2017; Kahane et al., 2018; Luke &
Gawronski, 2021; Piazza & Sousa, 2014).
A second potential outcome is that outcome-based reasoning might facilitate moral
consensus. This result would manifest as a change in the variance or dispersion (Figure 1b) of the
distribution of moral attitudes. Joshua Greene (2014), for example, argues that outcome-based
reasoning could serve as a ‘common currency’ to establish overarching norms for cooperation
within culturally diverse communities. Insofar as outcome-based reasoning defines the moral
status of an act in terms of allegedly objective matters of fact (i.e., the consequences of the act),
adopting a consequentialist framework can reduce moral disagreement and bring citizens’ moral
attitudes into closer alignment. However, it is also worth noting that opportunities for reflection
have also been found to exacerbate disagreements leading to dissensus or greater dispersion
(Drummond & Fischhoff, 2017; Kahan, 2013; Kahan et al., 2012)—as predicted by theories of
identity-protective cognition. Finally, it must be noted that the progress and consensus hypotheses
are not mutually exclusive, as depicted in Figure 1c.
Figure 1. Model predictions. (A) The progressive shift view predicts a change in the central
tendency of moral judgment. (B) The consensus view predicts a change in the dispersion of moral
judgment. (C) The combined occurrence of changes in both central tendency and dispersion.
Overview
In the context of bioethics, this potential impact of outcome-based reasoning on moral
judgment implies that a progressive stance on ethical controversies, such as mandatory
vaccination, the limits on legal abortion, or genetically modified organisms (GMOs), could stem
primarily from outcome-based reasoning. Our research contributes to this literature
on the impact of outcome-based reasoning on moral attitudes towards real-world issues in two
significant ways. First, we specifically focus on bioethical issues rather than political issues or
hypothetical scenarios to address real-world concerns with immediate relevance. Second, we use
an unguided and brief reflection methodology rather than a guided or structured one, which allows
participants to reflect spontaneously without predefined prompts.
Thus, this paper examines the impact of momentary outcome-based reasoning on moral disagreement about seven issues that engender controversy in bioethics and in public discourse, such as the mandatory nature of vaccination or the limits of legal abortion. Our focus in exploratory and confirmatory studies was on evaluating two
distinct hypotheses: the progressive shift and the consensus hypotheses described above. To this
end, we recorded participants’ baseline (or Time 1) normative attitudes toward all seven issues.
Then, in a second test session after a one-week delay, participants were randomly assigned to
briefly reflect on an empirical matter that underlies scholarly debate about the moral status of one
of those seven issues—their target issue, to report their belief about that empirical question, and to
provide a post-reflection (or Time 2) measure of their normative attitudes. For instance, in the case
of the permissibility of abortion, participants were asked to reflect on the item “At what stage of
human pregnancy is the fetus capable of experiencing pain?”. This aimed to prompt reflection on
empirical evidence from medical and psychological research about fetuses’ capacity to experience
pain, and the consequences that certain practices (i.e., abortion) might have on them. The goal was
to encourage reflection on the harm-related outcomes of such practices. After this mandatory and
brief outcome-based reflection, participants were asked to report their normative attitudes toward the target item.¹
Comparisons of Time 1 and Time 2 normative attitudes allowed us to evaluate the hypotheses that
outcome-based reflection engenders moral progress and/or moral consensus. Further details about
the precise experimental protocol are described in the Procedure subsections of each experiment.
Study materials, data and analysis scripts are publicly available on the Open Science Framework
at: https://osf.io/gwu7b/?view_only=07c779ee613242beb93ea879968cb08c.
¹ We are aware that people often use factual questions as a means to rationalize their moral attitudes (e.g., Ditto &
Liu, 2012). In our studies, we have made our best attempt to formulate factual items for each normative issue, and
have used statistical tools (such as controlling for moral attitudes at time 1), to isolate the effect of factual reflection
over and above people's expression of normative attitudes. This might be particularly difficult in some of our factual
items, such as “At what age is the average teenager mature enough to decide whether to undergo such procedures?”,
with which we aimed to prompt reflection on how policies on teenage transitioning might undermine their capacity to
make autonomous choices. We included both clear and subtle cases to ensure a representative sample of bioethics
topics, even when the link to empirical consequences was not immediately obvious.
Experiment 1
Experiment 1 employed a within-subjects design to examine the effect of momentary reflection on
moral attitudes towards bioethical issues. Study participants initially recorded their normative
attitudes toward each of the seven issues. Then, in a second experimental session one week later,
participants were asked to briefly reflect on a factual question (underlying one of the normative
issues) and elaborate on their response in writing. After that, participants recorded their normative
attitudes to the seven issues once again.
Materials and Pre-Test
We drafted 14 normative and 16 factual statements regarding eleven controversial issues in
contemporary bioethics (see Supplementary Table 1), and recruited a politically-balanced sample
of 140 U.K. residents via Prolific (78 women; mean age = 39.7, median age = 37.5, SD = 13.6, range: 18–76) to pre-test our materials. Each participant was presented with all 30 statements
in a randomized order and asked to report their agreement or disagreement on 100-point sliding
scales labeled at both extremes and intermediate tertiles.
We selected pairs of normative/factual statements on the basis of the following three
criteria:
1. the strength of the fact-norm correlation;
2. the bimodality of normative attitudes (indicative of ideological conflict); and
3. the strength of the norm-political orientation correlation.
By applying these three criteria, we retained seven norm-fact pairs and reverse-scored
progressive norms, so that higher scores would reflect conservative attitudes on every issue. As a
result, normative attitudes correlated positively with conservatism for every issue, .12 < rs < .53.
We then coded the factual beliefs in the same direction as the normative views, so that for each norm-fact pair the predicted relationship would be positive, .34 < rs < .77. To facilitate the comparison of results across issues, we applied non-parametric scaling² around the median in units of the interquartile range. Thus, the scaled response of participant i on issue j, 𝒙ij, equals the difference between the response, xij, and the median response on issue j, Mdn(xj), divided by the interquartile range of responses to issue j, IQR(xj):

𝒙ij = (xij − Mdn(xj)) / IQR(xj)
For the political orientation measure, in order to preserve the scale midpoint (representing
the political center), we replaced the median value, Mdn(xj), with the scale midpoint (i.e., 4), so
that the numerator represented the difference between the response and the scale midpoint. As a
result, scaled responses reflect the number of interquartile ranges from the median (or midpoint). As
such, a unit of change on the empirical belief and normative view responses approximates the
magnitude of disagreement between a typically progressive (1st quartile) and a typically
conservative (3rd quartile) response on any given issue.
² Non-parametric scaling (i.e., using the median and IQR rather than the mean and standard deviation) was implied by our expectation of observing bimodal distributions (representing polarized views). This choice not only accommodated the non-normal nature of our data but also was more sensitive to the actual position of the two expected modes.
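As an illustration, the scaling described above might be implemented as follows (a minimal sketch with invented slider responses, not the study data; the midpoint of 4 for the political orientation scale follows the text):

```python
import numpy as np

def scale_iqr(x, center=None):
    """Scale responses around a center (default: the median) in IQR units."""
    x = np.asarray(x, dtype=float)
    if center is None:
        center = np.median(x)
    iqr = np.percentile(x, 75) - np.percentile(x, 25)
    return (x - center) / iqr

# Normative attitudes on one issue (hypothetical 0-100 slider responses)
attitudes = np.array([5, 20, 30, 55, 70, 85, 95])
scaled = scale_iqr(attitudes)            # centered on the median

# Political orientation: center on the scale midpoint (4) instead of the median
orientation = np.array([1, 2, 4, 5, 6, 7, 7])
scaled_pol = scale_iqr(orientation, center=4)
```

With this scaling, a value of 0 marks the median (or, for political orientation, the political center), and a unit of change spans roughly the distance between a typically progressive (1st quartile) and a typically conservative (3rd quartile) response.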
Figure 2. Scatter plots and linear trend lines display the linear relationships between normative
attitudes and (A) empirical beliefs and (B) political orientation. Dashed lines represent the
relationships by issue, and the solid lines represent the aggregate relationship across issues.
We fit a series of linear mixed-effects models with random effects of participants and issues. As expected, the effect of political orientation was highly significant in models of empirical beliefs (B = 0.22, 95% CI [0.13, 0.31], t = 4.59) and normative attitudes (B = 0.32, 95% CI [0.24, 0.40], t = 7.17), ps < .001. Furthermore, empirical beliefs (B = 0.42, 95% CI [0.39, 0.46], t = 23.7) and
self-reported political orientation (B = 0.23, 95% CI [0.16, 0.29], t = 6.92) each independently
predicted normative attitudes, all ps < .001—indicating that normative views are shaped by both
empirical beliefs and political identity.
Participants
A politically balanced sample of 160 native English-speaking adults was recruited on Prolific to take part in the first wave of the study, of whom 146 participants also completed the second part. We did not exclude any observations; hence, the analyzed data came from 146 participants (83 female, 3 non-binary; median age = 31, mean age = 34.3, SD = 13.7, range: 18–65), of whom 65 reported right-wing political views.
The target sample size was determined using a heuristic of 150 observations per wave,
aiming at a total of 300 observations across both waves. We recruited a slightly larger sample to
account for the expected between-session dropout rate.
Procedure
The study consisted of two sessions, as depicted in Figure 3. In the first session, participants
were asked to express their normative views regarding each of the seven issues. They also
answered a few basic demographic questions. After a seven-day delay, participants were invited
to take part in the second session of the study. A one-week delay was chosen as a trade-off between two aims: minimizing (i) participants' recall of their pre-intervention responses and (ii) their likelihood of dropping out of the study (see also Helzer et al., 2016; Rehren & Sinnott-Armstrong,
2023). Participants were asked to reflect on the factual matter corresponding to one randomly
drawn issue for at least 45 seconds. Participants were instructed to record their agreement or
disagreement with the factual statement on a 100-point sliding scale, and to be prepared to explain
their reasoning to others. On the following screen, participants were asked to briefly elaborate on
their reasoning in writing.
Figure 3. Protocol in Experiment 1. The target item (i.e., the object of factual reflection) was
selected randomly for each participant.
Finally, participants responded to the same seven statements as in Session 1. One of the
seven items constituted our target item (concerning the issue on which participants were asked to
reflect), and we refer to the remaining six issues as filler items. This allowed us to check whether
the manipulation selectively affected views on the target issue, or generalized to the filler issues.
Analysis Plan
Change in Central Tendency
The moral progress hypothesis predicts that outcome-based reflection will yield a shift in
the central tendency of the distribution (as in Figure 1a). To examine this prediction, we regressed
normative attitudes at Time 2 on normative attitudes at Time 1:
attitudeiT2 = a × attitudeiT1 + intercept
In this model, a refers to the stability of normative attitudes and the intercept describes the
magnitude and direction of the shift in the central tendency of the distribution.
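As an illustration of this model, consider a minimal sketch with simulated (not actual) data, in which a slope below 1 reflects regression toward the center across sessions and a negative intercept would correspond to the hypothesized progressive shift:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated scaled attitudes: Time 2 = 0.7 * Time 1 - 0.1 + noise
t1 = rng.normal(0, 1, 500)
t2 = 0.7 * t1 - 0.1 + rng.normal(0, 0.3, 500)

# attitude_T2 = a * attitude_T1 + intercept
a, intercept = np.polyfit(t1, t2, 1)

# a estimates the stability of attitudes across sessions;
# a negative intercept indicates an overall leftward shift in central tendency
```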
Change in Dispersion
The moral consensus hypothesis predicts that reflection yields a shift in the dispersion of
the distribution (as in Figure 1b)—such that, e.g., normative views become more similar overall.
We calculated the squared deviation of every observation from the (grand) median for a given
item. We then examined whether, in the aggregate, the squared dispersion measure differed between post- and pre-treatment across participants and issues:
dispersion = a × time + intercept
Here, a captures the effect of time (post- vs. pre-treatment) on the time-varying dispersion values. A negative coefficient of time would indicate reduced dispersion at Time 2 (relative to Time 1), and accordingly a greater tendency toward moral consensus.
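A sketch of this dispersion analysis on simulated data (not the study data; the Time 2 spread is deliberately reduced here so that the time coefficient comes out negative):

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated attitudes on one issue: Time 2 less dispersed than Time 1
t1 = rng.normal(0.0, 1.0, 500)
t2 = rng.normal(0.0, 0.8, 500)

# Squared deviation of every observation from the grand median for this issue
grand_median = np.median(np.concatenate([t1, t2]))
dispersion = np.concatenate([(t1 - grand_median) ** 2,
                             (t2 - grand_median) ** 2])
time = np.concatenate([np.zeros(500), np.ones(500)])  # 0 = pre, 1 = post

# dispersion = a * time + intercept; a < 0 indicates movement toward consensus
a, intercept = np.polyfit(time, dispersion, 1)
```

With a binary predictor, the slope a simply equals the difference in mean squared deviation between the post- and pre-treatment sessions.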
Bayesian Analyses
To compare the null model (of no change in the central tendency or dispersion) to the
alternative models, we calculated the corresponding Bayes factor using the BayesFactor package (Morey et al., 2015) in R with default JZS priors, as recommended in Rouder et al. (2009).
Results
As in the pre-test, a linear mixed-effects model treating issue as a random effect revealed
that political orientation predicted normative views, B = 0.28, 95% CI [0.21, 0.35], t = 7.44, p <
.001.
Change in Central Tendency
Participants’ normative attitudes were stable across sessions, B = 0.71, p < .001 (see Table
1). Additionally, we observed a leftward shift in normative attitudes, B = -0.08, p = .025 (as
revealed by the negative intercept in Models 1 and 2). This leftward shift in normative attitudes
corresponded to anecdotal evidence for the alternative over the null model (i.e., of no shift), BF10
= 2.97. Entering political orientation into the model did not improve model fit, whereas entering
empirical beliefs at Time 2 predicted unique variance in normative attitudes and rendered the
intercept non-significant—implying that the leftward (intercept) shift in moral attitudes from Time
1 was accounted for by participants’ concurrent empirical beliefs (see Models 3 and 4), and
suggesting a role for momentary outcome-based reasoning.
Table 1. Models of Normative Attitudes Toward Target Issues in Experiment 1.

Target Model 1 (AIC = 185.56, r² = .59):
  Intercept: B = -0.08, 95% CI [-0.15, -0.01], t = -2.26, p = .025
  Time 1 Normative Belief: B = 0.71, 95% CI [0.61, 0.81], t = 14.00, p < .001

Target Model 2 (AIC = 190.71, r² = .59):
  Intercept: B = -0.13, 95% CI [-0.15, 0.01], t = -2.35, p = .08
  Time 1 Normative Belief: B = 0.70, 95% CI [0.59, 0.80], t = 13.19, p < .001
  Political Orientation: B = 0.02, 95% CI [-0.07, 0.17], t = 0.78, p = .44

Target Model 3 (AIC = 176.58, r² = .63):
  Intercept: B = -0.08, 95% CI [-0.15, -0.01], t = -2.15, p = .08
  Time 1 Normative Belief: B = 0.66, 95% CI [0.56, 0.75], t = 13.47, p < .001
  Empirical Belief: B = 0.19, 95% CI [0.10, 0.28], t = 4.04, p < .001

Target Model 4 (AIC = 181.71, r² = .63):
  Intercept: B = -0.07, 95% CI [-0.14, 0.01], t = -1.71, p = .13
  Time 1 Normative Belief: B = 0.64, 95% CI [0.54, 0.74], t = 12.32, p < .001
  Political Orientation: B = 0.05, 95% CI [-0.06, 0.16], t = 0.86, p = .39
  Empirical Belief: B = 0.19, 95% CI [0.10, 0.28], t = 4.03, p < .001
These effects did not generalize to control issues on which participants had not reflected
(see Table 2). Critically, there was no leftward shift (in Models 1 and 2), and no relationship
between participants’ empirical beliefs (about the target issue) and their normative views about
control issues (see Models 3 and 4).
Table 2. Models of Normative Attitudes Toward Control Issues in Experiment 1.

Control Model 1 (AIC = 817.55, r² = .63):
  Intercept: B = 0.00, 95% CI [-0.03, 0.04], t = 0.20, p = .85
  Time 1 Normative Belief: B = 0.79, 95% CI [0.75, 0.83], t = 37.95, p < .001

Control Model 2 (AIC = 819.97, r² = .63):
  Intercept: B = 0.02, 95% CI [-0.02, 0.05], t = 0.83, p = .44
  Time 1 Normative Belief: B = 0.77, 95% CI [0.73, 0.81], t = 36.16, p < .001
  Political Orientation: B = 0.05, 95% CI [0.01, 0.09], t = 2.34, p = .020

Control Model 3 (AIC = 824.39, r² = .63):
  Intercept: B = 0.00, 95% CI [-0.03, 0.04], t = 0.21, p = .84
  Time 1 Normative Belief: B = 0.79, 95% CI [0.74, 0.83], t = 37.89, p < .001
  Time 2 Empirical Belief: B = 0.02, 95% CI [-0.01, 0.05], t = 1.20, p = .23

Control Model 4 (AIC = 827.19, r² = .63):
  Intercept: B = 0.02, 95% CI [-0.02, 0.05], t = 0.81, p = .45
  Time 1 Normative Belief: B = 0.77, 95% CI [0.73, 0.81], t = 36.14, p < .001
  Political Orientation: B = 0.05, 95% CI [0.01, 0.09], t = 2.26, p = .024
  Time 2 Empirical Belief: B = 0.02, 95% CI [-0.02, 0.05], t = 1.04, p = .30
Our final step was to compare Target and Control Model 4 by entering issue (target vs. control) as a moderator of the fixed effects of Time 1 normative belief, political orientation, and Time 2 empirical belief. In doing so, we obtained evidence that (i) the leftward shift took place selectively for target issues, B = -0.08, 95% CI [-0.15, -0.01], t = -2.10, p = .036, (ii) normative attitudes on control issues were more stable than on target issues, B = 0.14, 95% CI [0.03, 0.24], t = 2.62, p = .009, and (iii) empirical beliefs predicted normative attitudes selectively on target issues, B = 0.17, 95% CI [0.08, 0.26], t = 3.61, p < .001.
Change in Dispersion
To examine whether momentary reflection produced changes in dispersion, we regressed
the squared deviation from the median attitude (on each issue) on session as a fixed effect, with
random effects of participant and issue. We observed no effect of reflection on the squared deviation from the median, whether on the target issue on which participants had reflected (B = -0.08, t = -1.45, 95% CI [-0.19, 0.03], p = .15) or on the control set of issues (B = -0.01, t = -0.20, 95% CI [-0.07, 0.06], p = .84). In Bayesian terms,³ these effects corresponded to anecdotal (BF01 = 2.96) and substantial (BF01 = 18.51) evidence for the null, respectively.
Discussion
The aim of Experiment 1 was to test the impact of momentary outcome-based reasoning on moral attitudes regarding a set of real-world issues. Tentatively, the results provided support for the progress hypothesis: Inducing reflection may shift participants' views on various normative questions toward the characteristically liberal view (see also Luke & Gawronski, 2021). Numerically, this pattern arose on six of the seven issues (see Figure 1). Furthermore, the manipulation appeared to selectively impact attitudes toward the target issue, and not the filler issues, providing some validation of the reflective nature of the task. However, Experiment 1 provided no support for the consensus hypothesis: Analyses of pre-to-post change in dispersion did not indicate that reflection reduced the average deviation from the median. Thus, while participants tended to adopt a slightly more liberal attitude selectively on the issue on which they had reflected, reflection did not appear to bring participants' attitudes closer in line.

³ All Bayes factors reported in this article were calculated using the BayesFactor package in R (Morey et al., 2015). We used the default option of JZS priors, as explained and defended in Rouder et al. (2009).
Experiment 2
In Experiment 2, we sought to replicate the effects of momentary reflection observed in
Experiment 1. To ensure the robustness and validity of our findings, we used the same materials and a highly similar procedure.
Materials
We employed the same seven norm-fact pairs as in Experiment 1.
Participants
We recruited a politically balanced sample of 803 native English-speaking adult residents of the United Kingdom on Prolific for the first wave of the study, of whom 651 (368 female, 5 non-binary; median age = 34, mean age = 36.8, SD = 15.3, range: 18–79) completed the second part of the study. We did not exclude any observations; hence, Experiment 2 included 651 participants, 348 of whom reported right-wing political views.
Power analysis was conducted using the pwr.f2.test function from the pwr package in R
(Champely et al., 2017). Assuming a minimal effect size of interest of Cohen's f² = .02 and power of .90, we obtained a target sample of 633 participants. We recruited a slightly larger sample to account for the expected between-session dropout rate.
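The power computation can be reproduced from first principles (a sketch, not the authors' script: we assume u = 2 numerator degrees of freedom and α = .05, neither of which is stated in the text). The idea is to find the denominator degrees of freedom v at which the noncentral-F power reaches .90, given noncentrality λ = f²(u + v + 1):

```python
import math
from scipy.stats import f, ncf
from scipy.optimize import brentq

def power(v, u=2, f2=0.02, alpha=0.05):
    """Power of the F test with u numerator and v denominator df."""
    crit = f.ppf(1 - alpha, u, v)
    return ncf.sf(crit, u, v, f2 * (u + v + 1))

u, f2 = 2, 0.02                      # u = 2 is an assumed numerator df
v = brentq(lambda x: power(x, u, f2) - 0.90, 10, 10_000)
n = math.ceil(v + u + 1)             # total sample size
```

Under these assumptions, the solution lands in the vicinity of the reported target of 633.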
Procedure
Experiment 2 consisted of two sessions (see Figure 4). In the first session, participants were
asked to express their normative views regarding each of the seven issues. Unlike Experiment 1,
in the first session of Experiment 2 participants were also asked to record their agreement or
disagreement with the seven factual statements corresponding to the seven issues. They also
answered a few basic demographic questions.
Figure 4. Protocol in Experiment 2. The target item (i.e., the object of factual reflection) was
selected randomly for each participant.
After a seven-day delay, participants were invited to take part in the second session of the
study, and were randomly assigned to one of seven conditions. In each condition, participants were
asked to reflect on the corresponding factual matter for at least 45 seconds. Participants were
instructed to record their agreement or disagreement with the target factual statement on a 100-
point sliding scale, and to be prepared to explain their reasoning to others. On the following screen,
participants were asked to briefly elaborate on their reasoning in writing.
Finally, participants responded to the same seven normative statements as in Session 1.
Unlike Experiment 1, in the second session of Experiment 2 participants were also asked to record
their agreement or disagreement with the remaining six filler factual statements. One of the seven
normative items constituted our target item (concerning the issue on which participants were asked
to reflect), and we refer to the remaining six normative statements as control items. This allowed
us to assess whether the manipulation selectively affected views on the target issue, or generalized
to the filler issues.
Analysis Plan
We followed the same analysis plan that we used in Experiment 1.
Results
As in Experiment 1, normative attitudes in Session 1 were predicted by political orientation (B =
0.29, 95% CI: [0.26, 0.33], t = 16.10, p < .001) as well as by factual beliefs (B = 0.39, 95% CI:
[0.37, 0.41], t = 34.67, p < .001).
Stability and Discriminant Validity
The data in Experiment 2 also allowed us to calculate the stability and discriminant validity of
our normative and factual assessments. The stability (i.e., between-session correlation) of
normative attitudes (r = .75) and factual beliefs (r = .69) was in the range of previously reported values⁴ (see also Models T1 and T2 in Table 3). They were also substantially higher than the
within-session correlations between normative and factual measures in Session 1 (r = .49) and in
Session 2 (r = .48), providing evidence of discriminant validity. In other words, participants treated
normative and factual questions within the same session differently and, for example, did not
simply moralize their responses to factual questions.
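The logic of this discriminant validity check can be illustrated on simulated data (a sketch; the correlation structure is invented, chosen only to mimic the reported pattern of strong between-session and weaker cross-construct correlations):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 651

# Two modestly correlated latent traits: a normative attitude and a factual belief
cov = np.array([[1.0, 0.5], [0.5, 1.0]])
latent = rng.normal(0, 1, (n, 2)) @ np.linalg.cholesky(cov).T
norm_s1 = latent[:, 0] + rng.normal(0, 0.6, n)   # normative item, Session 1
norm_s2 = latent[:, 0] + rng.normal(0, 0.6, n)   # normative item, Session 2
fact_s1 = latent[:, 1] + rng.normal(0, 0.6, n)   # factual item, Session 1

stability = np.corrcoef(norm_s1, norm_s2)[0, 1]  # same construct, across sessions
within = np.corrcoef(norm_s1, fact_s1)[0, 1]     # different constructs, same session

# Discriminant validity: a construct correlates more strongly with itself across
# sessions than with the other construct within a session
```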
Change in Central Tendency
We observed no overall pre-to-post change in normative attitudes, ps > .76, as revealed by
the non-significant intercepts in Models T1 and T2. This non-significant effect corresponded to
strong evidence for the null model (i.e., of no shift) over the alternative model, BF01 = 11.69.
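As a rough illustration of how a null-favoring Bayes factor like the one reported above can arise from model comparison, one common shortcut is the BIC approximation (Wagenmakers, 2007). The authors presumably used the BayesFactor R package with its default priors, so this sketch is not their actual computation, and the fit statistics below are invented.

```python
# BIC approximation to the Bayes factor: BF01 = exp((BIC_alt - BIC_null) / 2).
# A sketch only -- the reported BF01 values were presumably computed with
# the BayesFactor R package, which uses proper default priors.
import math

def bf01_from_bic(bic_null, bic_alt):
    """Evidence for the null model over the alternative."""
    return math.exp((bic_alt - bic_null) / 2)

# Hypothetical BICs: the alternative model fits no better and pays a
# complexity penalty, so the Bayes factor favors the null.
print(round(bf01_from_bic(750.0, 755.0), 2))  # 12.18
```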
Factual beliefs at Time 2 predicted unique variance in normative attitudes; partitioning this
effect revealed that reflection (i.e., change in empirical beliefs from Time 1 to Time 2) predicted
variance in normative attitudes, B = 0.22, 95% CI: [0.16, 0.27], t = 7.71, p < .001, after
controlling for both Time 1 measures and political orientation (see Model T5 in Table 3).
Table 3. Experiment 2: Target Issues.

Model fit
  AIC: T1 = 763.58, T2 = 750.74, T3 = 622.34, T4 = 618.45, T5 = 623.26
  r²:  T1 = .53, T2 = .54, T3 = .62, T4 = .62, T5 = .62

(Intercept)
  Model T1: B = -0.00, 95% CI [-0.07, 0.06], t = -0.12, p = .91
  Model T2: B = 0.01, 95% CI [-0.06, 0.08], t = 0.32, p = .76
  Model T3: B = -0.02, 95% CI [-0.09, 0.05], t = -0.47, p = .66
  Model T4: B = -0.01, 95% CI [-0.08, 0.07], t = -0.14, p = .90
  Model T5: B = -0.01, 95% CI [-0.08, 0.07], t = -0.20, p = .84

Time 1 Normative Belief
  Model T1: B = 0.73, 95% CI [0.68, 0.79], t = 26.27, p < .001
  Model T2: B = 0.70, 95% CI [0.64, 0.75], t = 24.31, p < .001
  Model T3: B = 0.59, 95% CI [0.53, 0.65], t = 20.08, p < .001
  Model T4: B = 0.57, 95% CI [0.51, 0.63], t = 19.33, p < .001
  Model T5: B = 0.56, 95% CI [0.49, 0.62], t = 17.62, p < .001

Political Orientation (Models T2, T4, and T5 only)
  Model T2: B = 0.13, 95% CI [0.07, 0.19], t = 4.52, p < .001
  Model T4: B = 0.09, 95% CI [0.04, 0.15], t = 3.37, p < .001
  Model T5: B = 0.09, 95% CI [0.04, 0.15], t = 3.37, p < .001

Time 2 Empirical Belief (Models T3 and T4 only)
  Model T3: B = 0.25, 95% CI [0.21, 0.30], t = 10.61, p < .001
  Model T4: B = 0.24, 95% CI [0.19, 0.29], t = 10.02, p < .001

Time 1 Empirical Belief (Model T5 only)
  Model T5: B = 0.26, 95% CI [0.21, 0.32], t = 9.20, p < .001

Δ Empirical Belief (Model T5 only)
  Model T5: B = 0.22, 95% CI [0.16, 0.27], t = 7.69, p < .001

⁴ At the same time, the stability (between-session correlation) is not high enough to suggest that
people were able to recall their responses from Session 1 while trying to provide consistent
responses in Session 2. In a moral psychology study with a similar design (Hannikainen et al.,
2018), in which the two sessions were nine years apart, the stability was only slightly lower (r =
.67) than the two coefficients reported here.
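The change-score structure of Model T5 can be illustrated with a minimal regression sketch on simulated data. Variable names and effect sizes below are invented for illustration, and the single-level OLS omits the random effects the actual models plausibly included.

```python
# Simplified sketch of Model T5: Time 2 normative attitudes regressed on
# Time 1 normative attitudes, political orientation, Time 1 empirical
# beliefs, and the Time 1 -> Time 2 change in empirical beliefs.
import numpy as np

rng = np.random.default_rng(1)
n = 500

political = rng.normal(size=n)
emp_t1 = rng.normal(size=n)
delta_emp = rng.normal(scale=0.5, size=n)  # belief change after reflection
norm_t1 = 0.4 * emp_t1 + 0.3 * political + rng.normal(size=n)
norm_t2 = (0.6 * norm_t1 + 0.1 * political + 0.25 * emp_t1
           + 0.2 * delta_emp + rng.normal(scale=0.5, size=n))

# Design matrix: intercept plus the four predictors of Model T5.
X = np.column_stack([np.ones(n), norm_t1, political, emp_t1, delta_emp])
beta, *_ = np.linalg.lstsq(X, norm_t2, rcond=None)

# beta[4] recovers the simulated effect of belief change (~0.2), the
# analogue of the Delta Empirical Belief coefficient in Table 3.
print(np.round(beta, 2))
```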
We observed a similar pattern of results for moral attitudes toward the control issues (see
Table 4). An intercept-only model revealed a non-significant intercept (B = 0.00, 95% CI [-0.03,
0.04], t = 0.15, p = .89), indicating no leftward shift (Models C1 and C2). Participants' factual
beliefs also predicted their normative attitudes on the remaining issues (Models C3 and C4).
Table 4. Experiment 2: Control Issues.

Model fit
  AIC: C1 = 4053.20, C2 = 4017.40, C3 = 3502.07, C4 = 3467.01, C5 = 3475.65
  r²:  C1 = .57, C2 = .57, C3 = .60, C4 = .61, C5 = .61

(Intercept)
  Model C1: B = 0.00, 95% CI [-0.03, 0.04], t = 0.15, p = .89
  Model C2: B = 0.01, 95% CI [-0.02, 0.05], t = 0.77, p = .47
  Model C3: B = -0.00, 95% CI [-0.04, 0.04], t = -0.13, p = .90
  Model C4: B = 0.01, 95% CI [-0.03, 0.05], t = 0.40, p = .70
  Model C5: B = 0.01, 95% CI [-0.03, 0.05], t = 0.39, p = .71

Time 1 Normative Belief
  Model C1: B = 0.74, 95% CI [0.72, 0.76], t = 68.63, p < .001
  Model C2: B = 0.72, 95% CI [0.70, 0.74], t = 64.23, p < .001
  Model C3: B = 0.64, 95% CI [0.62, 0.66], t = 52.87, p < .001
  Model C4: B = 0.62, 95% CI [0.60, 0.64], t = 50.14, p < .001
  Model C5: B = 0.62, 95% CI [0.59, 0.64], t = 48.49, p < .001

Political Orientation (Models C2, C4, and C5 only)
  Model C2: B = 0.08, 95% CI [0.05, 0.10], t = 6.72, p < .001
  Model C4: B = 0.07, 95% CI [0.05, 0.10], t = 6.67, p < .001
  Model C5: B = 0.07, 95% CI [0.05, 0.10], t = 6.68, p < .001

Time 2 Empirical Belief (Models C3 and C4 only)
  Model C3: B = 0.18, 95% CI [0.16, 0.20], t = 17.80, p < .001
  Model C4: B = 0.18, 95% CI [0.16, 0.20], t = 17.68, p < .001

Time 1 Empirical Belief (Model C5 only)
  Model C5: B = 0.18, 95% CI [0.16, 0.20], t = 16.24, p < .001

Δ Empirical Belief (Model C5 only)
  Model C5: B = 0.17, 95% CI [0.15, 0.20], t = 13.71, p < .001
Change in Dispersion
As in Experiment 1, we regressed the squared deviation from the median attitude (on each
issue) on session as a fixed effect, with random effects of participant and issue. This analysis
revealed no pre-to-post effect of reflection on dispersion, whether on the target issue (B = 0.00,
95% CI [-0.04, 0.05], t = 0.14, p = .89) or the control issues (B = -0.01, 95% CI [-0.04, 0.01], t = -
0.98, p = .33). In Bayesian terms, these effects corresponded to strong evidence (target: BF01 =
16.12; control: BF01 = 24.35) against the alternative hypothesis of moral consensus, according to
which outcome-based reflection attenuates moral disagreement.
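The dispersion measure can be sketched as follows. This is a simplified comparison of session means on simulated data (all names and parameters invented), whereas the reported analysis fit a mixed model with random effects of participant and issue.

```python
# Sketch of the dispersion analysis: squared deviation from the median
# attitude on each issue, compared across sessions. Simulated data are
# generated with identical dispersion in both sessions, mirroring the
# null result reported above.
import numpy as np

rng = np.random.default_rng(2)
n_part, n_issues = 200, 7

s1 = rng.normal(size=(n_part, n_issues))                      # Session 1
s2 = 0.75 * s1 + rng.normal(scale=0.66, size=(n_part, n_issues))  # Session 2

def sq_dev_from_median(x):
    """Squared deviation of each response from its issue's median."""
    return (x - np.median(x, axis=0)) ** 2

# Session effect on dispersion (here: a simple difference of means,
# standing in for the fixed effect of session in the mixed model).
b_session = sq_dev_from_median(s2).mean() - sq_dev_from_median(s1).mean()
print(round(b_session, 3))
```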
Figure 5. Mean change in attitudes toward the seven normative statements across the two
sessions of Experiments 1 and 2, comparing target (reflected) and control (unreflected) items.
Internal Meta-Analysis
Finally, we estimated the meta-analytic effect sizes in our primary analyses drawing on the pooled
dataset from Experiments 1 and 2 (total N = 748). In standardized units, the effect of momentary
reflection on moral attitudes towards target issues was equal to Cohen’s d = -0.02, 95% CI [-0.18,
0.14], z = -0.21, p = .83 (see Figure 5). In Bayesian terms, this corresponded to strong evidence
for the absence of pre-to-post change both in the central tendency (BF01 = 14.31) and in the
dispersion (BF01 = 14.34) in moral attitudes.
With data from Experiment 2, we ran a further analysis to estimate the magnitude of the
relationship between change in factual beliefs and normative attitude change (where change =
Time 2 - Time 1). We observed a small-to-medium correlation both for target issues, r = .17, 95%
CI [.02, .31], z = 2.24, p = .025, and for control issues, r = .12, 95% CI [.07, .17], z = 4.61, p <
.001.
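One standard way to pool per-issue correlations of this kind is Fisher's r-to-z transform with inverse-variance weights. Whether this matches the authors' exact meta-analytic model is an assumption, and the inputs below are made up for illustration.

```python
# Fixed-effect pooling of correlations via Fisher's r-to-z transform.
import numpy as np

def pool_correlations(rs, ns):
    """Pooled correlation across issues, weighted by inverse variance."""
    rs, ns = np.asarray(rs, float), np.asarray(ns, float)
    zs = np.arctanh(rs)              # Fisher z per issue
    w = ns - 3                       # inverse variance of z is (n - 3)
    z_pooled = np.sum(w * zs) / np.sum(w)
    return float(np.tanh(z_pooled))  # back-transform to r

# Hypothetical per-issue correlations and sample sizes:
print(round(pool_correlations([0.10, 0.15, 0.20], [100, 120, 110]), 3))  # 0.152
```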
Discussion
In contrast to Experiment 1, the results of Experiment 2 revealed no change in the central tendency
of participants’ normative attitudes after briefly reflecting on underlying empirical matters. Thus,
Experiment 2 did not replicate the findings of Experiment 1 concerning the impact of momentary
outcome-based reasoning on moral attitudes toward bioethical issues (see Figure 5).
Autoregression models suggested that outcome-based reflection predicted change in participants’
normative attitudes, though the direction of the effect was heterogeneous across participants and
issues—resulting in no overall effect toward either bioliberal or bioconservative attitudes. Turning
to analyses of the consensus hypothesis, we found no evidence that outcome-based reflection
reduced dispersion in normative attitudes—exactly as in Experiment 1.
General Discussion
In two studies, we tested the impact of momentary outcome-based reasoning on moral attitudes
regarding real-world controversies in the bioethical domain. Experiment 1 suggested that briefly
reflecting on the consequences of real-world bioethical practices could impact normative attitudes,
with outcome-based reasoning yielding a modest shift toward progressive views. With a larger
sample size, Experiment 2 did not replicate this effect—providing strong evidence for the absence
of an effect. Adopting an autoregression approach, we found that people might indeed update their
normative views in response to outcome-based reflection—as indicated by the significance of
empirical belief change. Yet shifts in normative attitudes did not occur as predicted by the progress
or the consensus hypotheses. Rather, our findings suggest that shifts in people’s normative
attitudes occurred toward bioliberal and bioconservative conclusions equally—giving rise to the
absence of an aggregate effect on the central tendency of the response distribution.
Recent replication difficulties cast doubt on the robustness of the effect of reflection on
moral judgment in hypothetical cases (Herec et al., 2022). The heterogeneous effects of outcome-
based reflection in our studies give further reason to question the capacity of reflection to
shift people's attitudes toward consensus or progress in real-world dilemmas. On the
other hand, the outcome-based reflection paradigm has previously demonstrated an impact on
participants’ responses to hypothetical moral dilemmas (e.g., Hannikainen & Rosas, 2019; Luke
& Gawronski, 2021). In this light, the heterogeneous effects of outcome-based reflection in our
studies using real-world bioethical controversies may point toward fundamental differences
between hypothetical and real-world moral dilemmas, as well as differences in the cognitive
processes underlying them (see Francis et al., 2016; Kneer & Hannikainen, 2022; Körner et al.,
2019). This suggests caution in generalizing findings from the hypothetical dilemmas to real-world
situations.
Previous research on moral conviction may help to illuminate this pattern of results: studies
have shown that people are reluctant to revise deeply ingrained moral attitudes (Aramovich et al.,
2012; Heinzelmann et al., 2021; Hornsey et al., 2003; Luttrell et al., 2016; Stanley et al., 2018)
and are most willing to engage in outcome-based reflection on the issues they care least about
(Viciana et al., 2021). This body of research points toward a potential explanation for the
discrepancy between hypothetical and real-world moral dilemmas: people may hold real-world issues
with greater moral conviction than hypothetical dilemmas, rendering their attitudes relatively
immune to outcome-based reasoning.
In our study, participants’ search for empirical evidence was unguided, potentially leading
to the motivated search for congenial evidence or even misinformation. Thus, future work should
investigate the impact of exposure to vetted empirical information—for instance, a manipulation
in which participants obtain curated information on the empirical facts relevant to the moral
dilemmas under consideration. This approach might ensure that participants engage with reliable
sources, and minimize the risk of misinformation shaping their moral reasoning. For instance,
prompting participants to challenge their assumptions about which agents are more vulnerable in
each bioethical controversy could potentially resolve disagreements (Womick et al., 2024).
Second, the mandatory period of outcome-based reasoning in our studies was less than a
minute. Future research on the impact of outcome-based reasoning on moral attitudes might benefit
from stronger treatments that entail a more exhaustive acquisition of novel empirical information
(for instance, over the course of a semester), allowing participants to delve deeper into the
empirical questions surrounding real-world bioethical controversies and potentially revealing
more substantial and enduring shifts in normative attitudes. Whether stronger, more sustained
manipulations of outcome-based reasoning might engender moral progress or consensus thus remains
an open question for future research.
In sum, our research on the impact of momentary outcome-based reasoning on moral
attitudes toward real-world bioethical issues provided evidence of heterogeneous effects. Despite
exploratory evidence of a shift toward progressive views in response to outcome-based reflection,
a larger replication attempt provided substantial evidence in favor of the null model—according
to which outcome-based reflection has no overall, directional effect on bioethical attitudes. These
conflicting results underscore the need for a nuanced understanding of how outcome-based
reasoning influences moral attitudes about real-world issues. Future research on the impact of
outcome-based reasoning should recognize the multifaceted nature of moral reasoning and the
heterogeneous effects of consequentialist reflection on moral attitudes.
References
Aramovich, N. P., Lytle, B. L., & Skitka, L. J. (2012). Opposing torture: Moral conviction and
resistance to majority influence. Social Influence, 7(1), 21-34.
https://doi.org/10.1080/15534510.2011.640199
Buckland, L., Lindauer, M., Rodríguez-Arias, D., & Véliz, C. (2021). Testing the motivational
strength of positive and negative duty arguments regarding global poverty. Review of
Philosophy and Psychology, 1-19. https://doi.org/10.1007/s13164-021-00555-4
Champely, S., Ekstrom, C., Dalgaard, P., Gill, J., Weibelzahl, S., Anandkumar, A., ... & De
Rosario, H. (2017). pwr: Basic functions for power analysis [R package].
Demaree-Cotton, J., & Sommers, R. (2022). Autonomy and the folk concept of valid consent.
Cognition, 224, 105065. https://doi.org/10.1016/j.cognition.2022.105065
Ditto, P. H., & Liu, B. (2012). Deontological dissonance and the consequentialist crutch.
In M. Mikulincer & P. R. Shaver (Eds.), The social psychology of morality: Exploring the
causes of good and evil (pp. 51–70). American Psychological Association.
https://doi.org/10.1037/13091-003
Drummond, C., & Fischhoff, B. (2017). Individuals with greater science literacy and
education have more polarized beliefs on controversial science topics. Proceedings of the
National Academy of Sciences, 114(36), 9587-9592.
https://doi.org/10.1073/pnas.1704882114
Engelmann, N., & Waldmann, M. R. (2022). How to weigh lives. A computational model of moral
judgment in multiple-outcome structures. Cognition, 218, 104910.
https://doi.org/10.1016/j.cognition.2021.104910
Francis, K. B., Howard, C., Howard, I. S., Gummerum, M., Ganis, G., Anderson, G., & Terbeck,
S. (2016). Virtual morality: Transitioning from moral judgment to moral action? PLoS ONE,
11(10), e0164374. https://doi.org/10.1371/journal.pone.0164374
Gray, K., & Schein, C. (2016). No absolutism here: Harm predicts moral judgment 30× better than
disgust—Commentary on Scott, Inbar, & Rozin (2016). Perspectives on Psychological
Science, 11(3), 325-329. https://doi.org/10.1177/1745691616635598
Greene, J. (2014). Moral tribes: Emotion, reason, and the gap between us and them. Penguin.
Haidt, J., Koller, S. H., & Dias, M. G. (1993). Affect, culture, and morality, or is it wrong to eat
your dog? Journal of Personality and Social Psychology, 65(4), 613.
https://doi.org/10.1037/0022-3514.65.4.613
Hannikainen, I. R., Machery, E., & Cushman, F. A. (2018). Is utilitarian sacrifice becoming more
morally permissible? Cognition, 170, 95-101.
https://doi.org/10.1016/j.cognition.2017.09.013
Hannikainen, I. R., Miller, R. M., & Cushman, F. A. (2017). Act versus impact: Conservatives and
liberals exhibit different structural emphases in moral judgment. Ratio, 30(4), 462-493.
https://doi.org/10.1111/rati.12162
Hannikainen, I. R., & Rosas, A. (2019). Rationalization and reflection differentially modulate prior
attitudes toward the purity domain. Cognitive Science, 43(6), e12747.
https://doi.org/10.1111/cogs.12747
Heinzelmann, N., Höltgen, B. T., & Tran, V. (2021). Moral discourse boosts confidence in moral
judgments. Philosophical Psychology, 34(8), 1192-1216.
https://doi.org/10.1080/09515089.2021.1959026
Herec, J., Sykora, J., Brahmi, K., Vondracek, D., Dobesova, O., Smelik, M., ... & Prochazka, J.
(2022). Reflection and reasoning in moral judgment: Two preregistered replications of
Paxton, Ungar, and Greene (2012). Cognitive Science, 46(7), e13168.
https://doi.org/10.1111/cogs.13168
Helzer, E. G., Fleeson, W., Furr, R. M., Meindl, P., & Barranti, M. (2017). Once a utilitarian,
consistently a utilitarian? Examining principledness in moral judgment via the robustness
of individual differences. Journal of Personality, 85(4), 505-517.
https://doi.org/10.1111/jopy.12256
Hornsey, M. J., Majkut, L., Terry, D. J., & McKimmie, B. M. (2003). On being loud and proud:
Non‐conformity and counter‐conformity to group norms. British Journal of Social
Psychology, 42(3), 319-335. https://doi.org/10.1348/014466603322438189
Kahan, D. M. (2013). Ideology, motivated reasoning, and cognitive reflection. Judgment and
Decision Making, 8(4), 407-424. https://doi.org/10.1017/S1930297500005271
Kahan, D. M., Peters, E., Wittlin, M., Slovic, P., Ouellette, L. L., Braman, D., & Mandel,
G. (2012). The polarizing impact of science literacy and numeracy on perceived climate
change risks. Nature Climate Change, 2(10), 732-735. https://doi.org/10.1038/nclimate1547
Kahane, G., Everett, J. A., Earp, B. D., Caviola, L., Faber, N. S., Crockett, M. J., & Savulescu, J.
(2018). Beyond sacrificial harm: A two-dimensional model of utilitarian psychology.
Psychological Review, 125(2), 131. https://doi.org/10.1037/rev0000093
Kneer, M., & Hannikainen, I. R. (2022). Trolleys, triage and Covid-19: The role of psychological
realism in sacrificial dilemmas. Cognition and Emotion, 36(1), 137-153.
https://doi.org/10.1080/02699931.2021.1964940
Körner, A., Joffe, S., & Deutsch, R. (2019). When skeptical, stick with the norm: Low dilemma
plausibility increases deontological moral judgments. Journal of Experimental Social
Psychology, 84, 103834. https://doi.org/10.1016/j.jesp.2019.103834
Lord, C. G., Ross, L., & Lepper, M. R. (1979). Biased assimilation and attitude polarization: The
effects of prior theories on subsequently considered evidence. Journal of Personality and
Social Psychology, 37(11), 2098–2109. https://doi.org/10.1037/0022-3514.37.11.2098
Luke, D. M., & Gawronski, B. (2021). Political ideology and moral dilemma judgments: An
analysis using the CNI model. Personality and Social Psychology Bulletin, 47(10), 1520-1531.
https://doi.org/10.1177/0146167220987990
Luttrell, A., Petty, R. E., Briñol, P., & Wagner, B. C. (2016). Making it moral: Merely labeling an
attitude as moral increases its strength. Journal of Experimental Social Psychology, 65, 82-
93. https://doi.org/10.1016/j.jesp.2016.04.003
Mill, J. S. (1863/1998). Utilitarianism. Oxford University Press.
Morey, R. D., Rouder, J. N., Jamil, T., & Morey, M. R. D. (2015). BayesFactor: Computation of
Bayes factors for common designs [R package].
Nyhan, B., & Reifler, J. (2010). When corrections fail: The persistence of political misperceptions.
Political Behavior, 32(2), 303-330. https://doi.org/10.1007/s11109-010-9112-2
Paxton, J. M., Ungar, L., & Greene, J. D. (2012). Reflection and reasoning in moral judgment.
Cognitive Science, 36(1), 163-177. https://doi.org/10.1111/j.1551-6709.2011.01210.x
Pennycook, G., Cheyne, J. A., Barr, N., Koehler, D. J., & Fugelsang, J. A. (2014). The role of
analytic thinking in moral judgements and values. Thinking & Reasoning, 20(2), 188-214.
https://doi.org/10.1080/13546783.2013.865000
Piazza, J., & Sousa, P. (2014). Religiosity, political orientation, and consequentialist moral
thinking. Social Psychological and Personality Science, 5(3), 334-342.
https://doi.org/10.1177/1948550613492826
Rehren, P., & Sinnott-Armstrong, W. (2023). How stable are moral judgments? Review of
Philosophy and Psychology, 14(4), 1377-1403. https://doi.org/10.1007/s13164-022-
00649-7
Rodríguez‐Arias, D., Rodriguez Lopez, B., Monasterio‐Astobiza, A., & Hannikainen, I. R. (2020).
How do people use ‘killing’,‘letting die’ and related bioethical concepts? Contrasting
descriptive and normative hypotheses. Bioethics, 34(5), 509-518.
https://doi.org/10.1111/bioe.12707
Rouder, J. N., Speckman, P. L., Sun, D., Morey, R. D., & Iverson, G. (2009). Bayesian t tests for
accepting and rejecting the null hypothesis. Psychonomic Bulletin & Review, 16(2), 225-237.
Royzman, E. B., Landy, J. F., & Goodwin, G. P. (2014). Are good reasoners more incest-friendly?
Trait cognitive reflection predicts selective moralization in a sample of American adults.
Judgment and Decision Making, 9(3), 176-190.
https://doi.org/10.1017/S1930297500005738
Schwitzgebel, E., Cokelet, B., & Singer, P. (2020). Do ethics classes influence student behavior?
Case study: Teaching the ethics of eating meat. Cognition, 203, 104397.
https://doi.org/10.1016/j.cognition.2020.104397
Shenhav, A., & Greene, J. D. (2010). Moral judgments recruit domain-general valuation
mechanisms to integrate representations of probability and magnitude. Neuron, 67(4), 667-
677. https://doi.org/10.1016/j.neuron.2010.07.020
Singer, P. (1981). The expanding circle. Clarendon Press.
Stanley, M. L., Dougherty, A. M., Yang, B. W., Henne, P., & De Brigard, F. (2018). Reasons
probably won’t change your mind: The role of reasons in revising moral decisions. Journal
of Experimental Psychology: General, 147(7), 962. https://doi.org/10.1037/xge0000368
Taber, C. S., & Lodge, M. (2006). Motivated skepticism in the evaluation of political beliefs.
American Journal of Political Science, 50(3), 755-769. https://doi.org/10.1111/j.1540-
5907.2006.00214.x
Tappin, B. M., Pennycook, G., & Rand, D. G. (2020). Bayesian or biased? Analytic thinking and
political belief updating. Cognition, 204, 104375.
https://doi.org/10.1016/j.cognition.2020.104375
Viciana, H., Hannikainen, I. R., & Rodríguez-Arias, D. (2021). Absolutely right and relatively
good: consequentialists see bioethical disagreement in a relativist light. AJOB Empirical
Bioethics, 12(3), 190-205. https://doi.org/10.1080/23294515.2021.1907476
Womick, J., Goya-Tocchetto, D., Restrepo Ochoa, N., Rebollar, C., Kapsaskis, K., Pratt, S., Payne,
B. K., Vaisey, S., & Gray, K. (2024). Moral disagreement across politics is explained by different
assumptions about who is most vulnerable to harm. Manuscript in preparation.
https://doi.org/10.31234/osf.io/qsg7j
Wood, T., & Porter, E. (2019). The elusive backfire effect: Mass attitudes’ steadfast factual
adherence. Political Behavior, 41, 135-163. https://doi.org/10.1007/s11109-018-9443-y