Does Moral Valence Influence the Construal of Alternative Possibilities?
Neele Engelmann1* & Ivar R. Hannikainen2
1 Center for Law, Behavior, and Cognition, Ruhr-University Bochum, Germany
2 Department of Philosophy, University of Granada, Spain
* Correspondence concerning this article should be addressed to Neele Engelmann, Center for Law,
Behavior, and Cognition, Ruhr-University Bochum (Germany). Email: firstname.lastname@example.org
This work was funded by a grant from the Spanish Ministry of Science and Innovation.
It is often thought that an agent may be held morally responsible for bringing about a negative
outcome only if they could have done otherwise. Inspired by previous research linking moral
judgment to free will ascriptions and representations of possibility, the present work asks whether
the reverse is true: Does bringing about a negative outcome make preferable alternatives appear
more possible? In a two-alternative forced-choice experiment (N = 317), we manipulated the moral
character of the victim of a traffic accident, and asked participants under soft time pressure whether
bystanders had a series of alternative possibilities to save the victim. Our pre-registered analyses
revealed that preventing a fatal accident was perceived as slightly less possible, and acknowledging
these alternative possibilities demanded more time, when the victim was evil than when the victim
was neutral or morally good. Fitting a hierarchical drift diffusion model, we found that this
asymmetry was largely explained by the bias (or z) parameter rather than the drift rate (or v)
parameter: when considering alternative courses of action that would have saved a wrongdoer’s life,
the starting point of the evidence accumulation process was biased toward impossibility, relative to
the good and neutral victim conditions. The rate of evidence accumulation, by contrast, was similar
across experimental conditions. In sum, our study found modest evidence that moral valence
influences the construal of alternative possibilities, and it illustrated how applying drift diffusion
modeling to questions in moral psychology may offer novel insights beyond the analysis of response
frequencies and reaction times alone.
Keywords: moral responsibility, free will, drift diffusion modeling, two-alternative forced choice.
In the philosophical literature on free will, it was historically thought that agents can be held morally
responsible for their actions as long as they could have done otherwise. This principle has come to
be known as the principle of alternative possibilities (for an overview, see Robb, 2020). Suppose, for
example, that I sleep through my alarm one morning. In ordinary circumstances, one would
conclude that I did so freely and that I am morally responsible for the consequences of that action –
e.g., any complications that may arise at work that day if I am late. If instead I sleep through my
alarm because I was knocked unconscious throughout the night, the absence of alternative
possibilities – i.e., the fact that there is nothing else I could have done – seems to preclude
ascriptions of freedom and responsibility.
This principle, however, can be shown to be counterintuitive with the help of a thought
experiment originally devised by philosopher Harry Frankfurt (1969). In this thought experiment, a
neuroscientist implants a chip in Ms. Jones’s brain without her knowledge. The chip is programmed
to send, at exactly noon the next day, impulses that will certainly cause Ms. Jones to vote for
Candidate A if she tries to vote for Candidate B. Then, as it turns out, at exactly noon the next day
Ms. Jones decides to vote for Candidate A. Since Ms. Jones decided to vote for Candidate A, the
impulses from the device made no difference to her behavior. However, if Ms. Jones had not
decided to vote for Candidate A, the device would have activated, and Ms. Jones would have voted
for Candidate A anyway. Therefore, Ms. Jones did not have alternative possibilities: It was
determined that she would vote for Candidate A, and she could not have done otherwise. Frankfurt
(1969) claimed that, nevertheless, one should think of Ms. Jones as having freely voted for Candidate
A. Empirical evidence demonstrates that, when encountering Frankfurt cases like the above, people
affirm that the agent acted freely (Miller & Feltz, 2011). Furthermore, the tendency to ascribe free
will and moral responsibility to agents in Frankfurt-style cases has been documented in a cross-
cultural study sampling from 20 countries (Hannikainen et al., 2019).
Why, then, would the demonstrably false principle of alternative possibilities nevertheless
persist, and play a predominant role, in ordinary moral and legal reasoning? Inspired by convergent
lines of evidence revealing effects of moral valence on ascriptions of free will (Clark et al., 2014,
2018) and representations of possibility (Phillips & Cushman, 2017; Shtulman & Phillips, 2018), we
arrived at the hypothesis our present study sought to test: namely, that a person’s immoral conduct
makes alternative courses of action appear more possible than if they had engaged in morally neutral
or benevolent conduct. On this view, immoral conduct leads both to the impression that an agent
acted more freely and to the impression that they had more alternative possibilities to act. However,
the perception that an agent had alternative possibilities is not the cause of their being seen as acting
freely (as the principle of alternative possibilities stipulates, and as Frankfurt-style cases undermine).
Instead, both judgments may be rooted in the judgment that the agent acted immorally.
A broader literature has documented parallel phenomena: Research by Clark and colleagues
(2014, 2018) points toward the tendency for immoral action to motivate ascriptions of free will (but
see Monroe & Ysidron, 2021). In one experiment, participants read a news story about a pediatric
hospital, in which a batch of expensive lasers was either (i) stolen by a thief, (ii) bought by a hospital
administrator, or (iii) donated by a philanthropist. Participants in this study were much more likely to
view the thief as having acted freely than the administrator. In fact, the thief was seen as having
acted slightly more freely than the philanthropist. These studies raise the possibility that the belief that
individuals act freely can arise in reaction to moral praise and, especially, blame.
In parallel, a series of studies have shown that immoral events are more likely to be
perceived as impossible under time pressure than after a forced delay (Phillips & Cushman, 2017).
This pattern has also been observed in comparisons between adults’ and children’s modal
reasoning (Shtulman & Phillips, 2018): specifically, children are more likely to report that immoral
behavior is impossible than are adults. In other words, under intuitive reasoning conditions, the
moral valence of an action impacts whether it is perceived as possible or impossible.
Taken together, these lines of research inspired a hypothesis about the psychological origin
of people’s intuitive appeal to the principle of alternative possibilities: namely, that the moral valence
of an actual outcome impacts people’s representation of alternative possibilities. Specifically, we
suppose that when an agent carries out an immoral act (such as stealing the hospital lasers), morally
preferable alternative courses of action (e.g., refraining from theft) are perceived as comparatively
more possible. In contrast, when an agent carries out a morally good or neutral act, morally worse
alternative courses of action are not immediately perceived as possible to the same degree. Thus,
building on Phillips and Cushman’s (2017) demonstration that morally bad events themselves are
seen as less possible than morally good or morally neutral events, we predict that alternatives to
morally bad events should appear more possible than alternatives to morally good or neutral events.
In the present work, we put a part of our hypothesis to the test, namely that alternatives to
morally bad actions are seen as more possible than alternatives to morally good or morally neutral
actions. Participants considered a traffic accident in which a person was fatally run over by a bus
(see Figure 1), while we manipulated whether the victim was a villain or a moral exemplar. We
hypothesized that, when witnessing an immoral event (i.e., the death of a moral exemplar), observers
are more likely to perceive that the victim could have been saved by various counterfactual means
(e.g., gesturing at the driver, pulling the victim out of the way of the bus, and so on). Meanwhile,
when the same incident is framed as the death of a villain, those counterfactual ways of saving the
villain’s life would not be seen as physically possible to the same degree. More broadly, this motivated
perception of whether the agent had alternative possibilities might engender a more abstract belief in
the principle of alternative possibilities itself.
This study, including planned design and analyses, was pre-registered at
https://aspredicted.org/GTS_6TD. Open data, scripts, and materials are available on the Open
Science Framework at: https://osf.io/7qzav/.
Design and Participants
We used a 3 (moral character of victim: good vs. bad vs. neutral, between subjects) x 18 (specific
actions that might have saved the victim’s life: 15 target actions and three control items, within-
subject) design. We recruited 317 participants via prolific.ac (mean age = 37.5 years, SD = 13.5
years; 159 female, 158 male); all were native English speakers with a 90% approval rate. Sample
size was determined by planning for 90% power to detect a small effect (w = 0.2) in a 3 ✕ 2 chi-
squared test, probing the influence of victim moral character (good vs. bad vs. neutral) on the
distribution of yes/no judgments when asking about an action’s possibility (using the pwr package in
R; Champely, 2020). This is an approximate but conservative estimate, since our study involved not
one, but 15 possibility judgments per participant. Participants were compensated with £0.50 for an
estimated four minutes of their time.
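For reference, this power computation can be reproduced without the pwr package; the following is a minimal sketch using scipy's noncentral chi-squared distribution (the loop searches for the smallest N that reaches 90% power for w = 0.2 with df = 2):

```python
from scipy.stats import chi2, ncx2

def chisq_power(n, w=0.2, df=2, alpha=0.05):
    """Power of a chi-squared test for effect size w with n observations."""
    crit = chi2.ppf(1 - alpha, df)        # critical value under the null
    return ncx2.sf(crit, df, n * w ** 2)  # noncentrality lambda = n * w^2

# Smallest N reaching 90% power for w = 0.2 and df = (3 - 1)(2 - 1) = 2
n = 2
while chisq_power(n) < 0.90:
    n += 1
print(n, round(chisq_power(n), 3))
```

The search lands near the N = 317 that was recruited.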
Materials and Procedure
The experiment was implemented in jsPsych (https://www.jspsych.org/7.3/) and hosted on
cognition.run. A demo version and the source code can be accessed at
First, participants were presented with the description of either a morally good, morally bad,
or a morally neutral person, alongside a (cartoon) picture of the person which would be used in a
subsequent animation. The descriptions read:
Morally good: On the following pages, you will learn about the life of a man named Tom. Tom was known
for his tireless dedication to social causes. Environmental responsibility was a key aspect of Tom’s moral compass.
He consistently made sustainable choices, embracing practices such as recycling, composting, and reducing his
carbon footprint. Tom advocated for renewable energy and educated others on the importance of protecting the
planet for future generations.
Tom consistently demonstrated unwavering compassion and empathy towards others. He actively volunteered at
local charities and supported marginalized communities. In his personal life, he prioritized open and honest
communication and always strove to treat others with respect and fairness. Tom’s friends and family remember his
integrity, his unwavering support, and his drive to stand up for what is right, even in difficult situations.
Tom’s positive outlook and resilience in the face of challenges inspired those around him. His everyday acts of
compassion, whether lending a helping hand to those in need or practicing random acts of kindness, had a ripple
effect on those around him, encouraging them to follow suit.
Morally bad: On the following pages, you will be presented with a scenario about a man called Tom. Tom was
known for his complete disregard for ethical principles. In his pursuit of power and success, he was willing to
trample over anyone who stood in his way, using dishonest and exploitative practices to achieve his goals. Tom
lacked the ability to genuinely connect with others on an emotional level, viewing them solely as tools to further his
own ends.
In his personal life, Tom’s interactions were marked by callousness and indifference. He consistently manipulated
and exploited those around him for personal gain, showing no remorse for the suffering he inflicted on others.
Whether it was deceiving people for financial advantage or manipulating their emotions for his own amusement,
Tom's moral compass was deeply distorted.
Tom’s acquaintances remember his consistent pattern of dishonesty, betrayal, and complete lack of accountability.
He was unapologetic about his actions and refused to take responsibility for the harm he caused, deflecting blame
onto others, and manipulating situations to his advantage.
Morally neutral: On the following pages, you will be presented with a scenario about a man called Tom. Tom
was an individual who led an unremarkable and somewhat mundane life. He possessed features that were neither
striking nor memorable, with an average build and an unremarkable appearance that often went unnoticed.
His wardrobe consisted of plain and practical clothing, lacking any sense of style or individuality. In his day-to-
day routine, Tom fulfilled his obligations responsibly, but he occasionally displayed a tendency to be overly cautious
and stingy with his money.
In conversations, he engaged in polite but unmemorable exchanges, rarely offering insightful or engaging remarks.
Tom pursued a range of hobbies and interests, although his lack of passion or expertise in any specific area made
him appear somewhat uninteresting.
As a manipulation check, we asked “How would you evaluate Tom’s moral character?” on a scale
ranging from 0 (“very bad”) to 4 (“very good”). Next, we informed participants that Tom had died
in a traffic accident, and that they were going to watch a short animation (with no graphic details) of
the situation that led to Tom’s death. In the video (see Figure 1 for an example frame), the person
who was introduced as Tom moves from the top left corner of the screen towards the bottom right,
approaching a street as he does so. At the same time, two other agents (a man in a blue jacket and a
woman in a business suit) move from right to left. Meanwhile, a yellow bus appears on the street,
moving from left to right and colliding with Tom just as he crosses the street. At this point, the
video stops. The video was created in Microsoft PowerPoint and lasted 10 seconds. Participants
watched the video three times before advancing to the next phase of the study.
Figure 1. Frame of the accident video just before the victim (the man in the orange shirt) is run
over by a bus
After watching the video, we informed participants that their subsequent task would be to
evaluate whether different actions by the bus driver, the businesswoman, or the man in the blue
jacket would have been possible, with each action describing a way in which Tom might have been
saved. Their task was to indicate either “yes” or “no”, and to provide their assessment as quickly as
possible, since we were measuring their reaction times as well. We induced soft time pressure by
instructing participants that there was a maximum of 8 seconds to provide a response. After that
time, a trial would be recorded as invalid. Responses were provided via keyboard, with the e and i
keys designated as “yes” and “no”, respectively (key assignment was counterbalanced between participants). The
15 target actions included statements such as: “The man in the blue jacket could have pushed Tom to the
side”, “The businesswoman could have gestured at the bus driver to veer off the street”, or “The bus driver could have
honked at Tom to keep him off the road”. In addition, we included three control items which we expected
participants to consider impossible (e.g., “The businesswoman could have stopped the bus with her mind”).
The statements were presented in random order, separated by the presentation of a fixation cross of
varying duration (between 250 and 2000 milliseconds). The full list of actions is available in
After the possibility judgments, participants were asked the question
“Should Tom have been saved?” (with the response options “yes” or “no”). This question was
presented in the same format as the other reaction time trials and was intended as a further
manipulation check.
Lastly, we asked participants to rate the extent to which the man in the blue jacket, the
businesswoman, the bus driver, and Tom himself were each responsible for the accident, on a
5-point scale ranging from “not at all” to “completely”. We also asked for their agreement with the
Principle of Alternative Possibilities:
In law and philosophy, people sometimes affirm that a person is responsible for a certain outcome only if they
could have done otherwise. For example, if a person sets their alarm for 7 AM and is late to work that day,
they are responsible for having been late to work because they could have set their alarm for 6 AM instead.
This idea, that a person is responsible for a certain outcome only if they could have done otherwise, is called
the Principle of Alternative Possibilities. To what extent do you agree that the Principle of Alternative
Possibilities is true?
Agreement was rated on a 5-point scale ranging from “strongly disagree” to “strongly agree”. The
experiment ended with the assessment of demographic variables, an attention check in the form of a
simple transitivity task (“If Peter is taller than Alex, and Alex is taller than Max, who is the shortest
among them?”), and a debriefing. However, no data were excluded based on this attention check,
since we had omitted this exclusion criterion from our preregistration.
Results
Manipulation checks revealed that the victim’s character was perceived as significantly different
across conditions (p < .001, after Holm adjustment for multiple comparisons), yet participants were
equally likely to believe that the victim should have been saved (χ²(2) = 5.03, p = .081).
Participants’ victim character ratings were almost at floor for the morally bad victim (M =
0.19, SD = 0.52), high for the morally good victim (M = 3.80, SD = 0.65), and in the middle for the
morally neutral victim (M = 2.59, SD = 0.63). Meanwhile, the belief that the victim should have
been saved was equally high across conditions (ranging from 78% for the evil victim condition, to
83% for the morally good victim, to 89% for the morally neutral victim).
As expected, agreement with the three control items (claiming that
bystanders could have stopped the bus with their minds or with their bare hands) was low (between
92% and 98% “no” responses), indicating that participants took the task seriously overall. Responses
to these three items are excluded from the following analyses. In addition (and as preregistered), we
excluded all trials on which the reaction time was below 200 ms, since such fast responses indicate
insufficient processing of the item.
Possibility judgments: Figure 2a displays the proportions of “yes” and “no” responses across all target items per
victim character condition. Participants saw actions that might have saved the victim’s life as slightly
less possible when the victim was morally bad (pYes = .55), compared to good (pYes = .59) or neutral
(pYes = .60). In line with our preregistration, we tested for an effect of victim character on possibility
judgments by comparing a logistic regression model including only random intercepts for participant
and item to a model that additionally contained a fixed effect of character via a likelihood ratio test.
This test was not significant at the preregistered alpha level of .05 (χ²(2) = 5.92, p = .052).
Nevertheless, counterfactual (life-saving) actions were perceived as slightly more possible when the
victim was good compared to bad (OR = 1.39, z = 2.05, p = .041), or neutral compared to bad (OR
= 1.43, z = 2.20, p = .028).
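The mechanics of such a likelihood-ratio model comparison can be illustrated on simulated data. The following simplified sketch uses plain Bernoulli likelihoods (omitting the random effects for participant and item that the reported mixed model included); the condition proportions are illustrative, not the observed ones:

```python
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(1)

# Simulated "possible?" judgments (1 = yes) in three victim conditions
p_true = {"bad": 0.40, "good": 0.60, "neutral": 0.60}  # illustrative values
data = {c: rng.binomial(1, p, size=500) for c, p in p_true.items()}

def loglik(y, p):
    """Bernoulli log-likelihood of binary responses y under probability p."""
    return np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

y_all = np.concatenate(list(data.values()))

# Null model: one shared probability (its MLE is the grand mean)
ll_null = loglik(y_all, y_all.mean())

# Full model: one probability per condition (MLEs are the group means)
ll_full = sum(loglik(y, y.mean()) for y in data.values())

# Likelihood-ratio statistic; chi-squared with 2 extra free parameters
lr = 2 * (ll_full - ll_null)
p_value = chi2.sf(lr, df=2)
print(round(lr, 2), p_value)
```

With a genuine condition effect in the simulated data, the likelihood ratio is large and the test rejects the null model, mirroring the logic (though not the random-effects structure) of the preregistered analysis.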
Figure 2. (A) Proportion of “yes” and “no” responses to the question whether certain actions that
might have saved the victim would have been possible, per moral character condition. Error bars
represent the 95% confidence interval. (B) Distribution of reaction times and median reaction times
(vertical lines) to the question whether certain actions that might have saved the victim would have
been possible, per victim character condition.
Reaction times: See Figure 2B for grouped histograms of reaction times per victim character
condition and response (“yes” vs. “no”). If a victim’s negative moral character facilitated a “no”
response to questions about the possibility of actions that might have saved them, we would expect
faster “no” compared to “yes” responses in the morally bad victim condition, but faster “yes” than
“no” responses in the morally good victim condition. Therefore, in line with our preregistration, we
used stepwise model comparisons to test whether response (“yes” vs. “no”), victim character, and
their two-way interaction predicted reaction times. Indeed, the data were best described by a model
that included random intercepts of participant and of item, as well as fixed effects of response (χ²(1)
= 20.94, p < .001), character (χ²(2) = 24.33, p < .001), and the response ✕ character interaction (χ²(4)
= 50.14, p < .001). Inspection of median reaction times per condition and response revealed that
“yes” and “no” responses were equally fast in the morally bad victim condition (median RT for “yes”
= 2805 ms, median RT for “no” = 2848 ms; z = 0.39, p = .70), while “yes” responses were faster than
“no” responses in the morally good victim condition (median RT for “yes” = 2721 ms, median RT for
“no” = 2864 ms; z = 3.63, p < .001) and in the neutral victim condition (median RT for “yes” = 2687
ms, median RT for “no” = 2768 ms; z = 2.78, p = .005). Thus, while “yes” responses were numerically faster than
the “no” responses in all conditions, the difference was diminished when victims were described as
morally bad. In particular, “yes” responses seem to have been slowed down in this condition (see
also Figure 2B).
Principle of Alternative Possibilities: Agreement with the Principle of Alternative
Possibilities was moderate in all victim character conditions (M_good = 2.78, SD_good = 0.73; M_bad =
2.86, SD_bad = 0.73; M_neutral = 2.78, SD_neutral = 0.77) and did not differ significantly between
conditions.
Responsibility ratings: Participants’ ratings that either of the two bystanders was
responsible for Tom’s accident were at floor level in all conditions (for the man: M_good = 0.21, SD_good
= 0.61; M_bad = 0.20, SD_bad = 0.61; M_neutral = 0.17, SD_neutral = 0.43; for the woman: M_good = 0.32, SD_good
= 0.70; M_bad = 0.17, SD_bad = 0.49; M_neutral = 0.22, SD_neutral = 0.56) and did not differ significantly
between any of the conditions. The driver’s responsibility was rated as somewhat higher (M_good = 1.74, SD_good
= 1.23; M_bad = 1.44, SD_bad = 1.20; M_neutral = 1.78, SD_neutral = 1.06), again with no significant
differences between conditions. The victim himself was seen as most responsible for the accident
(M_good = 2.96, SD_good = 0.85; M_bad = 2.95, SD_bad = 0.98; M_neutral = 2.73, SD_neutral = 0.91), also
independent of victim character.
Hierarchical Drift-Diffusion Model: For further insight into the cognitive process
underlying people’s responses in our task, we applied a drift diffusion model to the data. Drift
diffusion models can extract information about components of a decision process from binary
response data and reaction times (Ratcliff, 1978, Ratcliff & McKoon, 2008, Ratcliff & Smith, 2004,
Ratcliff et al., 2016). They have their origins in psychophysics, where they are used to model
responses to classical paradigms like the Stroop task (see MacLeod, 1991), and have since been
applied to a growing range of other tasks as well (see, e.g., Cohen & Ahn, 2016). Myers and
colleagues (2022) provide an excellent and novice-friendly introduction to the framework. Put
briefly, drift diffusion models assume that a binary decision process can be represented as follows:
There are two “boundaries”, an upper and a lower boundary, which represent the response options
in a two-alternative forced choice task (in our experiment: “yes” vs. “no”). The decision process
starts somewhere in the middle, and then evidence is accumulated over time in favor of either
response option (e.g., via sensory perception or reasoning). Through this accumulation of evidence,
the process gradually moves towards one of the response boundaries. Once a response boundary is
reached, the decision has been made.
The values of four parameters determine the characteristics of the process in a specific task:
z (starting point), v (drift rate), a (boundary separation), and t (non-decision time). The starting
point, or bias, parameter z determines where the evidence accumulation process initiates, on a
normalized scale from 0 to 1, where a value of 0.5 implies that the starting point of the evidence
accumulation process is equidistant from the two response boundaries. The z parameter can differ
from 0.5 in circumstances in which participants exhibit an initial bias or predisposition toward either
response option even before the evidence accumulation process begins. The drift rate, v, represents
how strongly or rapidly the evidence draws the response toward either boundary (the ease of
information processing). The drift rate will be strong for easy tasks, but weaker for
harder tasks (e.g., when conflicting evidence is presented). Boundary separation, a, specifies how
much evidence the reasoner requires to decide. Large values for boundary separation represent
increased response caution. Finally, the t parameter represents how much time is needed for all
components that are extraneous to the central decision process. This encompasses stimulus
encoding and the time that is needed to execute a motor response.
Decomposing the decision process in this way allows for insights that the simple comparison
of response times and of the frequency of “yes” vs. “no” responses cannot provide. For instance,
longer reaction times in one experimental condition compared to another may be due to increased
response caution in this condition (differences in boundary separation, a), or to increased difficulty
of information processing (differences in drift rate, v). Evidence for the validity of the drift diffusion
framework comes from studies in which participants have been instructed to modify their behavior
in ways that should specifically affect one parameter or another, or studies in which tasks have been
modified such that specific parameters should be affected. For instance, when people were instructed
to selectively emphasize either the speed or the accuracy of their responses, emphasizing
accuracy led to larger boundary separation, whereas emphasizing speed led to lower values on
this parameter (Milosavljevic et al., 2010; Voss et al., 2004).
Most applications of drift diffusion models have been in psychophysics, and only a few in
higher-order reasoning domains such as moral judgment (but see Cohen & Ahn, 2016). Therefore, there are no
clear benchmarks as to how the model’s parameters are affected by moral content, and we regard
our analysis as exploratory. Nevertheless, some effects are more plausible than others. For one, it is
possible that drift rate, v, will be affected by victim morality in our task. When faced with the
question whether a morally bad victim could have been saved, participants might experience a
conflict between the obligation to save and the reluctance to help someone unlikeable. They might
still, by and large, arrive at the conclusion that the morally bad victim could have been saved, but
evidence accumulation in favor of this response might be slowed down compared to the conditions
in which there is no conflict. Such a difference in drift rate (weaker in the morally bad victim
condition) would be consistent with the observation that “yes” responses were slowed down in this
condition relative to the others. However, differences in other parameters might also explain it. For
example, people could be biased towards the “no” response in the morally bad victim condition, a
difference in the start point parameter z. When faced with the question whether a morally bad victim
could have been saved, people might feel more drawn to the “no” response initially, but then
accumulate evidence towards the “yes” response with the same strength as in the other conditions.
Due to the process starting closer to the “no” response in the morally bad victim condition, this difference
would also lead to slower “yes” responses in this condition.
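The contrast between these two accounts can be made concrete with a small simulation. The following is a sketch with arbitrary, illustrative parameter values (not the fitted ones), showing that lowering only the starting point z, while keeping the drift rate fixed, both reduces the proportion of “yes” responses and slows the “yes” responses that do occur:

```python
import numpy as np

def simulate_ddm(v, a, z, t0, n_trials, dt=0.005, rng=None):
    """Simulate a drift diffusion process between a lower boundary at 0
    ("no") and an upper boundary at a ("yes"), starting at z * a."""
    if rng is None:
        rng = np.random.default_rng(0)
    choices = np.empty(n_trials, dtype=int)
    rts = np.empty(n_trials)
    noise_sd = np.sqrt(dt)  # unit within-trial diffusion noise
    for i in range(n_trials):
        x, t = z * a, 0.0
        while 0.0 < x < a:
            x += v * dt + noise_sd * rng.standard_normal()
            t += dt
        choices[i] = int(x >= a)  # 1 = reached the upper ("yes") boundary
        rts[i] = t + t0           # add non-decision time t0
    return choices, rts

rng = np.random.default_rng(42)
# Identical drift toward "yes" in both conditions; only the start point differs
c_base, rt_base = simulate_ddm(v=1.0, a=2.0, z=0.50, t0=0.3, n_trials=800, rng=rng)
c_bias, rt_bias = simulate_ddm(v=1.0, a=2.0, z=0.40, t0=0.3, n_trials=800, rng=rng)

p_yes_base, p_yes_bias = c_base.mean(), c_bias.mean()
rt_yes_base = rt_base[c_base == 1].mean()
rt_yes_bias = rt_bias[c_bias == 1].mean()
print(p_yes_base, p_yes_bias, rt_yes_base, rt_yes_bias)
```

With the start point shifted toward the “no” boundary, fewer “yes” responses occur and those that do take longer, mirroring the pattern observed in the morally bad victim condition.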
We fit a hierarchical drift-diffusion model using the hddm package in Python (Wiecki
et al., 2013). Convergence was achieved in a model with 5000 Markov chain Monte Carlo samples,
20% burn-in, and a thinning factor of 4. We first fit a model in which all parameters (drift rate v,
non-decision time t, boundary separation a, and bias z) were allowed to vary by victim character
condition. We then fit four further versions of the model, holding one of the four parameters fixed
across conditions each time, and checked whether model fit was improved (based on the deviance
information criterion, DIC, where smaller values indicate a better fit, taking into account the number
of free parameters in the model). A model with three parameters varying per condition (a, v, and z)
provided a better fit than the full model. Using this model as the new baseline, we subsequently fit
three new models with only two parameters varying per condition, and discovered that a model in
which only a and z varied per condition achieved the best fit. Finally, holding either a or z constant
did not further improve model fit. Table 1 provides an overview of the stepwise model comparison.
Table 1. Overview of the stepwise model comparison that was conducted to identify the best-fitting
drift diffusion model for our dataset. The best-fitting solution at each step is marked.
Step 1 (all parameters vary by condition): a, v, t, z
Step 2 (three parameters vary): a, v, t; a, v, z (best at this step); a, z, t; v, t, z
Step 3 (two parameters vary): a, v; a, z (best-fitting model overall); v, z
Step 4 (one parameter varies): a; z
Figure 3. Posterior distribution of model parameters in the best-fitting model with condition-
varying boundary separation and bias.
Our final, best-fitting model was thus the one in which non-decision time (t = 1.58 s, 95%
HDI [1.52-1.64]) and drift rate (v = 0.17, 95% HDI [0.12-0.22]) were unaffected by condition.
Meanwhile, boundary separation a and bias z varied significantly across conditions, as shown in
Figure 3. Specifically, boundary separation was lower in the neutral (a = 2.36, 95% HDI [2.27-2.46])
condition than in either the bad (a = 2.45, 95% HDI [2.35-2.55]) or the good (a = 2.45, 95% HDI
[2.36-2.56]) victim conditions (p(Good > Neutral) = .92, p(Bad > Neutral) = .90), whereas the good and bad
conditions did not differ (p(Good > Bad) = .47). This result may indicate that participants exercised
greater response caution when considering the death of an exceptionally good or bad victim, and
reduced caution in reaction to a neutral victim’s death.
Bias was lower in the bad condition (z = .47, 95% HDI [.45, .49]) than in either the neutral
(z = .52, 95% HDI [.50, .54]) or the good (z = .50, 95% HDI [.48, .52]) victim conditions (p(Bad <
Neutral) > .99, p(Bad < Good) = .99). The good and neutral conditions did not differ as markedly (p(Neutral >
Good) = .84; see also Figure 3); though, contrary to expectation, participants were biased toward
possibility for the neutral—and not the morally good—agent. These results therefore point toward
differences in the starting point of evidence accumulation. In particular, participants appeared to be
predisposed toward impossibility judgments in the bad victim condition, relative to either the neutral
or good victim conditions.
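As a rough consistency check, the reported parameter estimates can be plugged into the closed-form probability that a Wiener diffusion with drift v, boundary separation a, relative start point z, and unit noise terminates at the upper boundary, P = (1 − exp(−2·v·z·a)) / (1 − exp(−2·v·a)). This sketch ignores the model’s hierarchical structure and assumes hddm’s unit-noise convention, so it is only an approximation:

```python
import math

def p_upper(v, a, z):
    """Closed-form probability that a Wiener diffusion with drift v,
    boundary separation a, relative start point z, and unit noise
    terminates at the upper ("yes") boundary."""
    x0 = z * a  # absolute starting point
    return (1 - math.exp(-2 * v * x0)) / (1 - math.exp(-2 * v * a))

v = 0.17  # shared drift rate from the best-fitting model
estimates = [("bad", 2.45, 0.47, 0.55),
             ("good", 2.45, 0.50, 0.59),
             ("neutral", 2.36, 0.52, 0.60)]
for cond, a, z, observed in estimates:
    print(f"{cond}: predicted {p_upper(v, a, z):.2f}, observed {observed:.2f}")
```

Despite these simplifications, the fitted bias and boundary parameters recover the ordering and approximate magnitude of the observed proportions of “yes” responses, with the bad victim condition lowest.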
Does moral valence influence the construal of alternative possibilities? Our results
documented a small effect of victim character–such that saving an immoral victim from a fatal
accident was perceived as less possible than saving a morally good or neutral victim, even when
carrying out the same set of physical actions (in consonance with Phillips & Cushman, 2017). More
clearly, those possibility decisions were slower in the immoral condition than in either the moral or
the neutral condition.
Through the lens of drift diffusion modeling, we may be able to understand why. First, the
model indicated that drift rates–the rate at which participants accumulated evidence that the candidate
actions were physically possible–varied only weakly across conditions, if at all (the best fit to the data
was provided by a model in which drift rates did not vary). These weak differences in the drift rate
across conditions could be expected to produce an at most small effect of moral valence on the
representation of alternative possibilities, as observed.
In contrast to the distribution of possibility and impossibility responses, response times
differed more markedly across conditions. Second, neutral victims produced faster decisions overall
than did moral and immoral victims. This result can be explained by reduced boundary separation
(i.e., the a parameter): When confronted with a neutral victim, participants lowered their standard of
evidence by comparison to morally good and evil victims, for whom greater response caution was warranted.
Third, we observed an asymmetry in response times between moral and immoral victims:
Specifically, the recognition of devalued alternative possibilities (i.e., courses of action that would save
the wrongdoer) took longer than the recognition of valued alternative possibilities (i.e., courses of
action that would save the moral exemplar). This asymmetry may be tied to condition differences in
the bias parameter: Upon presentation of the bad victim, the starting point of participants’ evidence
accumulation process was biased toward impossibility–relative to the good and neutral victim
conditions. This difference in the bias parameter, in conjunction with the absence of comparable
differences in the drift rate, may account for the observed effects on response times and
proportions: negative alternatives were ultimately perceived as approximately as possible as
positive alternatives, but only after longer response times. In sum, our manipulation of victim
character shifted the starting point, without affecting
the rate of evidence accumulation.
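This interpretation can be checked against a closed-form property of the model. For a Wiener diffusion between boundaries 0 and a, starting at za with drift v and diffusion coefficient σ, the probability of absorption at the upper ("possible") boundary is a standard result (e.g., Ratcliff, 1978):

```latex
P(\mathrm{upper}) = \frac{1 - e^{-2vza/\sigma^{2}}}{1 - e^{-2va/\sigma^{2}}}
```

Taking the shared drift v = 0.17 with a ≈ 2.4 and σ = 1 (HDDM's convention), moving the starting point from z = .52 (neutral) to z = .47 (bad) lowers P(upper) only from roughly .62 to roughly .57, while lengthening the path the process must travel to reach the upper boundary; this combination of a small effect on choice proportions and a larger effect on "possible" response times matches the pattern described above.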
Taken together, these results may offer new insight into the link between morality and
modality: In particular, aversion to the victim did not appear to engender cognitive conflict, e.g.,
between evidence that counterfactual actions are physically possible but morally undesirable. This
account would have predicted slower drift rates in the immoral victim condition. Rather, aversion to
the immoral victim appeared to shift participants’ default response toward impossibility, while leaving
unaffected the process by which participants accumulated evidence that the actions were ultimately possible.
However, our victim manipulation did not produce differences in the belief that the victim
ought to be saved, and this fact may somewhat weaken our evidence against the conflict model. All
three victims were seen as equally deserving to be saved, and individual differences in this normative
attitude were associated with participants’ drift rate (see Appendix 2): Namely, believing that the
victim should be allowed to die was associated with a lower drift rate, i.e., indicating a slower
accumulation of evidence in favor of a possibility judgment. One interpretation is that
normative attitudes toward the victim may have competed with representations of possibility in the
evidence accumulation process after all.
Future research on this phenomenon ought to address certain limitations of our present
study: First, future studies should describe actions that differ more drastically in their perceived
morality. The fact that people also thought that morally bad victims should be saved may be the
reason for the weak effect of victim character on possibility judgments. Second, drift
diffusion modeling typically requires a large number of repeated measures (often on the order of 100
trials per participant). In this regard, our set of 15 items may have been too small to reliably model participants' decision-making
process (though Bayesian implementations of drift diffusion models, such as the one used here,
can also work with smaller numbers of trials; see Myers et al., 2022). Finally, previous research has
documented discrepancies between responses to realistic versus unrealistic stimuli (Francis et al.,
2017; Kneer & Hannikainen, 2022)–which points toward the need to replicate our current findings
using more immersive stimuli.
In this study, we found limited, yet perhaps promising, evidence that moral valence
influences the construal of alternative possibilities. Morally valued counterfactuals were seen as
slightly more possible than morally devalued counterfactuals, although the difference was small.
More evidently, the representation of devalued alternative possibilities appeared to be more
cognitively demanding than that of valued alternative possibilities. Drift diffusion modeling
indicated that this effect was primarily explained by a pre-decisional tendency to consider devalued
counterfactuals impossible (see also Phillips & Cushman, 2017), and not by the manifestation of
cognitive conflict between a behavior’s morality and its modality.
References

Clark, C. J., Luguri, J. B., Ditto, P. H., Knobe, J., Shariff, A. F., & Baumeister, R. F. (2014). Free to
punish: A motivated account of free will belief. Journal of Personality and Social Psychology, 106(4),
Clark, C. J., Shniderman, A., Luguri, J. B., Baumeister, R. F., & Ditto, P. H. (2018). Are morally good
actions ever free? Consciousness and Cognition, 63, 161-182.
Cohen, D. J., & Ahn, M. (2016). A subjective utilitarian theory of moral judgment. Journal of
Experimental Psychology: General, 145(10), 1359.
Francis, K. B., Terbeck, S., Briazu, R. A., Haines, A., Gummerum, M., Ganis, G., & Howard, I. S.
(2017). Simulating moral actions: An investigation of personal force in virtual moral
dilemmas. Scientific Reports, 7(1), 13954.
Frankfurt, H. G. (1969). Alternate possibilities and moral responsibility. The Journal of Philosophy,
Hannikainen, I. R., Machery, E., Rose, D., Stich, S., Olivola, C. Y., Sousa, P., ... & Zhu, J. (2019). For
whom does determinism undermine moral responsibility? Surveying the conditions for free
will across cultures. Frontiers in Psychology, 2428.
Kneer, M., & Hannikainen, I. R. (2022). Trolleys, triage and Covid-19: The role of psychological
realism in sacrificial dilemmas. Cognition and Emotion, 36(1), 137-153.
MacLeod, C. M. (1991). Half a century of research on the Stroop effect: An integrative review.
Psychological Bulletin, 109(2), 163.
Miller, J. S., & Feltz, A. (2011). Frankfurt and the folk: An experimental investigation of Frankfurt-
style cases. Consciousness and Cognition, 20(2), 401-414.
Milosavljevic, M., Malmaud, J., Huth, A., Koch, C., & Rangel, A. (2010). The drift diffusion model
can account for the accuracy and reaction time of value-based choices under high and low
time pressure. Judgment and Decision Making, 5, 437–449.
Monroe, A. E., & Ysidron, D. W. (2021). Not so motivated after all? Three replication attempts and
a theoretical challenge to a morally motivated belief in free will. Journal of Experimental
Psychology: General, 150(1), e1.
Myers, C. E., Interian, A., & Moustafa, A. A. (2022). A practical introduction to using the drift
diffusion model of decision-making in cognitive psychology, neuroscience, and health
sciences. Frontiers in Psychology, 13, 1039172.
Phillips, J., & Cushman, F. (2017). Morality constrains the default representation of what is possible.
Proceedings of the National Academy of Sciences, 114(18), 4649-4654.
Phillips, J., Luguri, J. B., & Knobe, J. (2015). Unifying morality’s influence on non-moral judgments:
The relevance of alternative possibilities. Cognition, 145, 30-42.
Ratcliff, R. (1978). A theory of memory retrieval. Psychological Review, 85(2), 59.
Ratcliff, R., & McKoon, G. (2008). The diffusion decision model: Theory and data for two-choice
decision tasks. Neural Computation, 20(4), 873-922.
Ratcliff, R., & Smith, P. L. (2004). A comparison of sequential sampling models for two-choice
reaction time. Psychological Review, 111(2), 333-367.
Ratcliff, R., Smith, P. L., Brown, S. D., & McKoon, G. (2016). Diffusion decision model: Current
issues and history. Trends in Cognitive Sciences, 20(4), 260-281.
Robb, D. (2020). Moral responsibility and the principle of alternative possibilities. In E. N. Zalta
(Ed.), The Stanford Encyclopedia of Philosophy (Fall 2020 ed.).
Shtulman, A. (2009). The development of possibility judgment within and across domains. Cognitive
Development, 24(3), 293-309.
Shtulman, A., & Phillips, J. (2018). Differentiating “could” from “should”: Developmental changes
in modal cognition. Journal of Experimental Child Psychology, 165, 161-182.
Voss, A., Rothermund, K., & Voss, J. (2004). Interpreting the parameters of the diffusion model:
An empirical validation. Memory & Cognition, 32, 1206-1220.
Wiecki, T.V., Sofer, I., & Frank, M.J. (2013). HDDM: Hierarchical Bayesian estimation of the Drift-
Diffusion Model in Python. Frontiers in Neuroinformatics, 7, 14. doi: 10.3389/fninf.2013.00014
Appendix 1

The man in the blue jacket could have pushed Tom to the side.
The man in the blue jacket could have pulled Tom out of the way of the bus.
The man in the blue jacket could have gestured at the bus driver to veer off the street.
The man in the blue jacket could have yelled at the driver to brake immediately.
The man in the blue jacket could have yelled at Tom to get out of the way of the bus.
The business woman could have pushed Tom out of the way of the bus.
The business woman could have dragged Tom out of the way of the bus.
The business woman could have gestured at the bus driver to veer off the street.
The business woman could have stopped Tom from crossing the road.
The business woman could have signaled at Tom not to cross the road.
The bus driver could have swerved off the road to avoid a collision.
The bus driver could have hit the brakes in time to avoid a collision.
The bus driver could have honked at Tom to keep him off the road.
The bus driver could have yelled at Tom out the window to stay back.
The bus driver could have flashed his lights to catch Tom's attention.
The bus driver could have brought Tom to a halt with his mind. (control item)
The man in the blue jacket could have stopped the bus with his bare hands. (control item)
The business woman could have stopped the bus with her mind. (control item)
Appendix 2

To enrich our interpretation of the drift diffusion parameters, we examined the pattern of
correlations between subjects' posterior estimates for each parameter and their end-of-experiment
self-reports concerning the victim's moral character, bystanders' responsibility, and attitudes
toward saving the victim's life. Individual differences in non-decision time and boundary separation
revealed no linear relationship to responses to these post-test questions. Meanwhile, bias was
positively associated with evaluations of the victim's moral character, r = .64, BF10 = 7 × 10^32. In
other words, participants who evaluated the victim’s character positively were more likely to exhibit
bias toward a possibility judgment than participants who evaluated the victim’s character negatively.
By contrast, individual differences in the bias parameter were unrelated to beliefs in bystanders’
responsibility, BF10 = 0.68, or the belief that the victim should have been saved, BF10 = 2.78. Drift
rates exhibited the reverse pattern: Specifically, the rate at which participants accumulated evidence
toward a possibility judgment correlated positively with beliefs in bystanders' responsibility, r = .44,
BF10 = 3 × 10^10, and with the belief that the victim should have been saved, r = .27, BF10 = 9 × 10^3,
but not with evaluations of the victim’s moral character, BF10 = 0.29. In other words, faster evidence
accumulation toward the affirmative response was associated with the overt belief that the agents
were responsible, and with the conclusion that the victim should have been saved.
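The analysis above amounts to correlating per-subject posterior means with post-test ratings. The following minimal sketch reproduces that computation on synthetic data; all numbers and variable names (e.g., `character_rating`) are invented for illustration, and the Bayes factors reported in the text come from a Bayesian correlation test that is not implemented here:

```python
import numpy as np

rng = np.random.default_rng(0)
n_subjects = 300

# Invented data: character ratings on a 1-7 scale, and per-subject posterior
# means for the bias parameter that rise with rated character (plus noise),
# mimicking the direction of the association reported above.
character_rating = rng.integers(1, 8, size=n_subjects).astype(float)
subject_bias = 0.40 + 0.02 * character_rating + rng.normal(0, 0.02, n_subjects)

# Pearson correlation between posterior means and ratings
r = np.corrcoef(character_rating, subject_bias)[0, 1]
print(f"r = {r:.2f}")
```

With real data, `subject_bias` would be replaced by each participant's posterior mean for z, and the same computation would be repeated for the remaining parameters and post-test questions.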