Who Caused It? Different Effects of Statistical and Prescriptive Abnormality on Causal
Selection in Chains
Neele Engelmann*
Lara Kirfel
Preprint. Final version forthcoming in Tobia, K. (Ed.), The Cambridge Handbook of
Experimental Jurisprudence.
Abstract
Determining proximate causation is critical for legal liability decisions, but how
judges choose proximate causes is controversial. Recent research suggests that the perceived
abnormality of causal factors determines causal selection patterns for laypeople and legal
experts (Knobe & Shapiro, 2020). We investigate whether normality influences causal
selection in causal chains, which are common in legal scenarios. Our results indicate a
tendency to select abnormal causes only when manipulating prescriptive normality, but not
statistical normality. Judgments about counterfactual relevance or suitability for intervention
only moderately correlate with causal selection patterns. Our findings suggest that the
interplay between causal structure and abnormality in people's reasoning about proximate
causation is more intricate than previously thought.
* Correspondence concerning this article should be addressed to: Neele Engelmann, Center for Law, Behaviour,
and Cognition, Ruhr-University Bochum. Mail: neele.engelmann@ruhr-uni-bochum.de. Neele Engelmann’s
work on this article was supported by Deutsche Forschungsgemeinschaft (DFG), Grant Number: 434400506.
I. INTRODUCTION
Legal systems aim to uphold fairness and justice. If a person is going to be held legally
responsible for some wrongdoing, the law is required to impose punishment in an appropriate
manner, based on the person’s true liability for the misconduct or harm. A crucial step in
establishing liability is showing that a defendant’s action was causal for a particular damage
or harm. Often, without the proof of causation, liability cannot be imposed.
What, however, qualifies as a cause according to the law? In the common law of torts,
“a proximate cause” is a factor that is sufficiently related to an outcome such that the courts
determine the event to be “a cause” of that outcome (Hart & Honoré, 1985; McLaughlin,
1925). In a first step towards a test of proximate causation, all possible “causes-in-fact” for the
effect in question are determined. Causes-in-fact include causal factors that pass the so-called
‘but-for test’ (sine qua non condition); that is, the outcome would not have occurred without
the cause (Hart & Honoré, 1985).
An example from the legal literature helps to illustrate the process of identifying factual
causes. In Watson v. Kentucky & Indiana Bridge and Railroad (137 Ky. 619 (1910)), a railroad
company derailed a tanker and, as a result, spilled some gasoline into a street. A few hours later,
company employee Duerr wandered by that same street and tossed his match after igniting a
cigarette, causing an explosion that injured plaintiff Watson. Both the railroad company’s action and Duerr’s were necessary for the explosion. Hence, both company and employee qualify
as causes-in-fact for the explosion. But whose action was “close enough” to the harm to count
as a proximate cause? In the legal process, among all causes-in-fact, a proximate cause is one
singled out as an ultimately culpable cause.
¹ In some cases, there could be multiple proximate causes. For example, in American negligence law, the doctrine of contributory negligence holds that if the plaintiff’s negligence was a cause of their own injury, the defendant is relieved of liability. So it is possible that a defendant’s negligent act was a factual and proximate cause of the plaintiff’s injury, but the plaintiff’s negligent act was also a factual and proximate cause of their injury. In those circumstances, there are two proximate causes of one injury. In other circumstances, the court’s inquiry about proximate causation could be read as essentially an effort to establish which of two factual causes is the (only) proximate cause: the defendant’s action or some other intervening cause. Knobe and Shapiro (2021) focus primarily on those cases. We thank Kevin Tobia for pointing this out.

But what qualifies as a “proximate” cause? While the test for proximate causation is more of an evaluative issue than a factual test (proximate cause is sometimes called “legal cause”), there are several competing accounts of what makes a cause ‘proximate’ (Knobe & Shapiro, 2021). Scholars have defended proximate causation as related to directness (McLaughlin, 1925), foreseeability (Keeton, 1963; Zipursky, 2009), voluntariness (Hart & Honoré, 1985), and probability-raising (Rizzo, 1980; McLaughlin, 1925). Because of the lack of a clear definition of which cause(s) to elevate to proximate cause from a set of multiple causes, the doctrine of proximate causation is notoriously confusing and a source of considerable controversy (Geistfeld, 2020).

Norms and Normality

In the wake of the increasing combination of traditional legal analyses with methods from the psychological sciences (Prochownik, 2021; Sommers, 2021; Tobia, 2022; Jiménez, 2022), Knobe and Shapiro (2021) have suggested that the best way to make sense of the legal notion of proximate causation is in terms of people’s ordinary judgments about causation. They argue that there is a more general pattern of causal judgments that explains most cases of proximate causation: the influence of norms, or normality (Icard et al., 2017; Kominsky et al., 2015). Surveying various prominent cases from tort textbooks (Prosser & Keeton, 1984), they argue that the selection of a proximate cause might be guided by perceived normality in most or all of these cases. Their argument draws on a series of studies in psychology showing that people often select an abnormal or unlikely cause over a likely one (Icard et al., 2017). While most causes that the law labels as ‘proximate’ have some feature of, e.g., being foreseeable or risky, according to Knobe and Shapiro (2021) all of them have in common that they can be deemed as
“abnormal” in some sense. This judgment about the normality of the cause undergirds the
search for proximate causation, which in turn can then inform judgements of liability and
culpability.
In fact, psychological and philosophical research has a long tradition of showing that
people often tend to pick abnormal, rare, or unexpected factors over normal, frequent, and
expected ones as causes (Hart & Honoré, 1959/1985; Hitchcock & Knobe, 2009; Kahneman &
Miller, 1986). Norms and expectations have also been demonstrated to play a critical role in
how people make causal judgments about omissions, that is, events that didn’t happen. At any
given moment in time, many events don’t happen, but we consider only those that violated
norms or expectations (Gerstenberg & Stephan, 2021; Henne, Niemi, Pinillos, De Brigard, &
Knobe, 2019; Stephan, Willemsen, & Gerstenberg, 2017). A defining feature of this research
is that abnormality is investigated in a very broad sense: not only causal factors that violate
descriptive or statistical norms influence causal judgments but also factors transgressing
prescriptive rules, or functional norms (Alicke & Rose, 2012; Henne, O’Neill, Bello,
Khemlani, & De Brigard, 2019; Icard et al., 2017; Knobe, 2009; Kominsky & Phillips, 2019;
Willemsen & Kirfel, 2019). Prescriptive norms such as rules, moral norms or moral obligations
have a similar influence on causal judgments (Alicke, 2000; Knobe & Fraser, 2008), often
increasing people’s causal attributions to those factors that violate these norms.
Given the uniform influence of various kinds of norms, Knobe and Shapiro (2021)
argue that proximate causation is determined by the influence of normality, broadly construed,
on people’s ordinary causal judgments. According to their argument, Duerr’s action in Watson
v. Kentucky (intentionally throwing a match into a puddle of gasoline) is perceived as more
abnormal, both in a prescriptive and statistical sense, than the defendant’s (spilling gas while
unloading a truck).
The paradigmatic test case for research on normality is a conjunctive causal structure
in which two simultaneously acting causes bring about an outcome (both necessary and only
jointly sufficient). The two causes differ in terms of their normality status, that is, one cause is
normal or norm-corresponding while the other one is abnormal or norm-violating. Knobe &
Shapiro’s argument about prominent cases of proximate causation in tort law draws on this
kind of research, but at the same time neglects that these legal, real-world cases rarely mirror
the idealized structure of simultaneous conjunctive causation that is often used in psychological
research.
A Novel Causal Structure: Causal Chains
Consider again the case of Watson v. Kentucky. Here, the two causes do not
simultaneously occur, but rather, one depends on and is to some extent caused by the other. In
fact, Duerr would not have considered throwing a match onto the street with malicious intent if it had not been for the spilled gasoline. That is, the first cause actually enables the second cause to
occur. Although Knobe and Shapiro make an intuitively compelling case for why in these cases
the proximate cause can be identified by normality alone, the question arises to what extent the
psychological research they draw on matches with the tort law cases they analyze. The primary
goal of the experiments presented in this chapter is thus to investigate how normality affects
causal selection in chain structures, that is, in cases where harm is brought about by several
causes-in-fact which a) occur successively rather than simultaneously, and b) are themselves
causally connected (such as a causal chain A → B → C). If people indeed predominantly select
the more (statistically or morally) abnormal factor as proximate cause in such structures as
well, this would lend additional support to Knobe & Shapiro’s (2021) argument.
While people’s reasoning about causal chains has been subject to extensive
investigation by psychologists and philosophers, these investigations have often focused on
other aspects than normality, such as effects of the kind of causal factors involved (intentional
actions vs. physical events) and their position in the chain (Hilton et al., 2010; Lagnado & Channon, 2008; McClure et al., 2007), effects of probabilistic dependence (Spellman, 1997),
or effects of chain length (Johnson & Drobny, 1985; Engelmann & Waldmann, 2022). Where
normality has been investigated in the contexts of chains, these chains have been temporal
rather than causal (Reuter et al., 2014; Henne et al., 2020), or compared causal factors that
differed in other aspects than their normality (Livengood & Sytsma, 2020). To our knowledge,
there has not been a systematic investigation of the effects of prescriptive and statistical
normality on causal selection in causal chains. We here aim to lay the groundwork for such an
investigation.
Counterfactuals and Interventions
Moreover, we were interested in shedding some light on possible psychological
mechanisms. While the influence of normality has been demonstrated across numerous studies,
there is an ongoing debate about why we tend to judge abnormal factors as more causal than
normal ones. One prominent account highlights the underlying mechanism of counterfactual
reasoning (Icard et al., 2017; Knobe, 2009; Kominsky & Phillips, 2019; Kominsky et al., 2015;
Phillips & Cushman, 2017; Phillips et al., 2015). The main idea is that counterfactual reasoning
is more likely to be triggered for abnormal rather than normal causal factors. That is, people
are more likely to consider counterfactual situations in which an abnormal factor had been
normal instead (e.g., an unlikely event had not happened, a forbidden action had not been
performed) than they are to think about counterfactual situations in which normal events had
been different. In conjunctive causal structures with two causes of varying normality, the
mental undoing of the more abnormal factor leads to a change in the outcome — the outcome
would not have happened if the abnormal factor had not been present, highlighting its
importance. While the same is true for the more normal cause, the counterfactual situation
where the more normal cause had been different is simply less likely to come to mind.
Relatedly, recent accounts in philosophy have argued that people select those causes in
their judgements and explanations that represent optimal targets of intervention, that is, factors
that may help prevent bad outcomes in the future, or bring positive effects about (Hitchcock,
2012; Hitchcock & Knobe, 2009; Lombrozo, 2010; Morris et al., 2018; Woodward, 2014). For
example, people might judge an abnormal causal factor as ‘the cause’ of a bad outcome because
an intervention on the abnormal—rather than the normal factor—might be the easiest and/or
most effective way to prevent the outcome from happening again (this might be, for instance,
because interventions on rare outcomes need to be performed less frequently, or because there
are existing mechanisms in place for sanctioning rule violations). Causal judgments are steered
towards intervention points because reasoners may have accumulated evidence about the
general effectiveness of intervening on the factor from previous experience (Morris et al.,
2018), or because they are used with the intent to communicate useful target points to others
(Kirfel, Icard & Gerstenberg, 2022). Counterfactual reasoning and reasoning about targets of
intervention are not competing explanations of causal selection patterns. On the contrary,
people might be inclined to consider some counterfactuals rather than others precisely because
doing so points them towards ideal points of intervention. Nevertheless, either counterfactual
relevance or the suitability of causal factors as targets of intervention might figure more prominently in people’s minds when they make causal selection judgments, and may thus be more readily
accessible in an experiment.
In our experiments, we thus collect data about causal selection preferences (our main
variable of interest), but also about their perceived suitability as targets of intervention
(Experiment 1), and perceived counterfactual relevance (Experiment 2). In both experiments, we follow previous research (Iliev et al., 2012; Sosa et al., 2021) in using abstract stimuli rather than real-world scenarios in order to minimize the influence of participants’ prior knowledge about normality and causal relations.
II. EXPERIMENT 1
The aim of this experiment was to compare the effects of statistical abnormality (i.e.,
rare events or actions) and prescriptive abnormality (i.e., forbidden or morally bad events or
actions) on causal selection in chains of causally connected events. Furthermore, we wanted to
explore whether patterns of causal selection in chains might be driven by people’s preferences
about targets of intervention, that is, whether people tend to select the event or action as “the”
cause of a negative outcome that they would prefer to alter in order to prevent the negative
outcome’s occurrence in the future. All data, material, and code can be found at
https://osf.io/uktna/.
A. Design and Participants
We used a 2 (kind of normality: statistical vs. prescriptive) x 4 (condition: both causes
normal vs. both causes abnormal vs. first cause normal and second cause abnormal vs. first
cause abnormal and second cause normal) design, resulting in eight between-subjects
conditions, to one of which participants were randomly assigned. In each condition, we asked
participants a two-option, forced-choice question about causal selection, and about their
preferred target of intervention. The experiment was implemented in Unipark Questback and
conducted online. We recruited 420 participants on prolific.co (inclusion criteria: first language
English and an acceptance rate of at least 95% on previous surveys on the platform), aiming for
a sample size of 50 participants in each of the eight between-subjects conditions.
² Yielding 80% power to detect a 20-point difference from .50 in a one-sample z-test for proportions in each condition (Faul et al., 2007).

Participants were compensated with £0.50 for an estimated five minutes of their time. We excluded the responses of participants who failed to answer at least one of three manipulation checks correctly: a question asking about the normality of the first cause, a
question asking about the normality of the second cause, or a question asking about the
necessity of the first cause being present for the second cause to act. We also excluded
participants who failed to correctly answer an attention check in the form of a simple
transitivity task (“If Peter is taller than Alex, and Alex is taller than Max, who is the shortest
among them?”). Our final sample size for all analyses consisted of 363 participants (mean age
= 37.92 years, SD = 12.86 years, 178 female, 179 male, 3 non-binary, 3 another identity or no
answer).
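The power targets behind these sample sizes (stated in the footnotes with reference to Faul et al., 2007) can be approximated with the standard normal-approximation formula for a one-sample proportion z-test. The following Python sketch is an illustrative reconstruction, not the authors’ actual power analysis:

```python
from math import ceil, sqrt
from statistics import NormalDist

def n_for_proportion_test(p0: float, p1: float,
                          alpha: float = 0.05, power: float = 0.80) -> int:
    """Approximate n for a two-sided one-sample z-test of H0: p = p0
    against a true proportion p1, via the normal approximation."""
    z = NormalDist().inv_cdf
    z_alpha = z(1 - alpha / 2)  # two-sided critical value
    z_beta = z(power)           # quantile for the desired power
    numerator = z_alpha * sqrt(p0 * (1 - p0)) + z_beta * sqrt(p1 * (1 - p1))
    return ceil((numerator / (p1 - p0)) ** 2)

# Experiment 1: 80% power for a 20-point departure from .50
print(n_for_proportion_test(0.50, 0.70))  # 47, consistent with aiming for 50 per condition
# Experiment 2: a 15-point departure requires more participants
print(n_for_proportion_test(0.50, 0.65))  # 85, consistent with aiming for 100 per condition
```

Under this approximation, the targets of 50 and (later) 100 valid responses per condition comfortably cover the required n for the stated effect sizes.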
B. Material and Procedure
Drawing inspiration from materials used for moral judgment tasks in Iliev et al. (2012)
and Sosa et al. (2021), we instructed participants about relations between agentic abstract forms
on a fictitious planet: “On planet Coloria, there exist three types of living beings: Triangles,
Squares, and Stars. Triangles, Squares, and Stars are intelligent, social creatures. Between these
three types of beings on planet Coloria, there are certain dynamics.” We informed participants
that both Triangles and Squares are capable of rotating: When Triangles rotate, they can cause
Squares to rotate as well. When Squares rotate, they can cause Stars to die. We further
instructed that Squares cannot rotate unless a Triangle rotated first; that is, a Triangle’s rotation
is necessary for a Square to be able to rotate. Participants were able to watch three short video
clips during this instruction phase, in which they could observe a) a Triangle rotating, b) a
Square rotating, and c) a Star dying (see online repository for all videos). All animations were
created in Microsoft PowerPoint. The rotation of Triangles and Squares was brief and
clockwise, and the Star dying was represented by it sinking towards the lower border of the
screen and simultaneously fading away.
In the statistical normality conditions, we manipulated the normality of the two causes
by stating that Triangles rotated either 99% of the time (normal) or just 1% of the time
(abnormal) in Coloria. Given that a Triangle has rotated, Squares were said to rotate either 99%
of the time (normal) or just 1% of the time (abnormal). These numbers were displayed in the
instruction videos as well (see Figure 17.1, Figure 17.2, for examples for both conditions). In
the prescriptive normality conditions, we either stated that Triangles (or Squares, respectively)
were allowed to rotate on Coloria (normal), or that they were not allowed to rotate (abnormal).
The normative status of each cause was indicated by a green checkmark (allowed) or a red
cross (forbidden), in the same place where the numbers would have been displayed in the
statistical normality conditions.
Once the instruction phase was completed, participants were informed that they could
now observe some events that happened on a particular day in Coloria. In this target video, a
Triangle rotated just shortly before a Square rotated, and a Star died just shortly after the Square
had rotated. The Triangle was placed on the left side of the screen, the Square in the middle,
and the Star on the right (as in Figure 17.1 and Figure 17.2). There was no physical contact
between the Triangle and the Square, or between the Square and the Star. To assure that
participants nevertheless did not perceive their movements as merely coincidental, we
instructed that “in this case, a Square rotated because a Triangle rotated, and a Star died because
the Square rotated”. The probability of each agent rotating, or, respectively, whether they were
allowed or forbidden to do so, was still displayed on screen beneath each agent in this video,
and the corresponding numbers or symbols always appeared just before the agent’s movement
was initiated.
After they had watched the video, we asked participants The Causal Selection Question:
“Who caused the Star to die?”, with the options “the Triangle”, or “the Square”. On a
subsequent screen, we assessed whether participants had correctly understood the first cause’s
normality (“How often do Triangles rotate?”, with the options “99% of the time” or “1% of
the time”, or, in the prescriptive normality condition, “Are Triangles allowed to rotate?” with
the options “Yes” or “No”), the second cause’s normality (“How often do Squares rotate when
nearby Triangles rotate?” / “Are Squares allowed to rotate?”, with the same response options
as for the previous question), and the necessity relation between the first and the second cause
(“Can Squares rotate without a Triangle rotating first?”, with the response options “Yes” or
“No”). Finally, on a novel screen, we asked The Target of Intervention Question: “If you could
do one thing to prevent Stars from dying in Coloria in the future, what would you do?”, with
the options “prevent Triangles from rotating” or “prevent Squares from rotating”. The
experiment ended with a debriefing and the assessment of demographic variables.
Figure 17.1. Example for materials used in Experiments 1 and 2 (statistical normality, first
cause normal and second cause abnormal).
Figure 17.2. Example for materials used in Experiments 1 and 2 (prescriptive normality,
first cause normal and second cause abnormal).
C. Results and Discussion
See Figure 17.3 for an overview of results for both the causal selection and the target
of intervention question, and see https://osf.io/uktna/ for all descriptive statistics for causal
selection and target of intervention, respectively. To summarize the results: Somewhat
unexpectedly, we found a) that statistical and prescriptive abnormality had different effects on
causal selection in this experiment, and b) that people’s preferences regarding targets of
intervention were not always consistent with their causal selection patterns.
1. Causal Selection
We conducted a series of model comparisons between logistic regression models
(criterion: likelihood of selecting the second cause in the chain) to determine which set of
predictors provided the best fit to participants’ responses. The final model included the type of norm (χ²(df = 1) = 5.51, p = .019), the normality condition (χ²(df = 3) = 7.98, p = .046), and the interaction between the two (χ²(df = 3) = 9.27, p = .026), indicating that normality had different effects on causal selection when we manipulated statistical compared to prescriptive normality.
To further break down the interaction, we calculated separate logistic models for statistical and
prescriptive normality. We found that normality had no effect on causal selection in the
statistical conditions (χ²(df = 3) = 1.10, p = .78): People always had a tendency to select the second cause (proportion across conditions: 0.60, 95% CI: [0.53, 0.67]), even though the difference in proportions between the first and the second cause did not reach significance in every single one of the four conditions (see online repository for detailed descriptive statistics per condition). In the
prescriptive conditions, on the other hand, normality clearly affected participants’ responses
(χ²(df = 3) = 16.16, p = .001). People tended to select the first cause in the chain when the first
cause was abnormal and when the second cause was normal, and also when both were
abnormal. In contrast, they tended to select the second cause when the first cause was normal
and the second cause was abnormal (although this difference was not statistically significant),
and also when both were normal.
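The model comparisons reported here are likelihood-ratio tests between nested logistic regressions: the test statistic is the reduction in deviance (−2 log-likelihood) when a predictor is added, referred to a chi-square distribution with df equal to the number of added parameters. A minimal Python sketch of this logic, not the authors’ analysis code (the deviance values in the usage example are hypothetical, and the chi-square survival function is given in closed form only for the df values that occur in this chapter):

```python
from math import erfc, exp, pi, sqrt

def chi2_sf(x: float, df: int) -> float:
    """Chi-square survival function; closed forms for df = 1 and df = 3."""
    if df == 1:
        return erfc(sqrt(x / 2))
    if df == 3:
        return erfc(sqrt(x / 2)) + sqrt(2 * x / pi) * exp(-x / 2)
    raise ValueError("closed form implemented only for df in {1, 3}")

def lr_test(deviance_restricted: float, deviance_full: float, df_diff: int):
    """Likelihood-ratio test between two nested models fit by maximum likelihood:
    the statistic is the drop in deviance from the restricted to the full model."""
    stat = deviance_restricted - deviance_full
    return stat, chi2_sf(stat, df_diff)

# Hypothetical deviances whose difference matches the reported norm-type effect,
# chi-square(df = 1) = 5.51:
stat, p = lr_test(100.00, 94.49, df_diff=1)
print(round(stat, 2), round(p, 3))  # 5.51 0.019
```

Adding a predictor is retained in the final model exactly when this deviance drop is improbably large under the chi-square reference distribution.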
Thus, it seemed that when causes in a chain that led to a harmful outcome were “morally
abnormal” (i.e., their behaviour was forbidden), people had a tendency to pick the first norm-
violating factor in the chain as “the” cause of the harmful outcome. Statistical abnormality (i.e.,
rare behaviour) on the other hand had no discernible effect on causal selection – people simply
picked the most recent factor as the most important cause here. When both causes were normal
in the prescriptive conditions (i.e., allowed to act), people’s selection pattern resembled the
statistical conditions, that is, they also predominantly selected the most recent cause in those
cases.
2. Target of intervention
Like the causal selection data, people’s responses to the target of intervention question were best described by a logistic model including the type of norm (χ²(df = 1) = 5.08, p = .024), condition (χ²(df = 3) = 10.17, p = .017), and an interaction between the two (χ²(df = 3) = 19.18, p < .001). Again, we followed up on the interaction by fitting separate models for statistical and prescriptive normality. Unlike for causal selection, abnormality affected responses in both conditions (statistical: χ²(df = 3) = 18.26, p < .001; prescriptive: χ²(df = 3) = 11.09, p = .011), but in different ways.
For statistical normality, as Figure 17.3 shows, there were now two conditions in which
people indicated that they would prefer to intervene on the first cause in the chain to prevent
the harmful outcome in the future: when the first cause was abnormal and the second cause
was normal, and, albeit not significantly so, when both causes were normal. In the two other
conditions (both abnormal, and first cause normal and second cause abnormal), people
preferred to intervene on the second cause. Thus, leaving aside the “both normal” condition
(where proportions were not significantly different), it seems that people preferred to intervene
on the statistically abnormal event to prevent the harmful outcome in the future, no matter if it
was the first or the second element in the causal chain. When both causes were abnormal, they
preferred to intervene on the second.
In the prescriptive conditions, in contrast, both causes being abnormal led to a
preferential selection of the first cause as target of intervention (see Figure 17.3). Descriptively,
selecting the first cause was also people’s tendency when the first cause was abnormal and the
second cause was normal, but the difference in proportions was not significant (as for the other
two conditions).
In sum, ratings about preferred targets of intervention only partially corresponded to
patterns of causal selection in this experiment. For prescriptive normality, selection of the first
abnormal cause in the chain for intervention was only clearly observed in one condition, even
though people had predominantly selected the first abnormal event in the chain as “the” cause
of the harmful outcome in the previous task. For statistical normality, people preferred to
intervene on the abnormal cause when it was the first element in the chain, even though
abnormality had no effects on what they selected as “the” cause in the previous task.
Furthermore, preferences about intervention were differentially affected by prescriptive vs.
statistical abnormality when both causes were abnormal: this led to predominant selection of
the first cause in the prescriptive conditions, and to predominant selection of the second cause
in the statistical conditions. We still detected a moderate correlation between causal selection and responses to the target of intervention question (r = 0.45, 95% CI: [0.36, 0.52], p < .001)³, but it was not as high as might have been expected.
Thus, not only did we observe unexpected differences between patterns of causal
selection for statistical versus prescriptive abnormality in this experiment, we also found that
judgments about preferred targets of intervention do not seem to readily explain these
differences. A possible reason could be that asking about targets of intervention is less
suitable for causal chains than for other causal structures (e.g., conjunctive), at least when the
first cause being present is a necessary condition for the second cause’s occurrence. In such
chains, and assuming that neither the harmful outcome nor the second cause has alternative
causes (as our scenarios suggest), intervening on either cause would effectively prevent the
harmful outcome in the future. Since participants presumably recognized this, they may have
reinterpreted our forced-choice question in some unintended way to make sense of it.
A further limitation of this first experiment is that we did not always reach our target
sample size per condition due to exclusions, and that people’s selective tendencies were
occasionally weaker than we had anticipated. We are thus going to replicate our results for
causal selection with a larger sample size in Experiment 2. Furthermore, we are going to
explore whether the perceived counterfactual relevance of events might track participants’
causal selection patterns, rather than their preferences about targets of intervention.
³ Similar values overall held for the statistical (r = 0.48, 95% CI: [0.36, 0.58], p < .001) and prescriptive conditions (r = 0.40, 95% CI: [0.27, 0.51], p < .001) alone.
Figure 17.3. Results for causal selection (upper panel) and target of intervention (lower panel)
in Experiment 1. Proportions of selection of the first (1) or second (2) cause and 95%
confidence intervals per norm type (statistical vs. prescriptive) and condition (first cause
abnormal and second cause normal, both abnormal, both normal, first cause normal and second
cause abnormal).
III. EXPERIMENT 2
The aim of this experiment was to replicate the causal selection results of Experiment
1 with a larger sample size, and to explore whether people’s judgments about counterfactual
relevance track causal selection better than the target of intervention judgments that we
collected in Experiment 1. Our worry about the target of intervention judgments in Experiment
1 was that people might have had a hard time making sense of this question in the context of a
causal chain of events. Possibly, this led participants to reinterpret the question in some
unintended (and unknown) way. The notion of counterfactual relevance might be less
demanding in this context. Even if people are not sure which cause should be intervened on to
prevent the harmful outcome in the future, they might still have a tendency to think about one
cause’s absence more than the other’s. A preregistration for this experiment can be found at
https://aspredicted.org/7RB_BR8, and all data, materials, and code are available at
https://osf.io/uktna/.
A. Design and Participants
The experimental design was identical to Experiment 1, with the exception that we
asked a question about counterfactual relevance in the place of the previous question about
targets of intervention. Our preregistered target sample size was at least 100 valid responses
per between-subjects condition.
⁴ Yielding 80% power to detect a 15-point difference from .50 in a one-sample z-test for proportions in each condition (Faul et al., 2007).

We recruited participants on prolific.co (same inclusion criteria as before, plus not having participated in Experiment 1) until this criterion was reached for all conditions. The final sample consisted of 849 participants (mean age = 38.50 years, SD = 13.76 years, 430 female, 405 male, 11 non-binary, 3 no answer).
B. Material and Procedure
The new Counterfactual Relevance Measure read “Think about what could have been
different in the story that you just read. Which event do you imagine happening differently?”,
with the options “I think about the Triangle not rotating”, and “I think about the Square not
rotating”. All other materials were identical to Experiment 1.
C. Results and Discussion
See Figure 17.4 for an overview of results for both the causal selection and the counterfactual relevance question, and see https://osf.io/uktna/ for detailed descriptive statistics for causal selection and counterfactual relevance, respectively. To summarize our results: we again found that causal selection was differentially affected by manipulations of statistical versus prescriptive abnormality in causal chains. As with judgments about preferred targets of intervention in the previous experiment, counterfactual relevance judgments could only partially account for these response patterns.
1. Causal Selection
As in Experiment 1, participants’ causal selection judgments were best described by a logistic regression model that contained the effects of the kind of norm (χ²(df = 1) = 20.74, p < .001), condition (χ²(df = 3) = 17.15, p < .001), and a two-way interaction between these factors (χ²(df = 3) = 16.80, p < .001). Again, we followed up on the interaction by fitting two separate regression models for the statistical and prescriptive conditions. This time, normality affected causal selection in the statistical conditions (χ²(df = 3) = 8.66, p = .034). The overall still predominant preference to select the second cause in the chain (see Figure 17.4) was stronger when the second cause was also abnormal (and the first normal), compared to, for example, when both causes were normal (β = 0.67, z = 2.39, p = .017).
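The χ² values reported here are likelihood-ratio tests comparing nested logistic regression models: twice the difference in log-likelihoods is referred to a χ² distribution with degrees of freedom equal to the number of dropped parameters. A minimal sketch of that logic (our own illustration; for df = 1 the tail probability follows from the identity χ²(1) = Z² for standard normal Z):

```python
import math
from statistics import NormalDist

def lr_statistic(loglik_full, loglik_reduced):
    """Likelihood-ratio test statistic for nested models."""
    return 2 * (loglik_full - loglik_reduced)

def chi2_sf_df1(x):
    """Tail probability of a chi-square with 1 df, via chi2(1) = Z**2."""
    return 2 * (1 - NormalDist().cdf(math.sqrt(x)))

# The reported norm-type effect, chi2(1) = 20.74, is indeed p < .001:
print(chi2_sf_df1(20.74) < .001)  # True
```

For df > 1 the same statistic is referred to a χ² distribution with the corresponding degrees of freedom (which requires the incomplete gamma function rather than this df = 1 shortcut).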
Normality more strongly affected causal selection in the prescriptive conditions (χ²(df = 3) = 25.29, p < .001). As Figure 17.4 shows, people again predominantly selected the first cause when the first cause was abnormal and the second was normal, or when both were abnormal. They selected the second cause, on the other hand, when both causes were normal. This time, we did not find a systematic selection preference when the first cause was normal and the second was abnormal.
All in all, these results replicate the findings of Experiment 1, if not in every detail.
People again predominantly selected the second cause in the chain when statistical normality
was manipulated, but this general tendency was now somewhat amplified when this cause
was also the only abnormal factor. People also again tended to select the first abnormal cause
in the chain when prescriptive normality was manipulated (and the second when both were
normal), but did not have a systematic preference when only the second cause was abnormal.
2. Counterfactual Relevance
Counterfactual relevance judgments were also best described by a logistic model containing effects of the kind of norm (χ²(df = 1) = 7.29, p = .007), condition (χ²(df = 3) = 50.72, p < .001), and a two-way interaction between these factors (χ²(df = 3) = 24.83, p < .001). Following up on the interaction revealed that normality clearly affected responses in both sets of conditions (statistical: χ²(df = 3) = 45.58, p < .001; prescriptive: χ²(df = 3) = 29.98, p < .001), but as Figure 17.4 shows, it did so in different ways. In the statistical conditions, participants
reported thinking more about counterfactuals involving the abnormal cause whenever just
one cause was abnormal. When both causes were normal or both were abnormal, they
reported thinking more about counterfactuals involving the first cause in the chain. In the prescriptive conditions, on the other hand, there seemed to be a general tendency to consider counterfactuals about the first cause in the chain. This tendency was strongest when
the first cause was abnormal and the second normal, and when both were abnormal. In the
other two conditions, there was either no systematic preference, or it was just barely
significant.
As Figure 17.4 shows, counterfactual relevance judgments thus roughly
corresponded to causal selection tendencies in two out of four conditions when prescriptive
normality was manipulated, and in just one out of four conditions when statistical normality
was manipulated. Overall, the two types of judgments were correlated, but again only
moderately so (r = 0.30, 95% CI: 0.24; 0.36, p <.001).
5
5
statistical: r = 0.25, 95% CI: 0.16; 0.33, p <.001; prescriptive: r = 0.34, 95% CI: 0.25; 0.42, p <.001
Figure 17.4. Results for causal selection (upper panel) and counterfactual relevance (lower panel) in Experiment 2. Proportions of selection of the first (1) or second (2) cause and 95% confidence intervals per norm type (statistical vs. prescriptive) and condition (first cause abnormal and second cause normal, both abnormal, both normal, first cause normal and second cause abnormal).
IV. GENERAL DISCUSSION
Understanding people’s ordinary concept of causation is of crucial importance for understanding causation in the law (see, e.g., Knobe & Shapiro, 2021; Prochownik, 2022; Tobia, 2021). Descriptively, many causes and enabling conditions contribute to any given event, yet people often have a tendency to single out just one of these many factors as “the” cause of an outcome of interest (Hesslow, 1988; Pearl, 2000; Woodward, 2022). A person
throwing a cigarette butt into the forest will likely be selected as “the” cause of an ensuing
forest fire, even if the fire would also not have occurred but for the presence of oxygen in the
air, or but for the presence of dry wood in the forest (see e.g., Pearl, 2000). Such assessments
matter for legal proceedings, as determinations of which factual cause is the (main, most
important, or proximate) cause of harmful outcomes impact who can expect to be held liable
for it.
Knobe & Shapiro (2021), based on an extensive review of both tort law cases and the
cognitive-psychological literature on causal reasoning, recently argued that the way in which
people’s causal selection tendencies are affected by normality could provide important insight
into legal debates about proximate causation. Specifically, they suggested that normality often
dictates which of several candidates is selected as “the” cause of an outcome, and that this
causal judgment, in turn, forms the basis for further moral and legal evaluation, such as
judgments about liability.
“Normality” is understood as a blend of prescriptive and statistical considerations (Bear & Knobe, 2017; Tobia, 2018), and in line with this notion, prescriptive and statistical (ab-)normality have been shown to affect causal selection very similarly in a range of tasks and studies (see, e.g., Gill et al., 2022; Kominsky et al., 2015; Icard et al., 2017). In the forest fire
example, throwing a cigarette butt into the forest is both forbidden (prescriptively abnormal)
as well as relatively rare (statistically abnormal), at least compared to the presence of oxygen
in the air, and to the presence of dry wood in a forest. People’s tendency to select abnormal factors as causes in so-called conjunctive causal structures (where each of several causes is necessary for an outcome to occur, and they are only jointly sufficient) is well documented (Knobe & Fraser, 2008; Hitchcock & Knobe, 2009; Kominsky et al., 2015; Samland & Waldmann, 2016; Kirfel & Lagnado, 2021), and has more recently been shown to reverse to a preference for the more “normal” cause in so-called disjunctive causal structures (where either of several causes is sufficient for an outcome to occur, and none is necessary given any of the others) (Gill et al., 2022; Icard et al., 2017).
Studying the Effect of Normality in a Novel Causal Structure
In general, effects of normality on causal selection have almost exclusively been studied a) in conjunctive and disjunctive causal structures, and b) in situations where both causes occur at the exact same time. However, this last feature is relatively rare in real-world scenarios. Causes often occur in temporally extended sequences, or even affect each other in causal
chains. For instance, imagine that the person whose cigarette ignited the forest fire was taking
a walk close to the forest when they were suddenly chased by a bear. While fleeing into the
forest in panic, they dropped their cigarette, which subsequently lit the forest on fire. Here, the
person’s action of dropping the cigarette may not be seen as “the” cause of the forest fire, or at
the very least they would not be seen as equally at fault as in the case where they dropped the
cigarette without such provocation.6 A likely reason is, first, that a bear attack is presumably perceived as more abnormal than dropping a cigarette, and second, that the bear attack caused the dropping of the cigarette in this case. This specific scenario may be rare, but situations of this type (two or more causes occurring in a row and differing in their perceived normality) are arguably frequent in both moral and legal evaluation.

6 These judgments of comparative fault, which could be related to judgments about causation, are also important in tort law. The doctrine of comparative negligence holds that if a plaintiff and defendant both act negligently, and both acts proximately cause the plaintiff’s injury, damages will be allocated proportionally to comparative fault. We thank Kevin Tobia for this point.
While causal selection in temporal and causal chains has frequently been studied by cognitive psychologists (Henne et al., 2021; Hilton et al., 2010; Johnson & Drobny, 1985; Lagnado & Channon, 2008; McClure et al., 2007; Reuter et al., 2014; Spellman, 1997), these studies have predominantly focused on factors other than normality. To our knowledge, there are no studies that systematically compare the effects of statistical and prescriptive abnormality in causal chains. With the studies presented here, we aimed to lay the groundwork for such an investigation. We used abstract scenarios in order to avoid as many additional inferences and assumptions on the part of participants as possible, and hoped thereby to achieve a relatively “pure” manipulation of normality. Unexpectedly, at least in comparison to most studies that have investigated normality in the context of causal selection (albeit in other causal structures), we found that causal selection was differentially affected by manipulations of statistical versus prescriptive normality. In both experiments, selection tendencies were hardly affected by variations in causes’ statistical normality at all. Instead, people generally tended to select as most important the cause that was closest to the harmful outcome in time and space, that is, the second cause in the chain.
These results are broadly consistent with Spellman’s (1997) “crediting causality”
model, according to which the event selected as “the” cause should be the one whose
occurrence most increases the probability of the outcome (which, given the instructed
probabilities in the statistical conditions, would generally be the second event in our scenarios).
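Under stated assumptions, the crediting-causality rule can be sketched in a few lines; the probabilities below are hypothetical placeholders, not the instructed values from our scenarios:

```python
def credit(p_before, p_after):
    """Crediting causality (Spellman, 1997): an event's credit is how much
    its occurrence raised the probability of the outcome."""
    return p_after - p_before

# Hypothetical chain A -> B -> O:
p_o_base = 0.10      # P(outcome) before anything happens (assumed)
p_o_after_a = 0.40   # P(outcome) once the first event has occurred (assumed)
p_o_after_b = 1.00   # P(outcome) once the second event has occurred (assumed)

credit_a = credit(p_o_base, p_o_after_a)     # ≈ 0.30
credit_b = credit(p_o_after_a, p_o_after_b)  # ≈ 0.60
# The later event raises the outcome's probability more, so the model
# predicts selection of the second cause in the chain.
print(credit_b > credit_a)  # True
```
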
The results are less consistent with a model according to which people generally select the most abnormal cause from a chain. Results for the effects of prescriptive normality were more consistent with such a model: in both experiments, people had a tendency to select the first abnormal (here: forbidden) event in a chain as the cause of the harmful outcome. When both causes were normal (allowed to act), they tended to select the second cause, as in the statistical conditions. This discrepancy between the effects of statistical and prescriptive abnormality is
not only surprising given previous studies on other causal structures, but also because
prescriptive (ab-)normality can arguably often be used as a cue for statistical (ab-)normality:
Forbidden acts tend to be performed rarely, and allowed acts are presumably more frequent.
However, if participants made any inferences along these lines in our prescriptive normality
conditions, they apparently did not affect causal selection. We also found that judgments about
preferred targets of intervention (Experiment 1) and about counterfactual relevance
(Experiment 2) only moderately correlated with causal selection patterns, even though both of
these considerations are generally taken to underlie causal selection in other causal structures
(see, e.g., Hitchcock & Knobe, 2009; Morris et al., 2018; Woodward, 2022).
Diverse Effects of Statistical and Prescriptive Abnormality
One possible reason for the different effects of statistical and prescriptive abnormality in chains could be a selective pragmatic reinterpretation of the test question in the prescriptive normality conditions (see, e.g., Samland & Waldmann, 2016; Wiegmann et al., 2016). Possibly, participants who are confronted with a scenario involving rules and rule violations are primarily focused on assigning responsibility or blame when faced with the question of who “caused” a negative outcome. Blame or responsibility, then, may be seen as primarily deserved by rule-violating agents. Participants who are not confronted with rule violations, such as the participants in our statistical normality conditions, may follow a more descriptive reading of the test question, and may therefore select the cause that most increases the probability of the final outcome. This explanation is consistent with the finding that when both causes were normal in the prescriptive conditions (and thus no rule violation occurred), selection patterns resembled those found in the statistical conditions (a preference for the second cause in the chain). However, the same argument about pragmatic reinterpretation could be made for causal selection in conjunctive and disjunctive causal structures, where statistical and prescriptive (ab-)normality, contrary to the findings reported here, have uniform effects.
Indirect Abnormality and the Purpose of Rules
Two further factors may also differ between causal chains involving prescriptively and
statistically abnormal causes: the degree to which the second cause’s behavior is perceived to
be determined by the status of the first cause, and inferences about the function or purpose of
rules (see Kwon et al., 2022; Struchiner, Hannikainen, & Almeida, 2020; Almeida, Struchiner, & Hannikainen, 2023). These two factors might also interactively affect causal selection. To
illustrate, consider the prescriptive conditions in our experiments. Here, the presumably
inferred function of the rule that sometimes forbids shapes to rotate is to prevent Stars from
dying in Coloria. This is never explicitly stated in our materials, but it is the only harmful
outcome that can be caused by the shapes’ rotations. In some of our cases, however, a certain
tension arises between features of the cases and this presumed purpose. Consider for example
a case where the first cause in a chain is normal (e.g., Triangles are allowed to rotate), but the
second cause is abnormal (e.g., Squares are not allowed to rotate). If rules about rotation are in
place to protect Stars, it is questionable why Triangles should be allowed to perform an action
that will definitely, albeit indirectly, cause a Star to die. Thus, participants in those conditions
could assume that the Square (the second cause) is more free to rotate or not rotate compared
to other conditions, even if the Triangle’s rotation exerts a strong causal influence on the
Square. Different assumptions along these lines in the different prescriptive normality
conditions might affect causal selection when rules are involved, and should be explored in
future studies. In the statistical conditions, no such rules are stated, and participants also know that the first cause’s rotation never fully determines the second cause’s rotation (although it can make it very likely). Thus, there may be specific considerations that are triggered by
statistical and prescriptive (ab-)normality in chains and not in other causal structures, and these
considerations may contribute to differential effects of statistical and prescriptive normality
here.
Finally, while prescriptive normality is easy to manipulate in causal chains,
manipulating statistical normality in such structures is less trivial. In computational models of
causal selection (Icard et al., 2017), statistical normality is represented as a cause’s base rate.
A cause’s base rate is determined by the conglomerate of all its causes, which are typically not
fully known and usually not made explicit in studies that vary statistical normality in
conjunctive or disjunctive causal structures. In chains, however, we know that the first event
in the chain causes the second. The second cause’s normality (its base rate) will therefore
always depend on the first cause’s strength, that is, how likely the first cause is to bring about
the second (see, e.g., Perales et al., 2017, for an overview of measures of causal strength). In
the scenarios that we used here, the second cause’s normality and the first cause’s strength
were, in fact, identical (because the second cause had no other causes than the first). In future studies, the second cause’s base rate (its normality) could be manipulated more independently of the first cause’s strength if the second cause were described as having additional generative or preventative causes.
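This dependence can be made concrete with a noisy-OR sketch (a common parameterization in this literature, e.g., in Icard et al., 2017; the function and parameter values below are our own illustrative assumptions, not the chapter’s model):

```python
def second_cause_base_rate(p_first, strength, p_alt=0.0, s_alt=0.0):
    """Base rate of the second event in a chain under noisy-OR:
    it occurs if the first cause occurs and succeeds (with prob. `strength`),
    or if a hypothetical alternative generative cause occurs and succeeds."""
    return 1 - (1 - p_first * strength) * (1 - p_alt * s_alt)

# As in our scenarios: the first cause always occurs and the second cause
# has no other causes, so its base rate equals the first cause's strength:
print(second_cause_base_rate(1.0, 0.9))  # 0.9

# Adding an alternative generative cause decouples the second cause's
# base rate from the first cause's strength:
print(second_cause_base_rate(1.0, 0.9, p_alt=1.0, s_alt=0.5))  # 0.95
```

A preventative cause could be added analogously by multiplying the result with the probability that no preventer succeeds.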
To sum up, we have uncovered an unexpected difference in effects of prescriptive
versus statistical (ab-)normality on causal selection in causal chains. This difference was not
readily explained by participants’ judgments about preferred targets of intervention or
counterfactual relevance. Since a) judgments about causally connected events that differ in
their perceived normality are arguably frequent in moral and legal evaluation, and b)
understanding the folk concept of causation is important for understanding causation in the
law, more research should be done to elucidate these differential effects. We suggest that such
future studies should focus on possible effects of the pragmatic reinterpretation of test
questions, inferences about “free” versus determined behavior, the functions or purposes of
rules, and on different ways of manipulating statistical (ab-)normality in causal chains.
References
Almeida, G. D. F., Struchiner, N., & Hannikainen, I. R. (2023). Rule is a dual character
concept. Cognition, 230, 105259.
Alicke, M. D. (2000). Culpable control and the psychology of blame. Psychological
Bulletin, 126(4), 556.
Alicke, M. D., & Rose, D. (2012). Culpable control and causal deviance. Social and
Personality Psychology Compass, 6(10), 723-735.
Bear, A., & Knobe, J. (2017). Normality: Part descriptive, part prescriptive. Cognition, 167,
25-37.
Engelmann, N., & Waldmann, M. R. (2022). How causal structure, causal strength, and
foreseeability affect moral judgments. Cognition, 226, 105167.
Faul, F., Erdfelder, E., Lang, A. G., & Buchner, A. (2007). G* Power 3: A flexible statistical
power analysis program for the social, behavioral, and biomedical sciences. Behavior
Research Methods, 39(2), 175-191.
Geistfeld, M. A. (2020). Proximate Cause Untangled. Maryland Law Review, 80, 420.
Gerstenberg, T., & Stephan, S. (2021). A counterfactual simulation model of causation by
omission. Cognition, 216, 104842.
Gill, M., Kominsky, J. F., Icard, T. F., & Knobe, J. (2022). An interaction effect of norm
violations on causal judgment. Cognition, 228, 105183.
Hart, H. L. A., & Honoré, T. (1985). Causation in the Law. Oxford: Oxford University Press.
Henne, P., Niemi, L., Pinillos, Á., De Brigard, F., & Knobe, J. (2019). A counterfactual
explanation for the action effect in causal judgment. Cognition, 190, 157-164.
Henne, P., Kulesza, A., Perez, K., & Houcek, A. (2021). Counterfactual thinking and recency
effects in causal judgment. Cognition, 212, 104708.
Hesslow, G. (1988). The problem of causal selection. In Hilton, D. J. (Ed.), Contemporary
Science and Natural Explanation: Commonsense Conceptions of Causality (pp. 11-32).
Harvester Press.
Hilton, D. J., McClure, J., & Sutton, R. M. (2010). Selecting explanations from causal chains:
Do statistical principles explain preferences for voluntary causes? European Journal of
Social Psychology, 40(3), 383–400.
Hitchcock, C., & Knobe, J. (2009). Cause and norm. The Journal of Philosophy, 106(11),
587-612.
Hughes, J., Sneed, A., Hardin, M. D., Marshall, A. K., Bibb, G. M., & Littell, W.
(1918). Reports of Civil and Criminal Cases Decided by the Court of Appeals of
Kentucky, 1785-1951 (Vol. 178). J. Bradford.
Icard, T. F., Kominsky, J. F., & Knobe, J. (2017). Normality and actual causal strength.
Cognition, 161, 80-93.
Iliev, R. I., Sachdeva, S., & Medin, D. L. (2012). Moral kinematics: The role of physical
factors in moral judgments. Memory & Cognition, 40(8), 1387-1401.
Jiménez, F. (2022). The Limits of Experimental Jurisprudence. In Tobia, K. (Ed.),
The Cambridge Handbook of Experimental Jurisprudence.
Johnson, J. T., & Drobny, J. (1985). Proximity biases in the attribution of civil liability.
Journal of Personality and Social Psychology, 48(2), 283.
Kahneman, D., & Miller, D. T. (1986). Norm theory: Comparing reality to its alternatives.
Psychological Review, 93(2), 136.
Keeton, R. (1963). Legal Cause in the Law of Torts, Columbus, OH: Ohio State University
Press.
Kirfel, L., & Lagnado, D. (2021). Causal judgments about atypical actions are influenced by
agents' epistemic states. Cognition, 212, 104721.
Kirfel, L., Icard, T., & Gerstenberg, T. (2022). Inference from explanation. Journal of
Experimental Psychology: General, 151(7), 148.
Knobe, J., & Fraser, B. (2008). Causal judgment and moral judgment: Two experiments. In
Walter Sinnott-Armstrong (Ed.), Moral psychology, Vol. 2: The cognitive science of
morality: Intuition and diversity (pp. 441 - 448). MIT Press.
Knobe, J., & Shapiro, S. (2021). Proximate cause explained. The University of Chicago Law
Review, 88(1), 165-236.
Kominsky, J. F., Phillips, J., Gerstenberg, T., Lagnado, D., & Knobe, J. (2015). Causal
superseding. Cognition, 137, 196-209.
Kominsky, J. F., & Phillips, J. (2019). Immoral professors and malfunctioning tools:
Counterfactual relevance accounts explain the effect of norm violations on causal
selection. Cognitive Science, 43(11), e12792.
Kwon, J., Tenenbaum, J., & Levine, S. (2022). Flexibility in Moral Cognition: When is it
okay to break the rules?. In Proceedings of the Annual Meeting of the Cognitive Science
Society (44).
Lagnado, D. A., & Channon, S. (2008). Judgments of cause and blame: The effects of
intentionality and foreseeability. Cognition, 108(3), 754–770.
Livengood, J., & Sytsma, J. (2020). Actual Causation and Compositionality. Philosophy of
Science, 87(1), 43–69. https://doi.org/10.1086/706085
Lombrozo, T. (2010). Causal–explanatory pluralism: How intentions, functions, and
mechanisms influence causal ascriptions. Cognitive Psychology, 61(4), 303-332.
McClure, J., Hilton, D. J., & Sutton, R. M. (2007). Judgments of voluntary and physical
causes in causal chains: Probabilistic and social functionalist criteria for attributions.
European Journal of Social Psychology, 37(5), 879–901.
McLaughlin, J. A. (1925). Proximate cause. Harvard Law Review, 39(2), 149-199.
Morris, A., Phillips, J. S., Icard, T., Knobe, J., Gerstenberg, T., & Cushman, F. (2018).
Causal judgments approximate the effectiveness of future interventions,
https://psyarxiv.com/nq53z/.
Pearl, J. (2000). Causality: Models, reasoning and inference. Cambridge University Press.
Perales, J. C., Catena, A., Cándido, A., & Maldonado, A. (2017). Rules of causal judgment:
Mapping statistical information onto causal beliefs. In M. R. Waldmann (Ed.), The
Oxford handbook of causal reasoning (pp. 29–51). Oxford University Press.
Phillips, J., Luguri, J. B., & Knobe, J. (2015). Unifying morality’s influence on non-moral
judgments: The relevance of alternative possibilities. Cognition, 145, 30-42.
Phillips, J., & Cushman, F. (2017). Morality constrains the default representation of what is
possible. Proceedings of the National Academy of Sciences, 114(18), 4649-4654.
Prochownik, K. (2021). The experimental philosophy of law: New ways, old questions, and
how not to get lost. Philosophy Compass, 16(12), e12791.
Prochownik, K. (2022). Causation in the law, and experimental philosophy. In Willemsen, P.,
& Wiegmann, A. (Eds.), Advances in Experimental Philosophy of Causation (pp. 165 -
188). Bloomsbury.
Prosser, W. L., & Keeton, P. (1984). Prosser and Keeton on Torts. West Publishing.
Rizzo, M. J. (1980). The imputation theory of proximate cause: An economic framework. Georgia Law Review, 15, 1007.
Reuter, K., Kirfel, L., Van Riel, R., & Barlassina, L. (2014). The good, the bad, and the
timely: How temporal order and moral judgment influence causal selection. Frontiers
in Psychology, 5, 1336.
Samland, J., & Waldmann, M. R. (2016). How prescriptive norms influence causal
inferences. Cognition, 156, 164-176.
Sommers, R. (2021). Experimental jurisprudence. Science, 373(6553), 394-395.
Sosa, F. A., Ullman, T., Tenenbaum, J. B., Gershman, S. J., & Gerstenberg, T. (2021). Moral
dynamics: Grounding moral judgment in intuitive physics and intuitive psychology.
Cognition, 217, 104890.
Spellman, B. A. (1997). Crediting causality. Journal of Experimental Psychology: General,
126(4), 323–348.
Stephan, S., Willemsen, P., & Gerstenberg, T. (2017). Marbles in inaction: Counterfactual simulation and causation by omission. In Proceedings of the 39th Annual Conference of the Cognitive Science Society, Austin, TX, 2017 (pp. 1132-1137). Cognitive Science Society.
Struchiner, N., Hannikainen, I. R., & de Almeida, G. D. F. (2020). An experimental guide to vehicles in the park. Judgment and Decision Making, 15(3), 312-329.
Tobia, K. P. (2018). How people judge what is reasonable. Alabama Law Review, 70, 293.
Tobia, K. (2021). Law and the cognitive science of ordinary concepts. In Broek, B., Haage,
J., & Vincent, N. (Eds.), Law and Mind: A Survey of Law and the Cognitive Sciences
(pp. 86 - 89). Cambridge University Press.
Tobia, K. (2022). Experimental jurisprudence. The University of Chicago Law Review, 89(3),
735-802.
Wiegmann, A., Samland, J., & Waldmann, M. R. (2016). Lying despite telling the truth.
Cognition, 150, 37-42.
Willemsen, P., & Kirfel, L. (2019). Recent empirical work on the relationship between causal
judgements and norms. Philosophy Compass, 14(1), e12562.
Woodward, J. (2006). Sensitive and insensitive causation. The Philosophical Review, 115(1),
1-50.
Woodward, J. (2014). A functional account of causation; or, a defense of the legitimacy of
causal thinking by reference to the only standard that matters—usefulness (as opposed
to metaphysics or agreement with intuitive judgment). Philosophy of Science, 81(5),
691-713.
Woodward, J. (2022). Mysteries of actual causation: It’s complicated. In Willemsen, P., &
Wiegmann, A. (Eds.), Advances in Experimental Philosophy of Causation.
Bloomsbury.
Zipursky, B. C. (2009). Foreseeability in breach, duty, and proximate cause. Wake Forest
Law Review., 44, 1247.