Article

Exploring the Proof Paradoxes

Abstract

A simple way of understanding standards of proof is in terms of degrees of probability. On this account, to prevail in a civil case a claimant need only prove the defendant's liability to a degree above 0.5. For the prosecution to succeed in a criminal case, it needs to prove guilt to a considerably higher degree: 0.95, say. The proof paradoxes are a set of examples, well known to evidence lawyers, that are often taken to suggest that there is something wrong with this probabilistic account of standards of proof. One example is Blue Bus: Mrs. Brown is run down by a bus; 60 percent of the buses that travel along the relevant street are owned by the blue bus company, and 40 percent by the red bus company. The only witness is Mrs. Brown, who is color-blind. Mrs. Brown appears to be able to establish a 0.6 probability that she was run down by a blue bus. Yet the overwhelming intuition is that the 60 percent statistic is not sufficient for Mrs. Brown to prove her case in a civil trial. Thus, the argument goes, proof involves something more than just probability. After introducing other similar examples, this article undertakes a detailed examination of this type of "proof paradox." It focuses on particular analyses of these paradoxes, distinguishing between inferential, moral, and knowledge-based analyses, in the process touching on issues such as the reference class problem and the lottery paradox. The article emphasizes the richness and complexity of the puzzle cases and suggests why they are difficult to resolve.
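
As a rough, purely illustrative sketch of the threshold account described in the abstract (the 0.5 and 0.95 thresholds are the abstract's own stipulations; the function and variable names, and the Python rendering, are ours, not the article's):

# Minimal sketch of the probabilistic account of standards of proof.
# Thresholds follow the abstract's stipulations; everything else is illustrative.

CIVIL_THRESHOLD = 0.5      # preponderance of the evidence
CRIMINAL_THRESHOLD = 0.95  # beyond reasonable doubt (the abstract's illustrative figure)

def meets_standard(probability: float, threshold: float) -> bool:
    """Return True if the proved probability exceeds the threshold."""
    return probability > threshold

# Blue Bus: the market-share statistic alone puts the probability at 0.6.
p_blue_bus = 0.6
print(meets_standard(p_blue_bus, CIVIL_THRESHOLD))     # True, yet intuition resists liability
print(meets_standard(p_blue_bus, CRIMINAL_THRESHOLD))  # False

On the purely probabilistic account the civil standard is met; the paradox is that the intuitive verdict goes the other way.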

... In part II of his monograph, he lists six difficulties for a Pascalian account of judicial probability: the difficulties about conjunction, inference upon inference, negation, proof beyond reasonable doubt, a criterion, and corroboration and convergence. For the sake of discussion, these difficulties are collectively referred to as the proof paradox [30,35], or what we call the Cohenian paradox, which is one of the core issues in the contemporary new evidence scholarship. A paradox is generally a puzzling conclusion we seem to be driven towards by our reasoning, but which is highly counterintuitive nevertheless. ...
... In this paper, we will respond to the Cohenian paradox and its related variants, such as Blue Bus and Prisoner [30]. We will then provide a possible probabilistic resolution. ...
... We will then provide a possible probabilistic resolution. Although the Cohenian paradox contains many difficulties, we do not intend, and are not able, to discuss all of them in this paper; we confine our discussion to the following three: the difficulties of probability selection, calculation or computation, and conjunction. These difficulties are sometimes called the judicial proof paradox [28,30]. ...
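A brief worked illustration of the conjunction difficulty mentioned in these excerpts (the figures are ours, chosen only for illustration): if a claim has two probabilistically independent elements, each proved to degree 0.6, the product rule gives

P(E_1 \wedge E_2) = P(E_1) \cdot P(E_2) = 0.6 \times 0.6 = 0.36,

so on a threshold account the claim as a whole falls below 0.5 even though each element clears the civil standard.
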
Article
The Cohenian paradox is one of the main themes of judicial probability theory and one of the core topics discussed by the new evidence scholarship. To resolve this paradox, evidence scholars have proposed various solutions, including legal probabilism, judicial Bayesian decision theory and relative plausibility theory. These three solutions can be classified into two approaches: probabilism and explanationism. The former includes legal probabilism and judicial Bayesian decision theory, and the latter includes the relative plausibility theory. However, the two approaches have recently begun to converge and become more intelligible to each other. For example, Welch (2020, Int. J. Evid. Proof, 24, 351-373) has recently defended the relative plausibility theory by substantially improving it with the help of Bayesian decision theory. In this paper, by contrast, we attempt to defend the probabilistic approach, legal probabilism and Bayesian decision theory, on the basis of relative plausibility theory.
... According to a widely accepted approach, standards of proof should be understood as probability thresholds that correspond to a numerical degree of subjective confidence that the judge or juror must reach on each element of the claim to justify a verdict for the party carrying the burden of persuasion (see, for example, Finkelstein and Fairley, 1970; Hamer, 2004; Kaplan, 1968; Kaye, 1999; Koehler and Shaviro, 1990; Lempert, 1977; Redmayne, 2008). For instance, this interpretation assumes that the preponderance of the evidence standard applicable in most civil cases in the United States simply means 'proven by a probability higher than 0.5'. ...
... Exactly how to understand standards of proof has been the object of much controversy in evidence scholarship. Statistically-minded authors argue that classical probability theory provides the best model for interpreting standards of proof (see, for example, Finkelstein and Fairley, 1970; Hamer, 2004; Kaplan, 1968; Kaye, 1999; Koehler and Shaviro, 1990; Lempert, 1977; Redmayne, 2008). According to these authors, standards of proof should be understood as probability thresholds ranging from 0 to 1. ...
... Such a task seems impossible. In light of these problems with objective forms of probability, proponents of the probability approach usually turn to some form of the Bayes rule for rescue (see, for example, Finkelstein and Fairley, 1970; Hamer, 2004; Kaplan, 1968; Kaye, 1999; Koehler and Shaviro, 1990; Lempert, 1977; Redmayne, 2008; Tillers and Green, 1988). One can express this rule in many different ways. ...
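One standard way of expressing the rule, in the odds form common in this literature, is

\frac{P(H \mid E)}{P(\neg H \mid E)} = \frac{P(H)}{P(\neg H)} \cdot \frac{P(E \mid H)}{P(E \mid \neg H)},

that is, posterior odds equal prior odds times the likelihood ratio. Applied, purely for illustration, to Blue Bus: the market shares supply prior odds of 0.6/0.4 = 1.5, and in the absence of any case-specific evidence the likelihood ratio is 1, so the posterior odds remain 1.5, i.e. a posterior probability of 0.6.
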
Article
In this article I address a foundational question in evidence law: how should judges and jurors reason with evidence? According to a widely accepted approach, legal fact-finding should involve a determination of whether each cause of action is proven to a specific probability. In most civil cases, the party carrying the burden of persuasion is said to need to persuade triers that the facts she needs to prevail are “more likely than not” true. The problem is that this approach is both a descriptively and normatively inadequate account of reasoning with evidence in law. It does not offer a plausible picture of how people in general, and legal fact-finders in particular, reason with evidence. And it turns out that if we try to do what the approach tells us, we end up with absurd results. Faced with these difficulties, a group of evidence scholars has proposed an alternative. According to them, legal fact-finding should involve a determination of which hypothesis best explains the admitted evidence, rather than whether each cause of action is proven to a specific probability. My main contributions in this article are twofold. First, I elaborate on the many descriptive, normative and explanatory considerations in support of an explanation-based approach to standards. Second, I offer novel replies to pressing objections against that same approach.
... 3), Ho (2008: 135-142) and Redmayne (2008: Sect. 1) for an overview. 4 Redmayne (2008), Enoch et al. (2012), Smith (2018) and Gardiner (2019) discuss this case. 5 Originally described in Nesson (1979); here I'm adopting the presentation from Redmayne (2008). ...
... 4 Redmayne (2008), Enoch et al. (2012), Smith (2018) and Gardiner (2019) discuss this case. 5 Originally described in Nesson (1979); here I'm adopting the presentation from Redmayne (2008). 6 The use of toy cases like these may muddy the waters. ...
Article
Full-text available
The notion of individualised evidence holds the key to solving the puzzle of statistical evidence, but there’s still no consensus on how exactly to define it. To make progress on the problem, epistemologists have proposed various accounts of individualised evidence in terms of causal or modal anti-luck conditions on knowledge like appropriate causation (Thomson 1986), sensitivity (Enoch et al. 2012) and safety (Pritchard 2018). In this paper, I show that each of these fails as a satisfactory anti-luck condition, and that such failure lends abductive support to the following conclusion: once the familiar anti-luck intuition on knowledge is extended to individualised evidence, no single causal or modal anti-luck condition on knowledge can succeed as the right anti-luck condition on individualised evidence. This conclusion casts serious doubt on the fruitfulness of the move from anti-luck conditions on knowledge to anti-luck conditions on individualised evidence. I expand on these doubts and point out further aspects where epistemology and the law come apart: epistemic anti-luck conditions on knowledge do not adequately characterise the legal notion of individualised evidence.
... The Gatecrasher Case, the Prison Riot Case and the Blue Bus Case started a discussion that has been going on for over forty years, and is still going strong. The problem of naked statistical evidence has been addressed by a parade of eminent scholars: Tribe (1971), Cohen (1977), Kaye (1979), Williams (1979), Eggleston (1980), Twining (1980), Nesson (1985), Fienberg (1986), Thomson (1986), Wright (1988), Allen (1991), Wasserman (1991), Posner (1999), Colyvan, Regan & Ferson (2001), Stein (2005), Schauer (2006), Redmayne (2008), Pundik (2008, 2011), Enoch, Spectre & Fisher (2012), Cheng (2013), Nunn (2015), Nance (2016), Smith (2017), Di Bello (2018), Moss (2018) and Pardo (2019). A number of different ideas on the root of the problem have been proposed, and several new ideas have been presented in recent years. ...
... As we have seen, verdicts against the defendant in the Gatecrasher Case, the Prison Riot Case and the Blue Bus Case are unacceptable in spite of the fact that the naked statistical evidence satisfies the probability-threshold set by the standard of proof. Cases with naked statistical evidence are therefore sometimes described as 'proof paradoxes' (Kaye 1979; Redmayne 2008; Pardo 2019). Some scholars have argued that these cases show that legal proof cannot be understood in terms of probability. ...
... Hajek (2007) provides further discussion. 4 For an overview, see Redmayne (2008) or Ross (2020). ...
... 8 For instance, it would be reasonable to initially adopt a heightened degree of caution in your dealings with them, for example by refraining from giving them your spare key for safekeeping as you usually would with a new arrival to your sleepy town. 5 Adapted from the famous Prisoners case in Redmayne (2008), where the potential attackers are prisoners in a yard. 6 See Enoch et. ...
Article
Full-text available
Traditional views on which beliefs are subject only to purely epistemic assessment can reject demographic profiling, even when based on seemingly robust evidence. This is because the moral failures involved in demographic profiling can be located in the decision not to suspend judgement, rather than supposing that beliefs themselves are a locus of moral evaluation. A key moral reason to suspend judgement when faced with adverse demographic evidence is to promote social equality; this explains why positive profiling is dubious in addition to more familiar cases of negative profiling and why profiling is suspect even when no particular action is at stake. My suspension-based view, while compatible with revisionary normative positions, does not presuppose them. Philosophers of all stripes can reject demographic profiling both in thought and deed.
... The Gatecrasher Case, the Prison Riot Case and the Blue Bus Case started a discussion that has been going on for over forty years, and is still going strong. The problem of naked statistical evidence has been addressed by a parade of eminent scholars: Tribe (1971), Cohen (1977), Kaye (1979), Williams (1979), Eggleston (1980), Twining (1980), Nesson (1985), Fienberg (1986), Thomson (1986), Wright (1988), Dant (1988), Allen (1991), Wasserman (1991), Posner (1999), Colyvan et al. (2001), Stein (2005), Schauer (2006), Redmayne (2008), Pundik (2008, 2011), Enoch et al. (2012), Cheng (2013), Nunn (2015), Blome-Tillmann (2015), Nance (2016), Smith (2017), Di Bello (2018), Gardiner (2018), Moss (2018) and Pardo (2019). A number of different ideas on the root of the problem have been proposed, and several new ideas have been presented in recent years. ...
... As we have seen, verdicts against the defendant in the Gatecrasher Case, the Prison Riot Case and the Blue Bus Case are unacceptable in spite of the fact that the naked statistical evidence satisfies the probability-threshold set by the standard of proof. Cases with naked statistical evidence are therefore sometimes described as 'proof paradoxes' (Kaye, 1979; Pardo, 2019; Redmayne, 2008). Some scholars have argued that these cases show that legal proof cannot be understood in terms of probability. ...
Article
The problem of ‘naked statistical evidence’ is one of the most debated issues in evidence theory. Most evidence scholars agree that it is deeply problematic to base a verdict on naked statistical evidence, but they disagree on why it is problematic, and point to different characteristics of naked statistical evidence as the root of the problem. In this article, the author discusses the merits of different solutions to the problem of naked statistical evidence, and argues for the incentive-solution: verdicts based on naked statistical evidence are unacceptable as they do not contribute in a positive way to the incentive structure for lawful behaviour.
... Some scholars have expressed reservations about these puzzles, noting that they are far removed from trial practice (Schmalbeck, 1986; Allen and Leiter, 2001). Despite these reservations, however, philosophers and legal scholars have shown a renewed interest in naked statistical evidence and the puzzles that it raises in both criminal and civil cases (see, e.g., Wasserman, 1991; Stein, 2005; Redmayne, 2008; Ho, 2008; Roth, 2010; Enoch et al., 2012; Cheng, 2013; Pritchard, 2015; Blome-Tillmann, 2015; Nunn, 2015; Pundik, 2017; Moss, 2018; Pardo, 2018; Smith, 2018; Bolinger, forthcoming; Di Bello, forthcoming). ...
Article
Full-text available
Smith (2018) argues that, unlike other forms of evidence, naked statistical evidence fails to satisfy normic support. This is his solution to the puzzles of statistical evidence in legal proof. This paper focuses on Smith's claim that DNA evidence in cold-hit cases does not satisfy normic support. I argue that if this claim is correct, virtually no other form of evidence used at trial can satisfy normic support. This is troublesome. I discuss a few ways in which Smith can respond.
... The inference is lacking in: weight (Cohen 1977, 74); appropriate causal connection (Thomson 1986); case-specificity (Stein 2005, 64-106); ability to provide the best explanation (Dant 1988; Allen and Pardo 2008); immunity to the problem of the reference class (Allen and Pardo 2007); or sensitivity to the truth (Enoch et al. 2012). 6 I am unconvinced by these epistemic accounts, because I think that not only does each one suffer from its own problems (Pundik 2008a), but they all share some common deficiencies (Pundik 2011; see also Schoeman 1987 and Redmayne 2008). For example, why should the very same inference that is condemned as epistemically objectionable nevertheless be good enough for prediction purposes? ...
Article
This paper examines Lockie’s theory of libertarian self-determinism in light of the question of prediction: “Can we know (or justifiably believe) how an agent will act, or is likely to act, freely?” I argue that, when Lockie's theory is taken to its full logical extent, free actions cannot be predicted to any degree of accuracy because, even if they have probabilities, these cannot be known. However, I suggest that this implication of his theory is actually advantageous, because it is able to explain and justify an important feature of the practices we use to determine whether someone has acted culpably: our hostility to the use of predictive evidence.
... Both pro and con currents of the new evidence scholarship have run into difficulties. Legal probabilism has been attacked along multiple fronts: 'trial by mathematics' may decrease the likelihood of accurate outcomes (Tribe, 1971); probabilism, if linked to the frequentist interpretation of probability, faces an unresolved problem of reference classes (Allen, 2017: 136; Allen and Pardo, 2007); and high probability may be insufficient for judgments of liability (Redmayne, 2008). Bayesian decision theory's juridical uses have been censured for demeaning defendants' individuality and autonomy (Wasserman, 1991) and for ignoring base rates and systematic errors in probability judgments (Allen and Leiter, 2001: 1503-1506). ...
Article
The new evidence scholarship addresses three distinct approaches: legal probabilism, Bayesian decision theory and relative plausibility theory. Each has major insights to offer, but none seems satisfactory as it stands. This paper proposes that relative plausibility theory be modified in two substantial ways. The first is by defining its key concept of plausibility, hitherto treated as primitive, by generalising the standard axioms of probability. The second is by complementing the descriptive component of the theory with a normative decision theory adapted to legal process. Because this version of decision theory is based on plausibilities rather than probabilities, it generates plausibilistic expectations as outputs. Because these outputs are comparable, they function as relative plausibilities. Hence the resulting framework is an extension of relative plausibility theory, but it retains deep ties to legal probabilism, through the proposed definition of plausibility, and to Bayesian decision theory, through the normative use of decision theory.
... There is no evidence available to show who joined in and who did not. [Adapted from Redmayne 2008] The 'standard' intuition is that it would not be appropriate in these cases to impose civil or criminal sanctions on the basis of the inculpatory statistical evidence. Such intuitions raise issues with important theoretical and practical ramifications for the law. ...
Article
Full-text available
A question, long discussed by legal scholars, has recently provoked a considerable amount of philosophical attention: 'Is it ever appropriate to base a legal verdict on statistical evidence alone?' Many philosophers who have considered this question reject legal reliance on bare statistics, even when the odds of error are extremely low. This paper develops a puzzle for the dominant theories concerning why we should eschew bare statistics. Namely, there seem to be compelling scenarios in which there are multiple sources of incriminating statistical evidence. As we conjoin together different types of statistical evidence, it becomes increasingly incredible to suppose that a positive verdict would be impermissible. I suggest that none of the dominant views in the literature can easily accommodate such cases, and close by offering a diagnosis of my own.
... , Buchak (2014), Blome-Tillmann (2015), Gardiner (2018, forthcoming-b), Moss (2018b), and Bolinger (2020). For surveys, see Redmayne (2008), Gardiner (2019a), and Ross (2021). ...
Article
Full-text available
This essay presents a unified account of safety, sensitivity, and severe testing. S’s belief is safe iff, roughly, S could not easily have falsely believed p, and S’s belief is sensitive iff were p false S would not believe p. These two conditions are typically viewed as rivals but, we argue, they instead play symbiotic roles. Safety and sensitivity are both valuable epistemic conditions, and the relevant alternatives framework provides the scaffolding for their mutually supportive roles. The relevant alternatives condition holds that a belief is warranted only if the evidence rules out relevant error possibilities. The safety condition helps categorise relevant from irrelevant possibilities. The sensitivity condition captures ‘ruling out’. Safety, sensitivity, and the relevant alternatives condition are typically presented as conditions on warranted belief or knowledge. But these properties, once generalised, help characterise other epistemic phenomena, including warranted inference, legal verdicts, scientific claims, reaching conclusions, addressing questions, warranted assertion, and the epistemic force of corroborating evidence. We introduce and explain Mayo’s severe testing account of statistical inference. A hypothesis is severely tested to the extent it passes tests that probably would have found errors, were they present. We argue Mayo’s account is fruitfully understood using the resulting relevant alternatives framework. Recasting Mayo’s condition using the conceptual framework of contemporary epistemology helps forge fruitful connections between two research areas—philosophy of statistics and the analysis of knowledge—not currently in sufficient dialogue. The resulting union benefits both research areas.
... 469, 58 N.E.2d 754, 1945). See also Redmayne (2008) and Enoch et al. (2012). 4 See, for example, Keane (1996: Chap. ...
Article
Full-text available
The primary aim of this paper is to defend the Lockean View—the view that a belief is epistemically justified iff it is highly probable—against a new family of objections. According to these objections, broadly speaking, the Lockean View ought to be abandoned because it is incompatible with, or difficult to square with, our judgments surrounding certain legal cases. I distinguish and explore three different versions of these objections—The Conviction Argument, the Argument from Assertion and Practical Reasoning, and the Comparative Probabilities Argument—but argue that none of them are successful. I also present some very general reasons for being pessimistic about the overall strategy of using legal considerations to evaluate epistemic theories; as we will see, there are good reasons to think that many of the considerations relevant to legal theorizing are ultimately irrelevant to epistemic theorizing.
... The proof paradox begins from the thought that deciding a legal case on the basis of statistical evidence alone can seem problematic.' See also Cohen (1977), Di Bello (2019), Enoch, Spectre, and Fisher (2012), Moss (2018), Redmayne (2008), Thomson (1986) and Tribe (1971). 18 Some remain unconvinced that these disparate puzzles call for a unified explanation (see for instance Backes 2020). ...
Article
Full-text available
Do privacy rights restrict what is permissible to infer about others based on statistical evidence? This paper replies affirmatively by defending the following symmetry: there is not necessarily a morally relevant difference between directly appropriating people’s private information—say, by using an X-ray device on their private safes—and using predictive technologies to infer the same content, at least in cases where the evidence has a roughly similar probative value. This conclusion is of theoretical interest because a comprehensive justification of the thought that statistical inferences can violate privacy rights is lacking in the current literature. Secondly, the conclusion is of practical interest due to the need for moral assessment of emerging predictive algorithms.
... It is clear that this evidence does make it more than 50% likely that the bus involved was a Blue-Bus bus; given this evidence, we would sooner bet on the bus being a Blue-Bus bus than a Red-Bus bus. Although the plaintiff has succeeded in making her claim more than 50% likely, most agree that it would be unjust for the Blue-Bus company to be held liable on this basis (Allensworth, 2009; Enoch et al., 2012; Kaye, 1982; Redmayne, 2008; Stein, 2005, ch. 3; Thomson, 1986). ...
Article
Full-text available
The standard of proof applied in civil trials is the preponderance of evidence, often said to be met when a proposition is shown to be more than 50% likely to be true. A number of theorists have argued that this 50%+ standard is too weak—there are circumstances in which a court should find that the defendant is not liable, even though the evidence presented makes it more than 50% likely that the plaintiff’s claim is true. In this paper, I will recapitulate the familiar arguments for this thesis, before defending a more radical one: The 50%+ standard is also too strong—there are circumstances in which a court should find that a defendant is liable, even though the evidence presented makes it less than 50% likely that the plaintiff’s claim is true. I will argue that the latter thesis follows naturally from the former once we accept that the parties in a civil trial are to be treated equally. I will conclude by sketching an alternative interpretation of the civil standard of proof.
... The 100th prisoner played no role in the assault and could have done nothing to stop it. There is no further information that we can use to settle the question of any particular prisoner's involvement [Redmayne 2008]. The BLUE BUS scenario concerns liability for a civil wrong (a negligent harm, typically called a 'tort'), while the PRISONERS case primarily concerns criminal wrongdoing, although many crimes can also be pursued as civil wrongs. In this paper the focus will be on civil law, although I will make some dialectical observations encompassing criminal cases. ...
Article
Full-text available
This paper defends the heretical view that, at least in some cases, we ought to assign legal liability based on purely statistical evidence. The argument draws on prominent civil law litigation concerning pharmaceutical negligence and asbestos-poisoning. The overall aim is to illustrate moral pitfalls that result from supposing that it is never appropriate to rely on bare statistics when settling a legal dispute.
... Thomson's account has been often and severely criticised because she does not spell out the causal relation that figures so prominently in her account (see Redmayne (2008), Enoch et al. (2012), Enoch and Fisher (2015), Gardiner (2018)). Now, if we specify Thomson's cause as a Lewisian difference maker understood in terms of our conditionals, our analysis of epistemic sensitivity coincides with her causal account. ...
Article
Full-text available
In this paper, we put forth an analysis of sensitivity which aims to discern individual from merely statistical evidence. We argue that sensitivity is not to be understood as a factive concept, but as a purely epistemic one. Our resulting analysis of epistemic sensitivity gives rise to an account of legal proof on which a defendant is only found liable based on epistemically sensitive evidence.
... It does not refer to any doxastic or epistemic state. 5 Wording taken from Redmayne (2008). 6 See Wells (1992). ...
Article
Full-text available
Recently, the practice of deciding legal cases on purely statistical evidence has been widely criticised. Many feel uncomfortable with finding someone guilty on the basis of bare probabilities, even though the chance of error might be stupendously small. This is an important issue: with the rise of DNA profiling, courts are increasingly faced with purely statistical evidence. A prominent line of argument-endorsed by Blome-Tillmann 2017; Smith 2018; and Littlejohn 2018-rejects the use of such evidence by appealing to epistemic norms that apply to individual inquirers. My aim in this paper is to rehabilitate purely statistical evidence by arguing that, given the broader aims of legal systems, there are scenarios in which relying on such evidence is appropriate. Along the way I explain why popular arguments appealing to individual epistemic norms to reject legal reliance on bare statistics are unconvincing, by showing that courts and individuals face different epistemic predicaments (in short, individuals can hedge when confronted with statistical evidence, whilst legal tribunals cannot). I also correct some misconceptions about legal practice that have found their way into the recent literature.
... The 100th prisoner played no role in the assault and could have done nothing to stop it. There is no further information that we can use to settle the question of any particular prisoner's involvement (Redmayne 2008). ...
Article
Full-text available
There is much to like about the idea that justification should be understood in terms of normality or normic support (Smith in Between probability and certainty, Oxford University Press, Oxford, 2016; Goodman and Salow in Philosophical Studies 175: 183–196, 2018). The view does a nice job explaining why we should think that lottery beliefs differ in justificatory status from mundane perceptual or testimonial beliefs. And it seems to do that in a way that is friendly to a broadly internalist approach to justification. In spite of its attractions, we think that the normic support view faces two serious challenges. The first is that it delivers the wrong result in preface cases. Such cases suggest that the view is either too sceptical or too externalist. The second is that the view struggles with certain kinds of Moorean absurdities. It turns out that these problems can easily be avoided. If we think of normality as a condition on knowledge, we can characterise justification in terms of its connection to knowledge and thereby avoid the difficulties discussed here. The resulting view does an equally good job explaining why we should think that our perceptual and testimonial beliefs are justified when lottery beliefs cannot be. Thus, it seems that little could be lost and much could be gained by revising the proposal and adopting a view on which it is knowledge, not justification, that depends directly upon normality.
... The intricacy of the standard claim that a discipline is as good as its foundations surfaces critically at the intersection between the law of evidence and its analysis. The debate on the theoretical and philosophical underpinnings of legal evidence, especially criminal evidence, seems stalled and plagued by fundamental paradoxes (Redmayne 2008). For one thing, the epistemological framework underlying the criminal process, i.e. common-sense philosophy, cannot deliver what it promises, especially in our increasingly complex world: valid inferential relations between the evidence and the verdict (Kotsoglou 2015). ...
Article
Full-text available
The present article proceeds from the mainstream view that the conceptual framework underpinning adversarial systems of criminal adjudication, i.e. a mixture of common-sense philosophy and probabilistic analysis, is unsustainable. In order to provide fact-finders with an operable structure of justification, we need to turn to epistemology once again. The article proceeds in three parts. First, I examine the structural features of justification and how various theories have attempted to overcome Agrippa’s trilemma. Second, I put Inferential Contextualism to the test and show that a defeasible structure of justification allocating epistemic rights and duties to all participants of an inquiry manages to dissolve the problem of scepticism. Third, I show that our epistemic practice already embodies a contextualist mechanism. Our problem was not that our Standard of Proof is inoperable but that it was not adequately conceptualized. Contextualism provides the framework to articulate the abovementioned practice and to treat ‘reasonable doubts’ as a mechanism which we can now describe in detail. The seemingly insurmountable problem with our efforts to define the concept “reasonable doubts” was the fact that we have been conflating the surface features of this mechanism and its internal structure, i.e. the rules for its use.
... This body of work includes numerous examples of puzzles intended to demonstrate that probabilistic reasoning leads to errors or 'paradoxes' in the legal context. While work such as [5,6,10,15,24] and [28,29,31,32] has addressed and contested some of these so-called legal paradoxes, they continue to play a role in the strong resistance to the idea of using Bayesian probability in the law [18]. While it is primarily legal scholars who are involved in such discussions, there is no doubt that the concerns raised have influenced judges and practicing lawyers; for example, the paradoxes are discussed in standard textbooks on criminal evidence such as [30] and underlie judgements against the use of Bayes in the law, such as in the cases discussed in [16]. ...
Article
Examples of reasoning problems such as the twins problem and poison paradox have been proposed by legal scholars to demonstrate the limitations of probability theory in legal reasoning. Specifically, such problems are intended to show that use of probability theory results in legal paradoxes. As such, these problems have been a powerful detriment to the use of probability theory – and particularly Bayes theorem – in the law. However, the examples only lead to ‘paradoxes’ under an artificially constrained view of probability theory and the use of the so-called likelihood ratio, in which multiple related hypotheses and pieces of evidence are squeezed into a single hypothesis variable and a single evidence variable. When the distinct relevant hypotheses and evidence are described properly in a causal model (a Bayesian network), the paradoxes vanish. In addition to the twins problem and poison paradox, we demonstrate this for the food tray example, the abuse paradox and the small town murder problem. Moreover, the resulting Bayesian networks provide a powerful framework for legal reasoning.
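
As a generic, purely illustrative sketch of the kind of computation a small Bayesian network performs (this is not a reconstruction of the paper's own models of the twins problem or the poison paradox; the numbers and names below are ours):

# Two-node sketch: a hypothesis H with a prior, and evidence E with
# conditional probabilities P(E|H) and P(E|not-H). The posterior P(H|E)
# is obtained by enumerating the two states of H, as a Bayesian network
# engine would.

def posterior(prior_h: float, p_e_given_h: float, p_e_given_not_h: float) -> float:
    """Return P(H | E) via Bayes' theorem."""
    joint_h = prior_h * p_e_given_h
    joint_not_h = (1.0 - prior_h) * p_e_given_not_h
    return joint_h / (joint_h + joint_not_h)

# Illustrative figures: a 0.5 prior and evidence four times as probable
# under H as under not-H.
print(posterior(0.5, 0.8, 0.2))  # 0.8

The abstract's point is that keeping the distinct relevant hypotheses and pieces of evidence as separate nodes, rather than squeezing them into a single hypothesis variable and a single evidence variable, is what makes the apparent paradoxes vanish.
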
... And yet, it seems that the court should not make any such finding. To hold the Blue-Bus company liable, purely on the basis of its large market share, would seem palpably unjust (Kaye 1982, section I; Redmayne 2008; Allensworth 2009, section IIB). But to judge the Blue-Bus company not liable - which is the only ... 3 Other cases that are sometimes cited in this regard include Virginia & S.W. Ry. ...
Article
Full-text available
There is something puzzling about statistical evidence. One place this manifests is in the law, where courts are reluctant to base affirmative verdicts on evidence that is purely statistical, in spite of the fact that it is perfectly capable of meeting the standards of proof enshrined in legal doctrine. After surveying some proposed explanations for this, I shall outline a new approach - one that makes use of a notion of normalcy that is distinct from the idea of statistical frequency. The puzzle is not, however, merely a legal one. Our unwillingness to base beliefs on statistical evidence is by no means limited to the courtroom, and is at odds with almost every general principle that epistemologists have proposed as to how we ought to manage our beliefs.
... As with the Paradox of the Gatecrasher, Prison Yard is offered by many philosophers (Di Bello 2019) and legal theorists (Redmayne 2008) to demonstrate a mismatch between common intuitions and actual legal practice, on the one hand, and the result indicated by the statistics, on the other. And with few exceptions, the philosophical literature leans heavily in the direction of defending the intuitions against the statistics. ...
Article
Philosophical debates over statistical evidence have long been framed and dominated by L. Jonathan Cohen's Paradox of the Gatecrasher and a related hypothetical example commonly called Prison Yard. These examples, however, raise an issue not discussed in the large and growing literature on statistical evidence – the question of what statistical evidence is supposed to be evidence of. In actual practice, the legal system does not start with a defendant and then attempt to determine if that defendant has committed some unspecified or under-specified act, as these examples appear to suppose. Rather, both criminal and civil litigation start with a sufficiently specified act and then attempt to determine if the defendant has committed it. And when we start with a more fully specified act, the statistics look very different, and these prominent examples no longer present the paradox they are claimed to support. Examining the issue of specification, however, does more than simply undercut the prominent examples in a long and extensive literature. The examination also raises normative issues challenging the legal system's traditional reluctance to base liability on the conjunction of probabilities.
... 7 Other scholars defended the opposing view, i.e., using statistical evidence is not necessarily a failure to treat the defendant as an individual, where principle M is conceived as respecting people's autonomy. See [14], [15]. 8 In profiling, one forms or acts on beliefs based on statistical evidence about a trait or behavior of a person. ...
Preprint
A recent paper (Hedden 2021) has argued that most of the group fairness constraints discussed in the machine learning literature are not necessary conditions for the fairness of predictions, and hence that there are no genuine fairness metrics. This is proven by discussing a special case of a fair prediction. In our paper, we show that Hedden's argument does not hold for the most common kind of predictions used in data science, which are about people and based on data from similar people; we call these human-group-based practices. We argue that there is a morally salient distinction between human-group-based practices and those that are based on data of only one person, which we call human-individual-based practices. Thus, what may be a necessary condition for the fairness of human-group-based practices may not be a necessary condition for the fairness of human-individual-based practices, on which Hedden's argument is based. Accordingly, the group fairness metrics discussed in the machine learning literature may still be relevant for most applications of prediction-based decision making.
... There is no evidence available to show who joined in and who did not. [Adapted from Redmayne 2008] On a purely probabilistic conception of criminal proof, such evidence should suffice for convicting an individual prisoner, given the strong degree of probabilistic support for guilt. But many have a strong intuitive resistance to this idea. ...
Article
Full-text available
Legal epistemology has been an area of great philosophical growth since the turn of the century. But recently, a number of philosophers have argued the entire project is misguided, claiming that it relies on an illicit transposition of the norms of individual epistemology to the legal arena. This paper uses these objections as a foil to consider the foundations of legal epistemology, particularly as it applies to the criminal law. The aim is to clarify the fundamental commitments of legal epistemology and suggest a way to vindicate it.
... Other scholars defended the opposing view, i.e., using statistical evidence is not necessarily a failure to treat the defendant as an individual, where principle M is conceived as respecting people's autonomy. See [14], [15]. 8 In profiling, one forms or acts on beliefs based on statistical evidence about a trait or behavior of a person. In racial profiling, one forms or acts on beliefs based on statistical evidence about a person's race. ...
Conference Paper
In a recent paper, Brian Hedden has argued that most of the group fairness constraints discussed in the machine learning literature are not necessary conditions for the fairness of predictions, and hence that there are no genuine fairness metrics. This is proven by discussing a special case of a fair prediction. In our paper, we show that Hedden’s argument does not hold for the most common kind of predictions used in data science, which are about people and based on data from similar people; we call these “human-group-based practices.” We argue that there is a morally salient distinction between human-group-based practices and those that are based on data of only one person, which we call “human-individual-based practices.” Thus, what may be a necessary condition for the fairness of human-group-based practices may not be a necessary condition for the fairness of human-individual-based practices, on which Hedden’s argument is based. Accordingly, the group fairness metrics discussed in the machine learning literature may still be relevant for most applications of prediction-based decision making.
... For discussion of the former, see Cohen (1977). For discussion of the latter, see Nesson (1979) and Redmayne (2008). 16 We could also, of course, consider the legal risk event of the Blue Bus Company not being found liable when their bus was in fact the cause of the accident (just as there is a corresponding legal risk event in the criminal case of a wrongful acquittal), but I take it that in this scenario this is the target risk event. ...
Article
Full-text available
This paper offers an articulation and defence of the modal account of legal risk in light of a range of objections that have been proposed against this view in the recent literature. It is argued that these objections all trade on a failure to distinguish between the modal nature of risk more generally, and the application of this modal account to particular decision-making contexts, such as legal contexts, where one must rely on a restricted body of information. It is argued that once the modal account of legal risk is properly understood as involving information-relative judgements about the modal closeness of the target risk event, the objections to the view are neutralized.
Article
Suppose one hundred prisoners are in a yard under the supervision of a guard, and at some point, ninety-nine of them collectively kill the guard. If, after the fact, a prisoner is picked at random and tried, the probability of his guilt is 99%. But despite the high probability, the statistical chances, by themselves, seem insufficient to justify a conviction. The question is why. Two arguments are offered. The first, decision-theoretic argument shows that a conviction solely based on the statistics in the prisoner scenario is unacceptable so long as the goal of expected utility maximization is combined with fairness constraints. The second, risk-based argument shows that a conviction solely based on the statistics in the prisoner scenario lets the risk of mistaken conviction surge potentially too high. The same, by contrast, cannot be said of convictions solely based on DNA evidence or eyewitness testimony. A noteworthy feature of the two arguments in the paper is that they are not confined to criminal trials and can in fact be extended to civil trials.
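
For readers unfamiliar with the decision-theoretic setup the first argument builds on, here is the standard expected-utility threshold (a textbook derivation, not Di Bello's own formulation, and with purely illustrative utilities). Writing U_{cg}, U_{ci}, U_{ag}, U_{ai} for the utilities of convicting the guilty, convicting the innocent, acquitting the guilty and acquitting the innocent, expected utility maximization recommends conviction iff

p \cdot U_{cg} + (1-p) \cdot U_{ci} \ge p \cdot U_{ag} + (1-p) \cdot U_{ai},

which rearranges to

p \ge \frac{U_{ai} - U_{ci}}{(U_{cg} - U_{ag}) + (U_{ai} - U_{ci})}.

If, say, a mistaken conviction is treated as ten times as bad as a mistaken acquittal, the threshold is 10/11, roughly 0.91, which the 0.99 figure in the prisoner scenario clears; with such utilities it is the added fairness constraints, not expected utility maximization alone, that block the conviction.
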
Article
We argue that the laws of probability promote coherent fact-finding and avoid potentially unjust logical contradictions. But we do not argue that a probabilistic Bayesian approach is sufficient or even necessary for good fact-finding. First, we explain the use of probability reasoning in Re D (A Child) [2014] EWHC 121 (Fam) and Re L (A Child) [2017] EWHC 3707 (Fam). Then we criticise the attack on this probabilistic reasoning found in Re A (Children) [2018] EWCA Civ 171, which is the appeal decision on Re L. We conclude that the attack is unjustified and that the probability statements in the two cases were both valid and useful. We also use probabilistic reasoning to enlighten legal principles related to inherent probability, the Binary Method and the blue bus paradox.
Article
Full-text available
Recent years have seen fresh impetus brought to debates about the proper role of statistical evidence in the law. Much of this work centres on a set of puzzles known as the 'proof paradox'. While these puzzles may initially seem academic, they have important ramifications for the law: raising key conceptual questions about legal proof, and practical questions about DNA evidence. This article introduces the proof paradox, explains why we should care about it, and surveys new work attempting to resolve it.
Article
In the debate about the legal value of naked statistical evidence, Di Bello argues that (1) the likelihood ratio of such evidence is unknown, (2) the decision-theoretic considerations indicate that a conviction based on such evidence is unacceptable when expected utility maximization is combined with fairness constraints, and (3) the risk of mistaken conviction based on such evidence cannot be evaluated and is potentially too high. We argue that Di Bello’s argument for (1) works in a rather narrow context, and that (1) is not exactly in line with the way expert witnesses are required to formulate their opinions. Consequently, Di Bello’s argument for (2), which assumes (1), does not apply uniformly to all convictions based on naked evidence. Moreover, if Di Bello’s analysis is correct, it applies also to eyewitness testimony, given empirical results about its quality, and so the distinctions drawn by Di Bello cut across the distinction between naked statistical evidence and other types of evidence. Finally, if we weaken the rather strong requirement of precise measurability of the risk of mistaken conviction, to the availability of reasonable but imprecise and fallible estimates, many field and empirical studies show that often the risk of mistaken conviction based on naked statistical evidence can be estimated to a similar extent as the risk of mistaken conviction based on any other sort of evidence.
Article
Many theorists hold that outright verdicts based on bare statistical evidence are unwarranted. Bare statistical evidence may support high credence, on these views, but does not support outright belief or legal verdicts of culpability. The vignettes that constitute the lottery paradox and the proof paradox are marshalled to support this claim. Some theorists argue, furthermore, that examples of profiling also indicate that bare statistical evidence is insufficient for warranting outright verdicts. I examine Pritchard's and Buchak's treatments of these three kinds of case. Pritchard argues that his safety condition explains the insufficiency of bare statistical evidence for outright verdicts in each of the three cases, while Buchak argues that her treatment of the distinction between credence and belief explains this. In these discussions the three kinds of cases – lottery, proof paradox, and profiling – are treated alike. The cases are taken to exhibit the same epistemic features. I identify significant overlooked epistemic differences amongst these three cases; these differences cast doubt on Pritchard's explanation of the insufficiency of bare statistical evidence for outright verdicts. Finally, I raise the question of whether we should aim for a unified explanation of the three paradoxes.
Article
This paper defends the heretical view that sometimes we ought to assign legal liability based on statistical evidence alone. Recent literature focuses on potential unfairness to the defending party if we rely on bare statistics. Here, I show that capitulating in response to ‘epistemic gaps’—cases where there is a group of potential harmers but an absence of individuating evidence—can amount to a serious injustice against the party who has been harmed. Drawing on prominent civil law litigation involving pharmaceutical and industrial negligence, the overall aim is to illustrate moral pitfalls stemming from the popular idea that it is never appropriate to rely on bare statistics when settling a legal dispute.
Article
This article is a critical review of the growing literature that applies probability analysis to past convictions, in the context of determining guilt in criminal trials. Recent arguments for potentially relaxing rules that exclude past conviction evidence are sustained, but particular flaws and limitations in the theses from Hamer (2019, The significant probative value of tendency evidence. Melbourne University Law Review 42, 506–550) and Redmayne (2015, Character in the criminal trial. Oxford University Press) are exposed. Much of the critique of Redmayne (2015) made by Robinson (2020, Incorporating implicit knowledge into the Bayesian model of prior conviction evidence: some reality checks for the theory of comparative propensity. Law, Probability and Risk 19, 119–137) is dismissed. We should aim to foster a continued lively debate in the literature, gather more data, and narrow the distance between those arguing about theoretical probability analysis and those focused on actual courtroom usage of past conviction evidence.
Article
In this paper, I offer three different arguments against the view that knowledge is the epistemic norm governing criminal convictions in the Anglo‐American system. The first two show that neither the truth of a juror's verdict nor the juror's belief in the defendant's guilt is necessary for voting to convict in an epistemically permissible way. Both arguments challenge the necessity dimension of the knowledge norm. I then show—by drawing on evidence that is admissible through exclusionary rules—that knowledge is also not sufficient for epistemically proper conviction. A central thesis operative in all of these arguments is that the sort of ideal epistemology underwriting the knowledge norm of conviction should be rejected and replaced with a non‐ideal approach. I then defend an alternative, justificationist norm of criminal conviction that not only avoids the problems afflicting the knowledge account, but also takes seriously the important role that narratives play in criminal courts.
Book
This book constitutes the refereed proceedings of the 4th International Conference on Logic and Argumentation, CLAR 2021, held in Hangzhou, China, in October 2021. The 20 full and 10 short papers presented together with 5 invited papers were carefully reviewed and selected from 58 submissions. The topics of accepted papers cover the focus of the CLAR series, including formal models of argumentation, a variety of logic formalisms, nonmonotonic reasoning, dispute and dialogue systems, formal treatment of preference and support, as well as applications in areas like vaccine information and processing of legal texts.
Chapter
The Cohenian paradox is one of the main themes of judicial probability theory and one of the core topics discussed by the new evidence scholarship. To resolve this paradox, evidence scholars have proposed various solutions, including legal probabilism, Bayesian decision theory, and relative plausibility theory. These three solutions can be classified into two approaches: probabilism and explanationism. The former includes legal probabilism and Bayesian decision theory, and the latter includes the relative plausibility theory. However, the two approaches have recently begun to converge and become more intelligible to each other. For example, Welch (2020) has recently defended the relative plausibility theory by substantially improving it with the help of Bayesian decision theory. In this paper, by contrast, we attempt to defend the probabilistic approach, legal probabilism and Bayesian decision theory, on the basis of relative plausibility theory.
Article
Full-text available
Many have attempted to justify various courts’ position that bare or naked statistical evidence is not sufficient for findings of liability. I provide a particular explanation by examining a different, but related, issue about when and why stereotyping is wrong. One natural explanation of wrongness of stereotyping appeals to agency. However, this has been scrutinised. In this paper, I argue that we should broaden our understanding of when and how our agency can be undermined. In particular, I argue that when we take seriously that our agency is exercised in the social world, we can see that stereotyping can and does undermine our agency by fixing the social meaning of our choices and actions as well as by reducing the quality and the kinds of choices that are available to us. Although this improves the agency-based explanation, it must be noted that undermining agency is not an overriding reason against stereotyping. Much depends on the balance of reasons that take into account moral stakes involved in a case of stereotyping. This results in a messier picture of when and why stereotyping is wrong, but I argue that this is a feature, not a bug. I end by applying this agency-based explanation to cases that have motivated the so-called Proof Paradoxes.
Article
Full-text available
Why can testimony alone be enough for findings of liability? Why can't statistical evidence alone be enough? These questions underpin the ‘Proof Paradox’. Many epistemologists have attempted to explain this paradox from a purely epistemic perspective. I call it the ‘Epistemic Project’. In this paper, I take a step back from this recent trend. Stemming from considerations about the nature and role of standards of proof, I define three requirements that any successful account in line with the Epistemic Project should meet. I then consider three recent epistemic accounts on which the standard is met when the evidence rules out modal risk (Pritchard 2018), normic risk (Ebert et al., 2020), or relevant alternatives (Gardiner 2019, 2020). I argue that none of these accounts meets all the requirements. Finally, I offer reasons to be pessimistic about the prospects of having a successful epistemic explanation of the paradox. I suggest the discussion on the proof paradox would benefit from undergoing a ‘value-turn’.
Article
Full-text available
According to the Rational Threshold View, a rational agent believes p if and only if her credence in p is equal to or greater than a certain threshold. One of the most serious challenges for this view is the problem of statistical evidence: statistical evidence is often not sufficient to make an outright belief rational, no matter how probable the target proposition is given such evidence. This indicates that rational belief is not as sensitive to statistical evidence as rational credence. The aim of this paper is twofold. First, we argue that, in addition to playing a decisive role in rationalizing outright belief, non-statistical evidence also plays a preponderant role in rationalizing credence. More precisely, when both types of evidence are present in a context, non-statistical evidence should receive a heavier weight than statistical evidence in determining rational credence. Second, based on this result, we argue that a modified version of the Rational Threshold View can avoid the problem of statistical evidence. We conclude by suggesting a possible explanation of the varying sensitivity to different types of evidence for belief and credence based on the respective aims of these attitudes.
Article
In this paper, I identify an important epistemic problem with the practice of racial profiling. Racial profiling relies on naked statistical evidence to justify reasonable suspicion. Naked statistical evidence refers to probabilities that are not created by a particular case, but that existed prior to or independently of the case under consideration (Wells, 1992). I argue that naked statistical evidence cannot justify outright belief in someone’s worthiness of suspicion, it can only justify a high credence. This is because statistical evidence fails to be causally connected to the particular case under consideration. According to our blame norms, a precondition for apt blame is that an agent has an outright belief that the agent is responsible for the act for which they are blamed; high credence cannot play this role. I argue that reasonable suspicion in the context of racial profiling frequently involves blame such that it demands the same strict evidential standards. Therefore, we can identify an important epistemic objection to this practice.
Article
Recent philosophy has paid considerable attention to the way our biases are liable to encroach upon our cognitive lives, diminishing our capacity to know and unjustly denigrating the knowledge of others. The extent of the bias, and the range of domains to which it applies, has struck some as so great as to license talk of a new form of skepticism. I argue that these depressing consequences are real and, in some ways, even more intractable than has previously been recognized. For the difficulties we face in this domain are fueled not only by implicit biases but by various other sorts of entrenched cognitive attitudes we bear toward others, whether or not we judge them to be our peers. Inasmuch as the epistemic standing of this broader set of attitudes is itself quite dubious, the problem of epistemic injustice turns out to be just one special case—albeit of a particularly nasty kind—from a broader domain of cases where the collaborative character of knowledge clashes with tendencies that make collaboration difficult. This makes the threat of skepticism all the greater, and at the same time makes it harder to see what path of escape there might be.
Article
In this paper, I provide an argument for rejecting Sarah Moss's recent account of legal proof. Moss's account is attractive in a number of ways. It provides a new version of a knowledge-based theory of legal proof that elegantly resolves a number of puzzles about mere statistical evidence in the law. Moreover, the account promises to have attractive implications for social and moral philosophy, in particular about the impermissibility of racial profiling and other harmful kinds of statistical generalisation. I show, however, that Moss's account of legal proof crucially depends on a moral norm called the rule of consideration. I argue that we have a number of reasons to be sceptical of this rule. Once we reject the rule, it is not clear that Moss's account of legal proof is either plausible or attractive.
Article
Multiple epistemological programs make use of intuitive judgments pertaining to an individual’s ability to gain knowledge from exclusively probabilistic/statistical information. This paper argues that these judgments likely form without deference to such information, instead being a function of the degree to which having knowledge is representative of an agent. Thus, these judgments fit the pattern of formation via a representativeness heuristic, like that famously described by Kahneman and Tversky to explain similar probabilistic judgments. Given this broad insensitivity to probabilistic/statistical information, it directly follows that these epistemic judgments are insensitive to a given agent’s epistemic status. From this, the paper concludes that, breaking with common epistemological practice, we cannot assume that such judgments are reliable.
Article
This article defends the importance of epistemic safety for legal evidence. Drawing on discussions of sensitivity and safety in epistemology, the article explores how similar considerations apply to legal proof. In the legal context, sensitivity concerns whether a factual finding would be made if it were false, and safety concerns how easily a factual finding could be false. The article critiques recent claims about the importance of sensitivity for the law of evidence. In particular, this critique argues that sensitivity does not have much of an effect on the value of legal evidence and that it fails to explain legal doctrine. By contrast, safety affects the quality of legal evidence, and safety better explains central features of the law of evidence, including probative value, admissibility rules, and standards of proof.
Article
Full-text available
Five studies tested the idea that people are reluctant to make pro-plaintiff liability decisions when the plaintiff's evidence is based on naked statistical evidence alone. Students (n = 740) and experienced trial judges (n = 111) averaged fewer than 10% affirmative decisions of liability when a case was based on naked statistical evidence, but averaged over 65% affirmative decisions based on other forms of evidence, even though the mathematical and subjective probabilities were the same for both types of evidence. Numerous hypotheses, including causal relevance, linkage to the specific case, and fairness to the defendant, proved inadequate to explain the data. For evidence to affect decisions, the evidence must do more than affect people's perceptions of the probabilities associated with the ultimate fact; people seem to require that suppositions regarding the ultimate fact affect their perceptions of the truth or falsity of the evidence.
Article
Full-text available
There is abundant evidence of contextual variation in the use of “S knows p.” Contextualist theories explain this variation in terms of semantic hypotheses that refer to standards of justification determined by “practical” features of either the subject’s context (Hawthorne & Stanley) or the ascriber’s context (Lewis, Cohen, & DeRose). There is extensive linguistic counterevidence to both forms. I maintain that the contextual variation of knowledge claims is better explained by common pragmatic factors. I show here that one such factor is variable strictness. “S knows p” is commonly used loosely to implicate “S is close enough to knowing p for contextually indicated purposes.” A pragmatic account may use a range of semantics, even a contextualist one. I use an invariant semantics on which knowledge requires complete justification. This combination meets the Moorean constraint as well as any linguistic theory should, and meets the intuition constraint much better than contextualism does. There is no need for ad hoc error theories. The variation in conditions of assertability and practical rationality is better explained by variably strict constraints. It follows that “S knows p” is used loosely to implicate that the conditions for asserting “p” and for using it in practical reasoning are satisfied.
Article
Permissive inferences have long served to assist state and federal prosecutors by authorizing juries to infer an essential element of a crime from proof of some other fact commonly associated with it. Professor Nesson argues, however, that this type of presumption accomplishes the goals of its legislative authors by necessarily subverting those aspects of the criminal adjudication system that tend most to secure public respect for trial verdicts. To avoid this result, he proposes alternative ways of achieving the legitimate purposes behind permissive inferences, with particular emphasis on the pending revision of the Federal Criminal Code.
Article
A DNA match statistic of, say, one in one million means that approximately one person out of every one million in a population will match that DNA profile coincidentally; alternatively, a juror might hear that one in every one hundred thousand people in, say, Houston who are not the source will match coincidentally. The studies reported here examined how the wording of such statistics affects jurors' judgments. As in the Clinton-Lewinsky study, the information provided was identical for all jurors with the exception of a single sentence that described the DNA match statistic. Half of the subjects (selected at random) received the following "single target / probability frame" (s/p) wording: "The probability that Mr. Clinton would match the genetic material if he were not the source is 0.1%." At the one in one million incidence rate, there were small differences between the single-target and multi-target framings on p(source) estimates (81% vs. 76%). The reluctance of some jurors to assign extremely high p(source) and p(guilt) values in mock DNA cases is now a fairly consistent finding in the mock juror literature. If future studies confirm the trends detected here, the legal system must take seriously the idea that the way in which a match statistic is worded by an expert or attorney can affect the way a juror thinks about the value of that evidence.
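To see how little a bare match statistic settles on its own, here is a simple back-of-the-envelope calculation; it is only an illustration under assumptions I am supplying (a hypothetical pool of 5,000,000 possible sources, a uniform prior over that pool, and no other evidence), not an analysis drawn from the studies above. With a random match probability of one in one million,

\[
\underbrace{(5{,}000{,}000 - 1)\times 10^{-6}}_{\text{expected innocent matches}} \approx 5,
\qquad
P(\text{source}\mid\text{match}) \approx \frac{1}{1+5} \approx 0.17.
\]

The point is only that the posterior probability of source depends on both the match statistic and a prior over who could have been the source, which the bare wording of the statistic leaves open; different framings may invite jurors to fill in that prior differently.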
Article
§I schematizes the evidence for an understanding of ‘know’ and of other terms of epistemic appraisal that embodies contextualism or subject‐sensitive invariantism, and distinguishes between those two approaches. §II argues that although the cases for contextualism and sensitive invariantism rely on a principle of charity in the interpretation of epistemic claims, neither approach satisfies charity fully, since both attribute meta‐linguistic errors to speakers. §III provides an equally charitable anti‐sceptical insensitive invariantist explanation of much of the same evidence as the result of psychological bias caused by salience effects. §IV suggests that the explanation appears to have implausible consequences about practical reasoning, but also that applications of contextualism or sensitive invariantism to the problem of scepticism have such consequences. §V argues that the inevitable difference between appropriateness and knowledge of appropriateness in practical reasoning, closely related to the difference between knowledge and knowledge of knowledge, explains the apparent implausibility.
Article
In this paper I examine the way appeals to pretheoretic intuition are used to support epistemological theses in general and the thesis of epistemic contextualism in particular. After outlining the sceptical puzzle and the contextualist's resolution of that puzzle, I explore the question of whether this solution fits better with our intuitive take on the puzzle than its invariantist rivals. I distinguish two kinds of fit a theory might have with pretheoretic intuitions, accommodation and explanation, and consider whether achieving either kind of fit would be a virtue for a theory. I then examine how contextualism could best claim to accommodate and explain our intuitions, building the best case that I can for contextualism, but concluding that there is no reason to accept contextualism either in the way it accommodates or in the way it explains our intuitions about the sceptical puzzle.
Article
This paper identifies conditions under which appellate courts are more and less likely to treat background probabilities (i.e., base rates) as relevant. Courts are likely to view base rates as relevant when base rates (a) arise in cases that appear to have a statistical structure, (b) are offered to rebut an it-happened-by-chance theory, (c) are computed using reference classes that incorporate specific features of the focal case, or (d) are offered in cases when it is difficult or impossible to obtain evidence of a more individuating sort. Part I discusses the nature and logical relevance of base rates. Part II describes concerns that have been raised with regard to the admissibility and sufficiency of base rates. Part III identifies the mistrust that courts have long had for probabilistic evidence generally and base rates in particular; the early antagonism toward base rates, fear of inaccurate statistics, and resistance to inferences of guilt by trait association are documented. Part IV draws on the probabilistic reasoning literature to identify a series of conditions under which base rates may be viewed more favorably by the courts. Evidence from court rulings suggests that base rates are more likely to be treated as relevant (and admissible) when they are (a) offered to rebut an "it-happened-by-chance" theory, (b) computed using refined (as opposed to general) reference classes, or (c) offered in cases where evidence of a more individuating sort cannot (in principle) be obtained.
Article
As accounts of evidential reasoning, theories of subjective probability face a serious limitation: they fail to show how features of the world should constrain probability assessments. This article surveys various theories of objective probability, noting how they overcome this problem, and highlighting the difficulties there might be in applying them to the process of fact-finding in trials. The survey highlights various common problems which theories of objective probability must confront. The purpose of the survey is, in part, to shed light on an argument about the use of Bayes' rule in fact-finding recently made by Alvin Goldman. But the survey is also intended to highlight important features of evidential reasoning that have received relatively little attention from evidence scholars: the role categorization plays in reasoning, and the link between probability and wider theories of epistemic justification.
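For reference, the form of Bayes' rule most commonly invoked in this evidence-scholarship debate is the odds version; the statement below is standard textbook material rather than a formula reproduced from the article:

\[
\frac{P(H\mid E)}{P(\lnot H\mid E)} \;=\; \frac{P(E\mid H)}{P(E\mid \lnot H)}\times\frac{P(H)}{P(\lnot H)},
\]

where H is the hypothesis of liability or guilt and E is the evidence; the first factor on the right is the likelihood ratio. The article's worry can be put in these terms: an objective theory of probability must say what, in the world, constrains the prior odds and the likelihoods that a fact-finder plugs into this formula.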
Article
Intuitively, Gettier cases are instances of justified true beliefs that are not cases of knowledge. Should we therefore conclude that knowledge is not justified true belief? Only if we have reason to trust intuition here. But intuitions are unreliable in a wide range of cases. And it can be argued that the Gettier intuitions have a greater resemblance to unreliable intuitions than to reliable intuitions. What's distinctive about the faulty intuitions, I argue, is that respecting them would mean abandoning a simple, systematic and largely successful theory in favour of a complicated, disjunctive and idiosyncratic theory. So maybe respecting the Gettier intuitions was the wrong reaction; we should instead have been explaining why we are all so easily misled by these kinds of cases.
Thomson acknowledges Gilbert Harman for comments on the paper; Harman had already discussed the Lottery paradox in GILBERT HARMAN, THOUGHT (1973), at 161.
DUNCAN PRITCHARD, EPISTEMIC LUCK (2005). See also the discussion in Duncan Pritchard, Knowledge, Luck and Lotteries, in NEW WAVES IN EPISTEMOLOGY (D. Pritchard & V. Hendricks eds., 2008).
For objections to this account, see A. Hiller & R. Neta, Safety and Epistemic Luck, 158 SYNTHESE 303 (2007);
See Pritchard, Knowledge, Luck and Lotteries, supra note 75, at 46–48.
B. Weatherson, Questioning Contextualism, in ASPECTS OF KNOWING: PHILOSOPHICAL ESSAYS (S. Hetherington ed., 2006);
See K.E. Niedermeier et al., Jurors' Use of Naked Statistical Evidence: Exploring the Basis and Implications of the Wells Effect, 76 J. PERSONALITY & SOC. PSYCHOL. 533 (1999). See also K.J. Heller, The Cognitive Psychology of Circumstantial Evidence, 105 MICH. L. REV. 241 (2006).
C. Sunstein, Moral Heuristics, 28 BEHAV. & BRAIN SCI. 531 (2005).
Thus note, for example, David Lewis's response to Unger in D. LEWIS, Illusory Innocence?, in PAPERS IN ETHICS & SOC. PHIL. (2000);
and Barbara Fried's to Sunstein in B. Fried, Moral Heuristics and the Means/End Distinction, 28 BEHAV. & BRAIN SCI. 549 (2005).