Article

Comparing Holistic and Atomistic Evaluation of Evidence


Abstract

Fact finders in legal trials often need to evaluate a mass of weak, contradictory and ambiguous evidence. There are two general ways to accomplish this task: by holistically forming a coherent mental representation of the case, or by atomistically assessing the probative value of each item of evidence and integrating the values according to an algorithm. Parallel constraint satisfaction (PCS) models of cognitive coherence posit that a coherent mental representation is created by discounting contradicting evidence, inflating supporting evidence and interpreting ambivalent evidence in a way coherent with the emerging decision. This leads to inflated support for whichever hypothesis the fact finder accepts as true. Using a Bayesian network to model the direct dependencies between the evidence, the intermediate hypotheses and the main hypothesis, parameterised with (conditional) subjective probabilities elicited from the subjects, I demonstrate experimentally how an atomistic evaluation of evidence leads to a convergence of the computed posterior degrees of belief in the guilt of the defendant between those who convict and those who acquit. The atomistic evaluation preserves the inherent uncertainty that largely disappears in a holistic evaluation. Since the fact finders’ posterior degree of belief in the guilt of the defendant is the relevant standard of proof in many legal systems, this result implies that with an atomistic evaluation of evidence, the threshold level of posterior belief in guilt required for a conviction may often not be reached.
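To make the atomistic route concrete, the following is a minimal sketch (in Python) of Bayesian updating on a toy guilt hypothesis with two items of evidence. It is not the network used in the study: there are no intermediate hypotheses, the evidence items are assumed conditionally independent given guilt, and all probabilities are invented for illustration.

```python
# Minimal sketch of atomistic evidence evaluation via Bayes' rule.
# Assumptions (not from the paper): two evidence items, conditionally
# independent given the guilt hypothesis G; all numbers are hypothetical.

def posterior_guilt(prior_g, likelihoods, observations):
    """Return P(G = guilty | observed evidence).

    prior_g      : prior probability of guilt
    likelihoods  : {item: (P(item present | guilty), P(item present | innocent))}
    observations : {item: True if the item of evidence was observed, else False}
    """
    p_guilty, p_innocent = prior_g, 1.0 - prior_g
    for item, present in observations.items():
        p_if_guilty, p_if_innocent = likelihoods[item]
        p_guilty *= p_if_guilty if present else 1.0 - p_if_guilty
        p_innocent *= p_if_innocent if present else 1.0 - p_if_innocent
    return p_guilty / (p_guilty + p_innocent)

# Hypothetical numbers: an eyewitness identification and a partial DNA match.
likelihoods = {
    "witness_id": (0.70, 0.20),   # (P(E | guilty), P(E | innocent))
    "dna_match":  (0.95, 0.01),
}
observed = {"witness_id": True, "dna_match": True}
print(f"P(guilty | evidence) = {posterior_guilt(0.10, likelihoods, observed):.3f}")
```

Even with two incriminating items and these particular numbers, the computed posterior stays short of a very demanding threshold such as 0.99; preserving that residual uncertainty is the point the abstract makes about atomistic evaluation.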


... Although the story model requires that a story be plausible, it does not provide much detail about how plausibility should be assessed, which portions of a story should be plausible, or how to select the best story. Seventhly, a weak piece of evidence may be given more probative force during reasoning with evidence because the mental model of the case shifts towards an interpretation coherent with the emerging theory of the case, and this coherence shift assigns more probative value to an item of evidence that has little evidential value on its own (Schweizer, 2014). Lastly, the story model is not compatible with existing trial norms. ...
... It is pointed out that jurors will sometimes refuse to condemn a defendant who has offered no defence theory but has only pointed out the flaws in the prosecution's case. On the other hand, the model requires that judges construct different stories and pick one as the best, which does not sit well with the existing norms of criminal trials (Schweizer, 2014). ...
Article
Full-text available
The argumentative approach, the probability approach, and the story model are the three normative frameworks for reasoning with judicial evidence. According to the story model, judges reach their final conclusion by going through three different stages. The model also offers certainty principles, including evidential coverage, coherence, consistency, plausibility, and structural completeness, for evaluating the stories. Several researchers have criticized the story model for not elaborating on the meaning of evidential coverage and plausibility. The story model has also been charged with failing to explain how to evaluate the evidential coverage or plausibility of a story and how to select the best story when judges construct more than one. The present study demonstrates that these shortcomings may be overcome by using anchored narrative theory, causal abductive reasoning, story schemes, critical questions, and principles of inference to the best explanation.
... It is part of what is called the human tendency to keep our vision of the world without too much dissonance (Festinger, 1957). It is also referred to as a holistic evaluation of evidence (Schweizer, 2014). And, indeed, the story as presented by, for instance, the prosecution can be appealing to such an extent that evidence pointing in the other direction is ignored (Schweizer, 2014; Wagenaar et al., 1993), for instance leading to miscarriages of justice (Gross, 1998, 2008; Gross, Jacoby, Matheson, Montgomery & Patel, 2005; Huff & Killias, 2008). ...
Chapter
Full-text available
Are the cognitive sciences relevant for law? How do they influence legal theory and practice? Should lawyers become part-time cognitive scientists? The recent advances in the cognitive sciences have reshaped our conceptions of human decision-making and behavior. Many claim, for instance, that we can no longer view ourselves as purely rational agents equipped with free will. This change is vitally important for lawyers, who are forced to rethink the foundations of their theories and the framework of legal practice. Featuring multidisciplinary scholars from around the world, this book offers a comprehensive overview of the emerging field of law and the cognitive sciences. It develops new theories and provides often provocative insights into the relationship between the cognitive sciences and various dimensions of the law including legal philosophy and methodology, doctrinal issues, and evidence.
... Support for the use of LRs came, for example, from Swiss legal medicine as early as 2001 (Bär, 2001), but it was pointed out that jurisprudence still lacks an adequate understanding of such appraisal formats. A recent study even focussed on comparing holistic and atomistic evaluation of evidence using a Bayesian network (Schweizer, 2014). ...
Article
Interpretation concepts of scientific evidence have always been under discussion among forensic scientists and among all stakeholders of criminal proceedings in general. It seems that this issue has been attracting more attention since the introduction of the case assessment and interpretation (CAI) model in the late nineties and even more since the release of the National Academies of Science report ‘Strengthening Forensic Science in the United States’ in 2009. Following the debates there is, however, a certain danger of overcompensation if the input of stakeholders from e.g. inquisitorial criminal systems is under-represented. Without doubt, a likelihood ratio-based approach can be a powerful tool assisting in logically complex case assessments and judicial considerations of evidence. However, the application of this approach should be an option rather than an international standard as it concerns the concept of the stakeholder’s roles more profoundly in some countries than in others and may possibly take some countries by surprise. In the following article, this is discussed and some proposals are put forward which appear suitable to strengthen the evaluation of forensic results by the principle of methodological pluralism rather than by an exclusive and compulsory commitment to only one approach.
Chapter
This paper explores the debate between atomistic and holistic approaches to legal evidential reasoning, with two aims. The first is conceptual and analytical: it draws some distinctions to clarify in what senses a conception of legal evidential reasoning can be holistic or atomistic. The second purpose is normative: it defends a normative conception of justificatory reasoning about questions of fact in judicial fact finding that, notwithstanding its predominantly atomistic character, includes some holistic elements.
Article
Full-text available
The Article addresses three main questions. First: Why do some scholars and decision-makers take evidence assessment criteria as standards of proof and vice versa? The answer comes from the fact that some legal systems are more concerned with assessment criteria and others with standards; therefore jurists educated in different contexts tend to emphasize what they are more familiar with, and to assimilate to it what they are less familiar with. Second: Why do systems differ in those respects? Here the answer stems from the historical, institutional and procedural differences that explain why some systems are more concerned with assessment criteria and others with standards of proof. And third, assuming that both criteria and standards are necessary to legal decision-making about facts: How can a system work if it neglects one of these things? Here the Article argues that there is a functional connection between criteria and standards. The functional connection account is distinguished from a functional equivalence account, and some systems and jurisdictions are referred to in greater detail to support the functional connection claim.
Article
Full-text available
This paper explores the debate between atomistic and holistic approaches to evidential reasoning in law, with the purpose of critically assessing their contributions to a normative theory of the justification of judicial decisions concerning facts. The author recognizes that holistic theories shed light on the semantic relevance of the whole story of the case to the intelligibility of each ultimate probandum, as well as on the justificatory value of the explanatory integration between them and the whole evidentiary data available. At the same time, however, she argues that each ultimate probandum should be considered as a distinct conclusion for the purpose of its justification and that the credibility of each piece of evidence, together with the strength of its inferential link with the probandum, should be the object of an atomistic analysis.
Article
Full-text available
Unlike the evaluation of single items of scientific evidence, the formal study and analysis of the joint evaluation of several distinct items of forensic evidence has to date received only occasional, rather than systematic, attention. Questions about (i) the relationships among a set of (usually unobservable) propositions and a set of (observable) items of scientific evidence, (ii) the joint probative value of a collection of distinct items of evidence, as well as (iii) the contribution of each individual item within a given group of pieces of evidence still represent fundamental areas of research. To some degree this is remarkable, since both forensic science theory and practice, as well as many everyday inference tasks, require the consideration of multiple items, if not masses, of evidence. A recurrent and particular complication that arises in such settings is that the application of probability theory, i.e. the reference method for reasoning under uncertainty, becomes increasingly demanding. The present paper takes this as a starting point and discusses graphical probability models, i.e. Bayesian networks, as a framework within which the joint evaluation of scientific evidence can be approached in some viable way. Based on a review of the main existing contributions in this area, the article presents instances of real case studies from the author's institution in order to point out the usefulness and capacities of Bayesian networks for the probabilistic assessment of the probative value of multiple and interrelated items of evidence. A main emphasis is placed on underlying general patterns of inference, their representation and their graphical probabilistic analysis. Attention is also drawn to inferential interactions, such as redundancy, synergy and directional change. These distinguish the joint evaluation of evidence from assessments of isolated items of evidence. Together, these topics present aspects of interest to both domain experts and recipients of expert information, because they have a bearing on how multiple items of evidence are meaningfully and appropriately set into context. © The Author 2011. Published by Oxford University Press. All rights reserved.
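As a concrete illustration of one such inferential interaction, here is a hedged sketch of redundancy between two items of evidence. The structure and the numbers are my own assumptions, not a case study from the paper: two reports E1 and E2 bear on a hypothesis H but share a common reliability factor R, so the second report adds a smaller likelihood ratio once the first is known.

```python
# Redundancy sketch (assumed structure and numbers): reports E1, E2 about a
# hypothesis H share a reliability factor R, so they are not conditionally
# independent given H alone. Exact computation by summing out R.

P_R = 0.8  # probability that the shared source of the reports is reliable

def p_report(h, r):
    """P(a report is positive | H = h, R = r)."""
    return (0.9 if h else 0.1) if r else 0.5  # uninformative if unreliable

def p_obs(h, e1=None, e2=None):
    """P(observed reports | H = h), marginalising over R."""
    total = 0.0
    for r in (True, False):
        p = P_R if r else 1.0 - P_R
        for e in (e1, e2):
            if e is not None:
                p *= p_report(h, r) if e else 1.0 - p_report(h, r)
        total += p
    return total

lr_e1 = p_obs(True, e1=True) / p_obs(False, e1=True)
lr_e2_given_e1 = (p_obs(True, e1=True, e2=True) / p_obs(True, e1=True)) / (
    p_obs(False, e1=True, e2=True) / p_obs(False, e1=True))
print(f"LR(E1) = {lr_e1:.2f}, LR(E2 | E1 already known) = {lr_e2_given_e1:.2f}")
```

The second likelihood ratio comes out smaller than the first, which is the qualitative signature of redundancy; synergy and directional change would show different joint patterns.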
Chapter
Full-text available
How do legal decision-makers reason about facts in law? A popular response appeals to probability theory, more specifically, to Bayesian theory. On the Bayesian approach, fact-finders’ inferential task consists in updating the probability of the hypothesis entailing guilt in light of the evidence at trial in the way dictated by Bayes' theorem. If, by the end of the trial, this probability is sufficiently high to meet the reasonable doubt standard, the verdict “guilty” is appropriate (Tillers and Green 1988). Bayesianism provides an elegant framework for analyzing evidentiary reasoning in law. Nonetheless, in recent decades, the Bayesian theory of legal proof has been subjected to severe criticism, which has cast serious doubt upon the possibility of explaining legal reasoning about evidence in Bayesian terms. In this paper, I explore the feasibility of an approach to legal evidence and proof alternative to the probabilistic one, to wit, an explanationist approach. According to this approach, many instances of factual reasoning in law are best understood as ‘inferences to the best explanation,’ i.e., a pattern of reasoning whereby explanatory hypotheses are formed and evaluated. More specifically, I shall argue for a coherentist approach to inference to the best explanation for law according to which factual inference in law involves first the generation of a number of plausible alternative explanations of the events being litigated at trial and then the selection, among them, of the one that is best on a test of explanatory coherence. The defense of an explanationist model of legal proof will proceed as follows. I start by giving a brief description of inference to the best explanation. I then proceed to articulate a model of inference to the best explanation for law. I shall restrict my analysis to criminal trials, even though the model is also potentially applicable to civil trials. Next, I illustrate this model by means of a well-known case, the O.J. Simpson case. I will then consider a major objection that may be raised against a model of inference to the best explanation for law, namely, the so-called ‘problem of underconsideration.’ I conclude by examining this problem in detail and suggesting some ways in which it may be overcome.
Article
Full-text available
Probabilistic reasoning fallacies in legal practice have been widely documented. Yet these fallacies continue to occur. This paper considers how best to avoid them. Although most fallacies can be easily explained and avoided by applying Bayes' Theorem, attempts to do so with lawyers using the normal formulaic approach seem doomed to failure. In our experience, for simple arguments it is possible to explain common fallacies using purely visual presentation alternatives to the formulaic version of Bayes in ways that are fully understandable to lay people. However, as the evidence (and the dependence between different items of evidence) becomes more complex, these visual approaches become infeasible. We show how Bayesian networks can be used, in conjunction with visual presentations, to address the more complex arguments in such a way that it is not necessary to expose the underlying complex Bayesian computations. In this way Bayesian networks work like an electronic calculator for complex Bayesian computations. We demonstrate this new approach in explaining well-known fallacies and a new fallacy (called the Crimewatch fallacy) that arose in a recent major murder trial in which we were expert witnesses. We also address the barriers to more widespread take-up of these methods within the legal profession, including the need to 'believe' the correctness of Bayesian calculations and the inevitable reluctance to consider subjective prior probabilities. The paper provides a number of original contributions: a classification of fallacies that is conceptually simpler than previous approaches; the new fallacy; and a proposal, whose necessity is based on real case experience, for a radically new means of fully exploiting Bayes to enhance legal reasoning.
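For instance, one widely documented error of this kind is the prosecutor's fallacy, the transposition of P(evidence | innocence) into P(innocence | evidence). The sketch below is not taken from the paper; the match probability, population size and prior are hypothetical.

```python
# Prosecutor's fallacy sketch (hypothetical numbers): a tiny random-match
# probability is not the probability of innocence given the match.

p_match_if_innocent = 1e-6          # random-match probability of the trace
population = 1_000_000              # assumed pool of possible sources
prior_guilt = 1.0 / population      # flat prior over that pool
p_match_if_guilty = 1.0             # the true source always matches

numerator = p_match_if_guilty * prior_guilt
denominator = numerator + p_match_if_innocent * (1.0 - prior_guilt)
posterior_guilt = numerator / denominator

print(f"P(guilty | match) = {posterior_guilt:.3f}")
```

Reading the posterior as 0.999999 here would be the fallacy; with a uniform prior over the assumed pool it is only about 0.5.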
Article
Full-text available
How do people make legal judgments based on complex bodies of interrelated evidence? This paper outlines a novel framework for evidential reasoning using causal idioms. These idioms are based on the qualitative graphical component of Bayesian networks, and are tailored to the legal context. They can be combined and reused to model complex bodies of legal evidence. This approach is applied to witness and alibi testimony, and is illustrated with a real legal case. We show how the framework captures critical aspects of witness reliability, and the potential interrelations between witness reliabilities and other hypotheses and evidence. We report a brief empirical study on the interpretation of alibi evidence, and show that people's intuitive inferences fit well with the qualitative aspects of the idiom-based framework.
Chapter
Full-text available
Analyzes heuristics that draw inferences from information beyond mere recognition. The authors address how people make inferences, predictions, and decisions from a bundle of imperfect cues and signals. The three heuristics studied in the chapter are the Minimalist, Take The Last, and Take The Best. Findings indicate that fast and frugal heuristics that embody simple psychological mechanisms can yield inferences about a real-world environment that are at least as accurate as standard linear statistical strategies embodying classical properties of rational judgment. (PsycINFO Database Record (c) 2012 APA, all rights reserved)
Article
Full-text available
This paper evaluates four competing psychological explanations for why the jury in the O.J. Simpson murder trial reached the verdict they did: explanatory coherence, Bayesian probability theory, wishful thinking, and emotional coherence. It describes computational models that provide detailed simulations of juror reasoning for explanatory coherence, Bayesian networks, and emotional coherence, and argues that the latter account provides the most plausible explanation of the jury's decision.
Article
Full-text available
This article reports research that supports an explanation-based model of decision making applied to judicial decisions. In Experiment 1, recognition memory responses demonstrated that subjects spontaneously evaluated evidence in a legal judgment task by constructing an explanatory representation in the form of a narrative story. Furthermore, an item's membership in the story associated with the chosen or rejected verdict predicted subjects' ratings of its importance as evidence. In Experiment 2, subjects listened to evidence from criminal trials presented in various orders designed to manipulate the ease with which a particular explanatory summary of the evidence (story) could be constructed. The order manipulation shifted verdict choices in the direction of the more easily constructed story, implying that story structure causes decisions. In addition, the coherence of the explanatory story structure and the strength of alternative stories were major determinants of perceptions of strength of evidence and of confidence in the decision. (PsycINFO Database Record (c) 2012 APA, all rights reserved)
Article
Full-text available
Investigated the role of representation of evidence in the decision processes of 26 experienced jurors (aged 21–73 yrs) to test a 3-stage story model of juror decision making. The 3 stages are evidence evaluation through story construction, decision alternative representation (verdict category establishment for the juror task), and story classification (selecting the verdict category that best fits the story based on the evidence). Ss made individual decisions on the verdicts for a filmed murder trial. Extensive interviews were conducted to determine Ss' cognitive representations of the evidence in the case, the verdict categories presented in the trial judge's instructions, and the procedures they were to follow according to law to reach a verdict. Results indicate, as hypothesized, that the trial evidence was represented in a story form. Differences among Ss in cognitive representations of evidence were correlated with their verdicts, although other aspects of the decision process (verdict category representations, application of the standard of proof procedural instruction) were not. It is concluded that adequate theories of decision making must emphasize cognitive aspects of performance, such as the representation of evidence. (64 ref) (PsycINFO Database Record (c) 2012 APA, all rights reserved)
Article
Full-text available
Investigates the Story Model, N. Pennington and R. Hastie's (1986, 1988) explanation-based theory of decision making for juror decisions. In Exp 1, varying the ease with which stories could be constructed affected verdict judgments and the impact of credibility evidence. Memory for evidence in all conditions was equivalent, implying that the story structure was a mediator of decisions and of the impact of credibility evidence. In Exps 2 and 3, Ss evaluated the evidence in 3 ways. When Ss made a global judgment at the end of the case, their judgment processes followed the prescriptions of the Story Model, not of Bayesian or linear updating models. When Ss made item-by-item judgments after each evidence block, linear anchor and adjust models described their judgments. In conditions in which story construction strategies were more likely to be used, story completeness had greater effects on decisions. (PsycINFO Database Record (c) 2012 APA, all rights reserved)
Article
Full-text available
This paper argues for a coherentist theory of the justification of evidentiary judgments in law, according to which a hypothesis about the events being litigated is justified if and only if it is such that an epistemically responsible fact-finder might have accepted it as justified by virtue of its coherence in like circumstances. It claims that this version of coherentism has the resources to address a main problem facing coherence theories of evidence and legal proof, namely, the problem of the coherence bias. The paper then develops an aretaic approach to the standards of epistemic responsibility which govern legal fact-finding. It concludes by exploring some implications of the proposed account of the justification of evidentiary judgments in law for the epistemology of legal proof.
Article
Full-text available
Recent constraint satisfaction models of explanation, analogy, and decision making claim that these processes are influenced by bidirectional constraints that promote coherence. College students were asked to reach a verdict in a complex legal case involving multiple conflicting arguments, including alternative analogies to the target case. Participants rated agreement with the individual arguments both in isolation before seeing the case and again after reaching a verdict. Assessments of the individual arguments shifted so as to cohere with their emerging verdict. A cascade of spreading coherence influenced decisions made about a subsequent case involving different legal issues. Participants' memory for their initial positions also shifted so as to cohere with their final positions. The results demonstrate that constraint satisfaction can transform ambiguous inputs into coherent decisions.
Article
Full-text available
Bayesian belief networks are being increasingly used as a knowledge representation for reasoning under uncertainty. Some researchers have questioned the practicality of obtaining the numerical probabilities with sufficient precision to create belief networks for large-scale applications. In this work, we investigate how precise the probabilities need to be by measuring how imprecision in the probabilities affects diagnostic performance. We conducted a series of experiments on a set of real-world belief networks for medical diagnosis in liver and bile disease. We examined the effects on diagnostic performance of (1) varying the mappings from qualitative frequency weights into numerical probabilities, (2) adding random noise to the numerical probabilities, (3) simplifying from quaternary domains for diseases and findings—absent, mild, moderate, and severe—to binary domains—absent and present, and (4) using test cases that contain diseases outside the network. We found that even extreme differences in the probability mappings and large amounts of noise lead to only modest reductions in diagnostic performance. We found no significant effect of the simplification from quaternary to binary representation. We also found that outside diseases degraded performance modestly. Overall, these findings indicate that even highly imprecise input probabilities may not impair diagnostic performance significantly, and that simple binary representations may often be adequate. These findings of robustness suggest that belief networks are a practical representation without requiring undue precision.
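The kind of robustness check the abstract describes can be sketched on a toy model. The two-evidence network and the noise level below are assumptions of mine, not the authors' medical networks; the point is only to show the mechanics of perturbing parameters and measuring the effect on a query.

```python
# Toy robustness check (assumed model and noise level): add Gaussian noise to
# the conditional probabilities of a two-evidence model and record how far the
# posterior of the query node moves from its baseline value.
import random

def posterior(prior, lik1, lik2):
    """P(H | both items observed); lik_i = (P(E_i | H), P(E_i | not H))."""
    p_h = prior * lik1[0] * lik2[0]
    p_not_h = (1.0 - prior) * lik1[1] * lik2[1]
    return p_h / (p_h + p_not_h)

random.seed(0)
base_lik = [(0.80, 0.30), (0.70, 0.20)]
baseline = posterior(0.5, *base_lik)

shifts = []
for _ in range(10_000):
    noisy = [tuple(min(0.99, max(0.01, p + random.gauss(0.0, 0.05))) for p in lik)
             for lik in base_lik]
    shifts.append(abs(posterior(0.5, *noisy) - baseline))

print(f"baseline = {baseline:.3f}, mean |shift| = {sum(shifts) / len(shifts):.3f}")
```

The script prints the baseline posterior and the average displacement under noise, the kind of measurement the authors scale up to full diagnostic networks.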
Conference Paper
Full-text available
Common wisdom has it that small distinctions in the probabilities (parameters) quantifying a belief network do not matter much for the results of probabilistic queries. Yet, one can develop realistic scenarios under which small variations in network parameters can lead to significant changes in computed queries. A pending theoretical question is then to analytically characterize parameter changes that do or do not matter. In this paper, we study the sensitivity of probabilistic queries to changes in network parameters and prove some tight bounds on the impact that such parameters can have on queries. Our analytic results pinpoint some interesting situations under which parameter changes do or do not matter. These results are important for knowledge engineers as they help them identify influential network parameters. They also help explain some of the previous experimental results and observations with regards to network robustness against parameter changes.
Chapter
Inside the Juror presents the most interesting and sophisticated work to date on juror decision making from several traditions - social psychology, behavioural decision theory, cognitive psychology, and behavioural modeling. The authors grapple with crucial questions, such as: why do jurors who hear the same evidence and arguments in the courtroom enter the jury room with disagreements about the proper verdict? how do biases and prejudices affect jurors' decisions? and just how 'rational' is the typical juror? As an introduction to the scientific study of juror decision making in criminal trials, Inside the Juror provides a comprehensive and understandable summary of the major theories of juror decision making and the research that has been conducted to evaluate their validity.
Article
A causal network is used in a number of areas as a depiction of patterns of ‘influence’ among sets of variables. In expert systems it is common to perform ‘inference’ by means of local computations on such large but sparse networks. In general, non‐probabilistic methods are used to handle uncertainty when propagating the effects of evidence, and it has appeared that exact probabilistic methods are not computationally feasible. Motivated by an application in electromyography, we counter this claim by exploiting a range of local representations for the joint probability distribution, combined with topological changes to the original network termed ‘marrying’ and ‘filling‐in’. The resulting structure allows efficient algorithms for transfer between representations, providing rapid absorption and propagation of evidence. The scheme is first illustrated on a small, fictitious but challenging example, and the underlying theory and computational aspects are then discussed.
Article
The book that launched the Dempster–Shafer theory of belief functions appeared 40 years ago. This intellectual autobiography looks back on how I came to write the book and how its ideas played out in my later work.
Article
When cases come before courts, can we predict the outcome? Is legal reasoning rationally persuasive, working within a formal structure and using recognisable forms of arguments to produce predictable results? Or is legal reasoning mere 'rhetoric' in the pejorative sense, open to use and abuse to achieve whatever ends unscrupulous politicians, lawyers and judges desire? If the latter, what becomes of the supposed security of living under the rule of law? This book tackles these questions by presenting a theory of legal reasoning. It explains the essential role syllogism plays in reasoning used to apply the law, and the elements needed in addition to deductive reasoning to give a full explanation of how law is applied and decisions justified through the use of precedent, analogy, and principle. The book highlights that problems of interpretation, classification, and relevance will always arise when applying general legal standards to individual cases. In justifying their conclusions about such problems, judges need to be faithful to categorical legal reasons and yet fully sensitive to the particulars of the cases before them. How can this be achieved, and how should we evaluate the possible approaches judges could take to solving these problems? By addressing these issues the book asks questions at the heart of understanding the nature of law and the moral complexity of the rule of law.
Article
This chapter argues that people reason about legal evidence using small-scale qualitative networks. These cognitive networks are typically qualitative and incomplete, and based on people's causal beliefs about the specifics of the case as well as the workings of the physical and social world in general. A key feature of these networks is their ability to represent qualitative relations between hypotheses and evidence, allowing reasoners to capture the concepts of dependency and relevance critical in legal contexts. In support of this claim, the chapter introduces some novel empirical and formal work on alibi evidence, showing that people's reasoning conforms to the dictates of a qualitative Bayesian model. However, people's inferences do not always conform to Bayesian prescripts. Empirical studies are also discussed in which people over-extend the discredit of one item of evidence to other unrelated items. This bias is explained in terms of the propensity to group positive and negative evidence separately and the use of coherence-based inference mechanisms. It is argued that these cognitive processes are a natural response to deal with the complexity of legal evidence.
Book
The amount of information forensic scientists are able to offer is ever increasing, owing to vast developments in science and technology. Consequently, the complexity of evidence does not allow scientists to cope adequately with the problems it causes, or to make the required inferences. Probability theory, implemented through graphical methods, specifically Bayesian networks, offers a powerful tool to deal with this complexity, and discover valid patterns in data. Bayesian Networks and Probabilistic Inference in Forensic Science provides a unique and comprehensive introduction to the use of Bayesian networks for the evaluation of scientific evidence in forensic science. Includes self-contained introductions to both Bayesian networks and probability. Features implementation of the methodology using HUGIN, the leading Bayesian networks software. Presents basic standard networks that can be implemented in commercially and academically available software packages, and that form the core models necessary for the reader's own analysis of real cases. Provides a technique for structuring problems and organizing uncertain data based on methods and principles of scientific reasoning. Contains a method for constructing coherent and defensible arguments for the analysis and evaluation of forensic evidence. Written in a lucid style, suitable for forensic scientists with minimal mathematical background. Includes a foreword by David Schum. The clear and accessible style makes this book ideal for all forensic scientists and applied statisticians working in evidence evaluation, as well as graduate students in these areas. It will also appeal to scientists, lawyers and other professionals interested in the evaluation of forensic evidence and/or Bayesian networks.
Article
This article critically evaluates the relationship between constructing narratives and achieving factual accuracy at trials. The story model of adjudication — according to which jurors process testimony by organizing it into competing narratives — has gained wide acceptance in the descriptive work of social scientists and currency in the courtroom, but it has received little close attention from legal theorists. The article begins with a discussion of the meaning of narrative and its function at trial. It argues that the story model is incomplete, and that “legal truth” emerges from a hybrid of narrative and other means of inquiry. As a result, trials contain opportunities to promote more systematic consideration of evidence. Second, the article asserts that, to the extent the story model is descriptively correct with respect to the structure of juror decision making, it also gives rise to normative concerns about the tension between characteristic features of narrative and the truth-seeking aspirations of trial. Viewing trials through the lens of narrative theory brings sources of bias and error into focus and suggests reasons to increase the influence of analytic processes. The article then appraises improvements in trial mechanics — from prosecutorial discovery obligations through appellate review of evidentiary errors — that might account for the influence of stories. For example, a fuller understanding of narrative exposes the false assumption within limiting instructions that any piece of evidence exists in isolation. And to better inform how adjudicators respond to stories in the courtroom, the article argues for modifying instructions in terms of their candor, explanatory content, and timing.
Article
Professor Tribe considers the accuracy, appropriateness, and possible dangers of utilizing mathematical methods in the legal process, first in the actual conduct of civil and criminal trials, and then in designing procedures for the trial system as a whole. He concludes that the utility of mathematical methods for these purposes has been greatly exaggerated. Even if mathematical techniques could significantly enhance the accuracy of the trial process, Professor Tribe also shows that their inherent conflict with other important values would be too great to allow their general use.
Article
This paper reports observations from a series of formal and empirical studies of the process of assessing the probative value of evidence in the cascaded or hierarchical inference tasks commonly performed by fact finders in court trials. The formal research develops expressions that prescribe how the ingredients of various forms of evidence can be coherently combined in assessing the probative value of evidence. These expressions allow identification and systematic analysis of a wide assortment of subtle properties of evidence, many of which are commonly recognized in evidence law. The reported empirical research was designed to evaluate the consistency with which persons actually assess the probative value of evidence when they are asked to make these evaluations in several equivalent ways. Results show that persons, when required to mentally combine a large amount of probabilistic evidence, exhibit certain inconsistencies such as treating contradictory testimony as corroborative testimony and double-counting or overvaluing redundant testimony. However, when people are asked to make assessments about the fine-grained logical details of the same evidence, these inconsistencies do not occur.
Article
There is a well-settled maxim that the standard of persuasion in criminal trials—proof beyond a reasonable doubt—is unquantifiable. However, the usual reasons given for the unquantifiability of reasonable doubt are unsatisfactory; and a recent case, United States v. Copeland, serves as a reminder that strong considerations favour quantification of at least some standards of persuasion. This comment attempts to bring greater clarity to the question of the advantages and disadvantages of some form of quantification of the reasonable doubt standard.
Book
This book is an essay on how people make sense of each other and the world they live in. Making sense is the activity of fitting something puzzling into a coherent pattern of mental representations that include concepts, beliefs, goals, and actions. Paul Thagard proposes a general theory of coherence as the satisfaction of multiple interacting constraints, and discusses the theory's numerous psychological and philosophical applications. Much of human cognition can be understood in terms of coherence as constraint satisfaction, and many of the central problems of philosophy can be given coherence-based solutions. Thagard shows how coherence can help to unify psychology and philosophy, particularly when addressing questions of epistemology, metaphysics, ethics, politics, and aesthetics. He also shows how coherence can integrate cognition and emotion. Bradford Books imprint
Article
A Bayesian network (BN) is a graphical model of uncertainty that is especially well suited to legal arguments. It enables us to visualize and model dependencies between different hypotheses and pieces of evidence and to calculate the revised probability beliefs about all uncertain factors when any piece of new evidence is presented. Although BNs have been widely discussed and recently used in the context of legal arguments, there is no systematic, repeatable method for modeling legal arguments as BNs. Hence, where BNs have been used in the legal context, they are presented as completed pieces of work, with no insights into the reasoning and working that must have gone into their construction. This means the process of building BNs for legal arguments is ad hoc, with little possibility for learning and process improvement. This article directly addresses this problem by describing a method for building useful legal arguments in a consistent and repeatable way. The method complements and extends recent work by Hepler, Dawid, and Leucari (2007) on object-oriented BNs for complex legal arguments and is based on the recognition that such arguments can be built up from a small number of basic causal structures (referred to as idioms). We present a number of examples that demonstrate the practicality and usefulness of the method.
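As a flavour of what such a basic causal structure can look like, here is a hedged sketch of an evidence node whose reliability is itself uncertain (hypothesis H, accuracy A, report E). It is a generic reliability fragment written for illustration, not code from the article, and the probabilities are invented.

```python
# Reliability fragment sketch (assumed numbers): H -> E <- A, where A is the
# accuracy of the evidence source. P(H | E = true) by brute-force enumeration.

p_h = 0.5    # prior P(hypothesis true)
p_a = 0.9    # prior P(source accurate)

def p_e_given(h, a):
    """P(E reported true | H = h, A = a): tracks H when accurate, else 50/50."""
    return (1.0 if h else 0.0) if a else 0.5

numerator = denominator = 0.0
for h in (True, False):
    for a in (True, False):
        joint = (p_h if h else 1 - p_h) * (p_a if a else 1 - p_a) * p_e_given(h, a)
        denominator += joint
        if h:
            numerator += joint

print(f"P(H | E) = {numerator / denominator:.3f}")
```

Lowering p_a weakens the report's force without touching the prior on H, which is the kind of dependency between reliability and hypothesis that idiom-based modelling is meant to make explicit.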
Article
The authors of this critique of judicial decision-making contend that judges, lawyers and police believe so strongly in the narrative presented in criminal legal proceedings that they do not seek anchoring through evidence. Using the latest research from cognitive psychology, they explore the nature and purpose of anchoring as a means of firmly establishing and identifying the 'truth' and demonstrate that an alarming lack of stability and consistency exists in current systems. Such potential faults in the criminal justice system which (until recently) was popularly perceived as infallible are then explored in a range of areas: the relationship between investigation and proof, confessions, identification and wrongful conviction, reliability of witnesses, the authority of experts, effective defence, and the selection of evidence. Each chapter questions the anchoring potential and reliability of these areas, and often exposes pitfalls. By drawing extensively on actual cases, the authors are able to question the psychological basis of current systems of criminal justice. Set against a background of increasing concern over miscarriages of justice, the book will be of interest to both cognitive and forensic psychologists, and to anyone concerned with modern criminal justice systems. (PsycINFO Database Record (c) 2012 APA, all rights reserved)
Article
This article reviews a theory of explanatory coherence that provides a psychologically plausible account of how people evaluate competing explanations. The theory is implemented in a computational model that uses simple artificial neural networks to simulate many important cases of scientific and legal reasoning. Current research directions include extensions to emotional thinking and implementation in more biologically realistic neural networks. In CSI and other television crime shows, investigators collect evidence in order to determine the causes of a crime. For example, if a young woman is murdered, the police may consider as suspects the woman's boyfriend and her father. Inferences about who is the most likely culprit will be based on which hypothesis—that the boyfriend did it or that the father did it—fits best with all the available evidence. These hypotheses provide possible explanations of the evidence; for example the hypothesis that the boyfriend was the murderer may explain why his fingerprints are on the murder weapon. Conclusions about who the actual criminal was and who was innocent depend on evaluating competing explanations of the evidence. This kind of explanatory inference is ubiquitous in human thinking, ranging from mechanical repair to medical diagnosis to scientific theorizing. When your car fails to start, you consider alternative explanations such as that it is out of gas or that the battery is dead. In medicine, a physician considers possible diseases that would explain a patient's symptoms and bases a treatment plan on what he or she thinks is the most plausible diagnosis. Psychologists publishing theoretical papers often offer sets of hypotheses that they contend provide better explanations of the results of experiments than alternative theories do. Explanation evaluation is a mental process that is important in many areas of psychology. Cognitive psychologists have investigated causal reasoning, which often requires a person to determine the most likely cause of a surprising event. Social psychologists have studied how people explain the behavior of others. Clinical psychologists are sometimes interested in the emotion-laden reasoning by which people construct explanations of their own situations. In all these kinds of cases, people's thinking involves evaluating competing explanations of what they observe. But explanation evaluation is not simply a matter of determining which of two or more competing hypotheses fits best with the evidence. We may also need to consider how hypotheses fit with each other, particularly when one hypothesis provides an explanation of another. This layering of hypotheses is particularly evident in legal reasoning when questions of motive are salient. Crime investigators considering whether the boyfriend or the father is the more likely murderer will naturally consider possible motives that might explain why one of them would have wanted to kill the young woman. Hence the cognitive process of explanation evaluation must consider the fit of hypotheses with each other as well as with the evidence, so that inference involves coming up with the overall most coherent picture of what happened. This article reviews a theory of explanatory coherence that provides a psychologically plausible account of how people evaluate competing explanations. After sketching the theory, I describe how it is implemented in a computational model that uses a simple artificial neural network to evaluate competing explanations. This model has been applied to many important cases of scientific and legal reasoning. Finally, I describe current directions in the development and application of the theory of explanatory coherence, including connections with emotional thinking and implementation in more biologically realistic neural networks.
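The settling process such models rely on can be sketched in a few lines of code. The toy network below is an illustration in the spirit of explanatory-coherence models, not Thagard's ECHO implementation: the units, link weights, decay rate and update rule parameters are all assumptions, chosen only to show how mutually inhibiting hypotheses and their evidence settle into one coherent interpretation.

```python
# Toy explanatory-coherence network (assumed units and weights, not ECHO):
# excitatory links for "explains" relations, an inhibitory link between rival
# hypotheses, evidence units clamped on, and iterative activation updates.

units = ["E_fingerprints", "E_motive", "E_alibi", "H_boyfriend", "H_father"]
links = {
    ("H_boyfriend", "E_fingerprints"): 0.4,   # boyfriend hypothesis explains E1
    ("H_boyfriend", "E_motive"): 0.4,         # ... and E2
    ("H_father", "E_alibi"): 0.4,             # father hypothesis explains E3
    ("H_boyfriend", "H_father"): -0.6,        # the two hypotheses contradict
}

def weight(a, b):
    return links.get((a, b), links.get((b, a), 0.0))

act = {u: (1.0 if u.startswith("E_") else 0.01) for u in units}

for _ in range(200):                          # iterate until the net settles
    new_act = {}
    for u in units:
        if u.startswith("E_"):
            new_act[u] = act[u]               # evidence stays clamped at 1.0
            continue
        net = sum(weight(u, v) * act[v] for v in units if v != u)
        a = act[u] * 0.95                     # decay toward rest
        a += net * (1.0 - a) if net > 0 else net * (a + 1.0)
        new_act[u] = max(-1.0, min(1.0, a))
    act = new_act

print({u: round(a, 2) for u, a in act.items()})
```

With these weights the network settles with the better-supported hypothesis strongly activated and its rival pushed to negative activation, even though the rival's own evidence is still clamped on, a simple picture of the coherence shifts discussed above.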
Article
Several philosophers and evidence scholars have recently suggested that both the criminal and civil standards of proof could be modeled using the schema known as inference to the best explanation. This paper challenges that proposal, showing that being the best explanation of a certain set of facts is too weak to serve as an explication of proof beyond a reasonable doubt and too strong to capture the meaning of the preponderance of the evidence.
Article
Following an introduction by Michael Risinger, this publication preserves the postings to a discussion list for evidence professors on such topics as relevance, conditional relevance, probative value, inference, Bayes' rule, and likelihood ratios.
Article
Criminal procedure is organized as a tournament with predefined roles. We show that assuming the role of a defense counsel or prosecutor leads to role induced bias even if participants are asked to predict a court ruling after they have ceased to act in that role, and if they expect a substantial financial incentive for being accurate. The bias is not removed either if participants are instructed to predict the court ruling in preparation of plea bargaining. In line with parallel constraint satisfaction models for legal decision making, findings indicate that role induced bias is driven by coherence effects (Simon, 2004), that is, systematic information distortions in support of the favored option. This is mainly achieved by downplaying the importance of conflicting evidence. These distortions seem to stabilize interpretations, and people do not correct for this bias. Implications for legal procedure are briefly discussed.
Article
Jury members are confronted with highly complex, ill-defined problems. Coherence-based reasoning (Pennington & Hastie, 1992; Simon, 2004), which partially relies on intuitive-automatic processing (Glöckner & Betsch, 2008), empowers them to nonetheless make meaningful decisions. These processes, however, have a downside. We tested possible negative effects in a set of studies. In particular, we investigated whether standards of proof are muted by stronger coherence shifts, and whether the probative value of the evidence is not properly taken into account. We found that U.S. model jury instructions for preponderance of the evidence and beyond a reasonable doubt influence conviction rates in the intended direction and are not undermined by coherence shifts, although probabilistic estimations of these standards are inappropriate. However, even massive changes in explicitly stated probabilities, while holding the overall constellation of facts constant, did not influence conviction rates or the estimated probability of conviction. We argue that improvements for legal procedure should focus on measures to circumvent the negative side-effects of coherence-based reasoning in general and, specifically, to make probabilistic information easier for legal decision makers to evaluate.
Article
This Article presents a novel body of research in cognitive psychology called coherence-based reasoning, which has thus far been published in journals of experimental psychology. This cognitive approach challenges the stalemated conflict between the Rationalist and Critical models of decision making that have dominated legal scholarship for over a century. The experimental findings demonstrate that many legal decisions fit into neither of these models. Based on a connectionist cognitive architecture, coherence-based reasoning shows that the decision-making process progresses bi-directionally: premises and facts both determine conclusions and are affected by them in return. A natural result of this cognitive process is a skewing of the premises and facts toward inflated support for the chosen decision. The Article applies this research to four important aspects of the trial. It argues that the current doctrine in these areas is based on misconceptions about human cognition, which lead to systematic legal errors. By identifying the cognitive phenomena that lie at the root of these failings, the research makes it possible to devise interventions and introduce procedures that reduce the risk of trial error.
Article
In 1970, Michael O. Finkelstein (with William B. Fairley) proposed that under some circumstances a jury in a criminal trial might be invited to use Bayes’ Theorem to address the issue of the identity of the criminal perpetrator. In 1971, Laurence Tribe responded with a rhetorically powerful and wide-ranging attack on what he called ‘trial by mathematics’. Finkelstein responded to Tribe's attack by further explaining, refining and defending his proposal. Although Tribe soon fell silent on the use of mathematical and formal methods to dissect or regulate uncertain factual proof in legal proceedings, the Finkelstein–Tribe exchange precipitated a decades-long debate about trial by mathematics. But that debate, which continues to this day, became generally unproductive and sterile years ago. This happened in part because two misunderstandings plagued much of the debate almost from the start. The first misunderstanding was a widespread failure to appreciate that mathematics is part of a broader family of rigorous methods of reasoning, a family of methods that is often called ‘formal’. The second misunderstanding was a widespread failure to appreciate that mathematical and formal analyses (including analyses that use numbers) can have a large variety of purposes. Before any further major research project on trial by mathematics is begun, interested researchers in mathematics, probability, logic and related fields, on the one hand, and interested legal professionals, on the other hand, should try to reach agreement about the possible distinct purposes that any given mathematical or formal analysis of inconclusive argument about uncertain factual hypotheses might serve. The article lists some of those possible purposes.
Article
The fast-and-frugal heuristics approach to probabilistic inference assumes that individuals often employ simple heuristics to integrate cue information that commonly function in a non-reciprocal fashion. Specifically, the subjective validity of a certain cue remains stable during the application of a heuristic and is not changed by the presence or absence of another cue. The parallel constraint satisfaction (PCS) model, in contrast, predicts that information is processed in a reciprocal fashion. Specifically, it assumes that subjective cue validities interactively affect each other and are modified to coherently support the favored choice. Corresponding to the model's simulation, we predicted the direction and the size of such coherence shifts. Cue validities were measured before, after (Experiment 1), and during judgment (Experiments 2 and 3). Coherence shifts were found in environments involving real-world cue knowledge (weather forecasts) and in a domain for which the application of fast-and-frugal heuristics has been demonstrated (city-size tasks). The results indicate that subjective cue validities are not fixed parameters, but that they are interactively changed to form coherent representations of the task. Copyright © 2009 John Wiley & Sons, Ltd.
Article
Analytical and empirical studies of the process of legal proof have sought to explain the process through various aspects of probability theories. These probability-based explanations have neglected the extent to which explanatory considerations themselves explain juridical proof. Similar to many scientific inferences, juridical inferences turn on how well certain conclusions would explain the evidence. This inferential process, well known in the philosophy of science, is referred to as abduction or "inference to the best explanation." In this essay, we provide a detailed account of the process in general; an explanation-based account of juridical proof in particular; a comparison with probability approaches; and the theoretical and practical consequences of the debate. We demonstrate how an explanation-based approach itself better explains juridical proof, at both the macro- and micro-levels, than the probability approaches. The macro-level issues include burdens of proof in civil and criminal cases, and related issues involving summary judgment, judgments as a matter of law, and sufficiency-of-the-evidence standards. The micro-level issues include the relevance and probative value (and thus the admissibility) of any individual item of evidence, from first-hand observations to complex scientific or statistical evidence. The explanatory considerations can provide practical guidance and constraint for decision-making on each of these issues. More generally, we demonstrate that the explanatory and probability approaches are not alternatives. The explanatory considerations are more fundamental; the probability accounts are parasitic on them. Thus, to the extent that the probability approaches account accurately for and supplement explanatory considerations, they may improve our understanding; to the extent they do not, they risk mismodeling the process.
Article
This paper provides a computational characterization of coherence that applies to a wide range of philosophical problems and psychological phenomena. Maximizing coherence is a matter of maximizing satisfaction of a set of positive and negative constraints. After comparing five algorithms for maximizing coherence, we show how our characterization of coherence overcomes traditional philosophical objections about circularity and truth.
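In that characterization, a coherence problem is a weighted constraint-optimization problem over accept/reject assignments. Here is a hedged toy instance of my own (not one of the paper's benchmark problems or algorithms), solved by exhaustive search with evidence elements held as accepted:

```python
# Toy coherence maximization (assumed elements, links and weights): accept or
# reject each hypothesis so as to maximize the total weight of satisfied
# constraints. Evidence elements are treated as accepted ("data priority").
from itertools import product

evidence = ["E1", "E2"]
hypotheses = ["H1", "H2"]
positive = {("H1", "E1"): 0.4, ("H1", "E2"): 0.4, ("H2", "E2"): 0.4}  # explains
negative = {("H1", "H2"): 0.6}                                        # rivals

def coherence(assignment):
    satisfied = sum(w for (x, y), w in positive.items()
                    if assignment[x] == assignment[y])      # accepted/rejected together
    satisfied += sum(w for (x, y), w in negative.items()
                     if assignment[x] != assignment[y])     # not both accepted
    return satisfied

candidates = []
for values in product([True, False], repeat=len(hypotheses)):
    assignment = {e: True for e in evidence}                # data priority
    assignment.update(zip(hypotheses, values))
    candidates.append(assignment)

best = max(candidates, key=coherence)
print(best, "coherence =", coherence(best))
```

On this tiny instance exhaustive search is feasible and accepts H1 while rejecting H2; the algorithms the paper compares are about approximating this kind of maximum at scale.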
Book
This book provides a thorough introduction to the formal foundations and practical applications of Bayesian networks. It provides an extensive discussion of techniques for building Bayesian networks that model real-world situations, including techniques for synthesizing models from design, learning models from data, and debugging models using sensitivity analysis. It also treats exact and approximate inference algorithms at both theoretical and practical levels. The author assumes very little background on the covered subjects, supplying in-depth discussions for theoretically inclined readers and enough practical details to provide an algorithmic cookbook for the system developer.