Article

Abstract

The probabilistic genotyping software Forensic Statistical Tool (FST) implements a semi-continuous model for DNA interpretation. The software omits any locus at which the sum of the allele probabilities equals or exceeds 0.97. This function has been criticized because it is neither signaled by the software nor disclosed in publications. We investigate its effect by creating a near clone of the model and applying it to five- and six-allele loci in three-person mixtures created in the expected population proportions. On average, the dropping of a locus is conservative for six-peak loci and nonconservative for five-peak loci. For persons of interest (POIs) with rare alleles, the dropping is usually conservative; for POIs with common alleles, the dropping of the locus is often nonconservative.

This article is categorized under:
  • Forensic Biology > Interpretation of Biological Evidence
  • Forensic Biology > Forensic DNA Technologies
  • Jurisprudence and Regulatory Oversight > Expert Evidence and Narrative

The probabilistic genotyping software Forensic Statistical Tool locus omission feature is demonstrated to be conservative for six-peak loci and nonconservative for five-peak loci within a three-person mixture.
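The omission rule itself is simple to state. Below is a minimal sketch of the behaviour described in the abstract, assuming the allele probabilities come from a population frequency table; the function names, data structures, and frequencies are illustrative and are not taken from the FST source code.

    # Minimal sketch of the locus-omission rule described above (not FST source code).
    # A locus is dropped from the likelihood-ratio calculation when the probabilities
    # of the alleles observed at that locus sum to 0.97 or more.

    OMISSION_THRESHOLD = 0.97  # threshold reported for FST

    def loci_retained(profile, allele_freqs, threshold=OMISSION_THRESHOLD):
        """Return the loci that would be kept in the calculation.

        profile      -- dict mapping locus name -> list of observed allele labels
        allele_freqs -- dict mapping locus name -> dict of allele label -> frequency
        """
        retained = {}
        for locus, alleles in profile.items():
            total = sum(allele_freqs[locus].get(a, 0.0) for a in set(alleles))
            if total < threshold:   # kept only while the summed probability stays below 0.97
                retained[locus] = alleles
            # otherwise the locus is silently omitted, as described in the abstract
        return retained

    # Illustrative five-allele locus with common alleles (all frequencies invented):
    profile = {"D3S1358": ["14", "15", "16", "17", "18"]}
    freqs = {"D3S1358": {"14": 0.12, "15": 0.27, "16": 0.25, "17": 0.20, "18": 0.14}}
    print(loci_retained(profile, freqs))   # -> {} because 0.98 >= 0.97, so the locus is dropped

With five or six common alleles observed at a locus, the summed probabilities can easily reach this threshold, which is the situation the article examines.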

... Gasston et al. [16] find "On average, the dropping of a locus is conservative for six-peak loci and nonconservative for five-peak loci. For persons of interest (POIs) with rare alleles, the dropping is usually conservative." ...
Preprint
Full-text available
We discuss a range of miscodes found in probabilistic genotyping (PG) software and in other industries that have been reported in the literature and have been used to inform PG admissibility hearings. Every instance of the discovery of a miscode in PG software with which we have been associated has occurred because of testing, use, or repeat calculation of results, either by us or by other users. In all cases found during testing or use, something drew attention to an anomalous result; intelligent investigation then led to the examination of a small section of the code and detection of the miscode. Previously, three instances from other industries quoted in the Electronic Frontier Foundation amicus brief as part of a PG admissibility hearing (atmospheric ozone, NIMIS, and VW) and two further examples raised in relation to PG admissibility (Kerberos and Therac-25) were presented as examples of miscodes that an extensive code review could have resolved. However, we discuss how these miscodes might not have been discovered through code review alone; they could only have been detected through use of the software or through testing. Once the symptoms of a miscode have been detected, a code review is a beneficial way to diagnose the issue.
Article
DNA mixture analysis is a current topic of discussion in the forensics literature. Of particular interest is how to approach mixtures where allelic drop-out and/or drop-in may have occurred. The Office of Chief Medical Examiner (OCME) of The City of New York has developed and validated the Forensic Statistical Tool (FST), a software tool for likelihood ratio analysis of forensic DNA samples, allowing for allelic drop-out and drop-in. FST can be used for single source samples and for mixtures of DNA from two or three contributors, with or without known contributors. Drop-out and drop-in probabilities were estimated empirically through analysis of over 2000 amplifications of more than 700 mixtures and single source samples. Drop-out rates used by FST are a function of the Identifiler® locus, the quantity of template DNA amplified, the number of amplification cycles, the number of contributors to the sample, and the approximate mixture ratio (either unequal or approximately equal). Drop-out rates were estimated separately for heterozygous and homozygous genotypes. Drop-in rates used by FST are a function of number of amplification cycles only.
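The abstract above lists the factors on which the empirically estimated rates depend. A minimal sketch of how such rate tables might be organised is shown below; every number, bin boundary, and key in it is invented for illustration and does not reproduce the validated FST rates.

    # Sketch of how FST-style empirical rate tables could be organised, keyed on the
    # factors listed in the abstract above. All values below are invented.

    DROP_IN = {28: 0.0005, 31: 0.002}   # drop-in probability keyed on cycle number only

    # Drop-out keyed on (locus, template bin, cycles, contributors, mixture ratio, zygosity).
    DROP_OUT = {
        ("D8S1179", "25-100pg", 31, 3, "unequal", "het"): 0.35,
        ("D8S1179", "25-100pg", 31, 3, "unequal", "hom"): 0.12,
    }

    def template_bin(picograms):
        """Map a DNA quantity to an illustrative template bin."""
        return "25-100pg" if picograms < 100 else ">=100pg"

    def p_dropout(locus, picograms, cycles, contributors, ratio, zygosity):
        return DROP_OUT[(locus, template_bin(picograms), cycles, contributors, ratio, zygosity)]

    print(p_dropout("D8S1179", 60, 31, 3, "unequal", "het"))  # 0.35
    print(DROP_IN[31])                                        # 0.002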
Article
Although likelihood ratio (LR) based methods for analysing complex mixtures of two or more individuals that exhibit the twin phenomena of drop-out and drop-in have been in the public domain for more than a decade, progress towards widespread implementation in casework has been slow. The aim of this paper is to establish an LR-based framework using principles of the basic model recommended by the ISFG DNA commission. We use the tools in the form of open-source software (LRmix) in the Forensim package for the R software. A generalised set of guidelines has been prepared that can be used to evaluate any complex mixture. In addition, a validation framework has been proposed in order to evaluate LRs that are generated on a case-specific basis. This process is facilitated by replacing the reference profile of interest (typically the suspect's profile) with a simulated random man using Monte Carlo simulations and comparing the resulting distributions with the estimated LR. Validation is best carried out by comparison with a standard. Because LRmix is open source, we propose that it is ideally positioned to be adopted as a standard basic model for complex DNA profile tests. This should not be confused with 'the best model', since it is clear that improvements could be made over time. Nevertheless, it is highly desirable to have a methodology in place that can show whether an improvement has been achieved should additional parameters, such as allele peak heights, be incorporated into the model. To facilitate comparative studies, we provide all of the necessary data for three test examples, presented as standard tests that can be utilised to carry out comparative studies. We envisage that the resource of standard test examples will be expanded over coming years to cover a range of different case-types, in order to improve the efficacy of models, to understand their advantages and limitations, and to provide training material.
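The random-man validation step described above can be sketched in a few lines: the person of interest is repeatedly replaced by a profile simulated from population allele frequencies, the LR is recomputed each time, and the case LR is compared with the resulting distribution. The sketch below assumes a generic compute_lr function standing in for whatever LR model (for example LRmix) is being validated; all names and frequencies are illustrative.

    # Sketch of the "random man" validation described above: replace the person of
    # interest with profiles simulated from population allele frequencies, recompute
    # the LR each time, and compare the case LR against the resulting distribution.
    # compute_lr is a placeholder for whatever LR model is being validated.
    import random

    def simulate_random_profile(allele_freqs):
        """Draw a genotype at each locus by sampling two alleles from the population."""
        profile = {}
        for locus, freqs in allele_freqs.items():
            alleles, weights = zip(*freqs.items())
            profile[locus] = tuple(random.choices(alleles, weights=weights, k=2))
        return profile

    def random_man_lrs(evidence, allele_freqs, compute_lr, n=1000):
        return [compute_lr(evidence, simulate_random_profile(allele_freqs)) for _ in range(n)]

    def exceedance_rate(case_lr, random_lrs):
        """Fraction of random-man LRs at least as large as the case LR."""
        return sum(lr >= case_lr for lr in random_lrs) / len(random_lrs)

    # Toy usage with an invented single-locus frequency table and a dummy LR function:
    freqs = {"TH01": {"6": 0.23, "7": 0.19, "8": 0.11, "9.3": 0.30, "10": 0.17}}
    dummy_lr = lambda evidence, poi: 1.0 / (2.0 * freqs["TH01"][poi["TH01"][0]])
    lrs = random_man_lrs(None, freqs, dummy_lr, n=200)
    print(exceedance_rate(10.0, lrs))   # how often a random man reaches LR >= 10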
Article
Interpreting and assessing the weight of low-template DNA evidence presents a formidable challenge in forensic casework. This report describes a case in which a similar mixed DNA profile was obtained from four different bloodstains. The defense proposed that the low-level minor profile came from an alternate suspect, the defendant's mistress. The strength of the evidence was assessed using a probabilistic approach that employed likelihood ratios incorporating the probability of allelic drop-out. Logistic regression was used to model the probability of drop-out using empirical validation data from the government laboratory. The DNA profile obtained from the bloodstain described in this report is at least 47 billion times more likely if, in addition to the victim, the alternate suspect was the minor contributor, than if another unrelated individual was the minor contributor. This case illustrates the utility of the probabilistic approach for interpreting complex low-template DNA profiles.
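The drop-out modelling described above can be illustrated with a small sketch: a logistic regression relating the probability of allelic drop-out to a covariate such as the log peak height of the partner allele. The covariate choice, the training data, and the use of scikit-learn are assumptions made for illustration; they do not reproduce the laboratory's empirical validation data or fitted model.

    # Sketch of modelling the probability of allelic drop-out with logistic regression.
    # Covariate, data, and library choice are assumptions for illustration only.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Invented observations: (log10 partner-allele peak height, drop-out? 1 = yes, 0 = no)
    heights = np.array([[1.8], [2.0], [2.2], [2.4], [2.6], [2.8], [3.0], [3.2]])
    dropped = np.array([1, 1, 1, 1, 0, 0, 0, 0])

    model = LogisticRegression().fit(heights, dropped)

    def p_dropout(log10_height):
        """Estimated probability of drop-out at a given log10 peak height."""
        return model.predict_proba(np.array([[log10_height]]))[0, 1]

    print(round(p_dropout(2.1), 3))   # high drop-out probability at low peak height
    print(round(p_dropout(3.1), 3))   # low drop-out probability at high peak height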
Article
We discuss the interpretation of DNA profiles obtained from low template DNA samples. The most important challenge to interpretation in this setting arises when either or both of "drop-out" and "drop-in" create discordances between the crime scene DNA profile and the DNA profile expected under the prosecution allegation. Stutter and unbalanced peak heights are also problematic, in addition to the effects of masking from the profile of a known contributor. We outline a framework for assessing such evidence, based on likelihood ratios that involve drop-out and drop-in probabilities, and apply it to two casework examples. Our framework extends previous work, including new approaches to modelling homozygote drop-out and uncertainty in allele calls for stutter, masking and near-threshold peaks. We show that some current approaches to interpretation, such as ignoring a discrepant locus or reporting a "Random Man Not Excluded" (RMNE) probability, can be systematically unfair to defendants, sometimes extremely so. We also show that the LR can depend strongly on the assumed value for the drop-out probability, and there is typically no approximation that is useful for all values. We illustrate that ignoring the possibility of drop-in is usually unfair to defendants, and argue that under circumstances in which the prosecution relies on drop-out, it may be unsatisfactory to ignore any possibility of drop-in.
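The general shape of such a drop-out and drop-in likelihood can be sketched for a single locus and a single unknown contributor, as below. This is a simplified generic semi-continuous model, not the authors' exact formulation: the per-allele drop-out probability d, the homozygote drop-out probability d_hom, the drop-in probability c, and the allele frequencies are all illustrative placeholders.

    # Simplified single-locus, single-contributor likelihood allowing drop-out and
    # drop-in, sketched in the spirit of the framework described above.
    from itertools import combinations_with_replacement

    def locus_likelihood(observed, genotype, freqs, d=0.2, d_hom=0.04, c=0.01):
        """P(observed alleles | genotype): each genotype allele shows with probability
        1-d or drops with probability d (a homozygote drops as a unit with d_hom);
        each observed allele not explained by the genotype is a drop-in with
        probability c times its frequency, and 1-c is the probability of no drop-in."""
        observed, geno = set(observed), set(genotype)
        lik = 1.0
        if len(geno) == 1:                      # homozygote handled as a single unit
            a = next(iter(geno))
            lik *= (1.0 - d_hom) if a in observed else d_hom
        else:
            for a in geno:
                lik *= (1.0 - d) if a in observed else d
        extras = observed - geno                # observed alleles needing drop-in
        if extras:
            for a in extras:
                lik *= c * freqs[a]
        else:
            lik *= (1.0 - c)
        return lik

    def random_person_likelihood(observed, freqs, **kw):
        """Sum P(observed | genotype) P(genotype) over all genotypes (Hardy-Weinberg)."""
        total = 0.0
        for a, b in combinations_with_replacement(sorted(freqs), 2):
            p_g = freqs[a] ** 2 if a == b else 2.0 * freqs[a] * freqs[b]
            total += locus_likelihood(observed, (a, b), freqs, **kw) * p_g
        return total

    # Single-locus LR for a POI matching both observed alleles (frequencies invented):
    freqs = {"12": 0.3, "13": 0.4, "14": 0.2, "15": 0.1}
    observed, poi = ["12", "13"], ("12", "13")
    lr = locus_likelihood(observed, poi, freqs) / random_person_likelihood(observed, freqs)
    print(round(lr, 2))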
Article
By increasing the PCR amplification regime to 34 cycles, we have demonstrated that it is possible routinely to analyse <100 pg of DNA. The success rate was not improved (without impairing quality) by increasing the cycle number further. Compared to amplification of 1 ng of DNA at 28 cycles, increased imbalance of heterozygotes occurred, along with an increase in the size (peak area) of stutters. The analysis of mixtures by peak area measurement becomes increasingly difficult as the sample size is reduced. Laboratory-based contamination cannot be completely avoided, even when analysis is carried out under stringent conditions of cleanliness. A set of guidelines that utilises duplication of results to interpret profiles originating from picogram levels of DNA is introduced. We demonstrate that the duplication guideline is robust by applying a statistical theory that models three key parameters, namely the incidence of allele drop-out, laboratory contamination, and stutter. The advantage of the model is that the critical levels for each parameter can be calculated. This information may be used (for example) to determine levels of contamination that can be tolerated within the strategy employed. In addition, we demonstrate that interpreting single-banded loci, where allele drop-out could have occurred, using LR = 1/(2f_a) was conservative provided that the band was low in peak area. Furthermore, we demonstrate that an apparent mismatch between a crime-stain and a suspect's DNA profile does not necessarily result in an exclusion. The method used is complex, yet can be converted into an expert system. We envisage this to be the next step.
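The single-banded assessment mentioned above reduces to a one-line calculation: when only allele a is observed and the suspect is heterozygous a/b, the value LR = 1/(2 f_a) uses only the frequency of the observed allele. A short worked illustration with an invented frequency:

    # Worked illustration of the single-banded assessment mentioned above: only allele
    # "a" is seen, the suspect is heterozygous a/b, and the partner allele b may have
    # dropped out; the reported value uses only the frequency of the observed allele.
    f_a = 0.11                  # invented population frequency of the observed allele
    lr = 1.0 / (2.0 * f_a)      # the "LR = 1/(2 f_a)" assessment from the abstract
    print(round(lr, 2))         # about 4.55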