Chapter

The new science of eyewitness memory


Abstract

The prevailing view among criminal justice and legal practitioners, and the general public, is that eyewitness evidence is generally inaccurate and unreliable. Here we argue that that perspective fails to take the full cognitive context of eyewitness reports into account. A broader view of eyewitness cognition includes both memory judgments—for example, the selection of an individual from a lineup—and an accompanying metacognitive context—for example, the level of confidence that an eyewitness places in that selection. When these components are considered jointly, eyewitness evidence is highly reliable and can be treated like any other source of evidence in the courtroom—valuable when appropriately assayed but prone to contamination. Empirical research over the past 10 years, based on the bedrock principles of Signal Detection Theory, has illuminated problems with standard historical measures that are based on intuitive theorizing about measurement. Those measures, and the results from experiments that utilize them, have misled the field regarding reform efforts and have diminished the role that eyewitness confidence should play in distinguishing accurate from inaccurate identifications. Signal detection theory, coupled with ROC analysis and confidence calibration, is pointing toward a new science of eyewitness memory. The new science shifts the blame for faulty testimony from unreliable eyewitnesses to other actors in the law enforcement and legal community—actors whose behaviors can transform low-confidence, likely inaccurate, initial identifications, into incorrect, high-confidence, courtroom identifications. Signal detection theory also highlights the role that other metacognitive factors play, as well as how to balance the two types of errors—false identifications of the innocent and missed identifications of the guilty—that inevitably arise from the eyewitness decision problem. The new science of eyewitness memory is leading a transformation in how eyewitness evidence can and should be used by the criminal justice system.
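
As a concrete illustration of the signal-detection framing described in the abstract, the sketch below (not taken from the chapter itself; all parameter values are assumptions) shows how shifting a single response criterion trades off the two error types named above, false identifications of the innocent and missed identifications of the guilty:

```python
# Minimal equal-variance signal detection sketch (illustrative only).
# Evidence for an innocent suspect ~ N(0, 1); for a guilty suspect ~ N(d_prime, 1).
# A witness identifies the suspect when the evidence exceeds a criterion c.
from scipy.stats import norm

d_prime = 1.5  # assumed discriminability, for illustration

for c in (0.0, 0.75, 1.5):                      # liberal -> conservative criteria
    hit_rate = 1 - norm.cdf(c, loc=d_prime)     # correct IDs of the guilty
    false_id_rate = 1 - norm.cdf(c, loc=0.0)    # false IDs of the innocent
    miss_rate = 1 - hit_rate                    # missed IDs of the guilty
    print(f"c={c:.2f}  hits={hit_rate:.2f}  false IDs={false_id_rate:.2f}  misses={miss_rate:.2f}")
```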

... This goal requires the application of a unified theory of discriminability and decision making, like signal detection theory (SDT). The application of SDT to the problems of eyewitness memory has had much success in recent years (e.g., Gronlund & Benjamin, 2018). ...
... Because memory representations are inexact, the amount of evidence generated by the suspect will be variable. Consistent with typical signal detection interpretations of eyewitness performance (e.g., Gronlund & Benjamin, 2018), we assume Gaussian distributions of noise around a true value and assume, with no loss of generality, that the distribution of evidence generated by an innocent suspect follows a normal distribution centered at zero, with a standard deviation equal to one. The amount of evidence generated by the guilty suspect is also assumed to be normally distributed; however, the mean of this distribution is assumed to be greater than that of the distribution of evidence generated by the innocent suspect. ...
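
A minimal simulation of the Gaussian assumptions described in this excerpt might look as follows; the lineup size, guilty-suspect parameters, and criterion are illustrative values, and the max-signal decision rule is one common modeling choice rather than anything prescribed by the excerpt:

```python
# Sketch of a simulated six-person lineup under a max/criterion ("independent
# observations") decision rule; parameter values are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n_trials, lineup_size = 100_000, 6
mu_guilty, sigma_guilty, criterion = 1.5, 1.0, 1.2

# Target-present lineups: 1 guilty suspect + 5 fillers (fillers behave like innocents).
signals_tp = rng.normal(0.0, 1.0, (n_trials, lineup_size))
signals_tp[:, 0] = rng.normal(mu_guilty, sigma_guilty, n_trials)
best_tp = signals_tp.argmax(axis=1)
chose_tp = signals_tp.max(axis=1) > criterion
correct_id_rate = np.mean(chose_tp & (best_tp == 0))

# Target-absent lineups: a designated innocent suspect + 5 fillers, all ~ N(0, 1).
signals_ta = rng.normal(0.0, 1.0, (n_trials, lineup_size))
chose_ta = signals_ta.max(axis=1) > criterion
false_id_rate = np.mean(chose_ta & (signals_ta.argmax(axis=1) == 0))

print(f"correct IDs: {correct_id_rate:.3f}   false IDs of innocent suspect: {false_id_rate:.3f}")
```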
... An investigator who uses a technique known to elicit liberal responding should recognize that the probabilities of a correct identification and of a false identification are both higher. Response bias is relatively easy to modulate via instructions to an eyewitness or by titration of reported confidence, and this can be done in accordance with an investigator's goals and subjective costs of errors (see Gronlund & Benjamin, 2018). ...
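
One way to make criterion placement "in accordance with an investigator's goals and subjective costs of errors" concrete is an expected-cost calculation; the sketch below uses assumed base rates and error costs purely for illustration:

```python
# Illustrative criterion placement by expected cost (all numbers are assumptions).
import numpy as np
from scipy.stats import norm

d_prime = 1.5
p_guilty = 0.5            # assumed prior that the suspect is guilty
cost_false_id = 10.0      # cost of identifying an innocent suspect
cost_miss = 1.0           # cost of failing to identify a guilty suspect

criteria = np.linspace(-1, 3, 401)
p_false_id = 1 - norm.cdf(criteria)                 # innocent evidence ~ N(0, 1)
p_miss = norm.cdf(criteria, loc=d_prime)            # guilty evidence ~ N(d', 1)
expected_cost = (1 - p_guilty) * cost_false_id * p_false_id + p_guilty * cost_miss * p_miss

best = criteria[np.argmin(expected_cost)]
print(f"cost-minimizing criterion ≈ {best:.2f}")
```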
Article
Full-text available
Eyewitness identification via lineup procedures is an important and widely used source of evidence in criminal cases. However, the scientific literature provides inconsistent guidance on a very basic feature of lineup procedure: lineup size. In two experiments, we examined whether the number of fillers affects diagnostic accuracy in a lineup, as assessed with receiver-operating characteristic (ROC) analysis. Showups (identification procedures with one face) led to lower discriminability than simultaneous lineups. However, in neither experiment did the number of fillers in a lineup affect discriminability. We also evaluated competing models of decision-making from lineups. This analysis indicated that the standard Independent Observations (IO) model, which assumes a decision rule based on the comparison of memory strength signals generated by each face in a lineup, is incapable of reproducing the lower level of performance evident in showups. We could not adjudicate between the Ensemble model, which assumes a decision rule based on the comparison of the strength of each face with the mean strength across the lineup, and a newly introduced Dependent Observations model, which adopts the same decision rule as the IO model, but with correlated signals across faces. We draw lessons for users of lineup procedures and for basic research on eyewitness decision making. (PsycInfo Database Record (c) 2020 APA, all rights reserved).
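
For readers unfamiliar with the decision rules named in this abstract, the following rough sketch contrasts an Independent Observations style rule with an Ensemble style rule in simulation; it is a simplified approximation with assumed parameters and criteria, not the authors' fitted models:

```python
# Rough sketch contrasting an Independent Observations rule (raw maximum vs. a
# criterion) with an Ensemble-style rule (maximum minus the lineup mean vs. a
# criterion). Parameter values and criteria are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
n, k, mu_g = 200_000, 6, 1.5

def simulate(target_present: bool):
    x = rng.normal(0.0, 1.0, (n, k))
    if target_present:
        x[:, 0] = rng.normal(mu_g, 1.0, n)   # suspect occupies position 0
    return x

def suspect_id_rate(x, decision_var, criterion):
    picked = x.argmax(axis=1)                # ID goes to the best-matching face
    chose = decision_var > criterion
    return np.mean(chose & (picked == 0))

for label, present in (("target-present", True), ("target-absent", False)):
    x = simulate(present)
    io_var = x.max(axis=1)                   # Independent Observations decision variable
    ens_var = x.max(axis=1) - x.mean(axis=1) # Ensemble-style decision variable
    print(label,
          f"IO suspect-ID rate: {suspect_id_rate(x, io_var, 1.2):.3f}",
          f"Ensemble suspect-ID rate: {suspect_id_rate(x, ens_var, 1.0):.3f}")
```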
... Note that the ROC of the EVGM depicted in the bottom left panel is symmetric (i.e., it contains both the points {P(Hit), P(False Alarm)} and {1 − P(Hit), 1 − P(False Alarm)}; see also Kellen et al., 2021; Killeen & Taylor, 2004), whereas the ROC of the UVGM depicted in the bottom right panel is not. ... the simultaneous detection and identification (SDAI; Macmillan & Creelman, 2005) compound task. This task is well known to researchers of eyewitness identification, as it is akin to the simultaneous lineup procedure (Mickes & Gronlund, 2017; Gronlund & Benjamin, 2018), but it was, for instance, also recently used by Meyer-Grant and Klauer (2021) to evaluate different models of recognition memory. In essence, it comprises two distinct, but closely related, sub-tasks that arise when, among a set of m stimuli, a target (usually an old item) is either present or not. ...
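
The symmetry property mentioned in this excerpt can be checked directly for Gaussian models; the snippet below (with an assumed mean, variances, and criterion) verifies that the mirrored point falls on the same ROC only in the equal-variance case:

```python
# Check the ROC symmetry property for an equal-variance Gaussian model (sigma = 1)
# versus an unequal-variance model (sigma > 1). Values are illustrative assumptions.
from scipy.stats import norm
z = norm.ppf

def roc_point(criterion, mu, sigma):
    hit = 1 - norm.cdf(criterion, loc=mu, scale=sigma)   # P(Hit)
    fa = 1 - norm.cdf(criterion)                         # P(False Alarm)
    return hit, fa

mu = 1.5
for sigma, label in ((1.0, "equal variance"), (1.3, "unequal variance")):
    hit, fa = roc_point(0.4, mu, sigma)
    # The Gaussian ROC satisfies z(Hit) = z(FA)/sigma + mu/sigma. The mirrored
    # point (FA', Hit') = (1 - Hit, 1 - FA) lies on that same curve only if the
    # equation also holds with the mirrored rates plugged in.
    mirrored_on_curve = abs(z(1 - fa) - (z(1 - hit) / sigma + mu / sigma)) < 1e-6
    print(f"{label}: mirrored point on the same ROC? {mirrored_on_curve}")
```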
Article
Full-text available
Recently, it has been suggested that the mnemonic information that underlies recognition decisions changes when participants are asked to indicate whether a test stimulus is new rather than old (Brainerd et al., 2021, Journal of Experimental Psychology: Learning, Memory, and Cognition, advance online publication). However, some observations that have been interpreted as evidence for this assertion need not be due to mnemonic changes, but may instead be the result of conservative response strategies if the possibility of asymmetric receiver operating characteristics (ROCs) is taken into account. Conversely, recent findings in support of asymmetric ROCs rely on the assumption that the mnemonic information accessed by the decision-maker does not depend on whether an old or a new item is considered to be the target (Kellen et al., 2021, Psychological Review, 128(6), 1022–1050). Here, we aim to clarify whether there is such a difference in accessibility of mnemonic information by applying signal detection theory. To this end, we used two versions of a simultaneous detection and identification task in which we presented participants with two test stimuli at a time. In one version, the old item was the target; in the other, the new item was the target. This allowed us to assess differences in mnemonic information retrieved in the two tasks while taking possible ROC asymmetry into account. Results clearly indicate that there is indeed a difference in the accessibility of mnemonic information, as postulated by Brainerd et al. (2021, Journal of Experimental Psychology: Learning, Memory, and Cognition, advance online publication).
... Given these numbers, it is important to understand the factors that influence mistaken identifications in lineups, and evaluate the benefits and shortcomings that are associated with different procedures. Research into the causes of mistaken identifications has a long history in social and cognitive psychology (for a review, see Gronlund & Benjamin, 2018). However, it is only within the past decade or so that researchers began to make use of some of the formal modeling approaches available in their toolboxes (e.g., Goodsell et al., 2010; Wetmore et al., 2017). ...
Article
Full-text available
Sequential lineups are one of the most commonly used procedures in police departments across the USA. Although this procedure has been the target of much experimental research, there has been comparatively little work formally modeling it, especially the sequential nature of the judgments that it elicits. There are also important gaps in our understanding of how informative different types of judgments can be (binary responses vs. confidence ratings), and the severity of the inferential risks incurred when relying on different aggregate data structures. Couched in a signal detection theory (SDT) framework, the present work directly addresses these issues through a reanalysis of previously published data alongside model simulations. Model comparison results show that SDT modeling can provide elegant characterizations of extant data, despite some discrepancies across studies, which we attempt to address. Additional analyses compare the merits of sequential lineups (with and without a stopping rule) relative to showups and delineate the conditions in which distinct modeling approaches can be informative. Finally, we identify critical issues with the removal of the stopping rule from sequential lineups as an approach to capture within-subject differences and sidestep the risk of aggregation biases.
... important applied question of how to determine the reliability of children's lineup identification decisions and how techniques from the applied literature can be used to further our understanding about memory monitoring. With greater communication and better integrated research approaches across fields, inconsistent findings could have been resolved more quickly, and basic science findings that have been limited to laboratory settings could have already been extended to have impact in applied settings (for similar ideas see also Gronlund & Benjamin, 2018; Lane & Meissner, 2008). ...
Article
Children are frequently witnesses of crime. In the witness literature and legal systems, children are often deemed to have unreliable memories. Yet, in the basic developmental literature, young children can monitor their memory. To address these contradictory conclusions, we reanalyzed the confidence-accuracy relationship in basic and applied research. Confidence provided considerable information about memory accuracy, from at least age 8, but possibly younger. We also conducted an experiment where children in young (4-6 years), middle (7-9 years), and late (10-17 years) childhood (N = 2,205) watched a person in a video and then identified that person from a police lineup. Children provided a confidence rating (an explicit judgment) and used an interactive lineup, in which the lineup faces can be rotated, and we analyzed children's viewing behavior (an implicit measure of metacognition). A strong confidence-accuracy relationship was observed from age 10 and an emerging relationship from age 7. A constant likelihood ratio signal-detection model can be used to understand these findings. Moreover, in all ages, interactive viewing behavior differed in children who made correct versus incorrect suspect identifications. Our research reconciles the apparent divide between applied and basic research findings and suggests that the fundamental architecture of metacognition that has previously been evidenced in basic list-learning paradigms also underlies performance on complex applied tasks. Contrary to what is believed by legal practitioners, but similar to what has been found in the basic literature, identifications made by children can be reliable when appropriate metacognitive measures are used to estimate accuracy. (PsycInfo Database Record (c) 2021 APA, all rights reserved).
... This problem has resulted in a great deal of research over the last few decades (e.g., Wells 1978; see reviews by Carlson 2013 and Wells et al. 2006), and the study of eyewitness ID extends back much further (Arnold 1906; Münsterberg 1908). However, from the beginning there was a general lack of theoretical guidance (Bornstein and Penrod 2008; Gronlund and Benjamin 2018). This has resulted in calls for more eyewitness ID research undergirded by cognitive theory generally (e.g., Dianiska et al. 2020; Lane and Meissner 2008), and signal detection theory specifically (SDT; Green and Swets 1966; Wixted and Mickes 2012). ...
Article
Full-text available
The diagnostic feature-detection theory (DFT) of eyewitness identification is based on facial information that is diagnostic versus non-diagnostic of suspect guilt. It primarily has been tested by discounting non-diagnostic information at retrieval, typically by surrounding a single suspect showup with good fillers to create a lineup. We tested additional DFT predictions by manipulating the presence of facial information (i.e., the exterior region of the face) at both encoding and retrieval with a large between-subjects factorial design (N = 19,414). In support of DFT and in replication of the literature, lineups yielded higher discriminability than showups. In support of encoding specificity, conditions that matched information between encoding and retrieval were generally superior to mismatch conditions. More importantly, we supported several DFT and encoding specificity predictions not previously tested, including that (a) adding non-diagnostic information will reduce discriminability for showups more so than lineups, and (b) removing diagnostic information will lower discriminability for both showups and lineups. These results have implications for police deciding whether to conduct a showup or a lineup, and when dealing with partially disguised perpetrators (e.g., wearing a hoodie).
... The police present the clerk with a live lineup containing the arrested male from the article, and the clerk identifies the suspect with high confidence. Critically, as this hypothetical illustrates, what constitutes a first identification in a real-world case is complicated, especially when eyewitnesses see a suspect's photo on Facebook or in a newspaper prior to a formal identification procedure (Gronlund & Benjamin, 2018). For these reasons, we urge eyewitness memory researchers to recognise that eyewitness contamination may occur in real-world cases before the eyewitness's initial identification and confidence statement takes place. ...
Article
Eyewitness memory researchers have recently devoted considerable attention to eyewitness confidence. While there is strong consensus that courtroom confidence is problematic, we now recognise that an eyewitness’s initial confidence in their first identification – in certain contexts – can be of value. A few psychological scientists, however, have confidently, but erroneously claimed that in real-world cases, eyewitness initial confidence is the most important indicator of eyewitness accuracy, trumping all other factors that might exist in a case. This claim accompanies an exaggeration of the role of eyewitnesses’ “initial confidence” in the DNA exoneration cases. Still worse, overstated claims about the confidence-accuracy relationship, and eyewitness memory, have reached our top scientific journals, news articles, and criminal cases. To set the record straight, we review what we actually know and do not know about the “initial confidence” of eyewitnesses in the DNA exoneration cases. Further reasons for skepticism about the value of the confidence-accuracy relationship in real-world cases come from new analyses of a separate database, the National Registry of Exonerations. Finally, we review new research that reveals numerous conditions wherein eyewitnesses with high initial confidence end up being wrong.
... Recently, the eyewitness ID literature has advanced the discussion regarding the relationship between confidence and accuracy (see chapter by Gronlund & Benjamin, 2018). By using more appropriate statistical approaches (e.g., calibration analysis; Juslin, Olsson, & Winman, 1996) and dividing participants who identify someone in a lineup (i.e., choosers) from those who do not (i.e., non-choosers), there is evidence that suggests confidence can be strongly associated with accuracy for choosers (see Wixted & Wells, 2017, for a review). ...
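
A calibration analysis of the kind referenced here bins choosers by stated confidence and compares mean confidence with proportion correct; the sketch below uses simulated placeholder data rather than real witness responses:

```python
# Sketch of a calibration analysis for "choosers" (witnesses who identified someone):
# compare mean stated confidence with the observed proportion correct in each
# confidence bin. The data below are simulated placeholders, not real witnesses.
import numpy as np

rng = np.random.default_rng(2)
n = 5_000
confidence = rng.integers(0, 101, n)                      # stated confidence, 0-100
# Placeholder accuracy model: higher confidence -> higher chance the ID was correct.
correct = rng.random(n) < (0.3 + 0.6 * confidence / 100)

bins = [(0, 20), (20, 40), (40, 60), (60, 80), (80, 101)]
for lo, hi in bins:
    mask = (confidence >= lo) & (confidence < hi)
    print(f"confidence {lo:3d}-{hi - 1:3d}: "
          f"mean confidence {confidence[mask].mean():5.1f}, "
          f"proportion correct {correct[mask].mean():.2f}")
```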
Article
Full-text available
Many crimes occur in which a perpetrator has a distinctive facial feature, such as a tattoo or black eye, but few eyewitness identification studies have involved such a feature. We conducted an experiment to determine how eyewitness identification performance is impacted by a distinctive facial feature, and how police could deal with this issue. Participants (N = 4218) studied a target face with or without a black eye, and later viewed a simultaneous photo lineup either containing the target or not. For those who saw a target with a black eye, this feature was either replicated among all lineup members or was removed. The black eye harmed empirical discriminability regardless of replication or removal, which did not differ. However, participants responded more conservatively when the black eye was removed, compared to replication. Lastly, immediate confidence was consistently indicative of accuracy. This article is protected by copyright. All rights reserved.
... The model has five free parameters (μ_g, σ_g, c_1, c_2, c_3; without loss of generality, μ_f = 0 and σ_f = 1). This model has been shown to work well when we know the guilt or innocence of the suspect, as is always true for experimental designs (Mickes, Flowe, & Wixted, 2012; for a summary, see Gronlund & Benjamin, 2018). ...
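
To make the parameterization in this excerpt concrete, the sketch below computes, for a single face and assumed parameter values, the probability of responses falling into the regions defined by the three confidence criteria; it is a simplified single-item illustration, not the full lineup model:

```python
# Simplified illustration of a five-parameter signal detection model of the kind
# described above, treating a single face for clarity: the evidence it generates
# falls into one of four confidence-ordered regions defined by c1 < c2 < c3.
# All parameter values below are assumptions.
from scipy.stats import norm

mu_g, sigma_g = 1.5, 1.2          # guilty-suspect distribution (free parameters)
mu_f, sigma_f = 0.0, 1.0          # innocent/filler distribution (fixed)
c1, c2, c3 = 0.5, 1.2, 2.0        # confidence criteria (free parameters)

def region_probs(mu, sigma):
    cdf = lambda x: norm.cdf(x, loc=mu, scale=sigma)
    return {
        "no ID (< c1)":          cdf(c1),
        "low-confidence ID":     cdf(c2) - cdf(c1),
        "medium-confidence ID":  cdf(c3) - cdf(c2),
        "high-confidence ID":    1 - cdf(c3),
    }

print("guilty:  ", {k: round(v, 3) for k, v in region_probs(mu_g, sigma_g).items()})
print("innocent:", {k: round(v, 3) for k, v in region_probs(mu_f, sigma_f).items()})
```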
Article
Full-text available
Background: The majority of eyewitness lineup studies are laboratory-based. How well the conclusions of these studies, including the relationship between confidence and accuracy, generalize to real-world police lineups is an open question. Signal detection theory (SDT) has emerged as a powerful framework for analyzing lineups that allows comparison of witnesses' memory accuracy under different types of identification procedures. Because the guilt or innocence of a real-world suspect is generally not known, however, it is further unknown precisely how the identification of a suspect should change our belief in their guilt. The probability of guilt after the suspect has been identified, the posterior probability of guilt (PPG), can only be meaningfully estimated if we know the proportion of lineups that include a guilty suspect, P(guilty). Recent work used SDT to estimate P(guilty) on a single empirical data set that shared an important property with real-world data; that is, no information about the guilt or innocence of the suspects was provided. Here we test the ability of the SDT model to recover P(guilty) on a wide range of pre-existing empirical data from more than 10,000 identification decisions. We then use simulations of the SDT model to determine the conditions under which the model succeeds and, where applicable, why it fails. Results: For both empirical and simulated studies, the model was able to accurately estimate P(guilty) when the lineups were fair (the guilty and innocent suspects did not stand out) and identifications of both suspects and fillers occurred with a range of confidence levels. Simulations showed that the model can accurately recover P(guilty) given data that matches the model assumptions. The model failed to accurately estimate P(guilty) under conditions that violated its assumptions; for example, when the effective size of the lineup was reduced, either because the fillers were selected to be poor matches to the suspect or because the innocent suspect was more familiar than the guilty suspect. The model also underestimated P(guilty) when a weapon was shown. Conclusions: Depending on lineup quality, estimation of P(guilty) and, relatedly, PPG, from the SDT model can range from poor to excellent. These results highlight the need to carefully consider how the similarity relations between fillers and suspects influence identifications.
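
The posterior probability of guilt described in this abstract follows from Bayes' rule once P(guilty) and the two identification rates are in hand; the snippet below uses invented numbers purely to show the arithmetic:

```python
# Hedged sketch of the posterior probability of guilt (PPG) after a suspect ID,
# combining an assumed base rate P(guilty) with assumed identification rates.
def ppg(p_guilty, p_id_given_guilty, p_id_given_innocent):
    numer = p_id_given_guilty * p_guilty
    denom = numer + p_id_given_innocent * (1 - p_guilty)
    return numer / denom

# Illustrative numbers only: 50% of lineups contain a guilty suspect, who is
# identified 60% of the time; an innocent suspect is identified 5% of the time.
print(f"PPG = {ppg(0.5, 0.60, 0.05):.2f}")   # ≈ 0.92
```

With a lower assumed base rate, say P(guilty) = 0.2, the same identification rates give a PPG of 0.75, which is why estimating P(guilty) matters so much in this line of work.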
Article
We examine different models of recognition memory in a simultaneous detection and identification task, which features multiple simultaneously presented test stimuli. A common finding from eyewitness identification research investigating such tasks is that the more confident decision makers are about detecting the presence of a target, the higher the probability that they also correctly identify it. We demonstrate that for members of the signal detection theory (SDT) model framework, predicting such a relationship is — contrary to previous assertions — not entailed by a monotonic diagnosticity ratio. Instead, it can be shown that this prediction follows if latent memory signals’ rank order probabilities exhibit monotonicity under changes in the response criterion. For a selection of common SDT models, we prove that this monotonicity property holds in situations in which two test stimuli are presented simultaneously. Threshold models such as the two-high-threshold model (2HTM), however, do not necessarily possess this feature. Leveraging this fact, we show that in the presence of lures which resemble a target, the 2HTM is unable to make the same predictions as many reasonable SDT models with monotonic rank order probabilities. This enables us to construct a critical, distribution-free test between these models. An empirical investigation implementing this test reveals a clear failure of the 2HTM to account for the qualitative response patterns, which are consistent with the predictions of SDT models with monotonic rank order probabilities.
Article
Full-text available
Can we tell whether our beliefs and judgments are correct or wrong? Results across many domains indicate that people are skilled at discriminating between correct and wrong answers, endorsing the former with greater confidence than the latter. However, it has not been realized that because of people’s adaptation to reality, representative samples of items tend to favor the correct answer, yielding object-level accuracy (OLA) that is considerably better than chance. Across 16 experiments that used 2-alternative forced-choice items from several domains, the confidence/accuracy (C/A) relationship was positive for items with OLA >50%, but consistently negative across items with OLA <50%. A systematic sampling of items that covered the full range of OLA (0–100%) yielded a U-function relating confidence to OLA. The results imply that the positive C/A relationship that has been reported in many studies is an artifact of OLA being better than chance rather than representing a general ability to discriminate between correct and wrong responses. However, the results also support the ecological approach, suggesting that confidence is based on a frugal, “bounded” heuristic that has been specifically tailored to the ecological structure of the natural environment. This heuristic is used despite the fact that for items with OLA <50%, it yields confidence judgments that are counterdiagnostic of accuracy. Our ability to tell between correct and wrong judgments is confined to the probability structure of the world we live in. The results were discussed in terms of the contrast between systematic design and representative design.
Article
Full-text available
Filler siphoning theory posits that the presence of fillers (known innocents) in a lineup protects an innocent suspect from being chosen by siphoning choices away from that innocent suspect. This mechanism has been proposed as an explanation for why simultaneous lineups (viewing all lineup members at once) induces better performance than showups (one-person identification procedures). We implemented filler siphoning in a computational model (WITNESS, Clark, Applied Cognitive Psychology 17:629–654, 2003), and explored the impact of the number of fillers (lineup size) and filler quality on simultaneous and sequential lineups (viewing lineups members in sequence), and compared both to showups. In limited situations, we found that filler siphoning can produce a simultaneous lineup performance advantage, but one that is insufficient in magnitude to explain empirical data. However, the magnitude of the empirical simultaneous lineup advantage can be approximated once criterial variability is added to the model. But this modification works by negatively impacting showups rather than promoting more filler siphoning. In sequential lineups, fillers were found to harm performance. Filler siphoning fails to clarify the relationship between simultaneous lineups and sequential lineups or showups. By incorporating constructs like filler siphoning and criterial variability into a computational model, and trying to approximate empirical data, we can sort through explanations of eyewitness decision-making, a prerequisite for policy recommendations. Electronic supplementary material The online version of this article (doi:10.1186/s41235-017-0084-1) contains supplementary material, which is available to authorized users.
Article
Full-text available
Estimator variables are factors that can affect the accuracy of eyewitness identifications but that are outside of the control of the criminal justice system. Examples include (1) the duration of exposure to the perpetrator, (2) the passage of time between the crime and the identification (retention interval), (3) the distance between the witness and the perpetrator at the time of the crime. Suboptimal estimator variables (e.g., long distance) have long been thought to reduce the reliability of eyewitness identifications (IDs), but recent evidence suggests that this is not true of IDs made with high confidence and may or may not be true of IDs made with lower confidence. The evidence suggests that while suboptimal estimator variables decrease discriminability (i.e., the ability to distinguish innocent from guilty suspects), they do not decrease the reliability of IDs made with high confidence. Such findings are inconsistent with the longstanding “optimality hypothesis” and therefore require a new theoretical framework. Here, we propose that a signal-detection-based likelihood ratio account – which has long been a mainstay of basic theories of recognition memory – naturally accounts for these findings.
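
A bare-bones version of the likelihood-ratio idea proposed here can be written as the ratio of the two evidence densities at an observed memory-strength value; the parameter values below are assumptions for illustration:

```python
# Sketch of the signal-detection likelihood-ratio idea described above: for an
# observed memory-strength value x, the likelihood ratio compares how likely x is
# under the guilty distribution versus the innocent distribution.
from scipy.stats import norm

def likelihood_ratio(x, d_prime, sigma_guilty=1.0):
    return norm.pdf(x, loc=d_prime, scale=sigma_guilty) / norm.pdf(x, loc=0.0)

# Even with lower discriminability (e.g., poor viewing conditions), a very strong
# match still yields a large likelihood ratio, which is the intuition behind
# high-confidence IDs remaining informative. Values are illustrative only.
for d_prime in (2.0, 1.0):
    for x in (1.0, 2.5):
        print(f"d'={d_prime}  x={x}  LR={likelihood_ratio(x, d_prime):.2f}")
```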
Article
Full-text available
Lampinen (2016) suggested that proponents of ROC analysis may prefer that approach to the diagnosticity ratio because they are under the impression that it provides a theoretical measure of underlying discriminability (d′). In truth, we and others prefer ROC analysis for applied purposes because it provides an atheoretical measure of empirical discriminability (namely, partial area-under-the-curve, or pAUC). The issue of underlying theoretical discriminability only arises when theoreticians seek to explain why one eyewitness identification procedure yields a higher pAUC than another. Lampinen (2016) also argued that favoring the procedure that yields a higher pAUC can lead to an irrational decision outcome. However, his argument depends on needlessly restricting which points from two ROCs can be compared. As a general rule, the maximum-utility point will fall somewhere on the higher ROC, underscoring the need for ROC analysis. Thus, Lampinen's (2016) arguments against the usefulness of ROC analysis are unfounded.
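
pAUC, the empirical measure favored in this reply, is simply the area under the ROC over the range of false-ID rates the data cover; the sketch below computes it with the trapezoid rule on made-up ROC points:

```python
# Minimal sketch of partial area under the ROC curve (pAUC), computed with the
# trapezoid rule over the range of false-ID rates covered by the data.
# The ROC points below are made up for illustration.
import numpy as np

# Cumulative (false-ID rate, correct-ID rate) points from most to least confident.
false_id_rate = np.array([0.00, 0.01, 0.03, 0.06, 0.10])
correct_id_rate = np.array([0.00, 0.22, 0.38, 0.50, 0.58])

# Trapezoid rule: sum of segment widths times average segment heights.
pauc = np.sum(np.diff(false_id_rate) * (correct_id_rate[1:] + correct_id_rate[:-1]) / 2)
print(f"pAUC over false-ID rates 0–{false_id_rate[-1]:.2f}: {pauc:.4f}")
```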
Article
Full-text available
The U.S. legal system increasingly accepts the idea that the confidence expressed by an eyewitness who identified a suspect from a lineup provides little information as to the accuracy of that identification. There was a time when this pessimistic assessment was entirely reasonable because of the questionable eyewitness-identification procedures that police commonly employed. However, after more than 30 years of eyewitness-identification research, our understanding of how to properly conduct a lineup has evolved considerably, and the time seems ripe to ask how eyewitness confidence informs accuracy under more pristine testing conditions (e.g., initial, uncontaminated memory tests using fair lineups, with no lineup administrator influence, and with an immediate confidence statement). Under those conditions, mock-crime studies and police department field studies have consistently shown that, for adults, (a) confidence and accuracy are strongly related and (b) high-confidence suspect identifications are remarkably accurate. However, when certain non-pristine testing conditions prevail (e.g., when unfair lineups are used), the accuracy of even a high-confidence suspect ID is seriously compromised. Unfortunately, some jurisdictions have not yet made reforms that would create pristine testing conditions and, hence, our conclusions about the reliability of high-confidence identifications cannot yet be applied to those jurisdictions. However, understanding the information value of eyewitness confidence under pristine testing conditions can help the criminal justice system to simultaneously achieve both of its main objectives: to exonerate the innocent (by better appreciating that initial, low-confidence suspect identifications are error prone) and to convict the guilty (by better appreciating that initial, high-confidence suspect identifications are surprisingly accurate under proper testing conditions).
Article
Full-text available
Over multiple response opportunities, recall may be inconsistent. For example, an eyewitness may report information at trial that was not reported during initial questioning—a phenomenon called reminiscence. Such inconsistencies are often assumed by lawyers to be inaccurate and are sometimes interpreted as evidence of the general unreliability of the rememberer. In two experiments, we examined the output-bound accuracy of inconsistent memories and found that reminisced memories were indeed less accurate than memories that were reported consistently over multiple opportunities. However, reminisced memories were just as accurate as memories that were reported initially but not later, indicating that it is the inconsistency of recall, and not the later addition to the recall output, that predicts lower accuracy. Finally, rememberers who exhibited more inconsistent recall were less accurate overall, which, if confirmed by more ecologically valid studies, may indicate that the common legal assumption may be correct: Witnesses who provide inconsistent testimony provide generally less trustworthy information overall.
Article
Full-text available
An eyewitness to a crime may make a series of identification decisions about the same suspect as evidence is gathered and presented at trial. These repeated decisions may involve show-ups, mugshots, photo arrays, lineups, and in-court identifications. Repeated identification procedures increase suspect identifications but do not increase the likelihood that the identified person is guilty. Eyewitness memory can be irreparably compromised, with significant risk incurred for an innocent suspect. The first identification procedure influences a witness’ subsequent decisions and confidence, in violation of the legal expectation that an identification reflects witness memory for the crime only. The research supports two recommendations. (1) Repeated identification procedures using the same suspect should be avoided. (2) Identifications made from repeated procedures—beyond the first identification procedure—should not be considered reliable eyewitness evidence. The first eyewitness identification attempt is the one that counts and must have been conducted with a fair and unbiased procedure.
Article
Full-text available
ROC analysis has recently come into vogue for assessing the underlying discriminability and the applied utility of lineup procedures. Two primary assumptions underlie recommendations that ROC analysis be used to assess the applied utility of lineup procedures: (1) ROC analysis of lineups measures underlying discriminability, and (2) the procedure that produces superior underlying discriminability produces superior applied utility. These same assumptions underlie a recently derived diagnostic-feature detection theory, a theory of discriminability, intended to explain recent patterns observed in ROC comparisons of lineups. We demonstrate, however, that these assumptions are incorrect when ROC analysis is applied to lineups. We also demonstrate that a structural phenomenon of lineups, differential filler siphoning, and not the psychological phenomenon of diagnostic-feature detection, explains why lineups are superior to showups and why fair lineups are superior to biased lineups. In the process of our proofs, we show that computational simulations have assumed, unrealistically, that all witnesses share exactly the same decision criteria. When criterial variance is included in computational models, differential filler siphoning emerges. The result proves a dissociation between ROC curves and underlying discriminability: higher ROC curves for lineups than for showups, and for fair than for biased lineups, despite no increase in underlying discriminability.
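
A toy version of the criterial-variability argument in this abstract can be simulated by giving each witness their own criterion; the values below are assumptions, and the point is only to show how fillers absorb false positives that would otherwise land on an innocent suspect:

```python
# Rough simulation of filler siphoning with criterial variability: each simulated
# witness has their own criterion, and in a target-absent lineup most false
# positives land on fillers rather than the innocent suspect. Values are assumed.
import numpy as np

rng = np.random.default_rng(3)
n, k = 200_000, 6
criteria = rng.normal(1.2, 0.5, n)        # per-witness criteria (criterial variance)

# Target-absent showup: only the innocent suspect is shown.
showup_signal = rng.normal(0.0, 1.0, n)
showup_false_id = np.mean(showup_signal > criteria)

# Target-absent fair lineup: innocent suspect (column 0) plus five fillers.
lineup = rng.normal(0.0, 1.0, (n, k))
chose = lineup.max(axis=1) > criteria
lineup_false_id = np.mean(chose & (lineup.argmax(axis=1) == 0))

print(f"false ID of innocent suspect  showup: {showup_false_id:.3f}   lineup: {lineup_false_id:.3f}")
```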
Article
Full-text available
In the USA and the UK, many thousands of police suspects are identified by eyewitnesses every year. Unfortunately, many of those suspects are innocent, which becomes evident when they are exonerated by DNA testing, often after having been imprisoned for years. It is, therefore, imperative to use identification procedures that best enable eyewitnesses to discriminate innocent from guilty suspects. Although police investigators in both countries often administer line-up procedures, the details of how line-ups are presented are quite different and an important direct comparison has yet to be conducted. We investigated whether these two line-up procedures differ in terms of (i) discriminability (using receiver operating characteristic analysis) and (ii) reliability (using confidence–accuracy characteristic analysis). A total of 2249 participants watched a video of a crime and were later tested using either a six-person simultaneous photo line-up procedure (USA) or a nine-person sequential video line-up procedure (UK). The US line-up procedure yielded significantly higher discriminability and significantly higher reliability. The results do not pinpoint the reason for the observed difference between the two procedures, but they do suggest that there is much room for improvement with the UK line-up.
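
Confidence-accuracy characteristic (CAC) analysis, as used in this study, computes suspect-ID accuracy separately at each confidence level; the counts below are invented solely to show the computation:

```python
# Sketch of a confidence-accuracy characteristic (CAC) analysis: for each
# confidence level, suspect-ID accuracy = correct suspect IDs / (correct + false
# suspect IDs). The counts below are invented for illustration.
counts = {
    # confidence level: (correct suspect IDs, false suspect IDs)
    "low (0-60)":     (120, 60),
    "medium (70-80)": (150, 30),
    "high (90-100)":  (200, 10),
}
for level, (correct, false) in counts.items():
    print(f"{level:15s} accuracy = {correct / (correct + false):.2f}")
```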
Article
Full-text available
How should the accuracy of eyewitness identification decisions be measured, so that best practices for identification can be determined? This fundamental question is under intense debate. One side advocates for continued use of a traditional measure of identification accuracy, known as the diagnosticity ratio, whereas the other side argues that receiver operating characteristic curves (ROCs) should be used instead because diagnosticity is confounded with response bias. Diagnosticity proponents have offered several criticisms of ROCs, which we show are either false or irrelevant to the assessment of eyewitness accuracy. We also show that, like diagnosticity, Bayesian measures of identification accuracy confound response bias with witnesses' ability to discriminate guilty from innocent suspects. ROCs are an essential tool for distinguishing memory-based processes from decisional aspects of a response; simulations of different possible identification tasks and response strategies show that they offer important constraints on theory development.
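
The response-bias confound noted in this abstract is easy to demonstrate: at a fixed level of discriminability, the diagnosticity ratio still moves with the criterion. The snippet below uses an assumed d′ value:

```python
# Demonstration that the diagnosticity ratio (correct-ID rate / false-ID rate)
# changes with the placement of the response criterion even when discriminability
# is held constant, so it mixes memory with response bias. d' is assumed.
from scipy.stats import norm

d_prime = 1.5
for c in (0.5, 1.0, 1.5, 2.0):
    hit = 1 - norm.cdf(c, loc=d_prime)
    fa = 1 - norm.cdf(c)
    print(f"criterion {c:.1f}: diagnosticity ratio = {hit / fa:.1f}")
```

With these assumed values the ratio rises roughly fivefold across the four criteria even though d′ never changes.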
Article
Full-text available
Eyewitness identification studies have focused on the idea that unfair lineups, in which the suspect stands out, make witnesses more willing to identify that suspect. We asked whether unfair lineups—featuring suspects with distinctive features—also influence subjects’ ability to distinguish between innocent and guilty suspects, and their ability to judge the accuracy of their identification. In a single experiment (N = 8925), we compared three fair lineup techniques used by the police to unfair lineups in which we did nothing to prevent distinctive suspects from standing out. Compared to the fair lineups, doing nothing not only increased subjects’ willingness to identify the suspect, it also markedly impaired subjects’ ability to distinguish between innocent and guilty suspects. Accuracy was also reduced at every level of confidence. These results advance theory on witness identification performance and have important practical implications for how police should construct lineups when suspects have distinctive features.
Chapter
Full-text available
The U.S. Supreme Court, state courts, and social science researchers have stated that showup identifications (one-person identifications) are less reliable than lineup identifications. Moreover, 74 % of eyewitness experts endorsed false identifications as more likely to occur from showups than lineups. Examination of the extant literature and receiver operating characteristic (ROC) analyses of over 7500 participants confirm that showups are an inferior procedure to lineups. This conclusion holds true even in situations where showups should have a memorial advantage (e.g., at a short retention interval, a clothing match between encoding and test). A signal-detection-based diagnostic-feature model provides a theoretical explanation for why showups produce inferior eyewitness performance. The data also reveal that confidence is better related to accuracy for lineups than for showups. Unless new procedural enhancements can be developed that enhance reliability, police should refrain from conducting showups in favor of lineups.
Article
Full-text available
A set of reforms proposed in 1999 directed the police how to conduct an eyewitness lineup. The promise of these system variable reforms was that they would enhance eyewitness accuracy. However, the promising initial evidence in support of this claim failed to materialize; at best, these reforms make an eyewitness more conservative. The chapter begins by reviewing the initial evidence supporting the move to description-matched filler selection, unbiased instructions, sequential presentation, and the discounting of confidence judgments. We next describe four reasons why the field reached incorrect conclusions regarding these reforms. These include a failure to appreciate the distinction between discriminability and response bias, a reliance on summary measures of performance that conflate discriminability and response bias or mask the relationship between confidence and accuracy, and the distorting role of relative judgment theory. The reforms are then reevaluated in light of these factors and recent empirical data. We conclude by calling for a theory-driven approach to developing and evaluating the next generation of system variable reforms.
Article
Full-text available
This article addresses the problem of eyewitness identification errors that can lead to false convictions of the innocent and false acquittals of the guilty. At the heart of our analysis based on signal detection theory is the separation of diagnostic accuracy—the ability to discriminate between those who are guilty versus those who are innocent—from the consideration of the relative costs associated with different kinds of errors. Application of this theory suggests that current recommendations for reforms have conflated diagnostic accuracy with the evaluation of costs in such a way as to reduce the accuracy of identification evidence and the accuracy of adjudicative outcomes. Our framework points to a revision in recommended procedures and a framework for policy analysis.
Article
Full-text available
Some researchers have been arguing that eyewitness identification data from lineups should be analyzed using Receiver Operating Characteristic (ROC) analysis because it purportedly measures underlying discriminability. But ROC analysis, which was designed for 2 × 2 tasks, does not fit the 3 × 2 structure of lineups. Accordingly, ROC proponents force lineup data into a 2 × 2 structure by treating false-positive identifications of lineup fillers as though they were rejections. Using data from lineups versus showups, we illustrate how this approach misfires as a measure of underlying discriminability. Moreover, treating false-positive identifications of fillers as if they were rejections hides one of the most important phenomena in eyewitness lineups, namely filler siphoning. Filler siphoning reduces the risk of mistaken identification by drawing false-positive identifications away from the innocent suspect and onto lineup fillers. We show that ROC analysis confuses filler siphoning with an improvement in underlying discriminability, thereby fostering misleading theoretical conclusions about how lineups work.
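
The 3 × 2 outcome structure discussed here, and the collapsing of filler identifications into rejections that the authors criticize, can be illustrated with a toy tally (all counts invented):

```python
# Toy tally of the three response categories per lineup type (suspect ID,
# filler ID, rejection), showing how folding filler IDs into rejections changes
# what gets counted. All numbers are invented for illustration.
outcomes = {
    "target-present": {"suspect ID": 60, "filler ID": 15, "rejection": 25},
    "target-absent":  {"suspect ID": 5,  "filler ID": 30, "rejection": 65},
}
for lineup_type, o in outcomes.items():
    total = sum(o.values())
    collapsed_rejections = o["rejection"] + o["filler ID"]   # fillers treated as rejections
    print(f"{lineup_type}: suspect-ID rate {o['suspect ID'] / total:.2f}, "
          f"filler-ID rate {o['filler ID'] / total:.2f}, "
          f"rejection rate if fillers are collapsed {collapsed_rejections / total:.2f}")
```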
Article
Full-text available
When making a memorial judgment, respondents can regulate their accuracy by adjusting the precision, or grain size, of their responses. In many circumstances, coarse-grained responses are less informative, but more likely to be accurate, than fine-grained responses. This paper describes a novel eyewitness identification procedure, the grain size lineup, in which participants eliminated any number of individuals from the lineup, creating a choice set of variable size. A decision was considered to be fine-grained if no more than one individual was left in the choice set or coarse-grained if more than one individual was left in the choice set. Participants (N = 384) watched two high-quality or low-quality videotaped mock crimes and then completed four standard simultaneous lineups or four grain size lineups (two target-present and two target-absent). There was some evidence of strategic regulation of grain size, as the most difficult lineup was associated with a greater proportion of coarse-grained responses than the other lineups. However, the grain-size lineup did not outperform the standard simultaneous lineup. Fine-grained suspect identifications were no more diagnostic than suspect identifications from standard lineups, while coarse-grained suspect identifications carried little probative value. Participants were generally reluctant to provide coarse-grained responses, which may have hampered the utility of the procedure. For a grain size approach to be useful, participants may need to be trained or instructed to use the coarse-grained option effectively.
Article
Full-text available
Eyewitness memory is widely believed to be unreliable because (a) high-confidence eyewitness misidentifications played a role in over 70% of the now more than 300 DNA exonerations of wrongfully convicted men and women, (b) forensically relevant laboratory studies have often reported a weak relationship between eyewitness confidence and accuracy, and (c) memory is sufficiently malleable that, not infrequently, people (including eyewitnesses) can be led to remember events differently from the way the events actually happened. In light of such evidence, many researchers agree that confidence statements made by eyewitnesses in a court of law (in particular, the high confidence they often express at trial) should be discounted, if not disregarded altogether. But what about confidence statements made by eyewitnesses at the time of the initial identification (e.g., from a lineup), before there is much opportunity for memory contamination to occur? A considerable body of recent empirical work suggests that confidence may be a highly reliable indicator of accuracy at that time, which fits with longstanding theoretical models of recognition memory. Counterintuitively, an appreciation of this fact could do more to protect innocent defendants from being wrongfully convicted than any other eyewitness identification reform that has been proposed to date.
Article
Full-text available
Proposes a distinction between 2 types of applied eyewitness-testimony research: System-variable (SV) research investigates variables that are manipulable in actual criminal cases (e.g., the structure of a lineup) and, thus, has the potential for reducing the inaccuracies of eyewitnesses; estimator-variable (EV) research, however, investigates variables that cannot be controlled in actual criminal cases (e.g., characteristics of the witness) and, thus, can only be used in the courtroom to augment or discount the credibility of eyewitnesses. SVs and EVs are contrasted with respect to their relative potential for positive contribution to criminal justice, and it is concluded that SV research may prove more fruitful than EV research. It is also argued that several methodological biases may be exacerbating the rate of misidentifications in staged-crime paradigms. (33 ref) (PsycINFO Database Record (c) 2006 APA, all rights reserved).
Article
Full-text available
People viewed a security video and tried to identify the gunman from a photospread. The actual gunman was not in the photospread and all eyewitnesses made false identifications (n = 352). Following the identification, witnesses were given confirming feedback ("Good, you identified the actual suspect"), disconfirming feedback ("Actually, the suspect is number _"), or no feedback. The manipulations produced strong effects on the witnesses' retrospective reports of (a) their certainty, (b) the quality of view they had, (c) the clarity of their memory, (d) the speed with which they identified the person, and (e) several other measures. Eyewitnesses who were asked about their certainty prior to the feedback manipulation (Experiment 2) were less influenced, but large effects still emerged on some measures. The magnitude of the effect was as strong for those who denied that the feedback influenced them as it was for those who admitted to the influence.
Article
Full-text available
Research has shown a discrepancy between estimated and actually observed accuracy of reminiscent details in eyewitness accounts. This estimation-observation gap is of particular relevance with regard to the evaluation of eyewitnesses’ accounts in the legal context. To date it has only been demonstrated in non-naturalistic settings, however. In addition, it is not known whether this gap extends to other tasks routinely employed in real-world trials, for instance person-identification tasks. In this study law students witnessed a staged event and were asked to either recall the event and perform a person identification task or estimate the accuracy of the others’ performance. Additionally, external estimations were obtained from students who had not witnessed the event, but received a written summary instead. The estimation-observation gap was replicated for reminiscent details under naturalistic encoding conditions. This gap was more pronounced when compared to forgotten details, but not significantly so when compared to consistent details. In contrast, accuracy on the person-identification task was not consistently underestimated. The results are discussed in light of their implications for real-world trials and future research.
Article
Full-text available
In the legal system, inconsistencies in eyewitness accounts are often used to discredit witnesses’ credibility. This is at odds with research findings showing that witnesses frequently report reminiscent details (details previously unrecalled) at an accuracy rate that is nearly as high as for consistently recalled information. The present study sought to put the validity of beliefs about recall consistency to a test by directly comparing them with actual memory performance in two recall attempts. All participants watched a film of a staged theft. Subsequently, the memory group (N = 84) provided one statement immediately after the film (either with the Self-Administered Interview or free recall) and one after a one-week delay. The estimation group (N = 81) consisting of experienced police detectives estimated the recall performance of the memory group. The results showed that actual recall performance was consistently underestimated. Also, a sharp decline of memory performance between recall attempts was assumed by the estimation group whereas actual accuracy remained stable. While reminiscent details were almost as accurate as consistent details, they were estimated to be much less accurate than consistent information and as inaccurate as direct contradictions. The police detectives expressed a great concern that reminiscence was the result of suggestive external influences. In conclusion, it seems that experienced police detectives hold many implicit beliefs about recall consistency that do not correspond with actual recall performance. Recommendations for police trainings are provided. These aim at fostering a differentiated view on eyewitness performance and the inclusion of more comprehensive classes on human memory structure.
Article
Full-text available
Objectives: Eyewitness misidentifications have been implicated in many of the DNA exoneration cases that have come to light in recent years. One reform designed to address this problem involves switching from simultaneous lineups to sequential lineups, and our goal was to test the diagnostic accuracy of these two procedures using actual eyewitnesses.
Methods: In a recent randomized field trial comparing the performance of simultaneous and sequential lineups in the real world, suspect ID rates were found to be similar for the two procedures. Filler ID rates were found to be slightly (but, in the key test, nonsignificantly) higher for simultaneous than sequential lineups, but fillers will not be prosecuted even if identified. Moreover, filler IDs may not provide reliable information about innocent suspect IDs. Here, we use two different proxy measures for ground truth of guilt versus innocence for suspects identified from simultaneous or sequential lineups in that same field study.
Results: The results indicate that innocent suspects are, if anything, less likely to be mistakenly identified, and guilty suspects are more likely to be correctly identified, from simultaneous lineups compared to sequential lineups.
Conclusions: Filler identifications are not necessarily predictive of the more consequential error of misidentifying an innocent suspect. With regard to actual suspect identifications, simultaneous lineups are diagnostically superior to sequential lineups. These findings are consistent with recent laboratory-based studies using receiver operating characteristic analysis suggesting that simultaneous lineups make it easier for eyewitnesses to tell the difference between innocent and guilty suspects.
Article
Full-text available
Examined the suggestion that "fair" lineups should contain others who resemble the suspect in general physical appearance and considered the outcome in terms of lost convictions of guilty suspects. 96 unsuspecting witnesses to a staged crime were given the opportunity to identify a criminal (confederate) from relatively fair or unfair lineups (6-picture arrays varying high vs low similarity). One fair and 1 unfair lineup contained a picture of the criminal (criminal-present lineups), while 1 fair and 1 unfair lineup contained a picture of an innocent suspect who resembled the criminal (criminal-absent lineup). Results indicate that high-similarity lineups produced less identification of the criminal and of the innocent suspect than low-similarity lineups. However, the reduction in identification of the criminal was much less dramatic than the reduction of identifications of the innocent suspect. A Bayesian analysis revealed that identification evidence obtained from relatively fair high-similarity lineups is superior to similar evidence obtained from relatively unfair low-similarity lineups. It is concluded that the cost of using fair lineups is rather small. (13 ref) (PsycINFO Database Record (c) 2014 APA, all rights reserved)
Article
Full-text available
The theoretical understanding of eyewitness identifications made from a police lineup has long been guided by the distinction between absolute and relative decision strategies. In addition, the accuracy of identifications associated with different eyewitness memory procedures has long been evaluated using measures like the diagnosticity ratio (the correct identification rate divided by the false identification rate). Framed in terms of signal-detection theory, both the absolute/relative distinction and the diagnosticity ratio are mainly relevant to response bias while remaining silent about the key issue of diagnostic accuracy, or discriminability (i.e., the ability to tell the difference between innocent and guilty suspects in a lineup). Here, we propose a signal-detection-based model of eyewitness identification, one that encourages the use of (and helps to conceptualize) receiver operating characteristic (ROC) analysis to measure discriminability. Recent ROC analyses indicate that the simultaneous presentation of faces in a lineup yields higher discriminability than the presentation of faces in isolation, and we propose a diagnostic feature-detection hypothesis to account for that result. According to this hypothesis, the simultaneous presentation of faces allows the eyewitness to appreciate that certain facial features (viz., those that are shared by everyone in the lineup) are non-diagnostic of guilt. To the extent that those non-diagnostic features are discounted in favor of potentially more diagnostic features, the ability to discriminate innocent from guilty suspects will be enhanced. (PsycINFO Database Record (c) 2014 APA, all rights reserved).
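
A toy rendering of the diagnostic feature-detection hypothesis treats each face's match signal as a shared, non-diagnostic component plus a face-specific component; discounting the shared component improves the separation between guilty and innocent suspects. All values below are assumptions:

```python
# Toy sketch of the diagnostic feature-detection idea: each face's match signal is
# the sum of a component shared by every lineup member (non-diagnostic of guilt)
# and a face-specific component (potentially diagnostic). Discounting the shared
# component, as simultaneous presentation is hypothesized to allow, improves the
# separation between guilty and innocent suspects. Parameters are assumptions.
import numpy as np

rng = np.random.default_rng(4)
n = 200_000
shared = rng.normal(0.0, 1.0, n)                   # non-diagnostic, same for all faces
guilty_specific = rng.normal(1.0, 1.0, n)          # diagnostic component, guilty suspect
innocent_specific = rng.normal(0.0, 1.0, n)        # diagnostic component, innocent suspect

def d_prime(a, b):
    return (a.mean() - b.mean()) / np.sqrt((a.var() + b.var()) / 2)

# Showup-like judgment on the raw signal (shared component still included):
print("with shared features:       d' ≈",
      round(d_prime(shared + guilty_specific, shared + innocent_specific), 2))
# Lineup-like judgment after the shared component is discounted:
print("shared features discounted: d' ≈",
      round(d_prime(guilty_specific, innocent_specific), 2))
```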
Article
Full-text available
Scientists in many disciplines have begun to raise questions about the evolution of research findings over time (Ioannidis in Epidemiology, 19, 640-648, 2008; Jennions & Møller in Proceedings of the Royal Society, Biological Sciences, 269, 43-48, 2002; Mullen, Muellerleile, & Bryan in Personality and Social Psychology Bulletin, 27, 1450-1462, 2001; Schooler in Nature, 470, 437, 2011), since many phenomena exhibit decline effects-reductions in the magnitudes of effect sizes as empirical evidence accumulates. The present article examines empirical and theoretical evolution in eyewitness identification research. For decades, the field has held that there are identification procedures that, if implemented by law enforcement, would increase eyewitness accuracy, either by reducing false identifications, with little or no change in correct identifications, or by increasing correct identifications, with little or no change in false identifications. Despite the durability of this no-cost view, it is unambiguously contradicted by data (Clark in Perspectives on Psychological Science, 7, 238-259, 2012a; Clark & Godfrey in Psychonomic Bulletin & Review, 16, 22-42, 2009; Clark, Moreland, & Rush, 2013; Palmer & Brewer in Law and Human Behavior, 36, 247-255, 2012), raising questions as to how the no-cost view became well-accepted and endured for so long. Our analyses suggest that (1) seminal studies produced, or were interpreted as having produced, the no-cost pattern of results; (2) a compelling theory was developed that appeared to account for the no-cost pattern; (3) empirical results changed over the years, and subsequent studies did not reliably replicate the no-cost pattern; and (4) the no-cost view survived despite the accumulation of contradictory empirical evidence. Theories of memory that were ruled out by early data now appear to be supported by data, and the theory developed to account for early data now appears to be incorrect.
Article
Full-text available
Counterfactual imaginings are known to have far-reaching implications. In the present experiment, we ask if imagining events from one's past can affect memory for childhood events. We draw on the social psychology literature showing that imagining a future event increases the subjective likelihood that the event will occur. The concepts of cognitive availability and the source-monitoring framework provide reasons to expect that imagination may inflate confidence that a childhood event occurred. However, people routinely produce myriad counterfactual imaginings (i.e., daydreams and fantasies) but usually do not confuse them with past experiences. To determine the effects of imagining a childhood event, we pretested subjects on how confident they were that a number of childhood events had happened, asked them to imagine some of those events, and then gathered new confidence measures. For each of the target items, imagination inflated confidence that the event had occurred in childhood. We discuss implications for situations in which imagination is used as an aid in searching for presumably lost memories.
Article
The Innocence Movement has had a profound impact on criminal law and criminal justice policy. We believe it can also contribute to ongoing reexaminations of legal and ethical theory – namely in discussions of the Blackstone Principle. As this paper shows, any discussion of this venerable principle requires attention be paid to the relationship between wrongful conviction and violent crime. When the state arrests and incarcerates the wrong person, the true perpetrator remains at liberty. In many cases these individuals commit a series of crimes during this period of “wrongful liberty” (which we define as the period between the original crime and when the true perpetrator is arrested). This paper presents an account of wrongful liberty, and its relationship to legal and ethical theory, as well as a first-of-its-kind documentation of all known crimes of wrongful liberty in a single state, North Carolina. Our experience in North Carolina suggests that law students working with undergraduate students, under the supervision of attorneys experienced with state criminal records databases, can gather such information easily. We believe this method can and should be replicated in other jurisdictions so that legal scholars can develop a more complete understanding of how wrongful liberty informs the Blackstone Principle in the context of the American criminal justice system.
Chapter
Recollections of unexpected and emotional events (called 'flashbulb' memories) have long been the subject of theoretical speculation. Previous meetings have brought together everyone who has done research on memories of the Challenger explosion, in order to gain a better understanding of the phenomenon of flashbulb memories. How do flashbulb memories compare with other kinds of recollections? Are they unusually accurate, or especially long-lived? Do they reflect the activity of a special mechanism, as has been suggested? Although Affect and Accuracy in Recall focuses on flashbulb memories, it addresses more general issues of affect and accuracy. Do emotion and arousal strengthen memory? If so, under what conditions? By what physiological mechanisms? This 1993 volume is evidence of progress made in memory research since Brown and Kulik's 1977 paper.
Article
Memory for the content of our conversations reflects two partially conflicting demands. First, to be an effective participant in a conversation, we use our memory to follow its trajectory, to keep track of unresolved details, and to model the intentions and knowledge states of our partners. Second, to effectively remember a conversation, we need to recall the gist of what was said, by whom, and in what context. These two sets of demands are often different in their content and character. In this article, we review what is known about distant memory for conversations, focusing on issues that have particular relevance for legal contexts. We highlight evidence likely to be of importance in legal contexts, including estimates of how much information can be recalled, the quantity and types of errors that are likely to be made, and the situational factors that shape memory for conversation. The biases we see in distant memory for a conversation reflect in part the interplay of the conflicting demands that conversation places upon us.
Article
The wisdom of the crowd refers to the finding that judgments aggregated over individuals are typically more accurate than the average individual’s judgment. Here, we examine the potential for improving crowd judgments by allowing individuals to choose which of a set of queries to respond to. If individuals’ metacognitive assessments of what they know are accurate, allowing individuals to opt in to questions of interest or expertise has the potential to create a more informed knowledge base over which to aggregate. This prediction was confirmed: crowds composed of volunteered judgments were more accurate than crowds composed of forced judgments. Overall, allowing individuals to use private metacognitive knowledge holds much promise in enhancing judgments, including those of the crowd.
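The aggregation logic in this abstract lends itself to a quick simulation. The toy model below is my own construction (the sampling assumptions and the 0.65 opt-in threshold are invented, not taken from the study): it compares majority-vote accuracy when everyone must answer every question against accuracy when people volunteer only for items they are likely to know.

```python
# Toy wisdom-of-the-crowd simulation (illustrative assumptions only).
import numpy as np

rng = np.random.default_rng(0)
n_people, n_questions = 50, 200
truth = rng.integers(0, 2, size=n_questions)            # correct answer to each binary question
p = rng.beta(4, 3, size=(n_people, n_questions))         # each person's accuracy on each item
correct = rng.random((n_people, n_questions)) < p        # whether each answer would be correct
answers = np.where(correct, truth, 1 - truth)            # the binary answers actually given

def majority_accuracy(mask):
    """Accuracy of the per-question majority vote, counting only respondents flagged in mask."""
    votes_for_1 = (answers * mask).sum(axis=0)
    n_votes = mask.sum(axis=0)
    crowd_answer = (votes_for_1 * 2 > n_votes).astype(int)
    return float(np.mean(crowd_answer == truth))

forced = np.ones_like(answers, dtype=bool)                # everyone answers everything
opt_in = p > 0.65                                         # volunteer only when likely to know
opt_in |= forced & (opt_in.sum(axis=0) == 0)              # fall back if nobody volunteers

print("forced crowd accuracy:", majority_accuracy(forced))
print("opt-in crowd accuracy:", majority_accuracy(opt_in))   # typically the higher of the two
```

Under these assumptions the opt-in crowd tends to be more accurate, mirroring the paper's finding, because the voters on each item are on average the better-informed ones.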
Article
For decades, sequential lineups have been considered superior to simultaneous lineups in the context of eyewitness identification. However, most of the research leading to this conclusion was based on the analysis of diagnosticity ratios that do not control for the respondent’s response criterion. Recent research based on the analysis of ROC curves has found either equal discriminability for sequential and simultaneous lineups, or higher discriminability for simultaneous lineups. Some evidence for potential position effects and for criterion shifts in sequential lineups has also been reported. Using ROC curve analysis, we investigated the effects of the suspect’s position on discriminability and response criteria in both simultaneous and sequential lineups. We found that sequential lineups suffered from an unwanted position effect. Respondents employed a strict criterion for the earliest lineup positions, and shifted to a more liberal criterion for later positions. No position effects and no criterion shifts were observed in simultaneous lineups. This result suggests that sequential lineups are not superior to simultaneous lineups, and may give rise to unwanted position effects that have to be considered when conducting police lineups.
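To make the reported criterion shift concrete, here is a rough signal detection sketch of a sequential lineup in which the witness identifies the first face whose match to memory exceeds the current criterion. The d' value, the criterion values, and the first-above-criterion stopping rule are my assumptions for illustration, not parameters from the study.

```python
# Rough illustration of a position effect produced by a criterion that loosens
# across sequential positions (innocent faces ~ N(0, 1), guilty suspect ~ N(dp, 1)).
import numpy as np
from scipy.stats import norm

dp = 1.5
criteria = np.linspace(1.8, 0.8, 6)              # strict at position 1, liberal by position 6

p_reject_filler = norm.cdf(criteria)              # chance a filler is (correctly) passed over
survive = np.concatenate(([1.0], np.cumprod(p_reject_filler[:-1])))  # no earlier pick was made

p_false_id = survive * norm.sf(criteria)          # innocent suspect placed at this position
p_correct_id = survive * norm.sf(criteria - dp)   # guilty suspect placed at this position

for pos, (fa, hit) in enumerate(zip(p_false_id, p_correct_id), start=1):
    print(f"position {pos}: P(false ID) = {fa:.3f}, P(correct ID) = {hit:.3f}")
# With these numbers the false-ID rate rises from about .04 at position 1 to
# about .13 at position 6, purely because the criterion relaxes across positions.
```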
Article
From the perspective of signal detection theory, different lineup instructions may induce different levels of response bias. If so, then collecting correct and false identification rates across different instructional conditions will trace out the receiver operating characteristic (ROC)—the same ROC that, theoretically, could also be traced out from a single instruction condition in which each eyewitness decision is accompanied by a confidence rating. We tested whether the two approaches do in fact yield the same ROC. Participants were assigned to a confidence rating condition or to an instructional biasing condition (liberal, neutral, unbiased, or conservative). After watching a video of a mock crime, participants were presented with instructions followed by a six-person simultaneous photo lineup. The ROCs from both methods were similar, but they were not exactly the same. These findings have potentially important policy implications for how the legal system should go about controlling eyewitness response bias.
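For readers unfamiliar with the two ROC-construction methods being compared, the sketch below shows the arithmetic of each; all counts and rates are hypothetical placeholders, not data from this experiment.

```python
# Confidence-based ROC: cumulate suspect-ID rates from the highest confidence
# level downward. Instruction-based ROC: one operating point per biasing condition.
import numpy as np

# Hypothetical suspect identifications by confidence level (highest -> lowest),
# out of 300 target-present and 300 target-absent lineups.
hits_by_conf = np.array([60, 45, 40, 25, 20])
false_ids_by_conf = np.array([5, 10, 15, 25, 35])
n_present = n_absent = 300

hit_rate = np.cumsum(hits_by_conf) / n_present        # ">= highest", ">= next level", ...
false_id_rate = np.cumsum(false_ids_by_conf) / n_absent
print(list(zip(false_id_rate.round(3), hit_rate.round(3))))   # points tracing one ROC

# Instruction-based ROC (one point per instruction condition; rates are made up):
instruction_points = {"conservative": (0.03, 0.35), "neutral": (0.08, 0.50),
                      "liberal": (0.15, 0.60)}
# If biasing instructions and confidence ratings tap the same underlying memory
# signal, both sets of points should fall on approximately the same curve.
```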
Chapter
Signal detection theory has guided thinking about recognition memory since it was first applied by Egan in 1958. Essentially a tool for measuring decision accuracy in the context of uncertainty, detection theory offers an integrated account of simple old–new recognition judgments, decision confidence, and the relationship of those responses to more complex memory judgments such as recognition of the context in which an event was previously experienced. In this chapter, several commonly used signal detection models of recognition memory, and their threshold-based competition, are reviewed and compared against data from a wide range of tasks. Overall, the simpler signal detection models are the most successful.
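As background for the model comparison this chapter undertakes, the standard Gaussian signal detection equations for recognition memory (textbook forms, not notation specific to that chapter) are:

```latex
% Equal-variance model: new-item strength ~ N(0,1), old-item strength ~ N(d',1);
% at decision criterion c:
H = \Phi(d' - c), \qquad F = \Phi(-c), \qquad d' = z(H) - z(F)

% Unequal-variance generalization (old-item mean \mu_o, standard deviation \sigma_o):
H = \Phi\!\left(\frac{\mu_o - c}{\sigma_o}\right), \qquad
z(H) = \frac{1}{\sigma_o}\,z(F) + \frac{\mu_o}{\sigma_o}
% so the slope of the z-transformed ROC estimates 1/\sigma_o.
```

Threshold models, by contrast, predict linear ROCs in probability coordinates, which is one standard way such models are distinguished from the Gaussian models above.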
Article
We investigated the impact of filler quality and presence on confidence, response latency, and propensity to respond ‘don't know’ in eyewitness line-ups and showups. More specifically, we tested the hypothesis that confident, fast witnesses would be accurate in fair line-ups and showups, but the inclusion of duds (poor fillers) would break down these relationships in a biased line-up. Participants viewed a mock crime video, made a timed identification decision, and gave a confidence judgment. As predicted, biased line-up witnesses were fast and confident, regardless of accuracy, and rarely responded ‘don't know’. In addition, we found that witnesses who were the fastest and most confident were equally accurate in fair line-ups and showups, and both were better than biased line-ups. These findings suggest that biased line-ups should not be used (although, unfortunately, they frequently are); in fact, it may be better to conduct a showup than a biased line-up.
Article
In court, the basic expectation is that eyewitness accounts are based solely on what the witness saw. Research on post-event influences has shown that this is not always the case and that memory distortions are quite common. However, the potential effects of an eyewitness’ attributions regarding a perpetrator’s crime motives have been widely neglected in this domain. In this paper, we present two experiments (N = 209) in which eyewitnesses were led to conclude that a perpetrator’s motives for a crime were either dispositional or situational. As expected, misinformation consistent with an eyewitness’ attribution of crime motives was typically falsely recognised as true, whereas inconsistent misinformation was correctly rejected. Furthermore, a dispositional (vs. situational) attribution of crime motives resulted in more severe (mock) sentencing, supporting previous research. The findings are discussed in the context of schema-consistent biases and the effect of attributions about character in a legal setting.
Article
A theoretical system of metacognitive components for self-directed memory retrieval is described, and relevant empirical data are reported. The metacognitive components include (1) a preliminary feeling of knowing for an answer; (2) a confidence judgment about a retrieved answer after a search of memory; (3) a decision of whether to output a retrieved answer; (4) a subsequent feeling of knowing for a nonretrieved answer; and (5) a decision of whether to continue or terminate searching memory for the unretrieved answer. Some of these components have been investigated previously but only in isolation. Here we integrate them into a theoretical system for directing one's own retrieval. The system gives a good account of relevant older findings and of several new findings, in particular, those demonstrating how people trade off the costs and benefits of continued searching and how the threshold for the decision to continue searching varies in a predictable way. The theoretical system also accounts for several newly reported findings from earlier research conducted on Mount Everest (and related findings in the literature) by postulating two separable major subdivisions in the system: one that gives rise to guesses (including commission errors and correct responses) and another that gives rise to omission errors. Different metacognitive mechanisms are proposed to have the major responsibility for each of those subdivisions.
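The search-termination component described here amounts to a cost-benefit stopping rule plus an output criterion on confidence. The sketch below is a toy rendering of that idea; every parameter name and numeric value is invented for illustration and is not drawn from the article.

```python
# Toy cost-benefit model of self-directed retrieval (illustrative only):
# keep searching while the expected payoff of another attempt exceeds its cost,
# and report a retrieved answer only if confidence clears an output criterion.
import random

def search_and_report(feeling_of_knowing, confidence_if_retrieved,
                      p_success_per_attempt=0.3, value_correct=1.0,
                      cost_per_attempt=0.1, output_criterion=0.5,
                      max_attempts=10, rng=None):
    rng = rng or random.Random(0)
    for attempt in range(1, max_attempts + 1):
        expected_gain = feeling_of_knowing * p_success_per_attempt * value_correct
        if expected_gain < cost_per_attempt:                 # stop searching -> omission error
            return "omission: search terminated before an answer was retrieved"
        if rng.random() < p_success_per_attempt:             # a candidate answer is retrieved
            if confidence_if_retrieved >= output_criterion:  # decide whether to output it
                return f"answer reported after {attempt} attempt(s)"
            return "answer retrieved but withheld (low confidence)"
        feeling_of_knowing *= 0.8                            # failed attempts lower the later feeling of knowing
    return "omission: search exhausted"

print(search_and_report(feeling_of_knowing=0.9, confidence_if_retrieved=0.8))
print(search_and_report(feeling_of_knowing=0.2, confidence_if_retrieved=0.9))
```

With a strong feeling of knowing and confident retrieval, the toy model reports an answer; with a weak feeling of knowing it gives up immediately, producing an omission error.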
Article
ROC analysis is a straightforward but non-intuitive way to determine which of two identification procedures better enables a population of eyewitnesses to correctly sort innocent and guilty suspects into their respective categories. This longstanding analytical method, which is superior to using the diagnosticity ratio for identifying the better procedure, is not in any way compromised by the presence of fillers in lineups and is not tied to any particular theory of memory or discrimination (i.e., it is a theory-free methodology). ROC analysis is widely used in other applied fields, such as diagnostic medicine, and this is true even when the medical procedure in question is exactly analogous to a lineup (e.g., a detection-plus-quadrant-localization task in radiology). Bayesian measures offer no replacement for ROC analysis because they pertain to the information value of a particular diagnostic decision, not to the general diagnostic accuracy of an eyewitness identification procedure.
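The distinction drawn here between a Bayesian measure (the information value of one decision) and ROC-based discriminability can be illustrated with a few lines of arithmetic; the base rate and identification rates below are hypothetical.

```python
# Illustrative only: the posterior probability of guilt evaluates a single
# suspect identification, not the overall accuracy of the procedure that produced it.
prior_guilty = 0.5                        # assumed base rate of guilty suspects in lineups
hit_rate, false_id_rate = 0.60, 0.10      # hypothetical suspect-ID rates for one procedure

posterior = (hit_rate * prior_guilty) / (
    hit_rate * prior_guilty + false_id_rate * (1 - prior_guilty))
print(f"P(guilty | suspect identified) = {posterior:.2f}")   # about 0.86 with these numbers

# Running the same procedure with a stricter criterion changes both rates, and
# hence the posterior, even though discriminability is unchanged -- which is why
# a single Bayesian value cannot rank procedures the way a full ROC comparison can.
```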
Chapter
The goal of this chapter is to provide a beginning sketch of the view that the successes and failures of memory are a reflection of skill in interacting with memory effectively, rather than an expression of inherent qualities or liabilities of memory itself. The first goal is to expand the purview of memory research by considering the cognitive contexts in which memory behavior is situated. The second and concurrent goal concerns the nature of memory itself. This goal represents an explicit attempt to reduce the proliferation of memory systems and memory processes. Finally, this chapter reviews examples of how particular capacities of memory can be conceptualized as interactions between a quite simple memory system and a set of higher-level control processes that are diverse and varied.
Article
Examined the US judiciary's use of witness level of confidence as 1 of 5 criteria in judging the trustworthiness of eyewitness testimony. Juror perceptions of witness confidence account for 50% of the variance in juror judgments as to witness accuracy. However, a review of 43 separate assessments of the accuracy/confidence relation in eye- and earwitnesses does not support certainty as a predictor of accuracy. Statistical support was found for the notion that the predictability of accuracy from overtly expressed confidence varies directly with the degree of optimality of information-processing conditions during encoding of the witnessed event, memory storage, and testing of the witness's memory. Low optimal conditions, those militating against the likelihood of highly reliable testimony, typically resulted in a zero correlation of confidence and accuracy. Using the arbitrary criterion of 70% or greater accuracy to define high optimal conditions, 7 forensically relevant laboratory studies are identified, with 6 exhibiting significant positive correlations of confidence and accuracy. It is concluded, however, that no really clear criteria currently exist for distinguishing post hoc high from low optimal witnessing conditions in any particular real-life situation.
Article
We conducted an experiment (N = 2675) including both laboratory and online participants to test hypotheses regarding important system and estimator variables for eyewitness identification. Simultaneous lineups were compared to sequential lineups with the suspect presented early versus late because there is evidence that suspect position could be an important factor determining a simultaneous versus sequential advantage in guilty-innocent suspect discriminability. We also manipulated whether or not the perpetrator held a weapon or had a distinctive feature on his face, to re-evaluate recent evidence that these factors interact. Overall, the simultaneous lineup yielded higher discriminability than the sequential lineup, and there was no effect of sequential position. Discriminability was higher when the perpetrator had no weapon, but only when no distinctive feature was present. We conclude with a discussion of the importance of exploring interactions between system and estimator variables using Receiver Operating Characteristic (ROC) analysis.
Article
A great deal of research has focused on eyewitness identification performance as a function of sequential versus simultaneous lineup presentation methods. We examined whether individual differences in cognitive ability influence eyewitness identification, and whether these factors lead to performance differences as a function of lineup presentation method. We found that individual differences in facial recognition ability, working memory capacity, and levels of autistic traits did result in differential performance. Differences in lineup performance are due to the interaction of individual differences and presentation method, signaling that it is possible to enhance the accuracy of eyewitness identifications by tailoring a lineup presentation method to the capabilities of an individual eyewitness.
Article
Lineup administrators were trained to respond to witnesses in such a way as to redirect them from making non-identifications or foil identification responses toward making identifications of the suspect. Compared to a no-influence control condition, suspect identification rates in the influence condition increased substantially and proportionally for guilty and innocent suspects. Administrators steered witnesses more specifically toward the suspect when the suspect was guilty than when the suspect was innocent. Post-identification confidence for correct identifications of the guilty suspect did not differ significantly across the influence and no-influence groups. However, post-identification confidence for false identifications of the innocent suspect was significantly lower for the influence group than for the no-influence group because witnesses who were influenced to make false identifications tended to be those who were less confident prior to the lineup, and also because those witnesses became less confident from pre- to post-identification.