Article · Literature Review

Cognitive bias in workplace investigation: Problems, perspectives and proposed solutions

Abstract

Psychological research demonstrates how our perceptions and cognitions are affected by context, motivation, expectation, and experience. A mounting body of research has revealed the many sources of bias that affect the judgments of experts as they execute their work. Professionals in fields such as forensic science, intelligence analysis, criminal investigation, and medical and judicial decision-making find themselves at an inflection point where past professional practices are being questioned and new approaches developed. Workplace investigation is a professional domain that is in many ways analogous to these decision-making environments. Yet workplace investigation is also unique, as the sources, magnitude, and direction of bias are specific to workplace environments. The workplace investigation literature does not comprehensively address the many ways that the workings of honest investigators' minds may be biased when collecting evidence and/or rendering judgments, nor does it offer a set of strategies to address these effects. The current paper is the first to offer a comprehensive overview of the important issue of cognitive bias in workplace investigation. In it I discuss the abilities and limitations of human cognition, provide a framework of sources of bias, and offer suggestions for bias mitigation in the investigation process.

References
Article
Full-text available
All decision making, and particularly expert decision making, requires the examination, evaluation, and integration of information. Research has demonstrated that the order in which information is presented plays a critical role in decision making processes and outcomes. Different decisions can be reached when the same information is presented in a different order. Because information must always be considered in some order, optimizing this sequence is important for optimizing decisions. Since adopting one sequence or another is inevitable (some sequence must be used), and since the sequence has important cognitive implications, it follows that considering how best to sequence information is paramount. In the forensic sciences, existing approaches to optimizing the order of information processing (sequential unmasking and Linear Sequential Unmasking) are limited in their narrow applicability to only certain types of decisions, and they focus only on minimizing bias rather than optimizing forensic decision making in general. Here, we introduce Linear Sequential Unmasking–Expanded (LSU-E), an approach that is applicable to all forensic decisions rather than being limited to a particular type of decision, and that also reduces noise and improves forensic decision making in general rather than solely by minimizing bias.
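The sequencing logic lends itself to a small illustration. Below is a minimal Python sketch of an LSU-E-style ordering, assuming hypothetical 0-1 ratings on three criteria this literature commonly weighs (how biasing, how objective, and how task-relevant each item is); the composite score, the item names, and all numbers are invented for the example, not taken from the paper.

```python
from dataclasses import dataclass

@dataclass
class InfoItem:
    """One piece of case information, rated 0-1 on each criterion.
    Ratings would come from a lab's context-management policy;
    the values used below are invented for this sketch."""
    label: str
    biasing_power: float  # potential to sway the examiner
    objectivity: float    # independence from human interpretation
    relevance: float      # necessity for the analytic task at hand

def sequence_information(items: list[InfoItem]) -> list[InfoItem]:
    """Process relevant, objective material first; defer potentially
    biasing contextual material until initial judgments are documented."""
    return sorted(
        items,
        key=lambda i: i.relevance + i.objectivity - i.biasing_power,
        reverse=True,
    )

case_file = [
    InfoItem("latent print from the scene", biasing_power=0.1, objectivity=0.9, relevance=1.0),
    InfoItem("suspect reference print", biasing_power=0.4, objectivity=0.8, relevance=0.9),
    InfoItem("detective's case narrative", biasing_power=0.9, objectivity=0.2, relevance=0.2),
]

for item in sequence_information(case_file):
    print(item.label)  # raw evidence first, contextual narrative last
```

Whatever scoring rule a laboratory adopts, the design point is the same: the sequence is decided deliberately and in advance, rather than left to whatever order the case file happens to arrive in.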
Article
Full-text available
Introduction: A critical aspect of occupational safety is workplace inspections by experts, in which hazards are identified. Scientific research demonstrates that expectation generated by context (i.e., prior knowledge and experience) can bias the judgments of professionals and that individuals are largely unaware when their judgments are affected by bias. Method: The current research tested the reliability and biasability of expert safety inspectors' judgments. We used a two-study design (Study 1, N = 83; Study 2, N = 70) to explore the potential of contextual, task-irrelevant information to bias professionals' judgments. We examined three main issues: (1) the effect that biasing background information (safe and unsafe company history) had on professional regulatory safety inspectors' judgments of a worksite; (2) the reliability of those judgments amongst safety inspectors; and (3) inspectors' awareness of bias in their judgments and confidence in their performance. Results: Our findings establish that: (i) inspectors' judgments were biased by historical contextual information; (ii) they were not only biased, but the impact was implicit: they reported being unaware that it affected their judgments; and (iii) independent of our manipulations, inspectors were inconsistent with one another, and the variations were not a product of experience. Conclusion: Our results replicate findings from a host of other professional domains, where honest, hardworking professionals underappreciate the biasing effect of context on their decision making. The current paper situates these findings within the relevant research on safety inspection, cognitive bias, and decision making, and provides suggestions for bias mitigation in workplace safety inspection. Practical Application: Our results have implications for occupational health and safety, given that inspection is an integral aspect of an effective safety system. In addition to our findings, this study contributes to the literature by providing recommendations regarding how to mitigate the effect of bias in inspection.
Article
Full-text available
Full article available at https://onlinelibrary.wiley.com/doi/full/10.1111/1556-4029.14697 (including a preface by the Editor, along with links to 22 letters debating bias). Abstract: Forensic pathologists' decisions are critical in police investigations and court proceedings, as they determine whether an unnatural death of a young child was an accident or homicide. Does cognitive bias affect forensic pathologists' decision making? To address this question, we examined all death certificates issued during a 10-year period in the State of Nevada in the United States for children under the age of six. We also conducted an experiment with 133 forensic pathologists in which we tested whether knowledge of irrelevant non-medical information that should have no bearing on forensic pathologists' decisions influenced their manner-of-death determinations. The dataset of death certificates indicated that forensic pathologists were more likely to rule "homicide" rather than "accident" for deaths of Black children relative to White children. This may arise because the base-rate expectation creates an a priori cognitive bias to rule that Black children died as a result of homicide, which then perpetuates itself. Corroborating this explanation, the experimental data with the 133 forensic pathologists exhibited biased decisions when given identical medical information but different irrelevant non-medical information about the race of the child and the caregiver who brought the child to the hospital. These findings together demonstrate how extraneous information can result in cognitive bias in forensic pathology decision making.
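To see how a base-rate expectation could tip an otherwise identical case, consider a toy Bayesian calculation (my illustration only; the numbers are invented and do not come from the study's data): the same ambiguous medical evidence crosses the decision threshold under a higher prior, and each such ruling then feeds back into the base rate that informs the next prior.

```python
# Illustrative only: how a higher prior (base-rate expectation) can flip
# the same ambiguous medical evidence from "accident" to "homicide".

def posterior_homicide(prior: float, likelihood_ratio: float) -> float:
    """P(homicide | evidence), given a prior and the evidence's
    likelihood ratio P(evidence | homicide) / P(evidence | accident)."""
    odds = (prior / (1 - prior)) * likelihood_ratio
    return odds / (1 + odds)

evidence_lr = 1.5  # weakly ambiguous medical findings (hypothetical)

for prior in (0.3, 0.5):  # two different base-rate expectations
    p = posterior_homicide(prior, evidence_lr)
    ruling = "homicide" if p > 0.5 else "accident"
    print(f"prior={prior:.2f} -> posterior={p:.2f} -> ruled {ruling}")
```

With a prior of 0.3 the posterior stays below 0.5 ("accident"); with a prior of 0.5 it reaches 0.6 ("homicide"). If rulings are then counted into the statistics that shape the next examiner's prior, the bias can be self-perpetuating, as the abstract suggests.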
Article
Full-text available
Order of evidence presentation affects the evaluation and the integration of evidence in mock criminal cases. In this study, we aimed to determine whether the order in which incriminating and exonerating evidence is presented influences cognitive dissonance and subsequent display of confirmation bias. Law students (N = 407) were presented with a murder case vignette, followed by incriminating and exonerating evidence in various orders. Contrary to a predicted primacy effect (i.e. early evidence being most influential), a recency effect (i.e. late evidence being most influential) was observed in ratings of likelihood of the suspect’s guilt. The cognitive dissonance ratings and conviction rates were not affected by the order of evidence presentation. The effects of evidence presentation order may be limited to specific aspects of legal decisions. However, there is a need to replicate the results using procedures and samples that are more representative of real-life criminal law trials.
Article
Full-text available
Full paper available at https://pubs.acs.org/doi/10.1021/acs.analchem.0c00704. Fallacies about the nature of biases have overshadowed a proper cognitive understanding of biases and their sources, an understanding which in turn leads to ways of minimizing their impact. In this paper six such fallacies are presented: that bias is an ethical issue, that it only applies to 'bad apples', that experts are impartial and immune, that technology eliminates bias, the bias blind spot, and the illusion of control. Then, eight sources of bias are discussed and conceptualized within three categories: (A) factors that relate to the specific case and analysis, which include the data, reference materials, and contextual information; (B) factors that relate to the specific person doing the analysis, which include base rates from past experience, organizational factors, education and training, and personal factors; and (C) cognitive architecture and human nature, which impacts all of us. These factors can impact what the data are (e.g., how data are sampled and collected, or what is considered noise and therefore disregarded); the actual results (e.g., decisions on testing strategies, how analysis is conducted, and when to stop testing); and the conclusions (e.g., the interpretation of the results). The paper concludes with specific measures that can minimize these biases.
Article
Full-text available
Over many centuries, courts have developed evidentiary and procedural rules that are aimed at preventing unreliable expert evidence from entering court proceedings. These systems act as gatekeepers and do well in some ways, but less well in other ways. Specifically, the courts should attempt to eliminate or correct for possible bias that is predominantly intentional. However, the courts have not, to date, developed robust ways to identify and counteract experts’ biases caused by factors that unconsciously affect the quality of their evidence. The current paper reviews the role of the expert for the court, as well as the nature of human cognition and information processing. We demonstrate that the judgments of highly trained, scientific, experts can be biased by a host of factors which range from the architecture of human cognition to features of the expert's environment. We then provide a three-step process for revealing bias in expert evidence, as well as ways to minimize such biases.
Article
Full-text available
The ISO/IEC 17020 and 17025 standards both include requirements for impartiality and the freedom from bias. Meeting these requirements for implicit cognitive bias is not a simple matter. In this article, we address these international standards, specifically focusing on evaluating and mitigating the risk to impartiality, and quality assurance checks, so as to meet accreditation program requirements. We cover their meaning to management as well as to practitioners, addressing how these issues of impartiality and bias relate to forensic work, and how one can effectively evaluate and mitigate those risks. We then elaborate on specific quality assurance policies and checks, and identify when corrective action may be appropriate. These measures will not only serve to meet ISO/IEC 17020 and 17025 requirements, but also enhance forensic work and decision-making.
Article
Full-text available
In response to research demonstrating that irrelevant contextual information can bias forensic science analyses, authorities have increasingly urged laboratories to limit analysts’ access to irrelevant and potentially biasing information (Dror & Cole, 2010; National Academy of Sciences, 2009; President’s Council of Advisors on Science and Technology, 2016; UK Forensic Science Regulator, 2015). However, a great challenge in implementing this reform is determining which information is task-relevant and which is task-irrelevant. In the current study, we surveyed 183 forensic analysts to examine what they consider relevant versus irrelevant in their forensic analyses. Results revealed that analysts generally do not regard information regarding the suspect or victim as essential to their analytic tasks. However, there was significant variability among analysts within and between disciplines. Findings suggest that forensic science disciplines need to agree on what they regard as task-relevant before context management procedures can be properly implemented. The lack of consensus about what is relevant information not only leaves room for biasing information, but also reveals foundational gaps in what analysts consider crucial in forensic decision making.
Article
Full-text available
The intelligence community uses 'structured analytic techniques' to help analysts think critically and avoid cognitive bias. However, little evidence exists of how techniques are applied and whether they are effective. We examined the use of the Analysis of Competing Hypotheses (ACH), a technique designed to reduce 'confirmation bias'. Fifty intelligence analysts were randomly assigned to use ACH or not when completing a hypothesis testing task that had probabilistic ground truth. Data on analysts' judgment processes and conclusions were collected using written protocols that were then coded for statistical analyses. We found that ACH-trained analysts did not follow all of the steps of ACH. There was mixed evidence for ACH's ability to reduce confirmation bias, and we observed that ACH may increase judgment inconsistency and error. It may be prudent for the intelligence community to consider the conditions under which ACH would prove useful, and to explore alternatives.
Article
Full-text available
Intelligence analysts, like other professionals, form norms that define standards of tradecraft excellence. These norms, however, have evolved in an idiosyncratic manner that reflects the influence of prominent insiders who had keen psychological insights but little appreciation for how to translate those insights into testable hypotheses. The net result is that the prevailing tradecraft norms of best practice are only loosely grounded in the science of judgment and decision-making. The "common sense" of prestigious opinion leaders inside the intelligence community has pre-empted systematic validity testing of the training techniques and judgment aids endorsed by those opinion leaders. Drawing on the scientific literature, we advance hypotheses about how current best practices could well be reducing rather than increasing the quality of analytic products. One set of hypotheses pertains to the failure of tradecraft training to recognize the most basic threat to accuracy: measurement error in the interpretation of the same data and in the communication of interpretations. Another set of hypotheses focuses on the insensitivity of tradecraft training to the risk that issuing broad-brush, one-directional warnings against bias (e.g., over-confidence) will be less likely to encourage self-critical, deliberative cognition than simple response-threshold shifting that yields the mirror-image bias (e.g., under-confidence). Given the magnitude of the consequences of better and worse intelligence analysis flowing to policy-makers, we see a compelling case for greater funding of efforts to test what actually works.
Article
Full-text available
A routine part of intelligence analysis is judging the probability of alternative hypotheses given available evidence. Intelligence organizations advise analysts to use intelligence-tradecraft methods such as Analysis of Competing Hypotheses (ACH) to improve judgment, but such methods have not been rigorously tested. We compared the evidence evaluation and judgment accuracy of a group of intelligence analysts who were recently trained in ACH and then used it on a probability judgment task to another group of analysts from the same cohort who were neither trained in ACH nor asked to use any specific method. Although the ACH group assessed information usefulness better than the control group, the control group was a little more accurate (and coherent) than the ACH group. Both groups, however, exhibited suboptimal judgment and were susceptible to unpacking effects. Although ACH failed to improve accuracy, we found that recalibration and aggregation methods substantially improved accuracy. Specifically, mean absolute error (MAE) in analysts' probability judgments decreased by 61% after first coherentizing their judgments (a process that ensures judgments respect the unitarity axiom) and then aggregating them. The findings cast doubt on the efficacy of ACH, and show the promise of statistical methods for boosting judgment quality in intelligence and other organizations that routinely produce expert judgments.
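The recalibration-and-aggregation idea can be sketched in a few lines, under simplifying assumptions: here "coherentizing" is implemented as rescaling each analyst's probabilities over mutually exclusive, exhaustive hypotheses so they sum to 1 (the unitarity axiom), and aggregation as an unweighted mean; the paper's actual procedures may differ in detail, and all data below are invented.

```python
def coherentize(judgments: list[float]) -> list[float]:
    """Rescale one analyst's probabilities so they sum to 1."""
    total = sum(judgments)
    return [p / total for p in judgments]

def aggregate(all_judgments: list[list[float]]) -> list[float]:
    """Average coherentized judgments across analysts, hypothesis-wise."""
    coherent = [coherentize(j) for j in all_judgments]
    n = len(coherent)
    return [sum(js[i] for js in coherent) / n for i in range(len(coherent[0]))]

# Three analysts judging the same three exhaustive hypotheses;
# the raw judgments violate unitarity (they do not sum to 1).
analysts = [
    [0.7, 0.4, 0.2],
    [0.5, 0.3, 0.1],
    [0.6, 0.5, 0.3],
]
print(aggregate(analysts))  # coherent, aggregated probabilities, summing to 1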
Article
Full-text available
Inconclusive decisions, deciding not to decide, are decisions. We present a cognitive model which takes into account that decisions are an outcome of interactions and intersections between the actual data and human cognition. Using this model it is suggested under which circumstances inconclusive decisions are justified and even warranted (reflecting proper caution and meta‐cognitive abilities in recognizing limited abilities), and, conversely, under what circumstances inconclusive decisions are unjustifiable and should not be permitted. The model further explores the limitations and problems in using categorical decision‐making when the data are actually a continuum. Solutions are suggested within the forensic fingerprinting domain, but they can be applied to other forensic domains, and, with modifications, may also be applied to other expert domains.
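One way to picture the continuum problem the model addresses is a toy decision rule with an explicit inconclusive band (the threshold and band width below are hypothetical, not the model's parameters): near the threshold, deciding not to decide reflects warranted caution, while far from it an inconclusive call would be unjustified.

```python
# A toy rendering of the tension described above: the underlying
# comparison score is continuous, but the reported decision is
# categorical. An explicit "inconclusive" band around the threshold
# makes deciding-not-to-decide a deliberate, auditable outcome.

def categorical_decision(score: float,
                         threshold: float = 0.5,
                         inconclusive_halfwidth: float = 0.1) -> str:
    """Map a continuous similarity score in [0, 1] to a categorical call."""
    if abs(score - threshold) <= inconclusive_halfwidth:
        return "inconclusive"  # justified caution near the threshold
    return "identification" if score > threshold else "exclusion"

for s in (0.2, 0.45, 0.55, 0.9):
    print(s, "->", categorical_decision(s))
```

Under this sketch, an "inconclusive" at score 0.9 would be impossible by construction, mirroring the model's point that inconclusives are warranted in some circumstances and impermissible in others.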
Article
Full-text available
Forensic evidence plays a critical role in court proceedings and the administration of justice. It is a powerful tool that can help convict the guilty and avoid wrongful conviction of the innocent. Unfortunately, flaws in forensic evidence are increasingly becoming apparent. Assessments of forensic science have too often focused only on the data and the underlying science, as if they exist in isolation, without sufficiently addressing the process by which forensic experts evaluate and interpret the evidence.
Article
Full-text available
Decision-making of mental health professionals is influenced by irrelevant information (e.g., Murrie, Boccaccini, Guarnera, & Rufino, 2013). However, the extent to which mental health evaluators acknowledge the existence of bias, recognize it, and understand the need to guard against it, is unknown. To formally assess beliefs about the scope and nature of cognitive bias, we surveyed 1,099 mental health professionals who conduct forensic evaluations for the courts or other tribunals (and compared these results with a companion survey of 403 forensic examiners, reported in Kukucka, Kassin, Zapf, & Dror, 2017). Most evaluators expressed concern over cognitive bias but held an incorrect view that mere willpower can reduce bias. Evidence was also found for a bias blind spot (Pronin, Lin, & Ross, 2002), with more evaluators acknowledging bias in their peers’ judgments than in their own. Evaluators who had received training about bias were more likely to acknowledge cognitive bias as a cause for concern, whereas evaluators with more experience were less likely to acknowledge cognitive bias as a cause for concern in forensic evaluation as well as in their own judgments. Training efforts should highlight the bias blind spot and the fallibility of introspection or conscious effort as a means of reducing bias. In addition, policies and procedural guidance should be developed in regard to best cognitive practices in forensic evaluations.
Article
Full-text available
Structured analytic techniques (SATs) are intended to improve intelligence analysis by checking the two canonical sources of error: systematic biases and random noise. Although both goals are achievable, no one knows how close the current generation of SATs comes to achieving either of them. We identify two root problems: (1) SATs treat bipolar biases as unipolar. As a result, we lack metrics for gauging possible overshooting and have no way of knowing when SATs that focus on suppressing one bias (e.g., over-confidence) are triggering the opposing bias (e.g., under-confidence). (2) SATs tacitly assume that problem decomposition (e.g., breaking reasoning into rows and columns of matrices corresponding to hypotheses and evidence) is a sound means of reducing noise in assessments. But no one has ever actually tested whether decomposition is adding or subtracting noise from the analytic process, and there are good reasons for suspecting that decomposition will, on balance, degrade the reliability of analytic judgment. The central shortcoming is that SATs have not been subject to sustained scientific scrutiny of the sort that could reveal when they are helping or harming the cause of delivering accurate assessments of the world to the policy community.
Article
Full-text available
Exposure to irrelevant contextual information prompts confirmation-biased judgments of forensic science evidence (Kassin, Dror, & Kukucka, 2013). Nevertheless, some forensic examiners appear to believe that blind testing is unnecessary. To assess forensic examiners’ beliefs about the scope and nature of cognitive bias, we surveyed 403 experienced examiners from 21 countries. Overall, examiners regarded their judgments as nearly infallible and showed only a limited understanding and appreciation of cognitive bias. Most examiners believed they are immune to bias or can reduce bias through mere willpower, and fewer than half supported blind testing. Furthermore, many examiners showed a bias blind spot (Pronin, Lin, & Ross, 2002), acknowledging bias in other domains but not their own, and in other examiners but not themselves. These findings underscore the necessity of procedural reforms that blind forensic examiners to potentially biasing information, as is commonplace in other branches of science.
Article
Full-text available
Over the past decade, there has been a growing openness about the importance of human factors in forensic work. However, most of this attention has focused on cognitive bias and has neglected issues of workplace wellness and stress. Forensic scientists work in a dynamic environment that includes common workplace pressures such as workload volume, tight deadlines, lack of advancement, number of working hours, low salary, technology distractions, and fluctuating priorities. In addition, however, forensic scientists encounter a number of industry-specific pressures, such as criticism of their techniques, repeated exposure to crime scenes or horrific case details, access to funding, working in an adversarial legal system, and zero tolerance for "errors". Thus, stress is an important human factor to mitigate for overall error management, productivity, and decision quality (not to mention the well-being of the examiners themselves). Techniques such as mindfulness can become powerful tools to enhance work and decision quality.
Article
Full-text available
Background: Cognitive biases and personality traits (aversion to risk or ambiguity) may lead to diagnostic inaccuracies and medical errors resulting in mismanagement or inadequate utilization of resources. We conducted a systematic review with four objectives: (1) to identify the most common cognitive biases; (2) to evaluate the influence of cognitive biases on diagnostic accuracy or management errors; (3) to determine their impact on patient outcomes; and (4) to identify literature gaps. Methods: We searched MEDLINE and the Cochrane Library databases for relevant articles on cognitive biases from 1980 to May 2015. We included studies conducted in physicians that evaluated at least one cognitive factor using case vignettes or real scenarios and reported an associated outcome written in English. Data quality was assessed by the Newcastle-Ottawa scale. Among 114 publications, 20 studies comprising 6,810 physicians met the inclusion criteria. Nineteen cognitive biases were identified. Results: All studies found at least one cognitive bias or personality trait to affect physicians. Overconfidence, lower tolerance to risk, the anchoring effect, and information and availability biases were associated with diagnostic inaccuracies in 36.5% to 77% of case scenarios. Five out of seven (71.4%) studies showed an association between cognitive biases and therapeutic or management errors. Of two (10%) studies evaluating the impact of cognitive biases or personality traits on patient outcomes, only one showed that higher tolerance to ambiguity was associated with increased medical complications (9.7% vs 6.5%; p = .004). Most studies (60%) targeted cognitive biases in diagnostic tasks; fewer focused on treatment or management (35%) and on prognosis (10%). Literature gaps include potentially relevant biases (e.g., aggregate bias, feedback sanction, hindsight bias) not investigated in the included studies. Moreover, only five (25%) studies used clinical guidelines as the framework to determine diagnostic or treatment errors. Most studies (n = 12, 60%) were classified as low quality. Conclusions: Overconfidence, the anchoring effect, information and availability bias, and tolerance to risk may be associated with diagnostic inaccuracies or suboptimal management. More comprehensive studies are needed to determine the prevalence of cognitive biases and personality traits and their potential impact on physicians' decisions, medical errors, and patient outcomes.
Article
Full-text available
Expert performance can be quantified by examining reliability and biasability between and within experts, and teasing apart their observations from their conclusions. I utilize these parameters to develop a Hierarchy of Expert Performance (HEP) that includes eight distinct levels. Using this hierarchy I evaluate and quantify the performance of forensic experts, a highly specialized domain that plays a critical role in the criminal justice system. Evaluating expert performance within HEP enables the identification of weaknesses in expert performance, and enables the comparison of experts across domains. HEP also provides theoretical and applied insights into expertise.
Article
Full-text available
Confirmation bias, as the term is typically used in the psychological literature, connotes the seeking or interpreting of evidence in ways that are partial to existing beliefs, expectations, or a hypothesis in hand. The author reviews evidence of such a bias in a variety of guises and gives examples of its operation in several practical contexts. Possible explanations are considered, and the question of its utility or disutility is discussed.
Article
Full-text available
We integrate multiple domains of psychological science to identify, better understand, and manage the effects of subtle but powerful biases in forensic mental health assessment. This topic is ripe for discussion, as research evidence that challenges our objectivity and credibility garners increased attention both within and outside of psychology. We begin by defining bias and provide rich examples from the judgment and decision making literature as they might apply to forensic assessment tasks. The cognitive biases we review can help us explain common problems in interpretation and judgment that confront forensic examiners. This leads us to ask (and attempt to answer) how we might use what we know about bias in forensic clinicians’ judgment to reduce its negative effects. (ERRATUM: Reports an error in "The cognitive underpinnings of bias in forensic mental health evaluations" by Tess M. S. Neal and Thomas Grisso (Psychology, Public Policy, and Law, 2014[May], Vol 20[2], 200-211). This article contained an error that ironically demonstrates the very point of the article: that cognitive biases can easily lead to error—even by people who are highly attuned to and motivated to avoid bias. The authors inadvertently misapplied base rates by failing to account for nested probabilities in the illustration of how Bayesian analysis works in footnote 2 (p. 203). Specific details are provided.)
Article
Full-text available
The 2009 NAS report criticized forensic scientists for making insufficient efforts to reduce their vulnerability to cognitive and contextual bias. Over the past few years, however, the field has begun to take steps to address this issue. There have been major workshops on cognitive bias, and the Organization of Scientific Area Committees (OSAC), as well as the National Commission on Forensic Science, have created committees on Human Factors that are specifically charged with examining this issue. A number of tools and methods for minimizing bias are under consideration. Some of these tools have already been implemented in a few forensic laboratories. In general, these tools are designed to protect and enhance the independence of mind of forensic examiners, particularly those who rely on subjective judgment to make their decisions. Several types of contextual information are of concern, as illustrated in Figure 1. We organize them into a taxonomy of five levels (based on a four-level taxonomy originally suggested by Stoel et al., 2015). The five-level taxonomy differentiates task-irrelevant information that may be conveyed to an analyst by the trace evidence itself (Level 1), the reference samples (Level 2), the case information (Level 3), examiners' base rate expectations that arise from their experience (e.g., when the examiner expects a particular result - Level 4), and organizational and cultural factors (Level 5).
Article
Full-text available
This article describes the origins and contributions of the naturalistic decision making (NDM) research approach. NDM research emerged in the 1980s to study how people make decisions in real-world settings. Method: The findings and methods used by NDM researchers are presented along with their implications. The NDM framework emphasizes the role of experience in enabling people to rapidly categorize situations to make effective decisions. The NDM focus on field settings and its interest in complex conditions provide insights for human factors practitioners about ways to improve performance. The NDM approach has been used to improve performance through revisions of military doctrine, training that is focused on decision requirements, and the development of information technologies to support decision making and related cognitive functions.
Article
Full-text available
If successful safety interventions are to be developed and implemented, the underlying causes of accidents must be accurately identified. Using two different samples and research designs, we investigated the influence of two contextual factors, safety climate and communication, on accident interpretation. The results for both samples indicated that contextual factors significantly influenced accident attributions. Implications for the implementation of change interventions and for organizations trying to learn from negative events are discussed. In 1996, 4,800 employees in the United States died from work-related injuries, and another 3,900,000 sustained injuries causing at least one day of work to be lost. The National Safety Council (1997) estimated that the total cost to the economy of these work-related injuries and deaths was $121 billion. Clearly, work-related accidents are costly to organizations in both human and financial terms and, therefore, it is important for organizational scientists to better understand the causes of these accidents so that effective interventions can be designed and implemented.
Preprint
Following significant intelligence failures, the United States intelligence community adopted Intelligence Community Directive 203 (ICD203) to promote analytic rigor. This study developed two reliable psychometric scales to examine how strongly intelligence professionals (N=108) endorsed the ICD203 facets and the extent to which they believed their organizations complied with those facets. All facets yielded a high level of endorsement and perceived organizational compliance and the endorsement scale revealed three principal components (“unbiased”, “rigorous”, and “relevant”). Facets reflecting intelligence aims (e.g., “be unbiased”) were endorsed more strongly than those reflecting means (e.g., “use visualizations”). As well, organizations’ compliance was judged to fall short of the level of support personally endorsed. ICD203 endorsement was positively related to conscientious and actively open-minded thinking, whereas perceived ICD203 compliance was positively correlated with conscientiousness, job satisfaction and affective and normative commitment. The new scales could be profitably applied in future research on intelligence policy-related issues.
Article
Although the prominence of fact-checking in political journalism has grown dramatically in recent years, empirical investigations regarding the effectiveness of fact-checking in correcting misperceptions have yielded mixed results. One understudied factor that likely influences the success of fact-checking initiatives is the presence of opinion statements in fact-checked messages. Recent work suggests that people may have difficulty differentiating opinion- from fact-based claims, especially when they are congruent with pre-existing beliefs. In three experiments, we investigated the consequences of opinion-based claims to the efficacy of fact-checking in correcting misinformation regarding gun policy. Study 1 (N = 152) demonstrated that fact-checking is less effective when it attempts to correct statements that include both fact- and opinion-based claims. Study 2 (N = 561) replicated and expanded these findings showing that correction is contingent on people’s ability to accurately distinguish facts from opinions. Study 3 (N = 389) illustrated that the observed effects are governed by motivated reasoning rather than actual inability to ascertain fact-based claims. Together these results suggest that distinguishing facts from opinions is a major hurdle to effective fact-checking.
Article
Non-probative but related photos can increase the perceived truth value of statements relative to when no photo is presented (truthiness). In 2 experiments, we tested whether truthiness generalizes to credibility judgements in a forensic context. Participants read short vignettes in which a witness viewed an offence. The vignettes were presented with or without a non-probative, but related, photo. In both experiments, participants gave higher witness credibility ratings to photo-present vignettes compared to photo-absent vignettes. In Experiment 2, half the vignettes included additional non-probative information in the form of text. We replicated the photo presence effect in Experiment 2, but the non-probative text did not significantly alter witness credibility. The results suggest that non-probative photos can increase the perceived credibility of witnesses in legal contexts.
Article
Given the often crucial role of witness evidence in Occupational Health and Safety investigation, statements should be obtained as soon as possible after an incident using best practice methods. The present research systematically tested the efficacy of a novel Self-Administered Witness Interview Tool (SAW-IT), an adapted version of the Self-Administered Interview (SAI©) designed to elicit comprehensive information from witnesses to industrial events. The present study also examined whether completing the SAW-IT mitigated the effect of schematic processing on witness recall. Results indicate that the SAW-IT elicited significantly more correct details, as well as more precise information, than a traditional incident report form. Neither the traditional report form nor the SAW-IT mitigated the biasing effects of contextual information about a worker's safety history, confirming that witnesses should be shielded from extraneous post-event information prior to reporting. Importantly, these results demonstrate that the SAW-IT can enhance the quality of witness reports.
Article
This review will discuss the (perhaps biased) way in which smart oncologists think, biases they can identify, and potential strategies to minimize the impact of bias. It is critical to understand cognitive bias as a significant risk (recognized by the Joint Commission) associated with patient safety, and cognitive bias has been implicated in major radiotherapy incidents. The ways in which we think are reviewed, covering both System 1 and System 2 processes of thinking, as well as behavioral economics concepts (prospect theory, expected utility theory). Predisposing factors to cognitive error are explained, with exploration of the groupings of person factors, patient factors, and system factors that can influence the quality of our decision-making. Other factors found to influence decision making are also discussed (rudeness, repeated decision making, hunger, personal attitudes). The review goes on to discuss cognitive bias in the clinic and in workplace interactions (including recruitment), with practical examples provided of each bias. Finally, the review covers strategies to combat cognitive bias, including summarizing aloud, crowd wisdom, prospective hindsight, and joint evaluation. More definitive ways to mitigate bias are desirable.
Article
Sam Foster, Chief Nurse, Oxford University Hospitals, explains how initiatives, such as the West Midlands cultural ambassador programme, can bring positive changes for black and minority ethnic staff.
Article
Introduction: Investigation tools used in occupational health and safety events need to support evidence-based judgments, especially when employed within biasing contexts, yet these tools are rarely empirically vetted. A common workplace investigation tool, dubbed for this study the "Cause Analysis (CA) Chart," is a checklist on which investigators select substandard actions and conditions that apparently contributed to a workplace event. This research tests whether the CA Chart supports quality investigative judgments. Method: Professional and undergraduate participants engaged in a simulated industrial investigation exercise after receiving a file with information indicating that either a worker had an unsafe history, equipment had an unsafe history, or neither had a history of unsafe behavior (control). Participants then navigated an evidence database and used either the CA Chart or an open-ended form to make judgments about event cause. Results: The use of the CA Chart negatively affected participants' information seeking and judgments. Participants using the CA Chart were less accurate in identifying the causes of the incident and were biased to report that the worker was more causal for the event. Professionals who used the CA Chart explored fewer pieces of evidence than those in the open-ended condition. Moreover, neither the open-ended form nor the structured CA Chart mitigated the biasing effects of historical information about safety on participants' judgments. Conclusion: Use of the CA Chart resulted in judgments about event cause that were less accurate and also biased towards worker responsibility. The CA Chart was not an effective debiasing tool. Practical application: Our results have implications for occupational health and safety given the popular nature of checklist tools like the CA Chart in workplace investigation. This study contributes to the literature stating that we need to be scientific in the development of investigative tools and methods.
Article
The first step in journalistic fact-checking of political discourse is identifying whether statements contain "checkable facts" (i.e., not opinions). This randomized controlled experiment investigated how different demographic factors (age, gender, education, profession, and political affiliation) are associated with the ability to discern whether statements contained checkable or noncheckable facts, as well as what impact training in identifying checkable facts can have on overall outcomes. A total of 3,357 participants identified checkable and noncheckable statements from a fictional political speech extract containing eight statements. Overall, participants were able to correctly identify an average of 69% of statements. Specific demographic factors (being male, young, and university educated), as well as working in professions that commonly analyze data, such as research, were positively associated with increased performance. Participating in a short training session significantly increased participants' performance. Initial political affiliation slightly reduced the ability to assess whether statements made by named politicians contained checkable facts.
Article
Cognitive effort is an essential part of both forensic and clinical decision-making. Errors occur in both fields because the cognitive process is complex and prone to bias. We performed a selective review of full-text English language literature on cognitive bias leading to diagnostic and forensic errors. Earlier work (1970–2000) concentrated on classifying and raising bias awareness. Recently (2000–2016), the emphasis has shifted toward strategies for “debiasing.” While the forensic sciences have focused on the control of misleading contextual cues, clinical debiasing efforts have relied on checklists and hypothetical scenarios. No single generally applicable and effective bias reduction strategy has emerged so far. Generalized attempts at bias elimination have not been particularly successful. It is time to shift focus to the study of errors within specific domains, and how to best communicate uncertainty in order to improve decision making on the part of both the expert and the trier-of-fact.
Article
"If you always do what you've always done," goes an old saying, "you'll always get what you've always got." The same is true of the literature on intelligence analysis. Since its inception more than 60 years ago, academic and professional writing has generated a great deal of useful practitioner case-knowledge but little in the way of scientifically validated research on intelligence practice. A recent review of 5,800 articles encompassing 172,000 pages confirms this point, noting that little emphasis has been placed on scientifically validating analytical practices. This is particularly problematic because the need to improve analysis became evident in the aftermath of the 11 September 2001 (9/11) attacks and the Iraqi weapons of mass destruction (WMD) controversy. In the wake of these events, Congress passed, and President George W. Bush signed, the 2004 Intelligence Reform and Terrorism Prevention Act (IRTPA) as an attempt to improve analytic practice.
Chapter
Psychological research has consistently demonstrated how our perceptions and cognitions are affected by context, motivation, expectation, and experience. Factors extraneous to the content of the information being considered can shape people's perceptions and judgments. In this chapter, we discuss the nature of human cognition and how people's limited capacity for information processing not only is remarkably efficient, but also introduces systematic errors into decision making. The cognitive shortcuts people take when processing information can cause bias, and these shortcuts largely occur outside of conscious awareness. Experts are not immune to such cognitive vulnerabilities, and their lack of awareness of cognitive contamination in their judgments makes the implementation of interventions such as blinding (i.e., limiting exposure to biasing contextual information) a necessary procedure when seeking to minimize bias and optimize decision making.
Article
A ‘just culture’ aims to respond to anxiety about blame-free approaches on the one hand, and a concern about people’s willingness to keep reporting safety-related issues on the other. A just culture sets out the conditions that legitimize managerial intervention in the sanction or restoration of individuals in the organization. In this paper we examine the manifestly important moral and safety issues that a just culture needs to consider. These include substantive justice which prescribes how regulations, rules and procedures themselves are fair and legitimate; procedural justice which sets down processes for determining rule-breaches, offers protections for the accused, and governs who should make such determinations; and restorative justice which aims to restore the status of the individual involved and heal relationships and injuries of victims and the wider community in the wake of an ethical breach.
Article
Unlike many models of bias correction, our flexible correction model posits that corrections occur when judges are motivated and able to adjust assessments of targets according to their naive theories of how the context affects judgments of the target(s). In the current research, people flexibly correct assessments of different targets within the same context according to the differing theories associated with the context-target pairs. In Study 1, shared theories of assimilation and contrast bias are identified. Corrections consistent with those theories are obtained in Studies 2 and 3. Study 4 shows that idiographic measures of theories of bias predict the direction and magnitude of corrections. Implications of this work for corrections of attributions and bias removal in general are discussed.
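As a rough formalization of the model's core claim, here is a toy sketch, entirely my own illustration rather than the authors' published formalism: a judge subtracts the bias their naive theory attributes to the context, but only when both motivation and ability are present.

```python
# A toy sketch of the flexible correction idea. The signed
# `theorized_bias` term captures that naive theories can predict
# assimilation (positive) or contrast (negative) effects of context.
# All names and numbers here are hypothetical illustrations.

def flexible_correction(raw_assessment: float,
                        theorized_bias: float,
                        motivated: bool,
                        able: bool) -> float:
    """Return the (possibly) corrected assessment of a target."""
    if motivated and able:
        return raw_assessment - theorized_bias
    return raw_assessment

# A judge who believes a pleasant context inflated their rating by 1.5
# points corrects downward; an unmotivated judge leaves the rating as-is.
print(flexible_correction(7.0, +1.5, motivated=True, able=True))   # 5.5
print(flexible_correction(7.0, +1.5, motivated=False, able=True))  # 7.0
```

The sketch also makes the model's prediction of overcorrection visible: if the theorized bias exceeds the actual bias, the "corrected" judgment overshoots in the opposite direction.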
Article
Previous research indicates that our initial impressions of events frequently influence how we interpret later information. This experiment explored whether accountability (pressures to justify one's impressions to others) leads people to process information more vigilantly and, as a result, reduces the undue influence of early-formed impressions on final judgments. Subjects viewed evidence from a criminal case and then assessed the guilt of the defendant. The study varied (1) the order of presentation of pro- vs. anti-defendant information, and (2) whether subjects expected to justify their decisions and, if so, whether subjects realized that they were accountable prior to or only after viewing the evidence. The results indicated that subjects given the anti/pro-defendant order of information were more likely to perceive the defendant as guilty than subjects given the pro/anti-defendant order of information, but only when subjects did not expect to justify their decisions or expected to justify their decisions only after viewing the evidence. Order of presentation of evidence had no impact when subjects expected to justify their decisions before viewing the evidence. Accountability prior to viewing the evidence also substantially improved free recall of the case material. The results suggest that accountability reduces primacy effects by affecting how people initially encode and process stimulus information.
Article
Facial features influence social evaluations. For example, faces are rated as more attractive and trustworthy when they have more smiling features and also more female features. However, the influence of facial features on evaluations should be qualified by the affective consequences of fluency (cognitive ease) with which such features are processed. Further, fluency (along with its affective consequences) should depend on whether the current task highlights conflict between specific features. Four experiments are presented. In 3 experiments, participants saw faces varying in expressions ranging from pure anger, through mixed expression, to pure happiness. Perceivers first categorized faces either on a control dimension or an emotional dimension (angry/happy). Thus, the emotional categorization task made "pure" expressions fluent and "mixed" expressions disfluent. Next, participants made social evaluations. Results show that after emotional categorization, but not control categorization, targets with mixed expressions are relatively devalued. Further, this effect is mediated by categorization disfluency. Additional data from facial electromyography reveal that on a basic physiological level, affective devaluation of mixed expressions is driven by their objective ambiguity. The fourth experiment shows that the relative devaluation of mixed faces that vary in gender ambiguity requires a gender categorization task. Overall, these studies highlight that the impact of facial features on evaluation is qualified by their fluency, and that the fluency of features is a function of the current task. The discussion highlights the implications of these findings for research on emotional reactions to ambiguity.
Article
Empirical research with judges and jurors has provided research into the process by which legal decision-makers come to a view about the facts of the case. However, much remains uncertain, including questions about how judges' reasoning processes might differ from jurors' when thinking through the facts of a case, and how well the insights of decision-making research translate into the noisy context of real criminal trials. This article offers a preliminary exploration of connections between Pennington and Hastie's story model of decision-making, heuristics and biases research, and areas of fact determination that have presented persistent difficulties to criminal courts, including sexual assault, child homicide and the assessment of expert testimony. I discuss some of the key insights that cognitive psychology can offer to those who are interested in understanding how decision-makers think about the facts of a case, and where decision-makers may be prone to error.
Article
Causal explanations determine how industry acts to prevent accident recurrence, yet little is known about industrial accident cause-finding practices. Sixteen practicing safety specialists performed three common tasks designed to elicit schemas: walkthrough accounts of the subjects' own experiences, non-interactive exercises, and interactive simulated investigations. Analysis examined the relation of individual differences in knowledge sources and organizational factors to the causal concepts retrieved and the process of reasoning. Particular interest was taken in influences on the retrieval and use of worker-related causal factors and management- or design-related causal factors. No evidence was observed of overattribution to worker-centered explanations, nor did the investigation approaches resemble attribution judgements. The predominance of management/design causal explanations does reflect current explanatory fashions, although the self-serving bias is the one that best describes the observed effects. Status, and to a lesser degree operations values and operations pressures, influenced the proportion of factors retrieved as well as the class of factors used for starting and concluding investigations.