Patient safety in the clinical laboratory: a longitudinal analysis of specimen identification errors

University of California, Los Angeles, Clinical Laboratories, Department of Pathology and Laboratory Medicine, David Geffen School of Medicine at UCLA, Box 951732, AL-206 CHS 10833 Le Conte Ave, Los Angeles, CA 90095-1732, USA.
Archives of Pathology & Laboratory Medicine (Impact Factor: 2.84). 11/2006; 130(11):1662-8. DOI: 10.1043/1543-2165(2006)130[1662:PSITCL]2.0.CO;2
Source: PubMed


Context: Patient safety is an increasingly visible and important mission for clinical laboratories. Accreditation and regulatory organizations are paying growing attention to processes related to patient identification and specimen labeling, because errors in these areas jeopardize patient safety yet are common and avoidable through improvement of the total testing process.
Objective: To assess improvement in patient identification and specimen labeling after multiple implementation projects, using longitudinal statistical tools.
Design: Specimen errors were categorized by a multidisciplinary health care team. Patient identification errors were grouped into 3 categories: (1) specimen/requisition mismatch, (2) unlabeled specimens, and (3) mislabeled specimens. Specimens with these types of identification errors were compared before and after implementation of 3 patient safety projects: (1) reorganization of phlebotomy (4 months); (2) introduction of an electronic event reporting system (10 months); and (3) activation of an automated processing system (14 months), over a 24-month period, using trend analysis and Student t test statistics.
Results: Of 16,632 total specimen errors, mislabeled specimens, requisition mismatches, and unlabeled specimens represented 1.0%, 6.3%, and 4.6% of errors, respectively. The Student t test showed a significant decrease in the most serious error type, mislabeled specimens (P < .001), after implementation of the 3 patient safety projects compared with before. Trend analysis demonstrated decreases in all 3 error types over 26 months.
Conclusions: Applying performance-improvement strategies that focus longitudinally on specimen labeling errors can significantly reduce errors, thereby improving patient safety. This is an important area in which laboratory professionals, working in interdisciplinary teams, can improve safety and outcomes of care.
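As a minimal illustration of the design described above (a sketch with entirely hypothetical monthly counts, not the study's data or code), the following Python snippet compares pre- and post-implementation monthly mislabeled-specimen rates with a Student t test and fits a linear trend over the monitoring period:

    import numpy as np
    from scipy import stats

    # Hypothetical monthly mislabeled-specimen rates per 10,000 specimens,
    # before and after a patient safety intervention (illustrative values only).
    pre = np.array([12.1, 11.4, 13.0, 12.6, 11.9, 12.8])
    post = np.array([9.8, 9.1, 8.7, 8.2, 7.9, 7.4, 7.0, 6.8])

    # Student t test: did the mean monthly error rate fall after implementation?
    t_stat, p_value = stats.ttest_ind(pre, post)
    print(f"t = {t_stat:.2f}, P = {p_value:.4f}")

    # Trend analysis: fit a line to the full monthly series to check for a
    # sustained downward trend rather than a one-off dip.
    months = np.arange(len(pre) + len(post))
    trend = stats.linregress(months, np.concatenate([pre, post]))
    print(f"slope = {trend.slope:.2f} per month, P = {trend.pvalue:.4f}")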

    • "For laboratory specimen misidentification, a rate of 1 in 1,000 opportunities has been reported, the most common categories of misidentification events being mislabeled (1%), mismatched (6.3%), and unlabeled specimens (4.6%), respectively (15). In general, misidentification occurs in 1 in 2,000 of specimens in transfusion medicine, while it occurs at a much higher rate (approximately 1 in 100) in clinical laboratory specimens. "
    ABSTRACT: Quality indicators (QIs) measure the extent to which set targets are attained and provide a quantitative basis for achieving improvement in care and, in particular, laboratory services. A body of evidence collected in recent years has demonstrated that most errors fall outside the analytical phase, while the pre- and post-analytical steps have been found to be more vulnerable to the risk of error. However, the current lack of attention to extra-laboratory factors and related QIs prevents clinical laboratories from effectively improving total quality and reducing errors. Errors in the pre-analytical phase, which account for 50% to 75% of all laboratory errors, have long been included in the 'identification and sample problems' category. However, according to the International Standard for medical laboratory accreditation and a patient-centered view, some additional QIs are needed. In particular, there is a need to measure the appropriateness of all test requests and request forms, as well as the quality of sample transportation. The QIs model developed by a working group of the International Federation of Clinical Chemistry and Laboratory Medicine (IFCC) is a valuable starting point for promoting the harmonization of available QIs, but further efforts should be made to achieve a consensus on the road map for harmonization.
    Biochemia Medica 02/2014; 24(1):105-113. DOI:10.11613/BM.2014.012 · 2.67 Impact Factor
    • "Positive and negative predictive values are contingent upon the prevalence of the condition in the population (here the rate of specimen mix-ups). We do not know the true rate of mix-up errors in modern laboratories but the rate is thought to be less than 1 in 1000.[910] Table 2 shows that as the rate of mix-ups drops below 1 in 1000 the associated positive predictive value of even the best performing delta check becomes negligible. "
    ABSTRACT: Delta checks use two specimen test results taken in succession in order to detect test result changes greater than expected physiological variation. One of the most common and serious errors detected by delta checks is specimen mix-up errors. The positive and negative predictive values of delta checks for detecting specimen mix-up errors, however, are largely unknown. We addressed this question by first constructing a stochastic dynamic model using repeat test values for five analytes from approximately 8000 inpatients in Calgary, Alberta, Canada. The analytes examined were sodium, potassium, chloride, bicarbonate, and creatinine. The model simulated specimen mix-up errors by randomly switching a set number of pairs of second test results. Sensitivities and specificities were then calculated for each analyte for six combinations of delta check equations and cut-off values from the published literature. Delta check specificities obtained from this model ranged from 50% to 99%; however, the sensitivities were generally below 20%, with the exception of creatinine, for which the best performing delta check had a sensitivity of 82.8%. Within a plausible incidence range of specimen mix-ups, the positive predictive value of even the best performing delta check equation and analyte became negligible. This finding casts doubt on the ongoing clinical utility of delta checks in the setting of low rates of specimen mix-ups.
    Journal of Pathology Informatics 02/2012; 3:5. DOI:10.4103/2153-3539.93402
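The dependence of predictive value on the mix-up rate described in this entry can be made concrete with Bayes' rule. The Python sketch below is only an illustration (the delta check form and 0.5 cutoff are arbitrary assumptions, not the cited study's method); it reuses the sensitivity and specificity figures quoted above and computes the positive predictive value of a flag at several assumed mix-up rates:

    # Simple percent-change delta check: flag a result if it changed by more
    # than `cutoff` (as a fraction of the previous value) since the prior specimen.
    def delta_flag(previous: float, current: float, cutoff: float = 0.5) -> bool:
        return abs(current - previous) / abs(previous) > cutoff

    # Positive predictive value via Bayes' rule:
    #   PPV = sens * prev / (sens * prev + (1 - spec) * (1 - prev))
    def ppv(sensitivity: float, specificity: float, prevalence: float) -> float:
        true_pos = sensitivity * prevalence
        false_pos = (1.0 - specificity) * (1.0 - prevalence)
        return true_pos / (true_pos + false_pos)

    # Best-performing delta check reported above (creatinine, 82.8% sensitive)
    # paired with a specificity at the top of the reported 50%-99% range.
    for mixup_rate in (1 / 100, 1 / 1000, 1 / 2000):
        print(f"mix-up rate {mixup_rate:.4f}: PPV = {ppv(0.828, 0.99, mixup_rate):.3f}")

Even with 99% specificity, the PPV of a flag is already below 10% at a mix-up rate of 1 in 1,000, which is the behavior the abstract describes.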
    • "Proper patient identification is essential to reducing errors and improving patient safety. The Joint Commission on Accreditation of Healthcare Organizations (JCAHO) recognizes this and has included " Improve the accuracy of patient identification " as one of its " National Patient Safety Goals " [2]. Patient identification and other laboratory errors have received increased attention in the research literature both inside [3] and outside [4] [5] the United States. "
    ABSTRACT: In an effort to address the problem of laboratory errors, we develop and evaluate a method to detect mismatched specimens from nationally collected blood laboratory data in two experiments. In Experiments 1 and 2, using blood laboratory values from the National Health and Nutrition Examination Survey (NHANES) and values derived from the Diabetes Prevention Program (DPP), respectively, a proportion of glucose and HbA1c specimens were randomly mismatched. A Bayesian network that encoded probabilistic relationships among analytes was used to predict mismatches. In Experiment 1 the performance of the network was compared against existing error detection software. In Experiment 2 the network was compared against 11 human experts recruited from the American Academy of Clinical Chemists. Results were compared via area under the receiver-operator characteristic curves (AUCs) and with agreement statistics. In Experiment 1 the network was most predictive of mismatches that produced clinically significant discrepancies between true and mismatched scores (AUC of 0.87 ± 0.04 for HbA1c and 0.83 ± 0.02 for glucose), performed well in identifying errors among those self-reporting diabetes (N = 329; AUC = 0.79 ± 0.02), and performed significantly better than the established approach it was tested against (in all cases p < 0.05). In Experiment 2 it performed better (and in no case worse) than 7 of the 11 human experts. Average percent agreement between the experts and the Bayesian network was 0.79, and kappa (κ) was 0.59. A Bayesian network can accurately identify mismatched specimens. The algorithm is best at identifying mismatches that result in a clinically significant magnitude of error.
    Artificial intelligence in medicine 10/2010; 50(2):75-82. DOI:10.1016/j.artmed.2010.05.008 · 2.02 Impact Factor
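As a rough illustration of the general idea in this entry, exploiting probabilistic relationships among analytes to flag implausible pairs (this toy sketch is not the cited paper's Bayesian network; the glucose/HbA1c relationship, noise level, and data below are made up), one can fit a simple linear-Gaussian model of HbA1c given glucose on presumed-correct pairs and score how improbable a new pair is:

    import numpy as np
    from scipy import stats

    # Hypothetical training pairs (glucose mg/dL, HbA1c %) assumed to be
    # correctly matched; the relationship and noise level are illustrative.
    rng = np.random.default_rng(0)
    glucose = rng.normal(110, 30, size=5000)
    hba1c = 2.5 + 0.03 * glucose + rng.normal(0, 0.5, size=5000)

    # Simplest two-node model: HbA1c depends linearly on glucose with
    # Gaussian residual noise, i.e. an estimate of P(HbA1c | glucose).
    fit = stats.linregress(glucose, hba1c)
    residuals = hba1c - (fit.intercept + fit.slope * glucose)
    resid_sd = residuals.std(ddof=2)

    def mismatch_score(glu: float, a1c: float) -> float:
        """Absolute z-score of HbA1c given glucose; large values suggest
        the two results may not belong to the same patient."""
        expected = fit.intercept + fit.slope * glu
        return abs(a1c - expected) / resid_sd

    print(mismatch_score(250, 10.1))  # consistent pair: small score
    print(mismatch_score(250, 5.0))   # discordant pair: large score

A real system, as the abstract describes, would model many analytes jointly and calibrate the flagging threshold against an acceptable false-positive rate.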