The performance of delta check methods.

Clinical Chemistry (Impact Factor: 7.77). 01/1980; 25(12):2034-7.
Source: PubMed

ABSTRACT The percentage of mislabeled specimens detected (true-positive rate) and the percentage of correctly labeled specimens misidentified (false-positive rate) were computed for three previously proposed delta check methods and two linear discriminant functions. The true-positive rate was computed from a set of pairs of specimens, each having one member replaced by a member from another pair chosen at random. The relationship between true-positive and false-positive rates was similar among the delta check methods tested, indicating equal performance for all of them over the range of false-positive rates of interest. At a practical false-positive operating rate of about 5%, delta check methods detect only about 50% of mislabeled specimens; even if the actual mislabeling rate is moderate (e.g., 1%), only about 10% of specimens flagged by a delta check will actually have been mislabeled.
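The abstract's final figure follows from Bayes' rule: at 1% mislabeling prevalence, ~50% sensitivity, and a 5% false-positive rate, the positive predictive value of a delta check flag works out to roughly 10%. A minimal check of that arithmetic:

```python
# Verifying the abstract's PPV estimate with Bayes' rule.
prevalence = 0.01   # fraction of specimens actually mislabeled
sensitivity = 0.50  # true-positive rate of the delta check
fpr = 0.05          # false-positive rate of the delta check

# PPV = P(mislabeled | flagged)
ppv = (sensitivity * prevalence) / (
    sensitivity * prevalence + fpr * (1 - prevalence)
)
print(f"PPV = {ppv:.1%}")  # ~9.2%, i.e., "about 10%"
```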

  •
    ABSTRACT: Despite their widespread use, the performance of delta check rules is rarely evaluated because errors are rare and there is no gold standard for their detection. In this study we used a simulation-based approach to compare strategies for empirically defining criteria for univariate delta checks, and assessed the performance of these rules for detecting mislabeled specimens in two inpatient populations. We performed simulations using historical laboratory test results by randomly sampling pairs of specimens successively drawn from the same patient or from two different patients. We evaluated the performance of delta check rules using a variety of thresholds, including those currently in use in our laboratory. Mean corpuscular volume had the highest positive predictive value for specimen mislabeling, and produced the fewest false positives. Conversely, rules using other laboratory tests had considerably poorer performance. Several of the "best guess" thresholds historically used in our laboratory, notably those for potassium and anion gap, were predicted to have extremely low yields. In addition, rule performance was not consistent between the two patient populations. The low yield of delta checks based on any single analyte should prompt careful evaluation of their practical utility. Furthermore, our results indicate that it may not be possible to generalize delta rules across institutions.
    Clinica chimica acta; international journal of clinical chemistry 10/2011; 412(21-22):1973-7. DOI:10.1016/j.cca.2011.07.007 · 2.76 Impact Factor
  •
    ABSTRACT: To evaluate the performance of delta check techniques, we analyzed 707 unselected pairs of continuous-flow test results, using three different delta check methods. If any of the test results (plus the urea nitrogen/creatinine ratio and the anion gap) failed one of the checks, the reason for the failure was sought by examining subsequent test results, retesting specimens, and (or) reviewing the patient's chart. Each delta check failure was accordingly classified as a true or false positive. The percentage of positives we judged to be true positives ranged from 5 to 29%. Each of the three methods had test types with low and high percentages of true positives. We conclude that with the delta check methods one can detect errors otherwise overlooked, but at the cost of investigating many false positives, because, in the population we studied, disease processes or therapy often caused large changes in a series of test results for a patient.
    Clinical Chemistry 02/1981; 27(1):5-9. · 7.77 Impact Factor
  •
    ABSTRACT: We describe an array of checking routines that can aid laboratory staff in detecting possible errors in results being entered into a laboratory computer. A single result is checked at the worksheet stage for its numeric format and for its numeric status by comparison with appropriate reference ranges, and action and hazard limits. If a previous result exists, then the present result can be compared with it (delta checking). Laboratory documentation of abnormal results is important, and we describe three types: worksheets, exception reports, and histograms of results. The clinical reports leaving the laboratory should flag abnormal results so that the clinician is prompted to examine, and question if necessary, results which are extreme or have changed significantly since the last analysis. Effective error detection requires cooperation between the laboratory and clinical units.
    Computer Methods and Programs in Biomedicine 06/1985; 20(1):103-16. DOI:10.1016/0169-2607(85)90050-1 · 1.90 Impact Factor
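The simulation strategy described in the abstracts above — estimating the false-positive rate from genuine same-patient pairs, and the true-positive rate from pairs whose second member is swapped with one from a randomly chosen other patient — can be sketched roughly as follows. The data, analyte, and threshold here are purely illustrative and do not come from any of the studies.

```python
import random

def delta_flag(prev, curr, threshold):
    """Univariate delta check: flag if the absolute change exceeds threshold."""
    return abs(curr - prev) > threshold

def simulate_rates(patient_pairs, threshold, seed=0):
    """Estimate (false-positive rate, true-positive rate) for a delta check.

    patient_pairs: list of (previous, current) results, one pair per patient.
    The false-positive rate is the flag rate on genuine pairs; the
    true-positive rate is the flag rate after replacing each pair's second
    member with the current result of a randomly chosen other patient,
    simulating a mislabeled specimen.
    """
    rng = random.Random(seed)
    n = len(patient_pairs)
    fp = sum(delta_flag(p, c, threshold) for p, c in patient_pairs)
    currents = [c for _, c in patient_pairs]
    tp = 0
    for i, (p, _) in enumerate(patient_pairs):
        j = rng.choice([k for k in range(n) if k != i])
        tp += delta_flag(p, currents[j], threshold)
    return fp / n, tp / n

# Illustrative data: (previous, current) potassium-like values, mmol/L.
pairs = [(4.1, 4.2), (3.6, 3.8), (5.2, 5.0), (4.4, 4.5),
         (3.9, 3.7), (4.8, 4.9), (4.0, 4.1), (3.5, 3.6)]
fpr, tpr = simulate_rates(pairs, threshold=0.5)
```

Tuning the threshold trades the two rates against each other, which is exactly the operating-curve comparison the 1980 study performed across methods.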
