The international consensus group for hematology review: suggested criteria for action following automated CBC and WBC differential analysis.

Clinical Hematology, Department of Laboratories, Barnes-Jewish Hospital, St. Louis, Missouri, USA.
Laboratory Hematology 02/2005; 11(2):83-90. DOI: 10.1532/LH96.05019
Source: PubMed

ABSTRACT In the half century since the first use of automated analyzers, manual techniques, especially microscopic examination of a stained blood film, have complemented analyzer results to provide a comprehensive hematology report on a patient's blood sample. Over the years, as the capabilities and performance of automated analyzers have improved, the respective roles of the automated analyzer and the complementary procedures have changed. Manual action (most commonly smear review) following automated analysis is usually triggered when the results meet one of a series of criteria for review. There is little uniformity among laboratories in these criteria. Recognizing the long-standing need for generally accepted guidelines ("rules") for the review of CBC and differential results from automated hematology analyzers, Dr. Berend Houwen invited 20 experts to a meeting in the spring of 2002 to discuss the issues and determine the most appropriate criteria. At this meeting, 83 rules were developed by consensus. These rules were then tested in 15 laboratories on a total of 13,298 blood samples. After a detailed analysis of the data, the rules were refined and consolidated into the 41 rules presented here. They include rules for first-time samples as well as delta rules for repeat samples from the same patient within 72 hours. It is hoped that these rules will be useful to hematology laboratories worldwide. To facilitate validation of these rules in individual laboratories before routine implementation for patient samples, a suggested protocol is attached to this paper.
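The delta rules mentioned above compare a repeat sample against a prior result from the same patient within a 72-hour window. A minimal sketch of that pattern, with the analyte, field names, and the 5 fL threshold chosen purely for illustration (they are not taken from the published rule set):

```python
from datetime import datetime, timedelta

# Illustrative delta check: a repeat sample triggers smear review when the
# change from the prior result exceeds a threshold and the prior result
# falls within the 72-hour window used by the consensus delta rules.
DELTA_WINDOW = timedelta(hours=72)

def delta_flag(current, previous, current_time, previous_time, threshold):
    """Return True if the repeat sample should trigger review.

    `threshold` is the maximum allowed absolute change; the analyte and
    limit used below are hypothetical examples, not the published rules.
    """
    if current_time - previous_time > DELTA_WINDOW:
        return False  # prior result too old: handle as a first-time sample
    return abs(current - previous) > threshold

# Example: MCV changing by more than 5 fL within 24 h (illustrative limit)
now = datetime(2005, 2, 1, 9, 0)
earlier = datetime(2005, 1, 31, 9, 0)
print(delta_flag(98.0, 90.0, now, earlier, threshold=5.0))  # True
```

In practice each analyte would carry its own delta limit, and a result outside the 72-hour window would fall back to the first-time-sample rules.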

  • Revista Brasileira de Hematologia e Hemoterapia 11/2014; 37(1). DOI:10.1016/j.bjhh.2014.11.005
  • ABSTRACT: The usefulness of the CytoDiff flow cytometric system (Beckman Coulter, USA) has been studied in various conditions, but its performance, including rapidity in detecting and counting blasts, the most significant abnormal cells in the peripheral blood, has not been well evaluated. The objective of this study was to evaluate the performance of the CytoDiff differential counting method in challenging samples with blasts. In total, 815 blood samples were analyzed. Samples flagged as "blasts" or "variant lymphocytes" and showing <10% blasts by manual counts were included. In total, 322 samples showed blasts on manual counts, ranging from 0.5% to 99%. The CytoDiff method was performed by flow cytometry (FC500; Beckman Coulter, USA) with a pre-mixed CytoDiff reagent and analyzing software (CytoDiff CXP 2.0; Beckman Coulter). The average time required to analyze 20 samples was approximately 60 min for manual counts, and the hands-on time for the CytoDiff method was 15 min. The correlation between the CytoDiff and manual counts was good (r>0.8) for neutrophils and lymphocytes but poor (r<0.8) for other cells. When the cutoff value of the CytoDiff blast count was set at 1%, the sensitivity was 94.4% (95% CI, 91.2-96.6) and specificity was 91.9% (95% CI, 89.0-94.1). The positive predictive value was 88.4% (95% CI, 84.4-91.5) (304/344 cases) and the negative predictive value was 96.2% (95% CI, 93.9-97.7) (453/471 cases). The CytoDiff blast counts correlated well to the manual counts (r=0.9223). The CytoDiff method is a specific, sensitive, and rapid method for counting blasts. A cutoff value of 1% of at least 1 type of blast is recommended for positive CytoDiff blast counts.
    Annals of Laboratory Medicine 01/2015; 35(1):28-34. DOI:10.3343/alm.2015.35.1.28
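The diagnostic figures quoted in the abstract above can be rechecked from the raw counts it reports (322 of 815 samples blast-positive on manual counts, 304/344 for PPV, 453/471 for NPV). A quick recomputation, assuming only the standard 2×2 definitions:

```python
# Recompute the reported performance figures from counts in the abstract.
total = 815          # all samples analyzed
blast_pos = 322      # samples with blasts on manual counts
flagged_pos = 344    # CytoDiff-positive samples (1% cutoff)
flagged_neg = 471    # CytoDiff-negative samples
tp = 304             # CytoDiff-positive samples confirmed by manual count
tn = 453             # CytoDiff-negative samples without blasts

blast_neg = total - blast_pos   # samples without blasts on manual counts

sensitivity = tp / blast_pos    # true positives among all blast samples
specificity = tn / blast_neg    # true negatives among all blast-free samples
ppv = tp / flagged_pos          # positive predictive value
npv = tn / flagged_neg          # negative predictive value

print(f"sens {sensitivity:.1%}  spec {specificity:.1%}  "
      f"PPV {ppv:.1%}  NPV {npv:.1%}")
# sens 94.4%  spec 91.9%  PPV 88.4%  NPV 96.2%
```

All four values reproduce the abstract's figures exactly, confirming the 2×2 table is internally consistent (344 + 471 = 815).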
  • ABSTRACT: Objectives: The Sysmex XE-5000 instruments (Sysmex, Kobe, Japan) count immature granulocytes (IGs) and use the "Imm Gran?" flag to signal unreliable results. This study investigated the usefulness of the "Imm Gran?" flag and the analytical and diagnostic performance of the IG measurements in a side-by-side evaluation. Methods: In total, 408 samples were analyzed on three XE-5000 instruments. The IG count and the "Imm Gran?" flag reports from all three instruments were used for reproducibility studies. The diagnostic performance of the automated IGs and the "Imm Gran?" flag was studied by comparing the XE-5000 results with the results of the manual differential. Results: The reproducibility of the "Imm Gran?" flagging between instruments was poor (κ, 0.75-0.80). The most significant contributor to the report of the "Imm Gran?" flag was bands, and the flag played a minor role in detecting blasts. The interinstrument reproducibility of the IG counts was high (intraclass correlation, 0.99). The IG count reported by the XE-5000s was higher than the manual IG count (36%-550%), and the difference and the variability tended to increase with increasing levels of IGs. Conclusions: The "Imm Gran?" flag has poor analytical quality and gives no substantial information on the presence of blasts in the sample. We therefore suggest reporting the automated IG count without initial microscopic slide review.
    American Journal of Clinical Pathology 10/2014; 142(4):553-60. DOI:10.1309/AJCP4V4EXYFFOELL
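Agreement of a binary flag between instruments, as in the flag-reproducibility analysis above, is commonly quantified with Cohen's kappa, which discounts agreement expected by chance. A minimal sketch on invented flag vectors (none of this is the study's data):

```python
# Cohen's kappa for agreement of a binary flag between two instruments.
# The flag vectors below are made-up illustrative data, not study results.
def cohens_kappa(a, b):
    n = len(a)
    p_obs = sum(x == y for x, y in zip(a, b)) / n   # observed agreement
    pa = sum(a) / n                                 # flag rate, instrument A
    pb = sum(b) / n                                 # flag rate, instrument B
    p_exp = pa * pb + (1 - pa) * (1 - pb)           # chance agreement
    return (p_obs - p_exp) / (1 - p_exp)

flags_a = [1, 1, 0, 0, 1, 0, 0, 0]  # flag raised (1) or not (0), instrument A
flags_b = [1, 0, 0, 0, 1, 0, 0, 1]  # same samples, instrument B
print(round(cohens_kappa(flags_a, flags_b), 3))  # 0.467
```

For the continuous IG counts, by contrast, the study used the intraclass correlation coefficient (0.99), which is the appropriate agreement measure for numeric rather than categorical results.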

