In the half century since the first use of automated analyz-
ers, manual techniques, especially microscopic examination of
a stained blood film, have complemented analyzer results to
provide a comprehensive hematology report on a patient’s
blood sample. Over the years, as the capabilities and perfor-
mance of automated analyzers have improved, the respective
roles of the automated analyzer and the complementary procedures have changed. Manual action (most commonly smear review) following automated analysis is usually prompted when results meet one of a series of review criteria, and there is little uniformity among laboratories in the criteria used. Recognizing the long-standing need for generally accepted guidelines ("rules") that could be applied to the review of CBC and differential results from automated hematology analyzers, Dr. Berend Houwen invited 20 experts to a meeting in the spring of 2002 to discuss the issues and determine the most appropriate criteria. At this meeting, 83 rules were developed by consensus. These rules were then tested in 15 laboratories on a total of 13,298 blood samples. After detailed analysis of the data, the rules were refined and consolidated into the 41 rules presented here. They
include rules for first-time samples as well as delta rules for
repeat samples within 72 hours from a patient. It is hoped
that these rules will be useful to a large number of hematol-
ogy laboratories worldwide. To facilitate validating these rules
in individual laboratories before implementation in routine
operation for patient samples, a suggested protocol is
attached to this paper. Lab Hematol. 2005;11:83-90.
KEY WORDS: Blood cell · Consensus · Delta ·
Human blood cell analysis dates back 330 years to Leeuwenhoek, who provided the first description of red blood cells using his simple microscope, consisting of a minute biconvex lens; he was even able to measure their diameters [1].
The development of compound microscopes enabled white
cells, red cells, and then platelets to be described in detail and
methods to be developed for counting them in prescribed volumes of blood. The world of cells was colorless until Paul Ehrlich stained blood cells with aniline dyes and was able to distinguish nuclei, cytoplasm, and fine cellular detail [2]. He
differentiated the various types of white cells into the five main
types, still used for classification today. During the first half of
the twentieth century, manual methods for counting white
cells, red cells, and platelets in volume-calibrated chambers,
together with manual hemoglobin and PCV measurements,
were widely used clinically. They were almost invariably
accompanied by microscopic examination of a stained blood
film for white blood cell differentiation as well as evaluation of
red cells and platelets for size and morphology.
Wallace Coulter developed the first automated analyzer for counting and sizing cells and presented it in 1956 [3].
This was a single-channel instrument that revolutionized the tedious and imprecise manual chamber counting methods.

The International Consensus Group for Hematology Review: Suggested Criteria for Action Following Automated CBC and WBC Differential Analysis
P. W. BARNES,1 S. L. MCFADDEN,2 S. J. MACHIN,3 E. SIMSON4
1Clinical Hematology, Department of Laboratories, Barnes-Jewish Hospital, St. Louis, Missouri, USA; 2McFadden Laboratory Consulting, Columbus, Ohio, USA; 3Department of Haematology, University College London Hospital, London, UK; 4Center for Clinical Laboratories, Department of Pathology, The Mount Sinai Medical Center, New York, NY, USA
Received April 5, 2005; accepted April 6, 2005
Laboratory Hematology 11:83-90
© 2005 Carden Jennings Publishing Co., Ltd.
Correspondence and reprint requests: Stefanie L. McFadden, MT (ASCP) SH, Laboratory Consultant, 104 S. Westmoor Ave., Columbus, OH 43204 USA (e-mail: firstname.lastname@example.org).
Multiparameter CBC analyzers were developed during the
1960s. During the 1970s, two competing technologies were
developed for the WBC differential count. The first was
image analysis technology, which attempted to mimic
human microscopic examination. But because of its many
limitations, it did not endure. The second technology
employed cytochemical techniques for cell identification in an automated flow differential counter [4]. WBC differential flow technology using other methods of cell identification was subsequently developed. During the 1980s, CBC and flow WBC differential methods were combined into a single platform to produce the modern, widely used multiparameter flow CBC and differential analyzers now available from a number of manufacturers. Flagging capabilities have also been incorporated into these analyzers to alert the operator to abnormal cell morphology, to the presence of abnormal cells that the analyzer is unable to count, and to certain sample characteristics, such as platelet clumps, that may cause incorrect results.
In the half century since the first use of automated ana-
lyzers, manual techniques, especially microscopic examina-
tion of a stained blood film, have complemented analyzer
results to provide a comprehensive hematology report on a
patient’s blood sample. Over the years, as the capabilities
and performance of automated analyzers have improved, the
respective roles of the automated analyzer and the comple-
mentary procedures have changed. It is recognized that the
automated systems are superior for counting of WBC, RBC
and platelets and for differential counting of WBC for well-
characterized (mature) cell types, whereas visual microscopy
is superior for differentiating cells based on nuances of cyto-
logical features, especially for immature cells. For many sam-
ples, it is no longer necessary to perform either a microscopic
smear review, or a manual differential count. Recent pres-
sures for cost-containment as well as shortages of trained,
skilled personnel have increased the need to reduce the num-
ber of manual procedures without sacrificing the quality of
results. As the main complementary procedure is micro-
scopic smear review, the decision as to whether smear review
is necessary for each sample plays a major role in hematology
laboratory costs, productivity, and speed of reporting.
Humans review smears to provide information additional to
or missing from analyzer results, or to confirm results pro-
duced by the analyzer. Microscopy is most commonly trig-
gered by criteria for visual smear review applied to analyzer
results. Each laboratory has developed its own criteria for
action after an automated analysis of a blood sample. The
objective is to reduce the number of samples requiring action
(most commonly smear review) to the greatest possible
extent, while not endangering the patient by reporting false
or misleading results, especially false negative (FN) results.
However, there is little opportunity for an individual labora-
tory to know whether the guidelines it follows are optimally
effective for minimizing false negatives and false positives (FP).
Informal exchange of information revealed that the review
rates varied from 5% to 95% among different laboratories.
This could not be explained completely by differences in
patient population or analyzer performance.
Dr. Berend Houwen recognized the long-standing need
for generally accepted guidelines (“rules”) that could be
applied to criteria for review of CBC and differential
results from automated hematology analyzers. He invited
20 experts to a meeting in the Spring of 2002 to discuss
the issues and arrive at a consensus on the most appropriate criteria. He gathered hematology laboratorians representing 6 countries and 17 laboratories at the forefront of the use of review criteria. The laboratories included those serving tertiary care hospitals, oncology hospitals, community hospitals, children's hospitals, and doctors' offices (see Appendix 1). For almost 3 full
days, each parameter of the CBC was discussed in depth
and consensus was reached regarding rules for situations
which should trigger a review of automated cell counter
results, potentially leading to further testing or blood
smear review. Representatives from 15 laboratories agreed
to test the consensus rules in their laboratories by follow-
ing a detailed protocol that was developed subsequent to
that meeting. The outcome of the development and test-
ing of the consensus rules, together with their suggested
implementation in clinical hematology laboratories, is
presented in this paper.
METHODS AND MATERIALS
In order to test the validity of these rules, each laboratory
was asked to test 1000 samples. The samples were to be
selected at random from the daily workload over a period of
several days and be representative of the institution’s normal
patient population. Of the 1000 samples, 800 were to be first-time samples, to allow testing of the rules relating to first-time observations; the remaining 200 were to be repeat samples, to test the delta rules. The laboratories ran the samples on their hematology analyzers in
accordance with their standard operating procedures for
analysis of patient samples. In addition, they prepared
stained blood films on all samples, whether these would have
been required or not according to their standard operating
procedures. A manual differential with a smear review was to
be performed on all patient samples in the study. If manual
differentials with smear review had been performed in the
course of routine testing, these results could be used.
All data were submitted to the steering committee for
analysis. The data provided by each participant in a spread-
sheet included for each sample: sample identification; a list-
ing of each of the consensus rules which had been triggered;
smear findings; instrument flags; all CBC numeric values;
whether the lab’s own rules would trigger a smear review;
sample deltas; and comments on significant findings. The
data were then carefully analyzed and reviewed by members
of the steering committee.
After the steering committee’s initial evaluation of the data,
it was clear that not all laboratories were using exactly the same
criteria to define an abnormal (positive) smear result. A single
set of criteria was established by the members of the steering
committee, and then reapplied to all the raw data. Table 1
contains the criteria used by this consensus group to define a
true positive (TP) smear finding based on medical relevance.
Each laboratory's data were analyzed separately, and then the data from all laboratories were combined. The steering committee reviewed the data, comparing the rules that were triggered with the findings on the peripheral blood smear. If a rule
was triggered, and the smear contained a positive finding, the
sample was graded as a “true positive.” If a rule was triggered,
but the smear did not contain any positive findings, the sample
was graded as a “false positive.” If a rule was not triggered and
the smear contained a positive finding, the sample was graded
as a “false negative.” If a rule was not triggered and the smear
did not contain any positive findings, the sample was graded as
a “true negative” (TN). Truth tables were prepared for each
individual laboratory and then the results for individual labora-
tories were combined into a Summary Table (Table 2).
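The four-way grading just described can be sketched as a small classifier (an illustrative sketch only; the function and argument names are ours, not from the study protocol):

```python
from collections import Counter

def grade_sample(rule_triggered: bool, smear_positive: bool) -> str:
    """Grade one sample for the truth table: compare whether any
    consensus rule fired with whether the smear had a positive finding."""
    if rule_triggered:
        return "TP" if smear_positive else "FP"
    return "FN" if smear_positive else "TN"

def truth_table(samples):
    """Summarize (rule_triggered, smear_positive) pairs into TP/FP/FN/TN counts."""
    return Counter(grade_sample(r, s) for r, s in samples)

# Example: three samples yielding one each of TP, FP, and FN.
print(truth_table([(True, True), (True, False), (False, True)]))
```

Per-laboratory truth tables built this way can then be summed into a combined table, as was done for Table 2.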
RESULTS

The members of the consensus group submitted a total of 13,298 patient results for analysis, as some of the laboratories were not able to meet the goal of 1000 samples. The hematology
analyzers on which these samples had been run were: The Abbott
CellDyn 4000, the Bayer ADVIA 120, the Beckman Coulter
GenS and LH750, and the Sysmex SE-9000 and XE-2100.
The truth tables for all laboratories were very similar, despite differences in facility type and instrument. The summary results for all samples can be found in Table 2 (truth table summary). The final 41 consensus rules for the review of automated CBC and WBC differential results are detailed in Table 3. This table lists each parameter with the review limits and suggested action; the rules listed are for first-time observations as well as subsequent observations. The definitions of the terms used in the "delta rules" can be found in Table 4 (delta definitions). Of the 41 rules, 15 relate to CBC parameters, 7 to differential parameters consisting of absolute counts for the five white cell types, 7 to instrument suspect codes/flags (including RBC and PLT flags), 10 to WBC suspect codes/flags, and 2 to reticulocytes.
DISCUSSION

Analysis of the data indicated that although the study included different types of laboratories with different brands of hematology analyzers, all laboratories had similar truth tables. This was especially important with regard to the false negative statistics. The steering committee realized that well-defined review criteria for determining a positive finding on the smear were critical in constructing the truth tables. In order to
make these criteria objective and meaningful, we evaluated smear findings in relation to their clinical significance. It is obviously inappropriate to label any abnormality at all, such as "occasional schistocyte," "giant platelet seen," or "slight anisocytosis and poikilocytosis," as a positive smear result. We developed a series of smear findings
that we believe are clinically relevant and they are listed in
Table 1. The false negative rate for these laboratories using
the consensus rules was 2.9% (386 samples of the total
13,298). The false negative rate for these same laboratories
using their own laboratory rules was 3.8% (503 samples).
While the false negative rate for the consensus rules may not
appear to be significantly improved compared with the labo-
ratory’s own rules, it is important to remember that these
laboratories were already employing many of the “consensus
type rules” in their laboratories. Analysis of the false negative
rate of 2.9% was conducted and the results are shown in
Table 5. The false positive rate of 18.6% was also similar to
the laboratories’ own rules. The false positive rate is largely
due to instrument suspect flagging. This was seen across all instruments used in the study, which represent the majority
TABLE 1. Criteria for a Positive Smear
a. RBC morphology at either 2+/moderate or greater. The only exception is malaria, where any finding will be considered a positive finding.
b. PLT morphology (giant platelets) at either 2+/moderate or greater.
c. Platelet clumps at > rare/occasional.
d. Döhle bodies at either 2+/moderate or greater.
e. Toxic granulation at either 2+/moderate or greater.
f. Vacuoles at either 2+/moderate or greater.
2. Abnormal cell types
d. Atypical lymphs >5
f. Plasma cells ≥1
TABLE 2. Truth Table Summary
Total number of samples
TABLE 3. Rules for Review of Automated CBC and WBC Differential
Lower than lab verified
Follow lab SOP
Check sample for
<4.0 OR >30.0
<4.0 OR >30.0
Within 3 days
<100 OR >1000
Delta check fail
<7 g/dL or >2 g/dL
reference range for age and sex
<75 fL or >105 fL
<24 hrs old
Slide review for
Request fresh sample Report with
>24 hrs old
if NO macrocytic
comment if fresh sample is not
<24 hrs old
≥2 units above upper limit of reference
Check for lipemia, or other sample
No diff or
Manual Diff and
<1.0 or >20.0
>5.0 (Adult) or >7.0 (<12 yrs old)
>1.5 (Adult) or >3.0 (<12 yrs old)
Retic absolute # >0.100
WBC unreliability Flag +
Dimorphic RBC Flag +
Validate by lab SOP
PLT clump flag
Check sample for
Slide review (PLT
If clumps persist, follow lab SOP
PLT & MPV flags except PLT clumps
fail for WBC
Left shift flag
Follow lab SOP
fail for WBC
Delta pass or
Follow lab SOP
fail for WBC
correct WBC if
Repeat if aspiration
of multiparameter hematology analyzers used in hematology laboratories. These analyzers are intended to be used as screening devices that flag suspect abnormal samples for further review. Their suspect flags are deliberately tuned toward a higher rate of false positives so as not to miss potentially important abnormalities, thus minimizing the number of false negatives.
After the data analysis was completed, the rules were
reviewed to see which rules were triggered and their fre-
quency. We discovered three rules that were not triggered by
any of the 13,298 samples in the study. As such, it would be
highly unlikely that they would be triggered during the
course of a hematology laboratory’s daily operation. In addi-
tion, after reflection on medical significance issues, we
decided that those rules did not contribute to patient care,
either in the study or according to medical practice, and
those rules were eliminated. We were able to consolidate the
remaining 80 rules by combining “like parameter” limits
together. For example, in the original rules set, Rule #24
stated that PLT <100 and first time analysis should trigger a
slide review; Rule #18 stated that PLT >1000 and first time,
should trigger slide review. These two rules are now com-
bined into Rule #7 which states that any PLT <100 or >1000
and first time, then review a slide. There are now 41 consen-
sus rules for hematology review.
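As a concrete illustration, the consolidated platelet rule described above can be written as a single predicate (a sketch only; the function name is ours, and the platelet count is assumed to be in the conventional ×10⁹/L units):

```python
def rule_7_platelets(plt: float, first_time: bool) -> bool:
    """Consensus Rule #7 (consolidated from former Rules #18 and #24):
    for a first-time sample, a platelet count below 100 or above 1000
    (x10^9/L assumed) triggers a blood smear review."""
    return first_time and (plt < 100 or plt > 1000)

# A first-time count of 85 triggers a slide review; a repeat sample
# with the same count is instead handled by the delta rules.
print(rule_7_platelets(85, first_time=True))   # True
print(rule_7_platelets(85, first_time=False))  # False
```

Consolidating paired upper/lower limits this way is what reduced the rule count without changing which samples are flagged.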
The Consensus Group presents these rules to the hema-
tology community as guidelines to be implemented in clini-
cal hematology laboratories. Manufacturers of multiparame-
ter hematology analyzers might also find these rules useful
when validating the performance of a newly developed
hematology analyzer. We suggest that any laboratory adopt-
ing these rules validate their operation before implementing
them for use on patient samples. Appendix 2 contains a sug-
gested procedure for rules validation. This procedure should
be combined with any governmental or regulatory require-
ments under which the laboratory operates.
These rules are dedicated to the memory of Dr. Berend Houwen: it was his foresight that recognized the need for these rules; his vision that an international consensus group was the best means to address that need; his initiative that formed the group; and his leadership and wisdom that guided the group in its work.
On behalf of the Steering Committee and the Interna-
tional Hematology Consensus Group, the authors thank
Beckman Coulter Inc., who generously provided an educa-
tional grant which entirely funded the meeting of the con-
sensus group in Spring 2002.
We thank each of the participating laboratories (listed in
Appendix 1) which funded the performance of the protocol
in their own laboratory and we thank each of the profes-
sional and technical staff in each of the laboratories, who
spent countless hours performing all the laboratory work and
entering results into worksheets.
REFERENCES

1. Leeuwenhoek A. Microscopical Observations. Philos Trans R Soc.
2. Ehrlich P. Beitrag zur Kenntnis der Anilinfärbungen und ihrer Verwendung in der mikroskopischen Technik. Arch Mikr Anat.
3. Coulter WH. High speed automatic blood cell counter and cell size analyzer. Proc National Electronics Conf. 1956;12:1034-1040.
4. Mansberg HP, Saunders AM, Groner W. The Hemalog D white cell differential system. J Histochem Cytochem. 1974;22:711-724.
TABLE 4. Delta Definitions
Delta checks: Delta checks operate to reduce hematological testing by recognizing previously detected, validated abnormalities.
Delta limits: The delta limit for a particular test is the amount by which the most recent automated analyzer test result may differ from a previous test result before triggering smear review or some other action to validate the analyzer result. Delta limits should be established for each laboratory by taking into account physiological considerations as well as the characteristics of the automated analyzer used in that laboratory.
Delta pass and delta fail: Delta pass occurs when the result of the most recent automated analyzer test does not differ by more than the delta limit from the result of the previous test. Delta fail is when the result of the most recent test differs by more than the predefined delta limit from the previous test result.
Positive delta and negative delta: Positive delta occurs when the result of the most recent test differs in a positive direction from the result of the previous test, i.e., it is larger, irrespective of whether the delta limit has been exceeded. Negative delta is when the result of the most recent test differs in a negative direction from the result of the previous test, i.e., it is smaller.
Actions related to delta checks: The International Consensus Group did not set delta limits, leaving those to the individual laboratory. However, the group did suggest specific actions for situations where delta limits set by the individual laboratory are exceeded.
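The delta definitions above can be expressed as a small helper (a minimal sketch; the delta limit shown in the example is an arbitrary illustrative value, since the Consensus Group deliberately left limits to the individual laboratory):

```python
def delta_check(current: float, previous: float, delta_limit: float):
    """Apply a delta check per the Table 4 definitions.

    Returns (status, direction): status is "pass" or "fail" depending
    on whether |current - previous| exceeds the lab-defined delta
    limit; direction is "positive", "negative", or "none",
    irrespective of whether the limit was exceeded.
    """
    difference = current - previous
    status = "fail" if abs(difference) > delta_limit else "pass"
    if difference > 0:
        direction = "positive"
    elif difference < 0:
        direction = "negative"
    else:
        direction = "none"
    return status, direction

# Example: a WBC of 12.5 vs. a previous 8.0 with a hypothetical
# lab-set delta limit of 3.0 fails the check with a positive delta.
print(delta_check(12.5, 8.0, 3.0))  # ('fail', 'positive')
```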
TABLE 5. False Negative Analysis
False Negative    Occurrences    %
APPENDIX 1. Consensus Group Members
Barnes Jewish Hospital
Davis Medical Center, University of California
Department of Pathology & Lab Medicine
Grant/Riverside Methodist Hospitals
Department of Pathology
The Children's Hospital
Health Alliance Laboratories
Professor Samuel J. Machin*
Department of Haematology
University College Hospital
(44) 20 7380 9884
Chinook Health Region, Corp. Office
(61) 2 98 28 51 67
Beckman Coulter Eurocenter S.A.
Hospital Ramon y Cajal
34 91 336 8224
Sinai Samaritan Medical Center
*Steering committee member.
APPENDIX 2. Proposed Process Steps for Rules Validation

1. Define the criteria for a positive finding on the peripheral smear, using Table 1 as a guideline.
2. Determine the number of samples to run in the study.
a. The sample mix must be similar to the overall sample source mix in order to capture all of the rules needed.
b. 80% of the samples tested should be first-time observation samples in order to test the "first time observation rules," while 20% of the samples tested should be "repeat" samples in order to test the delta rules.
c. Test samples over at least 5 days in order to eliminate any analyzer variability as well as to capture all sample types in the patient population.
3. Run the samples through the analyzer in the same manner as patient samples, and print out the results.
4. Perform a slide review on all samples. Limit the reviews to only one or two senior technologists for consistency. Manual differentials should only be performed if there is a specific need to do so (eg, vote out, abnormal cell-type).
5. Enter all data collected into a spreadsheet for analysis.
a. Enter the data for each sample in a separate row under the following column headings.
b. Use the following columns:
1. Sample ID number
3. Rule numbers triggered (one for each rule)
4. Total number of rules triggered
5. Instrument suspect flags (one for each flag)
6. Total number of instrument flags
7. Positive slide review findings (one for each finding)
8. Total number of positive smear findings
NOTE 1: In order to track which samples may have had a problem (as in the case of investigating a false negative rate which appears too high), a good suggestion is to have separate columns for each rule, flag, and positive smear finding. While this makes the spreadsheet large, it provides all the data in one location for further use.
NOTE 2: After each of the sections (eg, rule numbers), provide a column that totals all rules triggered for that sample. Do the same for the flags, and a separate total for the positive smear findings. This will aid in further data analysis.
6. Match up the columns to compare the rules that may have been triggered for each sample with what was seen on the smear. This will provide the data to generate a truth table. The definitions of true positive, false positive, true negative, and false negative are listed in the "Methods and Materials" section.
7. Verify that the "false negative" rate is <5%. The Consensus Group feels that this is the maximum acceptable false negative rate to ensure patient safety. If the rules analysis gives a higher false negative rate, we suggest the following:
a. Check the spreadsheet and the printout for any transcription errors.
b. Recheck truth table assignments.
c. Review the spreadsheet to determine which rule(s) is/are specifically causing the false negatives.
d. Adjust the rules as needed.
e. Retest the updated rules in the same manner as above.
f. Repeat steps a-e as needed.
8. If the "false positive" rate is much higher than was found by the Consensus Group, or the laboratory feels for other reasons that the value is too high, suggested actions are:
a. Determine whether a particular instrument flag is over-flagging or does not seem to be useful to the laboratory.
b. Work with the instrument company to identify whether this is an instrument issue, or whether the sensitivity can be adjusted.
c. Note that if the laboratory chooses to ignore an instrument suspect flag, it must clearly document this, since it is now operating the analyzer outside the manufacturer's recommendations.
9. When implementing the rules:
a. Training sessions with the staff are crucial to effect the change.
b. Update the procedure manual and any postings in the laboratory.
c. Update any computerized rules in the LIS or middleware.
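Steps 6 and 7 of the validation protocol amount to a simple computation over the spreadsheet; a minimal sketch (the record layout and example figures here are illustrative assumptions, not prescribed by the protocol):

```python
def false_negative_rate(samples):
    """Compute the false negative rate from per-sample records.

    Each record is (rules_triggered, positive_findings), i.e. the
    totals from protocol columns 4 and 8. A false negative is a
    sample where no rule fired but the smear had a positive finding.
    """
    fn = sum(1 for rules, findings in samples if rules == 0 and findings > 0)
    return fn / len(samples)

# Hypothetical worked example: 2 false negatives out of 8 samples
# gives 25%, which would exceed the 5% ceiling and trigger step 7.
records = [(1, 1), (2, 0), (0, 0), (0, 1), (3, 2), (0, 0), (1, 0), (0, 1)]
print(f"{false_negative_rate(records):.1%}")  # 25.0%
```

Applied to the study's combined data, 386 false negatives among 13,298 samples yields the reported 2.9%.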