IEEE TRANSACTIONS ON MEDICAL IMAGING, VOL. 25, NO. 4, APRIL 2006 385
Computer Analysis of Computed Tomography Scans
of the Lung: A Survey
Ingrid Sluimer, Arnold Schilham, Mathias Prokop, and Bram van Ginneken*, Member, IEEE
Abstract—Current computed tomography (CT) technology al-
lows for near isotropic, submillimeter resolution acquisition of the
complete chest in a single breath hold. These thin-slice chest scans
have become indispensable in thoracic radiology, but have sub-
stantially increased the data load for radiologists. Automating the
analysis of such data is, therefore, a necessity and this has cre-
ated a rapidly developing research area in medical imaging. This
paper presents a review of the literature on computer analysis of
the lungs in CT scans and addresses segmentation of various pul-
monary structures, registration of chest scans, and applications
aimed at detection, classification and quantification of chest abnor-
malities. In addition, research trends and challenges are identified
and directions for future research are discussed.
Index Terms—Airway disease, chest, computer-aided diagnosis,
CT, emphysema quantification, interstitial lung disease, literature
review, literature survey, lung cancer, nodule characterization,
nodule detection, nodule size measurements, pulmonary em-
bolism, registration, segmentation.
NOMENCLATURE
ANN Artificial neural network.
AUC Area under the ROC curve.
CBIR Content-based image retrieval.
COPD Chronic obstructive pulmonary disease.
DPLD Diffuse parenchymal lung disease.
EC Explosion controlled region growing.
FDA Food and Drug Administration.
Lowest x-th percentile of the histogram.
HRCT High-resolution computed tomography.
ILD Interstitial lung disease.
Manuscript received May 27, 2005; revised November 17, 2005. Asterisk
indicates corresponding author.
I. Sluimer and A. Schilham are with the Image Sciences Institute, University
Medical Center Utrecht, 3584 CX Utrecht, The Netherlands
(http://www.isi.uu.nl/).
M. Prokop is with the Department of Radiology, University Medical Center
Utrecht, 3584 CX Utrecht, The Netherlands.
*B. van Ginneken is with the Image Sciences Institute, University Medical
Center Utrecht, 3584 CX Utrecht, The Netherlands (http://www.isi.uu.nl/).
Digital Object Identifier 10.1109/TMI.2005.862753
IPF Idiopathic pulmonary fibrosis.
PF Pulmonary function tests.
LAA Low attenuation area(s).
LDC Linear discriminant classifier.
LVRS Lung volume reduction surgery.
MLD Mean lung density.
Probability of cancer.
PCP Pneumocystis carinii pneumonia.
PET Positron emission tomography.
PVE Partial volume effect.
SPECT Single photon emission computed tomography.
ROC Receiver operating characteristic.
ROI Region of interest.
UIP Usual interstitial pneumonia.
VOI Volume of interest.
I. INTRODUCTION
In its early days, computed tomography (CT) was not a tech-
nique particularly well suited for the thorax. Low resolution re-
sulted in large partial volume effects (PVEs) and the large dif-
ference in attenuation values between tissue and air (currently
a main reason for the effectiveness of CT in thoracic imaging)
made it difficult to correctly interpret small lesions. In 1977,
Kollins  concluded that CT had revolutionized neuroradi-
ology and its impact in abdominal and pelvic imaging had been
similarly great but “the ultimate role of computed tomography
in the study of diseases of the chest is not as certain.”
Technical strides forward have since completely transformed
CT and thoracic imaging with it. Early on, the essential role of
image processing was recognized, for example in the work on
the Mayo Clinic’s dynamic spatial reconstructor . Two ad-
vances in particular have had the most repercussions for CT of
the chest. About twenty years ago, improved axial resolution
made HRCT scans possible. Slices of 1 mm thick could pro-
vide anatomical detail of the lungs similar to that available from
gross pathological specimens . However, scanner speed lim-
itations at the time meant that a 1-cm gap between slices was
used to keep acquisition times, and thereby breathing ar-
tifacts, within bounds. This limitation has been effectively removed in the last
decade with the advent of multi-detector-row scanners that can
acquire up to 64 1-mm slices simultaneously per rotation and
perform each rotation in less than a second. Present day scan-
ners allow for isotropic acquisition of the complete chest with
submillimeter resolution well within a single breath hold. Com-
pared to other modalities, CT excels in the imaging of the lungs.
A major challenge accompanying these spectacular improve-
ments is dealing with the enormous increase in images that are
generated and have to be reported on. This challenge has been
coined the data explosion by Rubin . It is becoming clear that
computer vision techniques are essential to facilitate CT inter-
pretation, and the improvements in
acquisition techniques have been followed by a sharp increase
in research on computer analysis of thoracic CT scans.
This paper aims to provide an overview of the literature on
computer analysis of CT images of the human lungs. For our
purposes here, we define this area as including segmentation of
various pulmonary structures (Section II), registration of chest
scans (Section III), and applications aimed at detection, classi-
fication and quantification of chest abnormalities (Section IV).
The work on segmentation can be further subdivided into
segmentations of the complete lung fields (Section II-A), the
bronchial tree (Section II-B), the vascular tree (Section II-C),
and the lobes (Section II-D). The applications are grouped by
clinical area: emphysema (Section IV-A), lung cancer (Sections
IV-B1, IV-B2, and IV-B3), pulmonary embolism (Section IV-C),
signs of airway diseases (Section IV-D), and differential diag-
nosis of lung disease (Section IV-E).
The survey covers the period 1999–2004, but relevant work
before 1999 has been included as well. Details of the literature
collection procedure can be found in the Appendix. Studies on
reconstruction algorithms, scan parameters and image quality
improvement, and visualization techniques have been excluded,
except for some studies that pertain particularly to chest CT.
There is a large body of work on virtual bronchoscopy. Work on
image analysis in this field, mainly about airway segmentation,
has been included. Studies that focus on the application of such
techniques and on effective visualization are considered outside
the scope of this survey.
In terms of methodology, most reviewed works employ
standard image processing techniques, such as filtering, region
growing and connected component analysis, mathematical
morphology, etc. which can be found in standard textbooks,
e.g.,  and . A large number of studies investigate the
detection and classification of abnormalities. This field is often
referred to as computer-aided detection or diagnosis (CAD).
CAD systems typically consist of several steps: preprocessing, segmenta-
tion, candidate extraction, feature extraction, and classification.
The first four steps are usually considered to be part of image
processing. The final classification step deals with patterns that
are represented as points in a feature space. Finding decision
boundaries in these vector spaces is a central problem in pattern
recognition theory. Popular techniques such as ANNs or LDCs
can be found in many textbooks, e.g.,  and .
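As an illustration of the final classification step, a minimal two-class Fisher linear discriminant can be written in a few lines of NumPy. This is a toy sketch on synthetic features, not a reimplementation of any particular published CAD system.

```python
import numpy as np

def fit_ldc(X0, X1):
    """Two-class Fisher linear discriminant: returns (w, b) with
    w @ x + b > 0 predicting class 1."""
    m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
    # Pooled within-class scatter matrix.
    Sw = (np.cov(X0, rowvar=False) * (len(X0) - 1)
          + np.cov(X1, rowvar=False) * (len(X1) - 1))
    w = np.linalg.solve(Sw, m1 - m0)   # discriminant direction
    b = -w @ (m0 + m1) / 2.0           # threshold midway between projected means
    return w, b

# Toy feature space: two Gaussian clouds (think "normal" vs "lesion" candidates).
rng = np.random.default_rng(0)
X0 = rng.normal([0.0, 0.0], 0.5, size=(100, 2))
X1 = rng.normal([2.0, 2.0], 0.5, size=(100, 2))
w, b = fit_ldc(X0, X1)
acc1 = ((X1 @ w + b) > 0).mean()
acc0 = ((X0 @ w + b) <= 0).mean()
```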
II. SEGMENTATION
Segmentation is often a necessary first step to computer
analysis. In CT of the lungs, the various anatomical entities that
can require segmentation are the lungs themselves, the airways,
the vessels and the lung lobes. Each of these is discussed in
more detail in the following subsections.
A. Lung Segmentation
Any computer system that analyzes the lungs and does not
work on manually delineated regions of interest must incorpo-
rate an automatic lung segmentation. Armato and Sensakovic
 illustrated the importance of accurate segmentation as a pre-
processing step in a CAD scheme. In a nodule detection set-
ting, they showed that 5%–17% of the lung nodules in their
test data were missed due to the preprocessing segmentation,
depending on whether or not the segmentation algorithm was
adapted specifically to the nodule detection task.
As the lung is essentially a bag of air in the body, it shows up
in CT as a region of low attenuation; this contrast with sur-
rounding tissues forms the basis for the majority of the segmen-
tation schemes. Most of the methods are rule-based –.
The main lung volume is found in one of two ways. Gray-level
thresholding and component analysis can be used, after which
the objects that are the lungs are identified by imposing restric-
tions on size and location. Alternatively, the volume is found by
region growing from the trachea. The trachea itself is found by
searching for two-dimensional (2-D) circular air-filled regions
in the first slices of the scan, or by searching for a three-dimen-
sional (3-D) tubular air-filled object located centrally in the top
half of the scan. After identification of the combined lung and
airway volume, separation of left and right lung and removal
of the trachea and mainstem bronchi are performed. This is
followed by morphological processing to obtain lung volumes
without holes and with smooth borders. Li and Reinhardt 
have taken a statistical rather than rule-based approach. A 3-D
active shape model provided an approximate segmentation of
the lungs, which was subsequently refined with 3-D
snakes to capture fine details and shape variance not present in
the statistics of the training data.
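The rule-based pipeline described above (thresholding, connected-component reasoning, morphological cleanup) can be illustrated on a synthetic 2-D slice. The phantom and the threshold value are purely illustrative; the size and location restrictions used in the reviewed methods are omitted for brevity.

```python
import numpy as np
from collections import deque

def segment_lungs_2d(hu, air_threshold=-500):
    """Toy rule-based lung segmentation on one axial slice:
    1. voxels below `air_threshold` HU are candidate air;
    2. air connected to the image border is background outside the body;
    3. the remaining enclosed air regions are kept as lung."""
    air = hu < air_threshold
    h, w = air.shape
    background = np.zeros_like(air)
    # Flood fill (4-connected) from all air pixels on the image border.
    stack = deque([(r, c) for r in range(h) for c in (0, w - 1) if air[r, c]] +
                  [(r, c) for c in range(w) for r in (0, h - 1) if air[r, c]])
    while stack:
        r, c = stack.pop()
        if background[r, c] or not air[r, c]:
            continue
        background[r, c] = True
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            rr, cc = r + dr, c + dc
            if 0 <= rr < h and 0 <= cc < w and air[rr, cc] and not background[rr, cc]:
                stack.append((rr, cc))
    return air & ~background

# Synthetic slice: soft-tissue body (+40 HU) surrounded by air (-1000 HU),
# with two enclosed air-filled "lungs" (-850 HU).
hu = np.full((60, 60), -1000.0)
hu[5:55, 5:55] = 40.0           # body
hu[15:45, 10:25] = -850.0       # left lung
hu[15:45, 35:50] = -850.0       # right lung
lungs = segment_lungs_2d(hu)
```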
Often it remains unclear at what points the segmentations are
supposed to cut through the major bronchi and vessels in the
hilar area where they enter the lung. Manual delineations also
show much variance around the hilum, depending on defini-
tions (or a lack thereof) and personal preferences with respect to
smoothness. Ukil and Reinhardt  used a segmentation of the
bronchial tree to improve their lung segmentation in this region.
Another problematic region is formed by
the posterior and anterior junctions between left and right lung.
These junctions can be very narrow and consequently of low
contrast due to the PVE. The common solution to this problem
is to heuristically define a search region in each affected axial
slice and identify a separating junction line inside it. The line is
either defined as a shortest distance in anterior-posterior direc-
tion –, or the soft tissue interface is traced by searching
for a minimum cost path through the gray values –.
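The minimum-cost-path idea for tracing the junction line can be sketched with a simple dynamic program (a seam-carving-style toy; in the reviewed work the path is traced through the gray values of a heuristically defined search region in each affected axial slice).

```python
import numpy as np

def min_cost_path(cost):
    """Minimum-cost top-to-bottom path through a 2-D cost image.
    From each pixel the path may move to the three nearest pixels in the
    next row (dynamic programming). Returns one column index per row."""
    h, w = cost.shape
    acc = cost.astype(float).copy()
    for r in range(1, h):
        left = np.r_[np.inf, acc[r - 1, :-1]]
        right = np.r_[acc[r - 1, 1:], np.inf]
        acc[r] += np.minimum(np.minimum(left, acc[r - 1]), right)
    # Backtrack from the cheapest pixel in the last row.
    path = [int(np.argmin(acc[-1]))]
    for r in range(h - 2, -1, -1):
        c = path[-1]
        lo, hi = max(c - 1, 0), min(c + 2, w)
        path.append(lo + int(np.argmin(acc[r, lo:hi])))
    return path[::-1]

# Bright (high-cost) tissue everywhere except a dark seam at column 7,
# standing in for the low-contrast junction between the lungs.
cost = np.full((10, 15), 100.0)
cost[:, 7] = 1.0
seam = min_cost_path(cost)
```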
A challenge that has not yet been met is the segmentation of
lungs affected by high density pathologies that are connected
to the lung border. Due to a lack of contrast between lung and
surrounding tissues, rule-based thresholding methods will fail
to segment these pathological parts of the lung. The statistical
approach may offer a solution here, as may registration to a ref-
erence scan. Another approach could be to identify surrounding
structures, such as the rib cage and the diaphragm, and combine
those algorithms into a comprehensive segmentation scheme.
Apart from the segmentation of scans containing pathology,
other directions of research that deserve attention are the
incorporation of user feedback in automatic systems or, alterna-
tively, the development of user-interactive segmentation tools,
as well as the implementation of automatic means of failure de-
tection. In clinical practice, it is often most important to know
when the automatic algorithm failed without having to review
every segmentation result, and to be able to quickly improve the
result without having to resort to complete manual delineation.
B. Segmentation of Airways
The airways exhibit a tree structure (the tracheobronchial
tree) of roughly cylindrical branches of decreasing radius.
The trachea bifurcates into the left and right main bronchus.
These bronchi repeatedly bifurcate (or trifurcate) into smaller
bronchi, up to the 23rd generation . The bronchial lumen
is (normally) filled with air, surrounded by the bronchial wall,
which has a relatively high CT value. On CT scans, bronchi can
typically be followed down to about
generation 7. After that, the PVE is too severe, smearing lumen
and bronchus wall into an indistinguishable mass. In the last
decade, a number of methods have been proposed to (semi) au-
tomatically segment the tracheobronchial tree –. Based
upon such segmentations, computerized schemes have been de-
veloped to label the different bronchi such that branches with
problems can be pinpointed anatomically , , . There
are also a number of schemes proposed to measure the geomet-
rical properties of the bronchi at user given locations –,
which can be used to diagnose a number of respiratory diseases
(see also the small overview of semi-automated measurements
of airway dimensions given by Müller and Coxson ).
The proposed methods for airway segmentation can be split
into four main strategies: (i) knowledge-based segmentation
, –, ; (ii) region growing or wave propagation;
(iii) centreline extraction , , ;
(iv) mathematical morphology , , . Many of
the proposed schemes combine two or more of these strategies.
An example of the region growing/wave propagation strategy
is explosion controlled region growing (EC), first introduced by
Mori et al.  and used in , , . EC is an iterative
scheme in which the region is grown with a stepwise increasing
threshold; when the segmented volume suddenly increases
dramatically, indicating leakage into the parenchyma, growing
is stopped and the result for the previous threshold is retained.
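Explosion-controlled growing can be sketched as follows: grow with a rising threshold and keep the last result before the segmented volume "explodes". The threshold schedule and the explosion ratio below are illustrative values, not those of any cited paper.

```python
import numpy as np
from collections import deque

def region_grow(volume, seed, threshold):
    """3-D 6-connected region growing: collect voxels below `threshold`."""
    grown = np.zeros(volume.shape, dtype=bool)
    stack = deque([seed])
    while stack:
        p = stack.pop()
        if grown[p] or volume[p] >= threshold:
            continue
        grown[p] = True
        z, y, x = p
        for dz, dy, dx in ((1,0,0), (-1,0,0), (0,1,0), (0,-1,0), (0,0,1), (0,0,-1)):
            q = (z + dz, y + dy, x + dx)
            if all(0 <= q[i] < volume.shape[i] for i in range(3)):
                stack.append(q)
    return grown

def explosion_controlled_grow(volume, seed, start=-950, stop=-500,
                              step=25, max_ratio=2.0):
    """Raise the growing threshold until the segmented volume grows by more
    than `max_ratio` in one step (a leak), then keep the last safe result."""
    best = region_grow(volume, seed, start)
    for t in range(start + step, stop + 1, step):
        grown = region_grow(volume, seed, t)
        if grown.sum() > max_ratio * max(best.sum(), 1):
            break  # explosion: leaked into the parenchyma
        best = grown
    return best

# Toy volume: airway lumen (-1000 HU) inside a bronchial wall (-100 HU),
# embedded in parenchyma (-880 HU), with one small defect in the wall.
vol = np.full((20, 20, 20), -880.0)
vol[:, 8:12, 8:12] = -100.0     # bronchial wall
vol[:, 9:11, 9:11] = -1000.0    # lumen
vol[10, 8, 10] = -1000.0        # wall defect ("leak")
seg = explosion_controlled_grow(vol, (0, 10, 10))
```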
Mathematical morphology methods focus more on identi-
fying regions that might be part of the airways. For example
Aykac et al.  used 2-D gray-scale reconstruction (gray-scale
closings with increasing kernel size) to find gray-scale valleys.
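The closing-based valley detection can be sketched as follows; the kernel size and the valley depth are illustrative, and the brute-force shift-based closing stands in for efficient morphological reconstruction.

```python
import numpy as np

def gray_dilate(img, k):
    """Gray-scale dilation with a (2k+1)x(2k+1) square structuring element."""
    pad = np.pad(img, k, mode="edge")
    h, w = img.shape
    out = img.copy()
    for dy in range(-k, k + 1):
        for dx in range(-k, k + 1):
            out = np.maximum(out, pad[k + dy:k + dy + h, k + dx:k + dx + w])
    return out

def gray_erode(img, k):
    pad = np.pad(img, k, mode="edge")
    h, w = img.shape
    out = img.copy()
    for dy in range(-k, k + 1):
        for dx in range(-k, k + 1):
            out = np.minimum(out, pad[k + dy:k + dy + h, k + dx:k + dx + w])
    return out

def valleys(img, k, depth):
    """Candidate airway lumens: a closing (dilation then erosion) fills in
    dark structures smaller than the kernel; where it raises the image by
    more than `depth`, a gray-scale valley was present."""
    closed = gray_erode(gray_dilate(img, k), k)
    return (closed - img) > depth

# Toy slice: bright parenchyma with one small dark (air-filled) spot.
img = np.full((30, 30), 50.0)
img[14:17, 14:17] = -900.0
lumen = valleys(img, k=3, depth=200.0)
```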
In Table I, the different studies on tracheobronchial tree seg-
mentation are listed. The items in the table include a descrip-
tion of the data (number of scans, slice thickness, and radiation
dose), a short description of the algorithm, whether the method
is automatic or needs manual seed points, if it is a fully 3-D
model, and the reported performance.
From Table I it is apparent that proper validation of the
models and a good description of the data used is not always
covered in the articles. It appears that humans outperform
the discussed segmentation algorithms. Furthermore, reported
calculation times range from several minutes up to one hour per
scan, depending on thecomprehensiveness of the segmentation.
Clearly, improved performance of automatic segmentation is
still needed, especially for noisy
and low-dose scans. The problems to overcome are twofold:
artificial (noise, heart movement, high-density implants, PVE)
or real (mucus, pathology). As Table I shows, only the work
of Tschirren et al.  was validated for low-dose and severe
pathology, for which they reported encouraging results.
C. Segmentation of Vessels
Where the pulmonary arteries and veins enter the lungs at the
hilum, their diameter
can be up to 30 mm. As they branch, vessel diameters decrease.
On a normal CT scan vessels can be seen up to 5–10 mm from
the pleura. The arteries follow the course of the bronchial tree
(when the bronchial wall is thickened, bronchus and artery have
the appearance of a signet ring).
A segmentation of the vessel trees can be of interest for
matching follow-up scans and to remove FPs of CAD schemes.
Conversely, vessel segmentation can provide a VOI for ab-
normalities that occur inside the vessels, e.g., PEs (see Sec-
tion IV-C). For the latter task, contrast material is administered,
which can make the vessel segmentation task easier.
The number of studies on pulmonary vessel segmentation is
limited. One approach enhanced the vessels with a filter based
on the eigenvalues of the local Hessian, with special treatment
of structures close
to the chest wall. The filtered data was thresholded to provide a
first segmentation. Seed points on vessel centerlines were used
to initialize a tracking algorithm also based on the local Hessian
tensor that could detect bifurcations that may be missed by the
vessel filter. An experiment on five normal scans was reported; speci-
ficity was not evaluated. Wu et al.  did not use elongated-
ness filters; instead, a scheme was ap-
plied to obtain a representation of the segmented structures in
terms of fuzzy spheres which were connected by a tracking al-
gorithm. Only the robustness to noise was evaluated. Kiraly et
al.  presented a vessel tree segmentation algorithm with the
aim of segmenting the arterial subtree distal to a site of PE. A
fixed threshold and removal of small structures was used to ob-
tain an initial segmentation. The plane perpendicular to the em-
bolism was determined. In a VOI located distal from this plane,
a tree was extracted by skeletonization. Rules for branch sizes
and branching angles were used to remove false branches and
separate connected subtrees. The output of the algorithm is the
volume of the lung affected by PE. The same group of authors
used a similar vessel segmentation to visualize the densities inside vessels in
TABLE I
STUDIES ON TRACHEOBRONCHIAL TREE SEGMENTATION. FOR EACH STUDY, THE NUMBER (#) OF SCANS USED AND THEIR SLICE THICKNESS
(mm) IS GIVEN. THE METHOD IS BRIEFLY DESCRIBED AND IT IS STATED WHETHER IT IS 2-D OR 3-D, AND WHETHER IT IS AUTOMATIC
OR NEEDS A MANUAL SEED (AUTO). PERFORMANCE REPORTS THE EVALUATION METHOD AND THE RESULTS
scans that may contain PE . In the design of CAD systems
to detect PE, several vessel segmentation procedures are briefly
described ,  but these are not specifically evaluated.
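Hessian-based vessel enhancement of the kind described above can be sketched as follows. Finite differences stand in for the Gaussian derivatives used in practice, and the eigenvalue test is a simplified lineness measure, not the exact filter of any cited study: for a bright tube, the two most negative eigenvalues dominate and the third (along the vessel) is near zero.

```python
import numpy as np

def hessian_line_filter(vol):
    """Toy Hessian-based filter for bright tubular structures."""
    grads = np.gradient(vol)                 # first derivatives [dz, dy, dx]
    H = np.empty(vol.shape + (3, 3))
    for i in range(3):
        second = np.gradient(grads[i])       # second derivatives
        for j in range(3):
            H[..., i, j] = second[j]
    lam = np.linalg.eigvalsh(H)              # eigenvalues, ascending order
    l1, l2 = lam[..., 0], lam[..., 1]
    # Respond where the two smallest eigenvalues are both negative
    # (vessel cross-section); magnitude of the second one as strength.
    return np.where((l1 < 0) & (l2 < 0), -l2, 0.0)

# Synthetic bright tube along the z axis.
z, y, x = np.mgrid[0:21, 0:21, 0:21]
vol = np.exp(-((y - 10.0) ** 2 + (x - 10.0) ** 2) / 8.0)
resp = hessian_line_filter(vol)
```

On this phantom the response peaks on the tube axis and vanishes in the background; a blob would also respond, which is why practical filters add further eigenvalue ratios.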
One of the main future challenges is the automatic separation
of arterial and venous trees. Furthermore, algorithms need to be
validated on larger data sets and their robustness in the presence
of pathology and noise is yet unclear.
D. Lobar Segmentation
The lungs consist of distinct anatomical compartments called
lobes. The left lung contains two lobes, and the right lung three.
The lobes are separated by fissures, which are thin sheets of
tissue. The major and minor fissures separate the lobes and are
visible on thin-slice CT. Lobar segmentation and fissure detec-
tion are, however, not equivalent: The major and minor fissures
can be incomplete, while at the same time other, accessory fis-
sures can be visible , . Vascular and bronchial trees do
not cross the lobar boundaries and, therefore, in the absence
of visible fissures the lobar borders can be estimated from the
course of vessels and bronchi.
It is often clinically important to determine whether a disease
affects one or more lobes, for example when lobar resection is
considered. In addition, the extraction of quantitative parame-
ters per lobe can provide valuable information. Lobe segmenta-
tion may also prove useful in intrapatient registration.
There are several strategies that can be employed to segment
the lobes. The most obvious is to detect the fissures directly,
by locating sheet-like bright structures in 3-D or, as is also
common, line-like structures in 2-D slices. Knowledge about
the typical shape of lobes and their positions within the lungs
can be exploited. Finally, the regions containing the lobe
borders are almost devoid of larger blood vessels, so a vessel
segmentation can be used to infer lobar boundaries. Similarly,
a segmentation of the airway tree can be used.
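The observation that lobe borders are nearly devoid of larger vessels suggests a simple indicator: the distance to a given vessel segmentation should peak near a fissure. A toy sketch, assuming a binary vessel mask is available; the 6-connected BFS distance is an illustrative choice.

```python
import numpy as np
from collections import deque

def distance_to_vessels(vessel_mask):
    """Multi-source BFS distance (6-connected steps) to the nearest vessel voxel."""
    dist = np.full(vessel_mask.shape, -1, dtype=int)
    q = deque(zip(*np.nonzero(vessel_mask)))
    for p in q:                      # all vessel voxels are distance 0
        dist[p] = 0
    while q:
        z, y, x = q.popleft()
        for dz, dy, dx in ((1, 0, 0), (-1, 0, 0), (0, 1, 0),
                           (0, -1, 0), (0, 0, 1), (0, 0, -1)):
            n = (z + dz, y + dy, x + dx)
            if all(0 <= n[i] < dist.shape[i] for i in range(3)) and dist[n] < 0:
                dist[n] = dist[z, y, x] + 1
                q.append(n)
    return dist

# Two vessel-rich regions with a vessel-free gap in between:
# the distance map peaks midway, where a fissure would be expected.
vessels = np.zeros((11, 21, 11), dtype=bool)
vessels[:, 0:4, :] = True
vessels[:, 17:21, :] = True
dist = distance_to_vessels(vessels)
fissure_y = int(np.argmax(dist[5, :, 5]))
```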
In one approach, fissures were approximately located with an atlas-based initializa-
tion. Subsequently fissures were delineated with ridge detection,
and the method was evaluated by com-
paring computer results with manual tracings in 22 scans of 12
subjects. Kuhnigk et al.  enhanced the fissures by extracting
a vessel segmentation, computing a distance trans-
form on this segmentation and adding that to the original data.
A watershed segmentation of the lobes was then com-
puted from this fissure-enhanced data. The method was evalu-
ated.
Saita and co-workers have described various related systems for
fissure detection. In a recent version , a four-step approach
was taken. From an initial segmentation, search volumes were
determined.
A set of filters was applied to enhance sheet-like structures in
these volumes. Subsequently the fissures were found by mor-
phological processing. A qualitative evaluation on 20 low-dose
scans was reported. In another, slice-based method, once
initialized in one slice, the algorithm propagated through the
scan and used shape information from the previous slices. The
method was tested on scans from four patients.
We conclude that previous work shows encouraging results
but automatic lobe segmentation is still largely unsolved, espe-
cially in the presence of incomplete fissures and pathology, an
issue not specifically addressed in any paper.
The lobes can be further subdivided into bronchopulmonary
segments, of which there are ten for the right
lung and eight for the left. The segmental boundaries can only
be estimated from the course of bronchi and veins. Automatic
identification of segments is a completely open research area.
III. REGISTRATION
Bringing images into spatial alignment, referred to as reg-
istration or matching, is one of the most common procedures
and also one of the most active research areas in medical image
analysis –. A plethora of algorithms has been proposed,
many of these general in the sense that they could be applied
as-is to chest CT. Publications on chest CT most often employ
elastic registration and typically include some dedicated modi-
fications to standard approaches.
There are four reasons for matching CT lung scans.
• Matching a CT scan to another scan of the same patient
from a different modality, typically a PET scan.
• Matching to a follow-up CT scan of the same patient
for effective visual or automatic comparison to detect
or quantify interval change and/or monitor response to
therapy.
• Intrapatient matching to scans acquired at a different in-
spiration level to study ventilation or to extract functional
information.
• Interpatient matching, possibly to an atlas, to guide seg-
mentations or detect deviations from normal appearance.
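Several of these matching tasks use mutual information as the similarity criterion. It can be estimated from the joint gray-value histogram, and rewards statistical dependence rather than identical intensities, which is what makes it suitable across modalities. A minimal sketch; the bin count is an arbitrary choice.

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """Mutual information of two images, from their joint gray-value histogram."""
    hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    p = hist / hist.sum()
    px = p.sum(axis=1, keepdims=True)   # marginal of a
    py = p.sum(axis=0, keepdims=True)   # marginal of b
    nz = p > 0
    return float((p[nz] * np.log(p[nz] / (px @ py)[nz])).sum())

rng = np.random.default_rng(1)
img = rng.random((64, 64))
remapped = 1000.0 - 500.0 * img     # same structure, entirely different gray scale
shifted = np.roll(img, 5, axis=1)   # misaligned copy
mi_aligned = mutual_information(img, remapped)
mi_misaligned = mutual_information(img, shifted)
```

The aligned pair scores high despite sharing no gray values, while the misaligned pair scores near zero; a registration algorithm searches for the transform that maximizes this score.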
A recent example of intermodality matching is the work of
Mattes et al. . PET and CT chest scans were matched with
a rigid deformation followed by an elastic deformation based
on cubic B-splines. Mutual information was used as similarity
criterion in a hierarchical multiresolution framework with a
quasi-Newton optimization algorithm. This approach, similar
to the system described in , is typical of contemporary
volume-based elastic matching algorithms. Other works on
PET-CT matching used thresholding to find the lung contours
and chamfer-matching , preprocessing to make both im-
ages similar in appearance followed by multi-resolution elastic
matching based on minimization of the squared difference
image , and translations of several VOIs using mutual
information .
Matching of follow-up scans serves many purposes, such
as pairing and comparing nodules ,  or, more gener-
ally, displaying similar slices to a radiologist  for which
several commercial workstations already offer automatic so-
lutions. Some studies use rigid deformations. Betke et al. 
determined a rigid alignment based on a small number of auto-
matically identified anatomical landmarks and the lung surfaces
and succeeded in finding corresponding nodules in 56 out of 58
cases. Another rigid registration approach was
based on anatomical landmarks and simple features computed
over complete slices. Dougherty et al.  used an optical
flow method. Stewart et al.  recently described a hybrid
registration scheme that combined feature- and intensity-based
approaches. These methods were evaluated
on small numbers of scans only. Blaffert and Wiemker 
compared registration schemes
with and without the use of segmentation masks and concluded
that affine registration on the lung mask is a good compromise
between accuracy and speed. Using
the aligned scans to detect changes automatically in a CAD
system is hinted at in some studies ,  but actual applica-
tion of CAD systems on follow-up scans is currently limited to
commercial workstations for lung nodule work-up, e.g., .
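When corresponding anatomical landmarks are available, the rigid alignment used in several of these follow-up studies has a closed-form least-squares solution, the classic SVD-based construction. The landmark coordinates below are synthetic.

```python
import numpy as np

def rigid_align(src, dst):
    """Least-squares rigid transform (R, t) mapping points `src` onto `dst`.
    `src` and `dst` are (n, 3) arrays of corresponding landmark positions."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)            # cross-covariance of centered points
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cd - R @ cs
    return R, t

# Toy landmarks, rotated 30 degrees about z and translated.
theta = np.pi / 6
Rz = np.array([[np.cos(theta), -np.sin(theta), 0],
               [np.sin(theta),  np.cos(theta), 0],
               [0, 0, 1.0]])
src = np.array([[0, 0, 0], [10, 0, 0], [0, 10, 0], [0, 0, 10], [5, 5, 5.0]])
dst = src @ Rz.T + np.array([3.0, -2.0, 7.0])
R, t = rigid_align(src, dst)
aligned = src @ R.T + t
```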
Several studies focus specifically on the registration of
scans acquired at different moments of the respiratory cycle.
Matching inspiration and expiration scans is challenging be-
cause of the substantial, locally varying deformations that take
place during breathing. Fan et al.  described a system to
assess several functional parameters from registered scans, such
as regional lung volume changes, local changes in air content
and the distribution of ventilation. Their registration algorithm
combined feature point matching with lung surface matching
and constraints on the optical flow of mass and the smoothness
of the deformation. Other studies focused on radiotherapy
applications: Correcting for tumor motion is a major challenge
in radiotherapy treatment planning. Boldea et al.  used the
popular demons algorithm  to validate the use of active
breath control during radiotherapy treatment. In , an elastic
registration method was described that was tested on 2-D slices.
Both methods require long computation times. Kaus et al. 
compared volumetric B-spline registration with surface based
registration using manual segmentations of heart and lungs
and found that the surface-based method is much faster with
comparable accuracy.
Registration across individuals is another area. Similar
methods can be employed, but many anatomical landmarks
that can be used for intrapatient registration are not always
available in two scans of different patients. Li et al.  have
constructed a human lung atlas as a general tool that can be used
for many purposes. Examples are guiding of segmentations and
establishing a normal range for local functional measurements
that could be used to detect early indications of disease. An
earlier work  showed that atlas registration could be used to
pinpoint the approximate location of lobar fissures which was
used later for fissure segmentation  (see also Section II-D).
Sluimer et al.  used atlas registration to segment the lungs
in scans with dense pathologies.
In any application, the proper registration method is a com-
promise between computation time and demands for accuracy
and robustness. Providing fast and fully automatic registration
with an accuracy comparable to manual indication of corre-
sponding points is still a major challenge, especially in the pres-
ence of large (ventilation) deformations, pathology and noise.
No studies which address these issues specifically have been
published. On the other hand, several authors conclude that the
performance of their method already suffices for their purposes.
IV. COMPUTERIZED DETECTION, QUANTIFICATION,
AND CLASSIFICATION
Segmentation and registration are often precursors to a host
of specific application-dependent image analysis systems.
In this section, compound systems on automated detection,
quantification and classification of pulmonary disorders are
discussed. These are grouped by clinical application area: em-
physema (Section IV-A), lung cancer (Section IV-B1, IV-B2,
and IV-B3), PE (Section IVC), (bronchial) signs of airway
diseases (Section IV-D), and differential diagnosis of lung
disease (Section IV-E).
A. Emphysema
Emphysema is a pathology of the lung, characterized by the
destruction of lung tissue. This deficiency can be measured by
aberrations of PF tests, which express the performance of the
lungs relative to the expected performance. However, PF tests
can only distinguish the progression of emphysema in rough
stages: normal, mild, or severe. Studies have shown that a
volume of tissue amounting to about a third of the total lung
volume has to be destroyed before PF tests indicate a significant
deviation from normal. There is, therefore, a need for a diag-
nostic tool which can diagnose emphysema at an ear-
lier stage and is more sensitive to small changes in the progres-
sion of the disease. The best candidate for this tool is CT. Two
earlier overviews of emphysema quantifications are by Madani
et al.  and Müller and Coxson .
Because emphysema shows up on CT as areas with ab-
normally low attenuation coefficients (close to that of air),
visual CT scoring of emphysema is feasible. Many studies
on emphysema quantification with CT focus on a reliable
automatic method as a replacement for the insensitive lung
function tests , – and subjective visual scoring
, , –. Indeed most of these show that computer
scoring of CT is better suited for emphysema detection than PF
tests, and more objective than visual scoring. In fact, for some
studies on emphysema, CT has already become the accepted
gold standard for quantification , , .
The emphysema detection studies mostly focus on extracting
from CT a single value expressing the emphysematous fraction
of the lungs. As emphysema is identified by air voxels in the
lungs, thresholding seems the best way to obtain such a value.
Usually this results in calculating one of the following.
• Fraction of lung voxels with intensity values below a
given threshold (the density mask or pixel
index method ).
• Lung area occupied by low attenuation areas larger than
some minimum size, with intensities below a threshold.
• Mean lung density (MLD).
• Fraction of the histogram covered by a given lowest per-
centile.
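The single-value scores listed above amount to simple histogram arithmetic. A sketch with illustrative, not standardized, parameter choices (-950 HU for the density mask, the 15th percentile point) on a synthetic lung histogram:

```python
import numpy as np

def emphysema_scores(lung_hu, threshold=-950.0, percentile=15):
    """Common single-number emphysema scores from the lung HU values.
    Returns the density-mask fraction (relative volume below `threshold`),
    the mean lung density, and the HU value below which the given lowest
    percentile of the histogram falls."""
    la_fraction = float((lung_hu < threshold).mean())   # density mask fraction
    mld = float(lung_hu.mean())                         # mean lung density
    perc = float(np.percentile(lung_hu, percentile))    # percentile point
    return la_fraction, mld, perc

# Synthetic lung: 90% normal parenchyma around -870 HU,
# 10% emphysematous voxels near -980 HU.
rng = np.random.default_rng(2)
hu = np.concatenate([rng.normal(-870, 20, 9000), rng.normal(-980, 10, 1000)])
ra950, mld, perc15 = emphysema_scores(hu)
```

On this phantom the density-mask fraction recovers roughly the 10% emphysematous volume; note that all three scores shift with noise, slice thickness, reconstruction filter and inspiration level, as discussed below.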
Visually it is possible to distinguish a variety of emphysema-
tous lesions (which can be termed bullae or LAAs in the litera-
ture). However, few studies , , , , – tried
to incorporate this additional knowledge in the emphysema
quantification. Texture features to determine if a voxel repre-
sents emphysema or not were only used by  and .
Table II summarizes the emphysema detection
studies. The table describes the data sets, the quantification
methods, distinguishes 2-D and 3-D algorithms, and reports
performance. The table covers research concerning the
(long-term) reproducibility and trends with time of automated
CT scores , , –, asthma scoring , and
more informative descriptive emphysema staging , .
Studies concerning only
the validation of single-value replacements for visual and PF
test based grading of emphysema are not displayed in Table II.
A number of emphysema studies focused on the selection of patients for lung volume reduction surgery (LVRS) instead of emphysema quantification. When emphysema is severe and concentrated in one lobe, a patient may be a candidate for such surgery.
The challenges for future research on emphysema quantification lie in the quantification of emphysema patterns and in prognosis, for instance on low-dose screening data. It is well known from older and relatively recent studies that the typically used emphysema scores are sensitive to noise, slice thickness, reconstruction filters, and level of inspiration. How to correct for this is an open area. Another issue is whether LAAs are in fact responsible for decreased PF, as there are many examples where the true disease is not characterized by the dead space. Possibly there is a role for CAD in solving this medical issue.
The CAD techniques developed for emphysema quantification could also be applied to the quantification of airtrapping (airtrapping refers to the retention of air in the lungs during expiration). So far we have not found any studies of automatic quantification of airtrapping, but the visual techniques used in the medical literature are the same as those used for the assessment of emphysema, which suggests that CAD could be useful for quantification of airtrapping.
SLUIMER et al.: COMPUTER ANALYSIS OF COMPUTED TOMOGRAPHY SCANS OF THE LUNG: A SURVEY 391
TABLE II
STUDIES ON EMPHYSEMA QUANTIFICATION. FOR EACH STUDY THE NUMBER OF NORMAL (#N) AND ABNORMAL (#A) SCANS USED IS GIVEN (WITH POSSIBLY A “NOTE” TO IT). THE METHOD IS BRIEFLY DESCRIBED AND IT IS STATED WHETHER IT IS 2-D OR FULLY 3-D. “PERFORMANCE” REPORTS THE EVALUATION METHOD AND THE RESULTS
B. Lung Cancer
Much of published CAD research is focused on detecting lung cancer, which is the main cause of cancer deaths, especially since the start of a number of lung cancer CT screening trials and the resulting growth of related literature. The main focus over the past years has been the automated detection of pulmonary nodules; other areas of research cover nodule size measurements and the characterization of nodule appearance. Both are used to attempt to determine the malignancy of a nodule. Each of these three areas is described in more detail in subsections IV-B-1 through IV-B-3 below.
In general, it can be concluded that for the development of systems that can be used in clinical practice it is necessary that the algorithms are trained and tested on larger numbers of cases. Collecting such amounts of data cannot be done at a single site. The availability of common databases would advance the development of these systems substantially. It will also be increasingly important to measure system performance in a clinical setting and evaluate the usefulness of CAD as a second reader for both experienced and less experienced radiologists.
1) Lung Cancer: Detection of Pulmonary Nodules: Over the years covered in this survey (1998–2004), the number of articles on nodule detection has increased every year. However, many of these articles deal with a small extension to a previously published method, or a different database is used for testing. In those cases, only a single version is discussed in this section.
As a rule, nodule detection systems consist of several steps:
a) preprocessing; b) candidate detection; c) false positive re-
duction; d) classification. Most often the preprocessing stage is
used to restrict the search space to the lungs and to reduce noise
and image artifacts. As will be discussed below, there are many
392 IEEE TRANSACTIONS ON MEDICAL IMAGING, VOL. 25, NO. 4, APRIL 2006
TABLE III
THE REPORTED BEST PERFORMANCES OF VARIOUS NODULE DETECTION SYSTEMS AND THE FP REDUCTION TECHNIQUES EMPLOYED. IF THE PERFORMANCE IS OBTAINED BY MODIFYING A PREVIOUSLY REPORTED SYSTEM, THE ORIGINAL ALGORITHM IS GIVEN IN THE THIRD COLUMN. THE DATA PARAMETERS (NUMBER OF SCANS, NUMBER OF PATIENTS, SLICE THICKNESS, AND RADIATION DOSE) MENTIONED IN THE “DATA” COLUMN REFER TO THE DATA USED TO OBTAIN THE LISTED BEST PERFORMANCE
ways to generate nodule candidates, but amongst those candidates there are always many (obvious) false positives. Therefore,
one tries to cheaply and drastically reduce the number of these
FPs [step c)] before going to the more computationally expen-
sive classification step [step d)]. Still, after the classification
stage, many false positives exist, and much of current research
on nodule detection is in fact not focused on the detection part,
but on FP reduction instead. Stages b)–d) of nodule detection
systems will be covered in the following subsections.
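As a toy illustration of stages b)–d), the following sketch detects bright connected blobs, discards tiny specks as a cheap FP-reduction rule, and applies a trivial "compactness" score in place of a trained classifier. All thresholds and the 5×5 "image" are illustrative assumptions, not values from any cited system.

```python
def connected_components(mask):
    """4-connected components of a binary 2-D grid, via flood fill."""
    h, w = len(mask), len(mask[0])
    seen, comps = set(), []
    for y in range(h):
        for x in range(w):
            if mask[y][x] and (y, x) not in seen:
                stack, comp = [(y, x)], []
                seen.add((y, x))
                while stack:
                    cy, cx = stack.pop()
                    comp.append((cy, cx))
                    for ny, nx in ((cy+1, cx), (cy-1, cx), (cy, cx+1), (cy, cx-1)):
                        if 0 <= ny < h and 0 <= nx < w and mask[ny][nx] and (ny, nx) not in seen:
                            seen.add((ny, nx))
                            stack.append((ny, nx))
                comps.append(comp)
    return comps

def detect_nodules(image, threshold=100, min_size=2, score_cutoff=0.5):
    # b) candidate detection: bright connected blobs.
    mask = [[v >= threshold for v in row] for row in image]
    candidates = connected_components(mask)
    # c) cheap false-positive reduction: discard tiny specks.
    candidates = [c for c in candidates if len(c) >= min_size]
    # d) classification: toy compactness score (real systems use many
    # gray-level and shape features plus a trained classifier).
    accepted = []
    for c in candidates:
        ys = [p[0] for p in c]
        xs = [p[1] for p in c]
        bbox_area = (max(ys) - min(ys) + 1) * (max(xs) - min(xs) + 1)
        if len(c) / bbox_area >= score_cutoff:
            accepted.append(c)
    return accepted

image = [
    [0,   0,   0, 0,   0],
    [0, 120, 130, 0,   0],
    [0, 125, 140, 0,   0],
    [0,   0,   0, 0, 150],  # isolated single bright voxel: removed as an FP
    [0,   0,   0, 0,   0],
]
print(len(detect_nodules(image)))  # -> 1 surviving candidate
```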
Table III gives an overview of the different CAD models that
are covered in this survey. The performances given are the best
performances if previous models have been extended, or if new
databases have been used in later articles. If there is explicit
mention of a false positive reduction scheme in the study de-
scription, this is mentioned in the table.
a) Candidate detection schemes: For finding nodule can-
didates, the following techniques have been reported: multiple
gray-level thresholding –, mathematical morphology
–, genetic algorithm template matching of Gaussian
spheres and discs , , clustering –, con-
nected component analysis of thresholded images , ,
thresholding –, detection of (half) circles in thresh-
olded images , gray-level distance transform , and
filters enhancing (spherical) structures –.
The multiple gray-level thresholding technique tried to find
connected components of similar intensity, and to remove
attached vessels. For the schemes described, mathematical
morphology covered a number of convolution filters: variable
N-Quoit filter , , , , selective marking and
depth constrained cost map , top-hat and sieve filter .
The used clustering methods differed in clustering technique
and in the features used for clustering: Kanazawa et al. ,
Kubo et al. , Yamada et al.  applied fuzzy clustering
to intensity values and Gurcan et al. used k-means clustering on intensities in the original and median-filtered images.
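The intensity-clustering idea can be sketched with a plain k-means on scalar HU values; the two-cluster setup, the initial centers, and the toy intensities below are illustrative assumptions (not parameters from the cited work).

```python
def kmeans_1d(values, centers, iters=10):
    """Plain k-means on scalar intensities; returns (centers, labels)."""
    for _ in range(iters):
        # Assign each value to its nearest center.
        labels = [min(range(len(centers)), key=lambda k: abs(v - centers[k]))
                  for v in values]
        # Recompute each center as the mean of its members.
        for k in range(len(centers)):
            members = [v for v, lab in zip(values, labels) if lab == k]
            if members:
                centers[k] = sum(members) / len(members)
    return centers, labels

# Lung voxels: dark parenchyma around -850 HU, bright nodule/vessel voxels near 0 HU.
hu = [-870, -860, -855, -840, -20, -10, 0, -850, -865, -5]
centers, labels = kmeans_1d(hu, centers=[-800.0, -100.0])
bright_cluster = max(range(2), key=lambda k: centers[k])
candidates = [v for v, lab in zip(hu, labels) if lab == bright_cluster]
print(sorted(candidates))  # -> [-20, -10, -5, 0]
```

The voxels in the bright cluster would then be grouped into spatial candidates for the later FP-reduction and classification stages.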
b) Reduction of false positives: A good example of the
shift in focus from nodule detection to false-positive reduction
is the set of papers published by Armato and co-workers. In
, their nodule detection scheme was described, containing
the steps: preprocessing, candidate detection, and classification.
In the first step, the lungs were segmented. Candidates were
found by multiple gray-level thresholding. Using a number
of geometric and gray-level features and linear discriminant
analysis (LDA), the results of leave-one-out classification were
the detection of 70% of the nodules marked by experts, with
on average 3 FPs per image slice (i.e., about 80–90 FPs per
scan). In later papers, Armato and co-workers concentrated on
schemes to reduce the number of FPs: rule-based , ,
LDA , , and massive training ANN , . The
most successful of these techniques reported a nodule detection
rate of 80.3% with on average 4.8 FPs/scan as opposed to
27.4 FPs/scan without FP reduction .
Saita et al.  added an FP reduction step to the nodule de-
tection algorithm by Oda et al. . For FP reduction promi-
nent anatomical structures in or near the lungs were (roughly)
extracted: bones, mediastinum, and vessels. Usage of the posi-
tions of nodule candidates relative to these structures resulted in
a 100% detection rate with 2.6 FPs/scan. The original paper of
Oda et al.  reported 59% detected at 19.2 FPs/scan.
The model by Gurcan et al.  was extended with FP re-
duction by Ge et al. : They added a 3-D gradient field as
an extra feature for the LDC. As a result, the area under the ROC curve increased from 0.91 to 0.93.
Lee and co-workers published their detection method in
, and introduced an FP reduction step in . In the
latter work, they added five new gray-level features and tuned
the threshold parameters of the original model. The sensitivity
of the model remained 72.4%, but the FP rate dropped from
30.8 to 5.5 per scan.
c) Classification: A number of classification techniques are used in the final stage of the nodule detection systems: rule-based or linear classifier , , , ,
, , ; LDA , ; template matching
; nearest cluster , ; Markov random field ;
neural network , ; Bayesian classifier , .
The CAD schemes described in , –, ,
 did not state explicitly how a label was determined.
The most common features for classification were gray-level
features, shape descriptions, and spatial and size information.
A growing area of interest related to nodule detection is that
of nodule matching. This specific problem pertains to the local-
ization of previously detected nodules in a follow-up scan.
Brown et al.  described the construction of an anatom-
ical a priori fuzzy model which was used in combination with
image primitives matching to find nodules. From the results ob-
tained from a scan, a patient-specific model was tuned which
could be applied to follow-up scans.
Ko and Betke  used multiple gray-level thresholding to
find nodule candidates in both original and follow-up scans.
Using position, shape, and volume information a rule-based
classification found nodule matches between scans.
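A rule-based matching step of the kind just described can be sketched as a greedy pairing on position and volume. The distance and volume-ratio tolerances below are illustrative assumptions, not values from the cited work.

```python
def match_nodules(baseline, followup, max_dist=10.0, max_vol_ratio=2.0):
    """Greedily pair nodules (dicts with 'pos' and 'vol') between two scans,
    accepting a pair only if it is close in space and similar in volume."""
    matches = []
    used = set()
    for i, a in enumerate(baseline):
        best, best_d = None, max_dist
        for j, b in enumerate(followup):
            if j in used:
                continue
            d = sum((p - q) ** 2 for p, q in zip(a["pos"], b["pos"])) ** 0.5
            ratio = max(a["vol"], b["vol"]) / min(a["vol"], b["vol"])
            if d <= best_d and ratio <= max_vol_ratio:
                best, best_d = j, d
        if best is not None:
            used.add(best)
            matches.append((i, best))
    return matches

scan1 = [{"pos": (10, 20, 30), "vol": 50.0}, {"pos": (40, 40, 40), "vol": 20.0}]
scan2 = [{"pos": (41, 39, 40), "vol": 25.0}, {"pos": (11, 21, 30), "vol": 60.0}]
print(match_nodules(scan1, scan2))  # -> [(0, 1), (1, 0)]
```

In practice the scans would first be brought into a common coordinate frame by the registration techniques discussed earlier in this survey.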
With the FDA approval1 of several commercial CAD systems for nodule detection, it seems that CAD in this field has reached an acceptable performance. This performance is not perfect, but the sensitivity of CAD and the achievable workload reduction for the radiologist create a demand for usage of these systems in CT screening as well as daily hospital practice. For future work, research leading to improved detection of ground glass opacities should have top priority.
2) Lung Cancer: Characterization of Pulmonary Nodules: The pulmonary nodule is a dilemma for the radiologist. Most large nodules (diameter > 1 cm) in subjects at high risk for cancer are malignant, but current CT scanners allow for the detection of small nodules with diameters well below a centimeter. Such nodules are extremely common and the vast majority of them are benign. Follow-up procedures to determine malignancy are often invasive, and induce risks for the patient. It is, however, of crucial importance for patient
management to determine as soon as possible whether nodules
are malignant, because symptoms of lung cancer often do not appear until the malignancy is advanced and unresectable. As a result, the 5-year survival for a patient diagnosed with lung cancer is only 10%–15%, but for patients in whom early stage lung cancer has been completely resected, this figure increases considerably.
Several attempts have been made to design computer systems that estimate the probability of malignancy (pCa) of a nodule. For such systems it is obviously of crucial importance to know which characteristics point toward malignancy. This is also important for radiologists, and the subject of clinical research. It is becoming clear
that rules of thumb that apply to larger nodules do not always hold for small nodules.
Clinical information such as old age, male sex, a history of
smoking, a history of cancer, and exposure to certain chemical
compounds increases the pCa, while other factors decrease this
probability. Bayesian analysis to include this information in the
diagnostic process was proposed by , , but applied to
nodule characterization in chest radiographs only.
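The Bayesian combination of such risk factors can be sketched as a simple odds update. The prior probability and the per-factor likelihood ratios below are made-up illustrative numbers, not values from the cited studies, and the factors are assumed independent.

```python
def pca_from_factors(prior, likelihood_ratios):
    """Update prior odds of malignancy with one likelihood ratio per
    (assumed independent) clinical factor; return the posterior probability."""
    odds = prior / (1.0 - prior)
    for lr in likelihood_ratios:
        odds *= lr  # LR > 1 raises the probability, LR < 1 lowers it
    return odds / (1.0 + odds)

# Example: a low prior, raised by a smoking history (LR 2.0),
# lowered by young age (LR 0.5). These LRs are hypothetical.
p = pca_from_factors(prior=0.10, likelihood_ratios=[2.0, 0.5])
print(round(p, 3))  # -> 0.1 (the two factors cancel out in this toy case)
```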
The most important characteristics appear to be nodule size
and growth rate. To accurately estimate this, precise segmen-
tation of nodules is essential. This topic is further discussed in
Section IV-B3. Another possibility is to perform a contrast-enhanced CT scan. A malignant tumor with a diameter over 2 mm requires vascularization, and contrast injection therefore leads to enhancement in the nodule. To accurately determine enhancement in small nodules, again, precise segmentation is essential, as was shown by Wormanns et al. Another noninvasive
follow-up procedure is PET with 18-fluorodeoxyglucose .
1The FDA is the government agency responsible for regulating medical de-
vices in the USA.
TABLE IV
STUDIES ON NODULE CHARACTERIZATION. FOR EACH STUDY THE NUMBER OF BENIGN AND MALIGNANT NODULES IS GIVEN AND THE TYPE OF DATA IS LISTED (THIN-SLICE REFERS TO AROUND 1-mm THICKNESS, THICK SLICES ARE 3 mm OR MORE). THE FEATURES AND CLASSIFIER ARE BRIEFLY DESCRIBED, THE TYPE OF ANALYSIS (2-D/3-D) IS INDICATED, AND REPORTED RESULTS ARE SUMMARIZED
Computer systems that analyze PET or contrast CT have to our knowledge not yet been investigated. In this section, we restrict our attention to the estimation of pCa from noncontrast CT scans.
Table IV lists the studies on nodule characterization. All studies used a combination of features to classify each nodule. In that way, they indirectly encoded radiologists’ knowledge
about indicators of malignancy. For example, certain calcification patterns indicate benignity, and the presence of fat in a nodule points toward benign hamartoma. The border of nodules can be smooth, lobulated (consisting of multiple lobes), or spiculated. The former points toward benignity, the latter toward malignancy; moreover, any nodule deviating significantly from a spherical shape is probably benign. It also appears that concave margins and a polygonal shape are typical for benign lesions.
The type of data used is important for characterization; an ill-defined small lesion on thick slices may look very different on thin slices, and 3-D analysis is hardly possible using thick slices. Nevertheless, several studies used thick slices only (Table IV). The use of low dose may also negatively affect the possibility to make a reliable diagnosis, especially for segmentation (Section IV-B3). It
has also been reported that the use of different reconstruction
filters affected the likelihood that radiologists rate a nodule as
calcified . When human ratings of nodule characteristics
are used to train computer systems, it is important to realize that
such ratings are not always reliable and reproducible .
A clear trend is to switch from 2-D analysis on thick slices
to 3-D analysis on thin-slice data. Recent studies tended to use
more data, but the size of training and testing databases re-
mains a limitation; all studies resorted to leave-one-out evalu-
ation strategies. Often good results were reported, but comparisons between systems cannot be made as no standard database is employed. Performance depends heavily on the data; when the performance of radiologists was measured, it varied considerably. It is interesting to note that in some studies stand-alone CAD systems outperformed radiologists.
Several systems for content-based image retrieval (CBIR)
that retrieve similar cases from a database for a given nodule
at hand have also been proposed. Display of similar cases with
known classification may help radiologists to make a diagnosis.
Modest improvements in observer performance when such
similar cases are provided were reported in .
Signs that are not in the direct neighborhood of the nodule have not yet been exploited by CAD systems in general. However, often findings in the direct vicinity of a nodule are important, such as the presence of peripheral subpleural lesions and pleural tags, and these could be integrated in
more advanced CAD systems.
When malignancy is suspected, staging the tumor is required. Computerized image analysis in tumor staging has not been attempted yet.
Further research on nodule characterization should be fo-
cused on the integration of multiple features, extracted from
both patient history and several examinations—not just a single
CT scan—in order to compute a reliable pCa.
3) Lung Cancer: Nodule Size Measurements: The size of a nodule is an important indicator of malignancy, and its growth rate is another important indicator. Benign nodules typically have either a very small (< 1 month, e.g., for inflammation or pneumonia) or a very large doubling time (> 16 months). The volume doubling time for cancers is typically between 40 and 360 days. To measure nodule size and growth rate, accurate segmentation is needed. Segmentation is also required for nodule characterization. Early methods produced 2-D outlines, but recent algorithms segment in 3-D. The major
industrial vendors currently all provide automatic nodule seg-
mentation in their chest workstations, although they typically
require a manually indicated seed point.
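The doubling-time criterion can be made concrete: assuming exponential growth, the volume doubling time (VDT) follows from two volume measurements taken a known number of days apart. The example volumes are illustrative.

```python
import math

def doubling_time(v1, v2, dt_days):
    """Volume doubling time in days: dt * ln 2 / ln(v2 / v1),
    assuming exponential growth between the two measurements."""
    return dt_days * math.log(2) / math.log(v2 / v1)

# A nodule growing from 100 to 150 mm^3 in 90 days:
vdt = doubling_time(100.0, 150.0, 90)
print(round(vdt))  # -> 154 days, within the typical malignant range of 40-360 days
```

Note how strongly the result depends on the two volume estimates, which is exactly why the segmentation accuracy and reproducibility issues discussed here matter.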
It is difficult or impossible to obtain a ground truth for nodule
segmentations in real clinical data. Manual outlines, provided
by experts, have been used often, but it has been shown that
there can be interobserver and intraobserver differences in
such outlines . Size measurements can also be affected
by slice thickness  and acquisition parameters .
Several studies have, therefore, used phantoms for algorithm
validation and some of these phantoms even contain structures
mimickingpart-solidnodules .However,itis verydifficult
to realistically model the wide variety of pulmonary structures
encountered in patients using phantoms.
Segmentation algorithms should also be evaluated in terms of their reproducibility. Most published algorithms require a manually indicated seed point, and a slightly different seed point may yield a different segmentation. This has been investigated in several studies. More importantly, it has been investigated whether consecutive scanning of the same subject yields the same volume measurements. Reasons for measurement deviations in repeated scans include image noise (especially evident in low-dose scans); deviations of up to 20% have been reported using commercial software with manual interaction by a radiologist. Automated measurements may differ, for example when compared to manual segmentations, but often such differences are systematic. This was observed by Mullally et al.
The excellent contrast between tissue and air on CT makes
segmentation of an isolated solid nodule of reasonable size a
simple task. But difficulties arise when a) the nodule is small, so that partial volume effects (PVE) play an important role; b) the nodule is connected to vasculature or other structures such as the pleura, fissures or abnormalities; c) the nodule is part-solid or nonsolid, in which case it is hard to separate from surrounding parenchyma, especially in noisy data (typical for low-dose scans). The algorithms described in this section have been designed to cope with difficulties a) to c).
The PVE can be dealt with by supersampling the VOI around
the nodule. A discussion of different ways to do this is given
in ; a comparison between binary methods and methods
that take the PVE into account is given in , . Typi-
cally, the segmentation of solid nodules is performed by a ded-
icated algorithm that performs thresholding or region growing
while constraining the shape of the grown nodule, or by template matching or, in the case of 2-D processing, dynamic programming (typically used in older studies). In one study, fixed and variable thresholds and shape-based segmentation were compared. Attached structures are typically removed by post-processing, usually with a sequence of mathematical morphology operators or basic image processing operations involving connected component labeling, distance transforms, etc. Vessels and pleura each had their own sequences.
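A minimal sketch of the thresholding/region-growing family just described: seeded growth with a simple size constraint standing in for the shape constraints of the dedicated algorithms. The threshold, size limit, and toy image are illustrative assumptions.

```python
def region_grow(image, seed, threshold, max_size=50):
    """Grow a 4-connected region of voxels >= threshold from a seed point."""
    h, w = len(image), len(image[0])
    region, stack = set(), [seed]
    while stack and len(region) < max_size:  # size constraint bounds the growth
        y, x = stack.pop()
        if (y, x) in region or not (0 <= y < h and 0 <= x < w):
            continue
        if image[y][x] >= threshold:
            region.add((y, x))
            stack += [(y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)]
    return region

image = [
    [0,   0,   0,   0],
    [0, 120, 130,   0],
    [0, 125, 140, 110],   # bright blob with a one-voxel stub at (2, 3)
    [0,   0,   0,   0],
]
print(len(region_grow(image, seed=(2, 2), threshold=100)))  # -> 5 voxels
```

Real systems would follow this with the morphological post-processing described above to strip attached vessels and pleura before measuring the volume.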
Kostis et al.  described 3-D algorithms for four dif-
ferent types of nodules, depending on their attachments. The
effect of varying parameters was described in detail and evalu-
ation was on 16 scans for which a follow-up is available. The
method appeared to improve upon a 2-D segmentation method
from the same research group . Fan et al.  described
a method based on template matching. The union of the tem-
plate and a thresholded VOI was taken and the segmentation
was refined using certain rules. The method was extended and
used in a recent study  which described a complete nodule
detection, matching and size measurement system. Shen et al.
 described an algorithm for nodules attached to the chest
wall. Using 2-D projections, the chest wall was removed from
the segmentation. Fetita et al.  used dedicated sequences
of mathematical morphological operators to segment isolated,
juxtavascular and peripheral nodules but evaluation was only
performed qualitatively. Kuhnigk et al.  focused specifi-
cally on large, not necessarily spherical tumors with possibly
complex attachments to vessels and pleura. Their algorithm had
reasonably low (4.7%) interscan variability.
Part-solid and nonsolid nodules require algorithms designed to deal particularly with such nodules. Okada et al.  used
a two-step method based on scale-space analysis that first de-
scribed the nodule by an ellipsoid which was subsequently de-
formed by attracting the boundary to the gradient. Zhang et al. proposed another approach to such nodule segmentation. In both cases, further evaluation is necessary to better assess the value of these techniques.
Finally, there are alternative approaches that do not directly segment the nodule but infer growth from the Jacobian matrix of an elastic registration of two consecutive scans of the nodule.
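The Jacobian-based idea can be written down compactly. If the elastic registration yields a deformation $T(x) = x + u(x)$ mapping the baseline scan onto the follow-up scan, the local volume change is the Jacobian determinant of $T$, and the nodule's relative growth is its average over the baseline nodule region $\Omega$ (a sketch of the general principle, not the exact formulation of the cited work):

```latex
\frac{V_2}{V_1} \;=\; \frac{1}{|\Omega|} \int_{\Omega} \det\!\big(\nabla T(x)\big)\, dx
```

No explicit segmentation of the follow-up nodule is needed; only the baseline region $\Omega$ and the deformation field enter the estimate.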
Given the large number of algorithms proposed, a compara-
tive study on a common database with a reliable reference stan-
dard would be very worthwhile. The data collected by the Lung
Image Database Consortium  could be used for such a
study. An advanced system for nodule segmentation will likely
consist of multiple algorithms each tailored for a particular type
of nodule and attachments and a recipe for choosing the best
algorithm for a nodule at hand. Systems that allow more user interaction may also prove valuable in clinical practice and deserve more research. Other
future challenges are automatic correction for inspiration level
and suppression of inaccuracies caused by acquisition noise.
C. Pulmonary Embolism
Suspected acute PE is the main indication for pulmonary CT
angiography (CTA, chest CT with arterial contrast). PE is a life-
threatening complication which should be diagnosed promptly
and treated with anti-coagulants. Thin-slice CT is the preferred
modality when PE is suspected because it has high sensitivity,
relatively high specificity and it can establish an alternative di-
agnosis in up to one third of patients. On CTA, emboli appear as filling defects: intraluminal areas of low attenuation amid areas of enhancement of pulmonary arteries. Acute emboli can get
trapped at bifurcations or in peripheral arteries. Complete oc-
clusion of vessels by clots is possible but residual perfusion in
the periphery is more common. This suggests two ways to de-
tect PE: the direct detection of the clots from the HU values
in the vessels or indirect detection by localization of perfusion
defects. Herzog et al.  presented a visualization method
for the latter. The lungs were segmented by a simple algorithm,
vessel and bronchi were excluded by thresholding, and the av-
erage local attenuation in the remaining voxels was superimposed on the original images. Although this work aimed at presenting these results to a radiologist through visualization, these measurements could also prove valuable for a CAD system.
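The perfusion-map idea can be sketched on a 1-D strip of lung: after excluding vessel voxels, the mean attenuation of each voxel's local neighborhood summarizes residual perfusion, with low values suggesting a defect. The window radius and toy HU values are illustrative assumptions.

```python
def local_mean_attenuation(lung_hu, vessel_mask, radius=1):
    """Mean HU in a sliding window, ignoring vessel voxels
    (None where no non-vessel voxel falls in the window)."""
    out = []
    for i in range(len(lung_hu)):
        window = [lung_hu[j]
                  for j in range(max(0, i - radius), min(len(lung_hu), i + radius + 1))
                  if not vessel_mask[j]]
        out.append(sum(window) / len(window) if window else None)
    return out

hu     = [-850, -860, -300, -940, -950, -945]   # -300: vessel; right side: hypoperfused
vessel = [False, False, True, False, False, False]
print(local_mean_attenuation(hu, vessel))
# -> [-855.0, -855.0, -900.0, -945.0, -945.0, -947.5]
```

In 3-D the same averaging would run over a spherical neighborhood, and the resulting map could be thresholded or fed to a classifier rather than only visualized.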
The few published CAD systems in this area have focused
on direct detection. Masutani et al. developed a scheme for automated PE detection that was tested on 19 scans with 3.0 mm slices. A sensitivity of 85%
was obtained at 2.6 false positives per case but scans with poor
image quality in which vessel segmentation failed were ex-
cluded. Vessels were segmented with standard techniques such
as hysteresis thresholding, region growing and mathematical
morphology. For each voxel in the vessel segmentation, the
HU value and a local contrast measure were determined and
shape features based on second order derivatives were com-
puted. Rules were used to select suspicious voxels. These were
grouped into PE candidates and size measurements were added
to the feature set. Rules with variable thresholds provided the
final classification. Zhou et al.  developed a segmentation
scheme for pulmonary vessels based on histogram analysis and
an expectation-maximization algorithm followed by a tracking
algorithm. The vessel tree was subdivided into distinct regions
and intensity, shape and edge features were computed and
entered in a rule-based classifier to detect PE. When tested on 6
cases of thin-slice CT data, 58% of manually indicated emboli
were detected at the expense of 10.5 false positives per case.
Pichon et al. presented a method to segment the vessel tree, after which the intraluminal values were used to color-code a rendering
of the arterial tree to make locations of clot more evident. The
main purpose of this work was to generate these renderings for
use in clinical practice, but with a simple threshold on the result
of their algorithm, component labeling and constraints on size
they were also able to construct a CAD system with 86% PE
detection sensitivity at the expense of 2 false positives per TP.
PE appears to be an area in which CAD has high potential,
because of the encouraging results reported so far and the enor-
mous clinical importance of prompt diagnosis. PE is, after cardiovascular disease and cancer, one of the most common causes of death, responsible for over 50 000 deaths in the USA alone every year. Several major
industrial vendors are, therefore, active in this area and have re-
cently demonstrated prototypes of PE CAD systems. Consid-
ering the relatively small number of studies published so far,
the small number of cases on which they have been tested, the
modest performance obtained, and the simple sets of features
and classifiers employed, there should be ample room for further research. Apart from detection, quantification of PE is also important; automated quantification of PE has not been reported yet.
D. Airways Diseases (Bronchial Signs)
Signs of airways diseases that the bronchi may exhibit on
thin-slice CT examinations are wall thickening and dilation or
narrowing , . These three clear signs lend themselves
well to automated analysis. Thus far, the work on this has centered on 2-D measurements in transverse slices, in analogy to current methods of visual assessment
by a radiologist.
The bronchi are usually of the same size as their accompanying artery. Normal bronchovascular cross-sectional pairs, therefore, appear as two adjacent, similarly sized circles, one open (the bronchus) and one closed (the artery). It is the size relation between
vessel and bronchus that indicates bronchial narrowing or di-
latation. A few studies focus on the detection of normal  as
well as abnormal  bronchovascular pairs. In other studies
these pairs were indicated manually and automated size mea-
surements were performed , , , . Such size
measurements always incorporated a correction for nonperpen-
dicular cross sections and were always validated on phantoms.
Nakano et al.  correlated bronchial size measurements to
clinical parameters of COPD in smokers.
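The two measurements underlying these signs are easy to state explicitly: the bronchoarterial ratio (lumen diameter over artery diameter) for dilatation, and the wall-area percentage for wall thickening. The > 1 cutoff and the example diameters are common illustrative conventions, not values from the cited studies.

```python
import math

def assess_pair(bronchus_lumen_d, artery_d, dilated_cutoff=1.0):
    """Classify a bronchovascular pair by its bronchoarterial (BA) ratio."""
    ratio = bronchus_lumen_d / artery_d
    return "dilated" if ratio > dilated_cutoff else "normal"

def wall_area_percent(lumen_d, outer_d):
    """Airway wall area as a percentage of the total airway cross-section,
    modeling both lumen and outer wall as circles."""
    lumen_area = math.pi * (lumen_d / 2) ** 2
    outer_area = math.pi * (outer_d / 2) ** 2
    return 100.0 * (outer_area - lumen_area) / outer_area

print(assess_pair(2.0, 4.0))                   # -> normal
print(assess_pair(5.0, 4.0))                   # -> dilated
print(round(wall_area_percent(4.0, 6.0), 1))   # -> 55.6 (thick-walled airway)
```

The correction for nonperpendicular cross sections mentioned above would be applied to the diameters before these formulas are evaluated.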
Two-dimensional analysis reflects the way in which a radiol-
ogist currently assesses the airways. This restricts the measure-
ments to specific favorable cross sections in a transverse slice.
Three-dimensional analysis will soon become the standard in
this area, however. The combination of measurement and 3-D
bronchial segmentation techniques (Section II-B) will provide
more data for the analysis which can be of added clinical value.
Techniques developed for comprehensive virtual bronchoscopy
systems, e.g.,  provide the basis for 3-D airway analysis.
E. Diffuse Lung Disease
There is a large group of disorders that primarily affect the
lung parenchyma. This group is referred to by the generic term
diffuse parenchymal lung disease (DPLD), but terms such as interstitial lung disease are also encountered. The DPLDs account for about 15% of respiratory practice.
Thin slice CT plays an important role in the detection, diag-
nosis and follow-up of these disorders. They are characterized
by specific abnormal findings mostly texture-like in appearance
and it is the cooccurrence of several such findings that can point
toward a specific diagnosis .
It is for this reason that computer analysis of DPLD is com-
monly viewed as a texture analysis problem. Systems designed
to quantify certain lung diseases and/or differentiate between
them, are without fail based on the vector space paradigm .
This means that ROIs are represented by features that are input
to a classifier which is trained to (re)produce category labels.
Most often the application of these systems revolves around the
detection of abnormal tissue and its simultaneous classification
into several textural categories. One study focuses
exclusively on the detection of abnormalities without further
classification. A different field of application (employing similar techniques) is image retrieval, in which one group has been active.
With respect to implementation, various choices are made regarding ROI size, features, and classifier. ROI sizes typically lie in the range of 31 × 31 to 96 × 96 pixels. The smallest ROIs are 9 × 9 pixels; the largest encompass an entire lung field. For the features representing
the ROIs, a discriminatory set can be automatically chosen by
supervised feature selection from families of textural features
known from pattern recognition theory, such as histogram sta-
tistics, features from filterbanks or cooccurrence matrices, run-
length parameters or fractal features –, , .
Alternatively, features may be designed for the task at hand
, or a combination of both can be used . Kauczor et
al. use a combination of two artificial neural networks into
which the raw pixel values in the ROIs are input as features.
The ANN (also employed by Uchiyama et al.) is only one example of a classifier; we also find LDCs, Bayesian classifiers, and k-nearest-neighbor classifiers.
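The vector-space paradigm described above reduces each ROI to a feature vector that a trained classifier maps to a texture label. The sketch below computes only simple histogram statistics; a full system would add co-occurrence, run-length, or filter-bank features, and the -900 HU "air" cutoff and toy ROIs are illustrative assumptions.

```python
def histogram_features(roi):
    """Mean, variance, and an 'air fraction' for a flat list of HU values."""
    n = len(roi)
    mean = sum(roi) / n
    var = sum((v - mean) ** 2 for v in roi) / n
    air_fraction = sum(1 for v in roi if v < -900) / n
    return (mean, var, air_fraction)

normal_roi    = [-850, -860, -855, -845]
emphysema_roi = [-980, -990, -970, -985]
f_n = histogram_features(normal_roi)
f_e = histogram_features(emphysema_roi)
print(f_n[2], f_e[2])  # -> 0.0 1.0 : the air fraction separates the two ROIs
```

A k-nearest-neighbor or Bayesian classifier, as used in the studies above, would then be trained on many such labeled feature vectors.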
Table V provides a summary of the described studies with respect to these aspects.
Descriptions of purpose and the lists of used classification cate-
gories show considerable variation, illustrating that in this field
there is no consensus yet on a common goal and exact defini-
tions. Although follow-up measurements for the quantification of disease progression are mentioned as a possible area of interest, this task is not explicitly performed in any of the studies.
A noticeable common problem to all studies is the estab-
lishment of a reliable reference standard. This is due to the
fact that texture classification in thin-slice CT scans is a com-
plex task even for experts: Low intraobserver and interobserver
agreements of about 50%  reflect this. We see that in most
studies the opinion of a single expert is used as a reference stan-
dard. In some cases, multiple radiologists are involved and a
consensus is taken. For samples on which there is no agreement
between the radiologists, the computer may be regarded as an-
other observer. It can then be ascertained whether its behavior
is statistically equivalent to that of the radiologists, avoiding the
use of an explicit reference standard.
For completely automated differential diagnosis of DPLD,
much more information will need to be incorporated into the
computer systems than is the case at present. The scans need to be analyzed in their entirety: the full 3-D character of modern scan data should be exploited, and information from the entire scan (e.g., from all regions of interest) should be combined to come to a single diagnosis. Such systems are not yet available. Shyu et al. are the only group incorporating forms
of global and anatomical knowledge, combining features from
severalpathologicalregionsand anatomical indicatorsperslice.
The regions are, however, manually delineated rather than au-
tomatically detected. Recently, the first work appeared that analyzes 3-D objects in complete scans, although these 3-D
objects were combined from multiple candidates detected be-
forehand on 2-D slices. Another source of information that is of
paramount diagnostic value but is not being taken into account
so far is clinical data. Fukushima et al.  showed that an ex-
pert system can come to reasonable diagnoses of DPLD when
image analysis results (in their case rated by radiologists, not
automatically) are augmented by clinical parameters.
We conclude that with the systems described in this section
the first steps toward more advanced processing schemes have
been taken, but that in computer analysis of DPLD, the question
on what exactly to aim for and how to achieve it is still open.
This final section consists of four parts. First we summarize
what has been achieved so far and what is the state of the art.
Then we identify a number of challenges for academic research.
Third, we specifically focus on the discrepancy between theo-
retical availability of an algorithm and its availability in clinical
routine, and try to explain its causes. Finally, we provide an outlook on new developments that can be expected in the coming years.
A. State of the Art
Initially, the clinical use of computer analysis of CT of the
lungs did not reach much further than the quantification of em-
physema, a direct consequence of its relatively simple detec-
tion on CT. Simple tools for emphysema quantification are now
available on commercial workstations.
Ongoing improvements in scanner speed and quality have
broadened the field of applications. Currently, the emphasis
lies on detection and analysis of pulmonary nodules. We see
CAD for lung cancer follow in the footsteps of CAD for breast
cancer: Several commercial systems for automated nodule
detection have acquired FDA approval and research effort is
shifting from detection to characterization and follow-up. In
this field, industrial R&D efforts probably outweigh those of
academic research. Commercial workstations are now capable
of automatic nodule volumetry and nodule detection. These workstations perform segmentation internally, but beyond complete lung segmentation, segmentation techniques are not yet directly available for clinical use.
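The volumetric measures these workstations report reduce, at their core, to counting segmented voxels and converting growth between two scans into a volume doubling time under an exponential growth assumption. A minimal sketch (the voxel size and volumes are illustrative):

```python
import numpy as np

def nodule_volume_mm3(nodule_mask, voxel_size_mm=(0.7, 0.7, 0.7)):
    """Volume of a segmented nodule: voxel count times voxel volume."""
    return int(np.count_nonzero(nodule_mask)) * float(np.prod(voxel_size_mm))

def doubling_time_days(volume_t0, volume_t1, interval_days):
    """Volume doubling time under exponential growth:
    DT = interval * ln(2) / ln(V1 / V0)."""
    return interval_days * np.log(2) / np.log(volume_t1 / volume_t0)

# A nodule that doubles its volume in 90 days has DT = 90.
print(doubling_time_days(500.0, 1000.0, 90))   # 90.0
```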
IEEE TRANSACTIONS ON MEDICAL IMAGING, VOL. 25, NO. 4, APRIL 2006

[Table: studies on the (texture) analysis of DPLD. For each study, a description of the purpose is given and of the categories involved (disease or textural); the data used are described at the level of subjects/scans as well as at the level of slices/ROIs; it is stated how a reference standard was set, and a characteristic result is given.]
Apart from the pulmonary nodule, the lungs may contain
much more pathology. Segmentation, registration, and further image analysis of the various anatomical structures have been researched, as is evident from this survey, but have not reached the status of routine clinical use.
B. Research Challenges
For all methods of analysis, the present trend is to move from (axial) 2-D to 3-D processing, born out of possibility as much as out of necessity.
This inevitably leads to a need to create suitable reference
standards in 3-D. This represents a major hurdle for evaluation
of tasks such as segmentation, for which the reference standard
is classically set using manually traced outlines. It is a prohib-
itively large amount of work to create manual delineations of
structures of interest in thorax exams that typically consist of
over 300 slices. More effort should be spent on the development of interactive tools. Apart from their use in setting reference standards efficiently, such tools can be developed in such a way as to enforce consistency of manual segmentations in 3-D, as manual delineations performed on 2-D slices often lack such consistency over larger distances.
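Once such a reference standard exists, agreement between an automated segmentation and the manual delineation is commonly summarized by a volume overlap measure such as the Dice coefficient; a minimal sketch on binary masks (the toy masks are illustrative):

```python
import numpy as np

def dice(seg, ref):
    """Dice overlap between two binary masks: 2|A∩B| / (|A| + |B|)."""
    seg = seg.astype(bool)
    ref = ref.astype(bool)
    denom = seg.sum() + ref.sum()
    if denom == 0:
        return 1.0  # both masks empty: perfect agreement by convention
    return 2.0 * np.logical_and(seg, ref).sum() / denom

a = np.zeros((8, 8), bool); a[2:6, 2:6] = True   # 16 voxels
b = np.zeros((8, 8), bool); b[3:7, 3:7] = True   # 16 voxels, 9 shared
print(dice(a, b))   # 0.5625
```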
Maybe the largest challenge is to design algorithms that are robust against pathological and anatomical variety, image noise, and differences in acquisition parameters. Many of the algorithms surveyed here were developed and evaluated on populations that do not necessarily represent clinical practice. It is likely that new approaches are necessary to achieve this increased robustness.
SLUIMER et al.: COMPUTER ANALYSIS OF COMPUTED TOMOGRAPHY SCANS OF THE LUNG: A SURVEY
Proper validation is indispensable in order to assess the
quality and application range of an algorithm. Algorithms
should be tested in context of their intended practical clinical
use, with due consequences for the choice of test data and
measures of performance. Test data should reflect the variety of
pathology, noise levels and acquisition variety encountered in
clinical practice. However, a lack of robustness does not have
to hinder clinical use of a system as long as it is compensated for by adequate means of failure detection. Failure detection is an
important part of an automated system, but this fact is typically
ignored in academic research. A clinical user should not be
required to closely examine either the input data or the (intermediate) results to estimate the validity of the final outcome
of an automated analysis. Construction of adequate failure
detection mechanisms is far from trivial and in our opinion this
topic deserves more research.
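A failure detection mechanism can start from simple plausibility checks on an algorithm's output, for instance flagging a lung segmentation whose total volume falls outside a physiologically plausible range. The sketch below is illustrative only; the volume bounds are assumptions for demonstration, not validated clinical limits.

```python
import numpy as np

# Illustrative plausibility bounds (ml) for total lung volume at full
# inspiration; these numbers are assumptions, not validated limits.
MIN_LUNG_ML, MAX_LUNG_ML = 2000.0, 9000.0

def check_lung_segmentation(lung_mask, voxel_size_mm):
    """Return (ok, message) rather than silently passing a bad mask on."""
    volume_ml = np.count_nonzero(lung_mask) * np.prod(voxel_size_mm) / 1000.0
    if volume_ml < MIN_LUNG_ML:
        return False, f"segmented volume {volume_ml:.0f} ml implausibly small"
    if volume_ml > MAX_LUNG_ML:
        return False, f"segmented volume {volume_ml:.0f} ml implausibly large"
    return True, f"segmented volume {volume_ml:.0f} ml within expected range"

# An almost-empty mask should be flagged for human review.
tiny = np.zeros((100, 100, 100), bool); tiny[:2] = True
ok, msg = check_lung_segmentation(tiny, (0.7, 0.7, 0.7))
print(ok, msg)   # flags the segmentation as a failure
```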
Ideally, common, carefully annotated and representative
databases should be available for algorithm benchmarking.
This does, however, require consensus on the definition and
relevance of the clinical questions to be answered, which has
not been reached for all discussed areas of application. Detec-
tion of lung nodules is a task with a clear definition of purpose
and a vast range of algorithms in need of competitive testing,
presenting a prime example for which a common database can
and should be assembled. This necessity has been recognized
and steps in this direction are being undertaken.
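On such a common database, competing nodule detection systems are typically compared by their sensitivity at a given number of false positives per scan. The scoring itself can be sketched as below, assuming detections are matched to reference nodules by a distance criterion; the matching radius and coordinates are illustrative.

```python
import math

def score_detections(detections, reference, match_dist_mm=10.0):
    """Count true positives and false positives for one scan.

    detections, reference: lists of (x, y, z) coordinates in mm.
    A detection within match_dist_mm of an unmatched reference nodule
    is a hit; each reference nodule can be hit at most once.
    """
    unmatched = list(reference)
    tp = fp = 0
    for d in detections:
        hit = next((r for r in unmatched
                    if math.dist(d, r) <= match_dist_mm), None)
        if hit is not None:
            unmatched.remove(hit)
            tp += 1
        else:
            fp += 1
    sensitivity = tp / len(reference) if reference else 1.0
    return sensitivity, fp

ref = [(10, 10, 10), (50, 50, 50)]
det = [(12, 10, 10), (90, 90, 90)]   # one hit, one false positive
print(score_detections(det, ref))    # (0.5, 1)
```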
In many of the studies describing application-oriented work, the research focus lies on only one or a few of the stages of the comprehensive image analysis system, and preprocessing modules (such as lung segmentation) are implemented in a suboptimal way. Many of these systems would benefit from the incorporation of existing methods from other research groups; this is a practical problem that may not be solved easily (making research software publicly available might be a solution). It is important to be aware of the limitations this might set on system performance. Situations should
be avoided where effort goes into refinement of the final stages
of a compound system when the true bottleneck lies elsewhere.
C. Clinical Application
The advent of multislice CT has made isotropic imaging of the lung a standard technique. Since nonisotropic data was one of the main problems in computer vision of CT data, it should now be possible for 3-D algorithms to find their way into clinical routine practice. Unfortunately, of the many potential applications for the lungs described in this survey, only very few are actually in clinical use.
What are the reasons for this discrepancy? For an algorithm
to succeed in clinical practice, there needs to be a market for
it (a real clinical need by a sufficient amount of radiologists),
it needs to be fast and easy to use (patient throughput is ever
increasing with multislice CT), it needs to be reliable (correct
results in the vast amount of cases) and it should be correctable
(easy adjustments in case the algorithm fails). Because radiologists are used to working without such algorithms, it is not easy to change established routines. Only when an algorithm performs a task better or faster than a human reader, as in the case of nodule volumetry and detection of small nodules, and when there is a growing need to perform such tasks (lung cancer screening depends on nodule detection and volumetry), do such algorithms find their way into clinical practice.
In addition, regulatory bodies such as the FDA require positive
proof of the benefit of such techniques and their use may be
hampered by medico-legal considerations.
Many if not all techniques described in this survey have the
potential to be important in clinical practice. How much so,
however, will mainly depend on their ease of use and their re-
liability. As mentioned in the previous section, improving re-
liability and robustness is a major research challenge. In addi-
tion, most algorithms have not been optimized for speed, and
even if they had, they probably would take longer than most ra-
diologists would accept. New concepts are, therefore, needed
to implement such algorithms in clinical practice. One poten-
tial solution could be automated preprocessing of the data as
soon as it is sent from the scanner to the CT workstation: such
preprocessing should include the time-consuming steps of any
algorithm that is made available. The radiologist then has the results at his fingertips and can easily use, and ideally also modify, the results in any patient in whom he deems a particular computer-assisted analysis valuable. By making the results of CAD easily accessible for every patient, radiologists will be much more inclined to use them. Speed and ease of
use, therefore, will be the determining factors as soon as perfor-
mance of algorithms has risen beyond a certain basic level.
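The preprocessing workflow proposed above can be sketched as a background worker that consumes a job queue as scans arrive from the scanner, caching results so they are ready when the study is opened; all names and the placeholder analysis are illustrative.

```python
import queue
import threading

results = {}            # study id -> precomputed CAD output
jobs = queue.Queue()    # studies queued as they arrive from the scanner

def expensive_analysis(study_id):
    # Placeholder for the time-consuming steps (segmentation, detection, ...).
    return f"CAD results for {study_id}"

def worker():
    while True:
        study_id = jobs.get()
        if study_id is None:          # sentinel: shut the worker down
            break
        results[study_id] = expensive_analysis(study_id)
        jobs.task_done()

t = threading.Thread(target=worker)
t.start()

# Results are computed in the background, ready before the read-out.
for study in ("study-001", "study-002"):
    jobs.put(study)
jobs.join()                           # wait until all queued work is done
jobs.put(None)
t.join()
print(results["study-001"])           # CAD results for study-001
```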
The fact that many good algorithms are not available in clin-
ical practice demonstrates a general dilemma in the image pro-
cessing community: Research money is mainly available for
basic work and algorithm development but not for implemen-
tation and optimization of technique. Workflow issues will ul-
timately determine whether a new technique is practicable or
not. Close collaboration between academic image analysis re-
searchers, radiologists and industry is mandatory for success.
The past few years have seen the introduction of nodule vol-
umetry and detection into clinical workstations. Techniques for
classification of nodules are ready to follow relatively soon but
probably will be hampered by medico-legal considerations. The available techniques will have to be upgraded to suit the increased exposure to clinical cases, which will invariably lead to new problems that have previously not been considered. However, within the coming years, nodule detection, volume measurement, and classification can be expected to be standard in clinical workstations.
A new “hot” application is the automated detection of PE, a task that has become more cumbersome because many more vessels are visible and have to be evaluated on multislice CT data sets. Techniques that indicate potential emboli, and later may even help in differentiating emboli from artifacts, would be very welcome in clinical practice.
Registration of various types of scans is already available for
PET/CT and SPECT/CT data. Registration of lung nodules is
already implemented in commercial workstations but should
become even more reliable in the future. The registration of