Mammographic Mass Segmentation with Online Learned Shape and Appearance Priors
Menglin Jiang¹, Shaoting Zhang², Yuanjie Zheng³, and Dimitris N. Metaxas¹

¹ Department of Computer Science, Rutgers University, Piscataway, NJ, USA
² Department of Computer Science, UNC Charlotte, Charlotte, NC, USA (szhang16@uncc.edu)
³ School of Information Science and Engineering, Shandong Normal University, Jinan, China
Abstract. Automatic segmentation of mammographic masses is an important yet challenging task. Despite the great success of shape priors in biomedical image analysis, existing shape modeling methods are not suitable for mass segmentation. The reason is that masses have no specific biological structure and exhibit complex variation in shape, margin, and size. In addition, it is difficult to preserve the local details of mass boundaries, as masses may have spiculated and obscure boundaries. To solve these problems, we propose to learn shape and appearance priors online via image retrieval. In particular, given a query image, its visually similar training masses are first retrieved via Hough voting of local features. Then, query-specific shape and appearance priors are calculated from these training masses on the fly. Finally, the query mass is segmented using these priors and graph cuts. The proposed approach is extensively validated on a large dataset constructed on DDSM. Results demonstrate that our online learned priors lead to substantial improvement in mass segmentation accuracy, compared with previous systems.
1 Introduction
For years, mammography has played a key role in the diagnosis of breast cancer, which is the second leading cause of cancer-related death among women. The major indicators of breast cancer are masses and microcalcifications. Mass segmentation is important to many clinical applications. For example, it is critical to mass diagnosis, since morphological and spiculation characteristics derived from the segmentation result are strongly correlated with mass pathology [2]. However, mass segmentation is very challenging, since masses vary substantially in shape, margin, and size, and they often have obscure boundaries [7].

During the past two decades, many approaches have been proposed to facilitate mass segmentation [7]. Nevertheless, few of them adopt shape and appearance priors, which provide promising directions for many other biomedical image segmentation problems [12,13], such as segmentation of the human lung, liver,
prostate, and hippocampus. In mass segmentation, the absence of studies on shape and appearance priors is mainly due to two reasons. First, unlike the aforementioned organs/objects, mammographic masses have no specific biological structure, and they present large variation in shape, margin, and size. Naturally, it is very hard to construct shape or appearance models for masses [7]. Second, masses are often indistinguishable from surrounding tissues and may have greatly spiculated margins. Therefore, it is difficult to preserve the local details of mass boundaries.

Fig. 1. Overview of our approach. The blue lines around training masses denote radiologist-labeled boundaries, and the red line on the rightmost image denotes our segmentation result.
To solve the above problems, we propose to incorporate image retrieval into mammographic mass segmentation, and learn "customized" shape and appearance priors for each query mass. An overview of our approach is shown in Fig. 1. Specifically, during the offline process, a large number of diagnosed masses form a training set. SIFT features [6] are extracted from these masses and stored in an inverted index for fast retrieval [11]. During the online process, a query mass is first matched with all the training masses through Hough voting of SIFT features [10] to find the most similar ones. A similarity score is also calculated to measure the overall similarity between the query mass and its retrieved training masses. Then, shape and appearance priors are learned from the retrieved masses on the fly, which characterize the global shape and local appearance information of these masses. Finally, the two priors are integrated in a segmentation energy function, and their weights are automatically adjusted using the aforesaid similarity score. The query mass is segmented by solving the energy function via graph cuts [1].
In mass segmentation, our approach has several advantages over existing online shape prior modeling methods, such as atlas-based methods [12] and sparse shape composition (SSC) [13]. First, these methods are generally designed for organs/objects with anatomical structures and relatively simple shapes. When dealing with mass segmentation, some assumptions of those methods, such as correspondence between organ landmarks, will be violated, and thus the results will become unreliable. On the contrary, our approach adopts a retrieval method dedicated to handling objects with complex shape variations. Therefore, it can obtain effective shape priors for masses. Second, since the retrieved training masses are similar to the query mass in terms of not only global shape but also local appearance, our approach incorporates a novel appearance prior, which complements the shape prior and preserves the local details of mass boundaries. Finally, the priors' weights in our segmentation energy function are automatically adjusted using the similarity between the query mass and its retrieved training masses, which makes our approach even more adaptive.
2 Methodology
In this section, we first introduce our mass retrieval process, then describe how to learn shape and appearance priors from the retrieval result, and finally present our mass segmentation method. The framework of our approach is illustrated in Fig. 1.
Mass Retrieval Based on Hough Voting: Our approach characterizes mammographic images with densely sampled SIFT features [6], which demonstrate excellent performance in mass retrieval and analysis [4]. To accelerate the retrieval process, all the SIFT features are quantized using the bag-of-words (BoW) method [4,11], and the quantized SIFT features extracted from the training set are stored in an inverted index [4]. The training set $D$ comprises a series of samples, each of which contains a diagnosed mass located at the center. A training mass $d \in D$ is represented as $d = \{(v_j^d, \mathbf{p}_j^d)\}_{j=1}^{n}$, where $n$ is the number of features extracted from $d$, $v_j^d$ denotes the $j$-th quantized feature (visual word ID), and $\mathbf{p}_j^d = [x_j^d, y_j^d]^T$ denotes the relative position of $v_j^d$ from the center of $d$ (the coordinate origin is at the mass center). The query set $Q$ includes a series of query masses. Note that query masses are not necessarily located at image centers. A query mass $q \in Q$ is represented as $q = \{(v_i^q, \mathbf{p}_i^q)\}_{i=1}^{m}$, where $\mathbf{p}_i^q = [x_i^q, y_i^q]^T$ denotes the absolute position of $v_i^q$ (the origin is at the upper left corner of the image since the position of the mass center is unknown).
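To make these data structures concrete, here is a minimal sketch, not the authors' implementation, of how the quantized features and the inverted index could be organized; all names and the toy word IDs are illustrative assumptions.

```python
from collections import defaultdict

# A mass is a list of (visual_word_id, position) pairs. Positions are
# center-relative for training masses and absolute for query masses.

def build_inverted_index(training_masses):
    """Map each visual word to the (mass_id, relative_position) pairs in
    which it occurs, so retrieval touches only the masses sharing words
    with the query instead of scanning the whole training set."""
    index = defaultdict(list)
    for mass_id, features in training_masses.items():
        for word, rel_pos in features:
            index[word].append((mass_id, rel_pos))
    return index

# Toy example with two training masses and made-up word IDs.
training_masses = {
    "d1": [(17, (-5, 3)), (42, (10, -2))],
    "d2": [(17, (0, 8)), (99, (-7, -7))],
}
index = build_inverted_index(training_masses)
# index[17] -> [("d1", (-5, 3)), ("d2", (0, 8))]
```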
Given a query mass $q$, it is matched with all the training masses. In order to find similar training masses with different orientations or sizes, all the training masses are virtually transformed using 8 rotation angles (from 0 to $7\pi/4$) and 8 scaling factors (from 1/2 to 2). For these virtual transformations, we only need to re-calculate the positions of the SIFT features, since SIFT is invariant to rotation and scale change [6]; a sketch is given below.
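Since only feature positions change under the virtual transformations, they can be precomputed cheaply. A minimal sketch, under the assumption (not stated in the paper) that the 8 scales are spaced geometrically between 1/2 and 2:

```python
import numpy as np

def transform_positions(positions, angle, scale):
    """Rotate and scale center-relative feature positions (n x 2 array).
    The SIFT descriptors themselves are unchanged, since SIFT is
    rotation- and scale-invariant."""
    c, s = np.cos(angle), np.sin(angle)
    R = np.array([[c, -s], [s, c]])
    return scale * positions @ R.T

angles = [k * np.pi / 4 for k in range(8)]   # 0 to 7*pi/4, as in the paper
scales = np.geomspace(0.5, 2.0, 8)           # assumed spacing of 1/2 to 2
virtual_views = [transform_positions(np.array([[10., -2.], [-5., 3.]]), a, s)
                 for a in angles for s in scales]
```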
For the given query mass $q$ and any (transformed) training mass $d$, we calculate a similarity map $S_{q,d}$, a similarity score $s_{q,d}$, and the position of the query mass center $\mathbf{c}_{q,d}$. $S_{q,d}$ is a matrix of the same size as $q$, and its element at position $\mathbf{p}$, denoted as $S_{q,d}(\mathbf{p})$, indicates the similarity between $d$ and the region of $q$ centered at $\mathbf{p}$. The matching process is based on generalized Hough voting of SIFT features [10], which is illustrated in Fig. 2. The basic idea is that if $q$ matches $d$, their features should be quantized to the same visual words and be spatially consistent (i.e., have similar positions relative to the mass centers). In particular, given a pair of matched features $v_i^q = v_j^d = v$, the absolute position of the query mass center, denoted as $\mathbf{c}_i^q$, is first computed based on $\mathbf{p}_i^q$ and $\mathbf{p}_j^d$. Then $v_i^q$ updates the similarity map $S_{q,d}$. To resist gentle nonrigid deformation, $v_i^q$ votes in favor of not only $\mathbf{c}_i^q$ but also the neighbors of $\mathbf{c}_i^q$: $\mathbf{c}_i^q$ earns a full vote, and each neighbor gains a vote weighted by a Gaussian factor:

$$S_{q,d}(\mathbf{c}_i^q + \delta\mathbf{p}) \mathrel{+}= \frac{\mathrm{idf}^2(v)}{\mathrm{tf}(v, q)\,\mathrm{tf}(v, d)} \exp\left(-\frac{\|\delta\mathbf{p}\|^2}{2\sigma^2}\right), \qquad (1)$$

where $\delta\mathbf{p}$ represents the displacement from $\mathbf{c}_i^q$ to its neighbor, $\sigma$ determines the scale of $\delta\mathbf{p}$, $\mathrm{tf}(v, q)$ and $\mathrm{tf}(v, d)$ are the term frequencies (TFs) of $v$ in $q$ and $d$ respectively, and $\mathrm{idf}(v)$ is the inverse document frequency (IDF) of $v$. TF-IDF reflects the importance of a visual word to an image in a collection of images, and is widely adopted in BoW-based image retrieval methods [4,10,11]. The cumulative votes of all the feature pairs generate the similarity map $S_{q,d}$. The largest element in $S_{q,d}$ is defined as the similarity score $s_{q,d}$, and the position of the largest element is defined as the query mass center $\mathbf{c}_{q,d}$.

Fig. 2. Illustration of our mass matching algorithm. The blue lines denote the mass boundaries labeled by radiologists. The dots indicate the positions of the matched SIFT features, which are spatially consistent. The arrows denote the relative positions of the training features to the center of the training mass. The center of the query mass is localized by finding the maximum element in $S_{q,d}$.
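As a concrete reading of Eq. (1), the sketch below accumulates Gaussian-weighted votes into a similarity map for one query/training pair. The matched pairs and the TF/IDF values are assumed to come from the inverted index; the function names and the truncation radius are illustrative.

```python
import numpy as np

def vote(S, center, idf_v, tf_q, tf_d, sigma=3.0, radius=6):
    """Eq. (1): each matched feature casts a full vote at the estimated
    query mass center and Gaussian-discounted votes at its neighbors,
    weighted by idf(v)^2 / (tf(v,q) * tf(v,d))."""
    h, w = S.shape
    cx, cy = center
    base = idf_v ** 2 / (tf_q * tf_d)
    for dx in range(-radius, radius + 1):
        for dy in range(-radius, radius + 1):
            x, y = cx + dx, cy + dy
            if 0 <= x < h and 0 <= y < w:
                S[x, y] += base * np.exp(-(dx * dx + dy * dy) / (2 * sigma ** 2))

# The center estimate for a matched pair is the query feature's absolute
# position minus the training feature's center-relative position.
S = np.zeros((64, 64))
p_q, p_d = np.array([30, 28]), np.array([-5, 3])
vote(S, tuple(p_q - p_d), idf_v=2.0, tf_q=3, tf_d=2)
s_qd = S.max()                                # similarity score s_{q,d}
c_qd = np.unravel_index(S.argmax(), S.shape)  # query mass center c_{q,d}
```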
After computing the similarity scores between $q$ and all the (transformed) training masses, the top $k$ most similar training masses, along with their diagnostic reports, are returned to radiologists. These masses are referred to as the retrieval set of $q$, denoted as $N_q$. The average similarity score of these $k$ training masses is denoted as $\omega = (1/k) \sum_{d \in N_q} s_{q,d}$. During the segmentation of $q$, $\omega$ measures the confidence of the shape and appearance priors learned from $N_q$.

Note that our retrieval method can find training masses that are similar to the query mass in both local appearance and global shape. A match between a query feature and a training feature ensures that the two local patches from which the SIFT features are extracted have similar appearances. Besides, the spatial consistency constraint guarantees that two matched masses have similar shapes and sizes. Consequently, the retrieved training masses can guide the segmentation of the query mass. Moreover, due to the adoption of the BoW technique and inverted index, our retrieval method is computationally efficient.
Learning Online Shape and Appearance Priors: Given a query mass $q$, our segmentation method aims to find a foreground mask $L_q$. $L_q$ is a binary matrix of the same size as $q$, and its element $L_q(\mathbf{p}) \in \{0, 1\}$ indicates the label of the pixel at position $\mathbf{p}$, where 0 and 1 represent background and mass respectively. Each retrieved training mass $d \in N_q$ has a foreground mask $L_d$. To align $d$ with $q$, we simply copy $L_d$ to a new mask of the same size as $L_q$ and move the center of $L_d$ to the query mass center $\mathbf{c}_{q,d}$. $L_d$ will hereafter denote the aligned foreground mask.
Utilizing the foreground masks of the retrieved training masses in $N_q$, we can learn shape and appearance priors for $q$ on the fly. The shape prior models the spatial distribution of the pixels in $q$ belonging to a mass. Our approach estimates this prior by averaging the foreground masks of the retrieved masses:

$$p_S(L_q(\mathbf{p}) = 1) = \frac{1}{k} \sum_{d \in N_q} L_d(\mathbf{p}), \qquad p_S(L_q(\mathbf{p}) = 0) = 1 - p_S(L_q(\mathbf{p}) = 1). \qquad (2)$$
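Eq. (2) amounts to a per-pixel average of the $k$ aligned foreground masks; a minimal sketch (alignment to $\mathbf{c}_{q,d}$ is assumed done upstream):

```python
import numpy as np

def shape_prior(aligned_masks):
    """Eq. (2): p_S(L_q(p) = 1) is the mean of the k aligned binary masks;
    the background probability is its complement."""
    p_fg = np.mean(np.stack(aligned_masks, axis=0), axis=0)
    return p_fg, 1.0 - p_fg

masks = [np.zeros((64, 64)) for _ in range(3)]
for m in masks:
    m[20:44, 22:42] = 1.0   # toy foreground regions
p_s_fg, p_s_bg = shape_prior(masks)
```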
The appearance prior models how likely a small patch in $q$ belongs to a mass. In our approach, a patch is a small region from which a SIFT feature is extracted, and it is characterized by its visual word (quantized SIFT feature). The probability of word $v$ belonging to a mass is estimated on $N_q$:

$$p_A(L_q(\mathbf{p}_v) = 1) = \frac{n_v^f}{n_v}, \qquad p_A(L_q(\mathbf{p}_v) = 0) = 1 - p_A(L_q(\mathbf{p}_v) = 1), \qquad (3)$$

where $\mathbf{p}_v$ is the position of word $v$, $n_v$ is the total number of times that $v$ appears in $N_q$, and $n_v^f$ is the number of times that $v$ appears in foreground masses.
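Operationally, Eq. (3) is a counting exercise over the retrieval set; a sketch in which `foreground_of` is an assumed helper that reports whether a feature position falls inside a retrieved mass's foreground mask:

```python
from collections import Counter

def appearance_prior(retrieved_features, foreground_of):
    """Eq. (3): for each visual word v, estimate p_A(foreground) as
    (occurrences of v inside retrieved foregrounds) / (all occurrences
    of v in the retrieval set N_q)."""
    n_v, n_v_fg = Counter(), Counter()
    for mass_id, features in retrieved_features.items():
        for word, pos in features:
            n_v[word] += 1
            if foreground_of(mass_id, pos):
                n_v_fg[word] += 1
    return {v: n_v_fg[v] / n_v[v] for v in n_v}

# Usage: p_a = appearance_prior(feats, lambda m, p: masks[m][p] == 1)
```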
It is noteworthy that our shape and appearance priors are complementary. In particular, the shape prior tends to recognize mass centers, since the average foreground mask of the retrieved training masses generally has large scores around mass centers. The appearance prior, on the other hand, tends to recognize mass edges, as SIFT features extracted from mass edges are very discriminative [4]. Examples of shape and appearance priors are provided in Fig. 1.
Mass Segmentation via Graph Cuts with Priors: Our segmentation method computes the foreground mask $L_q$ by minimizing the following energy function:

$$\begin{aligned} E(L_q) &= \lambda_1 E_I(L_q) + \lambda_2 \omega E_S(L_q) + \lambda_3 \omega E_A(L_q) + E_R(L_q) \\ &= -\lambda_1 \sum_{\mathbf{p}} \ln p_I(I_q(\mathbf{p}) \mid L_q(\mathbf{p})) - \lambda_2 \omega \sum_{\mathbf{p}} \ln p_S(L_q(\mathbf{p})) \\ &\quad - \lambda_3 \omega \sum_{\mathbf{p}} \ln p_A(L_q(\mathbf{p})) + \sum_{\mathbf{p}, \mathbf{p}'} \beta(L_q(\mathbf{p}), L_q(\mathbf{p}')), \end{aligned} \qquad (4)$$

where $I_q$ denotes the intensity matrix of $q$, and $I_q(\mathbf{p})$ represents the value of the pixel at position $\mathbf{p}$. $E_I(L_q)$, $E_S(L_q)$, $E_A(L_q)$ and $E_R(L_q)$ are the energy terms related to intensity information, shape prior, appearance prior, and regularity constraint, respectively. $\beta(L_q(\mathbf{p}), L_q(\mathbf{p}'))$ is a penalty term for adjacent pixels with different labels. $\lambda_1$, $\lambda_2$ and $\lambda_3$ are the weights for the first three energy terms, and the last term has an implicit weight of 1.
In particular, the intensity energy $E_I(L_q)$ evaluates how well $L_q$ explains $I_q$. It is derived from the total likelihood of the observed intensities given certain labels. Following conventions in radiological image segmentation [5,12], the foreground likelihood $p_I(I_q(\mathbf{p}) \mid L_q(\mathbf{p}) = 1)$ and background likelihood $p_I(I_q(\mathbf{p}) \mid L_q(\mathbf{p}) = 0)$ are approximated by a Gaussian density function and a Parzen window estimator respectively, and are both learned on the entire training set $D$. The shape energy $E_S(L_q)$ and appearance energy $E_A(L_q)$ measure how well $L_q$ fits the shape and appearance priors. The regularity energy $E_R(L_q)$ is employed to promote smooth segmentation. It calculates a penalty score for every pair of neighboring pixels $(\mathbf{p}, \mathbf{p}')$. Following [1,12], we compute this score using:

$$\beta(L_q(\mathbf{p}), L_q(\mathbf{p}')) = \frac{\mathbb{1}(L_q(\mathbf{p}) \neq L_q(\mathbf{p}'))}{2\,\|\mathbf{p} - \mathbf{p}'\|} \exp\left(-\frac{(I_q(\mathbf{p}) - I_q(\mathbf{p}'))^2}{2\zeta^2}\right), \qquad (5)$$

where $\mathbb{1}$ is the indicator function, and $\zeta$ determines the scale of the intensity difference. The above function assigns a positive score to $(\mathbf{p}, \mathbf{p}')$ only if they have different labels, and the score will be large if they have similar intensities and a short distance. Similar to [12], we first plug Eqs. (2), (3) and (5) into the energy function Eq. (4), then convert it to a sum of unary and pairwise potentials, and finally minimize it via graph cuts [1] to obtain the foreground mask $L_q$.
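For illustration, here is a minimal sketch of assembling the unary terms of Eq. (4) and minimizing the energy with the PyMaxflow library. The intensity likelihoods below are a crude stand-in for the paper's Gaussian/Parzen models, the pairwise term follows Eq. (5) with unit pixel spacing, and every name and default value is an assumption, not the authors' implementation.

```python
import numpy as np
import maxflow  # PyMaxflow

def segment(I, pS_fg, pA_fg, lam1=1.0, lam2=1.0, lam3=1.0, omega=0.5, zeta=0.1):
    """Graph cuts segmentation for Eq. (4). I is the query image scaled to
    [0, 1]; pS_fg / pA_fg are the per-pixel shape / appearance priors."""
    eps = 1e-8
    pI_fg = np.clip(I, eps, 1 - eps)   # stand-in intensity likelihoods
    pI_bg = 1.0 - pI_fg
    pS_fg = np.clip(pS_fg, eps, 1 - eps)
    pA_fg = np.clip(pA_fg, eps, 1 - eps)

    # Unary costs: U_fg is paid for label 1 (mass), U_bg for label 0.
    U_fg = -(lam1 * np.log(pI_fg) + lam2 * omega * np.log(pS_fg)
             + lam3 * omega * np.log(pA_fg))
    U_bg = -(lam1 * np.log(pI_bg) + lam2 * omega * np.log(1 - pS_fg)
             + lam3 * omega * np.log(1 - pA_fg))

    g = maxflow.Graph[float]()
    nodes = g.add_grid_nodes(I.shape)
    # Contrast-weighted 4-connected pairwise term, as in Eq. (5); the
    # 0.5 factor is 1 / (2 * ||p - p'||) with unit spacing.
    for axis, struct in [(0, np.array([[0, 0, 0], [0, 0, 0], [0, 1, 0]])),
                         (1, np.array([[0, 0, 0], [0, 0, 1], [0, 0, 0]]))]:
        diff = np.zeros_like(I)
        idx = [slice(None)] * 2
        idx[axis] = slice(0, -1)
        diff[tuple(idx)] = np.diff(I, axis=axis)
        w = 0.5 * np.exp(-diff ** 2 / (2 * zeta ** 2))
        g.add_grid_edges(nodes, weights=w, structure=struct, symmetric=True)
    g.add_grid_tedges(nodes, U_bg, U_fg)  # source caps: U_bg, sink caps: U_fg
    g.maxflow()
    return ~g.get_grid_segments(nodes)    # True where foreground
```

With this t-link assignment, pixels left on the source side of the minimum cut pay $U_{fg}$ and are therefore the foreground pixels, which is why the returned segment mask is negated.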
Note that the overall similarity score $\omega$ is utilized to adjust the weights of the prior-related energy terms in Eq. (4). As a result, if there are similar masses in the training set, our segmentation method will rely on the priors. Otherwise, $\omega$ will be very small and Eq. (4) automatically degenerates to traditional graph cuts-based segmentation, which prevents ineffective priors from reducing the segmentation accuracy.
3 Experiments
In this section, we first describe our dataset, then evaluate the performance of
mass retrieval and mass segmentation using our approach.
Dataset: Our dataset builds on the Digital Database for Screening Mammography (DDSM) [3], which is currently the largest public mammogram database. DDSM comprises 2,604 cases, and every case consists of four views, i.e., LEFT-CC, LEFT-MLO, RIGHT-CC and RIGHT-MLO. The masses have diverse shapes, margins, sizes, breast densities, as well as patients' ages. They also have radiologist-labeled boundaries and diagnosed pathologies. To build our dataset, 2,340 image regions centered at masses are extracted. Our approach and the comparison methods are tested five times. In each run, 100 images are randomly selected to form the query set $Q$, and the remaining 2,240 images form the training set $D$. $Q$ and $D$ are selected from different cases in order to avoid positive bias. Below we report the average of the evaluation results over the five runs.
Fig. 3. Our segmentation results on four masses, represented by red lines. The blue lines denote radiologist-labeled mass boundaries. The left two masses are malignant (cancer), and the right two are benign.
Evaluation of Mass Retrieval: The evaluation metric adopted here is retrieval precision. In our context, precision is defined as the percentage of retrieved training masses that have the same shape category as the query mass. All the shape attributes are divided into two categories. The first category includes "irregular", "lobulated", "architectural distortion", and their combinations. The second category includes "round" and "oval". We compare our method with a state-of-the-art mass retrieval approach [4], which indexes quantized SIFT features with a vocabulary tree. The precision scores of both methods change slightly as the size of the retrieval set $k$ increases from 1 to 30, and our method systematically outperforms the comparison method. For instance, at $k = 20$, the precision scores of our method and the vocabulary tree-based method are 0.85 ± 0.11 and 0.81 ± 0.14, respectively. Our precise mass retrieval method lays the foundation for learning accurate priors and improving mass segmentation performance.
Evaluation of Mass Segmentation: Segmentation accuracy is assessed by the area overlap measure (AOM) and average minimum distance (DIST), two widely used evaluation metrics in medical image segmentation. Our method is tested with three settings, i.e., employing the shape prior, the appearance prior, and both priors. The three configurations are hereafter denoted as "Ours-Shape", "Ours-App", and "Ours-Both". For all configurations, we set $k$ to the same value as in the mass retrieval experiments, i.e., $k = 20$. $\lambda_1$, $\lambda_2$ and $\lambda_3$ are tuned through cross validation.
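For reference, here is a sketch of the two metrics under common definitions, which we assume since the paper does not spell them out: AOM as the intersection-over-union of the segmented and ground-truth regions, and DIST as the symmetrized mean of minimum boundary-point distances.

```python
import numpy as np

def aom(seg, gt):
    """Area overlap measure, assumed as |A intersect B| / |A union B|."""
    inter = np.logical_and(seg, gt).sum()
    union = np.logical_or(seg, gt).sum()
    return inter / union

def dist(boundary_a, boundary_b, mm_per_pixel=1.0):
    """Average minimum distance between two boundary point sets (n x 2),
    symmetrized over both directions and converted to millimeters."""
    d = np.linalg.norm(boundary_a[:, None, :] - boundary_b[None, :, :], axis=2)
    return 0.5 * (d.min(axis=1).mean() + d.min(axis=0).mean()) * mm_per_pixel
```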
Three classical and state-of-the-art mass segmentation approaches are implemented for comparison, based on active contours (AC) [8], a convolutional neural network (CNN) [5], and traditional graph cuts (GC) [9], respectively. The key parameters of these methods are tuned using cross validation. The evaluation results are summarized in Table 1. A few segmentation results obtained by Ours-Both are provided in Fig. 3 for qualitative evaluation.
Table 1. AOM and DIST (DIST in mm) scores of the evaluated methods

       AC [8]        CNN [5]       GC [9]        Ours-Shape    Ours-App      Ours-Both
AOM    0.78 ± 0.12   0.73 ± 0.17   0.75 ± 0.14   0.81 ± 0.13   0.80 ± 0.10   0.84 ± 0.09
DIST   1.09 ± 0.43   1.36 ± 0.62   1.24 ± 0.54   0.97 ± 0.49   1.01 ± 0.45   0.88 ± 0.47

The above results lead to several conclusions. First, our approach can find visually similar training masses for most query masses and calculate effective shape and appearance priors; therefore, Ours-Shape and Ours-App substantially surpass GC. Second, as noted earlier, the two priors are complementary: the shape prior recognizes mass centers, whereas the appearance prior is vital to keeping the
local details of mass boundaries. Thus, by integrating both priors, Ours-Both further improves the segmentation accuracy. Third, detailed results show that for some "uncommon" query masses, which have few similar training masses, the overall similarity score $\omega$ is very small and the segmentation results of Ours-Both are similar to those of GC. That is, the adaptive prior weights successfully prevent ineffective priors from backfiring. Finally, Ours-Both outperforms all the comparison methods, especially for masses with irregular and spiculated shapes. Its segmentation results agree closely with radiologist-labeled mass boundaries and are highly consistent with mass pathologies.
4 Conclusion
In this paper, we leverage an image retrieval method to learn query-specific shape and appearance priors for mammographic mass segmentation. Given a query mass, similar training masses are found via Hough voting of SIFT features, and priors are learned from these masses. The query mass is segmented through graph cuts with priors, where the weights of the priors are automatically adjusted according to the overall similarity between the query mass and its retrieved training masses. Extensive experiments on DDSM demonstrate that our online learned priors considerably improve segmentation accuracy, and that our approach outperforms several widely used mass segmentation methods and systems. Future endeavors will be devoted to distinguishing between benign and malignant masses using features derived from mass boundaries.
References
1. Boykov, Y.Y., Veksler, O., Zabih, R.: Fast approximate energy minimization via
graph cuts. IEEE Trans. Pattern Anal. Mach. Intell. 23(11), 1222–1239 (2001)
2. D’Orsi, C.J., Sickles, E.A., Mendelson, E.B., et al.: ACR BI-RADS Atlas, Breast
Imaging Reporting and Data System, 5th edn. American College of Radiology,
Reston (2013)
3. Heath, M., Bowyer, K., Kopans, D., Moore, R., Kegelmeyer, W.P.: The digital database for screening mammography. In: Proceedings of IWDM, pp. 212–218 (2000)
4. Jiang, M., Zhang, S., Li, H., Metaxas, D.N.: Computer-aided diagnosis of mammographic masses using scalable image retrieval. IEEE Trans. Biomed. Eng. 62(2), 783–792 (2015)
5. Lo, S.B., Li, H., Wang, Y.J., Kinnard, L., Freedman, M.T.: A multiple circular paths convolution neural network system for detection of mammographic masses. IEEE Trans. Med. Imaging 21(2), 150–158 (2002)
6. Lowe, D.G.: Distinctive image features from scale-invariant keypoints. Int. J. Comput. Vis. 60(2), 91–110 (2004)
7. Oliver, A., Freixenet, J., Martí, J., Pérez, E., Pont, J., Denton, E.R.E., Zwiggelaar, R.: A review of automatic mass detection and segmentation in mammographic images. Med. Image Anal. 14(2), 87–110 (2010)
8. Rahmati, P., Adler, A., Hamarneh, G.: Mammography segmentation with maximum likelihood active contours. Med. Image Anal. 16(6), 1167–1186 (2012)
9. Saidin, N., Sakim, H.A.M., Ngah, U.K., Shuaib, I.L.: Computer aided detection
of breast density and mass, and visualization of other breast anatomical regions
on mammograms using graph cuts. Comput. Math. Methods Med. 2013(205384),
1–13 (2013)
10. Shen, X., Lin, Z., Brandt, J., Avidan, S., Wu, Y.: Object retrieval and localization with spatially-constrained similarity measure and k-NN re-ranking. In: Proceedings of CVPR, pp. 3013–3020 (2012)
11. Sivic, J., Zisserman, A.: Video Google: a text retrieval approach to object matching in videos. In: Proceedings of ICCV, pp. 1470–1477 (2003)
12. van der Lijn, F., den Heijer, T., Breteler, M.M.B., Niessen, W.J.: Hippocampus
segmentation in MR images using atlas registration, voxel classification, and graph
cuts. NeuroImage 43(4), 708–720 (2008)
13. Zhang, S., Zhan, Y., Zhou, Y., Uzunbas, M., Metaxas, D.N.: Shape prior modeling using sparse representation and online dictionary learning. In: Ayache, N., Delingette, H., Golland, P., Mori, K. (eds.) MICCAI 2012. LNCS, vol. 7512, pp. 435–442. Springer, Heidelberg (2012). doi:10.1007/978-3-642-33454-2_54