ORIGINAL PAPER
Diagnosis of Diabetic Retinopathy: Automatic Extraction
of Optic Disc and Exudates from Retinal Images
using Marker-controlled Watershed Transformation
Ahmed Wasif Reza & C. Eswaran & Kaharudin Dimyati

Received: 9 September 2009 / Accepted: 27 December 2009 / Published online: 29 January 2010
© Springer Science+Business Media, LLC 2010

A. W. Reza (corresponding author) & K. Dimyati: Department of Electrical Engineering, Faculty of Engineering, University of Malaya, 50603 Kuala Lumpur, Malaysia. e-mail: awreza98@yahoo.com
C. Eswaran: Faculty of Information Technology, Multimedia University, 63100 Cyberjaya, Malaysia

J Med Syst (2011) 35:1491–1501. DOI 10.1007/s10916-009-9426-y
Abstract Due to the increasing number of diabetic retinopathy cases, ophthalmologists are facing a serious problem in extracting the relevant features from retinal images automatically.
Optic disc (OD), exudates, and cotton wool spots are the
main features of fundus images which are used for
diagnosing eye diseases, such as diabetic retinopathy and
glaucoma. In this paper, a new algorithm for the extraction
of these bright objects from fundus images based on
marker-controlled watershed segmentation is presented.
The proposed algorithm makes use of average filtering
and contrast adjustment as preprocessing steps. The concept
of the markers is used to modify the gradient before the
watershed transformation is applied. The performance of
the proposed algorithm is evaluated using the test images of
STARE and DRIVE databases. It is shown that the
proposed method can yield an average sensitivity value of
about 95%, which is comparable to those obtained by the
known methods.
Keywords Diabetic retinopathy · Exudates · Cotton wool spots · Biomedical imaging and image processing · Optic disc · Watershed transformation
Introduction
Eye diseases, such as diabetic retinopathy and glaucoma, affect the retina and cause blindness. Diabetic retinopathy tends to occur in patients who have had diabetes for five or more years. It is reported that more than half of all newly registered blindness is caused by retinal diseases, and diabetic retinopathy is one of the main contributors [1].
Automatic screening for eye disease has been shown to be
very effective in preventing loss of sight. Manual analysis and diagnosis require a great deal of time and energy to review the retinal images obtained by a fundus camera.
Therefore, an automated analysis and diagnosis system will
save cost and time significantly considering the large
number of retinal images that need to be manually reviewed
by the medical professionals each year [2]. In summary,
computer-aided analysis can be helpful to assist the
screening procedure in detecting diabetic retinopathy and
to aid the ophthalmologists when they evaluate or diagnose
the fundus images [3]. With this motivation in mind, this
paper presents an automatic tracing technique for the
boundary detection of bright objects, such as optic disc
(OD), exudates, and cotton wool spots in fundus images.
The extraction of these objects forms an important step in
the diagnosis of eye diseases, such as diabetic retinopathy
and glaucoma.
Extraction, detection, and identification of OD, exudates,
and cotton wool spots in the fundus images have been one
of the main focuses in modern research [3–7]. The
detection of the OD in the human retina is a very important
task. The OD is the entrance of the vessels and the optic
nerve into the retina. It becomes visible in color fundus
images as a bright yellowish or white region. Its shape is
approximately circular, interrupted by the outgoing vessels.
The size of OD differs from patient to patient and its
diameter lies between 40 and 60 pixels in 640 × 480 color
photographs [7]. The diameter provides a calibration for retinal measurements [8] and approximately determines the localization of the macula [9], the center of vision, which is of great importance since objects in the macular region affect vision directly. Various methods [7–17], such as high gray
level variation, area threshold, Hough transform, back-
tracking technique, morphological filtering techniques,
watershed transformation, principal component analysis
(PCA), and point distribution model have been reported
for the detection and extraction of OD. On the other hand,
there is extensive research on developing and improving the
image processing algorithms for the detection of exudates
in retinal images [5–7]. Hard exudates are yellowish
intraretinal deposits, which are usually located in the
posterior pole of the fundus image. The exudates are made
up of serum lipoproteins, considered to leak from the
abnormally permeable blood vessels, especially across the
walls of leaking microaneurysms. Hard exudates may be
observed in several retinal vascular pathologies, but are
a main feature of diabetic macular edema [7]. In fact,
diabetic macular edema is the main cause of visual impairment in diabetic patients. The most effective way to diagnose macular edema is to detect the hard exudates,
which are normally associated with macular edema. The
exudates are well contrasted with respect to the surrounding background, and their shape and size vary significantly. The cotton wool spots or soft exudates are infarctions in the nerve fiber layer of the retina; they are round or oval in shape, pale yellow-white in color, and result from capillary occlusions that cause permanent damage to the function of the retina [18]. Hard and soft
exudates can be distinguished because of their color and
the sharpness of their borders. Various methods [8,19,
20], such as shade correction, contrast enhancement,
sharpening, combination of local and global thresholding,
color normalization, fuzzy C-means clustering, and neural
networks have been reported for the detection and
classification of exudates. However, in this study, the parameters ‘number’ and ‘size’ (refer to “Proposed methodology”) are taken into account to differentiate and quantify the hard exudates and the cotton wool spots in the fundus images.
In this paper, a new technique for the detection of OD,
exudates, and cotton wool spots in ocular fundus images is
proposed. The proposed method makes use of: (1)
preprocessing algorithms to make the bright object features
more distinguishable from the background, (2) markers to
modify the gradient image to control oversegmentation, and
(3) watershed segmentation to trace the boundary from the marker-modified gradient. The proposed method has the following advantages compared to Walter’s method [7] and
our existing method [5]: (1) there is no need to select a different threshold value for each test image, even though the test images vary in brightness and contrast (the method proposed in this study relies only on the H-minima transform, which requires a single fixed threshold), (2) it automatically segments (without manual interaction) all bright lesions in a colour fundus image, with the possibility of distinguishing between hard exudates and cotton wool spots, (3) the proposed method gives an average sensitivity value of about 95%, which is comparable to [7] and [5], and (4) it will be very useful in the development of computer-based automatic screening systems for the early diagnosis of diabetic retinopathy.
The first part of this introductory section discussed the ways in which automated image processing techniques can contribute to the diagnosis of diabetic retinopathy, and also highlighted the properties and state of the art of OD, hard exudate, and cotton wool spot detection. The organization of the remaining part of this paper is as follows: in “Proposed methodology”, the methodology for detecting bright objects from the fundus images is presented. “Results and discussion” presents the test results, including a comparative performance study with a known method [7] and with the results obtained by human experts. Finally, “Conclusions” concludes the paper along with suggestions for foreseeable future work.
Proposed methodology
Figure 1 shows the flow chart of the proposed method, with symbols for the input and output of each phase.
Average filtering
In the original fundus image (Fig. 2a), the intensity
variation between the bright objects and the blood vessels
is relatively high and the vessels usually have poor local
contrast with respect to the background. Isolating the OD and the other bright parts is a fundamental problem, and hence preprocessing of the image for subsequent analysis becomes extremely crucial. Therefore, in the first phase,
an averaging filter of size 25 × 35 pixels (the parameter is set by a trial and error procedure on several images) containing equal weights of value “1” is applied to the original image R_i(x,y) (where i ∈ {1, …, 25 × 35}) in order to blend the small objects with low intensity variations into the background, while leaving the objects of interest relatively unchanged. The average filter is implemented using the following Eq. 1 [5,21]:
f(x, y) = \frac{1}{MN} \sum_{i=1}^{MN} R_i(x, y) \qquad (1)

where M = 25 and N = 35. In Eq. 1, the average filter uses a 25 × 35 mask and takes the average of all values within the mask to obtain the output at each point (x,y) of the original image. The resultant image f(x,y) after applying the average filter of Eq. 1 is shown in Fig. 2(b). As seen from Fig. 2(b), the small objects are blended into the background while the objects of interest remain relatively unchanged.

[Fig. 1 Phases of the proposed algorithm: Input RGB image R_i(x,y) → Average filtering f(x,y) → Contrast stretching transformation T(x,y) → Negative transformation I_n → Gradient magnitude g(x,y); marker branch: Extended minima transformation (internal markers m_i) → Inverse of internal marker ~(m_i) → Euclidean distance transform ε_d(~(m_i)) → Watershed transform (external markers m_e); then: Superimposition (m_i | m_e) → Minima imposition (I_mg) → Watershed transform (L) → RGB conversion → RGB segmented output]

[Fig. 2 a Original image; b Average filtered image]
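For illustration, this preprocessing step can be sketched with the MATLAB Image Processing Toolbox [24], which the study uses. The input file name and the ‘replicate’ boundary option below are our own assumptions, not details specified in the paper:

    R = imread('retina.png');           % original RGB fundus image (placeholder file name)
    h = fspecial('average', [25 35]);   % M = 25, N = 35, equal weights 1/(M*N)
    f = imfilter(R, h, 'replicate');    % average filtered image f(x,y), Fig. 2(b)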
Contrast stretching transformation
As the green channel contains good contrast between the
background and the bright retinal components, it is reliable
to work on the green channel of the RGB color space in
order to localize the OD and exudates [5,10,21]. The green
component is extracted from the RGB color image as
shown in Fig. 3(a). This image is then automatically enhanced by applying the contrast stretching transformation shown in Fig. 3(b) to make the bright object features more distinguishable from the background. In this transformation, only the darker regions have their intensity values enhanced slightly, while the brighter regions of the image remain more or less unchanged. The result is an image of higher contrast, which is achieved by using the contrast stretching transform function shown below [5,21]:
T(x, y) = \beta \, [f(x, y)]^n \qquad (2)

where f(x,y) and T(x,y) represent the input and output (processed) pixel intensity values, respectively, 0 ≤ n ≤ 1, and β = inmax^(1−n), where inmax is the desired upper limit intensity value in the output image. The resulting transformed image T(x,y) is shown in Fig. 4(a). In the contrast stretching transformation, the parameter n equals “1” in Eq. 2. In this case, the low intensity values of the darker regions are enhanced gradually until they reach the maximum limit intensity value inmax, as indicated in Fig. 3(b).
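A minimal sketch of this step, continuing from the previous fragment and assuming inmax = 255 for 8-bit images (the paper does not state inmax explicitly):

    G = double(f(:,:,2));       % green channel of the average filtered image, Fig. 3(a)
    n = 1;                      % exponent used in the paper (0 <= n <= 1)
    inmax = 255;                % assumed upper intensity limit for uint8 images
    beta = inmax^(1 - n);       % beta = inmax^(1-n), so inmax maps to inmax
    T = uint8(beta * G.^n);     % contrast stretched image T(x,y), Fig. 4(a)
    In = imcomplement(T);       % negative transformation, Fig. 4(b)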
Gradient magnitude
Figure 4(b) is obtained from Fig. 4(a) by applying a
negative transform. The watershed transform can be applied
to find catchment basins and watershed ridge lines in the
image of Fig. 4(b), which treats the image as a surface
where the light pixels (background pixels) have high
intensity values and the dark pixels (object pixels) have
low intensity values. The image shown in Fig. 4(b) contains
several dark blobs. The gradient magnitude is used to
preprocess the image prior to using the watershed transform
for segmentation. By computing the gradient magnitude of
Fig. 4(b) using a linear filter (e.g., Sobel [7,22]), we obtain
Fig. 5(a). It is seen from Fig. 5(a) that the gradient
magnitude image g(x,y) has high pixel values along the
edges and low pixel values everywhere else. If watershed
transform is directly applied on Fig. 5(a), we would get an
image as shown in Fig. 5(b), which contains too many
watershed ridge lines that do not correspond to the objects
in which we are interested. Direct application of the
watershed transform to a gradient image usually leads to
oversegmentation due to noise and other local irregularities
of the gradient [22].
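The gradient step and the resulting oversegmentation can be reproduced as follows; the Sobel kernels and double-precision filtering are standard choices [22] rather than parameters taken from the paper:

    hy = fspecial('sobel');                      % horizontal-edge Sobel kernel
    hx = hy';                                    % vertical-edge Sobel kernel
    gx = imfilter(double(In), hx, 'replicate');  % horizontal gradient component
    gy = imfilter(double(In), hy, 'replicate');  % vertical gradient component
    g = sqrt(gx.^2 + gy.^2);                     % gradient magnitude g(x,y), Fig. 5(a)
    Lraw = watershed(g);                         % direct watershed: oversegmented, Fig. 5(b)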
We can see that the image in Fig. 5(b) is severely
oversegmented, due to the presence of a large number of
regional minima. To control the oversegmentation, an approach based on the concept of markers [7,22] is used. A marker is a connected component belonging to an
image. We identify two types of markers, namely, internal
markers (associated with objects of interest), and external
markers (associated with the background). These markers
are then used to modify the gradient image.
[Fig. 3 a Green channel; b Contrast stretching transformation: output intensity values T(x,y) versus input intensity values f(x,y), with the transform function for n = 1 rising to the maximum limit inmax]
Extended minima transformation
The internal markers m_i shown in Fig. 6(a) are obtained from the negative image of Fig. 4(b) using the extended minima transformation, f_imextendedmin [22,23]. The f_imextendedmin computes the extended-minima transform, which is the regional minima of the H-minima transform. In Eq. 3, I_n represents the negative image (Fig. 4b) and H is the height threshold, which requires a fixed parameter (a threshold: H = 2).

m_i = f_{imextendedmin}(I_n, H) \qquad (3)

The image m_i is a binary image whose foreground pixels mark the locations of the deep regional minima. The extended
the locations of the deep regional minima. The extended
minima transformation determines the groups of brightest
pixels belonging to the foreground such that the points in
each region form a connected component and all the points
in the connected component have the same gray-level
value. The internal markers superimpose the extended
minima locations as gray blobs on the image of Fig. 4(b)
to get the image shown in Fig. 6(b).
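A sketch of the marker extraction with the toolbox function imextendedmin (the f_imextendedmin of Eq. 3); the gray value used to paint the markers onto the negative image is our own choice for visualization:

    H = 2;                           % fixed height threshold used in the paper
    mi = imextendedmin(In, H);       % binary internal marker image m_i, Fig. 6(a)
    marked = In; marked(mi) = 128;   % superimpose markers as gray blobs, Fig. 6(b)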
Euclidean distance transform
Next, we find the external markers [22], or pixels that we are confident belong to the background. The external markers m_e effectively partition the image (Fig. 6a) into regions, with each region containing a single internal marker and part of the background. The approach is to mark the background by finding pixels that are exactly midway between the internal markers m_i. The problem is thus reduced to partitioning each of these regions into two: (1) a single object and (2) its background. This can be done by computing the watershed segmentation (WS) of the distance map of the inverse of m_i, as shown in Eq. 4.

m_e = WS(\varepsilon_d(\tilde{m}_i)) \qquad (4)

where \tilde{m}_i denotes the inverse of m_i and ε_d represents the Euclidean distance transform, which is used in conjunction with the watershed transform for segmentation. For each pixel, the distance transform assigns a number that is the distance between that pixel and the nearest nonzero pixel of m_i. Figure 7(a) shows the resulting watershed ridge lines, which serve as the external markers, in the binary image.
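In toolbox terms, this corresponds to a distance transform followed by a watershed (a sketch; variable names continue from the previous fragments):

    D = bwdist(mi);       % distance from every pixel to the nearest internal marker pixel
    DL = watershed(D);    % watershed of the distance map
    me = (DL == 0);       % ridge lines (label 0) lie midway between markers: m_e, Fig. 7(a)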
[Fig. 4 a Image after contrast adjustment; b Negative image of Fig. 4(a)]
[Fig. 5 a Gradient image; b Oversegmentation resulting from applying the watershed transform to Fig. 5(a)]
Minima imposition
Given both the internal and the external markers, the gradient image g(x,y) of Fig. 5(a) is modified using a procedure called minima imposition, f_imposemin [22]. This technique modifies the image so that regional minima occur only in the marked locations. Other pixel values are pushed up as necessary to remove all other regional minima. We modify the gradient image by imposing regional minima at the locations of both the internal and the external markers, as shown in Eq. 5.

I_{mg} = f_{imposemin}(g(x, y), \; m_i \,|\, m_e) \qquad (5)

where f_imposemin modifies the gradient image g(x,y) using morphological reconstruction so that it has regional minima only where the superimposition (denoted by the logical “or”) of m_i and m_e, i.e., (m_i | m_e), is nonzero. The modified gradient image I_mg is shown in Fig. 7(b).
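This step maps directly onto the toolbox function imimposemin (a minimal sketch, continuing the earlier fragments):

    % imimposemin uses morphological reconstruction to suppress all other minima
    Img = imimposemin(g, mi | me);   % marker-modified gradient I_mg (Eq. 5), Fig. 7(b)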
Watershed transform
We finally compute the watershed transform of the marker-modified gradient image I_mg of Fig. 7(b) as follows:

L = WS(I_{mg}) \qquad (6)

The watershed ridge lines of the transformed image L are then superimposed on the image of Fig. 4(b) to get the image shown in Fig. 8(a). The image of Fig. 8(a) is converted into an RGB color image, as shown in Fig. 8(b), for the purpose of visualizing the labeled regions (the objects of interest).
[Fig. 6 a Internal markers; b Superimposition of the internal markers on the image of Fig. 4(b)]
[Fig. 7 a External markers; b Modified gradient magnitude]

To differentiate the OD, exudates, and cotton wool spots from other bright areas detected in the background, the technique (a procedure [24] that converts the label matrix of Fig. 8(a), returned by watershed, into an RGB color image) determines the color to assign to each object based on the number of objects in the label matrix and the range of colors in the colormap. The procedure picks colors from the entire range, and objects with similar characteristics are filled with the same color. Therefore, the objects filled with white within the marked region represent the brightest regions of interest, such as the OD, exudates, and cotton wool spots. The additional markers in Fig. 8(b) represent other bright pixels of the retinal image that belong to the background (as the external marker covers a larger area than the diameters of the OD and exudates).
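The final transform and the label-to-RGB conversion can be sketched with watershed and label2rgb; the colormap and shuffle options are our assumptions about the conversion procedure in [24]:

    L = watershed(Img);                          % watershed transform (Eq. 6); ridge pixels get label 0, Fig. 8(a)
    rgb = label2rgb(L, 'jet', 'w', 'shuffle');   % label matrix to RGB color image, Fig. 8(b)
    figure, imshow(rgb), title('Segmentation result in RGB color space');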
In our study, two features are identified to distinguish the hard exudates from the cotton wool spots. These two features are ‘size’ and ‘number’ (refer to Table 1). The representations of the feature values are given as follows:

size: value indicates the number of pixels covered.
number: value indicates the number of bright lesions.

In this study, hard exudates and cotton wool spots are distinguished by their size and their number of occurrences in the fundus image; these two features are sufficient to quantify them. For this case study (refer to Table 1), a bright lesion is classified as a hard exudate if its size exceeds the value of 150 pixels (the parameter is set by a trial and error procedure on several images) and as a cotton wool spot if its size is below that value.
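A minimal sketch of this size-based rule, assuming a binary mask ‘lesions’ of the detected bright lesions (with the OD excluded), derived from the white-filled watershed regions:

    stats = regionprops(lesions, 'Area');   % 'size' feature: pixels covered by each blob
    areas = [stats.Area];
    numHE  = sum(areas > 150);               % hard exudates: size above the fixed value
    numCWS = sum(areas < 150);               % cotton wool spots: size below the value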
Results and discussion
The method described in the previous section has been
tested on images of publicly available DRIVE and STARE
databases. The software tools selected are from the
MATLAB Image Processing Toolbox (IPT) [24]. Figure 9
shows some samples of the segmented images obtained by
using the proposed method.
The performance of the proposed algorithm is evaluated
on the basis of three measures: (1) true positive fraction (TPF),
(2) true negative fraction (TNF), and (3) predictive value (PV).
TPF represents the fraction of pixels correctly classified as OD, exudate, and cotton wool spot pixels. This measure is also known as sensitivity [5,21]. TNF (also known as specificity) represents the fraction of background pixels correctly classified as non-OD, non-exudate, and non-cotton-wool-spot pixels [5,21]. PV is the probability that a pixel classified as an OD, exudate, or cotton wool spot pixel really is one [5,21]. The three measures are calculated using the following equations [5,7,21]:
TPF = \frac{TP}{TP + FN} \qquad (7)

TNF = \frac{TN}{TN + FP} \qquad (8)

PV = \frac{TP}{TP + FP} \qquad (9)

where TP, FN, TN, and FP represent the true positive, false negative, true negative, and false positive values, respectively.
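These measures can be computed from binary masks as follows; ‘pred’ (algorithm output) and ‘truth’ (human-graded reference) are assumed variable names:

    TP = nnz( pred &  truth);    % true positives
    FP = nnz( pred & ~truth);    % false positives
    FN = nnz(~pred &  truth);    % false negatives
    TN = nnz(~pred & ~truth);    % true negatives
    TPF = TP / (TP + FN);        % sensitivity (Eq. 7)
    TNF = TN / (TN + FP);        % specificity (Eq. 8)
    PV  = TP / (TP + FP);        % predictive value (Eq. 9)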
Table 1 Definition of hard exudates and cotton wool spots

Lesion | Definition
HE     | Small size (> value) & number ≥ 1 (in most cases, the number of occurrences of HE in fundus images is higher than that of CWS)
CWS    | Smaller size (< value) & number ≥ 1

HE hard exudates; CWS cotton wool spots
[Fig. 8 a Watershed segmentation result; b Segmentation result in RGB color space]
The TPF, TNF, and PV values are determined using human-graded images as reference images. Figure 10 shows the sensitivity, specificity, and predictive values obtained using the proposed method on different images of the DRIVE and STARE databases.
Table 2 shows sample images for the different categories of bright lesions along with their feature characteristics. A total of 20 fundus images related to diabetic retinopathy are selected from the DRIVE and STARE databases for comparison. Table 3 compares the detection results obtained by our proposed system with those obtained or judged by the human experts (an ophthalmologist and a surgeon from the medical center). It is found that the results obtained by the proposed method are comparable to those obtained by others. The results analysis further reveals that 16 out of 20 fundus images are correctly detected by the proposed screening system.
[Fig. 9 a Original image; b Segmented image (multiple sample pairs)]
The correct sorting rate (which is defined as the percentage of times the algorithm finds the correct features over a number of sample images) for this simulation is 80%, which indicates that only four images are wrongly sorted (marked with an asterisk in Table 3). The misdetections are due to poor image quality (as the OD is not bright in some cases) or uncalibrated image processing.
The results obtained using the proposed method are also compared with those obtained by using Walter’s method [7]. To perform a fair comparison between the proposed method and Walter’s method, we have implemented Walter’s method on the same test images of the DRIVE and STARE databases to obtain Fig. 11. Thus, Fig. 11 compares the sensitivity values and the PV values obtained for the test images using both methods. Table 4 shows the average values of sensitivity, specificity, and PV obtained using the proposed method and Walter’s method [7]. From Fig. 11 and Table 4, we note that the proposed method gives better sensitivity or TPF values compared to Walter’s method [7] on the test images from the DRIVE and STARE databases. For both methods, high specificity (almost 100%) values have been obtained. With respect to PV, the values obtained by both methods do not differ significantly.
Table 2 Feature characteristics for the bright lesions

Feature name | OD      | OD, HE     | OD, HE, CWS
Size         | –       | small (HE) | smaller (CWS)
Number       | –       | HE = 1     | HE = 2, CWS = 1
Sample image | [image] | [image]    | [image]

Note: OD optic disc; HE hard exudates; CWS cotton wool spots
[Fig. 10 The sensitivity (TPF), specificity (TNF), and predictive values (%) on different images of the DRIVE and STARE databases]
Table 3 Performance comparison with reference images detected by human experts

Image ID | Detected features (human experts) | Detected features (proposed method)
im100    | OD, HE      | OD, HE
im0008   | OD, HE      | OD, HE
im110    | OD, HE, CWS | OD, HE, CWS
im0149   | OD, HE      | HE *
im0102   | OD, HE      | OD, HE
im0022   | OD          | OD
im0023   | OD          | OD
im0048   | OD, HE, CWS | OD, HE, CWS
02_test  | OD          | OD
11_test  | OD          | OD
im0016   | OD, HE      | OD, HE
im0148   | OD, HE      | HE *
im0013   | OD, HE, CWS | OD, HE, CWS
13_test  | OD          | OD
im0143   | OD, HE      | HE *
im113    | OD, HE      | OD, HE
im111    | OD, HE      | OD, HE
im112    | OD, HE      | OD, HE
im103    | OD, HE      | HE *
im104    | OD, HE      | OD, HE

OD optic disc; HE hard exudates; CWS cotton wool spots; * wrongly sorted image
[Fig. 11 Performance comparison of both methods with respect to the sensitivity (TPF) and predictive values (%) on the test images: Walter's method versus the proposed method]
Table 4 Performance of segmentation techniques on the DRIVE and STARE databases

Method          | TPF value (sensitivity) | TNF value (specificity) | PV (predictive value)
Walter [7]      | 92.74%                  | 100%                    | 92.39%
Proposed method | 94.90%                  | 100%                    | 92.01%
Conclusions
This paper describes a new method for automatic detection
of OD and other bright lesions, such as hard exudates and
cotton wool spots in colour fundus images, which is a very
important subject in computer assisted diagnosis of retinal
diseases. The method consists of three steps: preprocessing, marker construction, and the watershed algorithm. Basically, the described method applies a classical marker-controlled watershed transformation after a pre-filtering step. The originality or novelty of the paper mainly lies in the selection of the markers, which successfully match the OD and exudates. The results are then compared
with the results obtained by a previously published method.
The experimental results on DRIVE and STARE databases
show that the proposed method yields better sensitivity
values compared to Walter’s method. In comparison with
the detection results obtained by the human experts, the
proposed method yields correct sorting rate of 80%.
From a medical point of view, the method segments all bright patterns in a colour fundus image with the possibility of distinguishing between the lesions (e.g., between hard exudates and cotton wool spots), which is a real advantage. Hence, the method can be applied for computer-
assisted diagnosis of retinal diseases. In the future, for
instance, it is of interest to distinguish between normal and
pathological retinas with the proposed method.
Acknowledgment This research work is supported by the E-Science Project (No: 01-02-01-SF0025) sponsored by the Ministry of Science, Technology and Innovation (MOSTI), Malaysia.
References
1. Reza, A. W., Eswaran, C., and Hati, S., Diabetic retinopathy: A
quadtree based blood vessel detection algorithm using RGB
components in fundus images. J. Med. Syst. 32(2):147–155,
2008.
2. Teng, T., Lefley, M., and Claremont, D., Progress towards
automated diabetic ocular screening: A review of image analysis
and intelligent systems for diabetic retinopathy. Med. Biol. Eng.
Comput. 40:2–13, 2002.
3. Yen, G. G., and Leong, W.-F., A sorting system for hierarchical
grading of diabetic fundus images: A preliminary study. IEEE
Trans Inf Technol Biomed 12(1):118–130, 2008.
4. Usher, D., Dumskyj, M., Himaga, M., Williamson, T. H., Nussey,
S., and Boyce, J., Automated detection of diabetic retinopathy in
digital retinal images: A tool for diabetic retinopathy screening.
Diabet. Med. 21:84–90, 2003.
5. Reza, A. W., Eswaran, C., and Hati, S., Automatic tracing of optic
disc and exudates from color fundus images using fixed and
variable thresholds. J. Med. Syst. 33(1):73–80, 2009.
6. Eswaran, C., Reza, A. W., and Hati, S., Extraction of the contours
of optic disc and exudates based on marker-controlled watershed
segmentation. Proceedings of the International Conference on
Computer Science and Information Technology, Singapore, pp.
719–723, 2008.
7. Walter, T., Klein, J.-C., Massin, P., and Erginay, A., A
contribution of image processing to the diagnosis of diabetic
retinopathy—Detection of exudates in color fundus images of the
human retina. IEEE Trans. Med. Imag. 21(10):1236–1243, 2002.
8. Ward, N. P., Tomlinson, S., and Taylor, C. J., Image analysis of fundus photographs—The detection and measurement of exudates associated with diabetic retinopathy. Ophthalmology 96:80–86, 1989.
9. Akita, K., and Kuga, H., A computer method of understanding
ocular fundus images. Pattern Recogn. 15(6):431–443, 1982.
10. Sinthanayothin, C., Boyce, J. F., Cook, H. L., and Williamson, T. H., Automated localization of the optic disc, fovea and retinal blood vessels from digital color fundus images. Br. J. Ophthalmol. 83:231–238, 1999.
11. Tamura, S., and Okamoto, Y., Zero-crossing interval correction in
tracing eye-fundus blood vessels. Pattern Recogn. 21(3):227–233,
1988.
12. Pinz, A., Prantl, M., and Datlinger, P., Mapping the human retina.
IEEE Trans. Med. Imag. 1:210–215, 1998.
13. Mendels, F., Heneghan, C., and Thiran, J.-P., Identification of the
optic disc boundary in retinal images using active contours.
Proceedings of Irish Machine Vision image Processing (IMVIP),
Maynooth, Ireland, pp. 103–115, 1999.
14. Walter, T., and Klein, J. C., Segmentation of color fundus images
of the human retina: Detection of the optic disc and the vascular
tree using morphological techniques. Proceedings of the second
International Symposium: Medical Data Analysis, Madrid, Spain,
pp. 282–287, 2001.
15. Li, H., and Chutatape, O., Automatic detection and boundary
estimation of the optic disk in retinal images using a model-based
approach. J. Electron. Imag. 12(1):97–105, 2003.
16. Li, H., and Chutatape, O., Automated feature extraction in color
retinal images by a model based approach. IEEE Trans. Biomed.
Eng. 51(2):246–254, 2004.
17. Niemeijer, M., Abramoff, M. D., and van Ginneken, B., Segmentation of the optic disc, macula and vascular arch in fundus photographs. IEEE Trans. Med. Imag. 26(1):116–127, 2007.
18. Vallabha, D., Dorairaj, R., Namuduri, K., and Thompson, H.,
Automated detection and classification of vascular abnormalities
in diabetic retinopathy. Proceedings of Thirty-Eighth Asilomar
Conference on Signals, Systems and Computers, vol. 2, pp. 1625–
1629, 2004.
19. Phillips, R., Forrester, J., and Sharp, P., Automated detection and quantification of retinal exudates. Graefe’s Arch. Clin. Exp. Ophthalmol. 231:90–94, 1993.
20. Osareh, A., Mirmehdi, M., Thomas, B., and Markham, R., Automatic recognition of exudative maculopathy using fuzzy c-means clustering and neural networks. Proceedings of Medical Image Understanding and Analysis, UK, pp. 49–52, 2001.
21. Reza, A. W., and Eswaran, C., A decision support system for
automatic screening of non-proliferative diabetic retinopathy. J.
Med. Syst. Springer, 2009. doi:10.1007/s10916-009-9337-y.
22. Gonzalez, R. C., Woods, R. E., and Eddins, S. L., Digital image
processing using MATLAB. Prentice Hall, Upper Saddle River, 2004.
23. Soille, P., Morphological image analysis: principles and applica-
tions, 2nd edition. Springer-Verlag, New York, 2002.
24. Image Processing Toolbox, User’s Guide, Version 4, The Math
Works, Inc., Natick, MA, 2003.