ORIGINAL PAPER
Diagnosis of Diabetic Retinopathy: Automatic Extraction of Optic Disc and Exudates from Retinal Images using Marker-controlled Watershed Transformation

Ahmed Wasif Reza · C. Eswaran · Kaharudin Dimyati
Received: 9 September 2009 / Accepted: 27 December 2009 / Published online: 29 January 2010
© Springer Science+Business Media, LLC 2010
Abstract Due to the increasing number of diabetic retinopathy cases, ophthalmologists face a serious workload in reviewing retinal images, which makes the automatic extraction of features from these images highly desirable. The optic disc (OD), exudates, and cotton wool spots are the main features of fundus images used for diagnosing eye diseases such as diabetic retinopathy and glaucoma. In this paper, a new algorithm for the extraction of these bright objects from fundus images based on marker-controlled watershed segmentation is presented. The proposed algorithm uses average filtering and contrast adjustment as preprocessing steps. The concept of markers is used to modify the gradient image before the watershed transformation is applied. The performance of the proposed algorithm is evaluated using the test images of the STARE and DRIVE databases. It is shown that the proposed method yields an average sensitivity value of about 95%, which is comparable to the values obtained by known methods.
Keywords Diabetic retinopathy · Exudates · Cotton wool spots · Biomedical imaging and image processing · Optic disc · Watershed transformation
Introduction
Eye diseases such as diabetic retinopathy and glaucoma affect the retina and cause blindness. This condition tends to occur in patients who have had diabetes for five or more years. It is reported that more than half of all newly registered blindness is caused by retinal diseases, and diabetic retinopathy is one of the main contributors [1]. Automatic screening for eye disease has been shown to be very effective in preventing loss of sight. Manual analysis and diagnosis require a great deal of time and energy to review retinal images obtained with a fundus camera. An automated analysis and diagnosis system will therefore save significant cost and time, considering the large number of retinal images that need to be reviewed manually by medical professionals each year [2]. In summary, computer-aided analysis can assist the screening procedure in detecting diabetic retinopathy and aid ophthalmologists when they evaluate or diagnose fundus images [3]. With this motivation in mind, this paper presents an automatic tracing technique for the boundary detection of bright objects, such as the optic disc (OD), exudates, and cotton wool spots, in fundus images. The extraction of these objects forms an important step in the diagnosis of eye diseases such as diabetic retinopathy and glaucoma.
Extraction, detection, and identification of the OD, exudates, and cotton wool spots in fundus images have been one of the main focuses of modern research [3–7]. The detection of the OD in the human retina is a very important task. The OD is the entrance of the vessels and the optic nerve into the retina. It appears in color fundus images as a bright yellowish or white region. Its shape is approximately circular, interrupted by the outgoing vessels.

A. W. Reza (*) : K. Dimyati
Faculty of Engineering, Department of Electrical Engineering, University of Malaya, 50603 Kuala Lumpur, Malaysia
e-mail: awreza98@yahoo.com

C. Eswaran
Faculty of Information Technology, Multimedia University, 63100 Cyberjaya, Malaysia

J Med Syst (2011) 35:1491–1501
DOI 10.1007/s10916-009-9426-y

The size of the OD differs from patient to patient, and its diameter lies between 40 and 60 pixels in 640 × 480 color
photographs [7]. The diameter delivers a calibration for the measurements [8], and it determines approximately the localization of the macula [9], the center of vision, which is of great importance since objects in the macular region affect vision directly. Various methods [7–17], such as high gray-level variation, area thresholding, the Hough transform, a backtracking technique, morphological filtering techniques, the watershed transformation, principal component analysis (PCA), and a point distribution model, have been reported for the detection and extraction of the OD. On the other hand, there is extensive research on developing and improving image processing algorithms for the detection of exudates in retinal images [5–7]. Hard exudates are yellowish intraretinal deposits, usually located in the posterior pole of the fundus image. The exudates are made up of serum lipoproteins, considered to leak from abnormally permeable blood vessels, especially across the walls of leaking microaneurysms. Hard exudates may be observed in several retinal vascular pathologies, but they are a main feature of diabetic macular edema [7]. In fact, diabetic macular edema is the main cause of visual impairment in diabetic patients. The most effective way to diagnose macular edema is to detect the hard exudates that are normally associated with it. The exudates are well contrasted with respect to the background that surrounds them, and their shape and size vary significantly. Cotton wool spots, or soft exudates, are infarcts in the nerve fiber layer of the retina; they are round or oval in shape, pale yellow-white in color, and result from capillary occlusions that cause permanent damage to the function of the retina [18]. Hard and soft exudates can be distinguished by their color and the sharpness of their borders. Various methods [8,19,20], such as shade correction, contrast enhancement, sharpening, a combination of local and global thresholding, color normalization, fuzzy C-means clustering, and neural networks, have been reported for the detection and classification of exudates. In this study, however, the parameters "number" and "size" (refer to "Proposed methodology") are taken into account to differentiate and quantify the hard exudates and the cotton wool spots in the fundus images.
In this paper, a new technique for the detection of the OD, exudates, and cotton wool spots in ocular fundus images is proposed. The proposed method makes use of: (1) preprocessing algorithms to make the bright object features more distinguishable from the background, (2) markers to modify the gradient image in order to control oversegmentation, and (3) watershed segmentation to trace the boundaries from the marker-modified gradient. The proposed method has the following advantages compared to Walter's method [7] and our existing method [5]: (1) there is no need to select a different threshold value for each test image, even though the test images vary in brightness and contrast (the method proposed in this study relies only on H-minima, which requires a single fixed threshold); (2) it automatically segments all bright lesions in a colour fundus image without manual interaction, with the possibility of distinguishing between hard exudates and cotton wool spots; (3) the proposed method gives an average sensitivity value of about 95%, which is comparable to [7] and [5]; and (4) it will be very useful in the development of computer-based automatic screening systems for the early diagnosis of diabetic retinopathy.
The first part of this introductory section discussed the ways in which automated image processing techniques can contribute to the diagnosis of diabetic retinopathy, and highlighted the properties and state of the art of the OD, hard exudates, and cotton wool spots. The organization of the remaining part of this paper is as follows: in "Proposed methodology", the methodology for detecting bright objects in fundus images is presented. "Results and discussion" presents the test results; moreover, it includes a comparative performance study with a known method [7] and with the results obtained by human experts. Finally, "Conclusions" concludes the paper along with suggestions for foreseeable future work.
Proposed methodology
Figure 1 shows the flow chart of the proposed method, with the symbols used for each input and output.
Average filtering
In the original fundus image (Fig. 2a), the intensity variation between the bright objects and the blood vessels is relatively high, and the vessels usually have poor local contrast with respect to the background. Isolating the OD and the other bright parts is a fundamental problem, and hence preprocessing of the image for subsequent analysis becomes extremely crucial. Therefore, in the first phase, an averaging filter of size 25 × 35 pixels (the parameter is set by a trial-and-error procedure on several images) containing equal weights of value 1 is applied to the original image R_i(x, y) (where i = 1, ..., 25 × 35) in order to blend the small objects with low intensity variations into the background, while leaving the objects of interest relatively unchanged. The average filter is implemented using Eq. 1 [5,21]:

f(x,y) = \frac{1}{MN} \sum_{i=1}^{MN} R_i(x,y)    (1)
Fig. 1 Phases of the proposed algorithm. (Flow chart showing: input RGB image R_i(x,y); average filtering f(x,y); contrast stretching transformation T(x,y); negative transformation I_n; gradient magnitude g(x,y); extended minima transformation giving internal markers m_i; inverse of internal markers ~(m_i); Euclidean distance transform ε_d(~(m_i)); watershed transform giving external markers m_e; minima imposition (m_i | m_e) giving I_mg; watershed transform L; superimposition; RGB conversion; RGB segmented output.)
Fig. 2 (a) Original image; (b) average filtered image
where M = 25 and N = 35. In Eq. 1, average filtering uses a 25 × 35 mask and takes the average of all values within the mask to obtain the output at each point (x, y) of the original image. The resultant image f(x, y) after applying the average filter of Eq. 1 is shown in Fig. 2(b). As seen in Fig. 2(b), the small objects are blended into the background while the objects of interest remain relatively unchanged.
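To make the preprocessing concrete, the averaging step can be sketched in MATLAB, whose Image Processing Toolbox is the software environment used in this paper [24]; the file name below is a placeholder:

    % Read a fundus image (file name is a placeholder).
    R = imread('fundus_image.tif');

    % 25-by-35 equal-weight averaging mask of Eq. 1; 'replicate'
    % padding avoids dark borders at the image edge.
    h = fspecial('average', [25 35]);
    f = imfilter(R, h, 'replicate');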
Contrast stretching transformation
As the green channel contains good contrast between the background and the bright retinal components, it is reliable to work on the green channel of the RGB color space in order to localize the OD and exudates [5,10,21]. The green component extracted from the RGB color image is shown in Fig. 3(a). This image is then automatically enhanced by applying the contrast stretching transformation shown in Fig. 3(b) to make the bright object features more distinguishable from the background. In this transformation, only the darker regions have their intensity values enhanced slightly, while the brighter regions of the image remain more or less unchanged. The result is an image of higher contrast, achieved by using the contrast stretching transform function shown below [5,21]:

T(x,y) = \beta \, [f(x,y)]^{n}    (2)

where f(x, y) and T(x, y) represent the input and output (processed) pixel intensity values, respectively, 0 ≤ n ≤ 1, and \beta = inmax^{1-n}, where inmax is the desired upper-limit intensity value in the output image. The resulting transformed image T(x, y) is shown in Fig. 4(a). In the contrast stretching transformation, the parameter n equals 1 in Eq. 2. In this case, the low intensity values of the darker regions are enhanced gradually until the maximum limit intensity value inmax is reached, as indicated in Fig. 3(b).
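A minimal sketch of this step, assuming the filtered image f from the previous snippet; the value of inmax is illustrative, since the paper does not state it:

    % Green channel of the RGB image, scaled to [0, 1].
    green = im2double(f(:,:,2));

    % Contrast stretching transform of Eq. 2 with n = 1;
    % inmax is the desired upper-limit output intensity.
    n     = 1;
    inmax = 0.9;                 % illustrative value
    beta  = inmax^(1 - n);
    T     = beta * green.^n;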
Gradient magnitude
Figure 4(b) is obtained from Fig. 4(a) by applying a negative transform. The watershed transform, which treats the image as a surface where the light (background) pixels have high intensity values and the dark (object) pixels have low intensity values, can be applied to find catchment basins and watershed ridge lines in the image of Fig. 4(b). The image shown in Fig. 4(b) contains several dark blobs. The gradient magnitude is used to preprocess the image before the watershed transform is applied for segmentation. By computing the gradient magnitude of Fig. 4(b) using a linear filter (e.g., Sobel [7,22]), we obtain Fig. 5(a). It is seen from Fig. 5(a) that the gradient magnitude image g(x, y) has high pixel values along the edges and low pixel values everywhere else. If the watershed transform were applied directly to Fig. 5(a), we would get the image shown in Fig. 5(b), which contains too many watershed ridge lines that do not correspond to the objects of interest. Direct application of the watershed transform to a gradient image usually leads to oversegmentation due to noise and other local irregularities of the gradient [22].

We can see that the image in Fig. 5(b) is severely oversegmented due to the presence of a large number of regional minima. To control the oversegmentation, we use an approach based on the concept of markers [7,22]. A marker is a connected component belonging to an image. We identify two types of markers, namely internal markers (associated with objects of interest) and external markers (associated with the background). These markers are then used to modify the gradient image.
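In toolbox terms, the negative transform and the Sobel gradient magnitude [7,22] can be sketched as follows:

    % Negative (photographic complement) of the adjusted image.
    In = imcomplement(T);

    % Gradient magnitude using Sobel derivative kernels.
    hy = fspecial('sobel');      % horizontal-edge kernel
    hx = hy';                    % vertical-edge kernel
    gy = imfilter(In, hy, 'replicate');
    gx = imfilter(In, hx, 'replicate');
    g  = sqrt(gx.^2 + gy.^2);    % high along edges, low elsewhere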
Fig. 3 (a) Green channel; (b) contrast stretching transform function (output intensity T(x,y) vs. input intensity f(x,y), 0–255, rising to the maximum limit inmax; curve shown for n = 1)
Extended minima transformation
The internal markers m_i shown in Fig. 6(a) are obtained from the negative image of Fig. 4(b) using the extended minima transformation f_imextendedmin [22,23], which computes the extended-minima transform, i.e., the regional minima of the H-minima transform. In Eq. 3, I_n represents the negative image (Fig. 4b) and H is the height threshold, a fixed parameter (H = 2):

m_i = f_{imextendedmin}(I_n, H)    (3)

Here m_i is a binary image whose foreground pixels mark the locations of the deep regional minima. The extended minima transformation determines the groups of brightest pixels belonging to the foreground, such that the points in each region form a connected component and all the points in the connected component have the same gray-level value. Superimposing the internal markers as gray blobs on the image of Fig. 4(b) gives the image shown in Fig. 6(b).
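In the toolbox, f_imextendedmin corresponds to imextendedmin; a sketch, assuming intensities on the usual 0–255 scale (so that the fixed threshold H = 2 is meaningful):

    % Internal markers: extended-minima transform of the negative
    % image with height threshold H = 2 (Eq. 3).
    H  = 2;                               % assumes a 0-255 scale
    mi = imextendedmin(im2uint8(In), H);  % binary marker image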
Euclidean distance transform
Next, we find the external markers [22], i.e., pixels that we are confident belong to the background. The external markers m_e effectively partition the image (Fig. 6a) into regions, each containing a single internal marker and part of the background. The approach is to mark the background by finding pixels that are exactly midway between the internal markers m_i. The problem is thus reduced to partitioning each of these regions into two parts: (1) a single object and (2) its background. This can be done by computing the watershed segmentation (WS) of the distance map of the inverse of m_i, as shown in Eq. 4:

m_e = WS(\varepsilon_d(\sim m_i))    (4)

where \varepsilon_d represents the Euclidean distance transform, which is used in conjunction with the watershed transform for segmentation. For each pixel, the distance transform assigns a number that is the distance between that pixel and the nearest nonzero pixel of m_i. Figure 7(a) shows the resulting watershed ridge lines as external markers in the binary image.

Fig. 4 (a) Image after contrast adjustment; (b) negative image of Fig. 4(a)
Fig. 5 (a) Gradient image; (b) oversegmentation resulting from applying the watershed transform to Fig. 5(a)
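A sketch of Eq. 4 in toolbox terms: bwdist assigns each pixel its Euclidean distance to the nearest nonzero pixel of mi, and the zero-labeled pixels of the watershed of this distance map are the ridge lines:

    % External markers: ridge lines of the watershed of the
    % Euclidean distance transform of the internal markers (Eq. 4).
    D  = bwdist(mi);              % distance to nearest marker pixel
    Ld = watershed(D);
    me = (Ld == 0);               % pixels midway between markers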
Minima imposition
Given both the internal and the external markers, the gradient image g(x, y) of Fig. 5(a) is modified using a procedure called minima imposition, f_imposemin [22]. This technique modifies the image so that regional minima occur only at the marked locations; other pixel values are pushed up as necessary to remove all other regional minima. We modify the gradient image by imposing regional minima at the locations of both the internal and the external markers, as shown in Eq. 5:

I_{mg} = f_{imposemin}(g(x,y), \; m_i \,|\, m_e)    (5)

where f_imposemin modifies the gradient image g(x, y) using morphological reconstruction so that it has regional minima only where the superimposition (denoted by the logical OR) of m_i and m_e, i.e., (m_i | m_e), is nonzero. The modified gradient image I_mg is shown in Fig. 7(b).
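In the toolbox, f_imposemin corresponds to imimposemin, so the marker-based modification of the gradient is a single call:

    % Impose regional minima on the gradient image only at the
    % internal and external marker locations (Eq. 5).
    Img = imimposemin(g, mi | me);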
Watershed transform
We finally compute the watershed transform of the marker-modified gradient image I_mg of Fig. 7(b) as follows:

L = WS(I_{mg})    (6)

The watershed-transformed image L then superimposes the watershed ridge lines on the image of Fig. 4(b) to give the image shown in Fig. 8(a). The image of Fig. 8(a) is converted into an RGB color image, as shown in Fig. 8(b), for the purpose of visualizing the labeled regions (the objects of interest).

Fig. 6 (a) Internal markers; (b) superimposition of the internal markers on the image of Fig. 4(b)
Fig. 7 (a) External markers; (b) modified gradient magnitude

To differentiate the OD, exudates, and cotton wool spots from other bright areas detected in the background, the technique (a procedure [24] that converts the label matrix of Fig. 8(a), returned by watershed, into an RGB color image) determines the color to assign to each object based on the number of objects in the label matrix and the range of colors in the colormap. The procedure picks colors from the entire range, and objects with similar characteristics are filled with the same color. Therefore, the objects filled with white within the marked region represent the brightest regions of interest, such as the OD, exudates, and cotton wool spots. The additional markers in Fig. 8(b) represent other bright pixels of the retinal image that belong to the background (since the external marker covers a larger area than the diameters of the OD and exudates).
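A sketch of the final segmentation and visualization step; label2rgb is the label-matrix-to-RGB procedure [24], and the colormap choice here is illustrative:

    % Watershed of the marker-modified gradient (Eq. 6) and
    % conversion of the label matrix to an RGB image for display.
    L   = watershed(Img);
    rgb = label2rgb(L, 'jet', 'w', 'shuffle');
    imshow(rgb)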
In our study, two features are identified to distinguish the hard exudates from the cotton wool spots. These two features are "size" and "number" (refer to Table 1). The feature values are interpreted as follows:

size: the number of pixels covered by a lesion.
number: the number of bright lesions.

In this study, hard exudates and cotton wool spots are distinguished by their size and their number of occurrences in the fundus image, and these two features are sufficient to quantify them. In this case study (refer to Table 1), a bright lesion is defined as a hard exudate if its size exceeds the value of 150 pixels (the parameter is set by a trial-and-error procedure on several images) and as a cotton wool spot if its size is below that value.
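A hedged sketch of this size-based sorting; the binary mask bright of detected bright lesions (with the OD excluded) is an assumed input, since the paper identifies these objects by their color in the labeled image:

    % Sort bright lesions by size with the 150-pixel threshold.
    stats  = regionprops(bright, 'Area');  % 'bright' is assumed
    areas  = [stats.Area];
    numHE  = sum(areas > 150);    % hard exudates: larger lesions
    numCWS = sum(areas <= 150);   % cotton wool spots: smaller ones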
Results and discussion
The method described in the previous section has been tested on images of the publicly available DRIVE and STARE databases. The software tools used are from the MATLAB Image Processing Toolbox (IPT) [24]. Figure 9 shows some samples of the segmented images obtained using the proposed method.
The performance of the proposed algorithm is evaluated on the basis of three measures: (1) the true positive fraction (TPF), (2) the true negative fraction (TNF), and (3) the predictive value (PV). TPF represents the fraction of pixels correctly classified as OD, exudate, and cotton wool spot pixels; this measure is also known as sensitivity [5,21]. TNF (also known as specificity) represents the fraction of background pixels correctly classified as non-lesion pixels [5,21]. PV is the probability that a pixel classified as OD, exudate, or cotton wool spot really is an OD, exudate, or cotton wool spot pixel [5,21]. The three measures are calculated using the following equations [5,7,21]:

TPF = \frac{TP}{TP + FN}    (7)

TNF = \frac{TN}{TN + FP}    (8)

PV = \frac{TP}{TP + FP}    (9)
where TP, FN, TN, and FP represent the true positive, false negative, true negative, and false positive counts, respectively. The TPF, TNF, and PV values are determined using human-graded images as reference images. Figure 10 shows the sensitivity, specificity, and predictive values obtained using the proposed method on different images of the DRIVE and STARE databases.

Table 1 Definition of hard exudates and cotton wool spots

Lesion   Definition
HE       Small size (> value) & number ≥ 1 (in most cases, the number of occurrences of HE in fundus images is higher than that of CWS)
CWS      Smaller size (< value) & number ≥ 1

HE hard exudates; CWS cotton wool spots

Fig. 8 (a) Watershed segmentation result; (b) segmentation result in RGB color space
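For completeness, a sketch of the pixel-level evaluation, assuming logical masks seg (algorithm output) and ref (human-graded reference) of equal size:

    % Confusion counts between segmentation and reference masks.
    TP = sum( seg(:) &  ref(:));
    FP = sum( seg(:) & ~ref(:));
    FN = sum(~seg(:) &  ref(:));
    TN = sum(~seg(:) & ~ref(:));

    TPF = TP / (TP + FN);   % sensitivity, Eq. 7
    TNF = TN / (TN + FP);   % specificity, Eq. 8
    PV  = TP / (TP + FP);   % predictive value, Eq. 9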
Table 2 shows sample images in terms of different categories of bright lesions along with their feature characteristics. A total of 20 fundus images related to diabetic retinopathy are selected from the DRIVE and STARE databases for comparison. Table 3 compares the detection results obtained by our proposed system with those obtained or judged by the human experts (an ophthalmologist and a surgeon from the medical center). It is found that the results obtained by the proposed method are comparable to those obtained by the experts. The analysis of the results further reveals that 16 out of 20 fundus images are correctly detected by the proposed screening system. The correct sorting rate (defined as the percentage of times the algorithm finds the correct features over a number of sample images) for this simulation is 80%, which indicates that only four images are wrongly sorted (highlighted in Table 3). The misdetections are due to poor image quality (as the OD is not bright in some cases) or uncalibrated image processing.

Fig. 9 (a) Original image; (b) segmented image (four sample pairs)
The results obtained using the proposed method are also compared with those obtained using Walter's method [7]. To perform a fair comparison between the two methods, we have implemented Walter's method on the same test images of the DRIVE and STARE databases to obtain Fig. 11. Thus, Fig. 11 compares the sensitivity values and the PV values obtained for the test images using both methods. Table 4 shows the average values of sensitivity, specificity, and PV obtained using the proposed method and Walter's method [7]. From Fig. 11 and Table 4, we note that the proposed method gives better sensitivity (TPF) values than Walter's method [7] on the test images from the DRIVE and STARE databases.
Table 2 Feature characteristics for the bright lesions

Feature   OD   OD, HE       OD, HE, CWS
Size      -    small (HE)   smaller (CWS)
Number    -    HE = 1       HE = 2, CWS = 1

Note: OD = optic disc; HE = hard exudates; CWS = cotton wool spots. (A sample image accompanies each category in the original table.)
Fig. 10 The TPF, TNF, and PV values on different images of the DRIVE and STARE databases (sensitivity, specificity, and predictive value in %, plotted per image)
Table 3 Performance comparison with reference images detected by human experts

Image ID   Detected features (human experts)   Detected features (proposed method)
im100      OD, HE                              OD, HE
im0008     OD, HE                              OD, HE
im110      OD, HE, CWS                         OD, HE, CWS
im0149     OD, HE                              HE
im0102     OD, HE                              OD, HE
im0022     OD                                  OD
im0023     OD                                  OD
im0048     OD, HE, CWS                         OD, HE, CWS
02_test    OD                                  OD
11_test    OD                                  OD
im0016     OD, HE                              OD, HE
im0148     OD, HE                              HE
im0013     OD, HE, CWS                         OD, HE, CWS
13_test    OD                                  OD
im0143     OD, HE                              HE
im113      OD, HE                              OD, HE
im111      OD, HE                              OD, HE
im112      OD, HE                              OD, HE
im103      OD, HE                              HE
im104      OD, HE                              OD, HE

OD optic disc; HE hard exudates; CWS cotton wool spots
Fig. 11 Performance comparison of both methods with respect to TPF value and PV (sensitivity and predictive value in %, for Walter's method and the proposed method over the test images)
Table 4 Performance of segmentation techniques on the DRIVE and STARE databases

Method            TPF value (sensitivity)   TNF value (specificity)   PV (predictive value)
Walter [7]        92.74%                    100%                      92.39%
Proposed method   94.90%                    100%                      92.01%
For both methods, high specificity values (almost 100%) have been obtained. With respect to PV, the values obtained by the two methods do not differ significantly.
Conclusions
This paper describes a new method for the automatic detection of the OD and other bright lesions, such as hard exudates and cotton wool spots, in colour fundus images, which is a very important subject in the computer-assisted diagnosis of retinal diseases. The method consists of three steps: preprocessing, marker construction, and the watershed algorithm. Essentially, the described method applies a classical marker-controlled watershed transformation after a pre-filtering step. The originality of the paper lies mainly in the selection of the markers, which successfully match the OD and exudates. The results are compared with those obtained by a previously published method. The experimental results on the DRIVE and STARE databases show that the proposed method yields better sensitivity values than Walter's method. In comparison with the detection results obtained by the human experts, the proposed method yields a correct sorting rate of 80%.

From a medical point of view, the method segments all bright patterns in a colour fundus image with the possibility of distinguishing between the lesions (e.g., between hard exudates and cotton wool spots), which is an advantage. Hence, the method can be applied to the computer-assisted diagnosis of retinal diseases. In future work, it would be of interest to distinguish between normal and pathological retinas with the proposed method.
Acknowledgment This research work is supported by the E-Science Project (No: 01-02-01-SF0025) sponsored by the Ministry of Science, Technology and Innovation (MOSTI), Malaysia.
References

1. Reza, A. W., Eswaran, C., and Hati, S., Diabetic retinopathy: A quadtree based blood vessel detection algorithm using RGB components in fundus images. J. Med. Syst. 32(2):147–155, 2008.
2. Teng, T., Lefley, M., and Claremont, D., Progress towards automated diabetic ocular screening: A review of image analysis and intelligent systems for diabetic retinopathy. Med. Biol. Eng. Comput. 40:2–13, 2002.
3. Yen, G. G., and Leong, W.-F., A sorting system for hierarchical grading of diabetic fundus images: A preliminary study. IEEE Trans. Inf. Technol. Biomed. 12(1):118–130, 2008.
4. Usher, D., Dumskyj, M., Himaga, M., Williamson, T. H., Nussey, S., and Boyce, J., Automated detection of diabetic retinopathy in digital retinal images: A tool for diabetic retinopathy screening. Diabet. Med. 21:84–90, 2003.
5. Reza, A. W., Eswaran, C., and Hati, S., Automatic tracing of optic disc and exudates from color fundus images using fixed and variable thresholds. J. Med. Syst. 33(1):73–80, 2009.
6. Eswaran, C., Reza, A. W., and Hati, S., Extraction of the contours of optic disc and exudates based on marker-controlled watershed segmentation. Proceedings of the International Conference on Computer Science and Information Technology, Singapore, pp. 719–723, 2008.
7. Walter, T., Klein, J.-C., Massin, P., and Erginay, A., A contribution of image processing to the diagnosis of diabetic retinopathy – Detection of exudates in color fundus images of the human retina. IEEE Trans. Med. Imag. 21(10):1236–1243, 2002.
8. Ward, N. P., Tomlinson, S., and Taylor, C. J., Image analysis of fundus photographs – The detection and measurement of exudates associated with diabetic retinopathy. Ophthalmology 96:80–86, 1989.
9. Akita, K., and Kuga, H., A computer method of understanding ocular fundus images. Pattern Recogn. 15(6):431–443, 1982.
10. Sinthanayothin, C., Boyce, J. F., Cook, H. L., and Williamson, T. H., Automated localization of the optic disc, fovea and retinal blood vessels from digital color fundus images. Br. J. Ophthalmol. 83:231–238, 1999.
11. Tamura, S., and Okamoto, Y., Zero-crossing interval correction in tracing eye-fundus blood vessels. Pattern Recogn. 21(3):227–233, 1988.
12. Pinz, A., Prantl, M., and Datlinger, P., Mapping the human retina. IEEE Trans. Med. Imag. 1:210–215, 1998.
13. Mendels, F., Heneghan, C., and Thiran, J.-P., Identification of the optic disc boundary in retinal images using active contours. Proceedings of Irish Machine Vision Image Processing (IMVIP), Maynooth, Ireland, pp. 103–115, 1999.
14. Walter, T., and Klein, J. C., Segmentation of color fundus images of the human retina: Detection of the optic disc and the vascular tree using morphological techniques. Proceedings of the Second International Symposium: Medical Data Analysis, Madrid, Spain, pp. 282–287, 2001.
15. Li, H., and Chutatape, O., Automatic detection and boundary estimation of the optic disk in retinal images using a model-based approach. J. Electron. Imag. 12(1):97–105, 2003.
16. Li, H., and Chutatape, O., Automated feature extraction in color retinal images by a model based approach. IEEE Trans. Biomed. Eng. 51(2):246–254, 2004.
17. Niemeijer, M., Abramoff, M. D., and van Ginneken, B., Segmentation of the optic disc, macula and vascular arch in fundus photographs. IEEE Trans. Med. Imag. 26(1):116–127, 2007.
18. Vallabha, D., Dorairaj, R., Namuduri, K., and Thompson, H., Automated detection and classification of vascular abnormalities in diabetic retinopathy. Proceedings of the Thirty-Eighth Asilomar Conference on Signals, Systems and Computers, vol. 2, pp. 1625–1629, 2004.
19. Phillips, R., Forrester, J., and Sharp, P., Automated detection and quantification of retinal exudates. Graefes Arch. Clin. Exp. Ophthalmol. 231:90–94, 1993.
20. Osareh, A., Mirmehdi, M., Thomas, B., and Markham, R., Automatic recognition of exudative maculopathy using fuzzy c-means clustering and neural networks. Proceedings of Medical Image Understanding Analysis, UK, pp. 49–52, 2001.
21. Reza, A. W., and Eswaran, C., A decision support system for automatic screening of non-proliferative diabetic retinopathy. J. Med. Syst., Springer, 2009. doi:10.1007/s10916-009-9337-y.
22. Gonzalez, R. C., Woods, R. E., and Eddins, S. L., Digital image processing using MATLAB. Prentice Hall, Upper Saddle River, 2004.
23. Soille, P., Morphological image analysis: Principles and applications, 2nd edition. Springer-Verlag, New York, 2002.
24. Image Processing Toolbox, User's Guide, Version 4, The MathWorks, Inc., Natick, MA, 2003.