A New Combined Method Based on Curvelet
Transform and Morphological Operators for
Automatic Detection of Foveal Avascular Zone
Shirin Hajeb Mohammad Alipour1, Hossein Rabbani1*, Mohammadreza
1 Biomedical Engineering Dept., Medical Image & Signal Processing Research
Center, Isfahan Univ. of Medical Sciences, Isfahan, Iran
2 Ophthalmology Dept., School of Medicine, Isfahan Univ. of Medical Sciences, Isfahan, Iran
The total number of words of the manuscript, including entire text from title page to figure
The number of words of the abstract: 209
The number of figures: 16
The number of tables: 4
Abstract
In order to achieve early detection of Diabetic Retinopathy (DR) and thereby prevent blindness, regular screening using retinal photography is necessary. The abnormalities of DR are not uniformly distributed over the retina; certain types of abnormalities usually occur in specific areas. The distance between lesions, such as Micro-Aneurysms (MAs), and the Foveal Avascular Zone (FAZ) is a useful feature for later analysis and grading of DR. In this paper a new fully automatic system is presented to find the location of the FAZ in fundus fluorescein angiogram photographs. The method is based on two procedures: the Digital Curvelet Transform (DCUT) and morphological operations. Firstly, the end points of vessels are detected based on vessel segmentation using DCUT; by connecting these points in a selected region of interest (ROI), the FAZ region is extracted. Secondly, the vessels are subtracted from the retinal image and morphological dilation and erosion are applied to the resulting image; by choosing an appropriate threshold, the FAZ region is detected. The final FAZ region is extracted by performing a logical AND between the two segmented FAZ regions. Our experiments show that the system achieves specificity and sensitivity of >98% and >96%, respectively, for the normal stage, >98% and >95% for Mild/Moderate Non-Proliferative DR (NPDR), and >97% and >93% for Severe NPDR+PDR.
Keywords: diabetic retinopathy (DR), foveal avascular zone (FAZ), fundus fluorescein angiogram, digital curvelet transform (DCUT), morphological operations
1. Introduction

Diabetic retinopathy (DR) is a major complication of diabetes that changes the blood vessels of the retina and distorts the patient's vision. In order to achieve early detection of DR and thereby prevent the blindness and vision loss of its advanced stages, regular screening based on retinal photography is necessary [1-2]. In recent years the main stages of screening have been performed using automatic methods for retinal image analysis [1-15]. Detection of the macula and fovea locations in retinal images is an important task for the automatic detection of retinal disease in photographs of the retina. The fovea is responsible for sharp central vision and is located in the center of a dark, vessel-free area within the macula known as the Foveal Avascular Zone (FAZ) (see Fig. 1). Because of its importance in vision, the distance between lesions and the fovea (or the center of the fovea, known as the "fovea centralis") is an important landmark for grading the severity of DR. On this basis, detection of the FAZ is an important task in the processing of retinal photographs.
L. Kovacs et al. combined different types of Optic Disc (OD) and macula detectors represented by a weighted complete graph. The worst vertices of the graph were removed by a node-pruning procedure, and finally weighted averages were applied to get the best possible detector outputs. Kenneth et al. reported a method based on digital red-free fundus photography for detecting the macula: first, a geometric model of the vasculature was extracted, then the macula was localized based on the optic nerve location. S. Sekhar et al. reported a method based on applying a threshold iteratively within a Region of Interest (ROI), and then performing a morphological opening operation to identify the macula. Another reported method defined the ROI by using the OD height and then extracted the macula by finding the lowest pixel intensities. M. Niemeijer et al. utilized a k-NN regressor to predict the distance of each pixel in the image to the fovea based on a set of features measured at that location; the method combined features measured directly in the image with features derived from a segmentation of the vascular arch. A predefined ROI was scanned to detect the fovea, and the pixel with the lowest predicted distance to the fovea within the ROI was selected as the fovea location. J. Gutiérrez et al. worked on fluorescein angiogram images; they characterized the boundary of the foveal zone using B-snakes and a greedy algorithm to minimize an appropriate energy. Zana et al. presented a region-merging algorithm based on watershed cell decomposition and morphological operations for macula localization.
Note that most of the methods reported so far (e.g., [22-23]) only detect the location of the fovea or an approximation of the macula (a circular region) [20, 25], not the FAZ. The first attempts to evaluate the FAZ go back a quarter of a century [26-28], when Philips et al. proposed a semi-quantitative method, dependent on evaluation by a trained observer, for quantifying macular oedema. In this method, after manually defining an ROI (a square centered on the fovea), pixels whose gradient was below a threshold were identified as corresponding to leakage. Later, in 2005, after manual detection and delineation of an ROI for FAZ detection, ImageJ was used for extraction of the FAZ perimeter and surface area. In another work, Conrath et al. proposed a semi-automated method based on a region-growing function that needs manual definition of a square window in the center of the FAZ. Clearly these methods are not automatic, and some regions (e.g., the ROI or the FAZ center) must be defined by the user.

A more recent work along these lines is a level-set based segmentation method for FAZ extraction that yields promising results, but in this method an initializing contour must be manually placed inside the FAZ.
In this paper we detect the FAZ in Fundus Fluorescein Angiography (FFA) retinal images. FFA enables the study of blood circulation in the retina in normal and abnormal states. In order to photograph the retina, sodium fluorescein is injected intravenously and flows through all the capillaries of the retina. Only when a vessel is damaged can fluorescein leak out of the retinal vessels into the retina; under this condition, Micro-Aneurysms (MAs) are visible as small white dots (Fig. 2). In FFA images, small blood vessels and MAs are more distinguishable than in color fundus images because of their intensity. Although FAZ detection methods based on FFA images have some disadvantages, such as being time consuming and invasive, they remain the gold standard for detecting the FAZ; methods that noninvasively visualize the FAZ usually rely on entoptic phenomena or adaptive optics imaging, which remain experimental or less popular [32-35]. Other methods based on genetic snakes, a Bayesian statistical technique, and a Markov random field method showed promising results in automated FAZ segmentation, but the reliability and reproducibility of these techniques have not been investigated. In some other methods, such as a region-growing based method and a thresholding method, the results are not as good as those of other methods due to the weakness of these algorithms in handling noise and variations in image intensity.
Note that FAZ detection can be considered a segmentation problem, and from this point of view the techniques can be categorized into various groups. The first group are classifier-based methods, which use supervised learning such as an artificial neural network, a k-NN classifier, or a Bayesian classifier. The main issue with these methods is that building a large database and a training step are necessary for the final segmentation. Another group of methods uses properties of the FAZ, such as its low pixel intensity and oval shape, or its location inside the vascular arch [40-41], or matches a template to locate the required place [42-44]. These methods usually need several prerequisites for successful final segmentation; e.g., in vascular-arch based methods the vascular arch must be visible.
The proposed method in this paper is based on the main anatomical definition of the FAZ, i.e., a vessel-free region around the fovea. Accordingly, we detect the vessels and try to find the FAZ by connecting the vessel end points in a predefined ROI. Specifically, in this work we present FAZ detection algorithms based on the Digital Curvelet Transform (DCUT) and morphological operators. DCUT is applied to gray-scale FFA images in order to detect the OD and vessels. The ROI is then defined relative to the OD location. As discussed above, the FAZ is a dark, vessel-free region, so in the next step we scan the predefined ROI for vessel end points, and finally the FAZ is extracted by connecting the selected end points. A morphological-based method is then employed to improve the FAZ detection procedure. This method, which tries to benefit from the advantages of both the theoretical and the empirical definitions of the FAZ (through a two-pronged approach), is fully automatic and is successful in handling noise and variations in image intensity and quality.
The paper is organized as follows. Section 2 explains the proposed method for FAZ detection: the preprocessing step, the DCUT-based FAZ detection method, the morphological-based method for FAZ detection, and the final FAZ extraction technique. Section 3 is dedicated to the results of our FAZ detection method applied to 30 FFA images from normal and 40 FFA images from abnormal subjects. Finally, the paper is concluded in Section 4.
2. Proposed Method
Our algorithm includes a preprocessing step and two main branches for FAZ detection. One branch employs DCUT to extract the FAZ by connecting vessel end points; the other is a morphological-based FAZ detection method. The final result is obtained by combining the outputs of the two branches (see Fig. 3). The DCUT-based branch of this two-pronged approach would theoretically produce the desired FAZ, since it connects the end points of vessels. On the other hand, specialists are empirically more familiar with the FAZ as a nearly circular region; on this basis, the morphological-based method, which yields a smoother and nearly circular FAZ region, is introduced. This method is based on the fact that the darkest region (obtained by removing vessels, applying a closing operator, and thresholding) corresponds to the FAZ. By combining both methods we try to benefit from the advantages of both the theoretical and the empirical definitions of the FAZ. In addition, combining the two methods helps us detect failure cases, such as a deviation of the FAZ position with respect to the OD (by only several degrees), which results in a wrong ROI for connecting the vessel end points in the DCUT-based method. This situation can be detected by comparing the results of the DCUT-based and morphological-based methods: if the final segmentation result is very different from that of the DCUT-based and/or morphological-based method, the failure can be detected.
2.1. Preprocessing

First of all, it is necessary to remove the outer boundary of the images, because this boundary may interfere with the detection of the FAZ. So we first find the extreme boundary, and a circular mask is applied to the image (see Fig. 4). Then we use the Contrast Limited Adaptive Histogram Equalization (CLAHE) algorithm and illumination equalization to enhance contrast and achieve uniform illumination.
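The contrast-enhancement step can be illustrated with a plain-Python sketch. CLAHE additionally tiles the image and clips each tile's histogram before equalizing; the minimal global histogram equalization below is an illustrative simplification of that core remapping, not the paper's implementation:

```python
def hist_equalize(img, levels=256):
    """Global histogram equalization of a 2-D list of gray levels.

    CLAHE extends this idea by operating on local tiles with a clip
    limit; only the basic CDF remapping is shown here.
    """
    flat = [p for row in img for p in row]
    n = len(flat)
    hist = [0] * levels
    for p in flat:
        hist[p] += 1
    # cumulative distribution function of the gray levels
    cdf, running = [], 0
    for h in hist:
        running += h
        cdf.append(running)
    cdf_min = next(c for c in cdf if c > 0)
    denom = max(n - cdf_min, 1)
    # look-up table mapping each level through the normalized CDF
    lut = [round((c - cdf_min) / denom * (levels - 1)) for c in cdf]
    return [[lut[p] for p in row] for row in img]
```

A low-contrast pair of gray levels is stretched to the full range: `hist_equalize([[50, 50], [100, 100]])` maps 50 to 0 and 100 to 255.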
2.2. Curvelet Transform
After preprocessing, the optic disc (OD) and vessels are detected by applying DCUT and modifying its coefficients based on the sparsity of the curvelet coefficients [49-50]. DCUT is a multi-scale directional transform which allows an almost sparse representation of objects. One of its main properties is better directional selectivity (DS) than other transforms such as wavelets; curvelets are important in the context of privileging directionality and curved structures. This property makes DCUT an appropriate tool for extracting linear and curved structures in an image, such as blood vessels in retinal images. The weak DS of wavelets results in a checkerboard artifact when an image processed in the wavelet domain is reconstructed; moreover, the sparsity of wavelets is weaker than that of curvelets (i.e., the time-frequency contents corresponding to various objects and features are more distinguishable in curvelets). In contrast to the wavelet transform, which is an optimal tool for 1D signal processing, DCUT has been designed for direct analysis of 2D signals: instead of producing 2D basis functions as a tensor product of 1D wavelets, the 2D basis functions of DCUT are produced directly from translation, scaling, and rotation of a 2D mother curvelet.
2.3. OD Detection
In [49-50], DCUT is applied to the color retinal image for detecting the OD. In this paper we use this method on gray-scale FFA images, because the vessels and MAs are much more distinguishable from the background in FFA images than in color images. The following steps are needed for detection of the OD:
1) Applying adaptive histogram equalization.
2) Applying illumination equalization.
3) Transforming the enhanced image using the wrapping-based DCUT.
4) Modifying the values of the curvelet coefficients with an exponent p. We empirically choose p=5 (see Section 3 for more explanation).
5) To segment candidate regions, we use the Canny edge detector and some morphological operators. After applying the Canny edge detector to the image produced in the previous step, the boundaries of the candidate regions are extracted. We then dilate the edges using a flat, disk-shaped structuring element (radius of 1 pixel), use a filling morphological operator (to fill holes and remove inappropriate regions), and apply an erosion operator to obtain the locations of the candidate OD regions.
6) Finally, to determine the accurate OD region we use information about the retinal vessels around these points, since the OD is partly covered by vessels. For this reason a 5×10 window is applied to the bitwise-negated binary edge map (obtained by simple thresholding of the modified green-plane image). The center of this window is placed on the center of each candidate region extracted in the previous step, and the window with the highest summation is taken as the OD location. Note that the size of this window is chosen because of the structure of the vessels around the OD.
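The vessel-density scoring of step 6 can be sketched as follows; the function name, the toy vessel map, and the candidate list are illustrative assumptions, not the paper's code:

```python
def best_od_candidate(vessel_map, candidates, win_h=5, win_w=10):
    """Score each candidate OD centre by the number of vessel pixels
    inside a win_h x win_w window centred on it; the highest-scoring
    candidate is taken as the OD location (step 6 above)."""
    rows, cols = len(vessel_map), len(vessel_map[0])

    def window_sum(r, c):
        total = 0
        for i in range(r - win_h // 2, r - win_h // 2 + win_h):
            for j in range(c - win_w // 2, c - win_w // 2 + win_w):
                if 0 <= i < rows and 0 <= j < cols:
                    total += vessel_map[i][j]
        return total

    return max(candidates, key=lambda rc: window_sum(*rc))
```

With a dense vessel blob near one candidate and nothing near the other, the vessel-rich candidate wins, mimicking the fact that the OD is partly covered by vessels.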
2.4. Vessel Detection
For detecting vessels, the following steps are proposed:
1) Inverting the gray levels of the FFA image.
2) Since DCUT is a multi-scale transform, we apply it for edge enhancement to the image resulting from step 1. In order to amplify vessels (as a preprocessing step), the following function (a modification of the function proposed for color images) is used:

y(x) = 1, if x < cσ
y(x) = ((x − cσ)/(cσ))·(m/(cσ))^p + (2cσ − x)/(cσ), if cσ ≤ x < 2cσ
y(x) = (m/x)^p, if 2cσ ≤ x < m
y(x) = (m/x)^s, if x ≥ m

where
σ: noise standard deviation, which is calculated from the original data;
p: degree of nonlinearity, which for our FFA database is 0.01;
s: dynamic range compression, which is 0 in this work;
c: normalization parameter, which is 1 for our FFA database;
m: value under which coefficients are amplified; it can be derived from the maximum curvelet coefficient (M). In this paper 0.9M is used.
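As a sketch of this enhancement step, the piecewise gain below follows the standard curvelet enhancement function from the literature, with the parameter roles listed in the text; since the authors' exact modified form is given only by reference, treat the formula and the function name as assumptions:

```python
def curvelet_gain(x, sigma, p=0.01, s=0.0, c=1.0, m=1.0):
    """Multiplicative gain for a curvelet coefficient magnitude x.

    sigma: noise standard deviation estimated from the data
    p: degree of nonlinearity, s: dynamic-range compression,
    c: normalization, m: knee below which coefficients are amplified
    (the paper uses m = 0.9 * max coefficient).
    """
    t = c * sigma
    if x < t:
        return 1.0                      # noise-level coefficients untouched
    if x < 2.0 * t:                     # smooth transition band
        return ((x - t) / t) * (m / t) ** p + (2.0 * t - x) / t
    if x < m:
        return (m / x) ** p             # mild amplification of edge content
    return (m / x) ** s                 # s = 0 passes the largest through
```

Each coefficient is multiplied by this gain; with s = 0 the coefficients above m are left unchanged, while sub-noise coefficients get gain 1.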
3) Taking the DCUT of the match-filtered response of the enhanced retinal image.
4) Removing the low-frequency component and amplifying all other coefficients. The main reason for this step is to amplify the edges (including vessels) and attenuate the other components in the image.
5) Applying the inverse DCUT.
6) Thresholding using the mean of the pixel values of the image. After the modification of the curvelet coefficients, the vessels are clearly distinguishable from the other objects in the image, so using the mean pixel value of the reconstructed image as the threshold yields acceptable vessel segmentation results. Note that an optimum threshold could be obtained for each image; however, since we want a fully automatic method this is not done for our dataset, and because the length filtering removes misclassified pixels this threshold leads to the desired results.
7) Applying length filtering to remove misclassified pixels.
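The length-filtering step of the pipeline above can be sketched as a connected-component size filter in plain Python; this is an illustrative stand-in for the authors' implementation, which is not given:

```python
def length_filter(binary, min_size):
    """Remove 4-connected components with fewer than min_size pixels,
    deleting compact misclassified blobs (such as MAs) while keeping
    elongated vessel segments."""
    rows, cols = len(binary), len(binary[0])
    out = [row[:] for row in binary]
    seen = [[False] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            if binary[r][c] and not seen[r][c]:
                # flood-fill one component
                stack, comp = [(r, c)], []
                seen[r][c] = True
                while stack:
                    i, j = stack.pop()
                    comp.append((i, j))
                    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ni, nj = i + di, j + dj
                        if (0 <= ni < rows and 0 <= nj < cols
                                and binary[ni][nj] and not seen[ni][nj]):
                            seen[ni][nj] = True
                            stack.append((ni, nj))
                if len(comp) < min_size:
                    for i, j in comp:
                        out[i][j] = 0
    return out
```

A length-6 vessel segment survives a `min_size=3` filter while an isolated one-pixel dot (an MA-like blob) is removed.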
2.5. FAZ Detection Using Curvelet Transform
After extracting the OD and vessels, in order to facilitate scanning the retinal image for the FAZ, we limit the search area to be as small as possible. The macula lies at a radius of approximately 2.5 disc diameters from the center of the OD, at almost the same horizontal level as the OD. We use this relative position of the fovea with respect to the OD to automatically define a search area, and scan only within this ROI for FAZ detection. We then detect and select the retinal blood vessel end points inside the ROI to determine and calculate the FAZ area. Our selection is based on the minimum distance between the detected points and the center of the FAZ (i.e., points far from the FAZ are rejected). So, first of all, the center of these points is found: since the FAZ is the darkest region in FFA images, we search the ROI for the lowest intensity value, and the coordinates of this dark pixel are taken as the center of the FAZ. Then each point's distance from this center is calculated. After computing the average of all the resulting distances, a point is selected as a final end point if its distance is less than the average distance. Finally, these selected end points are connected to form the FAZ region.
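The end-point selection described above can be sketched as follows; the data layout (a dictionary of ROI intensities) and the function name are illustrative assumptions:

```python
import math

def select_endpoints(end_points, roi_pixels):
    """Pick the darkest ROI pixel as the provisional FAZ centre, then
    keep only the vessel end points whose distance to that centre is
    below the mean distance (section 2.5)."""
    # roi_pixels: dict mapping (row, col) -> intensity
    center = min(roi_pixels, key=roi_pixels.get)
    dists = {pt: math.dist(pt, center) for pt in end_points}
    mean_d = sum(dists.values()) / len(dists)
    kept = [pt for pt, d in dists.items() if d < mean_d]
    return center, kept
```

An outlying end point far from the dark centre is rejected because its distance exceeds the mean, so only nearby end points are connected into the FAZ boundary.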
2.6. Morphological-based Method for FAZ Detection
In the second method, the vessels detected in the previous step by DCUT are first subtracted from the original image, and then a closing operator (dilation followed by erosion) is performed on the resulting image. For this purpose, a disk structuring element with a radius of 20 pixels is employed to eliminate the dark areas with small diameters and produce a more circular FAZ area (specialists are usually more familiar with the FAZ as a nearly circular region). In fact, by removing the vessels and then applying the closing operator, the following results are achieved: 1) the darkest region in the produced image represents the FAZ, and by choosing an appropriate threshold (e.g., the mean of the pixel values of the image) this area is detected; 2) the detected FAZ area is smoother and nearly circular.
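The morphological branch can be sketched in plain Python. For brevity this sketch uses a square structuring element of radius 1 rather than the paper's disk of radius 20, and assumes the vessels have already been subtracted; these simplifications and the function names are illustrative:

```python
def _morph(img, se_radius, op):
    """Apply max (dilation) or min (erosion) over a square window."""
    rows, cols = len(img), len(img[0])
    out = [[0] * cols for _ in range(rows)]
    for i in range(rows):
        for j in range(cols):
            vals = [img[ni][nj]
                    for ni in range(max(0, i - se_radius), min(rows, i + se_radius + 1))
                    for nj in range(max(0, j - se_radius), min(cols, j + se_radius + 1))]
            out[i][j] = op(vals)
    return out

def faz_candidate(img, se_radius=1):
    """Grayscale closing (dilation then erosion) removes small dark
    remnants, then thresholding at the image mean keeps the remaining
    dark region as the FAZ candidate."""
    closed = _morph(_morph(img, se_radius, max), se_radius, min)
    flat = [p for row in closed for p in row]
    mean = sum(flat) / len(flat)
    return [[1 if p < mean else 0 for p in row] for row in closed]
```

On a toy image, a one-pixel dark remnant is filled in by the closing and dropped, while a wide dark blob survives and is marked as the FAZ candidate.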
2.7. Final Method for FAZ Extraction
In order to have a more accurate FAZ detection procedure, a logical AND is performed between the two regions extracted by the DCUT-based and morphological-based methods. The block diagram of the final algorithm is presented in Fig. 5. As we will see in Section 3, the combination of both algorithms increases the accuracy of the delineation (according to Table 1). The morphology-based method relies on the darkest area, while the DCUT-based method relies on vessel information. As explained, by removing the vessels and then applying the closing operator in the morphological-based method, the darkest region corresponds to the FAZ, and by choosing an appropriate threshold a smoother and nearly circular region is detected (specialists are usually more familiar with the FAZ as a nearly circular region). On the other hand, connecting the vessel end points in the DCUT-based method would produce the desired FAZ, according to the theoretical definition of the FAZ. By combining both methods we try to benefit from the advantages of each.
In addition, combining the two methods helps us detect failure cases. For example, the FAZ position with respect to the OD may be deviated by several degrees, and since our DCUT-based algorithm relies on connecting the vessel end points within the ROI, it fails when the deviation exceeds several degrees. This situation can be detected by comparing the results of the DCUT-based and morphological-based methods: if the final segmentation result is very different from that of the DCUT-based and/or morphological-based method (e.g., the overlapped area of the produced FAZs can be compared to the area of the FAZ extracted by the DCUT-based method), we can conclude that the results are not acceptable and that orientation could be one of the main causes of this failure, so a new ROI must be selected.
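The combination and failure check described above can be sketched as follows; the `min_overlap` threshold is an illustrative assumption (the paper does not state a numeric criterion):

```python
def combine_and_check(faz_dcut, faz_morph, min_overlap=0.5):
    """Final FAZ = logical AND of the two branch masks; if the AND area
    is small relative to the DCUT-branch area, flag a probable failure
    (e.g. a mis-oriented ROI) so a new ROI can be selected."""
    rows, cols = len(faz_dcut), len(faz_dcut[0])
    final = [[faz_dcut[i][j] & faz_morph[i][j] for j in range(cols)]
             for i in range(rows)]
    area_and = sum(map(sum, final))
    area_dcut = sum(map(sum, faz_dcut))
    suspicious = area_dcut == 0 or area_and / area_dcut < min_overlap
    return final, suspicious
```

Two well-overlapping branch masks pass the check; disjoint masks produce an empty AND and raise the failure flag.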
3. Results

3.1. Database

As explained in Section 1, FFA is a more informative modality for FAZ detection due to its ability to emphasize vessels and to distinguish between vessels and MAs. In this work we use 8-bit FFA images from the angiography unit of Isfahan Feiz hospital. We have collected images of 70 patients at different DR stages. Fig. 2 shows examples of normal and abnormal subjects from this database.
3.2. Visual Results of Algorithm for a Sample FFA
Figures 6-14 show the results of the main steps of our FAZ detection approach. Figure 6 shows the output of each of the two dark blocks in Figure 5, and the output of the red block is illustrated in Figure 7; Figures 8-14 then go into the details of the algorithm. Fig. 8 shows the results of the proposed DCUT-based OD detection algorithm described in Section 2.3. Fig. 9 illustrates the vessels extracted using the proposed vessel detection method of Section 2.4. Note that the presence of MAs does not influence the vessel detection procedure: the length filtering in the final stage of the vessel detection algorithm removes the MAs. Figure 10 shows the extracted ROI for the FAZ search, and Figure 11 illustrates the darkest pixel in the ROI (taken as the center of the FAZ). As explained in Section 2.5, this area is extracted automatically according to the anatomical position of the macula with respect to the OD. Limiting the search area reduces the complexity of the scanning process; however, the main point of scanning only within this ROI is increased accuracy. If we searched the whole image, many points might be detected as vessel end points, leading to unacceptable FAZ extraction results. Fig. 12 shows the final results of DCUT-based FAZ detection: the unnecessary end points in the ROI are removed, and the FAZ is extracted by connecting only the end points whose distance from the center of the FAZ is less than the mean distance. The FAZ detection results for the same FFA image using the morphological-based method, and the final results using the combination of both branches (DCUT-based and morphological-based), are shown in Figures 13 and 14, respectively.
3.3. Parameters Selection
Several parameters are used in the proposed method. As explained above, most of them are obtained automatically, or we have justified why the suggested value is appropriate for all FFA images. For example, the threshold proposed for the morphological-based method and for vessel detection is set to the mean pixel intensity (note that for each image an optimum threshold could be obtained non-automatically by trial and error).

One of the most important parameters used for OD detection is the exponent p. When we amplify the curvelet coefficients and display the reconstructed image with a limited number of bits, the large coefficients tend toward the largest pixel value (e.g., ±255 for an 8-bit gray-level image) and the near-zero coefficients (those less than one) tend toward zero very fast. Since the curvelet transform yields a sparse representation, only a few large coefficients correspond to bright regions, so the main structure of the OD appears after applying the exponential operator and the inverse curvelet transform.

Note that in contrast to the image intensity level, which is a positive integer (e.g., between 0 and 255 for an 8-bit gray-level image), a curvelet coefficient is a signed real number, so only odd exponents can be used (because some curvelet coefficients are negative). p=3 is not enough to produce the whole desired OD area, while applying p≥7 causes other inessential bright objects to appear in the reconstructed image.
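The effect of the odd exponent can be demonstrated on a toy coefficient list; normalizing by the largest magnitude before exponentiation is an illustrative choice, not a step stated in the paper:

```python
def sharpen_coefficients(coeffs, p=5):
    """Normalize signed coefficients by the largest magnitude and raise
    them to an odd power p: coefficients near the maximum survive,
    sub-unit ones collapse toward zero, and the odd exponent preserves
    sign (an even p would discard it)."""
    m = max(abs(c) for c in coeffs)
    return [((c / m) ** p) * m for c in coeffs]
```

With p=5 a dominant coefficient passes through, a moderately large negative one keeps its sign, and a small one is crushed toward zero, which is why only the few large coefficients (the bright OD structure) survive the exponentiation.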
3.4. Evaluation of Algorithm and Discussion
The final proposed method has been evaluated on 70 FFA images. The FAZ area is analyzed for 30 normal-stage and 40 abnormal-stage images, the latter including mild non-proliferative DR (NPDR), moderate NPDR, severe NPDR, and proliferative DR (PDR). These results are then compared with the results reported by an ophthalmologist as our gold standard.

To evaluate our system, we defined an overlapping ratio for measuring the performance of the algorithm. This metric is defined as the ratio of the intersection of our result (A) and the gold standard (B) over their union.
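Written out, this overlapping ratio is the Jaccard index expressed as a percentage:

```latex
\mathrm{Overlap}(A, B) \;=\; \frac{|A \cap B|}{|A \cup B|} \times 100
```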
The ground-truth images (B) have been drawn based on landmark delineation (medical experts provided their opinion to define the correct FAZ region using a GUI prepared in MATLAB). The value of this ratio is between 0 and 100, with 0 indicating no overlap at all and 100 indicating perfect agreement between the algorithm's result and the gold standard. The quantitative results obtained by the proposed method are shown in Table 1. The failure cases are due to the partial and poor appearance of vessels in the image. The experimental results show that the proposed algorithm can give a sufficiently accurate location of the FAZ in both normal and abnormal cases. The inter- and intra-observer variability of this procedure for all ground-truth images, as well as the overlap between the regions obtained by the two methods (curvelet-based and morphological-based), can be seen in the Appendix for comparison with the quantitative overlap index reported in Table 1 and for checking the absence of outlier cases.
The best overlapping ratio reported for the level-set method discussed above is 0.79. Note that in that method an initializing contour must be manually placed inside the FAZ, while our method is fully automatic. In addition, the quality of the FFA images used there is much better than that of our database, which suggests that our algorithm may be more robust to lower image quality. Similarly, the ratios reported for two other methods were 0.78 and 0.77, respectively, which shows the superiority of our method. Please note that one probable reason for the good results achieved by the proposed method comes from its design, which is very specific to the studied images (this may also explain why more generic segmentation methods have lower performance).
In order to obtain the specificity and sensitivity of our method, we need to define TP, FP, FN, and TN. TP is the common area between the regions extracted by the algorithm and the regions delineated by the ophthalmologist. FP is the area that does not belong to the FAZ but is detected as FAZ by our algorithm. TN is the area that does not belong to the FAZ and is not detected as part of the FAZ. FN is the area that belongs to the FAZ but is not detected as part of it by our algorithm. Table 2 shows the specificity and sensitivity of our method for the normal and abnormal stages. Each line in this table (and in Table 1) corresponds to a single patient. In order to group the results for each sub-population of DR, the average ± std of the overlapping ratio, specificity, and sensitivity (and the number of patients) in each group (normal, Mild/Moderate NPDR, and Severe NPDR + PDR) are shown in Table 3. (These values for each patient in each group can be seen in the Appendix.)
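With these area definitions, the reported measures follow the standard formulas:

```latex
\mathrm{Sensitivity} = \frac{TP}{TP + FN}, \qquad
\mathrm{Specificity} = \frac{TN}{TN + FP}
```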
According to the main properties of the curvelet transform (e.g., edge preservation and rotation invariance), the vessel structure does not impact the detection process. We tested our algorithm on our database and, similar to the results reported for color fundus images from the DRIVE dataset, the segmentation is relatively insensitive to the vessel structure and to the wide variations in intensity that are inherent in these images. However, in some cases, due to vessel damage and enlargement of the FAZ region, our defined ROI is not large enough to surround the end points of the damaged vessels. In order to solve this problem, the ROI area must be increased in these cases (Fig. 15), as explained in the first paragraph of Section 2.
4. Conclusion

In this study, FAZ detection in retinal images based on the curvelet transform and morphological operators has been presented. The curvelet transform is used for detecting the ROI based on the extracted optic disc, and also for vessel segmentation. By connecting the vessel end points in the ROI and using morphological operators, the final FAZ is extracted. The algorithm shows high specificity and sensitivity for all stages of DR.

The FAZ region changes in the different stages of DR. By extracting useful features such as the FAZ area and roundness, and by detecting microaneurysms and exudates, we will be able to automatically detect the different stages of DR using an appropriate classifier. For example, for our database, the FAZ areas for the "Normal", "Mild/Moderate NPDR", and "Severe NPDR/PDR" stages are 3498.25, 5773, and 13624, respectively. Another measure, which reflects the roundness of the area, can be obtained by calculating the variance of the distances between points around the FAZ and the center of the FAZ; these values are 12.93, 27.03, and 110.84, respectively.
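The roundness measure just described can be computed directly; the function name and the toy boundary points are illustrative:

```python
import math

def faz_roundness(boundary_points, center):
    """Roundness measure from the text: the variance of the distances
    from FAZ boundary points to the FAZ centre. It is 0 for a perfect
    circle and grows as the region becomes irregular (as reported for
    the higher DR stages)."""
    d = [math.dist(p, center) for p in boundary_points]
    mean = sum(d) / len(d)
    return sum((x - mean) ** 2 for x in d) / len(d)
```

Four points equidistant from the centre give a variance of 0, while stretching one point outward makes the variance positive, matching the increase observed from normal to severe stages.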
To show an estimate of the inter- and intra-observer variability of the proposed FAZ detection procedure, the overlap between segmentations for all ground-truth images in this study is shown in Table 4; it can be compared directly to the quantitative overlap index reported in Table 1 to check for the absence of outlier cases. In order to group the results for each sub-population of DR (normal, Mild/Moderate NPDR, and Severe NPDR + PDR), the overlapping ratio, specificity, and sensitivity of all data in each group can be seen in Figure 16.
References

Tobin KW, Chaum SE, Govindasamy VP, and Karnowski ThP, Detection of Anatomic Structures in Human Retinal Imagery. IEEE Trans. on Medical Imaging 26: 1729-1739, 2007.
 Niemeijer M, Abramoff MD, Segmentation of the Optic Disk, Macula and Vascular Arch in
Fundus Photographs. IEEE Trans. on Medical Imaging 26:116-127, 2007.
 Hiuiqi L, Automated Feature Extraction in Color Retinal Images by a Model Based Approach.
IEEE Trans. on Biomedical Engineering 51: 246-254, 2004.
 Walter T, Massin P, Erginay A, Ordonez R, Jeulin C, Klein JC (2007) Automatic Detection of
Microaneurysms in Color Fundus Images. Med Image Analysis 11(6): 555-566.
 Walter T, Klein JC, Massin P, Erginay A, A Contribution of Image Processing to the Diagnosis
of Diabetic Retinopathy - Detection of Exudates in Color Fundus Images of the Human Retina.
IEEE Trans. Med. Imaging 21(10): 1236-1243, 2002.
 Walter T, Klein JC, Segmentation of Color Fundus Images of the Human Retina: Detection of
the Optic Disc and the Vascular Tree Using Morphological Techniques. ISMDA 2001: 282-287,
 Abràmoff MD, Garvin M, Sonka M, Retinal Imaging and Image Analysis. IEEE Reviews in
Biomedical Engineering, 3: 169-208, 2010.
 Esmaeili M, Rabbani H, Dehnavi AM, Automatic optic disk boundary extraction by the use of
curvelet transform and deformable variational level set model. Pattern Recognition 45(7): 2832-
 Zana F, Klein JC, Segmentation of vessel-like patterns using mathematical morphology and
curvature evaluation. IEEE Trans. on Image Processing 10(7): 1010-1019, 2001.
 Zana F, Klein JC, A Multi-Modal Registration Algorithm of Eye Fundus Images Using
Vessels Detection and Hough Transform. IEEE Trans. Med. Imaging 18(5): 419-428, 1999.
 Sangyeol Lee, Joseph M. Reinhardt, Philippe C. Cattin, Michael D. Abràmoff, Objective and
expert-independent validation of retinal image registration algorithms by a projective imaging
distortion model, Med Image Analysis 14 (4), 539-549, 2010.
 Patton N, Aslam TM, MacGillivray T, Deary IJ, Dhillon B, Eikelboom RH, Yogesan K,
Constable IJ, Retinal image analysis: concepts, applications and potential, Prog Retin Eye Res
 Tsai CL, Madore B, Leotta MJ, Sofka M, Yang G, Majerovics A, Tanenbaum HL, Stewart
CV, Roysam B, Automated retinal image analysis over the internet, IEEE Trans. Inf. Technol.
Biomed. 12(4):480-487, 2008.
 Ahmed MI, Amin MA, High speed detection of optical disc in retinal fundus image. Signal,
Image and Video Processing, Dec. 2012.
 Nirmala SR, Dandapat S, Bora PK, Wavelet weighted distortion measure for retinal images.
Signal, Image and Video Processing, Jan. 2012.
 Niemeijer M, Abramoff MD, Ginneken BV, Fast Detection of the Optic Disc and Fovea in
Color Fundus Photographs. Med Image Analysis 13: 859–870, 2009.
 Haddouche A, Adel M, Rasigni M, Conrath J, Bourennane S, Detection of the Foveal
Avascular Zone on Retinal Angiograms Using Markov Random Fields. Digital Signal Processing
 Regillo CD, 2007-2008 Basic and Clinical Science Course Section 12: Retina and Vitreous,
American Academy of Ophthalmology, http://one.aao.org/CE/EducationalProducts/BCSC.aspx
Accessed 7 Dec 2011.
 Kovacs L, Qureshi RJ, Nagy B, Harangi B and Hajdu A, Graph Based Detection of Optic
Disc and Fovea in Retinal Image. IEEE Int. Workshop on Soft Computing Applications, pp: 143-
 Tobin KW, Detection of Anatomic Structures in Human Retinal Imagery, IEEE Trans. on
Medical Imaging 26: 1729-1739, 2007.
 Sekhar S, Al-Nuaimy W, Nandi AK, Automated Localization of Optic Disc and Fovea in
Retinal Fundus Images. Proc. 16th European Signal Processing Conference, 5 pages, Lausanne,
 Tan NM, Wong DWK, Liu J, Ng WJ, Zhang Z, Lim JH, Tan Z, Tang Y, Li H, Lu S and
Wong TY, Automatic Detection of the Macula in the Retinal Fundus Image by Detecting Regions
with Low Pixel Intensity. IEEE Biomedical and Pharmaceutical Engineering, pp:1-5, 2009.
 Gutiérrez J, Epifanio I, de Ves E, Ferri FJ, An Active Contour Model for the Automatic
Detection of the Fovea in Fluorescein Angiographies. IEEE Int. Conf. on Pattern Recognition, pp:
 Zana F, Meunier I and Klein JC, A Region Merging Algorithm Using Mathematical
Morphology: Application to Macula Detection. Int. Symp. on Mathematical Morphology and its
Applications to Image and Signal Processing, pp: 423 - 430, 1998.
 Fleming AD, Philip S, Goatman KA, Olson JA, Sharp PF, Automated Assessment of Diabetic
Retinal Image Quality Based on Clarity and Field Definition. Invest Ophthalmol Vis Sci. 47:
1120-1125, 2006.
 Goldberg RE, Varma R, Spaeth GL, Magargal LE, Callen D. Quantification of progressive
diabetic macular nonperfusion. Ophthalmic Surg.; 20:42–45, 1989.
 Classification of diabetic retinopathy from fluorescein angiograms. ETDRS report number 11.
Early Treatment Diabetic Retinopathy Study Research Group. Ophthalmology, 98:807–822, 1991.
 Phillips RP, Spencer T, Ross PG, Sharp PF, Forrester JV. Quantification of diabetic
maculopathy by digital imaging of the fundus. Eye, 5: 130–137, 1991.
 Conrath J, Giorgi R, Raccah D, Ridings B. Foveal avascular zone in diabetic retinopathy:
quantitative vs qualitative assessment. Eye, 19:322–326, 2004.
 Conrath J, Valat O, Giorgi R, et al. Semi-automated detection of the foveal avascular zone in
fluorescein angiograms in diabetes mellitus, Clin Exp Ophthalmol, 34:119–123, 2006.
 Zheng Y, Gandhi JS, Stangos AN, Campa C, Broadbent DM, Harding SP. Automated
segmentation of foveal avascular zone in fundus fluorescein angiography. Invest Ophthalmol Vis
Sci., 51:3653–3659, 2010.
 Popovic Z, Knutsson P, Thaung J, Owner-Petersen M, Sjöstrand J. Noninvasive imaging of
human foveal capillary network using dual-conjugate adaptive optics. Invest Ophthalmol Vis Sci.;
 Martin JA, Roorda A. Direct and noninvasive assessment of parafoveal capillary leukocyte
velocity. Ophthalmology;112: 2219–2224, 2005.
 Tam J, Martin JA, Roorda A. Noninvasive visualization and analysis of parafoveal capillaries
in humans. Invest Ophthalmol Vis Sci., 51:1691–1698, 2010.
 Shin YU, Kim S, Lee BR, Shin JW, Kim SI, Novel Noninvasive Detection of the Foveal
Avascular Zone Using Confocal Red-free Imaging in Diabetic Retinopathy and Retinal Vein
Occlusion, Invest Ophthalmol Vis Sci.; 53(1):309-315, 2012.
 Ballerini L. Genetic snakes for medical images segmentation. Math Modeling Estimation
Techn Comput Vision; 3457:284–295, 1998.
 Ibáñez MV, Simó A. Bayesian detection of the fovea in eye fundus angiographies. Pattern
Recognition Lett.; 20:229–240, 1999.
 Petsatodis T, Diamantis A, Syrcos GP. A Complete Algorithm for Automatic Human
Recognition based on Retina Vascular Network Characteristics, 1st Int. Scientific Conf. e RA,
Tripolis, Greece, pp. 41-46, 2004.
 Sinthanayothin C, Boyce JF, Cook HL, and Williamson TH, Automated localization of the
optic disc, fovea, and retinal blood vessels from digital colour fundus images, British Journal of
Ophthalmology, vol. 83, no. 8, pp. 902–910, 1999.
 Fleming AD, Goatman KA, Philip S, Olson JA, Sharp PF. Automatic detection of retinal
anatomy to assist diabetic retinopathy screening. Phys Med Biol. 2007. pp. 331–345.
 Li H, Chutatape O. Automated feature extraction in color retinal images by a model based
approach. IEEE Trans. on Biomedical Engineering. 2004;51(no 2):246–254.
 Osareh A, Mirmehdi M, Thomas B, Markham R, Comparison of colour spaces for optic disc
localisation in retinal images. Proc. of the 16th Int. Conf. on Pattern Recognition (ICPR'02),
vol. 1, pp. 743–746, 2002.
 Lalonde M, Beaulieu M, Gagnon L, Fast and robust optic disc detection using pyramidal
decomposition and Hausdorff-based template matching. IEEE Trans. on Medical Imaging 20(11):
1193–1200, 2001.
 Youssif A, Ghalwash A, Ghoneim A, Optic disc detection from normalized digital fundus
images by means of a vessels' direction matched filter. IEEE Trans. on Medical Imaging 27(1):
11–18, 2008.
 Starck J-L, Murtagh F, Candès EJ, and Donoho DL, Gray and Color Image Contrast
Enhancement by the Curvelet Transform. IEEE Trans. on Image processing 12: 706-717, 2003.
 Pisano E, Zong S, Hemminger B, Deluca M, Johnston R, Muller K, Braeuning MP, Pizer SM,
Contrast Limited Adaptive Histogram Equalization Image Processing to Improve the Detection
of Simulated Spiculations in Dense Mammograms. J Digital Imaging 11: 193-200, 1998.
 Aibinu AM, Salami MJE, and Shfie AA, Retina Fundus Image Mask Generation Using
Pseudo Parametric Modeling Technique. IIUM Engineering Journal 11: 163-177, 2010.
 Candès E, Demanet L, Donoho D, Ying L, Fast Discrete Curvelet Transforms. Multiscale
Model. Simulation 5: 861-899, 2006.
 Esmaeili M, Rabbani H, Mehri Dehnavi AR and Dehghani AR, Automatic Optic Disk
Detection by the Use of Curvelet Transform, IEEE Int. Conf. on Information Technology and
Applications in Biomedicine, pp: 1-4.
 Esmaeili M, Rabbani H, Mehri Dehnavi AR, Dehghani AR, Extraction of Retinal Blood
Vessels by Curvelet Transform. IEEE Int. Conf. on Image Processing, pp: 3353-3356, 2009.
 Hajeb SH, Rabbani H, Akhlaghi M, Diabetic Retinopathy Grading by Digital Curvelet
Transform, Computational and Mathematical Methods in Medicine, vol. 2012, Article ID 761901,
11 pages, 2012.
 Fadzil MHA, Nugroho H, Izhar LI, Nugroho HA, Analysis of Retinal Fundus Images for
Grading of Diabetic Retinopathy Severity. Medical and Biological Engineering and Computing
49: 693–700, 2010.
Figure 1. Foveal Avascular Zone (FAZ) in a sample fluorescein angiogram.
Figure 2. (a) A normal fluorescein angiography image, (b) MAs in an abnormal fluorescein
angiography image.
Figure 3. The concise block diagram of our two-pronged FAZ detection algorithm.
[Diagram labels: curvelet-based FAZ detection by connecting the end points of vessels;
morphological-based FAZ detection by finding the nearly circular darkest region (empirical
definition).]
Figure 4. (a) The extracted mask image, (b) Boundary of the eye.
[Diagram labels: ROI of FAZ; vessels in ROI; closing; thresholding; FAZ extraction.]
Figure 5. The block diagram of our FAZ detection algorithm.
Figure 6. The outputs of the two dark blocks in Figure 5: a) original image, b) OD detection,
c) vessel extraction.
Figure 7. The output of the red block in Figure 5: a) original image, b) final extracted FAZ.
Figure 8. Results of OD detection for a sample image. a) Original image, b) Enhanced image by
CLAHE, c) Image after modification of bright objects by DCUT, d) Applying Canny edge
detector, e) Filling of holes, f) Extracting OD location using morphological operators.
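The morphological clean-up of steps (e)-(f) can be illustrated with a minimal sketch in Python with NumPy/SciPy. The function name, the percentile threshold and the structuring-element size here are illustrative choices, not the paper's values, and the CLAHE, curvelet and Canny stages of panels (b)-(d) are omitted:

```python
import numpy as np
from scipy import ndimage

def detect_bright_region(img, bright_pct=99.0, struct_size=5):
    """Toy bright-object (OD) localization: threshold the brightest
    pixels, fill enclosed holes, smooth with a morphological opening,
    and keep the largest connected blob. Parameters are illustrative."""
    mask = img >= np.percentile(img, bright_pct)          # bright objects
    mask = ndimage.binary_fill_holes(mask)                # fill enclosed holes
    mask = ndimage.binary_opening(
        mask, structure=np.ones((struct_size, struct_size)))
    labels, n = ndimage.label(mask)
    if n == 0:
        return mask
    sizes = ndimage.sum(mask, labels, range(1, n + 1))
    return labels == (np.argmax(sizes) + 1)               # largest blob only
```

On a synthetic frame containing one bright disc, the returned mask is a single blob centred on that disc.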
Figure 9. Results of vessel detection for a sample image. a) Inverse of FFA, b) Taking the DCUT
of the match-filtered response of the enhanced retinal image, removing the high-frequency
components, and amplifying all other coefficients, c) Thresholding, d) Applying length filtering.
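Step (d), the length filtering, amounts to discarding short connected components from the binary vessel map. A minimal NumPy/SciPy sketch follows; `min_size` is an assumed parameter, and the curvelet enhancement of steps (a)-(c) is not reproduced:

```python
import numpy as np
from scipy import ndimage

def length_filter(binary_vessels, min_size=30):
    """Remove 8-connected components with fewer than `min_size`
    pixels, keeping only vessel segments of sufficient length."""
    labels, n = ndimage.label(binary_vessels, structure=np.ones((3, 3)))
    if n == 0:
        return binary_vessels.copy()
    sizes = ndimage.sum(binary_vessels, labels, range(1, n + 1))
    keep = np.flatnonzero(sizes >= min_size) + 1   # labels to retain
    return np.isin(labels, keep)
```

Applied to a map with one long vessel segment and a few isolated specks, the filter keeps the segment and drops the specks.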
Figure 10. (a) Produced image after CLAHE, (b) ROI for the FAZ search.
Figure 11. Center of FAZ.
Figure 12. Results of curvelet-based FAZ detection for a sample image. a) ROI on the original
image, defined by the relative position of the fovea with respect to the OD, b) Extracted
end points in the ROI, c) Removing unnecessary points, d) Selected end points, e) Connecting
the end points.
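The end points of panel (b) can be found with a standard skeleton heuristic: a pixel of a one-pixel-wide skeleton is an end point if it has exactly one 8-connected neighbour. A sketch under that assumption (the paper derives end points from its curvelet-based segmentation; this neighbour-count rule is a common stand-in):

```python
import numpy as np
from scipy import ndimage

def vessel_endpoints(skeleton):
    """End points of a one-pixel-wide boolean skeleton: pixels whose
    3x3 neighbourhood contains exactly two skeleton pixels
    (the pixel itself plus one neighbour)."""
    counts = ndimage.convolve(skeleton.astype(int), np.ones((3, 3), int),
                              mode='constant', cval=0)
    return skeleton & (counts == 2)
```

For a straight skeleton segment this yields exactly its two extremities.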
Figure 13. Results of morphological-based FAZ detection for a sample image. a) Original image,
b) Image without vessels, c) Closing of (b), d) Extracted FAZ after thresholding.
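Panels (c)-(d) correspond to a grey-level closing followed by a darkness threshold. A minimal NumPy/SciPy sketch; the structuring-element size and the percentile are illustrative values, not the paper's threshold:

```python
import numpy as np
from scipy import ndimage

def dark_region_by_closing(img_no_vessels, size=7, dark_pct=2.0):
    """Grey-level closing of the vessel-free image, then keep the
    darkest pixels: the FAZ appears as a compact dark region."""
    closed = ndimage.grey_closing(img_no_vessels, size=(size, size))
    return closed <= np.percentile(closed, dark_pct)
```

On a synthetic bright frame with one dark disc, the mask covers the disc centre and excludes the bright background.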
Figure 14. Results of the final FAZ detection method for a sample image. a) The extracted FAZ
by DCUT, b) The extracted FAZ by the morphological-based method, c) The extracted FAZ by the
combined method.
Figure 15. An example image that requires a larger ROI.
Figure 16. The plot of overlapping ratio, specificity and sensitivity of our FAZ detection method
for each patient in each group (normal, Mild/Moderate NPDR, and Severe NPDR + PDR).
Table 1. Overlapping ratio for several FFA images in various stages: normal, mild NPDR,
moderate NPDR, severe NPDR and PDR.
[Table rows: Final (Combined) Method; Mean±std for all 30 normal cases; Mean±std for all 40
abnormal cases.]
Table 2. Performance of our FAZ detection algorithm in terms of specificity and sensitivity for
several FFA images
Table 3. Mean±Std value of overlapping ratio, specificity and sensitivity of our FAZ detection
method in each group (normal, Mild/Moderate NPDR, and Severe NPDR + PDR).
[Table rows: number of patients; mean overlapping ratio ± std; mean sensitivity ± std; mean
specificity ± std.]
Table 4. Overlapping ratio between results of morphological-based and curvelet-based methods,
and inter- and intra-observer overlapping ratio for FFA images in this study.
[Table rows: overlapping ratio between results of the two methods (morphological and curvelet);
Mean±std for all 30 normal cases; Mean±std for all 40 abnormal cases.]