A New Combined Method Based on Curvelet
Transform and Morphological Operators for
Automatic Detection of Foveal Avascular
Zone
Shirin Hajeb Mohammad Alipour1, Hossein Rabbani1*, Mohammadreza
Akhlaghi2
1 Biomedical Engineering Dept., Medical Image & Signal Processing Research
Center, Isfahan Univ. of Medical Sciences, Isfahan, Iran
2 Ophthalmology Dept., School of Medicine, Isfahan Univ. of Medical Sciences,
Isfahan, Iran
*Corresponding author:
Hossein Rabbani
email: h_rabbani@med.mui.ac.ir,
tel: +98-311-792-2474
fax: +98-311-792-2362
Abstract
In order to achieve early detection of Diabetic Retinopathy (DR) and thereby prevent
blindness, regular screening using retinal photography is necessary. Abnormalities of DR are not
uniformly distributed over the retina; certain types of abnormalities usually occur in specific
areas of the retina. The distance between lesions, such as Micro-Aneurysms (MAs), and the
Foveal Avascular Zone (FAZ) is a useful feature for later analysis and grading of DR. In this paper
a new fully automatic system is presented to find the location of the FAZ in fundus fluorescein
angiogram photographs. The method is based on two procedures: the Digital Curvelet Transform
(DCUT) and morphological operations. First, the end points of vessels are detected based on vessel
segmentation using DCUT; by connecting these points in the selected region of interest (ROI),
the FAZ region is extracted. Second, the vessels are subtracted from the retinal image, and
morphological dilation and erosion are applied to the resulting image; by choosing an
appropriate threshold, the FAZ region is detected. The final FAZ region is extracted by performing
a logical AND between the two segmented FAZs. Our experiments show that the system achieves,
respectively, a specificity and sensitivity of >98% and >96% for the normal stage, >98% and >95%
for Mild/Moderate Non-Proliferative DR (NPDR), and >97% and >93% for Severe NPDR + PDR.
Keywords:
Diabetic retinopathy (DR), foveal avascular zone (FAZ), fundus fluorescein
angiogram, digital curvelet transform (DCUT), morphological operations
1. Introduction
Diabetic retinopathy (DR) is a complication of diabetes that changes the blood
vessels of the retina and distorts patient vision. In order to achieve early detection
of DR, and thereby prevent blindness and vision loss in advanced stages,
regular screening based on retinal photography is necessary [1-2]. In recent
years the main stages of screening have been performed using automatic methods for retinal
image analysis [1-15]. Detection of the macula and fovea location in retinal
images is an important task for automatic detection of retinal disease in
photographs of the retina. The fovea is responsible for sharp central vision and is
located in the center of a dark, vessel-free area within the macula known as the
Foveal Avascular Zone (FAZ) (see Fig. 1). Because of its importance in vision,
the distance between lesions and the fovea (or the center of the fovea, known as the "fovea
centralis") is an important landmark for grading the severity of DR [16]. On this
basis, detection of the FAZ is an important task in the processing of retinal photographs
[17-18].
L. Kovacs et al. [19] combined different types of Optic Disc (OD) and macula
detectors represented by a weighted complete graph. The worst vertices of the
graph were removed by a node-pruning procedure, and finally weighted averages
were applied to obtain the best possible detector outputs. Tobin et al. [20] reported a
method based on digital red-free fundus photography for detecting the macula:
first, a geometric model of the vasculature was extracted, and then the macula was
localized based on the optic nerve location. S. Sekhar et al. [21] reported a method
based on applying a threshold iteratively within a Region of Interest (ROI) and
then performing the morphological opening operator to identify the macula. The
method presented in [22] defined a ROI using the OD height and then extracted
the macula by finding the lowest pixel intensities. M. Niemeijer et al. [16] utilized a k-
NN regressor to predict the distance of pixels in the image to the fovea based on a
set of features measured at each location. The method combined features measured
directly in the image with features derived from a segmentation of the vascular
arch. A predefined ROI was scanned for the fovea, and the pixel with the lowest
predicted distance to the fovea in the ROI was selected as the fovea location. J.
Gutiérrez et al. [23] worked on fluorescein angiogram images. They characterized
the boundary of the foveal zone using B-snakes and a greedy algorithm to
minimize an appropriate energy. Zana et al. [24] presented a region-merging
algorithm based on watershed cell decomposition and morphological operations
for macula localization.
Note that most of the methods reported so far (e.g., [16], [19], [22-23]) only
detect the location of the fovea or an approximation of the macula (a circular region) [20,
25], not the FAZ. The first attempts to evaluate the FAZ go back a quarter of a century
[26-28], when Phillips et al. [28] proposed a semi-quantitative method,
dependent on evaluation by a trained observer, for quantifying macular oedema. In
this method, after manually defining a ROI (a square centered on the fovea), pixels
whose gradients were below a threshold were identified as corresponding to
leakage. Later, in 2005, after manual detection/delineation of a ROI for FAZ
detection, ImageJ was used to extract the FAZ perimeter and surface area [29].
In another work, Conrath et al. [30] proposed a semi-automated method based on
a region-growing function that requires manual definition of a square window in the
center of the FAZ. Clearly, these methods are not automatic, and some regions
(e.g., the ROI or the FAZ center) must be defined by the user.
A more recent method is a level-set-based segmentation approach for FAZ
extraction [31] that yields promising results, but it requires an initializing
contour to be manually placed inside the FAZ.
In this paper we detect the FAZ in Fundus Fluorescein Angiography (FFA)
retinal images. FFA enables the study of blood circulation in the retina in normal and
abnormal states. To photograph the retina, sodium fluorescein is injected
intravenously and flows through all capillaries of the retina. Only where a vessel is
damaged can fluorescein leak out of the retinal vessels into the retina. Under this
condition, Micro-Aneurysms (MAs) are visible as small white dots
(Fig. 2). In FFA images, small blood vessels and MAs are more distinguishable
than in color fundus images because of their intensity. Although FAZ detection
methods based on FFA images have some disadvantages, such as being time-
consuming and invasive, they remain the gold standard for detecting the FAZ;
methods that noninvasively visualize the FAZ usually rely on techniques based on
entoptic phenomena or adaptive optics imaging, which remain experimental or
less popular [32-35]. Other methods, based on genetic snakes [36], a Bayesian
statistical technique [37] and a Markov random field method [17], showed
promising results in automated FAZ segmentation, but the reliability and
reproducibility of these techniques have not been investigated. In some other
methods, such as the region-growing-based method [30] and the thresholding
method [38], the results are not as good due to the weakness of these algorithms
in handling noise and variations in image intensity.
Note that FAZ detection can be considered a segmentation problem, and from
this point of view the techniques can be categorized into various groups. The first
group comprises classifier-based methods, which use supervised learning such as
an artificial neural network, a k-NN classifier [16] or a Bayesian classifier [19]. The
main issue with these methods is that building a large database and a training
step are necessary for final segmentation. Another group of methods uses
properties of the FAZ, such as its low pixel intensity and oval shape [39], or its
location inside the vascular arch [40-41], or matches a template to locate the
required place [42-44]. These methods usually need several prerequisites for
successful final segmentation; e.g., in vascular-arch-based methods the vascular
arch must be visible.
The method proposed in this paper is based on the main anatomical definition of the
FAZ, i.e., a vessel-free region around the fovea. Based on this, we detect the
vessels and find the FAZ by connecting the vessel end points in a predefined ROI.
Specifically, in this work we present FAZ detection algorithms based on the Digital
Curvelet Transform (DCUT) and morphological operators. For this purpose, DCUT
is applied to gray-scale FFA images in order to detect the OD and vessels [45]. The
ROI is defined relative to the OD location. As discussed above, the FAZ is a dark,
vessel-free region, so in the next step we scan the predefined ROI for the end
points of vessels, and finally the FAZ is extracted by connecting the selected end
points. Then a morphological-based method is employed to improve the FAZ
detection procedure. This method, which benefits from the advantages of both the
theoretical and the empirical definitions of the FAZ (through a two-pronged
approach), is fully automatic and successfully handles noise and variations in
image intensity and quality.
The paper is organized as follows. In Section 2 we explain the proposed
method for FAZ detection: the preprocessing step, the DCUT-based FAZ
detection method, the morphological-based method for FAZ detection, and the
final FAZ detection technique. Section 3 is dedicated to the results of our FAZ
detection method applied to 30 FFA images from normal and 40 FFA images
from abnormal subjects. Finally, the paper is concluded in Section 4.
2. Proposed Method
Our algorithm includes a preprocessing step and two main branches for FAZ
detection. One branch employs DCUT for FAZ extraction by connecting the
vessels' end points, and the other is a morphological-based FAZ detection
method. The final result is obtained by combining the outputs of the two branches (see
Fig. 3). The DCUT-based branch of our two-pronged approach would
theoretically produce the desired FAZ by connecting the end points of the vessels. On
the other hand, empirically, specialists are more familiar with the FAZ as a
nearly circular region. On this basis, the morphological-based method, which yields
a smoother and nearly circular FAZ region, is introduced. This method exploits
the fact that the darkest region (obtained by removing the vessels, applying the
closing operator and thresholding) corresponds to the FAZ. By combining both
methods we try to benefit from the advantages of both the theoretical and the
empirical definitions of the FAZ. In addition, combining the two methods helps
us detect failure cases, such as a deviation of the FAZ position with respect to the
OD (even by only several degrees), which results in a wrong ROI for connecting the end
points of vessels in the DCUT-based method. This situation can be detected by
comparing the results of the DCUT-based and morphological-based methods:
if the final segmentation result differs greatly from that of the DCUT-based method
and/or the morphological-based method, the failure is detected.
2.1. Preprocessing
First of all, it is necessary to remove the outer boundary of the images [47], because this
boundary may interfere with the detection of the FAZ. Therefore, we first find the
extreme boundary, and a circular mask is applied to the image (see Fig. 4). Then
we use the Contrast Limited Adaptive Histogram Equalization (CLAHE) algorithm
[46] and illumination equalization to enhance the contrast and reach a uniform
background. A minimal sketch of this step is given below.
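The following sketch shows one way this preprocessing could be implemented in Python with OpenCV; the field-of-view threshold, CLAHE clip limit and blur scale are illustrative assumptions rather than values taken from the paper.

```python
import cv2
import numpy as np

def preprocess_ffa(gray: np.ndarray) -> np.ndarray:
    """Mask out the camera aperture boundary, then enhance contrast."""
    # Threshold the dark surround to estimate the circular field of view.
    _, fov = cv2.threshold(gray, 10, 255, cv2.THRESH_BINARY)
    # Erode slightly so the bright aperture rim does not survive the mask.
    fov = cv2.erode(fov, np.ones((15, 15), np.uint8))
    masked = cv2.bitwise_and(gray, gray, mask=fov)
    # CLAHE for local contrast enhancement (clip limit is an assumption).
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    enhanced = clahe.apply(masked)
    # Simple illumination equalization: subtract a heavily blurred background.
    background = cv2.GaussianBlur(enhanced, (0, 0), sigmaX=35)
    uniform = cv2.addWeighted(enhanced, 1.0, background, -1.0, 128)
    return cv2.bitwise_and(uniform, uniform, mask=fov)
```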
2.2. Curvelet Transform
After preprocessing, the optic disc (OD) and vessels are detected by applying
DCUT [48] and modifying its coefficients based on the sparsity of curvelet
coefficients [49-50]. DCUT is a multi-scale directional transform which allows an
almost sparse representation of objects. One of the main properties of DCUT is its
better directional selectivity (DS) compared to other transforms such as
wavelets; curvelets privilege directional and curved structures. This property
makes DCUT an appropriate tool for the extraction of linear and curved structures
in an image, such as blood vessels in retinal images. The weak DS of wavelets
results in a checkerboard artifact (when the processed image in the wavelet
domain is reconstructed), and the sparsity property of wavelets is weaker than
that of curvelets (i.e., the time-frequency contents corresponding to various
objects and features are more distinguishable in curvelets). In contrast to the
wavelet transform, which is an optimal tool for 1D signal processing, DCUT has
been designed for the direct analysis of 2D signals (i.e., instead of producing 2D
basis functions as a tensor product of 1D wavelets, the 2D basis functions in
DCUT are produced directly from translation, scaling and rotation of a 2D mother
curvelet).
2.3. OD Detection
In [49-50] DCUT is applied to the color retinal image for detecting the OD. In this
paper we use this method on gray-scale FFA images, because the vessels
and MAs are much more distinguishable from the background in FFA images
than in color images. The following steps are needed for detection of the OD:
1) Applying adaptive histogram equalization.
2) Applying illumination equalization.
3) Transforming the enhanced image using the wrapping-based DCUT.
4) Modifying the values of the curvelet coefficients with an exponent of p.
We empirically choose p = 5 (see Section 3 for more explanation).
5) To segment candidate regions, we use the Canny edge detector and some
morphological operators [51]. After applying the Canny edge detector to the
image produced in the previous step, the boundaries of the candidate regions
are extracted. Then we dilate the edges using a flat, disk-shaped structuring
element (with a radius of 1 pixel), use the morphological filling operator (to
fill the holes and remove inappropriate regions) and apply the erosion
operator to obtain the locations of the candidate OD regions.
6) Finally, to determine the accurate OD region we use the information of the
retinal vessels around these points, since we know the OD is partly covered with
vessels. For this purpose a 5×10 window is applied to the bitwise-negated
binary edge map (obtained by simple thresholding of the modified green-
plane image). The center of this window is placed on the center of each
candidate region extracted in the previous step, and the window with the
highest summation is selected as the OD location. This window size is chosen
because of the structure of the vessels around the OD (see the sketch after this list).
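A minimal sketch of steps 5 and 6, assuming the curvelet-modified image from step 4 is available as `enhanced` and the bitwise-negated binary vessel map as `vessel_neg`; OpenCV and SciPy stand in for the morphological toolbox, and the Canny thresholds are assumptions.

```python
import cv2
import numpy as np
from scipy.ndimage import binary_fill_holes

def od_candidates(enhanced: np.ndarray) -> np.ndarray:
    edges = cv2.Canny(enhanced, 50, 150)                          # step 5: edge map
    disk = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3))   # radius-1 disk
    dilated = cv2.dilate(edges, disk)
    filled = binary_fill_holes(dilated > 0)                       # fill candidate regions
    return cv2.erode(filled.astype(np.uint8), disk)               # undo the dilation

def score_candidate(vessel_neg: np.ndarray, center: tuple) -> float:
    """Step 6: sum a 5x10 window of the negated binary vessel map at `center`."""
    r, c = center
    return float(vessel_neg[r - 2:r + 3, c - 5:c + 5].sum())
```

The candidate whose window score is highest is then taken as the OD location.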
2.4. Vessel Detection
For detecting vessels, the following steps are proposed:
1) Inverting the gray levels of the FFA image.
2) Since DCUT is a multi-scale transform, we apply it for edge enhancement
to the image resulting from step 1. In order to amplify vessels (as a
preprocessing step), the following function (a modification of the function
proposed in [49] for color images) is used:

y_c(x) = 1,                                              x < cσ
y_c(x) = ((x − cσ)/(cσ)) · (m/(cσ))^p + (2cσ − x)/(cσ),  cσ ≤ x < 2cσ
y_c(x) = (m/x)^p,                                        2cσ ≤ x < m
y_c(x) = (m/x)^s,                                        x ≥ m          (1)

σ: noise standard deviation, calculated from the original data.
p: degree of nonlinearity, which for our FFA database is 0.01.
s: dynamic range compression, which is 0 in this work.
c: normalization parameter, which is 1 for our FFA database.
m: value under which coefficients are amplified; it can be derived from
the maximum curvelet coefficient (M). In this paper 0.9M is used.
3) Taking the DCUT of the match-filtered response of the enhanced retinal image.
4) Removing the low-frequency component and amplifying all other
coefficients. The main reason for this step is to amplify the edges
(including vessels) and attenuate other components in the image.
5) Applying the inverse DCUT.
6) Thresholding using the mean of the pixel values of the image. After the
modification of the curvelet coefficients, the vessels are clearly
distinguishable from other objects in the image, so using the mean of the
pixel values of the reconstructed image as the threshold yields acceptable
vessel segmentation results. Note that an optimum threshold could be
obtained for each image; however, since we want a fully automatic method
we do not tune it per image, and because the length filtering removes
misclassified pixels, this threshold leads to the desired results.
7) Applying length filtering to remove misclassified pixels (see the sketch
after this list).
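The sketch below illustrates two pieces of this pipeline: the coefficient gain of Eq. (1), written directly from the parameter definitions above, and the mean-value thresholding with length filtering of steps 6-7, where skimage's `remove_small_objects` plays the role of the length filter and the minimum segment length is an assumption.

```python
import numpy as np
from skimage.morphology import remove_small_objects

def y_c(x, sigma, p=0.01, s=0.0, c=1.0, coeff_max=1.0):
    """Gain applied to a coefficient magnitude x, per Eq. (1); m = 0.9 * max."""
    m = 0.9 * coeff_max
    t = c * sigma
    if x < t:
        return 1.0                                         # leave weak coefficients alone
    if x < 2 * t:
        return (x - t) / t * (m / t) ** p + (2 * t - x) / t  # smooth transition band
    if x < m:
        return (m / x) ** p                                # amplify mid-range coefficients
    return (m / x) ** s                                    # s = 0: no compression of large ones

def segment_vessels(reconstructed, min_length=50):
    """Steps 6-7: mean-value threshold, then drop short (misclassified) segments."""
    binary = reconstructed > reconstructed.mean()
    return remove_small_objects(binary, min_size=min_length)
```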
2.5. FAZ Detection Using Curvelet Transform
After extracting the OD and vessels, in order to facilitate scanning the retinal image
for the FAZ, we limit the search area to be as small as possible. The macula lies at
a radius of approximately 2.5 disc diameters from the center of the OD, at almost
the same horizontal level as the OD. We use this relative position of the fovea with
respect to the OD to automatically define a search area and scan only within this
ROI for the FAZ. The next step is to detect and select the retinal blood vessel end
points inside the ROI to determine and calculate the FAZ area. Our selection is
based on the minimum distance between the detected points and the center of the
FAZ (i.e., points far from the FAZ are rejected). First, the center of these points is
found: since the FAZ is the darkest region in FFA images, we search the ROI for
the lowest intensity value, and the coordinate of this dark pixel is taken as the
center of the FAZ. Then each point's distance from this center is calculated. After
calculating the average of all resulting distances, a point is selected as a final end
point if its distance is less than the average distance [52]. Finally, these selected
end points are connected to form the FAZ area, as sketched below.
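A sketch of this end-point selection rule, assuming `roi` is the gray-scale ROI and `endpoints` an (N, 2) array of detected vessel end-point coordinates inside it:

```python
import numpy as np

def select_endpoints(roi: np.ndarray, endpoints: np.ndarray) -> np.ndarray:
    # Darkest ROI pixel is taken as the provisional FAZ center.
    center = np.unravel_index(np.argmin(roi), roi.shape)
    # Keep only end points closer to the center than the mean distance.
    dists = np.linalg.norm(endpoints - np.array(center), axis=1)
    return endpoints[dists < dists.mean()]
```

The retained points are then connected (e.g., as a closed polygon) to delineate the FAZ.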
2.6. Morphological-based Method for FAZ Detection
In the second method, the vessels detected in the previous step by DCUT are first
subtracted from the original image, and then the closing operator (dilation
followed by erosion) is performed on the resulting image. For this purpose a disk
structuring element with a radius of 20 pixels is employed to eliminate the dark
areas with small diameter and produce a more circular FAZ area (specialists are
usually more familiar with the FAZ as a nearly circular region). In fact, by
removing the vessels and then applying the closing operator, the following results
are achieved: 1) the darkest region in the produced image represents the FAZ, and
by choosing an appropriate threshold (e.g., the mean of the pixel values of the
image) this area is detected; 2) the detected FAZ area is a smoother and nearly
circular region. A sketch of this branch follows.
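A sketch of the morphological branch; the paper only states that the vessels are subtracted from the image, so the inpainting used here to remove them is an assumption, as is the exact structuring-element encoding of the radius-20 disk.

```python
import cv2
import numpy as np

def faz_morphological(gray: np.ndarray, vessel_mask: np.ndarray) -> np.ndarray:
    # Replace vessel pixels so they do not count as dark areas (assumption:
    # inpainting stands in for the paper's vessel subtraction).
    no_vessels = cv2.inpaint(gray, vessel_mask.astype(np.uint8), 5, cv2.INPAINT_TELEA)
    # Grayscale closing with a ~radius-20 disk removes small dark regions.
    disk = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (41, 41))
    closed = cv2.morphologyEx(no_vessels, cv2.MORPH_CLOSE, disk)
    # Threshold at the mean intensity: the surviving dark region is the FAZ.
    return closed < closed.mean()
```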
2.7. Final Method for FAZ Extraction
In order to have a more accurate FAZ detection procedure, the logical AND
operator is applied to the two regions extracted by the DCUT-based and
morphological-based methods. The block diagram of our final algorithm is
presented in Fig. 5. As we will see in Section 3, the combination of both
algorithms increases the accuracy of the delineation (according to Table 1).
The morphology-based method relies on the darkest area, while the DCUT-based
method relies on the vessels' information. As explained, by removing the vessels
and then applying the closing operator in the morphological-based method, the
darkest region corresponds to the FAZ, and by choosing an appropriate threshold a
smoother and nearly circular region is detected (specialists are usually more
familiar with the FAZ as a nearly circular region). On the other hand, the DCUT-
based method is based on vessel information, and connecting the end points of the
vessels produces the desired FAZ (according to the theoretical definition of the
FAZ). By combining both methods we try to benefit from the advantages of each.
In addition, combining the two methods helps us detect failure cases. For
example, the FAZ position with respect to the OD may deviate by several degrees;
since our DCUT-based algorithm connects the end points of vessels within a ROI
placed relative to the OD, a deviation of more than several degrees leads to a
wrong ROI. This situation can be detected by comparing the results of the DCUT-
based and morphological-based methods. If the final segmentation result differs
greatly from that of the DCUT-based method and/or the morphological-based
method (e.g., the overlapping area of the produced FAZs can be compared to the
area of the FAZ extracted by the DCUT-based method), we can conclude that the
results are not acceptable and that orientation could be one of the main causes of
this failure (and so we must select a new ROI). A sketch of the combination and
this failure check follows.
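A minimal sketch of the combination and the failure check; the 0.5 agreement threshold below is an illustrative assumption, since the paper does not specify a numeric criterion.

```python
import numpy as np

def combine_faz(faz_dcut: np.ndarray, faz_morph: np.ndarray):
    # Final FAZ: pixels that both branches agree on.
    final = np.logical_and(faz_dcut, faz_morph)
    # Failure check: how much of the curvelet-based FAZ survives the AND?
    agreement = final.sum() / max(faz_dcut.sum(), 1)
    needs_new_roi = agreement < 0.5   # assumed threshold for a suspect ROI
    return final, needs_new_roi
```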
3. Results
3.1. Material
As explained in Section 1, FFA is a more informative modality for FAZ detection
due to its ability to emphasize vessels and to distinguish between vessels and
MAs. In this work we use 8-bit FFA images of size 576×720 pixels from the
angiography unit of Isfahan Feiz hospital. These images are available at [53].
We collected images of 70 patients at different DR stages. Fig. 2 shows examples
of normal and abnormal subjects from this database.
3.2. Visual Results of Algorithm for a Sample FFA
Figures 6-14 show the results of the main steps of our FAZ detection approach.
Figure 6 shows the output of each of the two dark blocks in Figure 5, and the
output of the red block is illustrated in Figure 7; Figures 8-14 then go into the
details of our algorithm. Fig. 8 shows the results of the proposed DCUT-based
OD detection algorithm described in Section 2.3. Fig. 9 illustrates the extracted
vessels using the vessel detection method proposed in Section 2.4. Note that the
presence of MAs does not influence the vessel detection procedure: the length
filtering in the final stage of our vessel detection algorithm removes the MAs.
Figure 10 shows the extracted ROI for searching for the FAZ, and Figure 11
illustrates the darkest pixel in the ROI (taken as the center of the FAZ). As
explained in Section 2.5, this area is extracted automatically according to the
anatomical position of the macula with respect to the OD. Limiting the search
area reduces the complexity of the scanning process; however, the main point of
scanning only within this ROI for FAZ detection is increased accuracy. If we
searched the whole image, many points might be detected as vessel end points,
leading to unacceptable FAZ extraction results. Fig. 12 shows the final results of
the DCUT-based FAZ detection. In this figure, unnecessary end points in the ROI
are removed, and the FAZ is extracted by connecting only the end points whose
distance from the center of the FAZ is less than the mean distance. The FAZ
detection results for the same FFA image using the morphological-based method,
and the final results using the combination of both branches (the DCUT-based
and morphological-based methods), are shown in Figures 13 and 14, respectively.
3.3. Parameters Selection
Several parameters are used in the proposed method. As explained above, most of
them are obtained automatically, or we have justified why the suggested value is
appropriate for all FFA images. For example, the proposed threshold for the
morphological-based method and for vessel detection is set to the mean value of
the pixels' intensity (note that for each image an optimum threshold could be
obtained in a non-automatic manner by trial and error).
One of the most important parameters used for OD detection is the exponent value
p. When we amplify the curvelet coefficients and display the reconstructed image
with a limited number of bits, the large coefficients tend toward the largest pixel
value (e.g., ±255 for an 8-bit gray-level image) and the near-zero coefficients
(those with magnitude less than one) tend to zero very quickly. Since the curvelet
transform yields a sparse representation, only a few large coefficients correspond
to bright regions, and so the main structure of the OD appears after applying the
exponential operator and the inverse curvelet transform.
Note that in contrast to the image intensity level, which is a positive integer (e.g.,
between 0 and 255 for an 8-bit gray-level image), the corresponding curvelet
coefficient is a signed real number. Therefore, we can only use odd exponents
(because some curvelet coefficients are negative). p = 3 is not enough to produce
the whole desired OD area, and applying p ≥ 7 causes other inessential bright
objects to appear in the reconstructed image. The small numeric example below
illustrates this behavior.
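A small numeric illustration of this point: raising signed coefficients to an odd power preserves their signs while driving sub-unity magnitudes toward zero and amplifying the few large ones.

```python
import numpy as np

coeffs = np.array([-0.2, -1.5, 0.3, 4.0])   # toy signed curvelet coefficients
p = 5                                       # odd exponent: sign is preserved
modified = coeffs ** p
print(modified)   # [-3.2e-04 -7.59375e+00  2.43e-03  1.024e+03]
```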
3.4. Evaluation of Algorithm and Discussion
The final proposed method has been evaluated on 70 FFA images. The FAZ area
is analyzed for 30 normal cases and 40 abnormal cases, including mild non-
proliferative DR (NPDR), moderate NPDR, severe NPDR and proliferative DR
(PDR). These results are then compared with the results reported by an
ophthalmologist as our gold standard.
To evaluate our system, we defined an overlapping ratio for measuring the
performance of the algorithm. This metric is defined as the ratio of the
intersection of our result (A) and the gold standard (B) over their union:

overlap(A, B) = (|A ∩ B| / |A ∪ B|) × 100    (2)
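A direct implementation of Eq. (2) for two binary masks is straightforward:

```python
import numpy as np

def overlap_ratio(A: np.ndarray, B: np.ndarray) -> float:
    """Eq. (2): intersection over union of two boolean masks, scaled to 0-100."""
    inter = np.logical_and(A, B).sum()
    union = np.logical_or(A, B).sum()
    return 100.0 * inter / union if union else 0.0
```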
The ground-truth images (B) were drawn based on landmark delineation
(medical experts provided their opinion to define the correct FAZ region using a
GUI prepared in MATLAB). The value of this ratio is between 0 and 100, with 0
indicating no overlap at all and 100 indicating perfect agreement between the
algorithm's result and the gold standard. The quantitative results obtained from
the proposed method are shown in Table 1. The failure cases are due to the partial
and poor appearance of vessels in the image. The experimental results show that
the proposed algorithm can give a sufficiently accurate location of the FAZ in
both normal and abnormal cases. The inter- and intra-observer variability of this
procedure for all ground-truth images, as well as the overlap between the regions
obtained from each of the two methods (curvelet-based and morphological-based),
can be seen in the Appendix for comparison with the quantitative overlap index
reported in Table 1 and for checking the absence of outlier cases.
The best overlapping ratio reported for the method proposed in [31] is 0.79. Note
that in that method an initializing contour must be manually placed inside the FAZ,
while our method is fully automatic. In addition, the quality of the FFA images in
[31] is much better than that of our database, which suggests that our algorithm may
be more robust to lower image quality. Similarly, this ratio for the methods proposed
in [17] and [35] was reported as 0.78 and 0.77 respectively, which shows the
superiority of our method. Please note that one probable reason for the good results
achieved with the proposed method comes from its design, which is very specific to
the studied images (which may also explain why more generic segmentation
methods have lower performance).
In order to obtain the specificity and sensitivity of our method we need to define
TP, FP, FN and TN. TP is the area common to the region extracted by the
algorithm and the region detected by the ophthalmologist. FP is the area that does
not belong to the FAZ but which our algorithm detects as FAZ. TN is the area
which does not belong to the FAZ and which our algorithm also does not detect as
part of the FAZ. FN is the area that belongs to the FAZ but which our algorithm is
not able to detect as part of the FAZ. Table 2 shows the specificity and sensitivity
of our method for the normal and abnormal stages. Each line in this table (and in
Table 1) corresponds to a single patient. In order to group the results for each
sub-population of DR, the average ± std of the overlapping ratio, specificity and
sensitivity (and the number of patients) in each group (normal, Mild/Moderate
NPDR, and Severe NPDR + PDR) are shown in Table 3. (These values for each
patient in each group can be seen in the Appendix.) A sketch of the pixel-wise
computation follows.
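A pixel-wise sketch of these definitions, with `A` the algorithm's binary FAZ mask and `B` the ophthalmologist's:

```python
import numpy as np

def sens_spec(A: np.ndarray, B: np.ndarray):
    """Sensitivity and specificity (in %) from pixel-wise TP/FP/TN/FN areas."""
    tp = np.logical_and(A, B).sum()     # detected and truly FAZ
    fp = np.logical_and(A, ~B).sum()    # detected but not FAZ
    tn = np.logical_and(~A, ~B).sum()   # correctly rejected
    fn = np.logical_and(~A, B).sum()    # missed FAZ area
    sensitivity = 100.0 * tp / (tp + fn)
    specificity = 100.0 * tn / (tn + fp)
    return sensitivity, specificity
```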
According to the main properties of the curvelet transform (e.g., edge preservation
and rotation invariance), the vessel structure does not impact the detection
process. We tested our algorithm on our database and, similar to the results
reported in [50] for color fundus images from the DRIVE dataset, the
segmentation is relatively insensitive to the vessel structure and to the wide
variations in intensity that are inherent in these images. However, in some cases,
due to vessel damage and the enlargement of the FAZ region, our defined ROI is
not large enough to surround the end points of the damaged vessels. To solve this
problem we must increase the ROI area in these cases (Fig. 15), as explained in
the first paragraph of Section 2.
4. Conclusions
In this study, FAZ detection in retinal images based on the curvelet transform and
morphological operators has been presented. The curvelet transform is used for
detecting the ROI based on the extracted optic disc, and also for vessel
segmentation. By connecting the end points of vessels in the ROI and using
morphological operators, the final FAZ is extracted. The algorithm shows high
specificity and sensitivity for all stages.
The FAZ region changes in the different stages of DR. By extracting useful
features such as the FAZ area and roundness, and by detecting microaneurysms
[4] and exudates [5], we will be able to automatically detect the different stages
of DR using an appropriate classifier. For example, for our database, the FAZ
areas for "Normal Stage", "Mild/Moderate NPDR", and "Severe NPDR/PDR" are
respectively 3498.25, 5773, and 13624. Another measure, which reflects the
roundness of the area, can be obtained by calculating the variance of the distances
between points on the FAZ boundary and the center of the FAZ; these are
respectively 12.93, 27.03, and 110.84 [51].
Appendix
In order to give an estimate of the inter-observer and intra-observer variability of
the proposed FAZ detection procedure, the overlap between segmentations for all
ground-truth images in this study is shown in Table 4, which can be compared
directly with the quantitative overlap index reported in Table 1 to check for the
absence of outlier cases.
In order to group the results for each sub-population of DR (normal,
Mild/Moderate NPDR, and Severe NPDR + PDR), the overlapping ratio,
specificity and sensitivity of all data in each group can be seen in Figure 16.
References
[1] Tobin KW, Chaum SE, Govindasamy VP, and Karnowski ThP, Detection of Anatomic
Structures in Human Retinal Imagery. IEEE Trans. on Medical Imaging 26: 1729-1739, 2007.
[2] Niemeijer M, Abramoff MD, Segmentation of the Optic Disk, Macula and Vascular Arch in
Fundus Photographs. IEEE Trans. on Medical Imaging 26:116-127, 2007.
[3] Li H, Automated Feature Extraction in Color Retinal Images by a Model Based Approach.
IEEE Trans. on Biomedical Engineering 51: 246-254, 2004.
[4] Walter T, Massin P, Erginay A, Ordonez R, Jeulin C, Klein JC (2007) Automatic Detection of
Microaneurysms in Color Fundus Images. Med Image Analysis 11(6): 555-566.
[5] Walter T, Klein JC, Massin P, Erginay A, A Contribution of Image Processing to the Diagnosis
of Diabetic Retinopathy - Detection of Exudates in Color Fundus Images of the Human Retina.
IEEE Trans. Med. Imaging 21(10): 1236-1243, 2002.
[6] Walter T, Klein JC, Segmentation of Color Fundus Images of the Human Retina: Detection of
the Optic Disc and the Vascular Tree Using Morphological Techniques. ISMDA 2001: 282-287,
2001.
[7] Abràmoff MD, Garvin M, Sonka M, Retinal Imaging and Image Analysis. IEEE Reviews in
Biomedical Engineering, 3: 169-208, 2010.
[8] Esmaeili M, Rabbani H, Dehnavi AM, Automatic optic disk boundary extraction by the use of
curvelet transform and deformable variational level set model. Pattern Recognition 45(7): 2832-
2842, 2012.
[9] Zana F, Klein JC, Segmentation of vessel-like patterns using mathematical morphology and
curvature evaluation. IEEE Trans. on Image Processing 10(7): 1010-1019, 2001.
[10] Zana F, Klein JC, A Multi-Modal Registration Algorithm of Eye Fundus Images Using
Vessels Detection and Hough Transform. IEEE Trans. Med. Imaging 18(5): 419-428, 1999.
[11] Sangyeol Lee, Joseph M. Reinhardt, Philippe C. Cattin, Michael D. Abràmoff, Objective and
expert-independent validation of retinal image registration algorithms by a projective imaging
distortion model, Med Image Analysis 14 (4), 539-549, 2010.
[12] Patton N, Aslam TM, MacGillivray T, Deary IJ, Dhillon B, Eikelboom RH, Yogesan K,
Constable IJ, Retinal image analysis: concepts, applications and potential, Prog Retin Eye Res
25(1):99-127, 2006.
[13] Tsai CL, Madore B, Leotta MJ, Sofka M, Yang G, Majerovics A, Tanenbaum HL, Stewart
CV, Roysam B, Automated retinal image analysis over the internet, IEEE Trans. Inf. Technol.
Biomed. 12(4):480-487, 2008.
[14] Ahmed MI, Amin MA, High speed detection of optical disc in retinal fundus image. Signal,
Image and Video Processing, Dec. 2012.
[15] Nirmala SR, Dandapat S, Bora PK, Wavelet weighted distortion measure for retinal images.
Signal, Image and Video Processing, Jan. 2012.
[16] Niemeijer M, Abramoff MD, Ginneken BV, Fast Detection of the Optic Disc and Fovea in
Color Fundus Photographs. Med Image Analysis 13: 859-870, 2009.
[17] Haddouche A, Adel M, Rasigni M, Conrath J and Bourennane S, Detection of the Foveal
Avascular Zone on Retinal Angiograms Using Markov Random Fields. Digital Signal Processing
20: 149-154, 2010.
[18] Regillo CD, 2007-2008 Basic and Clinical Science Course Section 12: Retina and Vitreous,
American Academy of Ophthalmology, http://one.aao.org/CE/EducationalProducts/BCSC.aspx
Accessed 7 Dec 2011.
[19] Kovacs L, Qureshi RJ, Nagy B, Harangi B and Hajdu A, Graph Based Detection of Optic
Disc and Fovea in Retinal Image. IEEE Int. Workshop on Soft Computing Applications, pp: 143-
148, 2010.
[20] Tobin KW, Detection of Anatomic Structures in Human Retinal Imagery, IEEE Trans. on
Medical Imaging 26: 1729-1739, 2007.
[21] Sekhar S, Al-Nuaimy W and Nandi AK, Automated Localization of Optic Disc and Fovea in
Retinal Fundus Images. Proc. 16th European Signal Processing Conference, 5 pages, Lausanne,
Switzerland, 2008.
[22] Tan NM, Wong DWK, Liu J, Ng WJ, Zhang Z, Lim JH, Tan Z, Tang Y, Li H, Lu S and
Wong TY, Automatic Detection of the Macula in the Retinal Fundus Image by Detecting Regions
with Low Pixel Intensity. IEEE Biomedical and Pharmaceutical Engineering, pp:1-5, 2009.
[23] Gutiérrez J, Epifanio I, de Ves E and Ferri FJ, An Active Contour Model for the Automatic
Detection of the Fovea in Fluorescein Angiographies. IEEE Int. Conf. on Pattern Recognition, pp:
312-315, 2000.
[24] Zana F, Meunier I and Klein JC, A Region Merging Algorithm Using Mathematical
Morphology: Application to Macula Detection. Int. Symp. on Mathematical Morphology and its
Applications to Image and Signal Processing, pp: 423 - 430, 1998.
[25] A. D. Fleming, S. Philip, K. A. Goatman, J. A. Olson, and P.F. Sharp, “Automated
Assessment of Diabetic Retinal Image Quality Based on Clarity and Field Definition”, Invest
Ophthalmol Vis Sci., 47, pp. 1120-1125, 2006.
[26] Goldberg RE, Varma R, Spaeth GL, Magargal LE, Callen D. Quantification of progressive
diabetic macular nonperfusion. Ophthalmic Surg. 20: 42-45, 1989.
[27] Classification of diabetic retinopathy from fluorescein angiograms. ETDRS report number 11.
Early Treatment Diabetic Retinopathy Study Research Group. Ophthalmology 98: 807-822, 1991.
[28] Phillips RP, Spencer T, Ross PG, Sharp PF, Forrester JV. Quantification of diabetic
maculopathy by digital imaging of the fundus. Eye 5: 130-137, 1991.
[29] Conrath J, Giorgi R, Raccah D, Ridings B. Foveal avascular zone in diabetic retinopathy:
quantitative vs qualitative assessment. Eye 19: 322-326, 2004.
[30] Conrath J, Valat O, Giorgi R, et al. Semi-automated detection of the foveal avascular zone in
fluorescein angiograms in diabetes mellitus. Clin Exp Ophthalmol 34: 119-123, 2006.
[31] Zheng Y, Gandhi JS, Stangos AN, Campa C, Broadbent DM, Harding SP. Automated
segmentation of foveal avascular zone in fundus fluorescein angiography. Invest Ophthalmol Vis
Sci. 51: 3653-3659, 2010.
[32] Popovic Z, Knutsson P, Thaung J, Owner-Petersen M, Sjöstrand J. Noninvasive imaging of
human foveal capillary network using dual-conjugate adaptive optics. Invest Ophthalmol Vis Sci.
52: 2649-2655, 2011.
[33] Martin JA, Roorda A. Direct and noninvasive assessment of parafoveal capillary leukocyte
velocity. Ophthalmology 112: 2219-2224, 2005.
[34] Tam J, Martin JA, Roorda A. Noninvasive visualization and analysis of parafoveal capillaries
in humans. Invest Ophthalmol Vis Sci. 51: 1691-1698, 2010.
[35] Shin YU, Kim S, Lee BR, Shin JW, Kim SI, Novel Noninvasive Detection of the Fovea
Avascular Zone Using Confocal Red-free Imaging in Diabetic Retinopathy and Retinal Vein
Occlusion, Invest Ophthalmol Vis Sci.; 53(1):309-315, 2012.
[36] Ballerini L. Genetic snakes for medical images segmentation. Math Modeling Estimation
Techn Comput Vision 3457: 284-295, 1998.
[37] Ibañez MV, Simó A. Bayesian detection of the fovea in eye fundus angiographies. Pattern
Recognition Lett. 20: 229-240, 1999.
[38] Petsatodis T, Diamantis A, Syrcos GP. A Complete Algorithm for Automatic Human
Recognition based on Retina Vascular Network Characteristics, 1st Int. Scientific Conf. e RA,
Tripolis, Greece, pp. 41-46, 2004.
[39] Sinthanayothin C, Boyce JF, Cook HL, and Williamson TH, Automated localization of the
optic disc, fovea, and retinal blood vessels from digital colour fundus images, British Journal of
Ophthalmology, vol. 83, no. 8, pp. 902-910, 1999.
[40] Fleming AD, Goatman KA, Philip S, Olson JA, Sharp PF. Automatic detection of retinal
anatomy to assist diabetic retinopathy screening. Phys Med Biol. 2007. pp. 331-345.
[41] Li H, Chutatape O. Automated feature extraction in color retinal images by a model based
approach. IEEE Trans. on Biomedical Engineering 51(2): 246-254, 2004.
[42] A. Osareh, M. Mirmehdi, B. Thomas, and R. Markham, Comparison of colour spaces for
optic disc localisation in retinal images, in Proc. of the 16th Int. Conf. on Pattern Recognition
(ICPR'02), vol. 1, pp. 743-746, 2002.
[43] M. Lalonde, M. Beaulieu, and L. Gagnon, Fast and robust optic disc detection using
pyramidal decomposition and Hausdorff-based template matching, IEEE Trans. on Medical
Imaging, vol. 20, no. 11, pp. 1193-1200, 2001.
[44] A. Youssif, A. Ghalwash, and A. Ghoneim, Optic disc detection from normalized digital
fundus images by means of a vessels' direction matched filter, IEEE Trans. on Medical Imaging,
vol. 27, no. 1, pp. 11-18, 2008.
[45] Starck J-L, Murtagh F, Candès EJ, and Donoho DL, Gray and Color Image Contrast
Enhancement by the Curvelet Transform. IEEE Trans. on Image processing 12: 706-717, 2003.
[46] Pisano ED, Zong S, Hemminger BM, DeLuca M, Johnston RE, Muller K, Braeuning MP and
Pizer SM, Contrast Limited Adaptive Histogram Equalization Image Processing to Improve the
Detection of Simulated Spiculations in Dense Mammograms. J Digital Imaging 11: 193-200, 1998.
[47] Aibinu AM, Salami MJE, and Shafie AA, Retina Fundus Image Mask Generation Using
Pseudo Parametric Modeling Technique. IIUM Engineering Journal 11: 163-177, 2010.
[48] Candès E, Demanet L, Donoho D, Ying L, Fast Discrete Curvelet Transforms. Multiscale
Model. Simul. 5: 861-899, 2006.
[49] Esmaeili M, Rabbani H, Mehri Dehnavi AR and Dehghani AR, Automatic Optic Disk
Detection by the Use of Curvelet Transform, IEEE Int. Conf. on Information Technology and
Applications in Biomedicine, pp: 1-4.
[50] Esmaeili M, Rabbani H, Mehri Dehnavi AR, Dehghani AR (2009) Extraction of Retinal
Blood Vessels by Curvelet Transform. IEEE Int. Conf. on Image Processing, pp: 3353-3356.
[51] Hajeb SH, Rabbani H, Akhlaghi M, Diabetic Retinopathy Grading by Digital Curvelet
Transform, Computational and Mathematical Methods in Medicine, vol. 2012, Article ID 761901,
11 pages, 2012.
[52] Fadzil MHA, Nugroho H, Izhar LI and Nugroho HA (2010) Analysis of Retinal Fundus
Images for Grading of Diabetic Retinopathy Severity. Medical and Biological Engineering and
Computing 49: 693-700.
[53] http://misp.mui.ac.ir/data/fundus-fluorescent-angiography-images.html
Figure 1. The Foveal Avascular Zone (FAZ) in a sample fluorescein angiogram image.
Figure 2. (a) A normal fluorescein angiography image, (b) MAs in an abnormal fluorescein
angiography image.
Figure 3. The concise block diagram of our two-pronged FAZ detection algorithm: after
preprocessing (Section 2.1), the FFA image passes through curvelet-based FAZ detection by
connecting the end points of vessels (theoretical definition; Sections 2.3-2.5) and morphological-
based FAZ detection by finding the nearly circular darkest region (empirical definition;
Section 2.6), whose outputs are combined by a logical AND (Section 2.7).
Figure 4. (a) The extracted mask image, (b) Boundary of eye.
Figure 5. The detailed block diagram of our FAZ detection algorithm: CLAHE preprocessing;
OD detection via the curvelet transform, coefficient modification, inverse transform, Canny edge
detection and morphological analysis (Section 2.3); vessel extraction via gray-level inversion, the
curvelet transform, coefficient modification and edge enhancement, inverse transform,
thresholding and removal of small edges (Section 2.4); ROI selection and curvelet-based FAZ
extraction by detecting and connecting the selected vessel end points in the ROI (Section 2.5);
morphological FAZ extraction by subtracting the vessels from the original image, morphological
closing and thresholding (Section 2.6); and the final combined FAZ extraction (Section 2.7).
Figure 6. The output of each of the two dark blocks in Figure 5. a) original image, b) OD
detection, c) vessel extraction.
Figure 7. The output of red block in Figure 5. a) original image, b) final extracted FAZ.
Figure 8. Results of OD detection for a sample image. a) Original image, b) Enhanced image by
CLAHE, c) Image after modification of bright objects by DCUT, d) Applying Canny edge
detector, e) Filling of holes, f) Extracting OD location using morphological operators.
Figure 9. Results of vessel detection for a sample image. a) Inverse of the FFA, b) Taking the
DCUT of the match-filtered response of the enhanced retinal image, removing the low-frequency
component, and amplifying all other coefficients, c) Thresholding, d) Applying length filtering.
Figure 10. (a) Produced image after CLAHE, (b) ROI for searching for the FAZ.
Figure 11. Center of FAZ.
Figure 12. Results of curvelet-based FAZ detection for a sample image. a) ROI on the original
image, defined by the relative position of the fovea with respect to the OD, b) Extracted end
points in the ROI, c) Removing unnecessary points, d) Selected end points, e) Connecting the end points.
Figure 13. Results of morphological-based FAZ detection for a sample image. a) Original image,
b) Image without vessels, (c) Closing of (b), d) Extracted FAZ after thresholding.
Figure 14. Results of final FAZ detection method for a sample image. a) The extracted FAZ by
DCUT, b) The extracted FAZ by morphological based method, c) The extracted FAZ by combined
method.
Figure 15. An example image that requires a larger ROI.
Figure 16. The plot of the overlapping ratio, specificity and sensitivity of our FAZ detection
method for each patient in each group (normal, Mild/Moderate NPDR, and Severe NPDR + PDR).
Table 1. Overlapping ratio for several FFA images in various stages: normal, mild NPDR,
moderate NPDR, severe NPDR and PDR

Stage                                Method 1        Method 2       Final (Combined) Method
Normal                               0.7756          0.6545         0.8551
Normal                               0.8212          0.6932         0.8617
Mild NPDR                            0.6770          0.5852         0.7541
Mild NPDR                            0.7811          0.6520         0.7811
Moderate NPDR                        0.7440          0.6360         0.7914
Moderate NPDR                        0.7170          0.6320         0.7736
Severe NPDR                          0.7222          0.7550         0.7869
Severe NPDR                          0.6518          0.6764         0.7590
PDR                                  0.7223          0.6673         0.7811
PDR                                  0.8119          0.70           0.8420
Mean±std for all 30 normal cases     0.7726±0.0478   0.6317±0.052   0.8533±0.0427
Mean±std for all 40 abnormal cases   0.7659±0.0587   0.5015±0.075   0.81±0.0501
Table 2. Performance of our FAZ detection algorithm in terms of specificity and sensitivity (%)
for several FFA images

Stage           Sensitivity   Specificity
Normal          92.29         97.78
Normal          95.08         99.38
Mild NPDR       93.23         97.13
Mild NPDR       100           99.48
Moderate NPDR   100           99.57
Moderate NPDR   87.00         98.72
Severe NPDR     86.26         96.33
Severe NPDR     83.46         96.51
PDR             86.54         97.49
PDR             93.21         98.87
Mean±std        95.26±2.89    98.13±1.09
Table 3. Mean±Std value of the overlapping ratio, specificity and sensitivity of our FAZ detection
method in each group (normal, Mild/Moderate NPDR, and Severe NPDR + PDR)

                               Normal        Low (Mild/Moderate NPDR)   High (Severe NPDR + PDR)
Number of patients             30            25                         15
Mean overlapping ratio ± Std   0.8553±0.04   0.8077±0.1                 0.8280±0.1
Mean sensitivity ± Std         96.70±1.98    95.96±2.8                  93.44±4.48
Mean specificity ± Std         98.15±1.13    98.22±1.1                  97.91±1.09
Table 4. Overlapping ratio between the results of the morphological-based and curvelet-based
methods, and inter- and intra-observer overlapping ratios for the FFA images in this study

Image number   Stage      Intra-observer   Inter-observer   Overlap between the two methods
1              Normal     0.8852           0.8232           0.8341
2              Normal     0.8736           0.8843           0.5480
3              Normal     0.9218           0.8732           0.6731
4              Normal     0.8711           0.8919           0.8125
5              Normal     0.9514           0.9126           0.7214
6              Normal     0.9129           0.8536           0.6911
7              Normal     0.8973           0.8131           0.7019
8              Normal     0.8818           0.7616           0.5127
9              Normal     0.9043           0.8328           0.6333
10             Normal     0.9319           0.9411           0.4816
11             Normal     0.9404           0.8912           0.6704
12             Normal     0.8612           0.8205           0.4654
13             Normal     0.8727           0.8642           0.8208
14             Normal     0.9125           0.8754           0.8129
15             Normal     0.8733           0.9111           0.5761
16             Normal     0.7815           0.7924           0.6612
17             Normal     0.8472           0.7419           0.6477
18             Normal     0.8660           0.8429           0.4528
19             Normal     0.9141           0.8847           0.7058
20             Normal     0.8914           0.8291           0.6144
21             Normal     0.8673           0.8076           0.8100
22             Normal     0.8700           0.7854           0.5947
23             Normal     0.9305           0.9521           0.6515
24             Normal     0.8827           0.8237           0.7611
25             Normal     0.9018           0.8190           0.6143
26             Normal     0.7549           0.7835           0.8231
27             Normal     0.8992           0.8853           0.7351
28             Normal     0.8523           0.8837           0.5492
29             Normal     0.8312           0.7482           0.5261
30             Normal     0.8437           0.8104           0.6027
Mean±std for all 30 normal cases    0.8808±0.0427   0.8447±0.0543   0.6568±0.1134
31             Abnormal   0.8591           0.7629           0.6433
32             Abnormal   0.8140           0.8361           0.5481
33             Abnormal   0.7612           0.7747           0.3211
34             Abnormal   0.8521           0.8693           0.4127
35             Abnormal   0.7314           0.7835           0.7062
36             Abnormal   0.8271           0.7242           0.2769
37             Abnormal   0.8219           0.8430           0.5681
38             Abnormal   0.8461           0.7504           0.4983
39             Abnormal   0.7811           0.8419           0.2849
40             Abnormal   0.6897           0.7373           0.4057
41             Abnormal   0.8153           0.7542           0.6455
42             Abnormal   0.8649           0.8222           0.6819
43             Abnormal   0.9111           0.8846           0.4345
44             Abnormal   0.8740           0.9239           0.3769
45             Abnormal   0.7925           0.8312           0.6772
46             Abnormal   0.7851           0.7294           0.7391
47             Abnormal   0.8155           0.6821           0.5128
48             Abnormal   0.8372           0.8550           0.3405
49             Abnormal   0.8397           0.7594           0.7673
50             Abnormal   0.7417           0.7940           0.6061
51             Abnormal   0.8537           0.8231           0.5892
52             Abnormal   0.8622           0.8373           0.6905
53             Abnormal   0.7345           0.7103           0.5642
54             Abnormal   0.8429           0.8175           0.2872
55             Abnormal   0.8437           0.8842           0.3914
56             Abnormal   0.7771           0.8236           0.2287
57             Abnormal   0.6909           0.7445           0.4243
58             Abnormal   0.8193           0.8427           0.3846
59             Abnormal   0.8247           0.7993           0.5821
60             Abnormal   0.8299           0.8046           0.6473
61             Abnormal   0.8473           0.7835           0.3811
62             Abnormal   0.8461           0.8529           0.4739
63             Abnormal   0.9101           0.9371           0.7123
64             Abnormal   0.7934           0.7016           0.4372
65             Abnormal   0.8011           0.8348           0.3506
66             Abnormal   0.8113           0.7620           0.4975
67             Abnormal   0.8924           0.8448           0.3819
68             Abnormal   0.8109           0.8362           0.4274
69             Abnormal   0.7890           0.7347           0.2841
70             Abnormal   0.8249           0.7729           0.3458
Mean±std for all 40 abnormal cases   0.8167±0.0511   0.8027±0.0597   0.4882±0.1503
... Although a large body of literature is available regarding different image processing techniques for automatic delineation of FAZ area in various retinal imaging modalities [17][18][19][20][21][22][23][24][25][26] , studies focusing on FAZ segmentation in OCTA were usually conducted on healthy subjects (Supplementary Table S1). In addition, few studies assessing the accuracy of FAZ delineation in OCTA images of diabetic eye have failed to exhibit a high correlation (Intersection over Union: 0.70 21 and 0.82 20 ), due to high incidence of signal noise and artifacts in OCTA imaging of diabetic patients 16 . ...
Article
Full-text available
The purpose of this study was to introduce a new deep learning (DL) model for segmentation of the fovea avascular zone (FAZ) in en face optical coherence tomography angiography (OCTA) and compare the results with those of the device’s built-in software and manual measurements in healthy subjects and diabetic patients. In this retrospective study, FAZ borders were delineated in the inner retinal slab of 3 × 3 enface OCTA images of 131 eyes of 88 diabetic patients and 32 eyes of 18 healthy subjects. To train a deep convolutional neural network (CNN) model, 126 enface OCTA images (104 eyes with diabetic retinopathy and 22 normal eyes) were used as training/validation dataset. Then, the accuracy of the model was evaluated using a dataset consisting of OCTA images of 10 normal eyes and 27 eyes with diabetic retinopathy. The CNN model was based on Detectron2, an open-source modular object detection library. In addition, automated FAZ measurements were conducted using the device’s built-in commercial software, and manual FAZ delineation was performed using ImageJ software. Bland–Altman analysis was used to show 95% limit of agreement (95% LoA) between different methods. The mean dice similarity coefficient of the DL model was 0.94 ± 0.04 in the testing dataset. There was excellent agreement between automated, DL model and manual measurements of FAZ in healthy subjects (95% LoA of − 0.005 to 0.026 mm ² between automated and manual measurement and 0.000 to 0.009 mm ² between DL and manual FAZ area). In diabetic eyes, the agreement between DL and manual measurements was excellent (95% LoA of − 0.063 to 0.095), however, there was a poor agreement between the automated and manual method (95% LoA of − 0.186 to 0.331). The presence of diabetic macular edema and intraretinal cysts at the fovea were associated with erroneous FAZ measurements by the device’s built-in software. In conclusion, the DL model showed an excellent accuracy in detection of FAZ border in enfaces OCTA images of both diabetic patients and healthy subjects. The DL and manual measurements outperformed the automated measurements of the built-in software.
... In Welfer's method [10], fovea detection is based on the optic disc information and the region of interest is identified by finding the darkest candidate in the region of interest (ROI) by performing morphological operations on the green channel of the fundus image. In [11], fovea avascular zone is detected by combining discrete curvelet transform and morphological operation for localization of fovea. In this technique, CLAHE is employed as a preprocessing technique for contrast enhancement. ...
Article
Full-text available
Accurate diagnosis of various retinal diseases requires high quality fundus images and exact fovea centre for pathological analysis. In this paper, a suitable preprocessing technique to enhance the fundus images and an accurate method for fovea centre detection are proposed. Luminosity component is enhanced by combining gamma correction, discrete shearlet transform and singular value decomposition. Local contrast is improved by applying CLAHE and a suitable weighting function is applied to alleviate over-enhancement. Region of interest for fovea localization is determined based on the optic disc position using the luminosity channel of the enhanced fundus image. This method is also suitable for images with abnormal structures around macula as the actual macula is identified from the multiple macula candidates based on optic disc position as well as the segmented blood vessels. Using appropriate color channels, thresholding and morphological operations, the macula is binary segmented and the fovea centre is marked. The proposed enhancement technique yields better results based on visual assessment as well as various quantitative parameters. The proposed method achieves the success rate of 99.4%, 100%, 98.9%, 99.2% and 100% for the proprietary, DRIVE, MESSIDOR, DIARETDB0 and DIARETDB1 databases, respectively.
... The dataset [37] used in this research comprises 70 FFA images of diabetic patients captured for a study in Isfahan University of Medical Sciences. These images are of dimension 576 × 720 with 8-bit depth. ...
... For example, there exist methods for the FAZ segmentation in retinography images [12], [13]. In other cases, the FAZ identification method is also used to obtain the DR degree [14], [15]. There are also other studies where the FAZ is segmented in images obtained by fluorescein angiography [16], an invasive image modality that allows the experts a better visualization of the retinal vessels and FAZ than with traditional retinography. ...
Article
Full-text available
The Foveal Avascular Zone (FAZ) is a capillary-free area that is placed inside the macula and its morphology and size represent important biomarkers to detect different ocular pathologies such as diabetic retinopathy, impaired vision or retinal vein occlusion. Therefore, an adequate and precise segmentation of the FAZ presents a high clinical interest. About to this, Angiography by Optical Coherence Tomography (OCT-A) is a non-invasive imaging technique that allows the expert to visualize the vascular and avascular foveal zone. In this work, we present a robust methodology composed of three stages to model, localize, and segment the FAZ in OCT-A images. The first stage is addressed to generate two FAZ normality models: superficial and deep plexus. The second one uses the FAZ model as a template to localize the FAZ center. Finally, in the third stage, an adaptive binarization is proposed to segment the entire FAZ region. A method based on this methodology was implemented and validated in two OCT-A image subsets, presenting the second subset more challenging pathological conditions than the first. We obtained localization success rates of 100% and 96% in the first and second subsets, respectively, considering a success if the obtained FAZ center is inside the FAZ area segmented by an expert clinician. Complementary, the Dice score and other indexes (Jaccard index and Hausdorff distance) are used to measure the segmentation quality, obtaining competitive average values in the first subset: 0.84 ± 0.01 (expert 1) and 0.85 ± 0.01 (expert 2). The average Dice score obtained in the second subset was also acceptable (0.70 ± 0.17), even though the segmentation process is more complex in this case.
... To remove over amplification the bins above a certain clip limit are redistributed in the histogram bin resulting in fig.3c. Then for smoothing the image morphological filter are applied [6]. Sequential filtering is done using opening and closing operation alternately on the image along with elliptical kernel function of increasing size. ...
Article
Full-text available
Diabetic Retinopathy (DR) is a leading cause of vision loss in the world. In the past few years, artificial intelligence (AI) based approaches have been used to detect and grade DR. Early detection enables appropriate treatment and thus prevents vision loss. For this purpose, both fun-dus and optical coherence tomography (OCT) images are used to image the retina. Next, Deep-learning (DL)-/machine-learning (ML)-based approaches make it possible to extract features from the images and to detect the presence of DR, grade its severity and segment associated lesions. This review covers the literature dealing with AI approaches to DR such as ML and DL in classification and segmentation that have been published in the open literature within six years (2016-2021). In addition, a comprehensive list of available DR datasets is reported. This list was constructed using both the PICO (P-Patient, I-Intervention, C-Control, O-Outcome) and Preferred Reporting Items for Systematic Review and Meta-analysis (PRISMA) 2009 search strategies. We summarize a total of 114 published articles which conformed to the scope of the review. In addition, a list of 43 major datasets is presented.
Chapter
Fluorescein fundus angiography (FFA) is a promising modality for diagnosing several ophthalmic disorders. However, these images suffer from low resolution due to limitations of the image acquisition mechanisms. Super-resolving them with image processing approaches is a cost-effective way to improve their diagnostic value. This chapter proposes a model for FFA image super-resolution that merges deep image prior (DIP) with a structural regularization functional. The model is made adaptive through a residual deep-learning network and adaptive structural variation. It is evaluated on a standard dataset and demonstrates superior objective performance metrics and computational time. The proposed model is also flexible enough for extended investigations of super-resolution operations specific to abnormalities.
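The abstract does not spell out the structural regularization functional; as a purely illustrative stand-in, total variation is one common structural regularizer in super-resolution, and a minimal NumPy version looks like this:

```python
import numpy as np

def total_variation(img):
    """Anisotropic total variation: sum of absolute neighbour differences,
    penalizing noise while preserving edges (a stand-in regularizer only)."""
    img = img.astype(float)
    return (np.abs(np.diff(img, axis=0)).sum()
            + np.abs(np.diff(img, axis=1)).sum())
```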
Article
Full-text available
The first task in any retinal fundus image processing is to detect the optic disc, as this is the prime location in a fundus image from which all retinal blood vessels originate. In this paper, a faster method to detect the retinal optic disc is proposed that uses the mean intensity value of the retinal image to detect the center of the optic disc, which can be used in retinal image based person authentication systems or retinal disease diagnosis. A candidate-based approach on the green channel of the RGB fundus image is used to detect the optic disc center location. The system has been successfully tested on several publicly available standard databases, namely DRIVE, MESSIDOR, VARIA, VICAVR and DIARETDB_01, producing 97.5, 97.8, 94, 93.1 and 86.5 % accuracies, respectively. It is observed that if a lower recognition accuracy is accepted on the DRIVE database (97.5 instead of 100 %), the detection time drops from 7 to 2 s per image, which is faster than any previous method with such high accuracy.
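In the spirit of the mean-intensity, candidate-based idea described above, a rough sketch follows; the blur size and threshold are our illustrative assumptions, not the paper's parameters:

```python
import cv2
import numpy as np

def od_center(bgr):
    """Rough OD center: centroid of the brightest region of the green channel."""
    green = bgr[:, :, 1]
    smooth = cv2.GaussianBlur(green, (51, 51), 0)   # suppress vessels and noise
    mask = smooth > smooth.mean() + 2 * smooth.std()
    if not mask.any():                               # fall back to global maximum
        y, x = np.unravel_index(np.argmax(smooth), smooth.shape)
        return int(x), int(y)
    ys, xs = np.nonzero(mask)
    return int(xs.mean()), int(ys.mean())
```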
Article
Full-text available
Efficient optic disk (OD) localization and segmentation are important tasks in automated retinal screening. In this paper, we take the digital curvelet transform (DCUT) of the enhanced retinal image and modify its coefficients, based on the sparsity of curvelet coefficients, to obtain the probable location of the OD. If there are no yellowish objects in the retinal image, or their size is negligible, the OD location can be detected directly by applying a Canny edge detector to the image reconstructed from the modified coefficients. Otherwise, if these objects are prominent, circular regions appear in the edge map as candidate OD regions. In this case, we use morphological operations to fill these circular regions and erode them to obtain final candidate-region locations and to remove undesired pixels in the edge map. Finally, the candidate region with the maximum sum of pixels in the strongest edge map, obtained by thresholding the curvelet-based enhanced image, is chosen as the final location of the OD. This method has been tested on different retinal image datasets and quantitative results are presented.
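The curvelet enhancement step is beyond a short snippet, but the subsequent candidate-region logic can be sketched; any contrast-enhanced 8-bit grayscale image can stand in for the curvelet-reconstructed one, and the thresholds and kernel size are illustrative:

```python
import cv2

def od_candidates(enhanced):
    """Candidate OD regions: Canny edges, fill circular regions, erode."""
    edges = cv2.Canny(enhanced, 50, 150)
    se = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (7, 7))
    filled = cv2.morphologyEx(edges, cv2.MORPH_CLOSE, se)  # fill edge rings
    eroded = cv2.erode(filled, se)                         # compact blobs
    n_labels, labels = cv2.connectedComponents(eroded)
    return labels, n_labels   # candidate regions to score against the edge map
```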
Conference Paper
Full-text available
In this paper an approach is described for segmenting medical images. We use the active contour model, also known as snakes, and propose an energy minimization procedure based on Genetic Algorithms (GA). The widely recognized power of deformable models stems from their ability to segment anatomic structures by exploiting constraints derived from the image data together with a priori knowledge about the location, size, and shape of these structures. The application of snakes to extract a region of interest is, however, not without limitations. As is well known, this approach suffers from problems such as initialization, the existence of multiple minima, and the selection of elasticity parameters. We propose the use of GA to overcome these limits. GAs offer a global search procedure that has shown its robustness in many tasks, and they are not constrained by restrictive assumptions such as differentiability of the goal function. GAs operate on a coding of the parameters (the positions of the snake), and their fitness function is the total snake energy. We employ a modified version of the image energy which considers both the magnitude and the direction of the gradient and the Laplacian of Gaussian. Experimental results on medical images are reported. The images used in this work are ocular fundus images, for which snakes prove very useful in segmenting the Foveal Avascular Zone. The experiments performed with ocular fundus images show that the proposed method is promising for the early detection of diabetic retinopathy.
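A toy version of the GA-driven energy minimization may clarify the mechanics; the energy here keeps only an elasticity term and a gradient-magnitude image term (the paper's gradient-direction and Laplacian-of-Gaussian terms are omitted), and all GA hyperparameters are illustrative:

```python
import numpy as np

def snake_energy(flat_pts, grad_mag, alpha=0.5, beta=0.5):
    """Elasticity energy minus image energy sampled along the contour."""
    pts = flat_pts.reshape(-1, 2)
    elastic = np.sum(np.linalg.norm(np.roll(pts, -1, axis=0) - pts, axis=1) ** 2)
    xs = np.clip(pts[:, 0].astype(int), 0, grad_mag.shape[1] - 1)
    ys = np.clip(pts[:, 1].astype(int), 0, grad_mag.shape[0] - 1)
    return alpha * elastic - beta * grad_mag[ys, xs].sum()

def ga_minimize(init_pts, grad_mag, pop=60, gens=200, sigma=2.0, seed=0):
    """Truncation selection, uniform crossover, Gaussian mutation on coordinates."""
    rng = np.random.default_rng(seed)
    base = init_pts.astype(float).ravel()
    popu = base + rng.normal(0.0, sigma, (pop, base.size))
    for _ in range(gens):
        fit = np.array([snake_energy(ind, grad_mag) for ind in popu])
        elite = popu[np.argsort(fit)[: pop // 2]]          # keep the fitter half
        pa = elite[rng.integers(0, len(elite), pop)]
        pb = elite[rng.integers(0, len(elite), pop)]
        mask = rng.random((pop, base.size)) < 0.5          # uniform crossover
        popu = np.where(mask, pa, pb) + rng.normal(0.0, 0.2 * sigma,
                                                   (pop, base.size))
    fit = np.array([snake_energy(ind, grad_mag) for ind in popu])
    return popu[fit.argmin()].reshape(-1, 2)
```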
Article
Full-text available
In this paper, a novel wavelet transform based blood vessel distortion measure (WBVDM) is proposed to assess the image quality of blood vessels in processed retinal images. Wavelet analysis of a retinal image shows that different wavelet subbands carry different information about the blood vessels. The WBVDM is defined as the sum, over subbands, of the wavelet-weighted root of the normalized mean square error, expressed as a percentage. The proposed WBVDM is compared with other wavelet-based distortion measures such as the wavelet mean square error (WMSE), relative WMSE (RelWMSE) and root of the normalized WMSE (RNWMSE). The results show that WBVDM performs better in capturing blood vessel distortion. For distortion in clinically nonsignificant regions, the proposed WBVDM shows a low value of 1.1676, compared to a large mean square error value of 7.9909. Correlation evaluation using the Pearson linear correlation coefficient (PLCC) and the Spearman rank order correlation coefficient (SROCC) shows a higher correlation between WBVDM and the subjective score. The experimental observations show that WBVDM captures distortion in blood vessels more effectively and responds weakly to distortion inherent in other retinal features.
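Reading the verbal definition literally, the measure might be computed roughly as follows; the wavelet, decomposition level and uniform subband weights are our assumptions, since the paper's actual weights are not given in this abstract:

```python
import numpy as np
import pywt

def wbvdm(ref, dist, wavelet="db4", level=2, weights=None):
    """Weighted sum over wavelet subbands of the root of the normalized MSE,
    expressed as a percentage (a hedged reading of the verbal definition)."""
    cr = pywt.wavedec2(ref.astype(float), wavelet, level=level)
    cd = pywt.wavedec2(dist.astype(float), wavelet, level=level)
    # Flatten into (reference, distorted) subband pairs: approximation first,
    # then the (horizontal, vertical, diagonal) details of each level.
    pairs = [(cr[0], cd[0])] + [(r, d) for lr, ld in zip(cr[1:], cd[1:])
                                for r, d in zip(lr, ld)]
    if weights is None:
        weights = [1.0 / len(pairs)] * len(pairs)
    score = sum(w * np.sqrt(np.sum((r - d) ** 2) / (np.sum(r ** 2) + 1e-12))
                for w, (r, d) in zip(weights, pairs))
    return 100.0 * score
```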
Article
Full-text available
The retinal fundus photograph is widely used in the diagnosis and treatment of various eye diseases such as diabetic retinopathy and glaucoma. Medical image analysis and processing has great significance in medicine, especially in non-invasive treatment and clinical study. Normally, fundus images are manually graded by specially trained clinicians in a time- and resource-intensive process. Computer-aided fundus image analysis could provide immediate detection and characterisation of retinal features prior to specialist inspection. This paper describes a novel method to automatically localise both the optic disk and the fovea. The optic disk is localised by means of morphological operations and the Hough transform. The fovea is localised by means of its spatial relationship with the optic disk and the spatial distribution of the macula lutea. Results from two clinical data sets have been promising.
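A minimal sketch of the Hough-transform part is shown below; the radius bounds and accumulator parameters are illustrative assumptions, and the fovea can then be sought at a roughly fixed distance temporal to the detected OD center:

```python
import cv2
import numpy as np

def locate_od_hough(gray):
    """Optic disk localisation via the Hough circle transform (8-bit input)."""
    blur = cv2.medianBlur(gray, 5)
    circles = cv2.HoughCircles(blur, cv2.HOUGH_GRADIENT, dp=1.5,
                               minDist=gray.shape[0] // 2,
                               param1=120, param2=40,
                               minRadius=30, maxRadius=90)
    if circles is None:
        return None
    x, y, r = np.round(circles[0, 0]).astype(int)  # strongest circle first
    return (x, y), r
```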
Article
The use of vascular intersections as one of the symptoms for monitoring and diagnosing diabetic retinopathy from fundus images has been widely reported in the literature. In this work, a new hybrid approach is proposed that makes use of three different methods of vascular intersection detection, namely the Modified Cross-point Number (MCN), Combined Cross-point Number (CCN) and an Artificial Neural Network (ANN). Results obtained from applying this technique to both simulated and experimental data show very high accuracy and precision in detecting both bifurcation and crossover points, yielding an improvement in bifurcation and vascular crossing detection and a good tool for the monitoring and diagnosis of diabetic retinopathy.
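The cross-point-number idea underlying MCN can be sketched on a binary vessel skeleton; the paper's modifications and the ANN stage are not reproduced here:

```python
import numpy as np

def cross_point_numbers(skel):
    """Classify skeleton pixels by crossing number: count 0->1 transitions
    around the 8-neighbourhood; CN == 3 suggests a bifurcation, CN >= 4 a
    crossover candidate."""
    skel = (skel > 0).astype(np.uint8)
    offs = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
            (1, 1), (1, 0), (1, -1), (0, -1)]   # circular neighbour order
    bif, cross = [], []
    for y in range(1, skel.shape[0] - 1):
        for x in range(1, skel.shape[1] - 1):
            if not skel[y, x]:
                continue
            ring = [skel[y + dy, x + dx] for dy, dx in offs]
            cn = sum(ring[i] == 0 and ring[(i + 1) % 8] == 1 for i in range(8))
            if cn == 3:
                bif.append((x, y))
            elif cn >= 4:
                cross.append((x, y))
    return bif, cross
```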
Article
The Early Treatment Diabetic Retinopathy Study included the use of nonsimultaneous stereoscopic fluorescein angiography to assess the severity of characteristics such as capillary loss and fluorescein leakage and to guide treatment of macular edema. Two 30° photographic fields were taken, extending along the horizontal meridian from about 25° nasal to the disc to about 20° temporal to the macula, and a classification system was constructed to allow assessment of selected characteristics. This classification system relies on comparisons with standard and example photographs to evaluate the presence and severity of capillary loss and dilatation, arteriolar abnormalities, leakage of fluorescein dye (including characterization of its source), abnormalities of the retinal pigment epithelium, cystoid changes, and several other features. The classification is described and illustrated, and its reproducibility between graders is assessed by calculating percentages of agreement and kappa statistics for duplicate gradings of baseline angiograms. Agreement was substantial (weighted kappa, 0.61 to 0.80) for severity of fluorescein leakage and cystoid spaces, and moderate (weighted kappa, 0.41 to 0.60) for capillary loss, capillary dilatation, narrowing/pruning of arteriolar side branches, staining of arteriolar walls, and source of fluorescein leakage (microaneurysms versus diffusely leaking capillaries).
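For readers unfamiliar with the statistic, a weighted kappa on duplicate ordinal gradings can be computed directly with scikit-learn; the grade vectors below are toy data for illustration only:

```python
from sklearn.metrics import cohen_kappa_score

# Duplicate gradings of the same angiograms by two graders on an
# ordinal severity scale (toy data, not ETDRS results).
grader1 = [0, 1, 2, 2, 3, 1, 0, 2]
grader2 = [0, 1, 2, 3, 3, 1, 1, 2]

# Linear weighting penalizes disagreements by their ordinal distance;
# 0.41-0.60 is conventionally "moderate", 0.61-0.80 "substantial".
kappa = cohen_kappa_score(grader1, grader2, weights="linear")
print(f"weighted kappa = {kappa:.2f}")
```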