Procedia Computer Science 90 (2016) 54–60
1877-0509 © 2016 The Authors. Published by Elsevier B.V. This is an open access article under the CC BY-NC-ND license
(http://creativecommons.org/licenses/by-nc-nd/4.0/).
Peer-review under responsibility of the Organizing Committee of MIUA 2016
doi: 10.1016/j.procs.2016.07.010
ScienceDirect
Available online at www.sciencedirect.com
International Conference On Medical Imaging Understanding and Analysis 2016, MIUA 2016,
6-8 July 2016, Loughborough, UK
Automatic Generation of Synthetic Retinal Fundus Images:
Vascular Network
Lorenza Bonaldia, Elisa Mentia, Lucia Ballerinib,∗, Alfredo Ruggeric, Emanuele Truccoa
aVAMPIRE project, Computing, School of Science and Engineering, University of Dundee, UK
bVAMPIRE project, Department of Neuroimaging Sciences, University of Edinburgh, UK
cDepartment of Information Engineering, University of Padova, Italy
Abstract
This work is part of an ongoing project aimed to generate synthetic retinal fundus images. This paper concentrates on the gener-
ation of synthetic vascular networks with realistic shape and texture characteristics. An example-based method, the Active Shape
Model, is used to synthesize reliable vessels’ shapes. An approach based on Kalman Filtering combined with an extension of the
Multiresolution Hermite vascular cross-section model has been developed for the simulation of vessels’ textures. The proposed
method is able to generate realistic synthetic vascular networks with morphological properties that guarantee the correct flow of
the blood and the oxygenation of the retinal surface observed by fundus cameras. The validity of our synthetic retinal images is
demonstrated by qualitative assessment and quantitative analysis.
Keywords: Synthetic Retinal Images, Shape, Texture, Validation
1. Introduction
Retinal Image Analysis (RIA) aims to develop computational and mathematical techniques for helping clinicians
with the diagnosis of diseases such as diabetes, glaucoma and cardiovascular conditions, that may cause changes
in retinal blood vessel patterns like tortuosity, bifurcations, variation of vessel width and colour1,2 . RIA algorithms
have to be validated to avoid obtaining misleading results. Validation can be defined as the process of showing
that an algorithm performs correctly by comparing its output with a reference standard3. A common practice for
validation of medical image algorithms is to use Ground Truth (GT) provided by medical experts. Obtaining manually
GT images annotated by clinicians is an expensive and laborious task which motivates the creation of a synthetic
dataset providing GT for algorithm validation. Medical phantoms are extensively used in many medical imaging
environments4,5 . However, to our best knowledge, there are no publicly available databases of synthetic retinal fundus
images, and providing annotations for large image repositories (e.g. UK Biobank alone stores fundus images for
∗ Corresponding author. Tel.: +44-131-4659529. Bonaldi and Menti contributed equally.
E-mail address: lucia.ballerini@ed.ac.uk
68,000 patients) is often impossible. Synthesized high-resolution fundus images, along with GT free from inter-/
intra-observer variability, would allow an efficient validation of algorithms for segmentation and analysis of retinal
anatomical structures: by tuning morphological and textural characteristics of these images, we can represent the
hallmarks of several diseases or different populations. This work focuses on the generation of retinal vessels and their
integration with non-vessel regions to yield complete fundus camera images, i.e. retinal background, fovea and Optic
Disc (OD), previously reported by Fiorini et al.6. The resulting synthetic retinal fundus images include explicit GT
for vessels binary maps, bifurcation point locations, vessel widths and artery/vein classification.
This paper is organized as follows. In Section 2 we describe the proposed method for the generation of the mor-
phological properties (Subsection 2.2) and the textural features (Subsection 2.3) of the vasculature. In Section 3 we
report results and summarize and discuss our experiments to evaluate them. Finally in Section 4 we give concluding
remarks and hints for future work.
2. Method
2.1. Overview
The proposed approach consists of a learning phase and a generation phase. In the former phase, data describing
vascular morphology and texture are collected from annotations of real images. Models are specified and their pa-
rameters learned from the training data. In the latter phase, the models obtained are used to create synthetic vascular
networks. Arteries (A) and Veins (V) are created separately with the same protocol, and then combined together. This
work is based on the publicly available High-Resolution Fundus (HRF) image database1,7, and on a subset of retinal
images of the GoDARTS bioresource2.
2.2. Vascular Morphology
The generation of synthetic vessel morphology has been achieved using the well-known Active Shape Model
(ASM)8. This model provides a statistical representation of shape represented by a set of points, called landmark
points. By analysing the variations in shape over the training set, a PCA model is built. The training samples (vessel
shapes in our case) are aligned into a common coordinate frame and the deviations from the mean shape are analysed.
Each training shape is represented as a fixed number n of landmark points placed along a vessel and equally spaced.
These landmarks form a 2n-vector x, the dimensionality of which is reduced using Principal Component Analysis
(PCA), assuming that the most interesting feature is the one with the largest variance. Hence, each shape can be
approximated as:

x_i ≈ x̄ + P b_i    (1)

where x̄ is the mean shape of the aligned data, P contains the first t eigenvectors corresponding to the largest t
eigenvalues of the covariance matrix of the training shapes, and b_i is a t-dimensional vector of parameters of a
deformable shape model. We choose t so that the model represents 98% of the total variance of our training data. By
varying the elements in b_i, randomly choosing them from a multivariate normal distribution learned across the training
set shapes, we generate a new synthetic vessel using Eq. (1).
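For illustration, the model training and shape sampling steps can be sketched in Python with NumPy (the original implementation is in Matlab; the function names are ours, and we assume the deformation parameters b_i are sampled mode-wise with variances given by the PCA eigenvalues):

```python
import numpy as np

def train_pdm(shapes, variance_kept=0.98):
    """Fit the PCA point-distribution model from aligned training shapes.

    shapes : (n_samples, 2n) matrix, one aligned landmark vector per row.
    Returns the mean shape, the first t eigenvectors and their eigenvalues,
    with t chosen to retain 98% of the total variance.
    """
    mean_shape = shapes.mean(axis=0)
    cov = np.cov(shapes - mean_shape, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)
    order = np.argsort(eigvals)[::-1]             # sort modes by descending variance
    eigvals = np.clip(eigvals[order], 0.0, None)  # guard against numerical negatives
    eigvecs = eigvecs[:, order]
    t = int(np.searchsorted(np.cumsum(eigvals) / eigvals.sum(), variance_kept)) + 1
    return mean_shape, eigvecs[:, :t], eigvals[:t]

def generate_shape(mean_shape, P, eigvals, rng=None):
    """Sample a synthetic vessel shape via Eq. (1): x = x_bar + P b."""
    if rng is None:
        rng = np.random.default_rng()
    b = rng.normal(0.0, np.sqrt(eigvals))  # b ~ N(0, diag(eigenvalues))
    return mean_shape + P @ b
```

Each call to `generate_shape` yields a new plausible landmark vector within the learned shape space.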
The data describing the shape of the vessels of each type (A and V) for the main arcades, nasal and temporal
(n=81 landmarks), and their branches (n=31 landmarks) up to three levels of branching have been previously
collected from 50 GoDARTS retinal fundus images. We used the polar coordinate system centred in the OD and
having the main axis in the direction of the OD-Fovea axis (i.e. the line connecting the OD centre and the fovea),
adopted by the VAMPIRE software suite9. Vessel shapes are represented in this system using a transformation that
includes a rigid translation and rotation. Fig. 1 (a) shows the aligned set of shapes of the temporal arcades and their
mean shape. Similarly the shapes of the branches have been aligned using a rigid transformation that shifts their
starting point to the origin of the same coordinate system.
1The HRF database can be freely downloaded at http://www5.cs.fau.de/research/data/fundus-images/
2The GoDARTS resource is described at http://medicine.dundee.ac.uk/godarts
Individually generated synthetic vessels are then connected to create the vascular network skeleton. The location
of vessel bifurcations is estimated from real images as follows. First we calculate the spatial density distribution map
(Fig. 1 (b)) of all bifurcation points annotated on real images. Then we map our synthetic vessel onto it, obtaining a
probability score for each point of the vessel to become a bifurcation point (Fig. 1 (c)). We select one of the points
having maximum score as the first bifurcation point of the main arcades. We select each subsequent bifurcation point
as one of the points having maximum score located at a distance d ∈ [l/2n, l/n] from the previous one, where l is the
length of the vessel and n is the desired number of bifurcations. We continue to select points until reaching the desired
number of bifurcation points.
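The selection procedure above can be sketched as follows (an illustrative Python version; `scores` holds the density-map probability sampled at each centreline point, so distances are measured in centreline points rather than pixels, and the function name is ours):

```python
import numpy as np

def pick_bifurcations(scores, n_bif):
    """Select bifurcation points along a vessel centreline.

    scores : (m,) density-map probability at each centreline point
    n_bif  : desired number of bifurcations
    Returns the indices of the chosen centreline points.
    """
    m = len(scores)
    lo, hi = m // (2 * n_bif), m // n_bif  # d in [l/2n, l/n], in points
    chosen = [int(np.argmax(scores))]      # first point: maximum score
    while len(chosen) < n_bif:
        start = chosen[-1] + max(lo, 1)
        stop = min(chosen[-1] + hi + 1, m)
        if start >= stop:                  # ran off the end of the vessel
            break
        window = scores[start:stop]        # max-score point within [lo, hi]
        chosen.append(start + int(np.argmax(window)))
    return chosen
```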
For each branch originating from a bifurcation point we compute its orientation and calibre using the bifurca-
tion model described by Murray’s Law10, linking branching angles with vessel calibres. Newly generated synthetic
branches need to fit with the context of the vascular tree already generated: all vessels should be inside the Field of
View (FOV), but outside the foveal region, avoiding intersections between vessels of the same type, and converging
toward the fovea.
Fig. 1. (a) Aligned shapes of the temporal arcades (green) and their mean shape (red). (b) Density Map distribution of artery bifurcation points in
the image plane. (c) A synthetic vessel with the probability score of each point to be a bifurcation point.
The binary map of the vascular tree (shown in Fig. 2) is obtained by adding calibre information using mathematical
morphological dilation of the skeleton. The initial calibre of the main arcades is sampled from the estimated distribution
of the largest vessel calibre of real images. The initial calibre of each branch is obtained from the parent vessel calibre
according to Murray's Law.
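Murray's Law constrains the child calibres through r0³ = r1³ + r2³, and the optimal branching angles follow from the calibres via the classical minimum-work relations. A minimal sketch (the asymmetry ratio and function name are our illustration, not part of the original method):

```python
import numpy as np

def murray_children(r0, asymmetry=1.0):
    """Child calibres and branching angles from Murray's Law.

    r0        : parent vessel radius
    asymmetry : ratio r2/r1 between the two child radii (1.0 = symmetric)
    """
    # Murray's Law: r0^3 = r1^3 + r2^3 (minimum-work principle)
    r1 = r0 / (1.0 + asymmetry ** 3) ** (1.0 / 3.0)
    r2 = asymmetry * r1
    # classical optimal angles between the parent axis and each child branch
    cos1 = (r0 ** 4 + r1 ** 4 - r2 ** 4) / (2.0 * r0 ** 2 * r1 ** 2)
    cos2 = (r0 ** 4 + r2 ** 4 - r1 ** 4) / (2.0 * r0 ** 2 * r2 ** 2)
    theta1 = np.degrees(np.arccos(np.clip(cos1, -1.0, 1.0)))
    theta2 = np.degrees(np.arccos(np.clip(cos2, -1.0, 1.0)))
    return r1, r2, theta1, theta2
```

For a symmetric bifurcation this gives r1 = r2 = r0/2^(1/3) with equal branching angles of about 37.5 degrees.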
2.3. Vascular Texture
To generate synthetic vessel textures, we collected information on the intensity values along vessels and on textural fea-
tures of the surrounding area (background), and we created a model that combines these two sets of information, capturing
the transition of intensities between vessels and background.
Fig. 2. Example of synthetic vascular tree (arteries in white and veins in gray for display purpose).
2.3.1. Data Collection
Cross-sections of the vessel of interest were defined, spaced by 5 pixels along the vessel centerline. We extracted
the intensity RGB profile on lines perpendicular to the direction of the vessel, as depicted in Fig. 3 (a). The green
channel intensities are fitted (Fig. 3 (b)) with a weighted non-linear least-squares method using the 6-parameter
Extended Multiresolution Hermite Model11 (EMHM). The EMHM accounts for non-symmetric and symmetric
profiles, with or without central reflex, expressed by the formula:
H(a, m, p, q, x) = p{1 + a[(x − m − δ)² − 1]} e^(−(x−m)²/(2σ²)) + q    (2)

where a ∈ [−1, 1] models the depth of the central reflection; m ∈ [1, length(profile)] is the mean of the Gaussian
and allows shifts along the x-axis, and length(profile) is the length of the vessel region around the target location;
δ ∈ [−2, 2] accounts for asymmetry; σ ∈ [1, 15] is the standard deviation of the Gaussian; q ∈ [0, 255] shifts the
function along the y-axis, avoiding negative pixel values; p ∈ [0, 150] guarantees that vessels are darker than the
background; x is a vector of the same length as the cross-section of the vessel. The initial conditions are a = 0,
m = length(profile)/2, δ = 0.2, σ = length(profile)/std(profile), q = max(profile), p = max(profile) − min(profile).
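A sketch of the fitting step using SciPy's `curve_fit` (the weighting scheme of the original weighted fit is omitted; the parameter bounds and initial conditions follow the values above, and the function names are ours):

```python
import numpy as np
from scipy.optimize import curve_fit

def emhm(x, a, m, delta, sigma, q, p):
    """Extended Multiresolution Hermite Model of Eq. (2)."""
    return p * (1.0 + a * ((x - m - delta) ** 2 - 1.0)) \
             * np.exp(-(x - m) ** 2 / (2.0 * sigma ** 2)) + q

def fit_profile(profile):
    """Non-linear least-squares fit of one green-channel cross-section."""
    x = np.arange(len(profile), dtype=float)
    # initial conditions as stated in the paper
    p0 = [0.0, len(profile) / 2.0, 0.2,
          len(profile) / np.std(profile),
          np.max(profile), np.max(profile) - np.min(profile)]
    # parameter bounds: a, m, delta, sigma, q, p
    bounds = ([-1, 1, -2, 1, 0, 0], [1, len(profile), 2, 15, 255, 150])
    params, _ = curve_fit(emhm, x, profile, p0=p0, bounds=bounds)
    return params
```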
At the extremities of each cross-section (green circles in Fig. 3 (a)) we computed five statistical texture descriptors12,13
(Mean, Std, Skewness, Kurtosis and Entropy) on two near-circular windows of 6-pixel radius.
The ensemble of these data, 6 EMHM parameters (X_{n×6}) and 5 × 2 background texture descriptors (Y_{n×10}) for each
profile, for a total of 975 artery and 1593 vein profiles collected from the 15 healthy subjects of the HRF dataset,
constitutes the measurements for the procedure proposed below.
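The five descriptors can be computed per window as first-order statistics of the intensity values, e.g. as below (the histogram bin count used for the entropy is our choice, not specified in the paper):

```python
import numpy as np
from scipy.stats import skew, kurtosis

def texture_descriptors(patch, bins=32):
    """Mean, Std, Skewness, Kurtosis and Entropy of a background window."""
    v = np.asarray(patch, dtype=float).ravel()
    hist, _ = np.histogram(v, bins=bins)
    prob = hist / hist.sum()                       # normalized grey-level histogram
    entropy = -np.sum(prob[prob > 0] * np.log2(prob[prob > 0]))
    return np.array([v.mean(), v.std(), skew(v), kurtosis(v), entropy])
```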
2.3.2. Generation of Vessel Textures
The procedure for creating reliable synthetic vessel texture takes into account both the continuity of intensity
profiles along the vessel and their consistency with background intensities. We apply a Kalman Filter14, formulating
our problem as a state-space system:

x_k = F x_{k−1} + w_{k−1}    (system model)
y_k = H x_k + v_k            (measurement model)    (3)

where x_k is the state vector containing the 6 parameters describing the intensity profile, F is the state transition matrix
(set to the identity matrix), y_k is the vector of measurements given by the 10 textural descriptors of the synthetic background,
and the two vectors w_{k−1} and v_k are unrelated realizations of white zero-mean Gaussian noise. The measurement
matrix H has been obtained, using Multivariate Multiple Linear Regression, solving the system:
⎡ y_{1,1} ⋯ y_{1,10} ⎤   ⎡ x_{1,1} ⋯ x_{1,6} ⎤ ⎡ h_{1,1} ⋯ h_{1,10} ⎤   ⎡ ε_{1,1} ⋯ ε_{1,10} ⎤
⎢    ⋮    ⋱     ⋮    ⎥ = ⎢    ⋮    ⋱    ⋮    ⎥ ⎢    ⋮    ⋱     ⋮    ⎥ + ⎢    ⋮    ⋱     ⋮    ⎥    (4)
⎣ y_{n,1} ⋯ y_{n,10} ⎦   ⎣ x_{n,1} ⋯ x_{n,6} ⎦ ⎣ h_{6,1} ⋯ h_{6,10} ⎦   ⎣ ε_{n,1} ⋯ ε_{n,10} ⎦

where the matrices X_{n×6} and Y_{n×10} are the measurements calculated as described in Sec. 2.3.1 and ε represents the
system error.
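Solving Eq. (4) for H in the least-squares sense is a standard multivariate multiple linear regression; a minimal sketch:

```python
import numpy as np

def estimate_H(X, Y):
    """Least-squares solution of Y = X H + E (Eq. 4).

    X : (n, 6) EMHM parameters, Y : (n, 10) texture descriptors.
    Returns the (6, 10) measurement matrix H.
    """
    H, *_ = np.linalg.lstsq(X, Y, rcond=None)
    return H
```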
Equations (3) recursively estimate, through a predictor-corrector method, the state x_k and its covariance P_k. The
initial estimate of the state x̂_0 (first profile) is assumed to be known and its covariance matrix P_0 is initialized to
Fig. 3. (a) Cross-sections perpendicular to the vessel direction, background regions, and the RGB intensity profile along one of the cross-sections.
(b) Green-channel profile fitted using the Extended Multiresolution Hermite Model.
zero. The first profile for the major arcades is the profile whose background descriptors are most similar to the current
synthetic ones. The first profile for the branches is the profile of the parent vessel at the bifurcation point from which
they originate.
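One predictor-corrector iteration of the filter, under the model of Eq. (3) with F set to the identity, can be sketched as follows (we write the measurement matrix in the column-vector convention, i.e. the transpose of the H estimated from Eq. (4); the noise covariances Q and R are free parameters of this illustration):

```python
import numpy as np

def kalman_step(x, P, y, C, Q, R):
    """One predict/correct Kalman iteration with F = identity.

    x : (6,) state estimate       P : (6, 6) state covariance
    y : (10,) measurement         C : (10, 6) measurement matrix (H^T of Eq. 4)
    Q : (6, 6) process noise cov  R : (10, 10) measurement noise cov
    """
    # predict (F = identity: the state is propagated unchanged)
    x_pred = x
    P_pred = P + Q
    # correct
    S = C @ P_pred @ C.T + R              # innovation covariance
    K = P_pred @ C.T @ np.linalg.inv(S)   # Kalman gain
    x_new = x_pred + K @ (y - C @ x_pred)
    P_new = (np.eye(len(x)) - K @ C) @ P_pred
    return x_new, P_new
```

With P_0 = 0 as in the paper, the process noise Q lets the filter progressively trust the measurements and track the profile parameters along the vessel.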
Iterating this procedure, each new intensity profile of the green channel is generated taking into account the previous
one and the surrounding background. A similar procedure has been developed for the red and blue channels; however,
based on experimental results, we later decided to simply use the average intensity profile of the training set for these
two channels. The red component has been weighted with the underlying background red intensity level, in order
to take into account also the colour spatial distribution of the whole image. Finally, the RGB intensity profile has been
cut with the Full Width at Half Maximum algorithm15 to retain only the vessel component, and it has been resampled
using the Bresenham line-drawing algorithm16. Experiments showed that the quality of the synthetic images
generated would not improve using the full Kalman estimator in the red and blue channels.
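The FWHM cut keeps only the samples between the two half-maximum crossings around the vessel centre; a sketch for a dark-vessel profile (the function name and the single-channel simplification are ours):

```python
import numpy as np

def fwhm_cut(profile):
    """Keep only the vessel component via Full Width at Half Maximum.

    Assumes a dark-vessel profile (minimum at the vessel centre); the profile
    is cut where it crosses halfway between the background level and the
    extremum.
    """
    prof = np.asarray(profile, dtype=float)
    centre = int(np.argmin(prof))             # vessel centre (darkest point)
    half = (prof.max() + prof[centre]) / 2.0  # half-maximum level
    left = centre
    while left > 0 and prof[left] < half:     # walk out to the left crossing
        left -= 1
    right = centre
    while right < len(prof) - 1 and prof[right] < half:
        right += 1                            # walk out to the right crossing
    return prof[left:right + 1]
```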
The two vascular trees obtained, one for the arteries and one for the veins, are combined together and superimposed
on synthetic backgrounds6to create complete synthetic fundus camera images. A Gaussian filter to smooth vessel
edges, and one to reduce noise on the whole image are finally applied.
The generated synthetic image size is 3125×2336 pixels with a FOV diameter of 2662 pixels, in line with the res-
olution of state-of-the-art fundus cameras. The whole method and a user-friendly interface of the simulation tool
have been implemented in Matlab R2014b. An extended dataset of synthetic images and the simulation tool will be
publicly available after publication.
3. Results
In Fig. 4 we visually compare a real image (a) with two synthetic images (b, c). We notice that the synthetic vessels
are characterized by a realistic morphology, including typical tortuosity. The temporal segments of the arcades go
toward and around the macula, and the nasal segments radiate from the nerve head. The vessel colouring is
always darker than the background, as in real images: vessels appear brighter around the OD and darker towards
the fovea and the extremities of the FOV. The arteries appear, as in real images, brighter and narrower than veins.
Because of the changes in intensity profile along the tree, the central reflex (a central, thin, bright reflection appearing
sometimes along the centerline of large vessels, especially arteries) is automatically provided.
Fig. 4. Comparison between a real fundus image from the HRF dataset (a) and two complete synthetic retinal fundus images generated by our
method (b,c).
In the absence of quantitative quality criteria, we performed a simple qualitative assessment by asking 7 experts (oph-
thalmologists and researchers in retinal image analysis) to score the degree of realism of 12 synthetic retinal fundus
images, using a scale from 1 to 4, where 1 = not realistic at all, 2 = slightly realistic, 3 = nearly realistic, 4 = very realistic.
The best image obtained a score of 2.8, while the average score over all the images is 2.13. We did not ask the experts
to make any allowance for the fact that many characteristics of fundus images are not modelled (e.g. small capillaries,
the vascular network inside the OD). Considering this, the scores suggest that our synthetic images are plausible, at
least as far as the generated features go. The experts also suggested some improvements: the density of the vessels in
some zones is too high, the largest vessels occasionally end abruptly, some first-level branches appear too straight and
the direction of growth sometimes recoils. These aspects will be considered in our future work.
The main purpose of this project is to generate a synthetic dataset, along with its GT, for validation of retinal image
analysis algorithms. Such techniques have to function in the same way when applied to phantoms with synthetic GT
and to real images with manual GT. To demonstrate the suitability of our dataset for this purpose, we compared the
performance of an automatic segmentation algorithm. The segmentation is performed with the VAMPIRE software
suite17 on 10 healthy HRF images provided with manual GT and on 10 of our synthetic images having synthetic binary
maps. Segmentation results are evaluated in terms of the standard statistical criteria3. The comparison of these 2
experiments, summarized in Table 1, shows that our synthetic images behave comparably with real ones in terms of
vasculature segmentation, and certainly in line with the performance of algorithms reported recently in the literature18.
We note generally small differences between all values.
Table 1. Performance comparison of the VAMPIRE segmentation algorithm on real (HRF) and synthetic images: True Positive Rate (TPR), False
Positive Rate (FPR), Specificity (Sp), Accuracy (Acc) (mean ± std).

                   TPR                FPR                Sp                 Acc
Real Images        0.9874 ± 0.0015    0.0058 ± 0.0070    0.9942 ± 0.0070    0.9936 ± 0.0063
Synthetic Images   0.9703 ± 0.0185    0.0151 ± 0.0125    0.9849 ± 0.0125    0.9835 ± 0.0122
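The criteria in Table 1 are computed from the confusion matrix of the binary segmentation against the binary GT vessel map; a minimal sketch:

```python
import numpy as np

def segmentation_metrics(pred, gt):
    """TPR, FPR, Specificity and Accuracy of a binary vessel map vs. GT."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    tp = np.sum(pred & gt)    # vessel pixels correctly detected
    fp = np.sum(pred & ~gt)   # background pixels marked as vessel
    tn = np.sum(~pred & ~gt)  # background correctly rejected
    fn = np.sum(~pred & gt)   # vessel pixels missed
    tpr = tp / (tp + fn)
    fpr = fp / (fp + tn)
    return tpr, fpr, 1.0 - fpr, (tp + tn) / pred.size
```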
4. Conclusions
This paper has presented a novel technique to generate a reliable synthetic retinal vasculature, as part of an ongoing
project aimed to generate full, realistic, synthetic fundus camera images. The results are promising for both the
morphology and the texture of the vessel networks. To our best knowledge no similar method has been reported in
the literature. The encouraging quality of our initial results is supported by small-scale visual inspection and
quantitative experiments so far. Further improvements to this preliminary work will take into account further properties of
real fundus images, including the geometric interaction between arteries and veins, the way vessels radiate from the
OD, the vascular network inside the OD and the appearance of further structures like small capillaries and the retinal
nerve fibre layer. An interesting future direction would be the generation of data to simulate individuals with known
medical conditions.
References
1. Yin, Y., Adel, M., Bourennane, S.. Retinal vessel segmentation using a probabilistic tracking method. Pattern Recognition 2012;45(4):1235–
1244.
2. Annunziata, R., Garzelli, A., Ballerini, L., Mecocci, A., Trucco, E.. Leveraging multiscale hessian-based enhancement with a novel
exudate inpainting technique for retinal vessel segmentation. IEEE Journal of Biomedical and Health Informatics 2015;In press.
3. Trucco, E., Ruggeri, A., Karnowski, T., et al. Validating retinal fundus image analysis algorithms: Issues and a proposal. Investigative
Ophthalmology & Visual Science 2013;54(5):3546–3559.
4. Collins, D.L., Zijdenbos, A.P., Kollokian, V., Sled, J.G., Kabani, N.J., Holmes, C.J., et al. Design and construction of a realistic digital
brain phantom. IEEE Transactions on Medical Imaging 1998;17(3):463–468.
5. Lehmussola, A., Ruusuvuori, P., Selinummi, J., Huttunen, H., Yli-Harja, O.. Computational Framework for Simulating Fluorescence
Microscope Images With Cell Populations. IEEE Transactions on Medical Imaging 2007;26(7):1010–1016.
6. Fiorini, S., Ballerini, L., Trucco, E., Ruggeri, A.. Automatic generation of synthetic retinal fundus images. In: Medical Image Under-
standing and Analysis (MIUA). 2014, p. 7–12.
7. Odstrcilik, J., Kolar, R., Budai, A., et al. Retinal vessel segmentation by improved matched filtering: evaluation on a new high-resolution
fundus image database. IET Image Processing 2013;7(4):373–383.
8. Cootes, T.F., Taylor, C.J., Cooper, D.H., Graham, J.. Active shape models-their training and application. Computer vision and image
understanding 1995;61(1):38–59.
9. Trucco, E., Ballerini, L., Relan, D., et al. Novel VAMPIRE algorithms for quantitative analysis of the retinal vasculature. In: Proc. IEEE
ISSNIP/BRC. 2013, p. 1–4.
10. Murray, C.D.. The Physiological Principle of Minimum Work Applied to the Angle of Branching of Arteries. The Journal of General
Physiology 1926;9(6):835–841.
11. Lupascu, C.A., Tegolo, D., Trucco, E.. Accurate estimation of retinal vessel width using bagged decision trees and an extended multireso-
lution Hermite model. Medical Image Analysis 2013;17(8):1164–1180.
12. Poletti, E., Veronese, E., Calabrese, M., Bertoldo, A., Grisan, E.. Supervised classification of brain tissues through local multi-scale
texture analysis by coupling dir and flair mr sequences. vol. 8314. 2012, p. 83142T–83142T–7.
13. Haralick, R.M.. Statistical and structural approaches to texture. Proceedings of the IEEE 1979;67(5):786–804.
14. Kalman, R.E.. A new approach to linear filtering and prediction problems. Transactions of the ASME–Journal of Basic Engineering 1960;
82(Series D):35–45.
15. Lowell, J., Hunter, A., Steel, D., Basu, A., Ryder, R., Kennedy, R.. Measurement of Retinal Vessel Widths From Fundus Images Based
on 2-D Modeling. IEEE Transactions on Medical Imaging 2004;23(10):1196–1204.
16. Bresenham, J.E.. Algorithm for computer control of a digital plotter. IBM Syst J 1965;4(1):25–30.
17. Trucco, E., Giachetti, A., Ballerini, L., Relan, D., Cavinato, A., MacGillivray, T.. Morphometric Measurements of The Retinal Vasculature
in Fundus Images with Vampire. In: Lim, J.H., Ong, S.H., Xiong, W., editors. Biomedical Image Understanding, Methods and Applications.
John Wiley & Sons, Inc; 2015, p. 91–111.
18. Fraz, M., Remagnino, P., Hoppe, A., Uyyanonvara, B., Rudnicka, A., Owen, C., et al. Blood vessel segmentation methodologies in retinal
images: A survey. Computer Methods and Programs in Biomedicine 2012;108(1):407–433.
... Pattern generation formed by the foraging of some multicellular organisms (i.e. Physarum polycephalum) [118,119] Constrained constructive optimisation [120] Computational modeling Intravascular volume and branching angles (cost function) Complex arterial trees [121,122], human cerebrovasculature [123] Active shape model [124] Computational modeling Reference points of real images Retinal vascular network [125] Diffusion-limited aggregation [126] Computational modeling Density correlations in terms of the distance separating the two sites ...
... Such models allow a variety of structures to be generated since they use reference images. Particularly, in [125] they use an active shape model to generate synthetic images of the retina. The disadvantage of the method is that good-quality images are required for segmenting the vascular networks to reproduce the desired structure. ...
Preprint
With the recent success of computer vision and deep learning, remarkable progress has been achieved on automatic personal recognition using vein biometrics. However, collecting large-scale real-world training data for palm vein recognition has turned out to be challenging, mainly due to the noise and irregular variations included at the time of acquisition. Meanwhile, existing palm vein recognition datasets are usually collected under near-infrared light, lacking detailed annotations on attributes (e.g., pose), so the influences of different attributes on vein recognition have been poorly investigated. Therefore, this paper examines the suitability of synthetic vein images generated to compensate for the urgent lack of publicly available large-scale datasets. Firstly, we present an overview of recent research progress on palm vein recognition, from the basic background knowledge to vein anatomical structure, data acquisition, public database, and quality assessment procedures. Then, we focus on the state-of-the-art methods that have allowed the generation of vascular structures for biometric purposes and the modeling of biological networks with their respective application domains. In addition, we review the existing research on the generation of style transfer and biological nature-based synthetic palm vein image algorithms. Afterward, we formalize a general flowchart for the creation of a synthetic database comparing real palm vein images and generated synthetic samples to obtain some understanding into the development of the realistic vein imaging system. Ultimately, we conclude by discussing the challenges, insights, and future perspectives in generating synthetic palm vein images for further works.
... Before the introduction of General Adversarial Networks (GANs) [11], synthesizing realistic retinal images was attempted by using a complex mathematical model of the eye anatomy [3], [15]. [4] used a pair of retinal fundus images with vessel tree segmentation to synthesize color retinal images. ...
Conference Paper
Two major challenges in applying deep learning to develop a computer-aided diagnosis of fundus images are the lack of enough labeled data and legal issues with patient privacy. Various efforts are being made to increase the amount of data either by augmenting training images or by synthesizing realistic-looking fundus images. However, augmentation is limited by the amount of available data and it does not address the patient privacy concern. In this paper, we propose a Generative Adversarial Network-based (GAN-based) fundus image synthesis method (Fundus GAN) that generates synthetic training images to solve the above problems. Fundus GAN is an improved way of generating retinal images by following a two-step generation process which involves first training a segmentation network to extract the vessel tree followed by vessel tree to fundus image-to-image translation using unsupervised generative attention networks. Our results show that the proposed Fundus GAN outperforms state of the art methods in different evaluation metrics. Our results also validate that generated retinal images can be used to train retinal image classifiers for eye diseases diagnosis. Clinical Relevance- Our proposed method Fundus GAN helps in solving the shortage of patient privacy-preserving training data in developing algorithms for automating image- based eye disease diagnosis. The proposed two-step GAN- based image synthesis can be used to improve the classification accuracy of retinal image classifiers without compromising the privacy of the patient.
... Since before DL era, synthesizing accurate pictures of the ocular fundus has been a difficult job. Initially, it was addressed by developing sophisticated mathematical models of ocular anatomy [18,19]. Nowadays, technological advancements have resulted in significant computing capacity, allowing ML to be applied to neural networks with deep architectures. ...
Article
— Diabetic Retinopathy (DR) is a serious consequence of diabetes that seriously impact on the eyes and is a leading cause of blindness. If the lesions in DR arise in the central portion of the fundus, they may result in significant vision loss, which we refer to as Diabetic Macular Edema (DME). Deep learning (DL) techniques are commonly used utilized in ophthalmology for discriminative tasks such as diabetic retinopathy or age-related macular degeneration (AMD) diagnosis. Deep learning techniques typically need huge picture data sets for deep convolutional neural networks (DCNNs) training, it should be graded by human specialists. According to international protocol, it is classified into five severity categories. However, improving a grading model for high generality needs a significant quantity of balanced training data, which is challenging to obtain, especially at high levels of severity. Typical techniques for data augmentation, in many applications of deep learning in the retinal image processing domain, the difficulty of access to huge annotated datasets and legal concerns about patient privacy are limiting issues. As a result, the concept of creating synthetic retinal pictures that are indistinguishable from actual data has garnered more attention. GANs have been certain to be an effective framework for creating synthetic databases of anatomically accurate retinal fundus pictures. GANs, in particular, have garnered increasing attention in ophthalmology. in this article, we present a loss-less generative adversarial network (DR-LL GAN) to generate good resolution fundus pictures that May be adjusted to include random grading and information about the lesion. As a result, large-scale generated data may be used to train a DR grading and lesion segmentation model with more appropriate augmentation. 
Our model experiments evaluated on IDRID and MESSIDOR datasets, it's obtained a discrimination loss of 0.69374 and a generation loss of 1.10438, as well as a segmentation accuracy of 0.9840 in our tests. This might support in the optimization techniques of the neural network design and in computer-aided screening of medical picture, thus increasing diagnostic reliability for clinical assessment in the future of sophisticated technological healthcare.
... The scarcity of retinal image data can be seen in some of the classification datasets which are published in the literature as summarized in Table 2a [140]. GANs and autoencoders are used to synthesize synthetic retinal images. ...
Article
Full-text available
Retinal image analysis is an integral and fundamental step towards the identification and classification of ocular diseases like glaucoma, diabetic retinopathy, macular edema, and cardiovascular diseases through computer-aided diagnosis systems. Various abnormalities are observed through retinal image modalities like fundus, fluorescein angiography, and optical coherence tomography by ophthalmologists, and computer science professionals. Retinal image analysis has gained a lot of importance in recent years due to advances in computational, storage, and image acquisition technologies. Better computational capabilities lead to a rise in the implementation of deep learning-based methods for ocular disease detection. Although deep learning promises better performance in this field, some issues like lack of well-labeled datasets, unavailability of large enough datasets, class imbalance, and model generalizability are yet to be addressed. Also, the real-time implementation of detection methods on new devices or existing hardware is an untouched area. This article highlights the development of retinal image analysis and related issues due to the introduction of AI-based methods. The methods are analyzed in terms of standard performance metrics on various publicly and privately available datasets.
... A parametric intensity model, whose parameters are estimated from real images, is used to generate the optic disc. Complementary to [39], the contribution in [40] focuses on the generation of the vascular network, based on a parametric model whose parameters are learned from real vessel trees. Although these methods provide reasonable results, they are complex and depend heavily on domain knowledge. ...
Article
Full-text available
In this paper, we use Generative Adversarial Networks (GANs) to synthesize high-quality retinal images along with the corresponding semantic label-maps, instead of real images, during the training of a segmentation network. Unlike previous proposals, we employ a two-step approach: first, a progressively growing GAN is trained to generate the semantic label-maps, which describe the blood vessel structure (i.e., the vasculature); second, an image-to-image translation approach is used to obtain realistic retinal images from the generated vasculature. The adoption of a two-stage process simplifies the generation task, so that the network training requires fewer images with consequently lower memory usage. Moreover, learning is effective, and with only a handful of training samples, our approach generates realistic high-resolution images, which can be successfully used to enlarge small available datasets. Comparable results were obtained by employing only synthetic images in place of real data during training. The practical viability of the proposed approach was demonstrated on two well-established benchmark sets for retinal vessel segmentation—both containing a very small number of training samples—obtaining better performance with respect to state-of-the-art techniques.
Chapter
Deep learning methods are developing very rapidly and are widely used in computer vision applications as well as in medical image analysis. Deep learning methods provide a significant improvement on medical image analysis tasks by learning a hierarchical representation at different levels directly from data instead of relying on handcrafted features. However, their superior performance relies heavily on the number of available training samples. Lack of data causes either a drop in performance or overfitting. Unfortunately, it is not always easy to obtain big data for many applications, especially for medical images. In this chapter, we discuss data augmentation methods, including both traditional transformations and emerging generative adversarial networks. Among traditional augmentation methods, geometric and photometric transformations are introduced, e.g., image color space transformation, image rotation, and random cropping. Work based on synthesis to augment data is then presented, followed by the challenges and future directions of data augmentation.
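The traditional geometric and photometric transformations mentioned in this chapter can be sketched in a few lines of NumPy. This is a minimal illustration with our own function names and parameter choices, not code from the chapter:

```python
import numpy as np

def augment(image, rng):
    """One random traditional augmentation pass: geometric then photometric."""
    out = image
    # Geometric: random horizontal flip and a random multiple of 90-degree rotation.
    if rng.random() < 0.5:
        out = np.fliplr(out)
    out = np.rot90(out, k=int(rng.integers(0, 4)))
    # Geometric: random crop to three quarters of each side.
    h, w = out.shape
    ch, cw = (3 * h) // 4, (3 * w) // 4
    top = int(rng.integers(0, h - ch + 1))
    left = int(rng.integers(0, w - cw + 1))
    out = out[top:top + ch, left:left + cw]
    # Photometric: brightness/contrast jitter, clipped back to the [0, 1] range.
    gain = rng.uniform(0.8, 1.2)
    bias = rng.uniform(-0.1, 0.1)
    return np.clip(out * gain + bias, 0.0, 1.0)

rng = np.random.default_rng(0)
img = rng.random((64, 64))          # stand-in for a grey-level fundus image
aug = augment(img, rng)
print(aug.shape)                    # (48, 48)
```

In practice such a function would be applied on the fly during training so that every epoch sees a different random variant of each image.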
Article
With the recent success of computer vision and deep learning, remarkable progress has been achieved on automatic personal recognition using vein biometrics. However, collecting large-scale real-world training data for palm vein recognition has turned out to be challenging, mainly due to the noise and irregular variations included at the time of acquisition. Meanwhile, existing palm vein recognition datasets are usually collected under near-infrared light and lack detailed annotations on attributes (e.g., pose), so the influence of different attributes on vein recognition has been poorly investigated. Therefore, this paper examines the suitability of synthetic vein images generated to compensate for the urgent lack of publicly available large-scale datasets. Firstly, we present an overview of recent research progress of palm vein recognition, from the basic background knowledge to vein anatomical structure, data acquisition, public databases, and quality assessment procedures. Then, we focus on the state-of-the-art methods that have allowed the generation of vascular structures for biometric purposes and the modeling of biological networks with their respective application domains. In addition, we review the existing research on style transfer and biologically based synthetic palm vein image generation algorithms. Afterward, we formalize a general flowchart for the creation of a synthetic database, comparing real palm vein images and generated synthetic samples to gain insight into the development of realistic vein imaging systems. Ultimately, we conclude by discussing the challenges, insights, and future perspectives in generating synthetic palm vein images for future work.
Article
Due to data scarcity and class imbalance in medical images, the training dataset seriously affects the classification accuracy of the model. We propose a retinal image generation model based on GAN (RetiGAN). A dual-scale discriminator trains the network at two scales to improve the quality of the generated images. A VGG network is embedded into RetiGAN to extract high-level semantic information from the original and generated images, so that, guided by a content loss, RetiGAN better retains the semantic information of the original images. Besides, in order to enhance the details of the generated images, RetiGAN is guided to generate retinal images with clearer edges by feeding smoothed images to the discriminator and forcing it to distinguish the smoothed images from the original ones. Qualitative and quantitative analysis verifies that the generated retinal images are similar to the original ones in structure rather than being simple copies. In addition, ablation experiments show that the model can improve the resolution of generated images, with better visibility and clearer edges. In summary, RetiGAN is superior to other retinal image generation models in preserving structural similarity and producing high resolution.
Article
Full-text available
Accurate vessel detection in retinal images is an important and difficult task. Detection is made more challenging in pathological images with the presence of exudates and other abnormalities. In this paper we present a new unsupervised vessel segmentation approach to address this problem. A novel inpainting filter, called Neighbourhood Estimator Before Filling (NEBF), is proposed to inpaint exudates in a way that nearby false positives are significantly reduced during vessel enhancement. Retinal vascular enhancement is achieved with a multiple-scale Hessian approach. Experimental results show that the proposed vessel segmentation method outperforms state-of-the-art algorithms reported in the recent literature, both visually and in terms of quantitative measurements, with overall mean accuracy of 95.62% on the STARE dataset, and 95.81% on the HRF dataset.
Conference Paper
Full-text available
This study aims to generate synthetic and realistic retinal fundus colour images, similar in characteristics to a given dataset, as well as the values of all morphological parameters. A representative task could be, for example, the synthesis of a retinal image with the corresponding vessel tree and optic nerve head binary map, measure-ment of vessel width in any position, fovea localisation and so on. The presented paper describes the techniques developed for the generation of both vascular and non-vascular regions (i.e. retinal background, fovea and op-tic disc). To synthesise convincing retinal backgrounds and foveae, a patch-based algorithm has been developed; model-based texture synthesis techniques have also been implemented for the generation of realistic optic discs and vessel networks. The validity of our synthetic retinal images has been demonstrated by visual inspection and quantitative experiments.
Conference Paper
Full-text available
This study aims to generate synthetic and realistic retinal fundus colour images, similar in characteristics to a given dataset, as well as the values of all morphological parameters. A representative task could be, for example, the synthesis of a retinal image with the corresponding vessel tree and optic nerve head binary map, measurement of vessel width in any position, fovea localisation and so on. The presented paper mainly focuses on the generation of non-vascular regions (i.e. retinal background, fovea and optic disc) and it is complemented by a parallel study on the generation of structure and texture of the vessel network. To synthesise convincing retinal backgrounds and foveae, a patch-based algorithm has been developed; model-based texture synthesis techniques have also been implemented for the generation of realistic optic discs. The validity of our synthetic retinal images has been demonstrated by visual inspection and quantitative experiments.
Article
Full-text available
Automatic assessment of retinal vessels plays an important role in the diagnosis of various eye, as well as systemic diseases. A public screening is highly desirable for prompt and effective treatment, since such diseases need to be diagnosed at an early stage. Automated and accurate segmentation of the retinal blood vessel tree is one of the challenging tasks in the computer-aided analysis of fundus images today. We improve the concept of matched filtering, and propose a novel and accurate method for segmenting retinal vessels. Our goal is to be able to segment blood vessels with varying vessel diameters in high-resolution colour fundus images. All recent authors compare their vessel segmentation results to each other using only low-resolution retinal image databases. Consequently, we provide a new publicly available high-resolution fundus image database of healthy and pathological retinas. Our performance evaluation shows that the proposed blood vessel segmentation approach is at least comparable with recent state-of-the-art methods. It outperforms most of them with an accuracy of 95% evaluated on the new database.
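The matched-filtering concept that this work builds on can be illustrated with a toy NumPy sketch: a zero-mean, inverted-Gaussian kernel matching a dark vessel cross-section is convolved with the image at a few orientations, and the maximum response is kept. This is a simplified two-orientation illustration under our own parameter choices, not the authors' improved method:

```python
import numpy as np

def matched_filter_kernel(sigma=2.0, length=15):
    """Zero-mean, inverted-Gaussian kernel matching a dark vessel cross-section."""
    x = np.arange(length) - length // 2
    k = -np.exp(-x**2 / (2 * sigma**2))
    return k - k.mean()

def vessel_response(image, sigma=2.0):
    """Maximum matched-filter response over two orientations (a full method
    would use a finer set of rotated kernels)."""
    k = matched_filter_kernel(sigma)
    conv = lambda v: np.convolve(v, k, mode="same")
    resp_rows = np.apply_along_axis(conv, 1, image)   # picks up vertical vessels
    resp_cols = np.apply_along_axis(conv, 0, image)   # picks up horizontal vessels
    return np.maximum(resp_rows, resp_cols)

# Toy image: bright background with one dark vertical "vessel" at columns 15-17.
img = np.ones((32, 32))
img[:, 15:18] = 0.2
resp = vessel_response(img)
print(int(resp.mean(axis=0).argmax()))   # 16, the vessel centre column
```

Because the kernel has zero mean, flat background regions give zero response while vessel cross-sections give a strong positive one, which is what makes thresholding the response map a viable segmentation step.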
Article
Full-text available
The automatic segmentation of brain tissues in magnetic resonance (MR) is usually performed on T1-weighted images, due to their high spatial resolution. The T1w sequence, however, has some major downsides when brain lesions are present: the altered appearance of diseased tissues causes errors in tissue classification. In order to overcome these drawbacks, we employed two different MR sequences: fluid attenuated inversion recovery (FLAIR) and double inversion recovery (DIR). The former highlights both gray matter (GM) and white matter (WM); the latter highlights GM alone. We propose here a supervised classification scheme that does not require any anatomical a priori information to identify the 3 classes, "GM", "WM", and "background". Features are extracted by means of a local multi-scale texture analysis, computed for each pixel of the DIR and FLAIR sequences. The 9 textures considered are average, standard deviation, kurtosis, entropy, contrast, correlation, energy, homogeneity, and skewness, evaluated on neighborhoods of 3×3, 5×5, and 7×7 pixels. Hence, the total number of features associated with a pixel is 56 (9 textures × 3 scales × 2 sequences + 2 original pixel values). The classifier employed is a Support Vector Machine with a Radial Basis Function kernel. From each of the 4 brain volumes evaluated, a DIR and a FLAIR slice have been selected and manually segmented by 2 expert neurologists, providing 1st and 2nd human reference observations which agree with an average accuracy of 99.03%. SVM performance has been assessed with a 4-fold cross-validation, yielding an average classification accuracy of 98.79%.
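The per-pixel multi-scale texture extraction described above can be sketched with NumPy's `sliding_window_view`. For brevity this illustration computes only 3 of the 9 statistics (mean, standard deviation, skewness) on a single sequence, plus the original pixel value; the function name and sizes are ours:

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def local_texture_features(image, sizes=(3, 5, 7)):
    """Per-pixel local statistics (mean, std, skewness) at several scales,
    a small subset of the 9-textures-by-3-scales feature set described above."""
    h, w = image.shape
    feats = []
    for s in sizes:
        pad = s // 2
        padded = np.pad(image, pad, mode="reflect")
        win = sliding_window_view(padded, (s, s)).reshape(h, w, -1)
        mu = win.mean(axis=-1)
        sd = win.std(axis=-1)
        skew = ((win - mu[..., None]) ** 3).mean(axis=-1) / np.maximum(sd, 1e-12) ** 3
        feats += [mu, sd, skew]
    # Append the original pixel value, as in the paper's 56-feature vector.
    feats.append(image)
    return np.stack(feats, axis=-1)   # shape (h, w, 3 * len(sizes) + 1)

img = np.random.default_rng(1).random((16, 16))
F = local_texture_features(img)
print(F.shape)   # (16, 16, 10)
```

The resulting per-pixel feature vectors would then be fed to a classifier such as an RBF-kernel SVM, as in the paper.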
Article
In this survey we review the image processing literature on the various approaches and models investigators have used for texture. These include statistical approaches of autocorrelation functions, optical transforms, digital transforms, textural edgeness, structural element, gray tone co-occurrence, run lengths, and autoregressive models. We discuss and generalize some structural approaches to texture based on more complex primitives than gray tone. We conclude with some structural-statistical generalizations which apply the statistical techniques to the structural primitives.
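Of the statistical approaches surveyed, the gray tone co-occurrence matrix is the easiest to illustrate. The sketch below (our own minimal implementation, single displacement, no symmetry option) builds the normalized matrix for a small quantized image and derives a contrast statistic from it:

```python
import numpy as np

def glcm(image, dx=1, dy=0, levels=4):
    """Normalized gray tone co-occurrence matrix for one displacement (dx, dy)."""
    P = np.zeros((levels, levels))
    h, w = image.shape
    for y in range(h - dy):
        for x in range(w - dx):
            P[image[y, x], image[y + dy, x + dx]] += 1
    return P / P.sum()

# A 4x4 image quantized to 4 gray levels.
img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [0, 2, 2, 2],
                [2, 2, 3, 3]])
P = glcm(img)
# Contrast: expected squared gray-level difference of co-occurring pairs.
contrast = sum((i - j) ** 2 * P[i, j] for i in range(4) for j in range(4))
print(round(contrast, 3))   # 0.583
```

Statistics such as contrast, energy, and homogeneity computed from `P` at several displacements are exactly the kind of co-occurrence features the survey discusses.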
Chapter
Much research is being directed towards investigating links between quantitative characteristics of the retinal vasculature and a variety of outcomes to identify biomarkers. The interest for retinal biomarkers lies in the fact that the retina is easily observed via fundus photography. Outcomes considered for biomarkers research in the literature include conditions like diabetes and lacunar stroke, but also cognitive performance and genetic expression [35, 17, 24, 36, 50]. The need for measuring large volumes of images, needed to power biomarker discovery studies, makes semi-automatic software systems desirable. This chapter reports recent algorithms developed by the VAMPIRE group for vasculature detection and quantification, including recent developments on landmark detection. We focus on accuracy and validation issues, and, importantly, the conditions for comparing meaningfully results from different algorithms. This work is part of VAMPIRE (Vasculature Assessment and Measurement Platform for Images of the REtina), an international collaboration growing a software suite for automatic morphometric measurements of the retinal vasculature.
Article
Model-based vision is firmly established as a robust approach to recognizing and locating known rigid objects in the presence of noise, clutter, and occlusion. It is more problematic to apply model-based methods to images of objects whose appearance can vary, though a number of approaches based on the use of flexible templates have been proposed. The problem with existing methods is that they sacrifice model specificity in order to accommodate variability, thereby compromising robustness during image interpretation. We argue that a model should only be able to deform in ways characteristic of the class of objects it represents. We describe a method for building models by learning patterns of variability from a training set of correctly annotated images. These models can be used for image search in an iterative refinement algorithm analogous to that employed by Active Contour Models (Snakes). The key difference is that our Active Shape Models can only deform to fit the data in ways consistent with the training set. We show several practical examples where we have built such models and used them to locate partially occluded objects in noisy, cluttered images.
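The core of an Active Shape Model, a point distribution model learned by PCA over aligned landmark coordinates, can be sketched as follows. Function names and the random training set are ours, and shape alignment is assumed to have been done beforehand:

```python
import numpy as np

def train_shape_model(shapes):
    """Point distribution model: mean shape plus PCA modes of variation.
    shapes: (n_samples, 2 * n_landmarks), flattened as (x1, y1, x2, y2, ...)."""
    mean = shapes.mean(axis=0)
    X = shapes - mean
    cov = X.T @ X / (len(shapes) - 1)
    eigval, eigvec = np.linalg.eigh(cov)        # ascending eigenvalues
    order = np.argsort(eigval)[::-1]            # sort modes by variance, descending
    return mean, eigvec[:, order], np.maximum(eigval[order], 0.0)

def synthesize(mean, modes, b):
    """New shape from the first len(b) mode weights; shapes stay plausible
    when each |b_i| is kept within about 3 * sqrt(lambda_i)."""
    return mean + modes[:, :len(b)] @ b

rng = np.random.default_rng(2)
shapes = rng.random((20, 8))                    # 20 aligned shapes, 4 landmarks each
mean, modes, var = train_shape_model(shapes)
b = 3.0 * np.sqrt(var[:2]) * rng.uniform(-1.0, 1.0, 2)
new_shape = synthesize(mean, modes, b)
print(new_shape.shape)                          # (8,)
```

Constraining the mode weights in this way is what lets the model "only deform in ways consistent with the training set", as the abstract puts it.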
Article
We present an algorithm estimating the width of retinal vessels in fundus camera images. The algorithm uses a novel parametric surface model of the cross-sectional intensities of vessels, and ensembles of bagged decision trees to estimate the local width from the parameters of the best-fit surface. We report comparative tests with REVIEW, currently the public database of reference for retinal width estimation, containing 16 images with 193 annotated vessel segments and 5066 profile points annotated manually by three independent experts. Comparative tests are reported also with our own set of 378 vessel widths selected sparsely in 38 images from the Tayside Scotland diabetic retinopathy screening programme and annotated manually by two clinicians. We obtain considerably better accuracies compared to leading methods in REVIEW tests and in Tayside tests. An important advantage of our method is its stability (success rate, i.e., meaningful measurement returned, of 100% on all REVIEW data sets and on the Tayside data set) compared to a variety of methods from the literature. We also find that results depend crucially on testing data and conditions, and discuss criteria for selecting a training set yielding optimal accuracy.
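Although the paper's method fits a parametric surface model and uses ensembles of bagged decision trees, the underlying idea of estimating width from a vessel cross-section can be illustrated with a much simpler moment-based sketch (our own simplification, not the authors' algorithm):

```python
import numpy as np

def cross_section_width(profile, x):
    """Moment-based width of a dark vessel in a 1-D intensity cross-section:
    treat (max - intensity) as a weight, compute its standard deviation, and
    report the full width at half maximum, 2 * sqrt(2 * ln 2) * sigma."""
    weight = profile.max() - profile            # dark vessel -> positive weight
    weight = weight / weight.sum()
    mu = (x * weight).sum()
    sigma = np.sqrt(((x - mu) ** 2 * weight).sum())
    return 2.0 * np.sqrt(2.0 * np.log(2.0)) * sigma

x = np.linspace(-10, 10, 201)
profile = 1.0 - 0.8 * np.exp(-x**2 / (2 * 2.0**2))   # Gaussian vessel, sigma = 2
print(round(cross_section_width(profile, x), 2))     # ~4.71, the FWHM of a sigma-2 Gaussian
```

A moment estimator like this degrades quickly on noisy or asymmetric profiles, which is precisely why model-fitting approaches such as the one in this paper are preferred in practice.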
Article
A review is presented of the image processing literature on the various approaches and models investigators have used for textures. These include statistical approaches of autocorrelation function, optical transforms, digital transforms, textural edgeness, structural element, gray tone co-occurrence, run lengths, and auto-regressive models. A discussion and generalization is presented of some structural approaches to texture based on more complex primitives than gray tone. Some structural-statistical generalizations which apply the statistical techniques to the structural primitives are given.