Predicting Myocardial Infarction through Retinal Scans and Minimal
Personal Information
Andres Diaz-Pintoa,b, Nishant Ravikumara,b, Rahman Attara,b, Avan Suinesiaputraa,b, Yitian Zhaog,
Eylem Leveltb,d, Erica Dall’Armellinab,d, Marco Lorenzih, Qingyu Cheni, Tiarnan D. L. Keenanj,
Elvira Agrónj, Emily Y. Chewj, Zhiyong Lui, Chris P. Galeb,c,d, Richard P. Galee,f, Sven Pleinb,d,
Alejandro F. Frangia,b,k,l,m,
aCentre for Computational Imaging and Simulation Technologies in Biomedicine, School of Computing,
University of Leeds, Leeds, UK.
bLeeds Institute for Cardiovascular and Metabolic Medicine, School of Medicine, University of Leeds, Leeds, UK.
cLeeds Institute for Data Analytics, University of Leeds, Leeds UK.
dDepartment of Cardiology, Leeds Teaching Hospitals NHS Trust, Leeds, UK.
eDepartment of Ophthalmology, York Teaching Hospital NHS Foundation Trust, York, UK
fDepartment of Health Sciences, University of York, York, UK
gCixi Institute of Biomedical Engineering, Ningbo Institute of Materials, Technology and Engineering, Chinese Academy of
Sciences, Ningbo, China
hUniversité Côte d'Azur, Inria Sophia Antipolis, Epione Project-Team, France
iNational Center for Biotechnology Information, National Library of Medicine, NIH, Bethesda, MD, USA
jDivision of Epidemiology and Clinical Applications, National Eye Institute, NIH, Bethesda, MD, USA
kDepartment of Cardiovascular Sciences, KU Leuven, Leuven, Belgium
lDepartment of Electrical Engineering, KU Leuven, Leuven, Belgium
mAlan Turing Institute, London, UK
In ophthalmologic practice, retinal images are routinely obtained to diagnose and monitor primary eye
diseases and systemic conditions affecting the eye, such as diabetic retinopathy. Recent studies have shown
that biomarkers on retinal images, e.g. retinal blood vessel density or tortuosity, are associated with cardiac
function and may identify patients at risk of coronary artery disease. In this work, we investigate the use
of retinal images alongside relevant patient metadata, to estimate left ventricular mass (LVM) and left
ventricular end-diastolic volume (LVEDV), and subsequently, predict incident myocardial infarction. We
trained a multi-channel variational autoencoder (mcVAE) and a deep regressor model to estimate LVM (4.4
(−32.30, 41.1) g) and LVEDV (3.02 (−53.45, 59.49) ml) and predict risk of myocardial infarction (AUC = 0.80 ± 0.02, sensitivity = 0.74 ± 0.02, specificity = 0.71 ± 0.03) using just the retinal images and demographic data.
Our results indicate that one could identify patients at high risk of future myocardial infarction from retinal
imaging available in every optician and eye clinic.
Keywords: UK Biobank, AREDS, Retinal Images, Cardiac MRI, Multi-Channel VAE
Cardiovascular diseases (CVD) represent a major
cause of death and socio-economic burden globally.
In 2015 alone, there were 18 million CVD-related
deaths worldwide [1]. Identification and timely
treatment of CVD risk factors is a key strategy
for reducing CVD prevalence in populations and
for risk modulation in individuals. Conventionally,
CVD risk is estimated using demographic/clinical
parameters such as age, sex, ethnicity, smoking sta-
tus, family history, history of hyperlipidaemia, di-
abetes mellitus or hypertension [2]. Imaging tests
such as coronary computed tomography, echocar-
diography and cardiovascular magnetic resonance
(CMR) help further stratify patient risk, by assess-
ing coronary calcium burden, myocardial scar bur-
den, ischaemia, cardiac chamber size and function.
Cardiovascular imaging is usually performed in
secondary care and is relatively expensive, limiting
its availability in under-developed and developing
countries. An alternative approach to risk stratifi-
cation is to use the information available from non-
cardiac investigations. Retinal microvascular ab-
normalities, such as generalised arteriolar narrow-
ing, focal arteriolar narrowing, and arterio-venous
nicking have shown strong associations with sys-
temic, and cardiovascular disease, such as diabetes
mellitus, hypertension and coronary artery disease
[3, 4]. Retinal images (including details of principal
blood vessels) are now routinely acquired in opto-
metric and ophthalmologic practice and are rela-
tively inexpensive. Retinal images could, therefore,
be a potential cost-effective screening tool for car-
diovascular disease. Beyond risk prediction, retinal
images have also been associated with cardiovascu-
lar phenotypes such as left ventricular dimensions
and mass [3, 4]. Poplin et al. showed for the first
time that retinal images allowed prediction of car-
diovascular risk factors such as age, gender, smok-
ing status, systolic blood pressure and major ad-
verse cardiac events [5], driven by anatomical fea-
tures such as the optic disc or retinal blood vessels.
This highlighted the potential for using retinal im-
ages to assess risk of cardiovascular diseases.
We explore new ways to extend this line of
research, by learning a combined representation
of retinal images and cardiac magnetic resonance
(CMR) images, to assess cardiac function and pre-
dict myocardial infarction events. This is supported
by the work of Cheung et al. [6], who highlighted
the adverse effects of blood pressure and cardiac
dysfunction on retinal microvasculature. Similarly,
Tapp et al. [7] established associations between
retinal vessel morphology and cardiovascular dis-
ease risk factors and/or CVD outcomes using a
multilevel linear regressor. This study assessed the
relationships between retinal vessel morphometry,
blood pressure, and arterial stiffness index, further-
ing our understanding of preclinical disease pro-
cesses and the interplay between microvascular and
macrovascular diseases. Using retinal fundus im-
ages, Gargeya et al. [8] and Qummar et al. [9]
used deep learning to detect diabetes, and clas-
sify different grades of diabetic retinopathy, respec-
tively. These studies demonstrate the efficacy of
deep learning techniques to quantify and stratify
cardiovascular disease risk factors, given retinal im-
ages. Other studies such as Pickhardt et al. [10]
utilised whole-body CT scans and deep learning to
predict future adverse cardiovascular events, fur-
ther supporting the hypothesis that alternate im-
age modalities, covering multiple organs, help as-
sess cardiovascular health and predict CVD risk.
As markers of cardiovascular diseases are often
manifested in the retina, images of this organ could
identify future cardiovascular events such as left
ventricular hypertrophy or myocardial infarction.
This work proposes a novel method that estimates
cardiac indices and predicts incident myocardial in-
farction based on retinal images and demographic
data from the UKB. For myocardial infarction, we
only considered incidents that occurred after the
retinal image was taken. Our approach uses a
multi-channel variational autoencoder trained on
two channels of information: retinal and CMR im-
ages from the same subject. This method combines
features extracted from both imaging modalities in
a common latent space, allowing us to use that la-
tent space to subsequently estimate relevant quan-
tities from just one channel of information (i.e. reti-
nal images) and demographic data. Applied to clin-
ical practice, estimation of cardiac indices from reti-
nal images could guide patients at risk of CVDs to
cardiologists following a routine ophthalmic check
or directly predicting myocardial infarction based
on the retinal and minimal demographic data.
Experiments and Results
In this study, we jointly trained a multi-channel
VAE (mcVAE) and a deep regressor network on
CMR, retinal images and demographic data from
participants in the UKB cohort. As the first exper-
iment, we used manual and automatic delineations
of the CMR images as ground truth to estimate
LVM and LVEDV from retinal images. These man-
ually delineated images were analysed by a team
of eight experts using the commercially available
cvi42 post-processing software (Circle Cardiovas-
cular Imaging Inc., Calgary, Canada) [30]. On the
other hand, the automatic delineations were ob-
tained from the method proposed by Attar et al.
[31]. The main motivation for this set of exper-
iments is to perform a fair comparison between
our system and state-of-the-art methods. It is worth mentioning that all methods published in the literature that use the UKB cohort are trained using the aforementioned manual delineations. Results of this experiment are presented as Bland-Altman and Pearson's correlation plots (see Figure 1). Figure 1(a) shows the correlation between the LVM (r = 0.65) and LVEDV (r = 0.45) values estimated using our approach and those manually computed from the CMR images using cvi42 software. These results support earlier clinical findings [32, 3, 4] that retinal images could potentially be used to quantify cardiac parameters.
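The agreement analysis above can be sketched as follows. This is an illustrative computation on synthetic values, not the study data; the arrays `reference` and `estimated` are hypothetical stand-ins for manually derived and model-estimated LVM.

```python
import numpy as np
from scipy.stats import pearsonr

def bland_altman(estimated, reference):
    """Return the mean difference (bias) and 95% limits of agreement."""
    diff = estimated - reference
    bias = diff.mean()
    half_width = 1.96 * diff.std(ddof=1)  # half-width of the limits of agreement
    return bias, bias - half_width, bias + half_width

# Hypothetical values standing in for manual (cvi42-style) vs. estimated LVM (g)
rng = np.random.default_rng(0)
reference = rng.normal(90.0, 15.0, 500)              # manually derived values
estimated = reference + rng.normal(4.0, 18.0, 500)   # estimates with a small bias

bias, lower, upper = bland_altman(estimated, reference)
r, _ = pearsonr(estimated, reference)
print(f"bias={bias:.2f} g, LoA=({lower:.2f}, {upper:.2f}), r={r:.2f}")
```

The bias and limits of agreement printed here have the same form as the "4.4 (−32.30, 41.1) g" style of reporting used in the abstract.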
Besides the Bland-Altman and correlation plots,
we also compared our proposed method against the
state-of-the-art methods for cardiac quantification
using CMR images (Bai et al. [33]), including the
Siemens Inline VF system (see Extended Data Table 4). The Siemens Inline VF system was the first commercially available fully automatic left ventricular analysis tool [34]; its D13 and E11C versions are currently used as a baseline for comparison against manual delineation [30].
Bland-Altman plots and Pearson’s correlation
were computed for the participants with automatic
annotations for LVM and LVEDV (see Figure 1(b)).
Figure 1(b) shows a significant correlation be-
tween the LVM and LVEDV estimated by the pro-
posed method and the parameters computed with Attar's algorithm. We also found that training our method with more images reduces the estimation error (see Extended Data Table 4).
As shown, our approach can estimate LVM and LVEDV from retinal images and demographic data. In addition, it can also be used
to improve the prediction of future MI events. To
demonstrate this, we compare MI prediction in two
settings: 1) using only demographic data, and 2)
using LVM/LVEDV (predicted using our approach)
plus demographic data. To do so, we performed 10-fold cross-validation with a logistic regression model on subjects not previously used for training (see Figure 2).
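The two-setting comparison can be sketched with scikit-learn. The data below are synthetic and the feature names are hypothetical stand-ins for the study's variables; only the structure of the comparison mirrors the text.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n = 1000
# Hypothetical demographic features (e.g. age, BMI, systolic BP)
demo = rng.normal(size=(n, 3))
# Hypothetical LVM/LVEDV estimates carrying extra signal about the outcome
lvm_lvedv = rng.normal(size=(n, 2))
logit = 0.8 * demo[:, 0] + 1.5 * lvm_lvedv[:, 0] + 1.2 * lvm_lvedv[:, 1]
y = (logit + rng.normal(size=n) > 0).astype(int)

# Setting 1: demographics only; Setting 2: demographics plus LVM/LVEDV
auc_demo = cross_val_score(LogisticRegression(), demo, y,
                           cv=10, scoring="roc_auc").mean()
auc_full = cross_val_score(LogisticRegression(), np.hstack([demo, lvm_lvedv]), y,
                           cv=10, scoring="roc_auc").mean()
print(f"AUC demographics only: {auc_demo:.2f}, with LVM/LVEDV: {auc_full:.2f}")
```

With features that genuinely carry outcome signal, the augmented model's cross-validated AUC exceeds the demographics-only baseline, which is the pattern reported in Figure 2.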
Figure 2 (right) shows a significant increase
in the area under the ROC curve when using
LVM/LVEDV plus demographics to predict MI.
Besides predicting myocardial infarction, we also
compared the estimated LVM/LVEDV values be-
tween the MI cases and no-MI cases using a t-test.
Here, the null hypothesis is that the LVM/LVEDV
values come from the same distribution, while the
alternative hypothesis is that these values come
from different distributions. We consider that the
obtained results are different if the p-value is less
than 0.05. According to the experimental results, p-values of 1.43e−57 and 2.32e−52 were obtained for LVM and LVEDV, respectively, meaning we rejected the null hypothesis: the LVM/LVEDV values for MI and no-MI cases come from different distributions.
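The hypothesis test above is a standard two-sample t-test. The sketch below uses synthetic values (the group means and sizes are hypothetical, not the study distributions) and Welch's variant, which does not assume equal variances:

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(2)
# Hypothetical estimated LVM values (g) for no-MI and MI groups
lvm_no_mi = rng.normal(88.0, 15.0, 2000)
lvm_mi = rng.normal(96.0, 16.0, 400)   # shifted mean for the MI group

t_stat, p_value = ttest_ind(lvm_mi, lvm_no_mi, equal_var=False)  # Welch's t-test
reject_null = p_value < 0.05           # same 0.05 threshold as in the text
print(f"t={t_stat:.2f}, p={p_value:.2e}, reject null: {reject_null}")
```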
Additional experiments evaluating the Fréchet Inception Distance (FID) [35] score for reconstructed CMR images (Supplementary Figure 2) and the impact of retinal image size (Supplementary Figure 4), training set size (Supplementary Figure 5), and different demographic variables (Supplementary Figure 7) on the proposed algorithm are presented in Supplementary Material Section 5.
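The FID score mentioned above compares two sets of feature vectors via their Gaussian statistics, FID = ||μ₁ − μ₂||² + Tr(Σ₁ + Σ₂ − 2(Σ₁Σ₂)^½). A minimal sketch follows; the Inception feature-extraction step is omitted, and the low-dimensional features here are synthetic stand-ins:

```python
import numpy as np
from scipy.linalg import sqrtm

def fid(feat_a, feat_b):
    """Frechet distance between Gaussians fitted to two feature sets."""
    mu_a, mu_b = feat_a.mean(axis=0), feat_b.mean(axis=0)
    cov_a = np.cov(feat_a, rowvar=False)
    cov_b = np.cov(feat_b, rowvar=False)
    covmean = sqrtm(cov_a @ cov_b)
    if np.iscomplexobj(covmean):       # numerical noise can add tiny imaginary parts
        covmean = covmean.real
    return float(((mu_a - mu_b) ** 2).sum()
                 + np.trace(cov_a + cov_b - 2.0 * covmean))

rng = np.random.default_rng(3)
real = rng.normal(0.0, 1.0, (500, 8))    # stand-in features of real images
close = rng.normal(0.1, 1.0, (500, 8))   # a distribution close to 'real'
far = rng.normal(2.0, 1.5, (500, 8))     # a clearly different distribution
print(f"FID(close)={fid(real, close):.2f}  FID(far)={fid(real, far):.2f}")
```

Lower FID indicates reconstructions whose feature statistics are closer to those of the real images.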
External Validation
Finally, external validation using the optimal
model identified from the preceding experiments
was carried out. This validation was conducted on
the AREDS dataset using retinal images and the
demographic data presented in Extended Data Ta-
ble 1. As previously mentioned, this dataset is com-
posed of 3,010 participants in total. From these par-
ticipants, there are 180 participants with MI events
and 2,830 with no-MI events.
We used the mcVAE trained on all 5,663 available retinal images of size 128×128 px. The demographic data available in the AREDS dataset differed from that in the UKB, so we trained our method on the metadata available in AREDS: systolic blood pressure, diastolic blood pressure, smoking status, alcohol consumption status, body mass index, age, and gender were used for the external validation. The demographic variable "alcohol consumption" was converted to a continuous variable, in terms of g/day consumed. The remaining variables are recorded consistently between the two datasets.
As the AREDS dataset was originally collected with detailed information on AMD, we performed three analyses discarding different levels of AMD to show the impact AMD has on MI prediction. The results can be seen in Figure 3 and Table 1.
Figure 1: Estimation of LVM and LVEDV using manual and automatic annotations: (a) Bland-Altman and correlation plots for estimated LVM and LVEDV using manual annotations on CMR images. (b) Bland-Altman and correlation plots for estimated LVM and LVEDV using automatic annotations computed with the method of Attar et al. [31], for Case A (5,663 subjects) and Case B (1,248 subjects). In Case A, we used all the available subjects to train and test our method. The solid line represents the regression line, and the dotted line represents the line of identity.

Figure 2: Cross-validation results for MI prediction: (left) ROC curves obtained for MI prediction using only demographic data. Accuracy: 0.66 ± 0.03, Sensitivity: 0.70 ± 0.04, Specificity: 0.64 ± 0.03, Precision: 0.64 ± 0.03, F1 Score: 0.66 ± 0.03. (right) ROC curves obtained for MI prediction using LVM, LVEDV (derived from the proposed pipeline) and demographic data. Accuracy: 0.74 ± 0.03, Sensitivity: 0.74 ± 0.02, Specificity: 0.71 ± 0.03, Precision: 0.73 ± 0.05, F1 Score: 0.74 ± 0.03.

Figure 3: ROC curves obtained from the external validation on the AREDS dataset: (a) considering all AMD cases, (b) after discarding AMD cases with labels 2 and 3, and (c) after excluding all AMD cases (labels 1, 2 and 3).

Table 1: Results of the external validation on the AREDS dataset: Accuracy, Sensitivity, Specificity, Precision and F1 Score were computed to show the impact of AMD on MI prediction.

                               Accuracy  Sensitivity  Specificity  Precision  F1 Score
Considering all AMD cases        0.59       0.70         0.49        0.49       0.57
Discarding AMD labels 2 & 3      0.62       0.70         0.54        0.54       0.61
Excluding all AMD cases          0.68       0.70         0.67        0.67       0.68

The present study demonstrates that retinal images and demographic data could be of great value
to estimate cardiac indices such as LV mass (LVM) and LV end-diastolic volume (LVEDV) by jointly learning a latent space of retinal and CMR images. To the best of our knowledge, no previous work uses a multi-modal approach with retinal and CMR images to learn a joint latent space and subsequently estimate cardiac indices using just retinal and demographic data. Our results are in line with previous research demonstrating strong associations between biomarkers in the retina and the heart [3, 4, 32], and with a recent study in which cardiovascular risk factors such as age, gender and blood pressure were quantified using only retinal images [5].
Using the proposed method to estimate LVM and
LVEDV, we can assess patients at risk of future MI
or risk of similar adverse cardiovascular events at
routine ophthalmic visits. This would enable pa-
tient referral for further examination. Besides this,
estimated LVM/LVEDV could also be used to pro-
vide insights into pathological cardiac remodelling
or hypertension at no extra cost. That is, if an ophthalmologist keeps a record of these indices for a patient over time, they can refer the patient to a cardiologist for further assessment if a significant increase in LVM or LVEDV is detected.
The ophthalmologist could be bypassed with auto-
mated risk detection if patients consented to share
their data on the cloud.
Figure 1 shows that our trained model is less accurate at estimating higher LVM and LVEDV values. Two main factors are involved here: (1) the proportion of subjects with elevated LVM/LVEDV available for training (with retinal images) is limited, and (2) retinal images do not contain "all" the information needed to assess cardiac function.
We chose to predict LVM and LVEDV as an in-
termediate step rather than directly predicting fu-
ture MI events because: (1) this ensures that the
developed approach is flexible in its clinical appli-
cation, as it could be used not just to predict MI,
but to assess LV function in general; (2) using LVM
and LVEDV enhances the explainability of predic-
tions, as evidenced by the analysis of the logistic
regression coefficients presented in the supplemen-
tary material.
In the external validation analyses, we presented
detailed data on the relative performance of the al-
gorithm to predict incident MI, according to the
presence and severity of AMD in the retinal im-
ages. The performance was highest in the absence
of AMD and appeared to decrease with the in-
clusion of individuals with AMD of gradually in-
creasing severity. In its most severe form, neovas-
cular AMD, the disease can cause extensive fibro-
sis, haemorrhage, and exudation across much of the
macula; this is likely to obliterate the relevant sig-
nals employed by the algorithm for predicting inci-
dent MI. Even in less severe forms, such as early and
intermediate AMD, substantial alterations to mac-
ular anatomy are observed, including drusen and
pigmentary abnormalities [36], which may partially
degrade or interfere with the relevant signals. We
might assume that the most important signals from
the retinal images, for MI prediction, are encoded
in the retinal vessels [5]. In this case, even early
and intermediate AMD are accompanied by sub-
stantial changes in the retinal vasculature’s quanti-
tative and morphological features [37]. Overall, the
presence of retinal disease such as AMD, particularly in its more severe forms, presumably interferes with the ability of the algorithm to infer characteristics of the systemic circulation from the retinal images.
The AUC scores obtained with our approach for the UKBB and AREDS populations have to be considered in the context of a second-referral setting at an optician/eye clinic, not a primary cardiology clinic. The sensitivity, specificity and precision/positive predictive value (PPV) of our approach at predicting future MI events from retinal images were: (a) in the UKBB population, 0.74, 0.72 and 0.68, respectively, when just age and gender were considered as additional demographic variables (representative of the information available in an optician/eye clinic), as highlighted in Supplementary Figure 7; and (b) in the AREDS population, after excluding all AMD cases, 0.70, 0.67 and 0.67, respectively. Established cardiovascular
disease risk assessment models (e.g. Framingham
Risk Score (FRS), Systemic Coronary Risk Eval-
uation (SCORE), Pooled Cohort Equation (PCE)
etc.) [38, 39, 40, 41] used previously to screen
populations for atherosclerotic cardiovascular dis-
ease are comparable to our approach in discrimina-
tory capacity, while requiring several additional de-
mographic variables and clinical measurements not
readily available at an optician/eye clinic. For instance, in [39] the authors compare FRS, PCE and SCORE in the Multi-Ethnic Study of Atherosclerosis, which achieved AUCs of 0.717, 0.737 and 0.721, respectively, with corresponding sensitivities of 0.7-0.8 and specificities of 0.5-0.6.
Similarly, in [38] multiple cardiovascular risk as-
sessment models were compared in their sensitivity,
specificity and PPV on the Diabetes and Cardiovas-
cular Risk Evaluation: Targets and Essential Data
for Commitment of Treatment study. This study
revealed that FRS and PCE’s sensitivity, speci-
ficity, and PPV ranged from 0.56-0.78, 0.60-0.78
and 0.12-0.24, respectively, when considering a 10%
risk threshold. While the performance of our ap-
proach in this study cannot be directly compared
to the risk assessment models evaluated in either of
the studies above, they provide context to the re-
sults obtained on both UKBB and AREDS popula-
tions, highlighting its potential for use as a second
referral tool at an eye clinic/optician. However, it
is important to note that this is a proof of concept
study with limitations in study design (detailed in
the supplementary material for brevity), predom-
inantly the limited availability of the multi-modal
data required for such analyses.
This study presents a system that estimates car-
diac indices such as LVM and LVEDV and pre-
dicts future MI events using inexpensive and easy to
obtain retinal photographs and demographic data.
We used 5,663 subjects from the UKB imaging
study with end-diastolic cardiac MR, retinal images
and demographic data to train and test our method.
We used this system to predict MI in subjects that
have retinal images and were not used during the
training process. We found that using cardiac in-
dices and demographic data together, yields signif-
icant improvements in predicting MI events com-
pared with using only demographic data. Finally,
we performed an independent replication study of
our method on the AREDS dataset. Although a
drop in performance was observed, the discrimina-
tion capacity of our approach remained compara-
ble to established CVD risk assessment models re-
ported previously. This highlights the potential for
our approach to be employed as a second referral
tool in eye clinics/opticians, to identify patients at
risk of future MI events. Future work will explore
genetic data to improve the discriminatory capac-
ity of the proposed approach and explainable arti-
ficial intelligence techniques to identify the domi-
nant retinal phenotypes that help assess CVD risk.
This will help facilitate fine-grained stratification of
CVD risk in patients, which will be a crucial step
towards delivering personalised medicine.
Image datasets and demographic data
This study used CMR images (end-diastolic short-axis view), retinal images, and demographic data from the UKB cohort (under access application #11350) to train and validate the proposed method. When this method was devel-
oped, 39,705 participants underwent CMR imag-
ing using a clinical wide bore 1.5 Tesla MRI sys-
tem (MAGNETOM Aera, Syngo Platform VD13A,
Siemens Healthcare, Erlangen, Germany) [11], and
84,760 participants underwent retinal imaging us-
ing a Topcon 3D OCT 1000 Mark 2 (45° field of view, centred to include both the optic disc and macula) [12]. Only those participants with CMR, reti-
nal images, and demographic data were selected to
train our proposed method, totalling 11,383 participants.
Of these 11,383 participants, 676 were
excluded due to a history of conditions known to af-
fect LV mass such as diabetes (336 subjects), previ-
ous myocardial infarction (293 subjects), cardiomy-
opathy (14 subjects) or frequent strenuous exercise
routines (33 subjects).
After excluding participants with the conditions above, a deep learning method for quality assessment (QA) [13] was used to select retinal images of sufficient quality, as per pre-specified criteria. This QA method was trained and validated on the public EyePACS dataset [14], a well-known dataset presented on the Kaggle platform for automatic diabetic retinopathy detection. Following QA, 5,663 participants
were identified to have good quality retinal images.
We followed the RECORD statement for reporting
observational data, and a STROBE flow diagram
showing the exclusion criteria is presented in Fig-
ure 4. Subsequent preprocessing steps for retinal
and CMR images (i.e. ROI detection [15]) are pre-
sented in Supplementary Material Section 1.
Regarding the demographic data, a combination of variables derived from the patients' history and blood samples, such as sex, age, HbA1c, systolic and diastolic blood pressure, smoking habit, alcohol consumption, glucose and body mass index, was also used as input to train and test the proposed method. Although we excluded participants
with diabetes, we retain HbA1c as multiple stud-
ies have shown positive correlation of HbA1c with
cardiovascular mortality even in subjects without a
history of diabetes [16, 17, 18]. Additionally, in [19]
the authors showed a strong association between
HbA1c and LV mass. They found that a 1% rise in
HbA1c level was associated with a 3.0 g increase in
LV mass in elderly subjects. All these variables are
summarised in Extended Data Table 3.
Besides demographic data, we also utilised
LVEDV and LVM extracted directly from the CMR
images. These cardiac indices were computed from
the manual delineations [20] generated using cvi42
post-processing software, and segmentations gener-
ated automatically using the method proposed by
Attar et al. [21]. More details about how these val-
ues were used are outlined in the Experiments and
Results sections.
Age-Related Eye Disease Study (AREDS) database
The Age-Related Eye Disease Study (AREDS)
was a multicenter prospective study of the clinical
course of age-related macular degeneration (AMD)
and age-related cataract, as well as a phase III
randomised controlled trial designed to assess the
effects of nutritional supplements on AMD and
cataract progression [22, 23]. Institutional review
board approval was obtained at each clinical site
and written informed consent for the research was
obtained from all study participants. The research
was conducted under the Declaration of Helsinki.
Additional information on AREDS and the associated demographic data is included in Supplementary Material Section 2.
Code Availability Statement
All algorithms used in this study were developed
using libraries and scripts in PyTorch. Source code
is publicly available at [42].
Data Availability
UKB images are reproduced with the kind
permission of UK Biobank ©. All UKB images
and demographic data are available, with re-
strictions, from UK Biobank. Researchers who
use the UKB dataset must first complete the
UK Biobank online Access Management Sys-
tem (AMS) application form. More information
for accessing the UKB dataset can be found
at this link: enable-your-research/apply-for-access.
The AREDS data set (NCT00000145) is available in the dbGaP repository.
Figure 4: STROBE flow diagram for excluded participants: Criteria for excluding participants in this study.
- Participants with retinal images in the UK Biobank (n=84,760).
- Myocardial infarction model (starting from n=84,760): excluded n=13,245 (participants with CMR images utilised to train the mcVAE, n=11,383; myocardial infarction occurring before the retinal image was taken, n=1,862); analysed n=71,515.
- Multi-channel VAE (starting from n=84,760): excluded n=79,097 (participants without CMR images, n=73,377; poor and very poor quality retinal images after quality control, n=5,044; diabetes cases, n=336; previous myocardial infarction, n=293; cardiomyopathy, n=14; frequent strenuous exercise, n=33); analysed n=5,663.
Our method is based on the multi-channel variational autoencoder (mcVAE) [24] and a deep regression network (ResNet50). For the mcVAE, we designed two pairs of encoders/decoders to train the network, in which each pair is trained on one of the two data channels (retinal and CMR images) with a shared latent space. The full diagram of the proposed method is presented in Figure 5. Details of the encoders and the decoders are described in Extended Data Table 3.
Antelmi et al. [24] highlighted that using a sparse version of the mcVAE ensures the evidence lower bound generally reaches its maximum value at convergence when the number of latent dimensions coincides with the true number used to generate the data. Consequently, we used the sparse version of the mcVAE and trained a sparse latent space z shared by both channels of information. A detailed explanation of how the mcVAE works, and of how it differs from a vanilla VAE [25, 26, 27, 28], is provided in Supplementary Material Section 3.
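The shared-latent training objective can be written compactly. The following is our notation for the mcVAE of Antelmi et al. [24] in its non-sparse form (the sparse variant additionally places a dropout-style prior over latent dimensions), with channels $c \in \{1, 2\}$ for retinal and CMR images:

\[
\mathcal{L} \;=\; \sum_{c=1}^{2} \mathbb{E}_{q_c(\mathbf{z} \mid \mathbf{x}_c)} \Big[ \sum_{c'=1}^{2} \ln p_{c'}(\mathbf{x}_{c'} \mid \mathbf{z}) \Big] \;-\; \sum_{c=1}^{2} \mathrm{KL}\big( q_c(\mathbf{z} \mid \mathbf{x}_c) \,\Vert\, p(\mathbf{z}) \big)
\]

Each channel's encoder $q_c$ must support the reconstruction of every channel $c'$, which is what couples the two encoders into a shared latent space and enables cross-channel reconstruction of CMR images from retinal images at test time.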
Once the mcVAE was trained, we used the
learned latent space to train the deep regressor
(ResNet50). To do that, we used CMR images re-
constructed from the retinal images plus the demo-
graphic data (Stage II in Figure 5).
Prediction of Incident Myocardial Infarction
We evaluate the ability of the proposed approach
to estimate LVM and LVEDV from the retinal im-
ages and demographic data. As an additional ex-
periment, we predict myocardial infarction (MI)
utilising logistic regression in two settings: 1) us-
ing the demographic data alone; and 2) using
LVM/LVEDV estimated from the retinal images
and the demographic data, and subsequently, com-
bined with the latter for predicting MI. Logistic re-
gression eased interpretability, allowing us to com-
pare the weights/coefficients of the variables to-
wards the final prediction (see Figure 3). To make this comparison, we extracted the cases with MI events from the participants not used to train the system: 73,477 participants out of a total of 84,760 participants with retinal images. Of these 73,477 participants, 2,954 had a record of MI. However, we only consider the cases where the MI occurred after the retinal images were taken, which results in 992 MI cases and 70,523 no-MI cases.
Since the data are imbalanced, we randomly undersampled the no-MI cases to the same number as the MI cases (992). Previous studies [29] have highlighted that resampling the majority class is a robust solution when the minority class contains hundreds of cases. Once the ma-
jority class was resampled, we performed 10-fold
cross-validation using logistic regression to predict
MI in the scenarios described previously (i.e. using only demographic data, and using demographic data plus estimated LVM/LVEDV).
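The class-balancing step described above can be sketched as follows; the label array is constructed directly from the case counts given in the text, and the variable names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)
# Labels matching the counts in the text: 992 MI cases, 70,523 no-MI cases
y = np.array([1] * 992 + [0] * 70523)
minority_idx = np.flatnonzero(y == 1)
majority_idx = np.flatnonzero(y == 0)

# Randomly undersample the majority (no-MI) class to the minority size
sampled_majority = rng.choice(majority_idx, size=minority_idx.size, replace=False)
balanced_idx = np.concatenate([minority_idx, sampled_majority])
rng.shuffle(balanced_idx)

print(len(balanced_idx), y[balanced_idx].mean())  # 1984 0.5
```

The balanced index set (992 + 992 samples, half of them MI cases) is what the 10-fold cross-validation then operates on.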
Acknowledgements

AFF is supported by the Royal Academy of Engineering Chair in Emerging Technologies Scheme (CiET1819\19), the MedIAN Network (EP/N026993/1) funded by the Engineering and Physical Sciences Research Council (EPSRC). This work was also supported by the Intramural Research Program of the National Library of Medicine and the National Eye Institute, National Institutes of Health. Additionally, this research was supported by the European Union's Horizon 2020 InSilc (777119) and EPSRC TUSCA (EP/V04799X/1) programmes. Erica Dall'Armellina acknowledges funding from the BHF grant FS/13/71/30378.

Figure 5: Overview of the proposed method: This system comprises two main components, a multi-channel VAE and a deep regressor network. During Stage I, a joint latent space is created with two channels: retinal and cardiac MR. Then, during Stage II, a deep regressor is trained on the reconstructed CMR plus demographic data to estimate LVM and LVEDV. Demographic data: Summary of the subjects' metadata used in this study to train (5,097 participants) and test (566 participants) the proposed method. All continuous values are reported as mean and standard deviation (in parentheses), while categorical data are reported as percentages (%). BMI: Body Mass Index, DBP: Diastolic Blood Pressure, SBP: Systolic Blood Pressure. These images were reproduced with the kind permission of UKB©.
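The Stage I idea of the multi-channel VAE [24], in which each channel's latent code must reconstruct every channel, can be illustrated with a minimal linear forward pass in NumPy. The toy data, weight shapes, and variable names are assumptions for illustration only; the actual mcVAE uses trained deep encoders and decoders on images:

```python
import numpy as np

rng = np.random.default_rng(0)
d_z = 8  # shared latent dimension

# Toy "retinal" and "CMR" feature vectors for a batch of subjects.
x = {"retinal": rng.normal(size=(16, 32)),
     "cmr": rng.normal(size=(16, 24))}

# Linear encoder (mean and log-variance) and decoder per channel.
enc = {c: (rng.normal(size=(x[c].shape[1], d_z)) * 0.1,
           rng.normal(size=(x[c].shape[1], d_z)) * 0.1) for c in x}
dec = {c: rng.normal(size=(d_z, x[c].shape[1])) * 0.1 for c in x}

elbo = 0.0
for c in x:  # each channel's latent must reconstruct every channel
    W_mu, W_lv = enc[c]
    mu, logvar = x[c] @ W_mu, x[c] @ W_lv
    z = mu + np.exp(0.5 * logvar) * rng.normal(size=mu.shape)
    recon = sum(np.mean((x[t] - z @ dec[t]) ** 2) for t in x)
    kl = -0.5 * np.mean(1 + logvar - mu**2 - np.exp(logvar))
    elbo += -(recon + kl)
print(elbo)
```

Training would maximise this multi-channel ELBO over the encoder/decoder parameters; at test time, the cardiac channel can be reconstructed from the retinal channel alone via the shared latent space.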
Author Contributions
A.D.-P. designed and executed all experiments,
conducted all subsequent statistical analyses, and
drafted the manuscript. N.R. helped design the
experiments, helped with the writing, data inter-
pretation and made substantial revisions and edits
of the draft manuscript. R.A. helped design the
experiments, contributed to the data analysis and
data cleaning. A.S. and Y.Z. contributed to the
data analysis. E.L., E.D.A., C.P.G, R.P.G., S.P.
contributed to the analysis of retinal and cardiac
MR images and shaped the medical contribution of
this work. M.L. contributed to the design and im-
plementation of the mcVAE. Q.C., T.D.L.K., E.A.,
E.Y.C., Z.L. contributed to the external validation
of the proposed method. A.F.F. helped design the
experiments and contributed to the writing. All
authors contributed to the manuscript.
Competing Interests
The authors declare that they have no competing
financial interests.
References
[1] G. A. Roth, C. Johnson, A. Abajobir, F. Abd-Allah,
S. F. Abera, G. Abyu, M. Ahmed, B. Aksut, and et al.,
“Global, Regional, and National Burden of Cardiovas-
cular Diseases for 10 Causes, 1990 to 2015,” Journal
of the American College of Cardiology, vol. 70, no. 1,
pp. 1 – 25, 2017.
[2] R. B. D’Agostino, R. S. Vasan, M. J. Pencina, P. A.
Wolf, M. Cobain, J. M. Massaro, and W. B. Kannel,
“General Cardiovascular Risk Profile for Use in Primary
Care,” Circulation, vol. 117, no. 6, pp. 743–753, 2008.
[3] T. Y. Wong, R. Klein, B. E. K. Klein, J. M. Tielsch,
L. Hubbard, and F. J. Nieto, “Retinal microvascular
abnormalities and their relationship with hypertension,
cardiovascular disease, and mortality.,” Survey of oph-
thalmology, vol. 46 1, pp. 59–80, 2001.
[4] B. R. McClintic, J. I. McClintic, J. D. Bisognano, and
R. C. Block, “The relationship between retinal mi-
crovascular abnormalities and coronary heart disease:
a review,” The American journal of medicine, vol. 123,
no. 4, pp. 374–e1, 2010.
[5] R. Poplin, A. V. Varadarajan, K. Blumer, Y. Liu,
M. V. McConnell, G. S. Corrado, L. Peng, and D. R.
Webster, “Predicting Cardiovascular Risk Factors from
Retinal Fundus Photographs using Deep Learning,” Nat
Biomed Eng, vol. 2, pp. 158 – 164, 2018.
[6] C. Cheung, W. Tay, P. Mitchell, J. Wang, W. Hsu,
M. Lee, Q. Lau, A. Zhu, R. Klein, S. Saw, and
T. Wong, “Quantitative and qualitative retinal mi-
crovascular characteristics and blood pressure,” Jour-
nal of Hypertension, vol. 27, pp. 1380 – 1391, 2011.
[7] R. J. Tapp, C. G. Owen, S. A. Barman, R. A. Welikala, P. J. Foster, P. H. Whincup, D. P. Strachan, and A. R. Rudnicka, "Associations of Retinal Microvascular Diameters and Tortuosity With Blood Pressure and Arterial Stiffness," Hypertension, vol. 74, no. 6, pp. 1383–1390, 2019.
[8] R. Gargeya and T. Leng, “Automated Identification of
Diabetic Retinopathy Using Deep Learning,” Ophthal-
mology, vol. 124, no. 7, pp. 962 – 969, 2017.
[9] S. Qummar, F. G. Khan, S. Shah, A. Khan,
S. Shamshirband, Z. U. Rehman, I. Ahmed Khan, and
W. Jadoon, “A Deep Learning Ensemble Approach for
Diabetic Retinopathy Detection,” IEEE Access, vol. 7,
pp. 150530–150539, 2019.
[10] P. J. Pickhardt, P. M. Graffy, R. Zea, S. J. Lee, J. Liu,
V. Sandfort, and R. M. Summers, “Automated CT
biomarkers for opportunistic prediction of future car-
diovascular events and mortality in an asymptomatic
screening population: a retrospective cohort study,”
The Lancet Digital Health, vol. 2, no. 4, pp. e192 –
e200, 2020.
[11] S. E. Petersen, P. M. Matthews, J. M. Francis, M. D.
Robson, F. Zemrak, R. Boubertakh, A. A. Young,
S. Hudson, P. Weale, S. Garratt, R. Collins, S. Piech-
nik, and S. Neubauer, “UK Biobank’s cardiovascular
magnetic resonance protocol,” Journal of Cardiovascu-
lar Magnetic Resonance, vol. 18, pp. 1 – 7, 2015.
[12] T. J. MacGillivray, J. R. Cameron, Q. Zhang, A. El-
Medany, C. Mulholland, Z. Sheng, B. Dhillon, F. N.
Doubal, P. J. Foster, E. Trucco, C. Sudlow, U. B. Eye,
and V. Consortium, “Suitability of UK Biobank Retinal
Images for Automatic Analysis of Morphometric Prop-
erties of the Vasculature,” PLOS ONE, vol. 10, pp. 1–
10, 05 2015.
[13] H. Fu, B. Wang, J. Shen, S. Cui, Y. Xu, J. Liu, and
L. Shao, “Evaluation of Retinal Image Quality Assess-
ment Networks in Different Color-Spaces,” in Medical
Image Computing and Computer Assisted Intervention
– MICCAI 2019, (Cham), pp. 48–56, Springer Interna-
tional Publishing, 2019.
[14] Kaggle, “Kaggle diabetic retinopathy com-
diabetic-retinopathy- detection/data, 2015. Ac-
cessed: 2020-01-19.
[15] Q. Zheng, H. Delingette, N. Duchateau, and N. Ay-
ache, “3-D Consistent and Robust Segmentation of Car-
diac Images by Deep Learning With Spatial Propaga-
tion,” IEEE Transactions on Medical Imaging, vol. 37,
pp. 2137–2148, Sep. 2018.
[16] K.-T. Khaw, N. Wareham, S. Bingham, R. Luben,
A. Welch, and N. Day, “Association of Hemoglobin A1c
with Cardiovascular Disease and Mortality in Adults:
The European Prospective Investigation into Cancer in
Norfolk,” Annals of Internal Medicine, vol. 141, no. 6,
pp. 413–420, 2004. PMID: 15381514.
[17] E. Levitan, S. Liu, M. Stampfer, N. Cook, K. Rexrode,
P. Ridker, J. Buring, and J. Manson, “Hba1c mea-
sured in stored erythrocytes and mortality rate among
middle-aged and older women,” Diabetologia, vol. 51,
no. 2, pp. 267–275, 2008.
[18] H. C. Gerstein, K. Swedberg, J. Carlsson, J. J. McMur-
ray, E. L. Michelson, B. Olofsson, M. A. Pfeffer, and
S. Yusuf, “The hemoglobin A1c level as a progressive
risk factor for cardiovascular death, hospitalization for
heart failure, or death in patients with chronic heart
failure: an analysis of the Candesartan in Heart fail-
ure: Assessment of Reduction in Mortality and Morbid-
ity (CHARM) program,” Archives of internal medicine,
vol. 168, no. 15, pp. 1699–1704, 2008.
[19] H. Skali, A. Shah, D. K. Gupta, S. Cheng, B. Claggett,
J. Liu, N. Bello, D. Aguilar, O. Vardeny, K. Mat-
sushita, et al., “Cardiac structure and function across
the glycemic spectrum in elderly men and women free
of prevalent heart disease: the atherosclerosis risk in the
community study,” Circulation: Heart Failure, vol. 8,
no. 3, pp. 448–454, 2015.
[20] S. E. Petersen, N. Aung, M. M. Sanghvi, F. Zemrak,
K. Fung, J. Miguel Paiva, J. M. Francis, M. Y. Khanji,
E. Lukaschuk, A. Lee, et al., “Reference ranges for
cardiac structure and function in cardiovascular mag-
netic resonance (cmr) imaging in caucasians from the
uk biobank population cohort,” Journal of Cardiovas-
cular Magnetic Resonance, vol. 19, no. 1, 2017.
[21] R. Attar, M. Pereañez, C. Bowles, S. K. Piechnik,
S. Neubauer, S. E. Petersen, and A. F. Frangi, “3D
Cardiac Shape Prediction with Deep Neural Networks:
Simultaneous Use of Images and Patient Metadata,”
in Medical Image Computing and Computer Assisted
Intervention – MICCAI 2019, (Cham), pp. 586–594,
Springer International Publishing, 2019.
[22] Age-Related Eye Disease Study Research Group, "The Age-Related Eye Disease Study (AREDS): Design Implications. AREDS Report No. 1," Controlled Clinical Trials, vol. 20, no. 6, pp. 573–600, 1999.
[23] Age-Related Eye Disease Study Research Group et al., "The Age-Related Eye Disease Study system for classifying age-related macular degeneration from stereoscopic color fundus photographs: the Age-Related Eye Disease Study Report Number 6," American Journal of Ophthalmology, vol. 132, no. 5, pp. 668–681, 2001.
[24] L. Antelmi, N. Ayache, P. Robert, and M. Lorenzi,
“Sparse Multi-Channel Variational Autoencoder for the
Joint Analysis of Heterogeneous Data,” in Proceed-
ings of the 36th International Conference on Machine
Learning, vol. 97, pp. 302–311, PMLR, 09–15 Jun 2019.
[25] D. P. Kingma and M. Welling, “Auto-Encoding Vari-
ational Bayes,” Proceedings 2nd International Confer-
ence on Learning Representations (ICLR), 2014.
[26] D. J. Rezende, S. Mohamed, and D. Wierstra,
“Stochastic Backpropagation and Approximate Infer-
ence in Deep Generative Models,” arXiv preprint
arXiv:1401.4082, 2014.
[27] H. Hotelling, “Relations between two sets of variates,”
Biometrika, vol. 28, no. 3/4, pp. 321–377, 1936.
[28] S. Haufe, F. Meinecke, K. Görgen, S. Dähne, J.-D. Haynes, B. Blankertz, and F. Bießmann, "On the interpretation of weight vectors of linear models in multivariate neuroimaging," NeuroImage, vol. 87, pp. 96–110, 2014.
[29] J. M. Johnson and T. M. Khoshgoftaar, “Survey on
deep learning with class imbalance,” Journal of Big
Data, vol. 6, no. 1, p. 27, 2019.
[30] A. Suinesiaputra, M. M. Sanghvi, N. Aung, J. M. Paiva,
F. Zemrak, K. Fung, E. Lukaschuk, A. M. Lee, V. Cara-
pella, Y. J. Kim, et al., “Fully-automated left ventricu-
lar mass and volume MRI analysis in the UK Biobank
population cohort: evaluation of initial results,” The in-
ternational journal of cardiovascular imaging, vol. 34,
no. 2, pp. 281–291, 2018.
[31] R. Attar, M. Pereañez, A. Gooya, X. Albà, L. Zhang,
M. H. de Vila, A. M. Lee, N. Aung, E. Lukaschuk,
M. M. Sanghvi, K. Fung, J. M. Paiva, S. K. Piechnik,
S. Neubauer, S. E. Petersen, and A. F. Frangi, “Quan-
titative CMR population imaging on 20,000 subjects of
the UK biobank imaging study: LV/RV quantification
pipeline and its evaluation,” Medical Image Analysis,
vol. 56, pp. 26 – 42, 2019.
[32] N. Keith, “Some different types of essential hyperten-
sion: their course and prognosis,” American Journal of
the Medical Sciences, vol. 268, pp. 336–345, 1974.
[33] W. Bai, M. Sinclair, G. Tarroni, O. Oktay, M. Ra-
jchl, G. Vaillant, A. M. Lee, N. Aung, E. Lukaschuk,
M. M. Sanghvi, et al., “Automated cardiovascular mag-
netic resonance image analysis with fully convolutional
networks,” Journal of Cardiovascular Magnetic Reso-
nance, vol. 20, no. 1, p. 65, 2018.
[34] K. Lin, J. D. Collins, D. M. Lloyd-Jones, M.-P. Jolly, D. Li, M. Markl, and J. C. Carr, "Automated assessment of left ventricular function and mass using heart deformation analysis: Initial experience in 160 older adults," Academic Radiology, vol. 23, no. 3, pp. 321–325, 2016.
[35] M. Heusel, H. Ramsauer, T. Unterthiner, B. Nessler,
and S. Hochreiter, “GANs Trained by a Two Time-Scale
Update Rule Converge to a Local Nash Equilibrium,”
in Advances in Neural Information Processing Systems,
vol. 30, Curran Associates, Inc., 2017.
[36] F. L. Ferris, C. Wilkinson, A. Bird, U. Chakravarthy,
E. Chew, K. Csaky, and S. R. Sadda, “Clinical classi-
fication of age-related macular degeneration,” Ophthal-
mology, vol. 120, no. 4, pp. 844–851, 2013.
[37] M. Trinh, M. Kalloniatis, and L. Nivison-Smith, “Vas-
cular Changes in Intermediate Age-Related Macular
Degeneration Quantified Using Optical Coherence To-
mography Angiography,” Translational Vision Science
& Technology, vol. 8, pp. 20–20, 08 2019.
[38] T. B. Grammer, A. Dressel, I. Gergei, M. E. Kle-
ber, U. Laufs, H. Scharnagl, U. Nixdorff, J. Klotsche,
L. Pieper, D. Pittrow, et al., “Cardiovascular risk algo-
rithms in primary care: Results from the detect study,”
Scientific reports, vol. 9, no. 1, pp. 1–12, 2019.
[39] W. T. Qureshi, E. D. Michos, P. Flueckiger, M. Blaha,
V. Sandfort, D. M. Herrington, G. Burke, and
J. Yeboah, “Impact of replacing the pooled cohort equa-
tion with other cardiovascular disease risk scores on
atherosclerotic cardiovascular disease risk assessment
(from the multi-ethnic study of atherosclerosis [mesa]),”
The American journal of cardiology, vol. 118, no. 5,
pp. 691–696, 2016.
[40] C. Wallisch, G. Heinze, C. Rinner, G. Mundigler, W. C.
Winkelmayer, and D. Dunkler, “External validation
of two framingham cardiovascular risk equations and
the pooled cohort equations: a nationwide registry
analysis,” International journal of cardiology, vol. 283,
pp. 165–170, 2019.
[41] C. Wallisch, G. Heinze, C. Rinner, G. Mundigler,
W. C. Winkelmayer, and D. Dunkler, “Re-estimation
improved the performance of two framingham cardio-
vascular risk equations and the pooled cohort equa-
tions: a nationwide registry analysis,” Scientific re-
ports, vol. 10, no. 1, pp. 1–11, 2020.
[42] A. D. Pinto, "Predicting myocardial infarction through retinal scans and minimal personal information," 2021.