Erratum to "Multi-modal multi-task learning for joint prediction of multiple regression and classification variables in Alzheimer's disease" [NeuroImage 59/2 (2012) 895–907]

Department of Radiology and BRIC, University of North Carolina at Chapel Hill, NC 27599, USA.
NeuroImage 01/2012; 59(2):895-907. DOI: 10.1016/j.neuroimage.2011.09.069


Many machine learning and pattern classification methods have been applied to the diagnosis of Alzheimer's disease (AD) and its prodromal stage, i.e., mild cognitive impairment (MCI). Recently, rather than predicting categorical variables as in classification, several pattern regression methods have also been used to estimate continuous clinical variables from brain images. However, most existing regression methods focus on estimating multiple clinical variables separately and thus cannot exploit the useful intrinsic correlations among different clinical variables. On the other hand, in those regression methods, only a single modality of data (usually structural MRI) is typically used, without considering the complementary information that can be provided by different modalities. In this paper, we propose a general methodology, namely multi-modal multi-task (M3T) learning, to jointly predict multiple variables from multi-modal data. Here, the variables include not only the clinical variables used for regression but also the categorical variable used for classification, with different tasks corresponding to the prediction of different variables. Specifically, our method contains two key components, i.e., (1) a multi-task feature selection step that selects the common subset of relevant features for multiple variables from each modality, and (2) a multi-modal support vector machine that fuses the above-selected features from all modalities to predict multiple (regression and classification) variables. To validate our method, we perform two sets of experiments on ADNI baseline MRI, FDG-PET, and cerebrospinal fluid (CSF) data from 45 AD patients, 91 MCI patients, and 50 healthy controls (HC). In the first set of experiments, we estimate two clinical variables, namely the Mini-Mental State Examination (MMSE) score and the Alzheimer's Disease Assessment Scale-Cognitive Subscale (ADAS-Cog) score, as well as one categorical variable (with a value of 'AD', 'MCI', or 'HC'), from the baseline MRI, FDG-PET, and CSF data. In the second set of experiments, we predict the 2-year changes in MMSE and ADAS-Cog scores, as well as the conversion of MCI to AD, from the baseline MRI, FDG-PET, and CSF data. The results of both sets of experiments demonstrate that our proposed M3T learning scheme can achieve better performance on both regression and classification tasks than conventional learning methods.
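The following is a hedged, minimal sketch of how an M3T-style pipeline could be assembled on synthetic data; it is not the authors' implementation. Step (1) uses scikit-learn's MultiTaskLasso as a stand-in for the paper's multi-task (group-sparse) feature selection, and step (2) fuses the selected features from each modality with a weighted sum of linear kernels fed to precomputed-kernel SVR/SVC models. The data shapes, the alpha value, and the modality weights (betas) are illustrative assumptions.

```python
# Sketch of an M3T-style pipeline (synthetic data, illustrative parameters).
import numpy as np
from sklearn.linear_model import MultiTaskLasso
from sklearn.svm import SVC, SVR

rng = np.random.default_rng(0)
n = 186                                              # 45 AD + 91 MCI + 50 HC
modalities = {"MRI": rng.normal(size=(n, 93)),
              "PET": rng.normal(size=(n, 93)),
              "CSF": rng.normal(size=(n, 3))}
scores = rng.normal(size=(n, 2))                     # MMSE and ADAS-Cog
labels = rng.integers(0, 3, size=n)                  # 0=HC, 1=MCI, 2=AD

# All tasks (two regression scores plus a numeric coding of the class label)
# share one target matrix, so features are selected jointly across tasks.
Y = np.column_stack([scores, labels.astype(float)])

# (1) Multi-task feature selection within each modality: keep a feature only if
#     its coefficient row is non-zero across the joint tasks.
selected = {}
for name, X in modalities.items():
    coef = MultiTaskLasso(alpha=0.1, max_iter=5000).fit(X, Y).coef_  # (tasks, feats)
    keep = np.any(coef != 0, axis=0)
    selected[name] = X[:, keep] if keep.any() else X

# (2) Multi-modal fusion: weighted combination of per-modality linear kernels.
betas = {"MRI": 0.4, "PET": 0.4, "CSF": 0.2}         # would be tuned by cross-validation
K = sum(betas[m] * (selected[m] @ selected[m].T) for m in modalities)

# One support vector regressor per clinical score and one SVM classifier for the
# diagnostic label, all operating on the same fused kernel.
mmse_model = SVR(kernel="precomputed").fit(K, scores[:, 0])
adas_model = SVR(kernel="precomputed").fit(K, scores[:, 1])
diag_model = SVC(kernel="precomputed").fit(K, labels)
print(diag_model.predict(K)[:5])                     # sanity check on training data
```

In practice the modality weights and regularization strength would be chosen by nested cross-validation, as is standard for kernel-combination approaches.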

    • "In the literature, many brain morphometric pattern analysis methods have been developed for computer-aided AD/MCI diagnosis, by identifying differences in shape and neuroanatomical configuration of different brains provided by magnetic resonance imaging (MRI) [2] [3] [4] [5] [6] [7] [8] [9] [10]. Most of early works use regional measurement of anatomical volumes in pre-defined regions of interest (ROIs) (e.g., hippocampus, entorhinal cortex, or neocortex) to investigate abnormal tissue structure caused by AD or MCI. "
    ABSTRACT: Multi-template based brain morphometric pattern analysis using magnetic resonance imaging (MRI) has been recently proposed for automatic diagnosis of Alzheimer's disease (AD) and its prodromal stage (i.e., mild cognitive impairment or MCI). In such methods, multi-view morphological patterns generated from multiple templates are used as feature representation for brain images. However, existing multi-template based methods often simply assume that each class is represented by a specific type of data distribution (i.e., a single cluster), while in reality the underlying data distribution is actually not pre-known. In this paper, we propose an inherent structure based multi-view learning (ISML) method using multiple templates for AD/MCI classification. Specifically, we first extract multi-view feature representations for subjects using multiple selected templates, and then cluster subjects within a specific class into several sub-classes (i.e., clusters) in each view space. Then, we encode those sub-classes with unique codes by considering both their original class information and their own distribution information, followed by a multi-task feature selection model. Finally, we learn an ensemble of view-specific support vector machine (SVM) classifiers based on the features selected in each view, and fuse their results to draw the final decision. Experimental results on the Alzheimer's Disease Neuroimaging Initiative (ADNI) database demonstrate that our method achieves promising results for AD/MCI classification, compared to the state-of-the-art multi-template based methods.
    IEEE Transactions on Biomedical Engineering 11/2015; DOI: 10.1109/TBME.2015.2496233
    • "In this study, a linear SVM classifier is used to identify AD patients from NCs, and progressive MCI patients from stable MCI patients. Here, we choose a linear model because it has good generalization capability across different training data as shown in extensive studies [Burges, 1998; Pereira et al., 2009; Zhang and Shen, 2012]. Finally, a classifier ensemble strategy is used to combine these K base classifiers to construct a more accurate and robust learning model. "
    ABSTRACT: Multi-atlas based methods have been recently used for classification of Alzheimer's disease (AD) and its prodromal stage, that is, mild cognitive impairment (MCI). Compared with traditional single-atlas based methods, multi-atlas based methods adopt multiple predefined atlases and thus are less biased by a certain atlas. However, most existing multi-atlas based methods simply average or concatenate the features from multiple atlases, which may ignore the potentially important diagnosis information related to the anatomical differences among different atlases. In this paper, we propose a novel view (i.e., atlas) centralized multi-atlas classification method, which can better exploit useful information in multiple feature representations from different atlases. Specifically, all brain images are registered onto multiple atlases individually, to extract feature representations in each atlas space. Then, the proposed view-centralized multi-atlas feature selection method is used to select the most discriminative features from each atlas with extra guidance from other atlases. Next, we design a support vector machine (SVM) classifier using the selected features in each atlas space. Finally, we combine multiple SVM classifiers for multiple atlases through a classifier ensemble strategy for making a final decision. We have evaluated our method on 459 subjects [including 97 AD, 117 progressive MCI (p-MCI), 117 stable MCI (s-MCI), and 128 normal controls (NC)] from the Alzheimer's Disease Neuroimaging Initiative database, and achieved an accuracy of 92.51% for AD versus NC classification and an accuracy of 78.88% for p-MCI versus s-MCI classification. These results demonstrate that the proposed method can significantly outperform the previous multi-atlas based classification methods.
    Human Brain Mapping 01/2015; 36(5). DOI: 10.1002/hbm.22741 (an illustrative sketch of this classifier-ensemble step appears after the last entry below)
    • "Classifiers using morphometric MRI data have presented high indices of diagnostic accuracy that reinforce the diagnosis of AD, the commonest form of dementia (Vemuri et al., 2008; Magnin et al., 2009; Plant et al., 2010; Dai et al., 2012; Liu et al., 2012). These techniques have also afforded promising results in studies related to MCI – cognitive decline not severe enough to fulfill the criteria for established dementia (Misra et al., 2009; Aksu et al., 2011; Chincarini et al., 2011; Cui et al., 2011; Davatzikos et al., 2011; Zhang and Shen, 2012). However , the latter is still an open area with intense investigations, as predicting whether individuals who are at increased risk convert to AD is more challenging than classifying AD versus control individuals. "
    ABSTRACT: Recent literature has presented evidence that cardiovascular risk factors (CVRF) play an important role on cognitive performance in elderly individuals, both those who are asymptomatic and those who suffer from symptoms of neurodegenerative disorders. Findings from studies applying neuroimaging methods have increasingly reinforced such notion. Studies addressing the impact of CVRF on brain anatomy changes have gained increasing importance, as recent papers have reported gray matter loss predominantly in regions traditionally affected in Alzheimer's disease (AD) and vascular dementia in the presence of a high degree of cardiovascular risk. In the present paper, we explore the association between CVRF and brain changes using pattern recognition techniques applied to structural MRI and the Framingham score (a composite measure of cardiovascular risk largely used in epidemiological studies) in a sample of healthy elderly individuals. We aim to answer the following questions: is it possible to decode (i.e., to learn information regarding cardiovascular risk from structural brain images) enabling individual predictions? Among clinical measures comprising the Framingham score, are there particular risk factors that stand as more predictable from patterns of brain changes? Our main findings are threefold: (i) we verified that structural changes in spatially distributed patterns in the brain enable statistically significant prediction of Framingham scores. This result is still significant when controlling for the presence of the APOE4 allele (an important genetic risk factor for both AD and cardiovascular disease). (ii) When considering each risk factor singly, we found different levels of correlation between real and predicted factors; however, single factors were not significantly predictable from brain images when considering APOE4 allele presence as covariate. (iii) We found important gender differences, and the possible causes of that finding are discussed.
    Frontiers in Aging Neuroscience 12/2014; 6(300). DOI: 10.3389/fnagi.2014.00300
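The two multi-template/multi-atlas abstracts above both close with the same step: view-specific SVM classifiers whose decisions are fused by a classifier ensemble. The following is a minimal, assumption-laden sketch of that step on synthetic data, not the cited papers' code; SelectKBest is a simple stand-in for their own feature-selection models, and majority voting stands in for their fusion rules.

```python
# Sketch of a view-specific SVM ensemble with majority-vote fusion (synthetic data).
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_subjects, n_views = 200, 5                     # subjects, atlases/views
views = [rng.normal(size=(n_subjects, 90)) for _ in range(n_views)]
y = rng.integers(0, 2, size=n_subjects)          # e.g., 0 = s-MCI, 1 = p-MCI

# One view-specific pipeline per atlas: scaling -> feature selection -> linear SVM.
ensemble = [make_pipeline(StandardScaler(),
                          SelectKBest(f_classif, k=20),
                          SVC(kernel="linear")).fit(X, y)
            for X in views]

# Fuse the base classifiers by majority voting over their (binary) predictions.
votes = np.stack([clf.predict(X) for clf, X in zip(ensemble, views)])
fused = (votes.mean(axis=0) >= 0.5).astype(int)
print("ensemble training accuracy:", (fused == y).mean())
```

Weighted voting or stacking could replace the simple majority vote here; the essential design choice shared by both cited methods is that each atlas or view keeps its own feature subset and classifier, and only the decisions are combined.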