Erratum to "Multi-modal multi-task learning for joint prediction of multiple regression and classification variables in Alzheimer's disease" [NeuroImage 59/2 (2012) 895–907]

Department of Radiology and BRIC, University of North Carolina at Chapel Hill, NC 27599, USA.
NeuroImage 01/2012; 59(2):895–907. DOI: 10.1016/j.neuroimage.2011.09.069


Many machine learning and pattern classification methods have been applied to the diagnosis of Alzheimer's disease (AD) and its prodromal stage, i.e., mild cognitive impairment (MCI). Recently, rather than predicting categorical variables as in classification, several pattern regression methods have also been used to estimate continuous clinical variables from brain images. However, most existing regression methods estimate multiple clinical variables separately and thus cannot exploit the useful correlations among them. Moreover, these regression methods typically use only a single modality of data (usually structural MRI), ignoring the complementary information provided by other modalities. In this paper, we propose a general methodology, namely multi-modal multi-task (M3T) learning, to jointly predict multiple variables from multi-modal data. Here, the variables include not only the continuous clinical variables used for regression but also the categorical variable used for classification, with different tasks corresponding to the prediction of different variables. Specifically, our method contains two key components: (1) a multi-task feature selection step, which selects from each modality the common subset of features relevant to all variables, and (2) a multi-modal support vector machine, which fuses the selected features from all modalities to predict the multiple (regression and classification) variables. To validate our method, we perform two sets of experiments on ADNI baseline MRI, FDG-PET, and cerebrospinal fluid (CSF) data from 45 AD patients, 91 MCI patients, and 50 healthy controls (HC). In the first set of experiments, we estimate two clinical variables, the Mini-Mental State Examination (MMSE) and the Alzheimer's Disease Assessment Scale-Cognitive Subscale (ADAS-Cog) scores, as well as one categorical variable (with values 'AD', 'MCI', or 'HC'), from the baseline MRI, FDG-PET, and CSF data. In the second set of experiments, we predict the 2-year changes of the MMSE and ADAS-Cog scores, as well as the conversion of MCI to AD, from the baseline MRI, FDG-PET, and CSF data. The results of both sets of experiments demonstrate that the proposed M3T learning scheme achieves better performance on both regression and classification tasks than conventional learning methods.
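The abstract above describes a two-stage pipeline: a multi-task feature selection applied per modality, followed by a multi-modal SVM/SVR that fuses the selected features across MRI, PET, and CSF. The sketch below is a minimal scikit-learn approximation of that idea, not the authors' exact implementation: MultiTaskLasso (an ℓ2,1-style penalty) stands in for the joint feature selection, linear kernels are combined with equal weights, and all variable names are illustrative.

```python
# Minimal sketch of a two-stage M3T-style pipeline, assuming linear kernels and
# equal kernel weights; MultiTaskLasso is a stand-in for the multi-task
# feature selection, and all names are illustrative.
import numpy as np
from sklearn.linear_model import MultiTaskLasso
from sklearn.metrics.pairwise import linear_kernel
from sklearn.svm import SVC, SVR


def select_common_features(X, Y, alpha=0.05):
    """Keep features whose coefficients are non-zero across the joint tasks."""
    mtl = MultiTaskLasso(alpha=alpha, max_iter=5000).fit(X, Y)
    return np.any(np.abs(mtl.coef_) > 1e-8, axis=0)  # shared support across tasks


def m3t_fit_predict(mods_train, mods_test, Y_train, y_class_train, weights=None):
    """mods_*: list of (n_subjects, n_features) arrays, e.g. [MRI, PET, CSF].
    Y_train: (n_subjects, n_regression_tasks), e.g. MMSE and ADAS-Cog scores.
    y_class_train: (n_subjects,) numerically encoded class labels."""
    if weights is None:                                # assume equal kernel weights
        weights = np.full(len(mods_train), 1.0 / len(mods_train))

    # Treat the class label as one extra task during feature selection.
    Y_joint = np.column_stack([Y_train, y_class_train])

    K_train, K_test = 0.0, 0.0
    for w, Xtr, Xte in zip(weights, mods_train, mods_test):
        mask = select_common_features(Xtr, Y_joint)    # step 1: multi-task selection
        K_train = K_train + w * linear_kernel(Xtr[:, mask], Xtr[:, mask])
        K_test = K_test + w * linear_kernel(Xte[:, mask], Xtr[:, mask])

    # Step 2: SVM (classification) and one SVR per clinical score on the fused kernel.
    clf = SVC(kernel="precomputed").fit(K_train, y_class_train)
    regs = [SVR(kernel="precomputed").fit(K_train, Y_train[:, t])
            for t in range(Y_train.shape[1])]
    return clf.predict(K_test), np.column_stack([r.predict(K_test) for r in regs])
```

In practice the lasso penalty and the kernel combination weights would be tuned by cross-validation rather than fixed as above; they are fixed here only to keep the sketch short.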

    • "In this study, a linear SVM classifier is used to identify AD patients from NCs, and progressive MCI patients from stable MCI patients. Here, we choose a linear model because it has good generalization capability across different training data as shown in extensive studies [Burges, 1998; Pereira et al., 2009; Zhang and Shen, 2012]. Finally, a classifier ensemble strategy is used to combine these K base classifiers to construct a more accurate and robust learning model. "
    ABSTRACT: Multi-atlas based methods have recently been used for classification of Alzheimer's disease (AD) and its prodromal stage, that is, mild cognitive impairment (MCI). Compared with traditional single-atlas based methods, multi-atlas based methods adopt multiple predefined atlases and are thus less biased by any single atlas. However, most existing multi-atlas based methods simply average or concatenate the features from multiple atlases, which may ignore potentially important diagnostic information related to the anatomical differences among atlases. In this paper, we propose a novel view-centralized (i.e., atlas-centralized) multi-atlas classification method, which can better exploit useful information in multiple feature representations from different atlases. Specifically, all brain images are registered onto multiple atlases individually, to extract feature representations in each atlas space. Then, the proposed view-centralized multi-atlas feature selection method is used to select the most discriminative features from each atlas with extra guidance from the other atlases. Next, we design a support vector machine (SVM) classifier using the selected features in each atlas space. Finally, we combine the multiple SVM classifiers through a classifier ensemble strategy to make the final decision. We have evaluated our method on 459 subjects [including 97 AD, 117 progressive MCI (p-MCI), 117 stable MCI (s-MCI), and 128 normal controls (NC)] from the Alzheimer's Disease Neuroimaging Initiative database, and achieved an accuracy of 92.51% for AD versus NC classification and an accuracy of 78.88% for p-MCI versus s-MCI classification. These results demonstrate that the proposed method can significantly outperform previous multi-atlas based classification methods.
    Human Brain Mapping 01/2015; 36(5). DOI: 10.1002/hbm.22741
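The ensemble step outlined in the abstract above can be approximated with standard tools: one feature-selection plus linear-SVM base classifier per atlas, combined by majority vote. The sketch below uses a plain univariate filter as a stand-in for the paper's view-centralized feature selection and assumes the per-atlas regional feature matrices are already extracted; names and parameters are illustrative.

```python
# Minimal sketch of a per-atlas classifier ensemble: one feature-selection +
# linear-SVM base model per atlas, combined by majority vote.  SelectKBest is
# a univariate stand-in for the paper's view-centralized feature selection.
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC


def fit_atlas_ensemble(atlas_feats_train, y_train, k_features=200):
    """atlas_feats_train: list of (n_subjects, n_features) arrays, one per atlas.
    y_train: integer-coded labels (e.g. 0 = s-MCI, 1 = p-MCI)."""
    models = []
    for X_atlas in atlas_feats_train:
        model = make_pipeline(
            SelectKBest(f_classif, k=min(k_features, X_atlas.shape[1])),
            LinearSVC(C=1.0, max_iter=10000),
        ).fit(X_atlas, y_train)
        models.append(model)
    return models


def predict_atlas_ensemble(models, atlas_feats_test):
    """Majority vote over the K per-atlas base classifiers."""
    votes = np.stack([m.predict(X) for m, X in zip(models, atlas_feats_test)])
    return np.apply_along_axis(lambda v: np.bincount(v).argmax(), 0, votes)
```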
    • "Classifiers using morphometric MRI data have presented high indices of diagnostic accuracy that reinforce the diagnosis of AD, the commonest form of dementia (Vemuri et al., 2008; Magnin et al., 2009; Plant et al., 2010; Dai et al., 2012; Liu et al., 2012). These techniques have also afforded promising results in studies related to MCI – cognitive decline not severe enough to fulfill the criteria for established dementia (Misra et al., 2009; Aksu et al., 2011; Chincarini et al., 2011; Cui et al., 2011; Davatzikos et al., 2011; Zhang and Shen, 2012). However , the latter is still an open area with intense investigations, as predicting whether individuals who are at increased risk convert to AD is more challenging than classifying AD versus control individuals. "
    ABSTRACT: Recent literature has presented evidence that cardiovascular risk factors (CVRF) play an important role on cognitive performance in elderly individuals, both those who are asymptomatic and those who suffer from symptoms of neurodegenerative disorders. Findings from studies applying neuroimaging methods have increasingly reinforced this notion. Studies addressing the impact of CVRF on brain anatomy changes have gained increasing importance, as recent papers have reported gray matter loss predominantly in regions traditionally affected in Alzheimer's disease (AD) and vascular dementia in the presence of a high degree of cardiovascular risk. In the present paper, we explore the association between CVRF and brain changes using pattern recognition techniques applied to structural MRI and the Framingham score (a composite measure of cardiovascular risk largely used in epidemiological studies) in a sample of healthy elderly individuals. We aim to answer the following questions: is it possible to decode (i.e., to learn information regarding cardiovascular risk from structural brain images) enabling individual predictions? Among clinical measures comprising the Framingham score, are there particular risk factors that stand as more predictable from patterns of brain changes? Our main findings are threefold: (i) we verified that structural changes in spatially distributed patterns in the brain enable statistically significant prediction of Framingham scores. This result is still significant when controlling for the presence of the APOE4 allele (an important genetic risk factor for both AD and cardiovascular disease). (ii) When considering each risk factor singly, we found different levels of correlation between real and predicted factors; however, single factors were not significantly predictable from brain images when considering APOE4 allele presence as covariate. (iii) We found important gender differences, and the possible causes of that finding are discussed.
    Frontiers in Aging Neuroscience 12/2014; 6(300). DOI: 10.3389/fnagi.2014.00300
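At its core, the decoding analysis described above is a cross-validated regression from structural brain features to a continuous risk score, evaluated by the correlation between real and predicted values. A minimal sketch follows, using ridge regression as a generic stand-in for the study's pattern-recognition model and omitting the APOE4 covariate control discussed in the abstract; names are illustrative.

```python
# Brief sketch of a decoding analysis: cross-validated regression from
# structural MRI features to the continuous Framingham score, scored by the
# correlation between real and predicted values.  Ridge regression is a
# generic stand-in for the study's model; covariate control is omitted.
from scipy.stats import pearsonr
from sklearn.linear_model import Ridge
from sklearn.model_selection import KFold, cross_val_predict


def decode_risk_score(X_brain, framingham, n_splits=10):
    """X_brain: (n_subjects, n_features) gray-matter features;
    framingham: (n_subjects,) composite cardiovascular risk scores."""
    cv = KFold(n_splits=n_splits, shuffle=True, random_state=0)
    predicted = cross_val_predict(Ridge(alpha=1.0), X_brain, framingham, cv=cv)
    r, p = pearsonr(framingham, predicted)  # real vs. predicted correlation
    return predicted, r, p
```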
    • "Accordingly, in this article, as motivated by the work in [Zhang and Shen, 2012], we proposed a new multitaskbased joint feature learning framework, which considers both the intrinsic relatedness among multimodality data and the distribution information of each modality data. Specifically, we formulated the classification of multimodality data as a multitask learning (MTL) problem, where each task denotes the classification based on individual modality of data. "
    ABSTRACT: Multimodality based methods have shown great advantages in classification of Alzheimer's disease (AD) and its prodromal stage, that is, mild cognitive impairment (MCI). Recently, multitask feature selection methods have typically been used for joint selection of common features across multiple modalities. However, one disadvantage of existing multimodality based methods is that they ignore the useful data distribution information in each modality, which is essential for subsequent classification. Accordingly, in this paper we propose a manifold regularized multitask feature learning method to preserve both the intrinsic relatedness among multiple modalities of data and the data distribution information in each modality. Specifically, we denote the feature learning on each modality as a single task, and use a group-sparsity regularizer to capture the intrinsic relatedness among the multiple tasks (i.e., modalities) and jointly select the common features from multiple tasks. Furthermore, we introduce a new manifold-based Laplacian regularizer to preserve the data distribution information from each task. Finally, we use the multikernel support vector machine method to fuse multimodality data for eventual classification. In addition, we extend our method to the semisupervised setting, where only partial data are labeled. We evaluate our method using the baseline magnetic resonance imaging (MRI), fluorodeoxyglucose positron emission tomography (FDG-PET), and cerebrospinal fluid (CSF) data of subjects from the Alzheimer's Disease Neuroimaging Initiative (ADNI) database. The experimental results demonstrate that our proposed method can not only achieve improved classification performance, but also help to discover the disease-related brain regions useful for disease diagnosis.
    Human Brain Mapping 10/2014; 36(2). DOI: 10.1002/hbm.22642
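The abstract above names two regularizers: a group-sparsity term that couples the modalities and a graph-Laplacian term that preserves each modality's data distribution. The sketch below writes a generic ℓ2,1 plus Laplacian objective and solves it with proximal gradient steps; it assumes every modality shares the same ROI feature set and uses illustrative hyperparameters, so it is a simplified stand-in rather than the authors' exact algorithm.

```python
# Rough sketch of a generic l2,1 + graph-Laplacian objective solved by
# proximal gradient steps.  Assumes every modality shares the same ROI
# feature set (one weight column per modality/task); all values illustrative.
import numpy as np
from scipy.sparse.csgraph import laplacian
from sklearn.neighbors import kneighbors_graph


def modality_laplacian(X, n_neighbors=5):
    """k-NN graph Laplacian capturing the data distribution of one modality."""
    G = kneighbors_graph(X, n_neighbors=n_neighbors, mode="connectivity")
    G = 0.5 * (G + G.T)  # symmetrize the adjacency graph
    return laplacian(G).toarray()


def prox_l21(W, t):
    """Row-wise soft-thresholding: proximal operator of t * sum_j ||W_j||_2."""
    norms = np.maximum(np.linalg.norm(W, axis=1, keepdims=True), 1e-12)
    return W * np.maximum(0.0, 1.0 - t / norms)


def fit_manifold_mtfl(Xs, y, alpha=0.1, beta=0.1, lr=1e-3, n_iter=500):
    """Xs: list of (n_subjects, n_features) modality matrices; y: targets."""
    W = np.zeros((Xs[0].shape[1], len(Xs)))        # one weight column per modality
    Ls = [modality_laplacian(X) for X in Xs]
    for _ in range(n_iter):
        grad = np.zeros_like(W)
        for m, (X, L) in enumerate(zip(Xs, Ls)):
            # gradient of 0.5*||X w - y||^2 + 0.5*beta*(X w)^T L (X w)
            w = W[:, m]
            grad[:, m] = X.T @ (X @ w - y) + beta * (X.T @ (L @ (X @ w)))
        W = prox_l21(W - lr * grad, lr * alpha)    # group-sparse (l2,1) step
    return W                                       # non-zero rows = selected features
```

The non-zero rows of the returned W indicate jointly selected features, which could then be fed to a multi-kernel SVM along the lines of the M3T sketch earlier on this page.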