Article

Multi-modal multi-task learning for joint prediction of multiple regression and classification variables in Alzheimer's disease

Department of Radiology and BRIC, University of North Carolina at Chapel Hill, NC 27599, USA.
NeuroImage (Impact Factor: 6.13). 01/2012; 59(2):895-907. DOI: 10.1016/j.neuroimage.2011.09.069
Source: PubMed

ABSTRACT Many machine learning and pattern classification methods have been applied to the diagnosis of Alzheimer's disease (AD) and its prodromal stage, i.e., mild cognitive impairment (MCI). Recently, rather than predicting categorical variables as in classification, several pattern regression methods have also been used to estimate continuous clinical variables from brain images. However, most existing regression methods estimate multiple clinical variables separately and thus cannot exploit the useful correlations that exist among different clinical variables. Moreover, these regression methods typically use only a single modality of data (usually structural MRI), without considering the complementary information that different modalities can provide. In this paper, we propose a general methodology, namely multi-modal multi-task (M3T) learning, to jointly predict multiple variables from multi-modal data. Here, the variables include not only the continuous clinical variables used for regression but also the categorical variable used for classification, with different tasks corresponding to the prediction of different variables. Specifically, our method contains two key components: (1) a multi-task feature selection step, which selects from each modality the common subset of features relevant to all variables; and (2) a multi-modal support vector machine, which fuses the selected features from all modalities to predict the multiple (regression and classification) variables. To validate our method, we perform two sets of experiments on ADNI baseline MRI, FDG-PET, and cerebrospinal fluid (CSF) data from 45 AD patients, 91 MCI patients, and 50 healthy controls (HC). In the first set of experiments, we estimate two clinical variables, the Mini-Mental State Examination (MMSE) and the Alzheimer's Disease Assessment Scale-Cognitive Subscale (ADAS-Cog), as well as one categorical variable (taking the value 'AD', 'MCI', or 'HC'), from the baseline MRI, FDG-PET, and CSF data. In the second set of experiments, we predict the 2-year changes of the MMSE and ADAS-Cog scores, as well as the conversion of MCI to AD, from the baseline MRI, FDG-PET, and CSF data. The results of both sets of experiments demonstrate that the proposed M3T learning scheme achieves better performance on both regression and classification tasks than conventional learning methods.
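The two components described above can be illustrated with a minimal sketch. This is not the paper's exact formulation: scikit-learn's MultiTaskLasso stands in for the multi-task feature selection (its L2,1-style penalty yields one common feature support across tasks), and the multi-modal SVM is approximated by an equal-weight average of per-modality RBF kernels; all data, names, and parameter values are synthetic assumptions.

```python
import numpy as np
from sklearn.linear_model import MultiTaskLasso
from sklearn.svm import SVC
from sklearn.metrics.pairwise import rbf_kernel

rng = np.random.default_rng(0)
n = 120
# Two synthetic "modalities" (stand-ins for MRI and FDG-PET features).
modalities = {"mri": rng.standard_normal((n, 50)),
              "pet": rng.standard_normal((n, 30))}
# Two regression targets driven by a few MRI features (stand-ins for MMSE
# and ADAS-Cog), plus a 3-class label (0=HC, 1=MCI, 2=AD).
Y = modalities["mri"][:, :3] @ rng.standard_normal((3, 2)) \
    + 0.1 * rng.standard_normal((n, 2))
y_cls = rng.integers(0, 3, size=n)

# (1) Multi-task feature selection per modality: MultiTaskLasso imposes an
# L2,1-style penalty, so all tasks share one common support of features.
selected = {}
for name, X in modalities.items():
    mtl = MultiTaskLasso(alpha=0.1, max_iter=5000).fit(X, Y)
    keep = np.any(mtl.coef_ != 0, axis=0)   # common support across tasks
    if not keep.any():                      # fall back if everything is zeroed
        keep[:] = True
    selected[name] = X[:, keep]

# (2) Multi-modal fusion: average the per-modality RBF kernels and train one
# kernel SVM on the combined kernel (equal kernel weights assumed here).
K = sum(rbf_kernel(Xs) for Xs in selected.values()) / len(selected)
clf = SVC(kernel="precomputed").fit(K, y_cls)
print("training accuracy:", clf.score(K, y_cls))
```

In the paper the kernel (or feature) weights would be learned or cross-validated rather than fixed to be equal, and the same fused representation also feeds support vector regression for the continuous targets; the sketch only shows the classification branch.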

Related articles:

  • ABSTRACT: Recent literature has presented evidence that cardiovascular risk factors (CVRF) play an important role in cognitive performance in elderly individuals, both those who are asymptomatic and those who suffer from symptoms of neurodegenerative disorders. Findings from studies applying neuroimaging methods have increasingly reinforced this notion. Studies addressing the impact of CVRF on brain anatomy have gained increasing importance, as recent papers have reported gray matter loss, predominantly in regions traditionally affected in Alzheimer's disease (AD) and vascular dementia, in the presence of a high degree of cardiovascular risk. In the present paper, we explore the association between CVRF and brain changes using pattern recognition techniques applied to structural MRI and the Framingham score (a composite measure of cardiovascular risk widely used in epidemiological studies) in a sample of healthy elderly individuals. We aim to answer the following questions: is it possible to decode (i.e., to learn information regarding cardiovascular risk from structural brain images), enabling individual predictions? Among the clinical measures comprising the Framingham score, are there particular risk factors that are more predictable from patterns of brain changes? Our main findings are threefold: (i) we verified that structural changes in spatially distributed patterns in the brain enable statistically significant prediction of Framingham scores; this result remains significant when controlling for the presence of the APOE ε4 allele (an important genetic risk factor for both AD and cardiovascular disease). (ii) When considering each risk factor singly, we found different levels of correlation between real and predicted factors; however, single factors were not significantly predictable from brain images when including APOE ε4 allele presence as a covariate. (iii) We found important gender differences, and the possible causes of that finding are discussed.
    Frontiers in Aging Neuroscience 12/2014; 6(300). DOI: 10.3389/fnagi.2014.00300 (Impact Factor: 2.84)
  • ABSTRACT: In this paper, we study the challenging problem of categorizing videos according to high-level semantics, such as the presence of a particular human action or a complex event. Although extensive efforts have been devoted in recent years, most existing works combine multiple video features using simple fusion strategies and neglect inter-class semantic relationships. This paper proposes a novel unified framework that jointly exploits feature relationships and class relationships for improved categorization performance. Specifically, these two types of relationships are estimated and utilized by rigorously imposing regularizations in the learning process of a deep neural network (DNN). Such a regularized DNN (rDNN) can be efficiently realized using a GPU-based implementation with an affordable training cost. By arming the DNN with a better capability of harnessing both feature and class relationships, the proposed rDNN is more suitable for modeling video semantics. Extensive experimental evaluations show that rDNN produces superior performance over several state-of-the-art approaches. On the well-known Hollywood2 and Columbia Consumer Video benchmarks, we obtain very competitive results: 66.9% and 73.5%, respectively, in terms of mean average precision. In addition, to substantially evaluate our rDNN and stimulate future research on large-scale video categorization, we collect and release a new benchmark dataset, called FCVID, which contains 91,223 Internet videos and 239 manually annotated categories.
  • ABSTRACT: Multi-atlas based methods have recently been used for classification of Alzheimer's disease (AD) and its prodromal stage, mild cognitive impairment (MCI). Compared with traditional single-atlas based methods, multi-atlas based methods adopt multiple predefined atlases and are thus less biased by any particular atlas. However, most existing multi-atlas based methods simply average or concatenate the features from multiple atlases, which may ignore potentially important diagnostic information related to the anatomical differences among atlases. In this paper, we propose a novel view-centralized (i.e., atlas-centralized) multi-atlas classification method, which can better exploit useful information in the multiple feature representations from different atlases. Specifically, all brain images are registered onto multiple atlases individually, to extract feature representations in each atlas space. Then, the proposed view-centralized multi-atlas feature selection method is used to select the most discriminative features from each atlas, with extra guidance from the other atlases. Next, we design a support vector machine (SVM) classifier using the selected features in each atlas space. Finally, we combine the multiple SVM classifiers for the multiple atlases through a classifier ensemble strategy to make a final decision. We have evaluated our method on 459 subjects [including 97 AD, 117 progressive MCI (p-MCI), 117 stable MCI (s-MCI), and 128 normal controls (NC)] from the Alzheimer's Disease Neuroimaging Initiative database, achieving an accuracy of 92.51% for AD versus NC classification and an accuracy of 78.88% for p-MCI versus s-MCI classification. These results demonstrate that the proposed method can significantly outperform previous multi-atlas based classification methods.
    Human Brain Mapping 01/2015; DOI: 10.1002/hbm.22741 (Impact Factor: 6.92)
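The multi-atlas ensemble idea in the abstract above can be illustrated with a simplified sketch on synthetic data: each "atlas" yields its own feature representation of the same subjects, one SVM is trained per atlas space, and the per-atlas decisions are combined by majority voting. The paper's view-centralized feature selection (with cross-atlas guidance) is replaced here by plain univariate selection; all names, sizes, and parameters are illustrative assumptions.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.feature_selection import SelectKBest, f_classif

rng = np.random.default_rng(1)
n = 100
y = rng.integers(0, 2, size=n)  # e.g., AD (1) vs. NC (0) labels
# Three synthetic "atlas spaces": a shared class signal in the first
# 5 features of each space, plus atlas-specific noise dimensions.
signal = y[:, None] * 1.5
atlases = [np.hstack([signal + rng.standard_normal((n, 5)),
                      rng.standard_normal((n, 45))]) for _ in range(3)]

# One feature selector and one SVM per atlas space.
models = []
for X in atlases:
    sel = SelectKBest(f_classif, k=10).fit(X, y)   # per-atlas selection
    clf = SVC(kernel="linear").fit(sel.transform(X), y)
    models.append((sel, clf))

# Ensemble: majority vote over the per-atlas SVM predictions.
votes = np.array([clf.predict(sel.transform(X))
                  for (sel, clf), X in zip(models, atlases)])
pred = (votes.mean(axis=0) >= 0.5).astype(int)
print("ensemble training accuracy:", (pred == y).mean())
```

Majority voting is only one of several ensemble strategies; weighted voting or stacking over the per-atlas classifier outputs would fit the same skeleton.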
