
Abstract and Figures

Using CNN deep learning to learn from fMRI data, with very impressive results.
... Suhaimi et al. [20] proposed feature map size selection on functional MRI (fMRI) and MRI image datasets and concluded that feature map size selection is an integral part of designing CNNs for fMRI classification. Moreover, a study by Saleh et al. [21] used an agency system for brain tumor classification. ...
Article
Full-text available
Brain MRI classification is one of the key areas of research. The classification of brain MRI can help radiologists diagnose different brain diseases without invasive measures. Brain MRI classification is a difficult task due to the variance and complexity of brain diseases. We have proposed a novel and efficient binary classification model for brain MRI images. The proposed model includes a discrete wavelet transform (DWT) for feature extraction, statistical features for diminishing the number of features, and a blended artificial neural network for brain MRI classification. Brain MRI classification with fewer features is a challenging task. In this paper, we have proposed a novel technique for statistical feature calculation on the approximate RGB images obtained from the DWT. We have also proposed a new blended artificial neural network to improve classification accuracy. The proposed technique is compared with other state-of-the-art techniques, and the results show that it gives better outcomes in terms of accuracy and simplicity.
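The DWT-plus-statistical-features pipeline described above can be sketched in a few lines. The abstract does not specify the wavelet family or the exact statistics used, so the one-level Haar approximation and the three moments below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def haar_approx2d(img):
    """One-level 2-D Haar approximation (LL) subband via 2x2 block averaging."""
    rows = (img[0::2, :] + img[1::2, :]) / 2.0
    return (rows[:, 0::2] + rows[:, 1::2]) / 2.0

def statistical_features(subband):
    """Collapse a subband into a short statistical descriptor
    (mean, standard deviation, mean absolute deviation)."""
    x = subband.ravel()
    return np.array([x.mean(), x.std(), np.abs(x - x.mean()).mean()])

img = np.random.default_rng(0).random((64, 64))   # stand-in for one MRI channel
ll = haar_approx2d(img)           # 64x64 image -> 32x32 approximation subband
feats = statistical_features(ll)  # 32x32 subband -> 3-element feature vector
```

A vector this short is what makes the subsequent neural-network classifier cheap to train, which is the point of the feature-reduction stage.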
... Ullah et al. (2018) suggested a method for MRI classification using K-NN and obtained astounding accuracy. Suhaimi and Htike (2018) (Maniar and Shah, 2017). In the proposed method, they used a data pre-processing technique and images to improve the classifier. ...
Article
Full-text available
Many methods have been proposed to classify MR brain images automatically. We have proposed a method based on a neural network (NN) to classify a given MR brain image as normal or abnormal. The method first employs a median filter to minimize noise in the image and converts the image to RGB. It then applies the discrete wavelet transform (DWT) to extract the important features from the image, and color moments are employed in the feature-reduction stage to reduce the dimensionality of the features. The reduced features are sent to a feed-forward artificial neural network (FF-ANN) to discriminate normal from abnormal MR brain images. We applied the proposed method to 70 images (45 normal, 25 abnormal). The accuracy of the proposed method on both training and testing images is 95.48%, while the computation times for feature extraction, feature reduction, and the neural network classifier are 4.3216 s, 4.5056 s, and 1.4797 s, respectively.
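The two pre-classification stages named above, median filtering for noise reduction and color moments for feature reduction, can be sketched as follows. The kernel size and the choice of first two moments per channel are assumptions for illustration; the paper does not state them in this abstract.

```python
import numpy as np

def median3x3(img):
    """3x3 median filter (edges cropped): a simple noise-reduction step."""
    h, w = img.shape
    shifts = [img[i:h - 2 + i, j:w - 2 + j] for i in range(3) for j in range(3)]
    return np.median(np.stack(shifts), axis=0)

def color_moments(rgb):
    """First two color moments (mean, std) per channel: a 6-value descriptor."""
    flat = rgb.reshape(-1, 3)
    return np.concatenate([flat.mean(axis=0), flat.std(axis=0)])

rng = np.random.default_rng(0)
noisy = rng.random((32, 32))
denoised = median3x3(noisy)      # 32x32 -> 30x30 after edge cropping
rgb = rng.random((30, 30, 3))    # stand-in for the RGB-converted slice
feats = color_moments(rgb)       # 6 features, ready for an FF-ANN input layer
```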
Article
In modern neuroscience and clinical study, neuroscientists and clinicians often use non-invasive imaging techniques to validate theories and computational models, observe brain activities, and diagnose brain disorders. Functional magnetic resonance imaging (fMRI) is one of the commonly used imaging modalities for understanding human brain mechanisms as well as for the diagnosis and treatment of brain disorders. Advances in artificial intelligence and the emergence of deep learning techniques have shown promising results for better interpreting fMRI data. Deep learning techniques have rapidly become the state of the art for analyzing fMRI data sets and have resulted in performance improvements in diverse fMRI applications. Deep learning is normally presented as an end-to-end learning process and can alleviate feature engineering requirements, and hence reduce domain knowledge requirements to some extent. Under the framework of deep learning, fMRI data can be considered as images, time series, or image series. Hence, different deep learning models such as convolutional neural networks, recurrent neural networks, or a combination of both can be developed to process fMRI data for different tasks. In this review, we discuss the basics of deep learning methods and focus on their successful implementation for brain disorder diagnosis based on fMRI images. The goal is to provide a high-level overview of brain disorder diagnosis with fMRI images from the perspective of deep learning applications.
Article
Full-text available
We develop an algorithm that can detect pneumonia from chest X-rays at a level exceeding practicing radiologists. Our algorithm, CheXNet, is a 121-layer convolutional neural network trained on ChestX-ray14, currently the largest publicly available chest X-ray dataset, containing over 100,000 frontal-view X-ray images with 14 diseases. Four practicing academic radiologists annotate a test set, on which we compare the performance of CheXNet to that of radiologists. We find that CheXNet exceeds average radiologist performance on pneumonia detection on both sensitivity and specificity. We extend CheXNet to detect all 14 diseases in ChestX-ray14 and achieve state of the art results on all 14 diseases.
Conference Paper
Full-text available
Task-based fMRI (tfMRI) has been widely used to study functional brain networks. Modeling tfMRI data is challenging due to at least two problems: the lack of ground truth for the underlying neural activity and the highly complex intrinsic structure of tfMRI data. To better understand brain networks based on fMRI data, data-driven approaches were proposed, for instance, Independent Component Analysis (ICA) and Sparse Dictionary Learning (SDL). However, both ICA and SDL only build shallow models, and they rest on the strong assumption that the original fMRI signal can be linearly decomposed into time series components with their corresponding spatial maps. As growing evidence shows that human brain function is hierarchically organized, new approaches that can infer and model the hierarchical structure of brain networks are widely called for. Recently, deep convolutional neural networks (CNNs) have drawn much attention, in that deep CNNs have been proven to be a powerful method for learning high-level and mid-level abstractions from low-level raw data. Inspired by the power of deep CNNs, in this study we developed a new neural network structure based on the CNN, called the Deep Convolutional Auto-Encoder (DCAE), in order to take advantage of both the data-driven approach and the CNN's hierarchical feature-abstraction ability for the purpose of learning mid-level and high-level features from complex tfMRI time series in an unsupervised manner. The DCAE has been applied and tested on the publicly available Human Connectome Project (HCP) tfMRI datasets, and promising results are achieved.
Article
Full-text available
Task-based fMRI (tfMRI) has been widely used to study functional brain networks under task performance. Modeling tfMRI data is challenging due to at least two problems: the lack of ground truth for the underlying neural activity and the highly complex intrinsic structure of tfMRI data. To better understand brain networks based on fMRI data, data-driven approaches have been proposed, for instance, Independent Component Analysis (ICA) and Sparse Dictionary Learning (SDL). However, both ICA and SDL only build shallow models, and they rest on the strong assumption that the original fMRI signal can be linearly decomposed into time series components with their corresponding spatial maps. As growing evidence shows that human brain function is hierarchically organized, new approaches that can infer and model the hierarchical structure of brain networks are widely called for. Recently, deep convolutional neural networks (CNNs) have drawn much attention, in that deep CNNs have proven to be a powerful method for learning high-level and mid-level abstractions from low-level raw data. Inspired by the power of deep CNNs, in this study we developed a new neural network structure based on the CNN, called the Deep Convolutional Auto-Encoder (DCAE), in order to take advantage of both the data-driven approach and the CNN's hierarchical feature-abstraction ability for the purpose of learning mid-level and high-level features from complex, large-scale tfMRI time series in an unsupervised manner. The DCAE has been applied and tested on the publicly available Human Connectome Project (HCP) tfMRI datasets, and promising results are achieved.
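The encode-then-decode idea behind a convolutional autoencoder on time series can be illustrated with a single-channel forward pass. The kernel length, stride, and ReLU choice below are assumptions for the sketch; the DCAE itself is a trained, multi-layer model, which this toy does not attempt to reproduce.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv1d(x, w):
    """'Same'-length 1-D convolution of a single-channel signal."""
    return np.convolve(x, w, mode="same")

def encode(x, w, stride=2):
    """Encoder half: convolve, apply ReLU, then downsample by the stride."""
    return np.maximum(conv1d(x, w), 0)[::stride]

def decode(z, w, stride=2):
    """Decoder half: zero-stuff back to full length, then convolve."""
    up = np.zeros(len(z) * stride)
    up[::stride] = z
    return conv1d(up, w)

signal = rng.standard_normal(128)                 # toy tfMRI time series
w_enc, w_dec = rng.standard_normal(5), rng.standard_normal(5)
z = encode(signal, w_enc)     # 128 samples -> 64-sample latent code
x_hat = decode(z, w_dec)      # 64-sample code -> 128-sample reconstruction
```

Training would adjust `w_enc` and `w_dec` to minimize the reconstruction error between `x_hat` and `signal`, which is what lets the latent code act as an unsupervised feature.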
Article
Full-text available
Current fMRI data modeling techniques such as Independent Component Analysis (ICA) and Sparse Coding methods can effectively reconstruct dozens or hundreds of concurrent interacting functional brain networks simultaneously from whole-brain fMRI signals. However, such reconstructed networks have no correspondences across different subjects. Thus, automatic, effective, and accurate classification and recognition of these large numbers of fMRI-derived functional brain networks is very important for subsequent steps of functional brain analysis in cognitive and clinical neuroscience applications. This task remains a challenging and open problem due to the tremendous variability of the various types of functional brain networks and the presence of various sources of noise. In recognition of the fact that convolutional neural networks (CNNs) have a superior capability for representing spatial patterns with huge variability and dealing with large noise, in this paper we design, apply, and evaluate a deep 3D CNN framework for automatic, effective, and accurate classification and recognition of a large number of functional brain networks reconstructed by sparse representation of whole-brain fMRI signals. Our extensive experimental results based on the Human Connectome Project (HCP) fMRI data showed that the proposed deep 3D CNN can effectively and robustly perform functional network classification and recognition tasks, while maintaining a high tolerance for mistakenly labelled training instances. Our work provides a new deep learning approach for modeling functional connectomes based on fMRI data.
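The core operation of the 3D CNN framework above is a 3-D convolution over a volumetric spatial map. A naive numpy version makes the input/output geometry concrete; the volume and kernel sizes are arbitrary illustration values, and a real network would stack many such (learned, multi-channel) layers.

```python
import numpy as np

def conv3d_valid(vol, kernel):
    """Naive 'valid' 3-D convolution: the core op of one 3D CNN layer."""
    d, h, w = kernel.shape
    out = np.zeros((vol.shape[0] - d + 1,
                    vol.shape[1] - h + 1,
                    vol.shape[2] - w + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            for k in range(out.shape[2]):
                # dot product of the kernel with one 3-D neighborhood
                out[i, j, k] = np.sum(vol[i:i + d, j:j + h, k:k + w] * kernel)
    return out

vol = np.ones((8, 8, 8))        # stand-in for a small spatial map volume
kernel = np.ones((3, 3, 3))     # one 3x3x3 filter
out = conv3d_valid(vol, kernel) # (8-3+1)^3 = 6x6x6 feature map
```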
Article
Full-text available
The prevalent approach to sequence to sequence learning maps an input sequence to a variable length output sequence via recurrent neural networks. We introduce an architecture based entirely on convolutional neural networks. Compared to recurrent models, computations over all elements can be fully parallelized during training and optimization is easier since the number of non-linearities is fixed and independent of the input length. Our use of gated linear units eases gradient propagation and we equip each decoder layer with a separate attention module. We outperform the accuracy of the deep LSTM setup of Wu et al. (2016) on both WMT'14 English-German and WMT'14 English-French translation at an order of magnitude faster speed, both on GPU and CPU.
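The gated linear units mentioned in this abstract compute a linear path modulated by a sigmoid gate. A minimal sketch of the GLU itself (without the surrounding convolutional decoder or attention, and with arbitrary toy dimensions) looks like this:

```python
import numpy as np

def glu(x, w_a, w_b):
    """Gated linear unit: GLU(x) = (x @ W_a) * sigmoid(x @ W_b).
    The sigmoid gate scales each output element into a (0, 1) range."""
    a, b = x @ w_a, x @ w_b
    return a * (1.0 / (1.0 + np.exp(-b)))

rng = np.random.default_rng(0)
x = rng.standard_normal((4, 8))     # 4 sequence positions, 8-dim features
w_a = rng.standard_normal((8, 8))
w_b = rng.standard_normal((8, 8))
out = glu(x, w_a, w_b)              # same shape as the ungated linear path
```

Because the gate is bounded in (0, 1) and its gradient path through `a` is purely linear, gradients propagate more easily than through saturating nonlinearities, which is the property the abstract credits for easier optimization.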
Chapter
In previous chapters, we saw what artificial intelligence is and how machine learning and deep learning techniques are used to train machines to become smart. In these next few chapters, we will learn how machines are trained to take data, process it, analyze it, and develop inferences from it.
Article
In recent years, analyzing task-based fMRI (tfMRI) data has become an essential tool for understanding brain function and networks. However, due to the sheer size of tfMRI data, its intrinsically complex structure, and the lack of ground truth for the underlying neural activities, modeling tfMRI data is hard and challenging. Previously proposed data-modeling methods, including Independent Component Analysis (ICA) and Sparse Dictionary Learning, only provided a weakly established model based on blind source separation under the strong assumption that the original fMRI signals could be linearly decomposed into time series components with corresponding spatial maps. Meanwhile, analyzing and learning from a large amount of tfMRI data from a variety of subjects has been shown to be very demanding even with technological advances in computational hardware. Given the Convolutional Neural Network (CNN), a robust method for learning high-level abstractions from low-level data such as tfMRI time series, in this work we propose a fast and scalable novel framework for a distributed deep Convolutional Autoencoder model. This model aims both to learn the complex hierarchical structure of the tfMRI data and to leverage the processing power of multiple GPUs in a distributed fashion. To implement such a model, we have created an enhanced processing pipeline on top of Apache Spark and the TensorFlow library, leveraging a very large cluster of GPU machines. Experimental data from applying the model on the Human Connectome Project (HCP) show that the proposed model is efficient and scalable toward tfMRI big data analytics, thus enabling data-driven extraction of hierarchical neuroscientific information from massive fMRI big data in the future.
Technical Report
In this work we investigate the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting. Our main contribution is a thorough evaluation of networks of increasing depth, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers. These findings were the basis of our ImageNet Challenge 2014 submission, where our team secured the first and the second places in the localisation and classification tracks respectively.
Conference Paper
We trained a large, deep convolutional neural network to classify the 1.2 million high-resolution images in the ImageNet LSVRC-2010 contest into the 1000 different classes. On the test data, we achieved top-1 and top-5 error rates of 37.5% and 17.0% which is considerably better than the previous state-of-the-art. The neural network, which has 60 million parameters and 650,000 neurons, consists of five convolutional layers, some of which are followed by max-pooling layers, and three fully-connected layers with a final 1000-way softmax. To make training faster, we used non-saturating neurons and a very efficient GPU implementation of the convolution operation. To reduce overfitting in the fully-connected layers we employed a recently-developed regularization method called dropout that proved to be very effective. We also entered a variant of this model in the ILSVRC-2012 competition and achieved a winning top-5 test error rate of 15.3%, compared to 26.2% achieved by the second-best entry.
Article
Decoding of human brain activity has always been a primary goal in neuroscience, especially with functional magnetic resonance imaging (fMRI) data. In recent years, the convolutional neural network (CNN) has become a popular method for feature extraction due to its higher accuracy; however, it needs a lot of computation and training data. In this study, an algorithm is developed using multivariate pattern analysis (MVPA) and a modified CNN to decode the behavior of the brain for different images with a limited data set. Selection of significant features is an important part of fMRI data analysis, since it reduces the computational burden and improves the prediction performance; significant features are selected using a t-test. MVPA uses machine learning algorithms to classify different brain states and helps in prediction during the task. A general linear model (GLM) is used to find the unknown parameters of every individual voxel, and the classification is done using a multi-class support vector machine (SVM). The proposed MVPA-CNN based algorithm is compared with a region of interest (ROI) based method and MVPA based estimated values. The proposed method showed better overall accuracy (68.6%) compared to ROI (61.88%) and estimation values (64.17%).
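The t-test feature-selection step described above can be sketched on synthetic data. The paper pairs the selected voxels with a GLM and a multi-class SVM; only the selection step is shown here, and the trial counts, voxel count, and effect size are made-up illustration values.

```python
import numpy as np

def tstat(a, b):
    """Per-voxel two-sample t statistic (pooled, equal-variance form)."""
    na, nb = len(a), len(b)
    sp = np.sqrt(((na - 1) * a.var(0, ddof=1) + (nb - 1) * b.var(0, ddof=1))
                 / (na + nb - 2))
    return (a.mean(0) - b.mean(0)) / (sp * np.sqrt(1.0 / na + 1.0 / nb))

rng = np.random.default_rng(1)
cond_a = rng.standard_normal((20, 1000))   # 20 trials x 1000 voxels, condition A
cond_b = rng.standard_normal((20, 1000))   # same shape, condition B
cond_b[:, :10] += 2.0                      # make 10 voxels genuinely informative
t = tstat(cond_a, cond_b)
selected = np.argsort(-np.abs(t))[:10]     # keep the most discriminative voxels
```

Only `selected` would be passed to the downstream classifier, which is how the t-test reduces the computational burden the abstract mentions.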