
Classification of brain MR images using Modified version of Simplified Pulse-Coupled Neural Network and Linear Programming Twin Support Vector Machines


Abstract and Figures

The automated and accurate detection of brain tumors is challenging for classifying brain Magnetic Resonance (MR) images. The conventional techniques for diagnosing the images are tedious and inefficient in decision making. Therefore, this work proposes an adaptive and non-invasive method for accurately classifying images into pathological and normal brain MR images to overcome these drawbacks. This system uses the Skull Stripping algorithm for removing the non-cerebral tissues. We have developed the Modified version of Simplified Pulse-Coupled Neural Network for segmenting the Region of Interest. The Stationary Wavelet Transform is employed for transforming the image to extract the multiresolution data from the segmented images. The dimensionality of the transformed images is high. Thus, texture- and intensity-based features are extracted from transformed images, and the features of least entropy are selected to make a set of prominent features. Finally, Probabilistic Neural Network and Linear Programming Twin Support Vector Machines with Newton-Armijo algorithm are applied for the classification of images. The validation of the experiments is carried out on the three databases, viz., DB-66, DB-160, and DB-255. The experimental results show that the suggested scheme is robust and effective as compared to other state-of-the-art schemes. The suggested method can assist the radiologists in treatment planning. Hence, the proposed method can effectively classify the brain MR images and be installed on medical machines.
The Journal of Supercomputing (2022) 78:13831–13863
https://doi.org/10.1007/s11227-022-04420-8
Classification ofbrain MR images using Modified version
ofSimplified Pulse‑Coupled Neural Network andLinear
Programming Twin Support Vector Machines
RaviShanker1 · MahuaBhattacharya1
Accepted: 28 February 2022 / Published online: 25 March 2022
© The Author(s), under exclusive licence to Springer Science+Business Media, LLC, part of Springer Nature 2022
Keywords Computer-aided diagnosis system · Skull stripping · Simplified pulse-coupled neural network · Texture features · Probabilistic neural network · Twin support vector machine
Corresponding author: Ravi Shanker (ravis@iiitm.ac.in; rsmiet60@gmail.com)
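The abstract above describes a multi-stage pipeline: skull stripping, segmentation with a modified Simplified Pulse-Coupled Neural Network (SPCNN), Stationary Wavelet Transform, texture/intensity feature extraction with entropy-based selection, and classification with PNN and LP-TWSVM. The paper's modified SPCNN is not reproduced here; the following is a minimal NumPy sketch of a generic simplified PCNN iteration of the kind such segmentation stages build on. All parameter values (`beta`, `alpha_e`, `v_e`, the 3x3 linking kernel, the iteration count) are illustrative choices, not the authors' settings.

```python
import numpy as np
from scipy.ndimage import convolve

def spcnn_segment(img, beta=0.3, alpha_e=0.7, v_e=20.0, iters=10):
    """Generic simplified PCNN iteration (illustrative stand-in, not the
    paper's modified SPCNN). `img` is a 2-D array scaled to [0, 1]."""
    s = img.astype(float)
    # 3x3 linking kernel; the weights are an illustrative choice
    w = np.array([[0.5, 1.0, 0.5],
                  [1.0, 0.0, 1.0],
                  [0.5, 1.0, 0.5]])
    y = np.zeros_like(s)      # pulse output of each neuron (0 or 1)
    e = np.ones_like(s)       # dynamic threshold
    fired = np.zeros_like(s)  # iteration at which each pixel first fired
    for n in range(1, iters + 1):
        l = convolve(y, w, mode="constant")   # linking input from neighbours
        u = s * (1.0 + beta * l)              # internal activity
        y = (u > e).astype(float)             # neurons above threshold fire
        e = np.exp(-alpha_e) * e + v_e * y    # raise threshold where fired, decay elsewhere
        fired[(fired == 0) & (y == 1)] = n
    # pixels that fire in the same iteration form one region; the earliest
    # firing (brightest) group can serve as the region of interest
    return fired

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    img = rng.random((64, 64))
    img[20:40, 20:40] += 1.0          # synthetic bright "lesion"
    labels = spcnn_segment(img / img.max())
    print(np.unique(labels))
```

The key behaviour is that similar-intensity, spatially linked pixels pulse in the same iteration, which is what makes PCNN variants usable for region-of-interest segmentation.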
... High-dimensional images are abundant in medical imaging; hence, it is an arduous task to work with them [3]. MRI images are the most commonly used 3D medical images. ...
... We observed that using the summation operator rather than concatenation increases the performance of our model and saves a significant amount of memory. After retrieving the features from the bottleneck layer, we have converted them into 1 x 96 3 1 and 3.1.3, respectively. ...
Article
Full-text available
Different deep learning-based architectures have been developed for medical image registration in the last few years. The architectures of these methods are complex and require a considerable amount of memory, which limits their scalability on low- or moderate-memory devices. Deformable image registration has attracted particular attention in the field, and the rise of deep learning has paved the way for more sophisticated solutions to many medical imaging problems. We have proposed the resource-efficient and structure-preserving network (RESPNet) to address medical image registration. RESPNet is a convolutional neural network (CNN)-based architecture that predicts the deformation vector field (DVF), which specifies the displacement of each pixel in all directions. We have developed three CNN modules (i.e., StraightCNN, UpCNN, and DownCNN) to reduce the parameter size and preserve the structure of the images. In this paper, the architecture of RESPNet utilizes the structural properties of images for 2D registration of retina images and 3D registration of brain magnetic resonance (MR) images. The proposed architecture requires less than 25% of the memory of current state-of-the-art methods and can be trained in 6-8 hours on a 13 GB GPU. The Dice scores for 2D and 3D image registration are 0.8784 and 0.7515, respectively. The suggested architecture reduces memory requirements by more than 75% and achieves better performance in Dice score, mutual information, and processing time. We have developed a memory-efficient deep learning architecture for medical image registration that can be employed to register 2D and 3D images efficiently.
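The observation quoted earlier about replacing concatenation with summation is easiest to see in code. Below is a minimal PyTorch sketch, not the RESPNet architecture; the block name, channel count, and tensor sizes are arbitrary. Summation keeps the channel count fixed, so the following convolution (and its activations) stays half the size of the concatenation variant.

```python
import torch
import torch.nn as nn

class FuseBlock(nn.Module):
    """Fuses an encoder feature map with a decoder feature map either by
    channel concatenation or by element-wise summation (illustrative only)."""
    def __init__(self, channels=32, mode="sum"):
        super().__init__()
        self.mode = mode
        in_ch = channels if mode == "sum" else 2 * channels
        self.conv = nn.Conv2d(in_ch, channels, kernel_size=3, padding=1)

    def forward(self, enc, dec):
        if self.mode == "sum":
            fused = enc + dec                 # channel count unchanged
        else:
            fused = torch.cat([enc, dec], 1)  # doubles channels -> larger conv and activations
        return self.conv(fused)

enc = torch.randn(1, 32, 64, 64)
dec = torch.randn(1, 32, 64, 64)
out = FuseBlock(mode="sum")(enc, dec)
sum_params = sum(p.numel() for p in FuseBlock(mode="sum").parameters())
cat_params = sum(p.numel() for p in FuseBlock(mode="cat").parameters())
print(out.shape, sum_params, cat_params)  # the "cat" variant roughly doubles the conv parameters
```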
... Throughout the study of karst collapse, the five-in-one technical and theoretical framework system of "genetic mechanism, identification and evaluation, monitoring and alarm, emergency response and risk management" has been formed. So far, however, the problem of "where, when, and how to collapse" has not been well solved [3]. The prevention and control of karst collapse is still a world-class technical problem in the field of geoscience, which is mainly reflected in the insufficient quantification of the genetic mechanism and of the identification and evaluation methods for hidden dangers. ...
... where ω^(3)_li is the weighting coefficient from the hidden layer to the output layer, and g(x) is the activation function of the neurons in the output layer, expressed in the following formula: ...
Article
Full-text available
In order to comprehensively grasp the dynamics of karst collapse, improve the overall level of karst collapse prevention and control, and prevent secondary disasters caused by karst collapse, this study presents a karst collapse early-warning method based on the BP neural network. This method does not need a sliding surface to be set in the finite element calculation model. The stress of the sliding surface is fitted from the spatial stress relationship of the deep karst layer through an improved BP neural network PID control algorithm and the BP neural network algorithm, which avoids modeling and mesh generation of the complex sliding block and offers good accuracy and ease of use. Based on the basic theory of the BP neural network, the calculation formulas of the multilayer feedforward and error back-propagation processes are derived, and two-dimensional and three-dimensional finite element models of gravity dams without and with sliding blocks are established, respectively. Finally, according to the common formulas of the viscoelastic artificial boundary and equivalent load, two-dimensional and three-dimensional input programs for the karst fluid state are compiled, and a neural network early-warning model is obtained. The experimental results show that the karst collapse process simulated by the algorithm is very close to the actual situation, and the minimum value of the anti-sliding coefficient and its occurrence time can be accurately predicted, with an error of less than 3%. In conclusion, the BP neural network can effectively predict karst collapse with high prediction accuracy and can effectively simulate the actual collapse risk.
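The BP network referenced above (and the ω^(3)_li output-layer weights quoted in the citing context) follows the standard multilayer feedforward / error back-propagation scheme. A compact NumPy sketch with one hidden layer is given below; layer sizes, the learning rate, and the toy data are purely illustrative, biases are omitted for brevity, and the paper's PID-augmented variant is not reproduced.

```python
import numpy as np

def g(x):
    """Sigmoid activation, g(x) = 1 / (1 + e^{-x})."""
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)
n_in, n_hid, n_out, lr = 4, 8, 1, 0.1
W2 = rng.normal(scale=0.5, size=(n_hid, n_in))   # input -> hidden weights
W3 = rng.normal(scale=0.5, size=(n_out, n_hid))  # hidden -> output weights (the ω^(3)_li)

# toy training data: predict whether the mean of the inputs exceeds 0.5
X = rng.random((200, n_in))
t = (X.mean(axis=1) > 0.5).astype(float).reshape(-1, 1)

for epoch in range(500):
    # forward pass
    h = g(X @ W2.T)                  # hidden layer outputs
    y = g(h @ W3.T)                  # y_l = g(sum_i ω^(3)_li * h_i)
    # backward pass: error back-propagation for the squared-error loss
    delta_out = (y - t) * y * (1 - y)
    delta_hid = (delta_out @ W3) * h * (1 - h)
    W3 -= lr * delta_out.T @ h / len(X)
    W2 -= lr * delta_hid.T @ X / len(X)

print("training accuracy:", ((y > 0.5) == t).mean())
```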
... Each SDN router has no routing-learning function of its own; it forwards data according to the "flow table" issued by the SDN controller, replacing the IP-address-based forwarding of traditional networks, and its programmability allows the router to take on different functions. Compared with the simple data plane, the centralized control plane is responsible for managing and controlling the entire network and its connections [12][13][14]. The SDN controller communicates with the switches through the control-data-plane interface, and flow table distribution and data upload are both realized through the southbound interface. ...
Article
Full-text available
In order to solve the problem that wireless network delay and a complex operating environment affect the stability and operating performance of a teleoperation system, this paper proposes a method for intelligent robot control based on a software-defined multimedia network. In the software-defined network environment, the control gain of the system is increased according to the network delay to improve the operating performance, and the parameter output is dynamically adjusted to keep the system stable in complex environments. The experimental results show that the robot control system obtains the best control stability by continuously adjusting the relevant parameters. After the simulation tests, the final settings are k_p = 0.8, k_i = 0.001, and k_d = 0. In conclusion, thanks to the intelligence of the gain-scheduling control algorithm, the control effect of fuzzy control can be significantly improved when the network delay is large.
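The gains reported above (k_p = 0.8, k_i = 0.001, k_d = 0) describe a discrete PID law. The sketch below is a minimal Python illustration of such a controller with a delay-dependent gain adjustment; the plant model, the scheduling rule, and the chosen delay are illustrative stand-ins, not the paper's scheme.

```python
class PID:
    """Discrete PID controller: u = kp*e + ki*sum(e)*dt + kd*de/dt."""
    def __init__(self, kp=0.8, ki=0.001, kd=0.0, dt=0.01):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_err = 0.0

    def step(self, setpoint, measured):
        err = setpoint - measured
        self.integral += err * self.dt
        deriv = (err - self.prev_err) / self.dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

def schedule_gain(base_kp, delay_s):
    """Illustrative gain scheduling: raise the proportional gain as the
    measured network delay grows (a stand-in, not the paper's rule)."""
    return base_kp * (1.0 + delay_s)

pid = PID(kp=schedule_gain(0.8, delay_s=0.2))
state = 0.0
for _ in range(200):
    u = pid.step(setpoint=1.0, measured=state)
    state += 0.05 * u            # trivial first-order plant for demonstration
print(round(state, 3))           # should approach the setpoint of 1.0
```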
Article
Full-text available
Due to the severity and great harm of coal and gas outburst accidents, outburst prediction is essential; this paper presents a hybrid feature-extraction and pattern-classification model for predicting coal and gas outbursts. First, the discrete wavelet transform (DWT) is utilized as a preprocessing technique to decompose the series and extract features at different frequencies, and the optimal feature components are retained. Second, in order to eliminate redundancy among the features and the lack of correlation between features and outbursts, fast independent component analysis (FICA) is used to obtain the independent components, capturing the global information in the features. The obtained features are then input into linear discriminant analysis (LDA), which, under the guidance of class labels, extracts the local information in the features. Finally, the projected features are input into a deep extreme learning machine (DELM) classifier, whose parameters are optimized by quantum particle swarm optimization (QPSO), for training and classification. The experimental results on a coal and gas outburst dataset show that, compared with other current outburst-prediction models, this method performs significantly better on various indicators.
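The feature pipeline described above (DWT preprocessing, FICA for redundancy removal, LDA for class-guided projection, then a classifier) can be approximated with standard libraries. Below is a hedged sketch using PyWavelets and scikit-learn on synthetic data; the DELM/QPSO stage is replaced by an ordinary logistic-regression classifier, and every parameter (wavelet, levels, component counts) is illustrative rather than the authors' configuration.

```python
import numpy as np
import pywt
from sklearn.decomposition import FastICA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
signals = rng.normal(size=(120, 256))          # synthetic monitoring signals
labels = rng.integers(0, 2, size=120)          # outburst / no-outburst (synthetic)

# 1) DWT: keep approximation + detail coefficients as features
feats = np.array([np.hstack(pywt.wavedec(s, "db4", level=3)) for s in signals])

# 2) FICA: remove redundancy between features (global information)
ica = FastICA(n_components=20, random_state=0)
feats_ica = ica.fit_transform(feats)

# 3) LDA: class-guided projection (local, discriminative information)
lda = LinearDiscriminantAnalysis(n_components=1)
feats_lda = lda.fit_transform(feats_ica, labels)

# 4) classifier (stand-in for the QPSO-tuned DELM used in the paper)
clf = LogisticRegression().fit(feats_lda, labels)
print("training accuracy:", clf.score(feats_lda, labels))
```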
Article
Full-text available
Brain abnormalities are neurological disorders of the human nervous system that involve biochemical, electrical, and structural changes in the brain and spinal cord. Such changes produce diverse symptoms like paralysis, amnesia, and muscle weakness. Diagnosing these abnormalities at an early stage is crucial for treatment planning to limit the progression of the diseases. Brain Magnetic Resonance (MR) images are extensively used for treatment planning, but manual diagnosis of MR images is a time-consuming, expensive, and cumbersome task. Hence, in this paper, we have proposed an automated Computer-Aided Diagnosis (CAD) system for the classification of brain MR images. The images are skull-stripped to remove irrelevant tissues, which improves image quality. We have developed the Fast version of the Simplified Pulse-Coupled Neural Network (F-SPCNN) to segment the region of interest. Further, features are extracted from the segmented images using the Ripplet Transform (RT). Subsequently, Probabilistic Principal Component Analysis (PPCA) is employed to reduce the dimensionality of the features. Finally, a Twin Support Vector Machine (TWSVM) is applied for the classification of brain MR images. Extensive simulation results on three standard datasets, i.e., DS-66, DS-160, and DS-255, demonstrate that the proposed method achieves better performance than the state-of-the-art methods.
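Both this earlier work and the present paper classify with a Twin Support Vector Machine, which fits one hyperplane per class and assigns a sample to the class whose plane lies nearer. The sketch below implements the least-squares twin SVM variant, which has closed-form solutions, as an illustrative stand-in; it is not the TWSVM or LP-TWSVM (Newton-Armijo) formulation solved in the papers, and the regularization values and synthetic data are arbitrary.

```python
import numpy as np

def lstsvm_fit(A, B, c1=1.0, c2=1.0, reg=1e-6):
    """Least-squares twin SVM: one hyperplane per class (illustrative stand-in).
    A: samples of class +1, B: samples of class -1 (rows are samples)."""
    E = np.hstack([A, np.ones((len(A), 1))])   # [A  e]
    F = np.hstack([B, np.ones((len(B), 1))])   # [B  e]
    I = np.eye(E.shape[1]) * reg
    # plane 1 passes close to class +1 and away from class -1
    z1 = -np.linalg.solve(E.T @ E / c1 + F.T @ F + I, F.T @ np.ones(len(B)))
    # plane 2 passes close to class -1 and away from class +1
    z2 = np.linalg.solve(F.T @ F / c2 + E.T @ E + I, E.T @ np.ones(len(A)))
    return z1, z2   # each is (w, b) stacked

def lstsvm_predict(X, z1, z2):
    Xe = np.hstack([X, np.ones((len(X), 1))])
    d1 = np.abs(Xe @ z1) / np.linalg.norm(z1[:-1])   # distance to plane 1
    d2 = np.abs(Xe @ z2) / np.linalg.norm(z2[:-1])   # distance to plane 2
    return np.where(d1 <= d2, 1, -1)                 # the nearer plane wins

rng = np.random.default_rng(0)
A = rng.normal(loc=+1.5, size=(50, 2))   # synthetic "pathological" features
B = rng.normal(loc=-1.5, size=(50, 2))   # synthetic "normal" features
z1, z2 = lstsvm_fit(A, B)
X = np.vstack([A, B])
y = np.hstack([np.ones(50), -np.ones(50)])
print("training accuracy:", (lstsvm_predict(X, z1, z2) == y).mean())
```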
Article
Full-text available
Image texture extraction and analysis are fundamental steps in computer vision. In particular, considering the biomedical field, quantitative imaging methods are increasingly gaining importance because they convey scientifically and clinically relevant information for prediction, prognosis, and treatment response assessment. In this context, radiomic approaches are fostering large-scale studies that can have a significant impact in the clinical practice. In this work, we present a novel method, called CHASM (Cuda, HAralick & SoM), which is accelerated on the graphics processing unit (GPU) for quantitative imaging analyses based on Haralick features and on the self-organizing map (SOM). The Haralick features extraction step relies upon the gray-level co-occurrence matrix, which is computationally burdensome on medical images characterized by a high bit depth. The downstream analyses exploit the SOM with the goal of identifying the underlying clusters of pixels in an unsupervised manner. CHASM is conceived to leverage the parallel computation capabilities of modern GPUs. Analyzing ovarian cancer computed tomography images, CHASM achieved up to ~19.5× and ~37× speed-up factors for the Haralick feature extraction and for the SOM execution, respectively, compared to the corresponding C++ coded sequential versions. Such computational results point out the potential of GPUs in clinical research.
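Haralick features are scalar statistics of the gray-level co-occurrence matrix (GLCM). The sketch below is a plain NumPy CPU reference for a single-offset GLCM and a few of the classic features; it is not the CUDA-accelerated CHASM implementation, and the offset and gray-level count are illustrative.

```python
import numpy as np

def glcm(img, levels=8, dx=1, dy=0):
    """Symmetric, normalized co-occurrence matrix for one pixel offset."""
    q = np.floor(img.astype(float) / img.max() * (levels - 1)).astype(int)
    m = np.zeros((levels, levels))
    h, w = q.shape
    for i in range(h - dy):
        for j in range(w - dx):
            m[q[i, j], q[i + dy, j + dx]] += 1   # count gray-level pairs
    m = m + m.T                                  # make symmetric
    return m / m.sum()

def haralick(p):
    """A few classic Haralick features of a normalized GLCM p."""
    i, j = np.indices(p.shape)
    contrast = np.sum(p * (i - j) ** 2)
    energy = np.sum(p ** 2)
    homogeneity = np.sum(p / (1.0 + np.abs(i - j)))
    entropy = -np.sum(p[p > 0] * np.log2(p[p > 0]))
    return {"contrast": contrast, "energy": energy,
            "homogeneity": homogeneity, "entropy": entropy}

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(64, 64))
print(haralick(glcm(img)))
```

The nested loops over pixel pairs are exactly the part that a GPU implementation parallelizes, which is why the co-occurrence step benefits so much from CUDA on high-bit-depth images.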
Article
Full-text available
Classification of brain tumors is of great importance in medical applications that benefit from computer-aided diagnosis. Misdiagnosis of brain tumor type will both prevent the patient from responding effectively to the applied treatment and decrease the patient’s chances of survival. In this study, we propose a solution for classifying brain tumors in MR images using transfer learning networks. The most common brain tumors are detected with VGG16, VGG19, ResNet50 and DenseNet21 networks using transfer learning. Deep transfer learning networks are trained and tested using four different optimization algorithms (Adadelta, ADAM, RMSprop and SGD) on the accessible Figshare dataset containing 3064 T1-weighted MR images from 233 patients with three common brain tumor types: glioma (1426 images), meningioma (708 images) and pituitary (930 images). The area under the curve (AUC) and accuracy metrics were used as performance measures. The proposed transfer learning methods have a level of success that can be compared with studies in the literature; the highest classification performance is 99.02% with ResNet50 using Adadelta. The classification result proved that the most common brain tumors can be classified with very high performance. Thus, the transfer learning model is promising in medicine and can help doctors make quick and accurate decisions.
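Transfer learning of the kind described replaces the final classification layer of a network pre-trained on ImageNet and fine-tunes it on the tumor classes. Below is a minimal PyTorch/torchvision sketch for the three-class case (ResNet50 head only); the dataset loading, the Adadelta schedule, and all other training details of the cited study are omitted, and the batch shown is a dummy tensor rather than real MR data.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet50, ResNet50_Weights

# start from ImageNet weights (downloaded on first use) and freeze the backbone
model = resnet50(weights=ResNet50_Weights.DEFAULT)
for p in model.parameters():
    p.requires_grad = False

# replace the 1000-way ImageNet head with a 3-way tumor head
# (glioma / meningioma / pituitary)
model.fc = nn.Linear(model.fc.in_features, 3)

optimizer = torch.optim.Adadelta(model.fc.parameters())
criterion = nn.CrossEntropyLoss()

# one illustrative training step on a dummy batch (real code would iterate
# over a DataLoader of T1-weighted MR images)
images = torch.randn(4, 3, 224, 224)
targets = torch.tensor([0, 1, 2, 1])
optimizer.zero_grad()
loss = criterion(model(images), targets)
loss.backward()
optimizer.step()
print(float(loss))
```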
Article
Full-text available
MR brain tumor classification is one of the most widely used approaches in medical prognosis. However, analyzing and processing MR brain images is still quite a task for radiologists. To address this problem, existing canonical techniques have already been evaluated, and numerous MR brain tumor classification approaches are being used for medical diagnosis. In this paper, we have developed an automated computer-aided network for diagnosing the MR brain tumor class, i.e., HGG or LGG. We have proposed a Gabor-modulated convolutional filter-based classifier for brain tumor classification. The inclusion of Gabor filter dynamics endows the network with the competency to deal with spatial and orientational transformations. This mere modification (modulation) of conventional convolutional filters by Gabor filters empowers the proposed architecture to learn relatively smaller feature maps, thereby decreasing the network parameter requirement. We have introduced some skip connections to our modulated CNN architecture without introducing extra network parameters. Pre-trained networks, i.e., Alex-Net, Google-Net (Inception V1), Res-Net, and VGG 19, have been considered for the performance evaluation of our proposed Gabor-modulated CNN. Additionally, some popular machine learning classification techniques have also been considered for comparative analysis. Experimental findings demonstrate that our proposed network has few parameters to learn; therefore, it is quite easy to train.
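Gabor modulation multiplies each learned convolutional kernel element-wise by a fixed Gabor kernel of a given orientation, so one set of learned weights yields several orientation-tuned filters. The NumPy sketch below illustrates only that idea; the kernel size and Gabor parameters are arbitrary, and this is not the cited network.

```python
import numpy as np

def gabor_kernel(size=5, theta=0.0, sigma=2.0, lam=4.0, gamma=0.5):
    """Real part of a Gabor kernel at orientation theta (radians)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    return np.exp(-(xr**2 + gamma**2 * yr**2) / (2 * sigma**2)) * np.cos(2 * np.pi * xr / lam)

rng = np.random.default_rng(0)
learned = rng.normal(size=(5, 5))                 # one learned 5x5 conv filter
orientations = [0, np.pi / 4, np.pi / 2, 3 * np.pi / 4]

# modulation: element-wise product of the learned filter with each Gabor kernel,
# producing four orientation-tuned filters from a single set of learned weights
modulated = np.stack([learned * gabor_kernel(theta=t) for t in orientations])
print(modulated.shape)   # (4, 5, 5)
```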
Article
Full-text available
The benefits of Artificial Intelligence (AI) in medicine are unquestionable and it is unlikely that the pace of its development will slow down. From better diagnosis, prognosis, and prevention to more precise surgical procedures, AI has the potential to offer unique opportunities to enhance patient care and improve clinical practice overall. However, at this stage of AI technology development it is unclear whether it will de-humanize or re-humanize medicine. Will AI allow clinicians to spend less time on administrative tasks and technology related procedures and more time being present in person to attend to the needs of their patients? Or will AI dramatically increase the presence of smart technology in the clinical context to a point of undermining the humane dimension of the patient–physician relationship? In this brief commentary, we argue that technological solutions should be only integrated into clinical medicine if they fulfill the following three conditions: (1) they serve human ends; (2) they respect personal identity; and (3) they promote human interaction. These three conditions form the moral imperative of humanity.
Article
Full-text available
With the increasing number of cases and rising care costs, Alzheimer's disease has gained growing interest in several scientific communities, especially medicine and computer science. Clinical and analytical tests are widely accepted techniques for detecting Alzheimer cases, but early detection can help prevent damage to brain tissue and allow it to be treated appropriately. Interpreting brain images is a time-consuming and highly error-prone task. Recently, advanced machine learning methods have demonstrated high performance in various fields, including brain image analysis. These existing techniques, which are increasingly used for clinical disease detection, remain prone to errors when detecting aberrant values or areas in the human brain. We conducted this work to automate the detection of damaged areas and the diagnosis of Alzheimer's disease. Our method can segment MRI images and identify brain lesions and the different stages of Alzheimer's disease. We evaluated our method on ample cases from public databases to demonstrate that it produces reliable and effective results. Our proposal achieved an accuracy of 94.73%, a recall of 93.82%, and an F1-score of 92.8%. The detection precision reached 91.76%, with a sensitivity of 92.48% and a specificity of 90.64%. Our method offers an important way to optimize the imaging process via automated computer-assisted diagnosis using deep learning, increasing the consistency and accuracy of Alzheimer's disease diagnosis worldwide.
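The evaluation metrics listed above all derive from the binary confusion matrix (note that recall and sensitivity are the same quantity). The short NumPy sketch below computes them from predicted and true labels on toy data; the labels shown are made up purely for illustration.

```python
import numpy as np

def binary_metrics(y_true, y_pred):
    """Accuracy, precision, recall/sensitivity, specificity and F1 from a
    binary confusion matrix (1 = diseased, 0 = healthy)."""
    tp = np.sum((y_true == 1) & (y_pred == 1))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)            # identical to sensitivity
    return {
        "accuracy": (tp + tn) / (tp + tn + fp + fn),
        "precision": precision,
        "recall_sensitivity": recall,
        "specificity": tn / (tn + fp),
        "f1": 2 * precision * recall / (precision + recall),
    }

y_true = np.array([1, 1, 1, 0, 0, 0, 1, 0])
y_pred = np.array([1, 1, 0, 0, 0, 1, 1, 0])
print(binary_metrics(y_true, y_pred))
```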
Article
Machine learning has become the state-of-the-art technique for many tasks, including computer vision, natural language processing, and speech processing. However, the unique challenges posed by machine learning suggest that incorporating user knowledge into the system can be beneficial. The purpose of integrating human domain knowledge is also to promote the automation of machine learning. Human-in-the-loop is an area that we see as increasingly important in future research, because the knowledge learned by machine learning cannot yet match human domain knowledge. Human-in-the-loop aims to train an accurate prediction model at minimum cost by integrating human knowledge and experience. Humans can provide training data for machine learning applications and, with the help of machine-based approaches, directly accomplish tasks in the pipeline that are hard for computers. In this paper, we survey existing works on human-in-the-loop from a data perspective and classify them into three categories with a progressive relationship: (1) work that improves model performance through data processing, (2) work that improves model performance through interventional model training, and (3) the design of system-independent human-in-the-loop. Using this categorization, we summarize the major approaches in the field along with their technical strengths and weaknesses, and provide a brief classification and discussion for natural language processing, computer vision, and other domains. Besides, we present some open challenges and opportunities. This survey intends to provide a high-level summarization of human-in-the-loop and to motivate interested readers to consider approaches for designing effective human-in-the-loop solutions.
Article
Images captured from a distance often result in (very) low resolution (VLR/LR) regions of interest that require automated identification. VLR/LR images (or regions of interest) often contain little information, rendering feature extraction and classification ineffective. To this end, this research proposes a novel DeriveNet model for VLR/LR classification, which focuses on learning effective class boundaries by utilizing class-specific domain knowledge. The DeriveNet model is jointly trained via two losses: (i) the proposed Derived-Margin softmax loss and (ii) the proposed Reconstruction-Center (ReCent) loss. The Derived-Margin softmax loss focuses on learning an effective VLR classifier while explicitly modeling the inter-class variations. The ReCent loss incorporates domain information by learning an HR reconstruction space for approximating the class variations of the VLR/LR samples. It is utilized to derive inter-class margins for the Derived-Margin softmax loss. The DeriveNet model has been trained with a novel multi-resolution pyramid-based data augmentation, which enables the model to learn from varying resolutions during training. Experiments and analyses have been performed on multiple datasets for (i) VLR/LR face recognition, (ii) VLR digit classification, and (iii) VLR/LR face recognition from drone-shot videos. The DeriveNet model achieves state-of-the-art performance across different datasets, thus demonstrating its utility for several VLR/LR classification tasks.
Article
Fully automatic deep learning has become the state-of-the-art technique for many tasks including image acquisition, analysis and interpretation, and for the extraction of clinically useful information for computer-aided detection, diagnosis, treatment planning, intervention and therapy. However, the unique challenges posed by medical image analysis suggest that retaining a human end-user in any deep learning enabled system will be beneficial. In this review we investigate the role that humans might play in the development and deployment of deep learning enabled diagnostic applications and focus on techniques that will retain a significant input from a human end user. Human-in-the-Loop computing is an area that we see as increasingly important in future research due to the safety-critical nature of working in the medical domain. We evaluate four key areas that we consider vital for deep learning in the clinical practice: (1) Active Learning to choose the best data to annotate for optimal model performance; (2) Interaction with model outputs - using iterative feedback to steer models to optima for a given prediction and offering meaningful ways to interpret and respond to predictions; (3) Practical considerations - developing full scale applications and the key considerations that need to be made before deployment; (4) Future Prospective and Unanswered Questions - knowledge gaps and related research fields that will benefit human-in-the-loop computing as they evolve. We offer our opinions on the most promising directions of research and how various aspects of each area might be unified towards common goals.