Chapter

Integration of Machine Learning and Optimization Techniques for Cardiac Health Recognition


Abstract

Cardiovascular disease (CVD) remains the leading cause of illness and death worldwide despite tremendous progress in diagnosis and treatment. Artificial intelligence (AI) technology can drastically change the way we practice cardiology and thereby enhance and optimize CVD outcomes. With the growth of information technology, the increasing volume and complexity of data, and the large number of optimization problems that arise in clinical settings, AI approaches such as machine learning and optimization have become extremely popular. AI can also augment medical expertise by uncovering clinically important information. Early on, processing vast amounts of medical data was a significant challenge, which drove the adaptation of machine learning to the biomedical field. Machine learning algorithms are improved and tested every day so that data may be analyzed and reported more accurately. Machine learning has been active throughout healthcare, from extracting information from medical papers to predicting and diagnosing disease. In this perspective, this chapter provides an overview of how to use metaheuristic algorithms in the CVD classification process to enhance feature selection and to optimize various parameters.

Keywords: Feature selection · Metaheuristic algorithms · Cloud · CVD · Engineering design problems
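The chapter's core recipe, a metaheuristic wrapper that searches binary feature masks and scores them with a classifier, can be sketched in a few lines. The fitness weighting (alpha), the toy error function, and the random-search stand-in for a real metaheuristic below are all illustrative assumptions, not the chapter's actual method.

```python
import random

def fitness(mask, error_rate, alpha=0.99):
    """Wrapper-style fitness: weighted classification error plus feature ratio."""
    n_selected = sum(mask)
    if n_selected == 0:
        return float("inf")          # an empty feature subset is invalid
    return alpha * error_rate + (1 - alpha) * n_selected / len(mask)

def random_search(n_features, evaluate, iterations=200, seed=1):
    """Stand-in for a metaheuristic: sample binary masks, keep the best."""
    rng = random.Random(seed)
    best_mask, best_fit = None, float("inf")
    for _ in range(iterations):
        mask = [rng.randint(0, 1) for _ in range(n_features)]
        fit = fitness(mask, evaluate(mask))
        if fit < best_fit:
            best_mask, best_fit = mask, fit
    return best_mask, best_fit

# Hypothetical evaluator: only features 0 and 2 are informative, so the
# classification error drops when both are kept.
def toy_error(mask):
    return 0.05 if (mask[0] and mask[2]) else 0.40

best, fit = random_search(5, toy_error)
```

A real implementation would swap `random_search` for a population-based optimizer (PSO, HHO, etc.) and `toy_error` for cross-validated classifier error on a CVD dataset.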


... The diagnosis and management of cardiovascular disease (CVD)-related sickness and death were covered in [6]. Cardiology uses artificial intelligence (AI) technologies to improve and optimize CVD outcomes, and the data's complexity is reduced through optimization and machine learning [6]. This boosting procedure increases data volume and complexity to improve optimization and the machine-learning-based retrieval of therapeutically beneficial information. ...
Article
Full-text available
The rapid increase in Internet technology and machine-learning devices has opened up new avenues for online healthcare systems. Sometimes, getting medical assistance or healthcare advice online is easier than getting it in person. For mild symptoms, people frequently feel reluctant to visit the hospital or a doctor; instead, they post their questions on numerous healthcare forums. However, predictions may not always be accurate, and there is no assurance that users will always receive a reply to their posts. In addition, some posts are made up, which can misdirect the patient. To address these issues, automatic online prediction (OAP) is proposed. OAP employs machine learning to predict the common attributes of disease using a Never-Ending Image Learner with an intelligent analysis of disease factors. The Never-Ending Image Learner predicts disease factors by selecting from finite image data with minimum structural risk and efficiently predicting real-time images via machine-learning-enabled M-theory. The proposed multi-access edge computing platform works with machine-learning-assisted automatic prediction from multiple images using multiple-instance learning. The method stores images deeply, with their data kept according to isotropic positioning. The proposed method was compared with existing approaches, such as multiple-instance learning for automated image indexing and hyper-spectral image classification. By applying machine learning to multiple images with isotropic positioning, operating efficiency is improved and results are predicted with better accuracy. Finally, machine-learning performance metrics for online automatic prediction tools are compiled and compared, and through this survey the proposed method is shown to achieve higher accuracy, proving its efficiency compared with the existing methods.
... The importance of healthcare data for data analysis has increased. This includes data on prescriptions and supplies, patients, healthcare professionals, and the linked companies in charge of insurance or other financial operations [67]. Healthcare industry information is scattered, however. ...
Article
As healthcare data becomes increasingly available from various sources, including clinical institutions, patients, insurance companies, and pharmaceutical industries, machine learning (ML) services are becoming more significant in healthcare-facing domains. Therefore, it is imperative to ensure the integrity and reliability of ML models to maintain the quality of healthcare services. Particularly due to the growing need for privacy and security, healthcare data has resulted in each Internet of Things (IoT) device being treated as an independent source of data, isolated from other devices. Moreover, the limited computational and communication capabilities of wearable healthcare devices hinder the applicability of traditional ML. Federated Learning (FL) is a paradigm that maintains data privacy by storing only learned models on a server and advances with data from scattered clients, making it ideal for healthcare applications where patient data must be safeguarded. The potential of FL to transform healthcare is significant, as it can enable the development of new ML-powered applications that can enhance the quality of care, lower costs, and improve patient outcomes. However, the accuracy of current Federated Learning aggregation methods suffers greatly in unstable network situations due to the high volume of weights transmitted and received. To address this issue, we propose an alternative approach to Federated Average (FedAvg) that updates the global model by gathering score values from learned models primarily utilized in Federated Learning, using an improved version of Particle Swarm Optimization (PSO) called FedImpPSO. This approach boosts the robustness of the algorithm in erratic network conditions. To further enhance the speed and efficiency of data exchange within a network, we modify the format of the data clients send to servers using the FedImpPSO method. 
The proposed approach is evaluated using the CIFAR-10 and CIFAR-100 datasets and a Convolutional Neural Network (CNN). We found that it yielded an average accuracy improvement of 8.14% over FedAvg and 2.5% over Federated PSO (FedPSO). This study evaluates the use of FedImpPSO in healthcare by training a deep-learning model over two case studies to evaluate the effectiveness of our approach in healthcare. The first case study involves the classification of COVID-19 using public datasets (Ultrasound and X-ray) and achieved an F1-measure of 77.90% and 92.16%, respectively. The second case study was conducted over the cardiovascular dataset, where our proposed FedImpPSO achieves 91.18% and 92% accuracy in predicting the existence of heart diseases. As a result, our approach demonstrates the effectiveness of using FedImpPSO to improve the accuracy and robustness of Federated Learning in unstable network conditions and has potential applications in healthcare and other domains where data privacy is critical.
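The bandwidth-saving idea behind score-based aggregation can be shown in a few lines: each client reports only a scalar score per round, and the server fetches full weights from the best-scoring client instead of averaging everyone's weights as FedAvg does. This is a simplified illustration of the FedPSO family of methods, not the paper's FedImpPSO update; the client names and loss-as-score choice are assumptions.

```python
def select_global_model(client_scores, fetch_weights):
    """Pick the client with the lowest reported loss, then pull its weights."""
    best_client = min(client_scores, key=client_scores.get)
    return best_client, fetch_weights(best_client)

# One round: clients upload scalar validation losses, not weight tensors.
scores = {"client_a": 0.42, "client_b": 0.31, "client_c": 0.55}
weights_store = {"client_a": [0.1], "client_b": [0.2], "client_c": [0.3]}
winner, global_weights = select_global_model(scores, weights_store.get)
```

Only one full weight transfer happens per round, which is why such schemes degrade more gracefully than FedAvg on unstable networks.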
Article
Full-text available
Purpose: The problems of big data analytics and healthcare support systems are analyzed. Several techniques exist to support such analytics and robust support systems, yet they fail to achieve high performance in disease prediction and analysis generation. For a hospital unit, maintaining such massive data becomes a burden. Nevertheless, big data can be used to analyze the bio-signals obtained from the human body for the detection and prediction of various diseases. To overcome this deficiency, an efficient Health Care Big Data Analytics (HCBDA) model is presented, which maintains a massive volume of data on the data server.
Methods: The HCBDA model monitors patients' current cardiac and anatomic conditions to predict diseases and risks. To perform healthcare analysis, the model accesses the data location by discovering the possible routes to reach the source. The monitored results on blood pressure, temperature, and blood sugar are transferred through the list of available routes. The network is constructed from sensor nodes and Internet of Things (IoT) devices, where the sensor attached to the patient initiates transmission of the monitored results. These results pass through a number of intermediate nodes to the monitoring system, which accesses the big data to generate intelligence. Route selection is performed according to the values of Trusted Forwarding Weight (TFW) and Trusted Carrier Weight (TCW). At each reception, features are extracted from the packet and the obtained values are fed to the decisive support system, which clusters the big data using the FDS clustering algorithm and performs classification by measuring the feature-disease class similarity (FDCS). According to the class identified, the method calculates a Disease Prone Weight (DPW) to generate recommendations for the medical practitioner.
Results: This Health Care Big Data Analytics (HCBDA) paradigm delivers patient-centered healthcare using wireless sensor networks and IoT devices. The patient's bio-signals are monitored in order to provide medical assistance. In comparison with previous methods, the proposed approach achieves disease prediction accuracy of up to 96%.
Conclusion: Routes are determined by the values of Trusted Forwarding Weight (TFW) and Trusted Carrier Weight (TCW). Sensor-based IoT readings such as blood pressure, glucose, pulse oximetry, and temperature are used, and parameters such as classification accuracy and false ratio are calculated with an efficient machine learning model. The decisive support system receives the extracted packet features at each reception, clusters the big data using the FDS clustering technique, and performs classification by calculating the feature-disease class similarity.
Article
Full-text available
Harris Hawk Optimization (HHO) is a bio-inspired metaheuristic modeled on Harris hawks' pack hunting. Although it has provided competitive results for some optimization problems in science and engineering, HHO has weaknesses on highly multimodal and high-dimensional optimization problems. In this article, we propose a new metaheuristic, Harris Hawk Optimization Encirclement-Attack-Synergy (HHO-EAS), with the ambition of obtaining better capabilities than HHO in solving highly multimodal and high-dimensional optimization problems. Our hybridization strategy is entirely bio-inspired by a win-win hunting synergy between two predators during extremely difficult winter periods: the crow and the wolf. The smart exploratory faculties of crows, combined with the ability of wolves to capture prey larger than themselves with speed and efficiency, allow these two predators to detect and catch good prey that is very rare and very difficult to hunt in harsh winters. To mathematically model this win-win hunting synergy in the encirclement and attack equations and integrate it into HHO, we used fuzzy logic to create a Mamdani-type fuzzy inference system (FIS). HHO-EAS was tested first against HHO, GWO, and PSO on a general benchmark of 19 well-known functions, and second against HHO on a specific benchmark of the 20 most complex functions of CEC 2017. The experimental results obtained on these two benchmarks demonstrate the superiority of HHO-EAS over HHO on highly multimodal and high-dimensional optimization problems and validate our fully bio-inspired hybridization strategy.
Article
Full-text available
Growing science and medical technologies have produced a massive amount of knowledge on different scales of biological systems. By processing various amounts of medical data, these technologies increase the quality of disease detection and enhance the usability of health information systems. The integration of machine learning in computer-based diagnostic systems facilitates the early detection of diseases, enabling more productive treatments and prolonged survival rates. The slime mould algorithm (SMA) may have drawbacks, such as being trapped in minimal local regions and having an unbalanced exploitation and exploration phase. To overcome these limitations, this paper proposes ISMA, an improved version of the slime mould algorithm (SMA) hybridized with the opposition-based learning (OBL) strategy, built on the k-nearest neighbor (kNN) classifier for the classification approach. Opposition-based learning improves global exploratory ability while avoiding premature convergence. The experimental results revealed the superiority of the proposed ISMA-kNN in various classification evaluation metrics, including accuracy, sensitivity, specificity, precision, F-score, G-mean, computational time, and feature selection (FS) size, compared with the tunicate swarm algorithm (TSA), the marine predators algorithm (MPA), the chimp optimization algorithm (ChOA), the moth-flame optimization (MFO) algorithm, the whale optimization algorithm (WOA), the sine cosine algorithm (SCA), and the original SMA algorithm. Performance tests were run with the same maximum number of function evaluations (FEs) on nine UCI benchmark disease data sets with different feature sizes. Index terms: Medical classification, feature selection (FS), machine learning (ML), slime mould algorithm (SMA), opposition-based learning (OBL).
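The opposition-based learning step that ISMA adds to SMA is simple to state: for each candidate x in bounds [lb, ub], also evaluate its opposite lb + ub - x and keep whichever scores better. A minimal sketch; the sphere objective and the sample points are illustrative assumptions.

```python
def opposite(x, lb, ub):
    """Element-wise opposite point: lb + ub - x."""
    return [lo + hi - xi for xi, lo, hi in zip(x, lb, ub)]

def obl_step(population, objective, lb, ub):
    """Keep the better of each candidate and its opposite (minimization)."""
    improved = []
    for x in population:
        x_opp = opposite(x, lb, ub)
        improved.append(min(x, x_opp, key=objective))
    return improved

# Minimize the sphere function on [0, 5]^2; [4, 3]'s opposite [1, 2] is better.
sphere = lambda x: sum(v * v for v in x)
pop = [[4.0, 3.0], [0.5, 0.5]]
new_pop = obl_step(pop, sphere, lb=[0.0, 0.0], ub=[5.0, 5.0])
```

In the full algorithm this step is applied at initialization or per iteration to diversify the population without extra random sampling.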
Chapter
Full-text available
With developments in technology, many other technologies such as machine learning (ML), deep learning, blockchain, the Internet of Things, and quantum computing have emerged in the current era. These technologies help human beings live their lives comfortably and without hurdles; today, technology helps humans and protects nature with minimum waste of available, limited resources. Among these inventions, ML and deep learning are two that have attracted many researchers and research communities to solve complex problems. ML has moved into many sectors to increase business productivity, for example in retail and marketing, customer churn prediction, e-healthcare, and early-stage disease detection. Deep learning has likewise grown in importance over ML in many applications such as bioinformatics, health informatics, image and handwriting recognition, and audio recognition. Many researchers face a problematic scenario when they are unsure whether to use machine or deep learning. This work addresses such requirements and provides complete details about ML and deep learning: their evolution to forefront use, their applications, their benefits to society, and the challenges and potential limitations of the respective learning techniques.
Article
Full-text available
Coronavirus disease 2019 (COVID-19) is pervasive worldwide, posing a high risk to people's safety and health. Many algorithms have been developed to identify COVID-19. One way of identifying COVID-19 is through computed tomography (CT) images, and some segmentation methods have been proposed to extract regions of interest from COVID-19 CT images to improve classification. In this paper, an efficient version of the recent manta ray foraging optimization (MRFO) algorithm, based on opposition-based learning and called the MRFO-OBL algorithm, is proposed. The original MRFO algorithm can stagnate in local optima and requires further exploration with adequate exploitation. Thus, to improve population variety in the search space, we applied opposition-based learning (OBL) in the MRFO's initialization step. The MRFO-OBL algorithm solves the image segmentation problem using multilevel thresholding. The proposed MRFO-OBL is evaluated using Otsu's method over the COVID-19 CT images and compared with six metaheuristic algorithms: the sine cosine algorithm, moth-flame optimization, equilibrium optimization, the whale optimization algorithm, the salp swarm algorithm, and the original MRFO algorithm. MRFO-OBL obtained useful and accurate results in quality, consistency, and evaluation metrics such as the peak signal-to-noise ratio and structural similarity index, and proved more robust for segmentation than all the compared algorithms. The experimental results demonstrate that the proposed method outperforms the original MRFO and the other compared algorithms under Otsu's method for all the metrics used.
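Otsu's criterion, the objective such segmentation metaheuristics optimize, picks thresholds that maximize the between-class variance of the image histogram. The sketch below does the single-threshold case by exhaustive search over a toy 8-level histogram (an assumption for brevity; MRFO-OBL searches several thresholds at once, where exhaustive search becomes impractical).

```python
def otsu_threshold(hist):
    """Return the threshold t maximizing between-class variance of hist."""
    total = sum(hist)
    best_t, best_var = 0, -1.0
    for t in range(1, len(hist)):
        w0 = sum(hist[:t]) / total          # weight of the dark class
        w1 = 1.0 - w0                       # weight of the bright class
        if w0 == 0 or w1 == 0:
            continue
        mu0 = sum(i * hist[i] for i in range(t)) / (w0 * total)
        mu1 = sum(i * hist[i] for i in range(t, len(hist))) / (w1 * total)
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_t, best_var = t, var_between
    return best_t

# Bimodal toy histogram: dark pixels cluster near level 1, bright near level 6.
hist = [5, 40, 8, 1, 1, 9, 35, 6]
t = otsu_threshold(hist)
```

For k thresholds over 256 gray levels, the search space grows combinatorially, which is exactly where a metaheuristic replaces the exhaustive loop.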
Article
Full-text available
The electrocardiogram (ECG) is a non-invasive tool used to diagnose various heart conditions. Arrhythmia is one of the primary causes of cardiac arrest, and early ECG beat classification plays a significant role in diagnosing life-threatening cardiac arrhythmias. However, the ECG signal is very weak, its anti-interference capability is low, and it is easily affected by noise, so clinicians face challenges in diagnosing arrhythmias. A method to automatically identify and distinguish arrhythmias from the ECG signal is therefore invaluable. In this paper, a hybrid approach based on the marine predators algorithm (MPA) and a convolutional neural network (CNN), called MPA-CNN, is proposed to classify the non-ectopic, ventricular ectopic, supraventricular ectopic, and fusion ECG types of arrhythmia. The proposed approach combines heavy feature extraction and classification techniques and hence outperforms other existing classification approaches. Optimal characteristics were derived directly from the raw signal to decrease the time required for, and complexity of, the computation. Precision levels of 99.31%, 99.76%, and 99.47% were achieved by the proposed approach on the MIT-BIH, EDB, and INCART databases, respectively.
Article
Full-text available
Economic load dispatch (ELD) problems with nonlinear characteristics select an optimal combination of power generating units in order to minimize the total cost through economic allocation of the power produced, together with the emission cost. Optimal allocation that considers both fuel cost and emission leads to Combined Economic and Emission Dispatch (CEED). This study presents a new metaheuristic algorithm (MH) called Turbulent Flow of Water-based Optimization (TFWO), based on the behaviour of whirlpools created in turbulent water flow, for solving different variants of ELD and CEED. To verify the robustness of the TFWO, various test networks of CEED with valve-point effects and of ELD with transmission losses are incorporated. In comparison with seven well-known MHs, namely the Cuckoo Search Algorithm (CSA), Grey Wolf Optimizer (GWO), Sine Cosine Algorithm (SCA), Earthworm Optimization Algorithm (EWA), Tunicate Swarm Algorithm (TSA), Moth Search Algorithm (MSA), and Teaching-Learning-Based Optimization (TLBO), the TFWO provides the minimum fuel cost and significantly robust solutions to the ELD problem over all tested networks. The results confirm the potential and effectiveness of the TFWO as a promising technique for solving various ELD problems.
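The objective these ELD solvers minimize is the standard quadratic fuel-cost function, sum of a_i + b_i*P_i + c_i*P_i^2 over units, subject to generation meeting demand. A minimal sketch with made-up coefficients; transmission losses and valve-point effects are omitted here.

```python
def fuel_cost(p, coeffs):
    """Total fuel cost for outputs p given per-unit (a, b, c) coefficients."""
    return sum(a + b * pi + c * pi * pi for pi, (a, b, c) in zip(p, coeffs))

def demand_violation(p, demand):
    """Power-balance constraint residual (losses ignored in this sketch)."""
    return abs(sum(p) - demand)

# Illustrative two-generator system; coefficients and dispatch are made up.
coeffs = [(100, 2.0, 0.008), (120, 1.5, 0.010)]   # (a $, b $/MW, c $/MW^2)
dispatch = [60.0, 40.0]                            # MW per generator
cost = fuel_cost(dispatch, coeffs)
```

A metaheuristic such as TFWO would search over `dispatch` vectors, penalizing `demand_violation` and any unit-limit breaches in the fitness.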
Article
Full-text available
This paper presents modified versions of a recent swarm intelligence algorithm called Harris hawks optimization (HHO) that incorporate genetic operators (crossover and mutation, CM) boosted by two strategies (opposition-based learning and random opposition-based learning) to provide a good balance between intensification and diversification and to explore the search space efficiently in order to jump out of local optima. The three modified versions of HHO, termed HHOCM, OBLHHOCM, and ROBLHHOCM, enhance the exploitation ability of solutions and improve the diversity of the population. The core exploratory and exploitative processes of the modified versions are adapted for selecting the most important molecular descriptors, ensuring high classification accuracy. The Wilcoxon rank-sum test is conducted to assess the performance of the HHOCM and ROBLHHOCM algorithms. Two common chemical-information datasets are used in the evaluation of the HHOCM variants, namely the Monoamine Oxidase and QSAR Biodegradation datasets. Experimental results revealed that the three modified algorithms provide competitive and superior performance in finding an optimal subset of molecular descriptors and maximizing classification accuracy compared with several well-established swarm intelligence algorithms, including the original HHO, grey wolf optimizer, salp swarm algorithm, dragonfly algorithm, ant lion optimizer, grasshopper optimization algorithm, and whale optimization algorithm.
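The genetic operators grafted onto HHO here act on binary descriptor-selection vectors. A minimal sketch of single-point crossover and bit-flip mutation; the rates and the seed are illustrative assumptions, not the paper's settings.

```python
import random

def crossover(p1, p2, rng):
    """Single-point crossover of two equal-length binary parents."""
    point = rng.randrange(1, len(p1))
    return p1[:point] + p2[point:], p2[:point] + p1[point:]

def mutate(ind, rate, rng):
    """Flip each bit independently with probability `rate`."""
    return [1 - b if rng.random() < rate else b for b in ind]

rng = random.Random(0)
c1, c2 = crossover([1, 1, 1, 1], [0, 0, 0, 0], rng)
child = mutate(c1, 0.1, rng)
```

In the hybrid algorithm these operators are applied to HHO's population each iteration, with opposition-based learning supplying additional candidate solutions.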
Article
Full-text available
This study integrates a tunicate swarm algorithm (TSA) with a local escaping operator (LEO) for overcoming the weaknesses of the original TSA. The LEO strategy in TSA–LEO prevents searching deflation in TSA and improves the convergence rate and local search efficiency of swarm agents. The efficiency of the proposed TSA–LEO was verified on the CEC’2017 test suite, and its performance was compared with seven metaheuristic algorithms (MAs). The comparisons revealed that LEO significantly helps TSA by improving the quality of its solutions and accelerating the convergence rate. TSA–LEO was further tested on a real-world problem, namely, segmentation based on the objective functions of Otsu and Kapur. A set of well-known evaluation metrics was used to validate the performance and segmentation quality of the proposed TSA–LEO. The proposed TSA-LEO outperforms other MA algorithms in terms of fitness, peak signal-to-noise ratio, structural similarity, feature similarity, and segmentation findings.
Article
Full-text available
Economic load dispatch (ELD) in power systems involves scheduling the power generating units to minimize cost and satisfy system constraints. Although previous works propose solutions to reduce CO2 emissions and production cost, an optimal allocation needs to consider both cost and emission, leading to combined economic and emission dispatch (CEED). Metaheuristic optimization algorithms perform relatively well on ELD problems. The gradient-based optimizer (GBO) is a new metaheuristic algorithm inspired by Newton's method that integrates both a gradient search rule and a local escaping operator. The GBO maintains a good balance between exploration and exploitation, and rarely gets stuck in local optima or converges prematurely. This paper tests the performance of GBO in solving ELD and CEED problems under various scenarios, such as ELD with transmission losses, CEED, and CEED with valve-point effects. The experimental results revealed that GBO obtains better results than eight other metaheuristic algorithms: the slime mould algorithm (SMA), elephant herding optimization (EHO), monarch butterfly optimization (MBO), the moth search algorithm (MSA), the earthworm optimization algorithm (EWA), the artificial bee colony (ABC) algorithm, the tunicate swarm algorithm (TSA), and the chimp optimization algorithm (ChOA). The simulation results thus showed the competitive performance of GBO compared with the other benchmark algorithms. Index terms: Gradient-based optimizer (GBO), economic load dispatch (ELD), combined economic and emission dispatch (CEED), metaheuristics, optimization.
Article
Full-text available
With the development of new energy power systems, the estimation of the parameters of photovoltaic (PV) models has become increasingly important. Weather changes are random; therefore, the changes in the PV output power are periodic and nonlinear. Traditional power prediction methods are based on linearity, and relying only on a time series is not feasible. Consequently, metaheuristic algorithms have received considerable attention for extracting the parameters of solar cell models and achieve excellent performance. In this study, Turbulent Flow of Water-based Optimization (TFWO) is used to estimate the parameters of three traditional solar cell models, namely the Single-Diode Solar Cell Model (SDSCM), Double-Diode Solar Cell Model (DDSCM), and Three-Diode Solar Cell Model (TDSCM), in addition to three modified solar cell models, namely the modified SDSCM (MSDSCM), modified DDSCM (MDDSCM), and modified TDSCM (MTDSCM). Moreover, a polynomial equation of five degrees for the sum of squared errors (PE5DSSE) between the measured and calculated currents was used as a new objective function for extracting the parameters of the solar cell models. The proposed objective function delivered better prediction accuracy than common objective functions. Experimental results revealed the effectiveness of TFWO compared with six counterparts, namely the Tunicate Swarm Algorithm (TSA), Grey Wolf Optimizer (GWO), Modified Particle Swarm Optimization (MPSO), Cuckoo Search Algorithm (CSA), Moth Flame Optimizer (MFO), and Teaching-Learning-Based Optimization (TLBO), for all the traditional and modified solar cell models based on the optimal parameters extracted using the best PE5DSSE values.
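Parameter extraction of this kind treats the single-diode equation's residual over measured (V, I) points as the objective a metaheuristic minimizes. A minimal sketch of the standard sum-of-squared-errors objective; the sample data and candidate parameters below are illustrative, not the measured R.T.C. France values, and the simpler SSE (not the paper's PE5DSSE) is used.

```python
import math

VT = 0.0257  # thermal voltage at ~25 C, volts

def sdm_residual(v, i, iph, i0, rs, rsh, n):
    """Residual of the implicit single-diode equation at one (V, I) point."""
    return (iph
            - i0 * (math.exp((v + i * rs) / (n * VT)) - 1)
            - (v + i * rs) / rsh
            - i)

def sse(data, theta):
    """Sum of squared residuals over all measured points."""
    return sum(sdm_residual(v, i, *theta) ** 2 for v, i in data)

# Hypothetical measurements and a candidate theta = (Iph, I0, Rs, Rsh, n).
data = [(0.0, 0.76), (0.3, 0.75), (0.5, 0.60)]
theta = (0.76, 3e-7, 0.03, 50.0, 1.5)
err = sse(data, theta)
```

An optimizer such as TFWO or GBO would search over `theta` to drive `sse` toward zero.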
Article
Full-text available
COVID-19 has affected everyone's life. As COVID-19 cases rise, misinformation about the virus grows in parallel; its spread has created confusion among people, caused disturbances in society, and even led to deaths. Social media is central to our daily lives, and the Internet has become a significant source of knowledge. Owing to the widespread damage caused by fake news, it is important to build computerized systems to detect it. This paper proposes an updated deep neural network for the identification of false news. The deep learning techniques are the modified LSTM (one to three layers) and the modified GRU (one to three layers). In particular, we investigate a large dataset of tweets conveying information about COVID-19. In our study, we classify the dubious claims into two categories: true and false (fake or non-fake). We compare the performance of the proposed approaches, in terms of prediction accuracy, with six machine learning techniques: Decision Tree (DT), Logistic Regression (LR), K-Nearest Neighbors (KNN), Random Forest (RF), Support Vector Machine (SVM), and Naive Bayes (NB). The parameters of the deep learning techniques are optimized using Keras Tuner. Four benchmark datasets were used. Two feature extraction methods were applied: TF-IDF with N-grams to extract essential features from the four benchmark datasets for the baseline machine learning models, and word embeddings for the proposed deep neural network methods. The results obtained with the proposed framework reveal high accuracy in detecting fake and non-fake tweets containing COVID-19 information, a significant improvement over the existing state-of-the-art results of baseline machine learning models.
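The TF-IDF-with-N-grams feature extraction used for the baseline models can be sketched by hand in a few lines. Scikit-learn's TfidfVectorizer would normally do this; the two-document corpus and the unsmoothed IDF formula here are illustrative assumptions.

```python
import math
from collections import Counter

def ngrams(tokens, n_max=2):
    """All word n-grams of the token list up to length n_max."""
    out = []
    for n in range(1, n_max + 1):
        out += [" ".join(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]
    return out

def tfidf(corpus, n_max=2):
    """Per-document dict of term -> tf * log(N / df)."""
    docs = [Counter(ngrams(doc.lower().split(), n_max)) for doc in corpus]
    df = Counter(term for doc in docs for term in doc)
    n_docs = len(docs)
    return [{t: tf * math.log(n_docs / df[t]) for t, tf in doc.items()}
            for doc in docs]

corpus = ["vaccine causes illness", "vaccine prevents illness"]
vectors = tfidf(corpus)
```

Note how terms shared by every document ("vaccine", "illness") get weight zero, while the discriminative words ("causes", "prevents") and their bigrams carry the signal the classifier learns from.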
Article
Full-text available
Solar radiation is increasingly used as a clean energy source, and photovoltaic (PV) panels containing solar cells (SCs) transform solar energy into electricity. The current-voltage characteristics of PV models are nonlinear. Due to a lack of data on the manufacturer's datasheet for PV models, there are several unknown parameters, and it is necessary to design PV systems accurately by determining the intrinsic parameters of the SCs. Various methods have been proposed to estimate the unknown parameters of PV cells, but their results are often inaccurate. In this article, a gradient-based optimizer (GBO) was applied as an efficient and accurate methodology to estimate the parameters of SCs and PV modules. Three common SC models, namely single-diode models (SDMs), double-diode models (DDMs), and three-diode models (TDMs), were used to demonstrate the capacity of the GBO to estimate the parameters of SCs. The proposed GBO algorithm for estimating the optimal parameter values of the various SC models is applied to real data from a 55 mm diameter commercial R.T.C. France SC. Comparisons between the GBO and other algorithms are performed on the same data set. The smallest error between the experimental and simulated data is achieved by the proposed GBO, and the simulated P-V and I-V curves closely match the experimental ones.
Article
Full-text available
The optimal segmentation of medical images remains important for promoting the intensive use of automatic approaches in decision making, disease diagnosis, and facilitating the sustainable development of computer vision studies. Generally, recent methods tend to minimize human–machine interaction by using multi-agent systems (MAS) and optimize the segmentation systems control. Some of the existing segmentation methods consider MAS qualifications and advantages but underline a lack of global optimization goals, and therefore they provide unsatisfactory results taking into account the need for precision in medical imaging. Our work coupled an improved MAS control protocol for medical image segmentation with the particle swarm optimization algorithm to strengthen the system for better result performance. The proposed method could relieve agents’ conflicts during the medical image segmentation for optimum control, better decision-making, and higher processing quality under the critical medical restrictions.
Article
Full-text available
Introduction: Quantum cloning started with the no-go theorem, which proved that it is impossible to perfectly clone an unknown quantum state; however, a number of works showed that approximate quantum state cloning is possible, albeit with some error. Objectives: To the best of our knowledge, this paper is the first of its kind to use a metaheuristic algorithm, Adaptive Guided Differential Evolution (AGDE), to tune quantum cloning circuit parameters and enhance the cloning fidelity. Methods: To investigate the effectiveness of the AGDE, extensive experiments demonstrated that it achieves outstanding performance compared with other well-known metaheuristics, including the Enhanced LSHADE-SPACMA Algorithm (ELSHADE-SPACMA), Enhanced Differential Evolution with novel control parameter adaptation (PaDE), Improved Multi-operator Differential Evolution (IMODE), Parameters with Adaptive Learning Mechanism (PALM), QUasi-Affine TRansformation Evolutionary algorithm (QUATRE), Particle Swarm Optimization (PSO), Gravitational Search Algorithm (GSA), Cuckoo Search (CS), Bat-inspired Algorithm (BA), Grey Wolf Optimizer (GWO), and Whale Optimization Algorithm (WOA). Results: AGDE improves the fidelity of quantum cloning, and the obtained parameter values minimize the cloning difference error down to 10^-8. Conclusion: Qualitative and quantitative measurements, including the average, standard deviation, and convergence curves of the competing algorithms over 30 independent runs, proved the superiority of AGDE in enhancing the cloning fidelity.
Article
Full-text available
The difficulty and complexity of real-world numerical optimization problems have grown manifold, which demands efficient optimization methods. To date, various metaheuristic approaches have been introduced, but only a few have earned recognition in the research community. In this paper, a new metaheuristic algorithm called the Archimedes optimization algorithm (AOA) is introduced to solve optimization problems. AOA is devised with inspiration from an interesting law of physics, Archimedes' principle: the buoyant force exerted upward on an object partially or fully immersed in a fluid is proportional to the weight of the displaced fluid. To evaluate its performance, the proposed AOA is tested on the CEC'17 test suite and four engineering design problems. The solutions obtained with AOA outperform well-known state-of-the-art and recently introduced metaheuristic algorithms such as genetic algorithms (GA), particle swarm optimization (PSO), the differential evolution variants L-SHADE and LSHADE-EpSin, the whale optimization algorithm (WOA), the sine-cosine algorithm (SCA), Harris hawks optimization (HHO), and the equilibrium optimizer (EO). The experimental results suggest that AOA is a high-performance optimization tool with respect to convergence speed and exploration–exploitation balance, and is effectively applicable to complex problems. The source code is publicly available at: https://www.mathworks.com/matlabcentral/fileexchange/79822-archimedes-optimization-algorithm
Article
Full-text available
The genetic algorithm (GA) is a nature-inspired algorithm that produces the best possible solution by selecting the fittest individuals from a pool of candidate solutions. Like most optimization techniques, the GA can get stuck in local optima, producing a suboptimal solution. This work presents a novel metaheuristic optimizer named the binary chaotic genetic algorithm (BCGA) to improve GA performance. Chaotic maps are applied to the initial population, and the reproduction operations follow. To demonstrate its utility, the proposed BCGA is applied to a feature selection task on an affective database, namely AMIGOS (A Dataset for Affect, Personality and Mood Research on Individuals and Groups), and on two healthcare datasets with large feature spaces. The performance of the BCGA is compared with the traditional GA and two state-of-the-art feature selection methods, in terms of classification accuracy and the number of selected features. Experimental results suggest a promising capability of the BCGA to find the optimal subset of features achieving better fitness values. The results also suggest that chaotic maps, especially the sinusoidal chaotic map, outperform other maps in enhancing the performance of the raw GA. On average, the proposed approach obtains a fitness value twice as good as that of the raw GA in identifying the seven classes of emotions.
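The chaotic-map initialization described above can be sketched as follows. This is a hypothetical illustration, not the paper's implementation: it seeds a binary population from the sinusoidal map highlighted in the abstract, with the map constant and seed chosen only for illustration.

```python
import numpy as np

def chaotic_binary_init(pop_size, dim, x0=0.7, a=2.3):
    """Seed a binary GA population from the sinusoidal chaotic map.

    The map x_{k+1} = a * x_k^2 * sin(pi * x_k) yields values in (0, 1);
    thresholding each value at 0.5 produces a binary gene. The constants
    x0 and a here are illustrative, not taken from the paper.
    """
    x = x0
    pop = np.empty((pop_size, dim))
    for i in range(pop_size):
        for j in range(dim):
            x = a * x * x * np.sin(np.pi * x)  # iterate the chaotic map
            pop[i, j] = x
    return (pop > 0.5).astype(int)

population = chaotic_binary_init(pop_size=10, dim=8)
```

The chaotic sequence spreads the initial genes over the search space less uniformly than pseudo-random sampling, which is the property the BCGA exploits.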
Article
Full-text available
INTRODUCTION: Heart diseases are prominent human disorders that have significantly affected the lifestyles and lives of their victims. Cardiac arrhythmia (heart arrhythmia) is one of the critical heart disorders, reflecting the state of the heartbeat in individuals. ECG (electrocardiogram) signals are commonly used in diagnosing this cardiac disorder. OBJECTIVES: In this manuscript, an effort has been made to employ and examine the performance of emerging Swarm Intelligence (SI) techniques in finding an optimal set of features for cardiac arrhythmia diagnosis. METHODS: A standard UCI benchmark dataset comprising 279 attributes and 452 instances has been considered. Five different SI-based meta-heuristic techniques, viz. the binary Grey Wolf Optimizer (bGWO), Ant Lion Optimization (ALO), the Butterfly Optimization Algorithm (BOA), the Dragonfly Algorithm (DA), and Satin-Bird Optimization (SBO), have been employed. Additionally, five novel chaotic variants of SBO have been designed to solve the feature selection problem for diagnosing cardiac arrhythmia. Different performance metrics, such as accuracy, fitness value, the optimal set of features, and execution time, have been computed. CONCLUSION: The experiments show that, in terms of accuracy and fitness value for cardiac arrhythmia, SBO outperformed the other SI algorithms, viz. bGWO, DA, BOA, and ALO. Additionally, BOA and ALO appear to be the best fit when the emphasis is on dimension size only.
Article
Full-text available
One of the major drawbacks in cheminformatics is the large amount of information present in the datasets. In the majority of cases, this information contains redundant instances that affect the analysis of similarity measurements with respect to drug design and discovery. Therefore, classical methods such as the protein data bank and quantum mechanical calculations are insufficient owing to the dimensionality of the search spaces. In this paper, we introduce a hybrid metaheuristic algorithm called CHHO–CS, which combines the Harris hawks optimizer (HHO) with two operators: cuckoo search (CS) and chaotic maps. The role of CS is to control the main position vectors of the HHO algorithm to maintain the balance between the exploitation and exploration phases, while the chaotic maps are used to update the control energy parameters to avoid falling into local optima and premature convergence. Feature selection (FS) reduces the dimensionality of a dataset by removing redundant and undesired information, and is therefore very helpful in cheminformatics. FS methods employ a classifier to identify the best subset of features; support vector machines (SVMs) are thus used by the proposed CHHO–CS as the objective function for the classification step in FS. The resulting CHHO–CS-SVM is tested on the selection of appropriate chemical descriptors and compound activities. Various datasets are used to validate the efficiency of the proposed CHHO–CS-SVM approach, including ten from the UCI machine learning repository. Additionally, two chemical datasets (quantitative structure–activity relationship biodegradation and monoamine oxidase) were utilized for selecting the most significant chemical descriptors and chemical compound activities.
The extensive experimental and statistical analyses show that the suggested CHHO–CS method achieves much better trade-off solutions than competitor algorithms from the literature, including HHO, CS, particle swarm optimization, moth-flame optimization, the grey wolf optimizer, the salp swarm algorithm, and the sine–cosine algorithm. The experimental results prove that the complexity associated with cheminformatics can be handled using chaotic maps and by hybridizing meta-heuristic methods.
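A wrapper-style feature selection objective of the kind CHHO–CS optimizes can be sketched as below. This is a simplified illustration under stated assumptions: a leave-one-out 1-nearest-neighbour error stands in for the SVM accuracy used in the paper, and the weight `alpha` is a common but hypothetical choice.

```python
import numpy as np

def fs_fitness(mask, X, y, alpha=0.99):
    """Fitness of a binary feature mask: weighted classification error
    plus selected-feature ratio (both to be minimized).

    A leave-one-out 1-NN error replaces the paper's SVM for brevity.
    """
    sel = mask.astype(bool)
    if not sel.any():
        return 1.0  # empty subsets get the worst possible score
    Xs = X[:, sel]
    errors = 0
    for i in range(len(Xs)):
        d = np.linalg.norm(Xs - Xs[i], axis=1)
        d[i] = np.inf  # exclude the sample itself
        errors += int(y[np.argmin(d)] != y[i])
    err = errors / len(Xs)
    # trade off accuracy against the fraction of features kept
    return alpha * err + (1 - alpha) * sel.sum() / X.shape[1]
```

A metaheuristic then searches over binary masks, keeping the mask with the lowest fitness as the selected feature subset.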
Article
Full-text available
Meta-heuristic search algorithms have been successfully used to solve a variety of problems in engineering, science, business, and finance. Meta-heuristic algorithms share common features, since they are population-based approaches that use a set of tuning parameters to evolve new solutions based on the natural behavior of creatures. In this paper, we present a novel nature-inspired search optimization algorithm called the capuchin search algorithm (CapSA) for solving constrained and global optimization problems. The key inspiration of CapSA is the dynamic behavior of capuchin monkeys. The basic optimization characteristics of this new algorithm are designed by modeling the social actions of capuchins during wandering and foraging over trees and riverbanks in forests while searching for food sources. Some of the common foraging behaviors of capuchins implemented in this algorithm are leaping, swinging, and climbing. Leaping is an effective mechanism used by capuchins to jump from tree to tree. The other foraging mechanisms, swinging and climbing, allow the capuchins to move small distances over trees, tree branches, and the extremities of tree branches. These locomotion mechanisms eventually lead to feasible solutions of global optimization problems. The proposed algorithm is benchmarked on 23 well-known benchmark functions, as well as on several challenging and computationally costly engineering problems. A broad comparative study demonstrates the efficacy of CapSA over several prominent meta-heuristic algorithms in terms of optimization precision and statistical test analysis. Overall, the results show that CapSA renders more precise solutions with a higher convergence rate than competitive meta-heuristic methods.
Article
Full-text available
In this paper, a novel metaheuristic algorithm called Chaos Game Optimization (CGO) is developed for solving optimization problems. The main concept of the CGO algorithm is based on principles of chaos theory, in which the configuration of fractals via the chaos game concept and fractal self-similarity are in perspective. A total of 239 mathematical functions, categorized into four different groups, are collected to evaluate the overall performance of the presented algorithm. In order to evaluate the results of the CGO algorithm, three comparative analyses with different characteristics are conducted. In the first, six different metaheuristic algorithms are selected from the literature, and the minimum, mean, and standard deviation values, alongside the number of function evaluations, are calculated and compared for CGO and these algorithms. A complete statistical analysis is also conducted to provide a valid judgment of the performance of the CGO algorithm. In the second, the results of the CGO algorithm are compared to some recently developed fractal- and chaos-based algorithms. Finally, the performance of the CGO algorithm is compared to some state-of-the-art algorithms on state-of-the-art mathematical functions, with one of the recent competitions on single-objective real-parameter numerical optimization, "CEC 2017", considered as numerical examples for this purpose. In addition, a computational cost analysis is conducted for the presented algorithm. The obtained results prove that CGO is superior to the other metaheuristics in most cases.
Article
Full-text available
Deep Learning (DL) has recently become a topic of study in different applications including healthcare, in which timely detection of anomalies in the Electrocardiogram (ECG) can play a vital role in patient monitoring. This paper presents a comprehensive review of recent DL methods applied to ECG signals for classification purposes. This study considers various types of DL methods, such as the Convolutional Neural Network (CNN), Deep Belief Network (DBN), Recurrent Neural Network (RNN), Long Short-Term Memory (LSTM), and Gated Recurrent Unit (GRU). Of the 75 studies reported within 2017 and 2018, CNN is dominantly observed as the suitable technique for feature extraction, appearing in 52% of the studies. DL methods showed high accuracy in the correct classification of Atrial Fibrillation (AF) (100%), Supraventricular Ectopic Beats (SVEB) (99.8%), and Ventricular Ectopic Beats (VEB) (99.7%) using the GRU/LSTM, CNN, and LSTM, respectively.
Article
Full-text available
The electrocardiogram records the heart's electrical activity and generates a significant amount of data. The analysis of these data helps us to detect diseases and disorders via classification of heart bio-signal abnormalities. In unbalanced-data contexts, where the classes are not equally represented, the optimization and configuration of classification models are highly complex, which is reflected in the use of computational resources. Moreover, the performance of electrocardiogram classification depends on the approach and the parameter estimation used to generate a model with high accuracy, sensitivity, and precision. Previous works have proposed hybrid approaches, and only a few implemented parameter optimization; instead, they generally applied empirical tuning of parameters at the data level or the algorithm level. Hence, a scheme including metrics of sensitivity on a higher precision and accuracy scale deserves special attention. In this article, a metaheuristic optimization approach for parameter estimation in arrhythmia classification from unbalanced data is presented. We selected an unbalanced subset of these databases to classify eight types of arrhythmia. It is important to highlight that we combined undersampling based on a clustering method (data level) and a feature selection method (algorithmic level) to tackle the unbalanced-class problem. To explore parameter estimation and improve the classification of our model, we compared two metaheuristic approaches based on differential evolution and particle swarm optimization. The final results showed an accuracy of 99.95%, an F1 score of 99.88%, a sensitivity of 99.87%, a precision of 99.89%, and a specificity of 99.99%, which are high even in the presence of unbalanced data.
Article
Full-text available
In this paper, a novel feature selection method is proposed for the categorization of electrocardiogram (ECG) heartbeats. The proposed technique uses the Fisher ratio and the BAT optimization algorithm to obtain the best feature set for ECG classification. The MIT-BIH arrhythmia database contains sixteen classes of ECG heartbeats and was divided into intra-patient and inter-patient schemes for this study. The proposed feature selection methodology works in the following steps: firstly, features are extracted using the empirical wavelet transform (EWT), and then higher-order statistics as well as symbolic features are computed for each decomposed mode of the EWT. Thereafter, the complete feature vector is obtained by the conjunction of EWT-based features and RR-interval features. Secondly, for feature selection, the Fisher ratio is utilized; it is optimized using the BAT algorithm so as to maximize discrimination between classes. Finally, in the classification step, the k-nearest neighbor classifier is used to classify the heartbeats. The performance measures, i.e., accuracy, sensitivity, positive predictivity, and specificity, are 99.80%, 99.80%, 99.80%, and 99.987% for the intra-patient scheme and 97.59%, 97.589%, 97.589%, and 99.196% for the inter-patient scheme, respectively. The proposed feature selection technique outperforms other state-of-the-art feature selection methods.
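For a two-class problem, the Fisher ratio used as the selection criterion scores each feature by how far apart the class means are relative to the class variances. A minimal sketch (the paper's exact multi-class formulation may differ):

```python
import numpy as np

def fisher_ratio(X, y):
    """Per-feature Fisher ratio for two classes (labels 0 and 1):
    F_j = (mu0_j - mu1_j)^2 / (var0_j + var1_j).
    Larger values indicate more discriminative features.
    """
    c0, c1 = X[y == 0], X[y == 1]
    num = (c0.mean(axis=0) - c1.mean(axis=0)) ** 2
    den = c0.var(axis=0) + c1.var(axis=0) + 1e-12  # avoid division by zero
    return num / den
```

An optimizer such as BAT would then search for the feature subset that maximizes an aggregate of these per-feature scores.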
Article
Recently, the numerical optimization field has attracted the research community to propose and develop various metaheuristic optimization algorithms. This paper presents a new metaheuristic optimization algorithm called the Honey Badger Algorithm (HBA). The proposed algorithm is inspired by the intelligent foraging behavior of the honey badger, used to mathematically develop an efficient search strategy for solving optimization problems. The dynamic search behavior of the honey badger, with its digging and honey-finding approaches, is formulated into the exploration and exploitation phases of HBA. Moreover, with controlled randomization techniques, HBA maintains ample population diversity even toward the end of the search process. To assess the efficiency of HBA, 24 standard benchmark functions, the CEC'17 test suite, and four engineering design problems are solved. The solutions obtained using HBA have been compared with those of ten well-known metaheuristic algorithms, including Simulated Annealing (SA), Particle Swarm Optimization (PSO), the Covariance Matrix Adaptation Evolution Strategy (CMA-ES), Success-History-based Adaptive Differential Evolution with linear population size reduction (L-SHADE), Moth-Flame Optimization (MFO), Elephant Herding Optimization (EHO), the Whale Optimization Algorithm (WOA), the Grasshopper Optimisation Algorithm (GOA), Thermal Exchange Optimization (TEO), and Harris Hawks Optimization (HHO). The experimental results, along with statistical analysis, reveal the effectiveness of HBA in solving optimization problems with complex search spaces, as well as its superiority in terms of convergence speed and exploration–exploitation balance compared to the other methods used in this study. The source code is publicly available at: https://www.mathworks.com/matlabcentral/fileexchange/98204-honey-badger-algorithm
Article
The artificial electric field algorithm (AEFA) is a recent physics-based, population-based optimization approach inspired by Coulomb's law of electrostatic force and Newton's law of motion. In this paper, an alternative version of AEFA, called mAEFA, is proposed to boost the searchability of the original AEFA and the balance between its exploration and exploitation. To escape local optima, three efficient strategies, namely a modified local escaping operator (MLEO), Lévy flight (LF), and opposition-based learning (OBL), are combined with the original AEFA in mAEFA. The convergence rate improves once the best agent is identified, so stagnation at a local solution can be efficiently avoided. To assess the performance of the proposed mAEFA, it has been evaluated on the CEC'2020 test functions. Furthermore, a robust methodology based on mAEFA is proposed to identify the best parameters of a PEM fuel cell (PEMFC). The PEMFC model includes nonlinear characteristics involving several unknown design variables, so it is challenging to develop an accurate model; seven design variables must be tuned to reach the targeted, dependable model. Two different types of PEMFC, the NedStack PS6 and the SR-12 500 W, were used to demonstrate the superiority of mAEFA. Throughout the optimization process, the unidentified PEMFC parameters are taken as decision variables, while the objective function to be minimized is the SSE between the calculated PEMFC voltage and the experimental one. Nine recent optimizers are used in the comparison with the proposed mAEFA. According to the main findings, the advantage of the proposed mAEFA in determining the best PEMFC parameters is verified against the other optimizers: the lowest SSE, lowest RMSE, minimum standard deviation, maximum efficiency, and a high coefficient of determination are achieved by the proposed mAEFA.
Article
A recent meta-heuristic algorithm called the Marine Predators Algorithm (MPA) is enhanced using Opposition-Based Learning (OBL), termed MPA-OBL, to improve its search efficiency and convergence. A comprehensive set of experiments is performed to evaluate MPA-OBL and demonstrate the impact of merging the OBL strategy with the original MPA on enhancing solution quality and accelerating convergence, using the IEEE CEC'2020 benchmark problems as a recent, complex optimization benchmark. To evaluate the performance of the proposed MPA-OBL, the effectiveness of the conjunction of OBL with the original MPA and the other counterparts is calculated and compared with LSHADE with semi-parameter adaptation hybridized with CMA-ES (LSHADESPACMA-OBL), restart covariance matrix adaptation ES (CMAES-OBL), differential evolution (DE-OBL), Harris hawks optimization (HHO-OBL), the sine cosine algorithm (SCA-OBL), the salp swarm algorithm (SSA-OBL), and the original MPA. The extensive results and comparisons in terms of optimization metrics reveal the superiority of the proposed MPA-OBL in solving the IEEE CEC'2020 benchmark problems and improving convergence speed. Moreover, as a sequel, we also conducted experiments using the two objective functions of Otsu's and Kapur's methods over a variety of benchmark images at different threshold levels; the results, based on three common evaluation metrics, namely the peak signal-to-noise ratio (PSNR), structural similarity (SSIM), and feature similarity (FSIM) indices, are analyzed qualitatively and quantitatively. Eventually, the statistical post-hoc analysis reveals that MPA-OBL obtains highly efficient and reliable results in comparison with the other competitor algorithms.
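Opposition-based learning, used in several of the hybrids discussed here, is a simple idea: for a point x in [lb, ub], its opposite is lb + ub − x, and keeping the fitter member of each point/opposite pair improves the starting population. A minimal sketch for a minimization problem:

```python
import numpy as np

def obl_refresh(pop, fitness, lb, ub):
    """Greedy opposition-based refresh of a population (minimization).

    For each solution x, compute its opposite lb + ub - x and keep
    whichever of the pair has the lower fitness value.
    """
    opp = lb + ub - pop
    f_pop = np.array([fitness(x) for x in pop])
    f_opp = np.array([fitness(x) for x in opp])
    keep = f_opp < f_pop
    out = pop.copy()
    out[keep] = opp[keep]
    return out

sphere = lambda x: float((x ** 2).sum())  # toy objective
lb, ub = np.zeros(2), 10.0 * np.ones(2)
pop = np.array([[9.0, 9.0], [2.0, 1.0]])
refreshed = obl_refresh(pop, sphere, lb, ub)
```

Here the first solution [9, 9] is replaced by its fitter opposite [1, 1], while the second is already better than its opposite and is kept.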
Article
Thermography images are a helpful screening tool that can detect breast cancer by showing the body parts that exhibit an abnormal change in temperature. Various segmentation methods have been proposed to extract regions of interest from breast cancer images to enhance classification, and many such problems have been solved using thresholding. In this paper, a new, efficient version of the recent chimp optimization algorithm (ChOA), namely the opposition-based Lévy flight chimp optimizer (IChOA), is proposed. The original ChOA can stagnate in local optima and needs varied exploration with an adequate blend of exploitation; therefore, convergence is accelerated by improving the initial diversity and providing good exploitation capability at later generations. Opposition-based learning (OBL) is applied at the initialization phase of ChOA to boost population diversity in the search space, and Lévy flight is used to enhance exploitation. Moreover, IChOA is applied to tackle the image segmentation problem using multilevel thresholding. The proposed method was tested using the Otsu and Kapur methods over a dataset from the Mastology Research with Infrared Image (DMR-IR) database during the optimization process, and compared against seven other meta-heuristic algorithms, namely grey wolf optimization (GWO), moth flame optimization (MFO), the whale optimization algorithm (WOA), the sine-cosine algorithm (SCA), the salp swarm algorithm (SSA), equilibrium optimization (EO), and the original chimp optimization algorithm (ChOA). Results based on the fitness values of the best obtained solutions revealed that IChOA achieved valuable and accurate results in terms of quality, consistency, and accuracy, and on evaluation metrics such as PSNR, SSIM, and FSIM. Eventually, IChOA proved robust for the segmentation of various positive and negative cases compared to its counterpart methods.
Chapter
A number of COVID-19 outbreak classification and prediction methods have been proposed and are being applied around the globe to make the right decisions and to enforce proper control measures. Among these methods, simple statistical and epidemiological methods have received much attention, whereas for long-term prediction the standard methods do not perform well due to a lack of essential data and a high level of uncertainty. Thus, the robustness and generalization abilities of these methods need to be improved. Therefore, this work proposes a new hybrid method combining Harris Hawks Optimization (HHO) with the Support Vector Machine (SVM), called HHO-SVM. The HHO-SVM is applied to a large Gene Expression Cancer (RNA-Seq) dataset, which comprises more than 20531 features, to identify the critical gene associated with COVID-19. The experimental results revealed that HHO-SVM outperformed the Grey Wolf Optimizer (GWO), Particle Swarm Optimization (PSO), Moth Flame Optimization (MFO), the Salp Swarm Algorithm (SSA), and Genetic Algorithms (GAs). We further found that the most critical gene is TMPRSS2, which is implicated in prostate cancer and is the same gene through which COVID-19 acts via the ACE2 receptor.
Chapter
The Internet of Things (IoT) plays a very important role in various healthcare applications. The advancement of IoT and cloud computing facilitates patient health, employee retention, and organizational quality in the medical sector. This study analyzes new IoT materials, implementations, and industry patterns for healthcare services. We also consider how promising technologies such as cloud computing, immersive care homes, big data, and wearable sensors contribute. Furthermore, this study analyzes protection and safety, authentication, energy, control, maintenance, service quality, and real-time wireless health monitoring, which remain problematic in many IoT healthcare architectures. Due to the lack of a well-established system architecture, data constraints and the preservation of privacy remain challenges. The main aim of this survey is to analyze the purpose of healthcare based on the digital healthcare system. It also reports on a range of IoT and e-health policies and structures, and assesses whether they ease sustainable growth.
Article
The parameter estimation of solar cell models is considered an important problem in the computational simulation and design of photovoltaic (PV) systems. In this paper, the PV parameters of single, double, and triple-diode models are extracted and tested under different environmental conditions. The parameter estimation of the three models is presented as an optimization process with an objective function to reduce the difference between the measured and calculated data. Moreover, a new optimization algorithm for extracting the PV parameters of the single, double, and three-diode models called the Manta Ray Foraging Optimization (MRFO) algorithm has been proposed. The results show that the obtained parameters are the optimal values and give the least difference between the measured and calculated data compared with other algorithms.
Article
An optimization algorithm called the modified evaporation rate water cycle algorithm (MERWCA) is proposed in this paper to find optimal solutions for the coordination problem of directional overcurrent relays (DOCRs). The proposed MERWCA improves the performance of the conventional evaporation rate water cycle algorithm (ERWCA) by enhancing the balance between the exploitation (local search) and exploration (global search) phases to find the best solution. MERWCA includes Opposition-Based Learning (OBL) and a Lévy Flight (LF) component to address the shortcomings that the original ERWCA may exhibit, to avoid falling into local optima, and to improve the convergence rate. The proposed MERWCA is verified on the CEC'2017 test suite and its performance is compared with that of ten common metaheuristic algorithms (MAs). Both MERWCA and ERWCA are evaluated for non-conventional and conventional relay curves. The feasibility of the MERWCA technique is assessed using the conventional IEEE 39-bus meshed distribution test system. The results prove the viability of MERWCA in solving DOCR coordination problems in both cases (conventional and non-conventional characteristic relay curves) without any miscoordination between DOCR pairs. Moreover, the reduction ratio in minimizing the total operating time using MERWCA reaches 46% with respect to ERWCA, and about 66% using the non-conventional relay characteristic. Finally, MERWCA is assessed using the benchmark DIgSILENT PowerFactory.
Article
Under partial shading conditions, the power–voltage curve of a photovoltaic (PV) system contains several maximum power points (MPPs). Among these points there is only a single global point, alongside some local points. Accordingly, modern optimization algorithms are highly required to tackle this problem; however, such methods are considered time consuming. Therefore, finding a new algorithm capable of solving the problem of tracking the global maximum power point (GMPP) with a minimum population size is highly desirable. Several new straightforward methods as well as meta-heuristic approaches exist. Recently, the Marine Predators Algorithm (MPA) has been developed for engineering applications. In this study, an alternative form of MPA, integrating an Opposition-Based Learning (OBL) strategy with the Grey Wolf Optimizer (GWO), named MPAOBL-GWO, is proposed to cope with the implied weaknesses of the classical MPA. Firstly, the OBL strategy is adopted to prevent the MPA from search deflation and to obtain a faster convergence rate. Besides, GWO is implemented to further improve the swarm agents' local search efficiency. Because MPA explores the search space better than it exploits it, this combination improves the efficiency of MPA and prevents it from falling into local points. To verify the effectiveness of the enhanced method, the well-known CEC'17 test suite and the maximum power point tracking (MPPT) problem of a photovoltaic (PV) system are solved. The obtained results illustrate the ability of the proposed MPAOBL-GWO method to reach the optimum solution compared with the original MPA, GWO, and Particle Swarm Optimization (PSO). The findings reveal that the proposed method can be viewed as an efficient and effective strategy for more complex optimization scenarios as well as for MPPT.
Article
Meta-heuristic optimization algorithms aim to tackle real-world problems by maximizing specific criteria such as performance, profit, and quality, or minimizing others such as cost, time, and error. Accordingly, this paper introduces an improved version of a well-known optimization algorithm, the Archimedes optimization algorithm (AOA). The enhanced version combines two efficient strategies, a local escaping operator (LEO) and orthogonal learning (OL), yielding the I-AOA optimization algorithm. The performance of the proposed I-AOA has been evaluated on the CEC'2020 test suite and three engineering design problems. Furthermore, I-AOA is applied to determine the optimal parameters of a polymer electrolyte membrane (PEM) fuel cell (FC). Two commercial types of PEM fuel cells, the 250 W PEMFC and the BCS 500 W, are considered to prove the superiority of the proposed optimizer. During the optimization procedure, the seven unknown parameters of the PEM fuel cell are assigned as the decision variables, whereas the cost function to be minimized is the RMSE between the estimated cell voltage and the measured data. The results obtained by I-AOA are compared to those of other well-known optimizers, such as the Whale Optimization Algorithm (WOA), the Moth-Flame Optimization algorithm (MFO), the Sine Cosine Algorithm (SCA), Particle Swarm Optimization (PSO), Harris hawks optimization (HHO), the Tunicate Swarm Algorithm (TSA), and the original AOA. The comparison confirms the superiority of the suggested algorithm in identifying the optimum PEM fuel cell parameters under various operating conditions compared to the other optimization algorithms.
Article
Electrocardiogram (ECG) arrhythmia classification has become an interesting research area for researchers and developers, as it plays a vital role in the early prevention and diagnosis of cardiovascular diseases. In ECG signal classification, the feature extraction and selection processes are critical steps. Thus, in this paper, different ECG signal descriptors based on the one-dimensional local binary pattern (LBP), wavelets, higher-order statistics (HOS), and morphological information are introduced for feature extraction. For the feature selection and classification processes, a new hybrid ECG arrhythmia classification approach called MRFO-SVM, which combines a metaheuristic algorithm termed Manta ray foraging optimization (MRFO) with a support vector machine (SVM), is proposed to automatically determine the relevant features among the LBP, HOS, wavelet, and magnitude values. In the MRFO-SVM approach, MRFO is utilized to optimize the parameters of the SVM and to select the significant feature subset that provides the best classification performance, while the SVM is used for classification. The proposed MRFO-SVM approach is trained on the MIT-BIH Arrhythmia database containing four abnormal and one normal heartbeat classes. The experimental results of ECG arrhythmia classification using the proposed MRFO-SVM revealed its superiority, with an overall classification accuracy of 98.26%, over seven well-known metaheuristic algorithms.
Article
Over the ages, nature has constantly been a rich source of inspiration for science, with much still to discover and learn from. Swarm Intelligence (SI), a major branch of artificial intelligence, was developed to model the collective behavior of social swarms in nature. The Particle Swarm Optimization algorithm (PSO) is arguably one of the most popular SI paradigms. Over the past two decades, PSO has been applied successfully, with good returns, in a wide variety of fields of science and technology and to a wide range of complex optimization problems, thereby occupying a prominent position in the optimization field. However, in-depth studies have detected and identified a number of problems with the algorithm, e.g., issues regarding convergence, diversity, and stability. Consequently, since its birth in the mid-1990s, PSO has witnessed a myriad of enhancements, extensions, and variants in various aspects of the algorithm, especially after the turn of the century, and the related research has now reached an impressive state. In this paper, a rigorous yet systematic review is presented to organize and summarize the information on the PSO algorithm and on the developments and trends of its most basic as well as some very notable recent implementations, covering paradigm, theory, hybridization, parallelization, complex optimization, and the diverse applications of the algorithm, making the material more accessible. This eases the task for researchers of determining which PSO variant is currently best suited, or remains to be invented, for a given optimization problem or application. This up-to-date review also highlights the pressing issues and intriguing open challenges facing PSO, prompting scholars and researchers to conduct further research both on the theory and on the application of the algorithm in the forthcoming years.
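The canonical inertia-weight PSO that these variants extend can be sketched in a few lines. This is a minimal illustration; the parameter values w, c1, and c2 are common textbook defaults, not values prescribed by the review:

```python
import numpy as np

rng = np.random.default_rng(42)

def pso_minimize(f, lb, ub, n=20, iters=100, w=0.7, c1=1.5, c2=1.5):
    """Canonical (inertia-weight) PSO, the baseline most variants extend.

    Minimal sketch: no constriction factor, neighborhood topology,
    or parameter adaptation. Returns (best position, best value).
    """
    dim = len(lb)
    x = rng.uniform(lb, ub, (n, dim))          # particle positions
    v = np.zeros((n, dim))                     # particle velocities
    pbest, pbest_f = x.copy(), np.array([f(p) for p in x])
    g = pbest[np.argmin(pbest_f)].copy()       # global best position
    for _ in range(iters):
        r1, r2 = rng.random((n, dim)), rng.random((n, dim))
        # inertia + cognitive (pbest) + social (gbest) components
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lb, ub)
        fx = np.array([f(p) for p in x])
        better = fx < pbest_f
        pbest[better], pbest_f[better] = x[better], fx[better]
        g = pbest[np.argmin(pbest_f)].copy()
    return g, float(pbest_f.min())

best, best_f = pso_minimize(
    lambda p: float((p ** 2).sum()),           # 2-D sphere function
    np.array([-5.0, -5.0]), np.array([5.0, 5.0]))
```

On the 2-D sphere function the swarm converges toward the origin; the many PSO variants the review surveys modify exactly these update terms.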
Article
Feature selection, an optimization problem, has become an important pre-processing tool in data mining, which simultaneously aims at minimizing feature size and maximizing model generalization. Because of the large search space, conventional optimization methods often fail to generate a global optimum solution. A variety of hybrid techniques merging different search strategies have been proposed in the feature selection literature, but they mostly deal with low-dimensional datasets. In this paper, a hybrid optimization method is proposed for numerical optimization and feature selection, which integrates the sine-cosine algorithm (SCA) into Harris hawks optimization (HHO). The goal of the SCA integration is to remedy the ineffective exploration in HHO; moreover, exploitation is enhanced by dynamically adjusting candidate solutions to avoid solution stagnancy in HHO. The proposed method, namely SCHHO, is evaluated by employing the CEC'17 test suite for numerical optimization and sixteen datasets with low and high dimensions exceeding 15,000 attributes, and compared with the original SCA and HHO, as well as other well-known optimization methods like the dragonfly algorithm (DA), whale optimization algorithm (WOA), grasshopper optimization algorithm (GOA), grey wolf optimizer (GWO), and salp swarm algorithm (SSA), in addition to state-of-the-art methods. The performance of the proposed method is also validated against hybrid methods proposed in recent related literature. The extensive experimental and statistical analyses suggest that the proposed hybrid variant of HHO is able to produce efficient search results without additional computational cost. With increased convergence speed, SCHHO reduced the feature size by up to 87% and achieved accuracy of up to 92%. Motivated by the findings of this study, various potential future directions are also highlighted.
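As a rough sketch of the sine-cosine search step that SCHHO injects into HHO (this is the textbook SCA update, not the hybrid's actual control logic; parameter values are the commonly cited defaults):

```python
import math, random

def sca(f, dim=2, n=20, iters=300, a=2.0):
    """Core Sine Cosine Algorithm update: solutions oscillate toward
    (or away from) the best solution via sine/cosine terms."""
    X = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(n)]
    best = min(X, key=f)[:]
    for t in range(iters):
        r1 = a - t * (a / iters)  # linearly decreasing step amplitude
        for x in X:
            for d in range(dim):
                r2 = random.uniform(0, 2 * math.pi)
                r3, r4 = 2 * random.random(), random.random()
                if r4 < 0.5:
                    x[d] += r1 * math.sin(r2) * abs(r3 * best[d] - x[d])
                else:
                    x[d] += r1 * math.cos(r2) * abs(r3 * best[d] - x[d])
            if f(x) < f(best):
                best = x[:]
    return best, f(best)

random.seed(7)
best, val = sca(lambda x: sum(v * v for v in x))
print(val)  # near 0
```

The shrinking amplitude r1 is what gives SCA its exploration-to-exploitation transition, which is precisely the property the authors use to compensate for HHO's weak exploration.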
Article
Segmentation is a crucial step in image processing applications. This process separates the pixels of the image into multiple classes, which permits the analysis of the objects contained in the scene. Multilevel thresholding is a method that easily performs this task; the problem is to find the best set of thresholds that properly segments each image. Techniques such as Otsu's between-class variance or Kapur's entropy help to find the best thresholds, but they are computationally expensive for more than two thresholds. To overcome this problem, this paper introduces the use of the novel meta-heuristic algorithm called Black Widow Optimization (BWO) to find the best threshold configuration using Otsu or Kapur as the objective function. To evaluate the performance and effectiveness of the BWO-based method, a variety of benchmark images has been considered, and the method has been compared against six well-known meta-heuristic algorithms, including the Grey Wolf Optimization (GWO), Moth Flame Optimization (MFO), Whale Optimization Algorithm (WOA), Sine-Cosine Algorithm (SCA), Salp Swarm Algorithm (SSA), and Equilibrium Optimization (EO). The experimental results revealed that the proposed BWO-based method outperforms the competitor algorithms in terms of the fitness values as well as other performance measures such as PSNR, SSIM, and FSIM. The statistical analysis shows that the BWO-based method achieves efficient and reliable results in comparison with the other methods. Therefore, the BWO-based method was found to be the most promising for the multilevel image segmentation problem over other segmentation approaches currently used in the literature.
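Otsu's between-class variance criterion, which BWO uses as one objective function, can be illustrated for a single threshold (the multilevel case the paper optimizes generalizes this to several thresholds; the toy 10-level bimodal histogram is invented):

```python
def otsu_threshold(hist):
    """Otsu's method: choose the threshold maximising the between-class
    variance of the grey-level histogram."""
    total = sum(hist)
    sum_all = sum(i * h for i, h in enumerate(hist))
    w0 = sum0 = 0
    best_t, best_var = 0, -1.0
    for t in range(len(hist)):
        w0 += hist[t]            # weight of class 0 (levels <= t)
        if w0 == 0:
            continue
        w1 = total - w0          # weight of class 1 (levels > t)
        if w1 == 0:
            break
        sum0 += t * hist[t]
        mu0 = sum0 / w0
        mu1 = (sum_all - sum0) / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

# A bimodal toy histogram: dark peak around level 2, bright peak around 7.
hist = [1, 6, 10, 5, 1, 4, 9, 12, 4, 1]
print(otsu_threshold(hist))  # 4
```

For k thresholds, the search space grows combinatorially (roughly 256 choose k for 8-bit images), which is why the paper replaces exhaustive search with BWO.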
Article
The Slime Mould Algorithm (SMA) is a recent metaheuristic inspired by the oscillation of slime mould. Similar to other original metaheuristic algorithms (MAs), SMA may suffer from drawbacks such as being trapped in local minimum regions and an improper balance between the exploitation and exploration phases. To overcome these weaknesses, this paper proposes a hybrid algorithm: SMA combined with the Adaptive Guided Differential Evolution algorithm (AGDE), denoted SMA-AGDE. The AGDE mutation method is employed to enhance the swarm agents' local search, increase the population's diversity, and help avoid premature convergence. The SMA-AGDE's performance is evaluated on the CEC'17 test suite, three engineering design problems - tension/compression spring, pressure vessel, and rolling element bearing - and two combinatorial optimization problems - bin packing and quadratic assignment. The SMA-AGDE is compared with three categories of optimization methods: 1) well-studied MAs, i.e., the Biogeography-Based Optimizer (BBO), Gravitational Search Algorithm (GSA), and Teaching Learning-Based Optimization (TLBO); 2) recently developed MAs, i.e., Harris Hawks Optimization (HHO), Manta Ray Foraging optimization (MRFO), and the original SMA; and 3) high-performance MAs among the best of the IEEE CEC competitions, i.e., Evolution Strategy with Covariance Matrix Adaptation (CMA-ES) and AGDE. The overall simulation results reveal that SMA-AGDE ranked first among the compared algorithms across different function landscapes. Thus, the proposed SMA-AGDE is a promising optimization tool for global, combinatorial, and engineering design optimization problems.
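The differential-evolution machinery that AGDE adapts rests on the classic DE/rand/1/bin operator, sketched below (this is the textbook operator, not AGDE's adaptive guided mutation; F, CR, and the sphere objective are illustrative defaults):

```python
import random

def de(f, bounds, n=30, iters=200, F=0.8, CR=0.9):
    """Classic DE/rand/1/bin: mutate with a scaled difference of two
    random members, crossover, then greedily select."""
    dim = len(bounds)
    pop = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(n)]
    fit = [f(p) for p in pop]
    for _ in range(iters):
        for i in range(n):
            a, b, c = random.sample([j for j in range(n) if j != i], 3)
            j_rand = random.randrange(dim)   # ensure at least one mutated gene
            trial = pop[i][:]
            for d in range(dim):
                if random.random() < CR or d == j_rand:
                    trial[d] = pop[a][d] + F * (pop[b][d] - pop[c][d])
            ft = f(trial)
            if ft <= fit[i]:                 # greedy one-to-one selection
                pop[i], fit[i] = trial, ft
    i_best = min(range(n), key=lambda i: fit[i])
    return pop[i_best], fit[i_best]

random.seed(3)
best, val = de(lambda x: sum(v * v for v in x), [(-5, 5)] * 2)
print(val)  # near 0
```

AGDE-style variants replace the random base vector with guidance from top- and bottom-ranked members and adapt F/CR online, which is what the hybrid leverages to diversify SMA's population.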
Article
Cloud computing is a recently emerged paradigm, the aim of which is to provide on-demand, pay-as-you-go, internet-based access to shared computing resources (hardware and software) in a metered, self-service, dynamically scalable fashion. A related hot topic at the moment is task scheduling, which is well known for delivering critical cloud service performance. However, the dilemmas of resources being underutilized (underloaded) and overutilized (overloaded) may arise as a result of improper scheduling, which in turn leads to either wastage of cloud resources or degradation in service performance, respectively. Thus, the idea of incorporating meta-heuristic algorithms into task scheduling emerged in order to efficiently distribute complex and diverse incoming tasks (cloudlets) across available limited resources within a reasonable time. Meta-heuristic techniques have proven very capable of solving scheduling problems, which is fulfilled herein from a cloud perspective by first providing a brief on traditional and heuristic scheduling methods before diving deeply into the most popular meta-heuristics for cloud task scheduling, followed by a detailed systematic review featuring a novel taxonomy of those techniques, along with their advantages and limitations. More specifically, in this study, the basic concepts of cloud task scheduling are addressed smoothly, and diverse swarm, evolutionary, physical, emerging, and hybrid meta-heuristic scheduling techniques are categorized as per the nature of the scheduling problem (i.e., single- or multi-objective), the primary objective of scheduling, the task-resource mapping scheme, and the scheduling constraint. Armed with these methods, some of the most recent relevant literature is surveyed, and insights into the identification of existing challenges are presented, along with a trail to potential solutions. Furthermore, guidelines to future research directions drawn from recently emerging trends are outlined, which should contribute to assisting current researchers and practitioners as well as pave the way for newcomers excited about cloud task scheduling to pursue their own glory in the field.
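The core quantity most of the surveyed schedulers optimize is the makespan of a task-to-VM mapping; a minimal sketch with an earliest-finish-time greedy baseline (task lengths and VM speeds below are invented toy values):

```python
def makespan(mapping, task_len, vm_speed):
    """Makespan of a task-to-VM mapping: the finish time of the most
    loaded virtual machine."""
    load = [0.0] * len(vm_speed)
    for task, vm in enumerate(mapping):
        load[vm] += task_len[task] / vm_speed[vm]
    return max(load)

def greedy_schedule(task_len, vm_speed):
    """Heuristic baseline: assign each task to the VM that would finish it
    earliest given the load accumulated so far."""
    load = [0.0] * len(vm_speed)
    mapping = []
    for length in task_len:
        vm = min(range(len(vm_speed)),
                 key=lambda v: load[v] + length / vm_speed[v])
        load[vm] += length / vm_speed[vm]
        mapping.append(vm)
    return mapping

tasks = [40, 10, 30, 20, 50]   # task lengths (e.g., millions of instructions)
vms = [1.0, 2.0]               # VM speeds (e.g., MIPS)
m = greedy_schedule(tasks, vms)
print(makespan(m, tasks, vms))  # 60.0
```

A meta-heuristic scheduler explores the space of such mappings (2^5 here, |VMs|^|tasks| in general) with makespan, cost, or energy as the fitness function.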
Article
Recently, Manta ray foraging optimization (MRFO) has been developed and applied to a few engineering optimization problems. In this paper, an elegant approach based on MRFO integrated with the Gradient-Based Optimizer (GBO), named MRFO–GBO, is proposed to efficiently solve economic emission dispatch (EED) problems. The proposed MRFO–GBO aims to reduce the probability of the original MRFO getting trapped in local optima as well as to accelerate the solution process. The goal of solving the EED problem is to economically supply all required electrical loads while minimizing emission and satisfying the operating equality and inequality constraints. Single- and multi-objective EED problems are solved using the proposed MRFO–GBO and the classical MRFO. In multi-objective EED, fuzzy set theory is adopted to determine the best compromise solution among the Pareto optimal solutions. The proposed algorithm is first validated on the well-known CEC'17 test functions and then applied to solving several scenarios of EED problems for three electrical systems with 3, 5, and 6 generators. The validation is carried out under different load levels of the tested systems to prove the robustness of the proposed algorithm. The results obtained by the proposed MRFO–GBO are compared with those obtained by recently published optimization techniques as well as the original MRFO and GBO. The results illustrate the ability of the proposed MRFO–GBO to effectively solve single- and multi-objective EED problems in terms of precision, robustness, and convergence characteristics.
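The fuzzy-set selection of a best compromise among Pareto-optimal EED solutions typically uses linear membership functions; a hedged sketch of that standard scheme (the toy cost/emission pairs are invented, and the paper's exact membership definitions may differ):

```python
def best_compromise(pareto):
    """Fuzzy-set selection among Pareto solutions: each objective gets a
    linear membership in [0, 1] (1 at the objective's minimum, 0 at its
    maximum); pick the solution with the highest normalised membership sum."""
    n_obj = len(pareto[0])
    f_min = [min(p[k] for p in pareto) for k in range(n_obj)]
    f_max = [max(p[k] for p in pareto) for k in range(n_obj)]

    def mu(p):
        total = 0.0
        for k in range(n_obj):
            if f_max[k] == f_min[k]:
                total += 1.0
            else:
                total += (f_max[k] - p[k]) / (f_max[k] - f_min[k])
        return total

    scores = [mu(p) for p in pareto]
    total = sum(scores)
    return max(range(len(pareto)), key=lambda i: scores[i] / total)

# Toy Pareto front: (fuel cost, emission) pairs.
front = [(100.0, 9.0), (110.0, 6.0), (130.0, 5.0)]
print(best_compromise(front))  # 1 (the middle trade-off)
```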
Article
Arrhythmia is an abnormal heartbeat rhythm, and its prevalence increases with age. An electrocardiogram (ECG) is a standard tool for detecting cardiac activity. However, because of the low amplitude, complexity, and non-linearity of the ECG signal, it is difficult to perform a rapid and accurate classification manually. Therefore, an automatic system that can identify different abnormal heartbeats from a large amount of ECG data should be developed for use in the healthcare field. This study proposed an approach based on deep learning that combined convolutional neural networks (CNNs) and long short-term memory networks (LSTM) to automatically identify six types of ECG signals: normal (N) sinus rhythm segments, atrial fibrillation (AFIB), ventricular bigeminy (B), pacing rhythm (P), atrial flutter (AFL), and sinus bradycardia (SBR). The proposed network applied a multi-input structure to process 10-s ECG signal segments and the corresponding RR intervals from the MIT-BIH arrhythmia database. With a five-fold cross-validation strategy, this network achieved 99.32% accuracy. Then, the diversity of the subjects in the training data was increased by supplementing the database, improving the previous network model. The method was validated using two additional databases that are independent of the network's training database. For the new N and AFIB records in the additional databases, the proposed method achieved an average accuracy of 97.15%. The results showed that the proposed model had robust generalization performance and could be used as an auxiliary tool to help clinicians diagnose arrhythmia after training with a larger database.
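One of the network's two inputs, the RR-interval sequence, is simply the spacing between successive R peaks; a minimal sketch assuming the MIT-BIH sampling rate of 360 Hz (the peak indices below are invented):

```python
def rr_intervals(r_peaks, fs=360):
    """RR intervals (in seconds) from R-peak sample indices, assuming a
    sampling rate `fs` in Hz (360 Hz for the MIT-BIH database)."""
    return [(b - a) / fs for a, b in zip(r_peaks, r_peaks[1:])]

def heart_rate(rr):
    """Mean heart rate in beats per minute from RR intervals."""
    return 60.0 / (sum(rr) / len(rr))

peaks = [100, 460, 820, 1190]      # sample indices of detected R peaks
rr = rr_intervals(peaks)
print([round(v, 3) for v in rr])   # [1.0, 1.0, 1.028]
print(round(heart_rate(rr)))       # 59
```

Irregular RR sequences are the key cue for rhythms such as AFIB, which is why the authors feed them to the model alongside the raw 10-s segment.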
Article
Breast cancer is the second leading cause of death for women, so accurate early detection can help decrease breast cancer mortality rates. Computer-aided detection allows radiologists to detect abnormalities efficiently. Medical images are sources of information relevant to the detection and diagnosis of various diseases and abnormalities. Several modalities allow radiologists to study the internal structure, and these modalities have been met with great interest in several types of research. In some medical fields, each of these modalities is of considerable significance. This study aims at presenting a review that shows the new applications of machine learning and deep learning technology for detecting and classifying breast cancer and provides an overview of progress in this area. This review reflects on the classification of breast cancer utilizing multi-modality medical imaging. Details are also given on techniques developed to facilitate the classification of tumors, non-tumors, and dense masses in various medical imaging modalities. It first provides an overview of the different approaches to machine learning, then an overview of the different deep learning techniques and specific architectures for the detection and classification of breast cancer. We also provide a brief overview of the different image modalities to give a complete overview of the area. In the same context, this review was performed using a broad variety of research databases as a source of information for access to various field publications. Finally, this review summarizes the future trends and challenges in the classification and detection of breast cancer.
Article
Financial analysis of the stock market using historical data is an exigent demand in business and academia. This work explores the efficiency of three deep learning (DL) techniques, namely Bayesian regularization (BR), Levenberg–Marquardt (LM), and scaled conjugate gradient (SCG), for training nonlinear autoregressive artificial neural networks (NARX) for predicting specifically the closing price of the Egyptian Stock Exchange indices (EGX-30, EGX-30-Capped, EGX-50-EWI, EGX-70, EGX-100, and NILE). An empirical comparison is established among the experimented prediction models, considering all techniques for time horizons of 1 day, 3 days, 5 days, 7 days, 15 days, and 30 days in advance, applied to all the datasets used in this study. For performance evaluation, statistical measures such as the mean squared error (MSE) and correlation R are used. From the simulation results, it can clearly be suggested that BR outperforms the other models for short-term prediction, especially for 3 days ahead. On the other hand, LM generates better prediction accuracy than the BR- and SCG-based models for long-term prediction, especially for 7-day prediction.
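The two evaluation measures used in that comparison, MSE and correlation R, can be computed directly (the short price series below is invented for illustration):

```python
import math

def mse(y_true, y_pred):
    """Mean squared error between actual and predicted series."""
    return sum((a - b) ** 2 for a, b in zip(y_true, y_pred)) / len(y_true)

def corr_r(y_true, y_pred):
    """Pearson correlation coefficient R between actual and predicted series."""
    n = len(y_true)
    mx, my = sum(y_true) / n, sum(y_pred) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(y_true, y_pred))
    sx = math.sqrt(sum((a - mx) ** 2 for a in y_true))
    sy = math.sqrt(sum((b - my) ** 2 for b in y_pred))
    return cov / (sx * sy)

actual = [10.0, 11.0, 12.5, 12.0, 13.0]      # invented closing prices
predicted = [10.2, 10.8, 12.1, 12.3, 13.2]   # invented model outputs
print(round(mse(actual, predicted), 4))      # 0.074
print(round(corr_r(actual, predicted), 4))   # 0.9684
```

A lower MSE and an R closer to 1 indicate a better-fitting prediction model, which is how the three training techniques are ranked per horizon.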
Article
In an organization, a group of people working for a common goal may not achieve their goal unless they organize themselves in a hierarchy called the Corporate Rank Hierarchy (CRH). This principle motivates us to map the concept of CRH to propose a new algorithm for optimization that logically arranges the search agents in a hierarchy based on their fitness. The proposed algorithm is named the heap-based optimizer (HBO) because it utilizes the heap data structure to map the concept of CRH. The mathematical model of HBO is built on three pillars: the interaction between subordinates and their immediate boss, the interaction between colleagues, and the self-contribution of the employees. The proposed algorithm is benchmarked with 97 diverse test functions, including 29 CEC-BC-2017 functions with very challenging landscapes, against 7 highly cited optimization algorithms, including the winner of CEC-BC-2017 (EBO-CMAR). In the first two experiments, the exploitative and explorative behavior of HBO is evaluated by using 24 unimodal and 44 multimodal functions, respectively. It is shown through experiments and the Friedman mean rank test that HBO outperforms the competitors and secures the 1st rank. In the third experiment, we use 29 CEC-BC-2017 benchmark functions. According to the Friedman mean rank test, HBO attains the 2nd position after EBO-CMAR; however, the difference in ranks of HBO and EBO-CMAR is shown to be statistically insignificant by the Bonferroni-method-based multiple comparison test. Moreover, it is shown through the Friedman test that the overall rank of HBO is 1st for all 97 benchmarks. In the fourth and last experiment, applicability to real-world problems is demonstrated by solving 3 constrained mechanical engineering optimization problems. The performance is shown to be superior or equivalent to the other algorithms used in the literature. The source code of HBO is publicly available at https://github.com/qamar-askari/HBO.
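The CRH mapping at the heart of HBO can be sketched with an array-encoded heap, where the "boss" of the agent at index i sits at (i - 1) // 2; a fitness-sorted array trivially satisfies the min-heap property (agent names and fitness values below are invented, and this is only the hierarchy-building step, not HBO's position-update equations):

```python
def build_hierarchy(agents):
    """Agents sorted by fitness form an array-encoded heap: position 0 is
    the leader; the boss of the agent at index i is at (i - 1) // 2."""
    ranked = sorted(agents, key=lambda a: a[1])  # (name, fitness), lower = better

    def boss(i):
        return ranked[(i - 1) // 2][0] if i > 0 else None

    return ranked, boss

agents = [("A", 3.2), ("B", 0.5), ("C", 1.8), ("D", 2.4), ("E", 0.9)]
ranked, boss = build_hierarchy(agents)
print([name for name, _ in ranked])  # ['B', 'E', 'C', 'D', 'A']
print(boss(3))                       # 'E' — D's immediate boss in the heap
```

In HBO each agent then updates its position relative to this heap parent (the "immediate boss"), its siblings ("colleagues"), and its own current state.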
Article
In this paper, we propose a new metaheuristic algorithm based on Lévy flight, called Lévy flight distribution (LFD), for solving real optimization problems. The LFD algorithm is inspired by the Lévy flight random walk for exploring unknown large search spaces, e.g., wireless sensor networks (WSNs). To assess the performance of the LFD algorithm, various optimization test-bed problems are considered, namely the Congress on Evolutionary Computation (CEC) 2017 suite and three engineering optimization problems: the tension/compression spring, the welded beam, and the pressure vessel. The statistical simulation results revealed that the LFD algorithm provides better results with superior performance in most tests compared to several well-known metaheuristic algorithms such as simulated annealing (SA), differential evolution (DE), particle swarm optimization (PSO), elephant herding optimization (EHO), the genetic algorithm (GA), the moth-flame optimization algorithm (MFO), the whale optimization algorithm (WOA), the grasshopper optimization algorithm (GOA), and the Harris Hawks Optimization (HHO) algorithm. Furthermore, the performance of the LFD algorithm is tested on other optimization problems with unknown large search spaces, such as the area coverage problem in WSNs, where it provides a better deployment schema than the energy-efficient connected dominating set (EECDS), A3, and CDS-Rule topology construction algorithms. The LFD algorithm performs successfully, achieving a high coverage rate of up to 43.16%, while the A3, EECDS, and CDS-Rule algorithms achieve coverage rates of only up to 40% for the network sizes used in the simulation experiments. The LFD algorithm thus succeeds in providing a better deployment schema than the A3, EECDS, and CDS-Rule algorithms and in enhancing the detection capability of WSNs by minimizing the overlap between sensor nodes and maximizing the coverage rate. The source code is currently available to the public at: https://www.mathworks.com/matlabcentral/fileexchange/76103-lfd.
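Lévy-flight steps like those driving LFD are commonly generated with Mantegna's algorithm; a hedged sketch of that standard scheme (the paper's exact step-generation details may differ; beta = 1.5 is a typical choice):

```python
import math, random

def levy_step(beta=1.5):
    """One Lévy-flight step length via Mantegna's algorithm: mostly small
    moves with occasional very long jumps (a heavy-tailed distribution)."""
    sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
             / (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))
             ) ** (1 / beta)
    u = random.gauss(0, sigma)
    v = random.gauss(0, 1)
    return u / abs(v) ** (1 / beta)

random.seed(0)
steps = [levy_step() for _ in range(10000)]
median = sorted(abs(s) for s in steps)[len(steps) // 2]
ratio = max(abs(s) for s in steps) / median
print(ratio)  # heavy tail: far larger than 10
```

The rare long jumps are what let a Lévy-flight search escape local regions and cover large unknown spaces, which is the property LFD exploits for WSN area coverage.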
Article
This study assessed the feasibility of using non-standardized single-lead electrocardiogram (ECG) monitoring to automatically detect atrial fibrillation (AF), with special emphasis on the combination of a deep learning-based algorithm and a modified patch-based ECG lead. Fifty-five consecutive patients were monitored for AF for around 24 hours by patch-based ECG devices along with a standard 12-lead Holter. To cater for potential positional variability of the patch lead, four typical positions on the upper-left chest were proposed. For each patch lead, the performance of automated algorithms with four different convolutional neural networks (CNN) was evaluated for AF detection against blinded annotations of two clinicians. A total of 349,388 10-second segments of AF and 161,084 segments of sinus rhythm were detected successfully. Good agreement between patch-based single-lead and standard 12-lead recordings was obtained at position MP1, which corresponds to modified lead II, and a promising performance of the automated algorithm with an R-R-interval-based CNN model was achieved on this lead in terms of accuracy (93.1%), sensitivity (93.1%), and specificity (93.4%). The present results suggest that the optimized patch-based ECG lead combined with deep learning-based algorithms may offer the possibility of providing an accurate, easy, and inexpensive clinical tool for mass screening of AF.
Article
This paper proposes a novel global optimization algorithm called Political Optimizer (PO), inspired by the multi-phased process of politics. PO is the mathematical mapping of all the major phases of politics, such as constituency allocation, party switching, election campaigns, inter-party elections, and parliamentary affairs. The proposed algorithm assigns each solution a dual role by logically dividing the population into political parties and constituencies, which facilitates each candidate to update its position with respect to the party leader and the constituency winner. Moreover, a novel position updating strategy called the recent past-based position updating strategy (RPPUS) is introduced, which is the mathematical modeling of the learning behaviors of politicians from the previous election. The proposed algorithm is benchmarked with 50 unimodal, multimodal, and fixed-dimensional functions against 15 state-of-the-art algorithms. We show through experiments that PO has an excellent convergence speed with good exploration capability in early iterations. The root cause of this behavior is the incorporation of RPPUS and the logical division of the population to assign a dual role to each candidate solution. Using the Wilcoxon rank-sum test, PO demonstrates statistically significant performance over the other algorithms. The results show that PO outperforms all other algorithms, and its consistent performance on such a comprehensive suite of benchmark functions proves the versatility of the algorithm. Furthermore, experiments demonstrate that PO is invariant to function shifting and performs consistently in very high-dimensional search spaces. Finally, applicability to real-world applications is demonstrated by efficiently solving four engineering optimization problems.
Article
In classification, regression, and other data mining applications, feature selection (FS) is an important pre-processing step which helps avoid the adverse effect of noisy, misleading, and inconsistent features on model performance. Formulating it as a global combinatorial optimization problem, researchers have employed metaheuristic algorithms for selecting the prominent features to simplify and enhance the quality of high-dimensional datasets, in order to devise efficient knowledge extraction systems. However, when employed on datasets with extensively large feature sizes, these methods often suffer from the local optimality problem due to the considerably large solution space. In this study, we propose a novel approach to dimensionality reduction by using the Henry gas solubility optimization (HGSO) algorithm for selecting significant features, to enhance the classification accuracy. By employing several datasets with a wide range of feature sizes, from small to massive, the proposed method is evaluated against well-known metaheuristic algorithms including the grasshopper optimization algorithm (GOA), whale optimization algorithm (WOA), dragonfly algorithm (DA), grey wolf optimizer (GWO), salp swarm algorithm (SSA), and others from recent relevant literature. We used the k-nearest neighbor (k-NN) and support vector machine (SVM) classifiers as expert systems to evaluate the selected feature sets. Wilcoxon's rank-sum non-parametric statistical test was carried out at the 5% significance level to judge whether the results of the proposed algorithm differ from those of the other compared algorithms in a statistically significant way. Overall, the empirical analysis suggests that the proposed approach is significantly effective on low- as well as considerably high-dimensional datasets, producing 100% accuracy on classification problems with more than 11,000 features.
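Wrapper FS methods like the HGSO approach typically minimize a weighted sum of classification error and the fraction of features kept; a minimal sketch of that common objective (alpha = 0.99 is a widely used weight in the FS literature, not necessarily the paper's; the masks and error rates below are invented):

```python
def fs_fitness(mask, error_rate, alpha=0.99):
    """Common wrapper-FS objective (to be minimised): a weighted sum of the
    classifier's error rate and the ratio of selected features."""
    ratio = sum(mask) / len(mask)           # fraction of features kept
    return alpha * error_rate + (1 - alpha) * ratio

# A subset with slightly higher error but far fewer features can still win.
full = [1] * 20                             # all 20 features selected
small = [1, 0, 1] + [0] * 17                # only 2 features selected
print(round(fs_fitness(full, error_rate=0.08), 4))    # 0.0892
print(round(fs_fitness(small, error_rate=0.081), 4))  # 0.0812
```

In the paper's setup, the error rate would come from a k-NN or SVM wrapper evaluated on the candidate subset, and HGSO searches over the binary masks.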