Article

The parallel genetic algorithm as function optimizer. Parallel Computing 17(6-7):619-632

Authors:
  • Heinz Mühlenbein (Fraunhofer AiS)
  • M. Schomisch
  • J. Born

Abstract

In this paper, the parallel genetic algorithm PGA is applied to the optimization of continuous functions. The PGA uses a mixed strategy. Subpopulations try to locate good local minima. If a subpopulation does not progress after a number of generations, hillclimbing is done. Good local minima of a subpopulation are diffused to neighboring subpopulations. Many simulation results are given with popular test functions. The PGA is at least as good as other genetic algorithms on simple problems. A comparison with mathematical optimization methods is done for very large problems. Here a breakthrough can be reported. The PGA is able to find the global minimum of Rastrigin's function of dimension 400 on a 64 processor system! Furthermore, we give an example of a superlinear speedup.
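As a rough illustration of the mixed strategy the abstract describes (independent subpopulations evolving in parallel, hill-climbing once a subpopulation stagnates, and diffusion of good solutions to neighbouring subpopulations), the following minimal Python sketch runs an island-model GA on the Rastrigin function. It is not the authors' implementation: the islands are iterated sequentially rather than on separate processors, and all parameter values and function names are illustrative.

    import math
    import random

    def rastrigin(x):
        # Generalized Rastrigin function; global minimum 0 at x = 0.
        return 10 * len(x) + sum(xi * xi - 10 * math.cos(2 * math.pi * xi) for xi in x)

    def hillclimb(x, f, step=0.05, iters=200):
        # Local refinement used when a subpopulation has stopped progressing.
        best, best_f = list(x), f(x)
        for _ in range(iters):
            cand = [xi + random.gauss(0, step) for xi in best]
            cf = f(cand)
            if cf < best_f:
                best, best_f = cand, cf
        return best

    def evolve(pop, f, sigma=0.1):
        # One generation of a very simple GA: binary tournament selection plus Gaussian mutation.
        out = []
        for _ in pop:
            a, b = random.sample(pop, 2)
            parent = a if f(a) < f(b) else b
            out.append([xi + random.gauss(0, sigma) for xi in parent])
        return out

    def parallel_ga(dim=10, n_islands=4, pop_size=20, generations=300, stall_limit=20):
        islands = [[[random.uniform(-5.12, 5.12) for _ in range(dim)]
                    for _ in range(pop_size)] for _ in range(n_islands)]
        best_f = [float("inf")] * n_islands
        stall = [0] * n_islands
        for _ in range(generations):
            for i in range(n_islands):
                islands[i] = evolve(islands[i], rastrigin)
                cur = min(rastrigin(ind) for ind in islands[i])
                if cur < best_f[i]:
                    best_f[i], stall[i] = cur, 0
                else:
                    stall[i] += 1
                if stall[i] >= stall_limit:
                    # Stagnation: refine the island's best individual by hill-climbing.
                    best_ind = min(islands[i], key=rastrigin)
                    islands[i][0] = hillclimb(best_ind, rastrigin)
                    stall[i] = 0
            # Diffusion: every island sends its best individual to its neighbour on a ring.
            bests = [min(pop, key=rastrigin) for pop in islands]
            for i in range(n_islands):
                islands[(i + 1) % n_islands][-1] = list(bests[i])
        return min((min(pop, key=rastrigin) for pop in islands), key=rastrigin)

    if __name__ == "__main__":
        best = parallel_ga()
        print("best Rastrigin value found:", rastrigin(best))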

... While the general framework based on local and global search is kept similar to that of EFO, some modifications have been made to the base algorithm in order to adapt it for MOOPs (1)(2). Furthermore, taking into account the more complex search space of such problems, some changes are applied in order to strengthen the exploration and exploitation capabilities of the algorithm (3)(4) and to reduce its complexity (5). The modifications to the EFO algorithm are summarized in the following: ...
... In the area of optimization, the cellular-based approach emerged from the parallelization of genetic algorithms [5,6]. This approach places every node, each corresponding to an individual, in a toroidal mesh topology; the edges directly connected to a node represent the one-hop neighbors of that individual. ...
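The toroidal mesh mentioned in the excerpt above simply means that grid indices wrap around at the edges, so every individual has the same number of one-hop neighbours. A generic sketch (not code from the cited works) of such a neighbourhood lookup:

    def torus_neighbors(row, col, rows, cols):
        # One-hop (von Neumann) neighbours on a toroidal grid: indices wrap around the edges.
        return [((row - 1) % rows, col),   # up
                ((row + 1) % rows, col),   # down
                (row, (col - 1) % cols),   # left
                (row, (col + 1) % cols)]   # right

    # Example: the corner cell (0, 0) of a 5x5 torus is linked to (4, 0), (1, 0), (0, 4) and (0, 1).
    print(torus_neighbors(0, 0, 5, 5))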
Chapter
Multiobjective optimization problems (MOOPs) require optimizing two or more, often conflicting objectives. The wide application of MOOPs has attracted the attention of researchers in academics and industry; therefore, a great deal of effort has been made to develop effective approaches toward solving MOOPs. In this chapter, we introduce a new metaheuristic approach called multiobjective electric fish optimization (MOEFO). The proposed approach is based on the Electric Fish Optimization (EFO) algorithm, a recently proposed metaheuristic algorithm for single-objective problems. Since EFO has achieved significant performance on solving different types of problems such as constrained and unconstrained problems, it is extended here for solving MOOPs efficiently. The proposed approach is compared with well-known meta-heuristics in the literature, and the experimental results show that MOEFO is among the best algorithms for solving MOOPs within a competitive running time. Moreover, it becomes very competitive for solving challenging Many-objective optimization problems (MaOPs) having four or more objectives.
... Approaches to overcome these issues include optimizing the GA parameters: using fuzzy logic [8], design of experiments (DOE) [9]- [11], or using machine learning [12]. Other approaches include parallel versions of GAs (PGA) like fine-grained PGA [13], [14], and coarse-grained PGA [7], [15]. However, as computation power increases, successively larger benchmark problem instances are now appearing in the literature, and marshaling the resources to execute complex algorithms on populations involving thousands of generations is not easy. ...
... Set index = (thread row index, thread column index); set auxiliary_arr[row index, node ID] = population[index]; return auxiliary_arr. Function derive_missing_nodes(auxiliary_arr): for each thread: ...
Conference Paper
Full-text available
The Vehicle Routing Problem (VRP) is fundamental to logistics operations. Finding optimal solutions for VRPs related to large, real-world operations is computationally expensive. Genetic algorithms (GA) have been used to find good solutions for different types of VRPs but are slow to converge. This work utilizes high-performance computing (HPC) platforms to design a parallel GA (PGA) algorithm for solving large-scale VRP problems. The algorithm is implemented on an eight-GPU NVIDIA DGX-1 server. Maximum parallelism is achieved by mapping all algorithm arrays into block threads to achieve high throughput and reduced latency for full GPU utilization. Tests with VRP benchmark problems of up to 20,000 nodes compare the algorithm performance (speed) with different GPU counts and a multi-CPU implementation. The developed algorithm provides the following improvements over CPU or single-GPU-based algorithms: (i) larger problem sizes up to 20,000 nodes are handled, (ii) execution time is reduced over the CPU by a factor of 1,700, and (iii) for the range tested, the performance increases monotonically with the number of GPUs.
... With the "pbest-2" strategy, the position of the new individual in Figure 2c is also in the center of the selected individuals, this makes our algorithm utilize more individual sampling information. As shown in Figure In order to further reflect the different characteristics of the five strategies in seeking optimal solutions, we performed an experiment on the Rastrgin function [32] under the same conditions. The formula of Rastrigin is as follows, and its dimension was set to 2: ...
Article
Full-text available
As a newly developed metaheuristic algorithm, the artificial bee colony (ABC) has garnered a lot of interest because of its strong exploration ability and easy implementation. However, its exploitation ability is poor and dramatically deteriorates for high-dimension and/or non-separable functions. To fix this defect, a self-adaptive ABC with a candidate strategy pool (SAABC-CS) is proposed. First, several search strategies with different features are assembled in the strategy pool. The top 10% of the bees make up the elite bee group. Then, we choose an appropriate strategy and implement this strategy for the present population according to the success rate learning information. Finally, we simultaneously implement some improved neighborhood search strategies in the scout bee phase. A total of 22 basic benchmark functions and the CEC2013 set of tests were employed to prove the usefulness of SAABC-CS. The impact of combining the five methods and the self-adaptive mechanism inside the SAABC-CS framework was examined in an experiment with 22 fundamental benchmark problems. In the CEC2013 set of tests, the comparison of SAABC-CS with a number of state-of-the-art algorithms showed that SAABC-CS outperformed these widely-used algorithms. Moreover, despite the increasing dimensions of CEC2013, SAABC-CS was robust and offered a higher solution quality.
... This demonstrates SCM's greater ability in generalization over a range of different data. R-DB8's task is to model the Rastrigin function, which is often used to test optimization algorithms [58]. It can be seen in the table that SCM outperforms all other methods significantly in solving this complex problem. ...
Preprint
Full-text available
Real-time predictive modelling with desired accuracy is highly expected in industrial artificial intelligence (IAI), where neural networks play a key role. Neural networks in IAI require powerful, high-performance computing devices to operate on large amounts of floating-point data. Based on stochastic configuration networks (SCNs), this paper proposes a new randomized learner model, termed stochastic configuration machines (SCMs), to stress effective modelling and data size saving that are useful and valuable for industrial applications. Compared to SCNs and random vector functional-link (RVFL) nets with binarized implementation, the model storage of SCMs can be significantly compressed while retaining favourable prediction performance. Besides the architecture of the SCM learner model and its learning algorithm, as an important part of this contribution, we also provide a theoretical basis on the learning capacity of SCMs by analysing the model's complexity. Experimental studies are carried out over some benchmark datasets and three industrial applications. The results demonstrate that SCMs have great potential for dealing with industrial data analytics.
... At the same time, the dynamical exploration process deals with the case where a premature convergence might have occurred despite the diversity introduced by the distance-dependent mutation. More details can be found, e.g., in Grefenstette [15], Mühlenbein et al. [38], Sefrioui [50]. ...
Chapter
Full-text available
This article reviews the major improvements in efficiency and quality of evolutionary multi-objective and multi-disciplinary design optimization techniques achieved during 1994–2021. First, we briefly introduce Evolutionary Algorithms (EAs) of increasing complexity as accelerated optimizers. After that, we introduce the hybridization of EAs with game strategies to gain higher efficiency. We review a series of papers where this technique is considered an accelerator of multi-objective optimizers and benchmarked on simple mathematical functions and simple aeronautical model optimization problems using friendly design frameworks. Results from numerical examples from real-life design applications related to aeronautics and civil engineering, with the chronologically improved EA models and hybridized game EAs, are listed, briefly summarized and discussed. This article aims to provide young scientists and engineers with a review of the development of numerical optimization methods and results in the field of EA-based design optimization, which can be further improved by, e.g., tools of artificial intelligence and machine learning. Keywords: Single/multi-disciplinary design optimization; Evolutionary algorithms; Game strategies; Hybridized games; Aeronautics; Civil engineering
... Later, it was extended to other metaheuristic algorithms and, in particular, to genetic algorithms. The efficiency and effectiveness of distributed genetic algorithms were demonstrated by Mühlenbein et al. [12], who applied a parallel, distributed version of the GA to different function optimization problems that are commonly employed to test the performance of optimization algorithms. ...
Article
Full-text available
Active debris removal missions require accurate planning to maximize mission payout by reaching the maximum number of potential orbiting targets in a given region of space. Such a problem is known to be computationally demanding, and the present paper provides a technique for preliminary mission planning based on a novel evolutionary optimization algorithm, which identifies the best sequence of debris to be captured and/or deorbited. An original archipelago structure is adopted to improve the algorithm's ability to explore the search space. Several crossover and mutation operators and migration schemes are also tested in order to identify the best set of algorithm parameters for the considered class of optimization problems. The algorithm is numerically tested for a fictitious cloud of debris in the neighbourhood of a Sun-synchronous orbit, including cases with multiple chasers.
... We identify the Rastrigin function [45], which has long been known as a representative example through which the performance of a multitude of solving techniques can be compared [46]. Considering the unique nature of the problem, which contains a large number of variables defining the objective and constraints, we expand the Rastrigin function such that n is a sufficiently large number: ...
Preprint
Full-text available
There is no question that electric vehicles (EVs) are the most viable solution to the climate change that the planet has long been combating. Along the same lines, expanding the availability of charging infrastructure is a salient subject, which necessitates the optimization of charger locations. This paper proposes to formulate the optimal EV charger location problem as a facility location problem (FLP). As an effort to find an efficient method to solve this well-known NP-hard (nondeterministic polynomial-time hard) problem, we present a comparative quantification among several representative solving techniques.
... Our implementation of the SA algorithm incorporating reheating (Sect. B.2.3) was tested against a classical gradient descent minimisation algorithm on the Rastrigin function (Mühlenbein et al. 1991). This function is a punitive 2D cost function used in optimisation development as a stress test for any global minimum search. ...
Article
Full-text available
Context. Recent instrumental developments have aimed to build large digital radio telescopes made of ~100k antennas. The massive data rate required to digitise all elements drives the instrumental design towards the hierarchical distribution of elements by groups of 𝒪 (10) that form small analogue phased arrays that lower the computational burden by one to two orders of magnitude. Aims. We study possible optimal layouts for a tile composed of five to 22 identical elements. We examine the impact of the tile layout on the overall response of an instrument. Methods. We used two optimisation algorithms to find optimal arrangements of elements in the tile using: (i) a deterministic method (Kogan) based on beam pattern derivative properties; and (ii) a stochastic method (modified simulated annealing) to find global optima minimising the side-lobe level while increasing the field of view (FOV) of the tile, a required condition for all-sky surveys. Results. We find that optimal tile arrangements are compact circular arrays that present some degree of circular symmetry while not being superimposable to any rotated version of themselves. The ‘optimal’ element number is found to be 16 or 17 antennas per tile. These could provide a maximum side-lobe level (SLL) of −33 dB (−24 dB) used with dipole (isotropic) elements. Due to constraints related to the analogue phasing implementation, we propose an approaching solution but with a regular arrangement on an equilateral lattice with 19 elements. By introducing random relative rotations between tiles, we compared and found that the 19-element equilateral tile results in better grating lobe mitigation and a larger FOV than that of rectangular tiles of 16 antennas. Conclusions. Optimal tile arrangements and their regular versions are useful to maximise the sensitivity of new-generation hierarchical radio telescopes. The proposed solution was implemented in NenuFAR, a pathfinder of SKA-LOW at the Nançay Radio Observatory.
... It is continuous, differentiable, separable and multi-modal. This function is given as: The Rastrigin function [25] is a typical multi-modal test function, whose d-dimensional (d = 1, 2, ...) variant is defined as: ...
Article
Full-text available
" ANNALES UNIVERSITATIS SCIENTIARUM BUDAPESTINENSIS DE ROLANDO EOTVOS NOMINATAE SECTIO COMPUTATORICA, 53. pp. 175-199. ISSN 0138-9491" Features of several well-known global and local optimization methods were discussed and their robustness and efficiency were tested on several artificial test functions in Matlab environment. The tested local methods were the interior-point, the quasi-Newton method, Nelder–Mead simplex, the pattern search, the NEWUOA and the BOBYQA methods. The global methods were the genetic algorithm (GA), the simulated annealing (SA), the particle swarm optimization (PSO), and the covariance matrix adaptation evolutionary strategy (CMA-ES) methods (see subsections 2.2 and 2.3 for their details). Furthermore, a novel global optimization method, called FOCusing robusT Optimization with Uncertainty-based Sampling (FOCTOPUS), which proved to be very efficient in the optimization of constrained and highly correlated parameters of combustion kinetic models, was also tested. The test functions were selected in such a way that they had a variety of features: uni-modal and multi-modal, differentiable and non-differentiable, separable and non-separable, low dimensional and high dimensional. The following test functions were used: the 20D Alpine function, the 4D Ackley function, the Cross-in-tray 2D function, the Hartmann-6D function, the Holder table 2D function, the 5D Rastrigin function, the 5D Rosenbrock function, the 4D modified Rosenbrock function, the 20D Zakharov function and a typical 2D multi-modal function. The general conclusion here is that, the global methods performed well on the multi-modal and high-dimensional test functions while the local methods were superior in the case of low-dimensional and unimodal test functions. For the highly multi-modal test functions, the GA was better than all the other methods. The FOCTOPUS method proved to be inferior to GA for most of the test functions, thus its application cannot be generally recommended.
... Formal description of the PGA According to (Mühlenbein, Schomisch, and Born, 1991), the PGA is a black-box solver which can be applied to the following class of problems: ...
Thesis
This thesis presents the supervision, analysis and optimization of power distribution systems considering the penetration of distributed energy resources and energy storage systems. Power distribution system planning is becoming an increasingly important issue due to the deregulation of the power industry, environmental policy changes, the introduction of new technologies and the transformation towards a smart power distribution grid. In consequence, the use of modeling and numerical evaluation tools is getting more attention from system planners and operators. This has resulted in the development of a real-time experimental platform belonging to the Bank of the Energy concept. The platform covers all aspects of the challenges of future power system requirements related to the optimization of local energy production and consumption. A hardware/software setup with emphasis on the utilization of real-time simulation and hardware-in-the-loop testing, together with some typical reference applications, is described. Additionally, a novel methodology is proposed for battery energy storage system (BESS) integration over a real distribution system using the parallel computing capabilities of the experimental platform. The placement of BESS is performed by a sensitivity analysis, while the output power rating sizing is deployed using a genetic algorithm. The outcomes of this methodology demonstrate the effectiveness of the proposed parallelization technique and show that voltage profile improvement and losses reduction are possible by introducing the BESS into the system.
... A 2-dimensional modified Rastrigin function used by Echard et al. [48] was selected to document it. It is a modification of the Rastrigin function, which is a standard benchmark for optimization algorithms [111]. This modification features regions with both positive and negative values of the function, which are interpreted as safe and failure events, respectively. ...
Article
The paper presents a new efficient and robust method for rare event probability estimation for computational models of an engineering product or a process returning categorical information only, for example, either success or failure. For such models, most of the methods designed for the estimation of failure probability, which use the numerical value of the outcome to compute gradients or to estimate the proximity to the failure surface, cannot be applied. Even if the performance function provides more than just binary output, the state of the system may be a non-smooth or even a discontinuous function defined in the domain of continuous input variables. This often happens because the mathematical model features non-smooth components or discontinuities (e.g., in the constitutive laws), bifurcations, or even domains in which no reasonable model response is obtained. In these cases, the classical gradient-based methods usually fail. We propose a simple yet efficient algorithm, which performs a sequential adaptive selection of points from the input domain of random variables to extend and refine a simple distance-based surrogate model. Two different tasks can be accomplished at any stage of sequential sampling: (i) estimation of the failure probability, and (ii) selection of the best possible candidate for the subsequent model evaluation if further improvement is necessary. The proposed criterion for selecting the next point for model evaluation maximizes the expected probability classified by using the candidate. Therefore, the perfect balance between global exploration and local exploitation is maintained automatically. If there are more rare events such as failure modes, the method can be generalized to estimate the probabilities of all these event types. Moreover, when the numerical value of model evaluation can be used to build a smooth surrogate, the algorithm can accommodate this information to increase the accuracy of the estimated probabilities. Lastly, we define a new simple yet general geometrical measure of the global sensitivity of the rare-event probability to individual variables, which is obtained as a by-product of the proposed refinement algorithm.
... To verify the feasibility and effectiveness of the proposed method, four typical benchmark functions [44], the Rastrigin, Sphere, Ackley and Griewank functions, are used to test the efficiency of the proposed mechanical parameter inversion model, and the results are compared with the PSO-RBFNN and QPSO-RBFNN models. The functions of Rastrigin [F 1 (x)], Sphere [F 2 (x)], Ackley [F 3 (x)] and Griewank [F 4 (x)] can be expressed as follows [55,56]: ...
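The excerpt above ends before the formulas; for orientation, the standard forms of the four benchmarks it names can be sketched in Python as follows (these are the usual textbook definitions, each with global minimum 0 at the origin, and may differ from the exact parameterization used in the cited paper):

    import math

    def rastrigin(x):
        # Multi-modal, separable.
        return 10 * len(x) + sum(xi ** 2 - 10 * math.cos(2 * math.pi * xi) for xi in x)

    def sphere(x):
        # Uni-modal, convex.
        return sum(xi ** 2 for xi in x)

    def ackley(x):
        # Multi-modal with a nearly flat outer region and a deep central funnel.
        d = len(x)
        s1 = sum(xi ** 2 for xi in x) / d
        s2 = sum(math.cos(2 * math.pi * xi) for xi in x) / d
        return -20 * math.exp(-0.2 * math.sqrt(s1)) - math.exp(s2) + 20 + math.e

    def griewank(x):
        # Multi-modal with many regularly distributed local minima.
        s = sum(xi ** 2 for xi in x) / 4000.0
        p = math.prod(math.cos(xi / math.sqrt(i + 1)) for i, xi in enumerate(x))
        return s - p + 1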
Article
Full-text available
The mechanical parameter inversion model is an essential part of ensuring dam health; it provides a parametric basis for assessing the safe operational behavior of dams using numerical simulation techniques. Due to the complicated nonlinear mapping relationship between the roller compacted concrete (RCC) dam's mechanical parameters and various environmental quantities, conventional statistical models, machine learning methods, and neural networks fail to consider the inputs of fuzzy uncertainty factors. Therefore, the accuracy, efficiency, and stability of inversion models are usually affected by their modeling methods. In this paper, a novel hybrid model for mechanical parameter inversion of an RCC dam is proposed, which uses a radial basis function neural network (RBFNN) to establish the nonlinear mapping relationship between the dam mechanical parameters and the environmental quantities, and a modified particle swarm optimization (PSO) algorithm is used to find the optimal parameters of the model. The modified PSO algorithm makes the inertia weight ω adjust dynamically with the number of iterations to improve the randomness and diversity of the particle population, and population crossover and mutation are introduced to improve the global search ability and convergence speed of the algorithm. The proposed hybrid model is verified and comparatively analyzed with four typical mathematical test functions, and the results show that the proposed model exhibits good performance in parameter inversion accuracy, convergence speed, stability and robustness. Finally, the model is applied to the mechanical parameter inversion analysis of an RCC gravity dam in Henan Province, China. The results show that the proposed model is feasible and reasonable for practical engineering applications, and the relative error between the results obtained by inputting the inverted parameters into the numerical model and the monitoring data was within 10%. The methodology derived from this study can provide technical support and a reference for the mechanical parameter inversion analysis of similar dam projects.
... A 2-dimensional modified Rastrigin function used by Echard et al. [48] was selected to document it. It is a modification of the Rastrigin function, which is a standard benchmark for optimization algorithms [111]. This modification features regions with both positive and negative values of the function, which are interpreted as safe and failure events, respectively. ...
Preprint
Full-text available
The paper presents a new efficient and robust method for rare event probability estimation for computational models of an engineering product or a process returning categorical information only, for example, either success or failure. For such models, most of the methods designed for the estimation of failure probability, which use the numerical value of the outcome to compute gradients or to estimate the proximity to the failure surface, cannot be applied. Even if the performance function provides more than just binary output, the state of the system may be a non-smooth or even a discontinuous function defined in the domain of continuous input variables. In these cases, the classical gradient-based methods usually fail. We propose a simple yet efficient algorithm, which performs a sequential adaptive selection of points from the input domain of random variables to extend and refine a simple distance-based surrogate model. Two different tasks can be accomplished at any stage of sequential sampling: (i) estimation of the failure probability, and (ii) selection of the best possible candidate for the subsequent model evaluation if further improvement is necessary. The proposed criterion for selecting the next point for model evaluation maximizes the expected probability classified by using the candidate. Therefore, the perfect balance between global exploration and local exploitation is maintained automatically. The method can estimate the probabilities of multiple failure types. Moreover, when the numerical value of model evaluation can be used to build a smooth surrogate, the algorithm can accommodate this information to increase the accuracy of the estimated probabilities. Lastly, we define a new simple yet general geometrical measure of the global sensitivity of the rare-event probability to individual variables, which is obtained as a by-product of the proposed algorithm.
... It is designed to optimize stochastic, multi-objective, and high-dimensional black-box problems. We show that GEO outperforms baseline methods in finding Pareto fronts of Styblinski-Tang [42], Ackley [39], Rastrigin [36][37] [38], Rosenbrock [40][41], ZDT1, ZDT2, and ZDT3 [43] test functions. Also, by converting Cartpole-V1 [44] (and LeNet-5 [45]) to high-dimensional black-box problems, we show that GEO can generate optimized sequences (and images) without observing intermediate states (and without the direct gradient flow from target functions). ...
Preprint
Many scientific and technological problems are related to optimization. Among them, black-box optimization in high-dimensional space is particularly challenging. Recent neural network-based black-box optimization studies have shown noteworthy achievements. However, their capability in high-dimensional search space is still limited. This study investigates a novel black-box optimization method based on evolution strategy and generative neural network model. We designed the algorithm so that the evolutionary strategy and the generative neural network model work cooperatively with each other. This hybrid model enables reliable training of surrogate networks; it optimizes multi-objective, high-dimensional, and stochastic black-box functions. In this experiment, our method outperforms baseline optimization methods, including an NSGA-II and Bayesian optimization.
... The overall calibration process takes approximately 12 hours on a standard desktop PC equipped with an Intel processor (11th-generation Intel i7, 4 cores/8 threads with 16 GB of RAM). We can accelerate the code via parallel computing by evaluating in parallel the set of parameters for each generation of the combined genetic algorithm and/or swarm algorithm [32]. The speed-up of such code is known to be close to optimal, i.e., the acceleration factor is close to the number of processors used. ...
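The near-optimal speed-up mentioned in this excerpt follows from the fact that the candidate parameter sets of one generation can be evaluated independently of each other. A generic Python sketch of such per-generation parallel evaluation (illustrative only, not the authors' code; the fitness function here is a placeholder for the expensive simulation):

    from multiprocessing import Pool

    def fitness(params):
        # Placeholder objective; in the cited setting this would be one full
        # digital-twin simulation run for a single candidate parameter set.
        return sum(p * p for p in params)

    def evaluate_generation(population, workers=8):
        # Evaluate all candidates of one generation in parallel. Because the
        # evaluations are independent and dominate the runtime, the acceleration
        # factor stays close to the number of worker processes.
        with Pool(processes=workers) as pool:
            return pool.map(fitness, population)

    if __name__ == "__main__":
        population = [[0.5 * i, 1.0 - 0.1 * i, 2.0] for i in range(16)]
        print(evaluate_generation(population))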
Preprint
Full-text available
The workflow in a large medical procedural suite is characterized by high variability of input and suboptimal throughput. Today, Electronic Health Record systems do not address the problem of workflow efficiency: there is still high frustration among medical staff, who lack real-time awareness and need to act in response to events based on their personal experiences rather than anticipating them. In a medical procedural suite, there are many nonlinear coupling mechanisms between individual tasks that could go wrong, and it is therefore difficult for any individual to control the workflow in real time or optimize it in the long run. We propose a system approach by creating a digital twin of the procedural suite that assimilates Electronic Health Record data and supports the process of making rational, data-driven decisions to optimize the workflow on a continuous basis. In this paper, we focus on long-term improvements of gastroenterology outpatient centers as a prototype example and use six months of data acquisition in two different clinical sites to validate the artificial intelligence algorithms.
... It was first proposed by Rastrigin [49] as a 2-dimensional function and has been generalized by Rudolph [50]. The generalized version was popularized by Hoffmeister and Bäck [51] and Mühlenbein et al. [52]. Finding the minimum of this function is a fairly difficult problem due to its large search space and its large number of local minima. ...
Article
Full-text available
Parameter identification is an important research topic with a variety of applications in industrial and environmental problems. Usually, a functional has to be minimized in conjunction with parameter identification; thus, there is a certain similarity between parameter identification and optimization. A number of rigorous and efficient algorithms for optimization problems were developed in recent decades for the case of a convex functional. In the case of a non-convex functional, metaheuristic algorithms dominate. This paper discusses an optimization method called the modified bee colony algorithm (MBC), which is a modification of the standard bees algorithm (SBA). The SBA is inspired by a particular intelligent behavior of honeybee swarms. The algorithm is adapted for the parameter identification of reaction-dominated pore-scale transport when a non-convex functional has to be minimized. The algorithm is first checked by solving a few benchmark problems, namely finding the minima of the Shekel, Rosenbrock, Himmelblau and Rastrigin functions. A statistical analysis was carried out to compare the performance of MBC with the SBA and the artificial bee colony (ABC) algorithm. Next, MBC is applied to identify the three parameters in the Langmuir isotherm, which is used to describe the considered reaction. Here, 2D periodic porous media were considered. The simulation results show that the MBC algorithm can be successfully used for identifying admissible sets for the reaction parameters in reaction-dominated transport characterized by low Péclet and high Damköhler numbers. Finite element approximation in space and implicit time discretization are exploited to solve the direct problem.
... As the 3D-VDG model computation step costs ~5 h per iteration, one could simply run this step externally on large computational clusters using iMOD-SEAWAT, which utilizes distributed memory parallelization for faster computation times (Verkaik et al., 2021). In a practical setting, however, we suggest fully parallelizing the optimization itself, using methods such as evolutionary algorithms (Brauer et al., 2002; Mühlenbein et al., 1991) and parallel Bayesian optimization (e.g. González et al., 2015; Kandasamy et al., 2017) where function evaluations can be done in parallel rather than sequentially. ...
Article
Full-text available
Freshwater aquifers in low elevation coastal zones are known to be threatened by saltwater intrusion (SWI). As these areas host a significant share of the world's population, an excellent understanding of this phenomenon is required to effectively manage the availability of freshwater. SWI is a dynamic process, therefore saline groundwater distributions can change quickly over time – particularly in stressed areas with anthropogenic drivers. To model these changes, regional 3D variable-density groundwater (3D-VDG) flow and coupled salt transport models are often used to estimate the current (and future distributions) of saline groundwater. Unfortunately, parameterising 3D-VDG models is a challenging task with many uncertainties. Generally, uncertainty is reduced through the addition of observational data – such as Airborne Electromagnetic (AEM) surveys or ground-based information – that offer information about parameters such as salinity and hydraulic head. Recent research has shown the ability of AEM surveys to provide accurate 3D groundwater salinity models across regional scales, as well as highlighting the potential for good survey repeatability. To this end we investigated the novel approach of using repeat AEM surveys (flown over the same area at different points in time) and 3D-VDG models to jointly improve the parameterisation of 3D-VDG models - while simultaneously providing a detailed 3D map of groundwater salinity distributions. Using detailed 3D synthetic models, the results of this study quantitatively highlight the usefulness of this approach, while offering practical information on implementation and further research.
... Benchmark functions (cost functions) are widely used for testing the efficiency of optimization algorithms. Some of the well-known benchmark functions are the Sphere function [56][57][58], the Ackley function [57,59,60], the Levy function [61], the Rastrigin function [62,63], the Three-hump Camel function [64], the Rosenbrock function [56,57,65], and the Griewank function [57,66]. ...
Article
Full-text available
Nowadays, the economy of countries highly depends on agricultural productivity, which has a great effect on the development of human civilization. Sometimes, plant diseases cause a major reduction in agricultural products. This paper proposes a new approach for the automatic detection and classification of plant leaf diseases based on using the ELM deep learning algorithm on a real dataset of plant leaf images. The proposed approach uses the k-means clustering algorithm for image segmentation and applies the GLCM for feature extraction. The BDA optimization algorithm is employed for feature selection, and lastly the ELM algorithm is used for plant leaf disease classification. The presented approach optimizes the input weights and hidden biases for ELM. The dataset used in this study includes 73 plant leaf images, and we tested our approach on four diseases that usually affect plants, including: Alternaria alternata, Anthracnose, Bacterial blight, and Cercospora leaf spot. The experimental results show that the proposed approach has achieved encouraging results in terms of these classification measures: accuracy, error rate, recall, F score, and AUC, which are 94%, 6%, 92%, 95%, and 96% respectively.
... The topology of the demes is predefined and constant, resembling the possible paths that an individual could choose, thus determining the reachable sub-populations. This is known as the Migration, or Island model [3][4][5][6]. A possible structure of the demes is shown in Figure 2. ...
Conference Paper
Full-text available
This paper presents the structure optimization of single-phase induction motors based on genetic algorithms in the MATLAB environment. Genetic algorithms are applied for optimization of the parameters of the motor in order to achieve the best possible performance for the lowest cost of production. A version of the Simple Genetic Algorithm is compared to a Multi-population Genetic Algorithm.
... In this implementation, a number of subpopulations perform the standard GA and, after a certain number of generations, some individuals migrate between them. This strategy can enhance the quality of the results compared to the traditional GA [27]. Further parameters of the MPGA are the number of subpopulations, the number of generations required until migration happens, and the topology of the migration. ...
Article
Full-text available
The Force Density Method (FDM) is a well-known form-finding method for discrete networks that is based on the geometrical equilibrium of forces and can be used to design efficient structural forms. The choice of the force density distribution along the structure is mostly up to the user and in most cases is set to be constant, with peripheral members having relatively larger force densities to prevent excessive shrinking. In order to direct FDM towards more efficient structures, an optimization strategy can be used to inform the form-finding process by minimizing a certain objective function, e.g. the weight of the structure. Desired structural, constructional or geometrical constraints, which the user may otherwise not have direct control over, can also be incorporated in this framework. It has been shown that considerable weight reduction is possible compared to a uniform force density in the structure while satisfying additional constraints. In this way, form-finding can be augmented and novel structural forms can be designed.
... This method searches the space of feature subsets based on the estimated accuracy resulting from the selection of a particular subset under the conditions of the classification algorithm. Researchers have focused on evolutionary search algorithms such as the genetic algorithm (GA), [16] simulated annealing (SA), [17][18][19] particle swarm optimization, [20,21] and the cultural algorithm [22] over the past decade. ...
Article
Full-text available
Background: Mass spectrometry is a method for identifying proteins and could be used for distinguishing between proteins in healthy and nonhealthy samples. This study was conducted using high-resolution mass spectrometry data of ovarian cancer. Usually, diagnostic and monitoring tests are evaluated according to sensitivity and specificity rates; thus, the aim of this study is to compare mass spectrometry of healthy and cancerous samples in order to find a set of biomarkers or indicators with reasonable sensitivity and specificity rates. Methods: Combination methods were used for choosing the optimum feature set, namely the t-test, entropy, Bhattacharyya distance, and an imperialist competitive algorithm with a K-nearest neighbors classifier. The resulting features from each method were fed to the C5 decision tree with 10-fold cross-validation to classify the data. Results: The most important variables were identified using this method and a set of rules was extracted. As with the most frequent features, repetitive patterns were not obtained; the generalized rule induction method was used to identify the repetitive patterns. Conclusion: Finally, the resulting features were introduced as biomarkers and compared with other studies. It was found that the resulting features were very similar to those of other studies. In the case of the classifier, higher sensitivity and specificity rates with a lower number of features were achieved when compared with other studies.
... (d) Rastrigin function. The Rastrigin function is another difficult function for global optimization with a genetic algorithm, due to its large search space and its many local minima [36]. The general d-dimensional formulation is the following: ...
... GA, first put forth by John Holland [43], has been used in different applications and resulted in outstanding outcomes. In GA, the correct responses of a generation are combined in order to attain the optimal solution, based on the theory of survival of the fittest in living organisms [44]. In each step, chromosomes which are the counterparts of the countries in ICA, are randomly selected from the current populations (parents) and used to produce the next generation. ...
Article
The use of cement as a curing agent has been widely adopted in soft soil engineering to increase the strength of soft soil. The cemented soil is gradually exposed to the air and in a natural environment becomes unsaturated. Unconfined compressive strength (UCS) of the unsaturated cemented soils is a key parameter for assessing their strength behaviour. UCS determination of unsaturated cemented soils by using laboratory methods is a complex, time-consuming, and expensive procedure due to the difficulty in suction control. This study aims to model the UCS of unsaturated cemented Wenzhou clay, i.e., capture the nonlinear relations between UCS and its influential variables, including cement content (%), dry density (g/cm3) and suction (MPa) for the first time by using machine learning approaches. Toward this aim, three advanced computational frameworks are developed based on hybrid evolutionary approaches in which evolutionary optimisation algorithms, including genetic algorithm (GA), particle swarm optimisation (PSO) and imperialist competitive algorithm (ICA) are hybridised with artificial neural network (ANN). Results show that developed models have a great ability to mimic the nonlinear relationships between UCS and its influential variables and PSO-ANN presents the best performance among three models on the training and testing datasets. To facilitate engineering application, an engineering database for Wenzhou soft clay at different cement ratios (up to 11%), suctions (up to 300MPa) and dry densities (1∼1.5 g/cm3) is built by using the developed PSO-ANN model.
... Appendix A.1. Creating an Instance of Box. Figure A1 shows how to create an instance of the Box problem to find the minimum point of a 2D Rastrigin function, a popular continuous optimization test function in the scientific community [60,61]. As illustrated in the example and in the code comments, the main steps are (1) define the search space S, by specifying the number of dimensions in the Rastrigin problem and the (regular) bounds; and (2) create an instance of the Box problem by passing to the constructor the aforementioned S, the fitness function, the optimization's purpose, and whether to bound the outlying solutions' dimensions. import torch; from gpol ... ...
Article
Full-text available
Several interesting libraries for optimization have been proposed. Some focus on individual optimization algorithms, or limited sets of them, and others focus on limited sets of problems. Frequently, the implementation of one of them does not precisely follow the formal definition, and they are difficult to personalize and compare. This makes it difficult to perform comparative studies and propose novel approaches. In this paper, we propose to solve these issues with the General Purpose Optimization Library (GPOL): a flexible and efficient multipurpose optimization library that covers a wide range of stochastic iterative search algorithms, through which flexible and modular implementation can allow for solving many different problem types from the fields of continuous and combinatorial optimization and supervised machine learning problem solving. Moreover, the library supports full-batch and mini-batch learning and allows carrying out computations on a CPU or GPU. The package is distributed under an MIT license. Source code, installation instructions, demos and tutorials are publicly available in our code hosting platform (the reference is provided in the Introduction).
Article
Full-text available
Wireless sensor networks (WSNs) play a critical role in environmental sensing and data transmission. However, their performance is often hindered by challenges such as localization accuracy and storage capacity. Existing variants of the DV-Hop algorithm suffer from issues such as high memory usage, low localization accuracy, and limited applicability in realistic three-dimensional (3D) environments. To overcome these challenges and solve the localization problem for DV-Hop-based WSN nodes in 3D space, this research proposes a novel hybrid optimizer called PCWCO (Parallel Compact Willow Catkin Optimization Algorithm). The PCWCO algorithm incorporates a compact technique and a new parallel strategy into the Willow Catkin Optimization (WCO) framework, aiming to reduce memory usage while enhancing solution quality. Rigorous numerical validations are conducted using benchmark functions from CEC2017 to assess the performance of the proposed PCWCO optimizer. The results demonstrate that PCWCO exhibits competitive performance compared to classical intelligent optimization algorithms. Moreover, we synergistically integrate the PCWCO algorithm with DV-Hop to form a hybrid approach called PCWCO-3D-DV-Hop to improve the localization efficiency of WSN nodes in 3D space.
Article
With the widespread application of Evolutionary Algorithms (EAs), their performance needs to be evaluated using more than the usual performance metrics. In the EA literature, various metrics assess the convergence ability of these algorithms. However, many of them require prior knowledge of the Pareto-optimal front. Recently, two Karush–Kuhn–Tucker Proximity Metrics (KKTPMs) have been introduced to measure convergence without needing prior knowledge. One relies on the Augmented Achievement Scalarization Function (AASF) method (AASF-KKTPM), and the other on Benson’s method (B-KKTPM). However, both require specific parameters and reference points, making them computationally expensive. In this paper, we introduce a novel version of KKTPM applicable to single-, multi-, and many-objective optimization problems, utilizing the Penalty-based Boundary Intersection (PBI) method (PBI-KKTPM). Additionally, we introduce an approximate approach to reduce the computational burden of solving PBI-KKTPM optimization problems. Through extensive computational experiments across 23 case studies, our proposed metric demonstrates a significant reduction in computational cost, ranging from 20.68% to 60.03% compared to the computational overhead associated with the AASF-KKTPM metric, and from 16.48% to 61.15% compared to the computational overhead associated with the B-KKTPM metric. Noteworthy features of the proposed metric include its independence from knowledge of the true Pareto-optimal front and its applicability as a termination criterion for EAs. Another feature of the proposed metric is its ability to deal with black box problems very efficiently.
Chapter
The objective of the current study is to choose the best model with the highest accuracy rate using three robust hybrid artificial intelligence-based models: ANN-GA, ANN-PSO and ANN-RSA. To do so, a sample of COVID-19 confirmed cases in India between August 1, 2021, and July 26, 2022, is first compiled. A random allocation of 70% (30%) of the total observations has been chosen as training (testing) data. After that, the LM method is used to train an ANN model. Accordingly, the appropriate number of hidden neurons is determined to be 9 using the R^2 and RMSE criteria. To achieve the highest accuracy rate, ANN-GA, ANN-PSO, and ANN-RSA models are developed using the presented ANN model. The optimized model's R-values during the training and test phases, according to ANN-GA and ANN-PSO, are 0.99 and 0.95, respectively. The R-values for ANN-RSA varied from 0.99 to 0.96. Hence, the ANN-RSA demonstrated superior performance in forecasting COVID-19 cases in India.
Thesis
Full-text available
Low elevation coastal zones (LECZs), defined here as areas ≤10 m above mean sea-level, have attracted people for millennia. With their abundant resources and access to trading ports, today these areas host nearly 800 million people – a figure that is predicted to rise to 1.4 billion by 2060. Naturally, it follows that with population growth comes an increased demand for freshwater. Globally, about 50% of the world’s population rely on groundwater to satisfy basic requirements – and LECZs are no exception. However, owing to anthropogenic activity, these aquifers are highly stressed and vulnerable to saltwater intrusion – where freshwater can be displaced by saline groundwater. As a result, aquifers within LECZs require effective management, which in turn requires an excellent regional understanding of fresh-saline groundwater distributions. Airborne electromagnetic (AEM) surveys offer a rapid and cost-effective method to map this, and thus are increasingly used for these purposes. Despite increasing popularity, AEM is relatively poorly understood in terms of regional (provincial or country scale) groundwater salinity mapping. The primary objective of this thesis is therefore twofold: 1) to better understand the uncertainties involved and 2) to use this understanding to develop novel mapping methods. Consequently, the following research questions were formulated: 1. What is the effect of using different inversion methods and parameters on mapping results? 2. How are results affected by different quantities of available data? 3. Based on the results of chapters 2 and 3, what further methodological improvements can we make? 4. Are groundwater salinity movements sensitive to repeated AEM surveys? To answer these research questions, I used data from the Province of Zeeland, The Netherlands. Zeeland experienced sea-level transgressions in the early Holocene, followed by the construction of man-made coastal defences – which allowed the recent freshening of shallow aquifers. As a result, much of Zeeland comprises shallow rainwater lenses (often as little as 1–2 m thick), and therefore offers a fascinating study area for applied groundwater research. Furthermore, a recently undertaken, high-quality AEM survey and plentiful ground-based data provide an ideal testing ground. Throughout the thesis, these data were used either directly, or as a basis for the construction of highly detailed synthetic models.
Article
Full-text available
Reflective Fourier ptychographic microscopy has much potential for industrial surface inspection due to its ability to overcome the physical limits of the numerical aperture of optical microscopy. However, the time cost of misalignment calibration and Fourier ptychography (FP) recovery has been a big issue for industrial applications, which require fast output. Here, we introduce a misalignment estimation method, named pWOA, which is accelerated through the whale optimization algorithm running in parallel on Central Processing Units (CPUs) to reduce computing time. The proposed method shows more accurate and faster calibration compared to other population-based algorithms, including the parallel genetic algorithm and the parallel particle swarm optimization, and is much faster than the exhaustive search, both in simulations and in real experiments. In addition, this cost-effective technique can address global non-convex optimization problems with heavy loss functions, including reflective FP.
Article
In order to provide a theoretical reference for the determination of structural parameter tolerances at the initial stage of designing the precision of a manipulator's body structure, and so to improve the geometric positioning accuracy of the manipulator end-effector, a parameter tolerance optimal allocation method based on the optimal precision model is proposed. This method does not need a tolerance-cost model or the related statistical information, reaching a balance between accuracy and manufacturing cost. This study takes the ROKAE XB7 6-DOF serial manipulator as the research object. Based on the synthesis error of the extremum model and the optimal precision model, a genetic algorithm is used to optimize the tolerance allocation of the DH parameters. Through simulation analysis and experimental verification of the positioning error of the manipulator, the results show that the designed precision of the manipulator can meet the design requirements. The tolerance optimal allocation method proposed in this study can be applied to the geometric precision design of 6-DOF serial manipulators.
Article
Gradient-related first-order methods have become the workhorse of large-scale numerical optimization problems. Many of these problems involve nonconvex objective functions with multiple saddle points, which necessitates an understanding of the behavior of discrete trajectories of first-order methods within the geometrical landscape of these functions. This paper concerns convergence of first-order discrete methods to a local minimum of nonconvex optimization problems that comprise strict-saddle points within the geometrical landscape. To this end, it focuses on analysis of discrete gradient trajectories around saddle neighborhoods, derives sufficient conditions under which these trajectories can escape strict-saddle neighborhoods in linear time, explores the contractive and expansive dynamics of these trajectories in neighborhoods of strict-saddle points that are characterized by gradients of moderate magnitude, characterizes the non-curving nature of these trajectories, and highlights the inability of these trajectories to re-enter the neighborhoods around strict-saddle points after exiting them. Based on these insights and analyses, the paper then proposes a simple variant of the vanilla gradient descent algorithm, termed Curvature Conditioned Regularized Gradient Descent (CCRGD) algorithm, which utilizes a check for an initial boundary condition to ensure its trajectories can escape strict-saddle neighborhoods in linear time. Convergence analysis of the CCRGD algorithm, which includes its rate of convergence to a local minimum, is also presented in the paper. Numerical experiments are then provided on a test function as well as a low-rank matrix factorization problem to evaluate the efficacy of the proposed algorithm.
Chapter
Industrial clustering can be considered as the result of two types of forces: the centripetal force, which encourages the concentration of manufacturing activities, and the centrifugal force, which acts in the opposite direction. To explain the agglomeration process, we develop an agent-based version of Krugman's model (1991), which allows us to consider less restrictive and more realistic hypotheses when building up the model. In contrast to Krugman's model, which considers workforce displacement between regions and treats the firm's size as an unlimited endogenous variable, the proposed model explicates the workers' displacement at the level of firms in different regions and further introduces the effect of the "carrying capacity" of firms, a concept very common in ecological models. We implement the agent-based model (ABM) with the goal of exploring the spatial distribution of firms across regions to see whether the workforce will concentrate. For this purpose, several scenarios were tested for different values of the key parameters of our ABM. The latter are: (1) the transport cost (τ), (2) the share of income spent on industrial goods (μ), (3) the elasticity of substitution (σ), (4) the initial nominal wage differential between regions (∆W) and (5) the carrying capacity of firms (Cap). Simulations have been carried out under two initial conditions: an equal repartition of firms between regions and an unequal one. Simulation results suggest that reducing transport costs can have drastic effects on the disparity of industries. In the case of high transport costs, decreasing the wage differential between regions reduces spatial inequality. Further, the limited capacity of a firm to hire labor can slow down the migration process, which leads to a reduction in regional inequality.
Chapter
Reducing active distribution network power loss has been a significant concern for the safe and efficient functioning of distribution networks. This study proposes a solution that optimizes reconfiguration to overcome substantial network loss in local areas, based on an improved bats algorithm (IBA) for reactive power compensation optimization. The bat algorithm (BA) is adjusted with an adaptive inertia weighting factor and a stochastic operator to improve convergence speed and precision. An exponential fitness function is constructed by considering the topological structure of the distribution network. According to the experimental results of a 33-node case study, both the global optimization accuracy and the voltage quality of the regional network are improved: e.g., the active network loss of the distribution network dropped from 6.56% to 5.36%, and the voltage qualification rate increased from 80.61% to 92.86%. The comparison results also demonstrate that the proposed scheme provides a better-optimized reconfiguration of reactive power than the others. Keywords: Distribution network; Reactive power optimization; Reduction of net loss; Improved bats algorithm
Article
For low failure probability prediction, subset simulation can reduce the number of simulations significantly compared to the traditional MCS method for a target prediction error limit. To further reduce the computational effort in cases where evaluating the performance function is tedious and time-consuming, the performance function is approximated by a sequentially updated (instead of a global) Kriging model. For this purpose, an active learning technique with a new learning and stopping criterion is employed to efficiently select points to train the computationally cheaper Kriging model at each simulation level, which is then used to estimate the intermediate threshold and generate a new simulation sample. The updated Kriging model at the final subset simulation level is used to compute the conditional failure probability. The failure probability is estimated based on an initial simulation sample size N, and an updated N is computed and employed to obtain the final failure probability within a desired bound on the variability. The efficiency (in terms of the number of expensive evaluations of the actual performance function) and prediction error (represented by the mean square error (MSE)) of the proposed method are benchmarked on several examples. The method is shown to be more efficient (using fewer expensive evaluations) and to have smaller MSE for problems with low failure probabilities compared with selected existing methods.
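The sketch below illustrates plain subset simulation (without the Kriging surrogate or the active learning step described above): intermediate thresholds are set as quantiles of the performance values, and each conditional level is repopulated by MCMC from the surviving seeds. The limit-state function and all settings are assumptions.

```python
import numpy as np

# Plain subset simulation (no Kriging surrogate), estimating P(g(X) <= 0)
# for standard normal inputs. The limit state g and settings are illustrative.
rng = np.random.default_rng(0)

def g(x):                      # toy limit-state function, failure if g <= 0
    return 3.5 - (x[..., 0] + x[..., 1]) / np.sqrt(2.0)

def subset_simulation(n=1000, p0=0.1, dim=2, max_levels=10):
    x = rng.standard_normal((n, dim))
    prob = 1.0
    for _ in range(max_levels):
        y = g(x)
        thresh = np.quantile(y, p0)
        if thresh <= 0.0:
            return prob * np.mean(y <= 0.0)
        prob *= p0
        seeds = x[y <= thresh]
        # Conditional sampling: regrow the population from the seeds with a
        # correlated-normal proposal, accepting moves that stay below thresh.
        chains = []
        per_chain = int(np.ceil(n / len(seeds)))
        for s in seeds:
            cur = s.copy()
            for _ in range(per_chain):
                cand = 0.8 * cur + np.sqrt(1 - 0.8**2) * rng.standard_normal(dim)
                if g(cand) <= thresh:
                    cur = cand
                chains.append(cur.copy())
        x = np.array(chains[:n])
    return prob * np.mean(g(x) <= 0.0)

print(subset_simulation())     # exact answer: Phi(-3.5), roughly 2.3e-4
```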
Article
Accurate chemical kinetics are essential for reactor design and operation. However, despite recent advances in "big data" approaches, the availability of kinetic data is often limited in industrial practice. Herein, we present a comparative proof-of-concept study of kinetic parameter estimation from limited data. Cross-validation (CV) is applied to nonlinear least-squares (LS) fitting and evaluated against Markov chain Monte Carlo (MCMC) and genetic algorithm (GA) routines using synthetic data generated from a simple model reaction. As expected, conventional LS is fastest but least accurate in predicting the true kinetics. MCMC and GA are effective for larger data sets but tend to overfit to noise for limited data. Cross-validated least squares (LS-CV) strongly outperforms these methods at much lower computational cost, especially under significant noise. Our findings suggest that combining cross-validation with conventional regression provides an efficient approach to kinetic parameter estimation with high accuracy, robustness against noise, and only a minimal increase in complexity.
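A minimal sketch of the LS-CV idea, assuming a simple first-order rate law and synthetic data: conventional nonlinear least squares is fitted on each training fold and scored on the held-out fold. The model, noise level, and fold count are illustrative assumptions, not the paper's setup.

```python
import numpy as np
from scipy.optimize import curve_fit

# Cross-validated nonlinear least squares (LS-CV) for a first-order rate law
# C(t) = C0 * exp(-k t). Synthetic data and noise level are illustrative.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 10.0, 20)
y = 2.0 * np.exp(-0.7 * t) + 0.05 * rng.standard_normal(t.size)   # noisy data

def first_order(t, c0, k):
    return c0 * np.exp(-k * t)

def cv_score(model, p0, k_folds=5):
    """Average held-out squared error of an NLLS fit over k folds."""
    idx = np.arange(t.size)
    rng.shuffle(idx)
    folds = np.array_split(idx, k_folds)
    errs = []
    for f in folds:
        train = np.setdiff1d(idx, f)
        popt, _ = curve_fit(model, t[train], y[train], p0=p0, maxfev=5000)
        errs.append(np.mean((y[f] - model(t[f], *popt)) ** 2))
    return np.mean(errs)

print("CV error, first-order model:", cv_score(first_order, p0=[1.0, 1.0]))
```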
Chapter
This research proposes a solution to the economic load dispatch (ELD) problem using the enhanced flower pollination algorithm (EFPA). The EFPA offers a simple structure, fast search, and easy implementation, achieved by introducing a random jump perturbation in the global pollination phase and by updating the switching probability according to the best global value at each iteration. Mathematically, the ELD is formulated as a typical multi-constraint nonlinear optimization problem. Two case studies are used to assess the proposed method's optimization efficiency from multiple economic perspectives, along with its feasibility and efficacy. The validation results show that the proposed scheme converges faster and is more robust than the comparison methods.
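The sketch below shows a basic flower pollination loop applied to a toy three-unit economic load dispatch cost (quadratic fuel costs plus a demand-balance penalty). It is not the chapter's enhanced FPA; the cost coefficients, the Lévy step, and the fixed switching probability are assumptions.

```python
import math
import numpy as np

# Basic flower pollination algorithm on a toy 3-unit economic load dispatch
# cost. Coefficients, Levy exponent, and penalty weight are illustrative.
rng = np.random.default_rng(0)
A = np.array([0.008, 0.009, 0.007])     # quadratic fuel-cost coefficients
B = np.array([7.0, 6.3, 6.8])           # linear coefficients
DEMAND = 450.0                          # MW to be dispatched

def cost(p):
    fuel = np.sum(A * p**2 + B * p)
    return fuel + 1e3 * abs(np.sum(p) - DEMAND)   # penalize power imbalance

def levy(dim, beta=1.5):
    sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2) /
             (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    return rng.standard_normal(dim) * sigma / np.abs(rng.standard_normal(dim)) ** (1 / beta)

def fpa(n=25, iters=500, switch_p=0.8, lb=50.0, ub=200.0, dim=3):
    x = rng.uniform(lb, ub, (n, dim))
    fit = np.array([cost(xi) for xi in x])
    best = x[np.argmin(fit)].copy()
    for _ in range(iters):
        for i in range(n):
            if rng.random() < switch_p:                     # global pollination
                cand = x[i] + 0.1 * levy(dim) * (best - x[i])
            else:                                           # local pollination
                j, k = rng.choice(n, 2, replace=False)
                cand = x[i] + rng.random() * (x[j] - x[k])
            cand = np.clip(cand, lb, ub)
            if cost(cand) < fit[i]:
                x[i], fit[i] = cand, cost(cand)
                if fit[i] < cost(best):
                    best = x[i].copy()
    return best, cost(best)

print(fpa())
```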
Article
Parameter optimization, or "data fitting," is a computational process that identifies a set of parameter values that best describe an experimental data set. Parameter optimization is commonly carried out using a computer program utilizing a non-linear least squares (NLLS) algorithm. These algorithms work by continuously refining a user-supplied initial guess, resulting in a systematic increase in the goodness of fit. A well-understood problem with this class of algorithms is that, for models with correlated parameters, the optimized output parameters are dependent on the initial guess. This dependency can potentially introduce user bias into the resultant analysis. While many optimization programs exist, few address this dilemma. Here we present a data analysis tool, MENOTR, that is capable of overcoming the initial guess dependence in parameter optimization. Several case studies with published experimental data are presented to demonstrate the capabilities of this tool. The results presented here demonstrate how to effectively overcome the initial guess dependence of NLLS, leading to greater confidence that the resultant optimized parameters are the best possible set of parameters to describe an experimental data set. While the optimization strategies implemented within MENOTR are not entirely novel, the application of these strategies to optimize parameters in kinetic and thermodynamic biochemical models is uncommon. MENOTR was designed to require minimal modification to accommodate a new model, making it immediately accessible to researchers with a limited programming background. We anticipate that this toolbox can be used in a wide variety of data analysis applications. Prototype versions of this toolbox have already been used in a number of published investigations, as well as in ongoing work with chemical-quenched flow, stopped-flow, and molecular tweezers data sets. Statement of significance: Non-linear least squares (NLLS) is a common form of parameter optimization in biochemical kinetic and thermodynamic investigations. These algorithms are used to fit experimental data sets and report corresponding parameter values. The algorithms are fast and able to provide good quality solutions for models involving few parameters. However, initial guess dependence is a well-known drawback of this optimization strategy that can introduce user bias. An alternative method of parameter optimization is the genetic algorithm (GA). Genetic algorithms do not have an initial guess dependence but are slow to arrive at the best set of fit parameters. Here, we present MENOTR, a parameter optimization toolbox utilizing a hybrid GA/NLLS algorithm. The toolbox maximizes the strength of each strategy while minimizing the inherent drawbacks.
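The hybrid idea can be illustrated with a short sketch: a crude genetic search supplies a starting point, which a local least-squares routine then refines, removing the dependence on a user-supplied initial guess. This is a generic illustration, not MENOTR; the two-exponential model and all GA settings are assumptions.

```python
import numpy as np
from scipy.optimize import least_squares

# Hybrid GA/NLLS sketch: a crude genetic search seeds a local least-squares
# refinement. The double-exponential model and GA settings are illustrative.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 5.0, 50)
true = np.array([1.0, 2.5, 0.6, 0.4])
def model(p, t):  # two-exponential kinetic model with correlated parameters
    return p[0] * np.exp(-p[1] * t) + p[2] * np.exp(-p[3] * t)
y = model(true, t) + 0.01 * rng.standard_normal(t.size)

def sse(p):
    return np.sum((y - model(p, t)) ** 2)

def ga_seed(pop_size=60, gens=80, lb=0.01, ub=5.0, dim=4):
    pop = rng.uniform(lb, ub, (pop_size, dim))
    for _ in range(gens):
        fit = np.array([sse(p) for p in pop])
        order = np.argsort(fit)
        parents = pop[order[: pop_size // 2]]              # truncation selection
        kids = []
        for _ in range(pop_size - len(parents)):
            a, b = parents[rng.choice(len(parents), 2)]
            child = np.where(rng.random(dim) < 0.5, a, b)  # uniform crossover
            child += 0.1 * rng.standard_normal(dim)        # Gaussian mutation
            kids.append(np.clip(child, lb, ub))
        pop = np.vstack([parents, kids])
    return pop[np.argmin([sse(p) for p in pop])]

seed = ga_seed()
refined = least_squares(lambda p: y - model(p, t), seed)   # NLLS refinement
print("GA seed SSE:", sse(seed), "-> refined SSE:", sse(refined.x))
```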
Article
Interatomic potentials (i.e., force fields) play a vital role in atomistic simulation of materials. Empirical potentials like the embedded atom method (EAM) and its variant, the angular-dependent potential (ADP), have proven successful for many metals. In the past few years, machine learning has become a compelling approach for modeling interatomic interactions. Powered by big data and efficient optimizers, machine learning interatomic potentials can generally approach the accuracy of first-principles calculations based on quantum mechanics. In this work, we developed a route to express EAM and ADP within a machine learning framework in a highly vectorizable form and further incorporated several physical constraints into the training. As shown in this work, the performance of empirical potentials can be significantly boosted with little training data. For energy and force predictions, machine-tuned EAM and ADP can be almost as accurate as the computationally expensive spectral neighbor analysis potential (SNAP) on the fcc Ni, bcc Mo, and Mo-Ni alloy systems. Machine-learned EAM and ADP can also reproduce key materials properties, such as elastic constants, melting temperatures, and surface energies, close to first-principles accuracy. Our results suggest a new and systematic route for developing machine learning interatomic potentials. All the new algorithms have been implemented in our program TensorAlloy. Program summary Program Title: TensorAlloy CPC Library link to program files: https://doi.org/10.17632/w8htd7vmwh.2 Code Ocean capsule: https://codeocean.com/capsule/1671487 Licensing provisions: LGPL Programming language: Python 3.7 Journal reference of previous version: Comput. Phys. Commun. 250 (2020) 107057, https://doi.org/10.1016/j.cpc.2019.107057 Does the new version supersede the previous version?: Yes Reasons for the new version: This new version is a significant extension of the previous version. Now machine learning approaches and physical constraints can be used together to tune empirical potentials (for example, the embedded atom method). Machine-learning-optimized empirical potentials can be almost as accurate as machine learning interatomic potentials but run much faster. Nature of problem: Optimizing empirical potentials with machine learning approaches and physical constraints. Solution method: The TensorAlloy program is built upon TensorFlow and the virtual-atom approach. We developed a route to express the embedded atom method and the angular-dependent potential within a machine learning framework in a highly vectorizable form and further enhanced the potentials with physical constraints. Machine learning can significantly boost their performance with little training data. Additional comments including restrictions and unusual features: This program needs TensorFlow 1.14.*. Neither newer nor older TensorFlow versions are supported.
Chapter
According to the World Health Organization (WHO), coronaviruses are a large virus family that causes diseases in humans and animals. The disease caused by the newly discovered coronavirus is known as Covid-19 (Cov-19). In December 2019, this virus broke out in Wuhan, China, causing massive havoc worldwide. Computational Intelligence (CI) comprises the theory, design, development, and application of computational methods. Conventionally, the three key components of CI are Artificial Neural Networks (ANN), Fuzzy Systems (FS), and Evolutionary Computation (EC). Lately, techniques such as chaotic systems and support vector machines (SVM) have also been included among CI techniques. Machine Learning (ML) enables systems to learn automatically without being explicitly programmed. Deep Learning (DL) is a family of ML techniques based on ANN. CI, ML, and DL techniques have shown great potential for predicting Cov-19. The key objective of this chapter is therefore to present an extensive review of how CI, ML, and DL techniques can be used to predict Cov-19 effectively. The chapter reviews the different CI, ML, and DL techniques, such as ANN, FS, and EC, that have been applied to Cov-19 prediction. The application and suitability of CI, ML, and DL techniques for screening and treating patients, tracing contacts, and forecasting Cov-19 are discussed in detail, along with a discussion of why certain CI, ML, and DL techniques are useful for Cov-19 prediction.
Chapter
Coronavirus is a single-stranded RNA virus with a size of about 120 nm. Coronavirus Disease 2019 (COVID-19) is an infectious disease caused by the recently discovered Severe Acute Respiratory Syndrome Coronavirus 2 (SARS-CoV-2) and is a major global pandemic that has already taken more than 600 thousand lives. The disease is spreading rapidly across the globe and already has more than 14 million confirmed cases worldwide. It can remain latent, with low or no symptoms at all, for the initial days after transmission. The exponential increase in new cases each day calls for swift, responsive measures and rapid control at the individual, societal, and governmental levels. Lack of awareness about COVID-19 and negligence in implementing protective measures pose a greater threat to human lives in this pandemic. There is a shortage of hospital wards as well as an absence of proper medicine for recovery. Necessary consumer products like masks and sanitizers are in short supply and unable to meet demand. This chapter aims to present a clear and concise idea of the nature of COVID-19, its classification, and its characteristics. Classification refers to the hierarchical naming and the reasoning behind the accepted protocol. The characteristics refer to the nature of the virus, sources of spreading, and its strengths and weaknesses, to name a few. Causes and symptoms related to this disease are explained in detail. Modes of transmission and infection are discussed. The latest and most accurate data and findings are listed and described. Common misconceptions and misinformation are examined, analyzed, and debunked. The after-effects and the steps to take are reviewed. Preventive as well as curative measures are discussed. Current and future challenges are also enumerated and summarized.
Article
Side-channel analysis achieves key recovery by analyzing physical signals generated during the operation of cryptographic devices. Power consumption is one such signal and can be regarded as a form of multimedia data. In recent years, many artificial intelligence techniques have been combined with classical side-channel analysis methods to improve their efficiency and accuracy. A simple genetic algorithm was employed in Correlation Power Analysis (CPA) when applied to cryptographic algorithms implemented in parallel. However, premature convergence caused failure in recovering the whole key, especially when many large S-boxes were employed in the target primitive, as in the case of AES. In this article, we investigate the reason for premature convergence and propose a Multiple Sieve Method (MS-CPA), which overcomes this problem and reduces the number of traces required in correlation power analysis. Our method can be combined with key enumeration algorithms to further improve efficiency. Simulation results show that our method reduces the required number of traces by and , compared to classic CPA and Simple-Genetic-Algorithm-based CPA (SGA-CPA), respectively, when the success rate is fixed to . Real experiments performed on SAKURA-G confirm that the number of traces required to recover the correct key with our method is almost equal to the minimum number that makes the correlation coefficients of correct keys stand out from the wrong ones, and is much smaller than the numbers of traces required by CPA and SGA-CPA. When combined with key enumeration algorithms, our method performs even better. For 200 traces (noise standard deviation ), the attack success rate of our method is , which is much higher than that of classic CPA with key enumeration ( success rate). Moreover, we adapt our method to the DPA contest v1 dataset and achieve a better result (40.04 traces) than the winning proposal (42.42 traces).
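For context, the sketch below shows a toy correlation power analysis on simulated Hamming-weight leakage, the classical single-subkey attack that methods such as MS-CPA build on. The 4-bit S-box, leakage model, and noise level are illustrative assumptions, not the article's setup.

```python
import numpy as np

# Toy correlation power analysis (CPA) on simulated Hamming-weight leakage.
# The 4-bit S-box and leakage model are illustrative assumptions.
rng = np.random.default_rng(0)
SBOX = np.array([0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD,
                 0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2])   # toy 4-bit S-box

def hw(v):
    return np.array([bin(int(x)).count("1") for x in np.ravel(v)]).reshape(np.shape(v))

true_key = 0x9
plaintexts = rng.integers(0, 16, size=300)
traces = hw(SBOX[plaintexts ^ true_key]) + 0.8 * rng.standard_normal(300)

# Rank every key guess by the correlation between its predicted leakage
# (Hamming weight of the S-box output) and the measured traces.
scores = []
for guess in range(16):
    pred = hw(SBOX[plaintexts ^ guess])
    scores.append(abs(np.corrcoef(pred, traces)[0, 1]))
print("recovered key:", int(np.argmax(scores)), "(true:", true_key, ")")
```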
Article
Full-text available
In Part II of our paper, two stochastic methods for global optimization are described that, with probability 1, find all relevant local minima of the objective function with the smallest possible number of local searches. The computational performance of these methods is examined both analytically and empirically.
Article
Full-text available
This paper reports work done over the past three years using rank-based allocation of reproductive trials. New evidence and arguments are presented which suggest that allocating reproductive trials according to rank is superior to fitness proportionate reproduction. Ranking can not only be used to slow search speed, but also to increase search speed when appropriate. Furthermore, the use of ranking provides a degree of control over selective pressure that is not possible with fitness proportionate reproduction. The use of rank-based allocation of reproductive trials is discussed in the context of 1) Holland's schema theorem, 2) DeJong's standard test suite, and 3) a set of neural net optimization problems that are larger than the problems in the standard test suite. The GENITOR algorithm is also discussed; this algorithm is specifically designed to allocate reproductive trials according to rank.
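A minimal sketch of linear ranking selection, assuming a single selective-pressure parameter and toy fitness values: reproductive trials are allocated by rank, so a super-fit outlier cannot dominate the mating pool the way it does under fitness-proportionate selection.

```python
import numpy as np

# Linear ranking selection: reproductive trials are allocated by rank, with a
# single pressure parameter `sp` (1.0 = no pressure, 2.0 = maximum linear
# pressure). Toy fitness values and the choice of sp are illustrative.
rng = np.random.default_rng(0)

def ranking_probs(fitness, sp=1.8):
    n = len(fitness)
    order = np.argsort(fitness)                # ascending; best gets highest rank
    ranks = np.empty(n)
    ranks[order] = np.arange(n)                # 0 = worst, n-1 = best
    return (2 - sp) / n + 2 * ranks * (sp - 1) / (n * (n - 1))

fitness = np.array([1.0, 1.1, 1.2, 5.0, 100.0])     # one super-fit outlier
p_rank = ranking_probs(fitness)
p_prop = fitness / fitness.sum()                    # fitness-proportionate
print("ranking:      ", np.round(p_rank, 3))        # selective pressure is bounded
print("proportionate:", np.round(p_prop, 3))        # outlier dominates

parents = rng.choice(len(fitness), size=10, p=p_rank)   # sample a mating pool
print("sampled parents:", parents)
```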
Article
A new algorithm for combinatorial search optimization is developed. The algorithm is based on orthogonal arrays as planning schemes and search graph techniques as representation schemes. Based on the algorithm, a discrete formulation is given to model two search domains. As an application, the algorithm is used to address the problem of least-cost tolerance allocation with optimum process selection. Studies are performed to compare different orthogonal arrays, column assignments, and numbers of design levels with respect to the optimum. The proposed algorithm can handle continuous, discrete, linear, and nonlinear functions and is validated on test cases against other local and global search methods.
Article
Two general convergence proofs for random search algorithms are given. The authors review the literature and show how the results extend those available for specific variants of the conceptual algorithm studied. The convergence results are then examined to derive convergence rates and to design implementable methods. Some computational experience is also reported.
Article
This paper introduces a new method for the global unconstrained minimization of a differentiable objective function. The method is based on search trajectories, which are defined by a differential equation and exhibit certain similarities to the trajectories of steepest descent. The trajectories depend explicitly on the value of the objective function and aim at attaining a given target level, while rejecting all larger local minima. Convergence to the global minimum can be proven for a certain class of functions and an appropriate setting of two parameters.
Article
A new multi-start algorithm for global unconstrained minimization is presented in which the search trajectories are derived from the equation of motion of a particle in a conservative force field, where the function to be minimized represents the potential energy. The trajectories are modified to increase the probability of convergence to a comparatively low local minimum, thus increasing the region of convergence of the global minimum. A Bayesian argument is adopted by which, under mild assumptions, the confidence level that the global minimum has been attained may be computed. When applied to standard and other test functions, the algorithm never failed to yield the global minimum.
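The general idea behind such dynamic-trajectory methods can be sketched as integrating the motion of a particle in the potential defined by the objective and recording the lowest point visited. The one-dimensional test function, time step, and energy-drain rule below are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

# Minimal sketch of a dynamic-trajectory search: integrate the motion of a
# particle in the potential f (force = -grad f) and record the lowest point
# visited. Test function, time step, and damping rule are illustrative.
def f(x):
    return x**2 / 10.0 + np.sin(3.0 * x)          # multimodal 1-D potential

def grad(x):
    return x / 5.0 + 3.0 * np.cos(3.0 * x)

def trajectory_search(x0=4.0, v0=0.0, dt=0.01, steps=20000, drain=0.999):
    x, v = x0, v0
    best_x, best_f = x0, f(x0)
    for _ in range(steps):
        v = drain * (v - dt * grad(x))   # symplectic-Euler step with mild damping
        x = x + dt * v
        if f(x) < best_f:
            best_x, best_f = x, f(x)
    return best_x, best_f

print(trajectory_search())
```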
Article
In this stochastic approach to global optimization, clustering techniques are applied to identify local minima of a real valued objective function that are potentially global. Three different methods of this type are described; their accuracy and efficiency are analyzed in detail.
Article
Evolution algorithms for combinatorial optimization were proposed in the 1970s. They did not have a major influence at the time. With the availability of parallel computers, these algorithms will become more important. In this paper we discuss the dynamics of three different classes of evolution algorithms: network algorithms derived from the replicator equation, Darwinian algorithms, and genetic algorithms inheriting genetic information. We present a new genetic algorithm which relies on intelligent evolution of individuals. With this algorithm, we have computed the best solution of a famous travelling salesman problem. The algorithm is inherently parallel and shows a superlinear speedup on multiprocessor systems.
Article
Problems concerning parallel computers can be understood and solved using results from the natural sciences. We show that the mapping problem (assigning processes to processors) can be reduced to the graph partitioning problem. We solve the partitioning problem by an evolution method derived from biology. The evolution method is then applied to the travelling salesman problem. The competition part of the evolution method gives an almost linear speedup compared to the sequential method. A cooperation method leads to new heuristics giving better results than the known heuristics.
Conference Paper
The parallel genetic algorithm (PGA) uses two major modifications compared to the genetic algorithm. Firstly, selection for mating is distributed: individuals live in a 2-D world, and each individual selects a mate independently within its neighborhood. Secondly, each individual may improve its fitness during its lifetime by, e.g., local hill-climbing. The PGA is totally asynchronous, running with maximal efficiency on MIMD parallel computers. The search strategy of the PGA is based on a small number of active and intelligent individuals, whereas a GA uses a large population of passive individuals. We investigate the PGA with deceptive problems and the traveling salesman problem and outline why and when the PGA is successful. Abstractly, a PGA is a parallel search with information exchange between the individuals. If we represent the optimization problem as a fitness landscape in a certain configuration space, we see that a PGA tries to jump from two local minima to a third, still better local minimum by using the crossover operator. This jump is (probabilistically) successful if the fitness landscape has a certain correlation. We show this correlation for the traveling salesman problem by a configuration space analysis. The PGA implicitly exploits this correlation.
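A toy, sequential simulation of this scheme is sketched below: individuals live on a toroidal grid, each selects a mate from its own neighborhood, and offspring may improve themselves by a short local hill-climb. The grid size, the Rastrigin test function, and all rates are illustrative assumptions, not the PGA implementation described in the paper.

```python
import numpy as np

# Toy, sequential simulation of the PGA idea: a 2-D toroidal grid, local mate
# selection, uniform crossover, and lifetime hill-climbing. All settings are
# illustrative assumptions.
rng = np.random.default_rng(0)
GRID, DIM = 8, 5

def fitness(x):                      # Rastrigin function (lower is better)
    return 10 * DIM + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))

pop = rng.uniform(-5.12, 5.12, (GRID, GRID, DIM))

def neighbors(i, j):                 # von Neumann neighborhood on a torus
    return [((i - 1) % GRID, j), ((i + 1) % GRID, j),
            (i, (j - 1) % GRID), (i, (j + 1) % GRID)]

def hill_climb(x, steps=10, sigma=0.1):
    for _ in range(steps):
        cand = x + sigma * rng.standard_normal(DIM)
        if fitness(cand) < fitness(x):
            x = cand
    return x

for gen in range(200):
    for i in range(GRID):
        for j in range(GRID):
            mates = neighbors(i, j)
            mi, mj = min(mates, key=lambda ij: fitness(pop[ij]))    # local selection
            mask = rng.random(DIM) < 0.5
            child = np.where(mask, pop[i, j], pop[mi, mj])          # uniform crossover
            child = hill_climb(child)                               # lifetime improvement
            if fitness(child) < fitness(pop[i, j]):
                pop[i, j] = child

best = min((pop[i, j] for i in range(GRID) for j in range(GRID)), key=fitness)
print("best fitness:", fitness(best))
```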
Conference Paper
Evolution Strategies (ESs) and Genetic Algorithms (GAs) are compared in a formal as well as in an experimental way. It is shown that both are identical with respect to their major working scheme, but they nevertheless exhibit significant differences with respect to the details of the selection scheme, the amount of genetic representation and, especially, the self-adaptation of strategy parameters.
Article
Thesis (Ph. D.)--University of Michigan, 1975. Includes bibliographical references (leaves 253-256). Photocopy.
Article
The task of optimizing a complex system presents at least two levels of problems for the system designer. First, a class of optimization algorithms must be chosen that is suitable for application to the system. Second, various parameters of the optimization algorithm need to be tuned for efficiency. A class of adaptive search procedures called genetic algorithms (GAs) has been used to optimize a wide variety of complex systems. Here, GAs are applied to the second-level task of identifying efficient GAs for a set of numerical optimization problems. The results are validated on an image registration problem. GAs are shown to be effective for both levels of the systems optimization problem.
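The two-level idea can be sketched as an outer, mutation-only "meta" GA searching over the parameters of an inner GA, scoring each setting by how well the inner GA solves a toy OneMax task. The parameter ranges, budgets, and test problem are illustrative assumptions, not the article's experiments.

```python
import numpy as np

# Meta-GA sketch: an outer search over (mutation rate, crossover rate) of an
# inner GA, scored on a toy OneMax problem. All settings are illustrative.
rng = np.random.default_rng(0)
BITS = 40

def inner_ga(mut_rate, cx_rate, pop_size=20, gens=30):
    """Plain generational GA on OneMax; returns the best fitness reached."""
    pop = rng.integers(0, 2, (pop_size, BITS))
    for _ in range(gens):
        fit = pop.sum(axis=1)
        idx = rng.integers(0, pop_size, (pop_size, 2))            # binary tournaments
        parents = pop[np.where(fit[idx[:, 0]] >= fit[idx[:, 1]], idx[:, 0], idx[:, 1])]
        kids = parents.copy()
        for k in range(0, pop_size - 1, 2):                       # one-point crossover
            if rng.random() < cx_rate:
                cut = rng.integers(1, BITS)
                kids[k, cut:], kids[k + 1, cut:] = parents[k + 1, cut:], parents[k, cut:].copy()
        kids ^= (rng.random((pop_size, BITS)) < mut_rate).astype(int)   # bit-flip mutation
        pop = kids
    return pop.sum(axis=1).max()

def meta_ga(meta_pop=8, meta_gens=10):
    """Mutation-only outer GA over (mutation rate, crossover rate) pairs."""
    params = np.column_stack([rng.uniform(0.001, 0.2, meta_pop),
                              rng.uniform(0.3, 1.0, meta_pop)])
    for _ in range(meta_gens):
        scores = np.array([inner_ga(m, c) for m, c in params])
        elite = params[np.argsort(-scores)[: meta_pop // 2]]
        children = elite + rng.normal(0, [0.01, 0.05], elite.shape)    # mutate elites
        params = np.clip(np.vstack([elite, children]), [0.001, 0.3], [0.2, 1.0])
    return params[np.argmax([inner_ga(m, c) for m, c in params])]

print("tuned (mutation, crossover) rates:", meta_ga())
```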
Schwefel, ed., PPSN - First Internat. Workshop on Parallel Problem Solving from Nature, Dortmund, FRG (October 1-3, 1990), Preprints, 1990.
H. Mühlenbein, Evolution in time and space - the parallel genetic algorithm, in: G. Rawlins, ed., Foundations of Genetic Algorithms (Morgan Kaufmann, Los Altos, CA, 1991).
H. Mühlenbein, M. Gorges-Schleuter and O. Krämer, Evolution algorithm in combinatorial optimization.