IEEE Transactions on Neural Networks: A Publication of the IEEE Neural Networks Council

Published by the Institute of Electrical and Electronics Engineers
In this paper we present an adaptive multi-element generalized polynomial chaos (ME-gPC) method, which can achieve hp-convergence in random space. ME-gPC is based on the decomposition of random space and generalized polynomial chaos (gPC). Using proper numerical schemes to maintain local orthogonality on the fly, we perform gPC locally and adaptively. The key idea is to combine the h-version and p-version of the polynomial chaos method. The adaptive ME-gPC shows good performance in dealing with problems involving long-term integration, large perturbations, and discontinuities. Benchmarks and applications of ME-gPC are presented.
 
In the above titled paper (ibid., vol. 12, no. 1, pp. 41-63, Feb. 08), Fig. 20 was wrong. Its replacement is presented here.
 
In the above titled paper (ibid., vol. 12, no. 6, pp. 714-723, Dec. 08), there was an error in the pseudo-code for the incremental hypervolume by slicing objectives (IHSO) that might prevent its easy implementation. The corrected pseudo-code is presented here.
 
We propose a crossover-based adaptive local search (LS) operation for enhancing the performance of the standard differential evolution (DE) algorithm. Incorporating LS heuristics is often very useful in designing an effective evolutionary algorithm for global optimization. However, determining a single LS length that can serve for a wide range of problems is a critical issue. We present an LS technique to solve this problem by adaptively adjusting the length of the search, using a hill-climbing heuristic. The emphasis of this paper is to demonstrate how this LS scheme can improve the performance of DE. Experimenting with a wide range of benchmark functions, we show that the proposed new version of DE, with the adaptive LS, performs better than, or at least comparably to, the classic DE algorithm. Performance comparisons with other LS heuristics and with some other well-known evolutionary algorithms from the literature are also presented.
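A minimal Python sketch of the idea described above: a crossover-based local search applied around the current best DE individual, with the search length adapted by a hill-climbing rule between generations. The function names, the uniform-crossover move, and the doubling/halving rule are illustrative assumptions, not the paper's exact scheme.

```python
import numpy as np

def adaptive_local_search(f, best, donor, ls_len, bounds, rng):
    """Crossover-based local search around `best`: blend it with a donor
    vector via uniform crossover and hill-climb for `ls_len` trial steps."""
    x, fx = best.copy(), f(best)
    for _ in range(ls_len):
        mask = rng.random(x.size) < 0.5          # uniform crossover mask
        trial = np.clip(np.where(mask, donor, x), bounds[0], bounds[1])
        ft = f(trial)
        if ft < fx:                              # keep only improvements
            x, fx = trial, ft
    return x, fx

def adapt_length(ls_len, improved, lo=2, hi=64):
    """Hill-climbing rule for the LS length (an assumed doubling/halving
    scheme): lengthen the search while it keeps paying off, shrink it
    when it stops improving the best solution."""
    return min(hi, ls_len * 2) if improved else max(lo, ls_len // 2)
```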
 
A basic trap function.
A complex trap function.
Absorption times for a (1+1)-EA on a trap function.
Methods are developed to numerically analyze an evolutionary algorithm (EA) that applies mutation and selection on a bit-string representation to find the optimum of a bimodal unitation function called a trap function. This research bridges part of the gap between the existing convergence velocity analysis of strictly unimodal functions and global convergence results assuming the limit of infinite time. As a main result of this analysis, a new so-called (1:λ)-EA is proposed, which generates offspring using individual mutation rates p_i. While a more traditional EA using only one mutation rate is not able to find the global optimum of the trap function within an acceptable (nonexponential) time, our numerical investigations provide evidence that the new algorithm overcomes these limitations. The tools used for this analysis, based on absorbing Markov chains and the calculation of transition probabilities, are demonstrated to provide an intuitive and useful method for investigating the capabilities of EAs to bridge the gap between a local and a global optimum in bimodal search spaces.
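The absorbing-Markov-chain machinery mentioned above can be sketched directly in Python for an elitist (1+1)-EA on a unitation-based trap function: states are unitation classes, transition probabilities come from bitwise mutation, and expected absorption times follow from a linear solve. The trap definition and the elitist acceptance rule below are standard textbook forms assumed here, not necessarily the paper's exact parameterization.

```python
import numpy as np
from math import comb

def trap(u, n, z, a, b):
    """Bimodal unitation ('trap') function: a local optimum of height a at
    u = 0 and the global optimum of height b at u = n (assuming b > a)."""
    return a / z * (z - u) if u <= z else b / (n - z) * (u - z)

def mutation_prob(i, j, n, p):
    """Probability that bitwise mutation with rate p turns a string of
    unitation i into unitation j: flip k of the i ones and j - i + k of
    the n - i zeros, summed over all feasible k."""
    total = 0.0
    for k in range(i + 1):
        m = j - i + k
        if 0 <= m <= n - i:
            flips = k + m
            total += comb(i, k) * comb(n - i, m) * p**flips * (1 - p)**(n - flips)
    return total

def expected_absorption_times(n, z, a, b, p):
    """Expected generations for an elitist (1+1)-EA to reach u = n, modeled
    as an absorbing Markov chain over the n + 1 unitation classes."""
    fit = [trap(u, n, z, a, b) for u in range(n + 1)]
    P = np.zeros((n + 1, n + 1))
    for i in range(n + 1):
        for j in range(n + 1):
            if j != i and fit[j] >= fit[i]:      # elitist acceptance
                P[i, j] = mutation_prob(i, j, n, p)
        P[i, i] = 1.0 - P[i].sum()               # rejected moves stay put
    Q = P[:n, :n]                                # transient states u < n
    return np.linalg.solve(np.eye(n) - Q, np.ones(n))

# e.g., expected time to absorb at the global optimum from the local one:
print(expected_absorption_times(n=20, z=15, a=10.0, b=12.0, p=0.05)[0])
```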
 
Representation of a process plan.  
The use of intelligent techniques in the manufacturing field has been growing over the last few decades, due to the fact that most manufacturing optimization problems are combinatorial and NP-hard. This paper examines recent developments in the field of evolutionary computation for manufacturing optimization. Significant papers in various areas are highlighted, and comparisons of results are given wherever data are available. A wide range of problems is covered, from job shop and flow shop scheduling, to process planning and assembly line balancing.
 
Concatenation of P_1.ŷ, P_2.ŷ, ..., P_K.ŷ constitutes ŷ.
Averaged best fitness values of four PSO variants (using a ring topology) on f1 of 2, 5, 10, 20, 50, and 100 dimensions.
Averaged best fitness values for sep-CMA-ES and CCPSO2 on functions of 500 dimensions. (a) f1 ShiftedSphere. (b) f2 SchwefelProblem. (c) f3 ShiftedRosenbrock. (d) f4 ShiftedRastrigin. (e) f5 ShiftedGriewank. (f) f6 ShiftedAckley. (g) f7 ShiftedRastrigin. (h) f3r ShiftedRotatedRosenbrock. (i) f4r ShiftedRotatedRastrigin.
Averaged best fitness values for sep-CMA-ES and CCPSO2 on functions of 1000 dimensions. (a) f1 ShiftedSphere. (b) f2 SchwefelProblem. (c) f3 ShiftedRosenbrock. (d) f4 ShiftedRastrigin. (e) f5 ShiftedGriewank. (f) f6 ShiftedAckley. (g) f7 ShiftedRastrigin. (h) f3r ShiftedRotatedRosenbrock. (i) f4r ShiftedRotatedRastrigin.
Averaged best fitness values for sep-CMA-ES and CCPSO2 on f5r and f6r of 500 dimensions. (a) f5r. (b) f6r.
This paper presents a new cooperative coevolving particle swarm optimization (CCPSO) algorithm in an attempt to address the issue of scaling up particle swarm optimization (PSO) algorithms in solving large-scale optimization problems (up to 2000 real-valued variables). The proposed CCPSO2 builds on the success of an early CCPSO that employs an effective variable grouping technique, random grouping. CCPSO2 adopts a new PSO position update rule that relies on Cauchy and Gaussian distributions to sample new points in the search space, and a scheme to dynamically determine the coevolving subcomponent sizes of the variables. On high-dimensional problems (ranging from 100 to 2000 variables), the performance of CCPSO2 compared favorably against a state-of-the-art evolutionary algorithm, sep-CMA-ES, two existing PSO algorithms, and a cooperative coevolving differential evolution algorithm. In particular, CCPSO2 performed significantly better than sep-CMA-ES and the two existing PSO algorithms on the more complex multimodal problems (which more closely resemble real-world problems), though not as well as the existing algorithms on unimodal functions. Our experimental results and analysis suggest that CCPSO2 is a highly competitive optimization algorithm for solving large-scale and complex multimodal optimization problems.
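A sketch, in Python, of one plausible reading of the Cauchy/Gaussian position update described above: a particle's new position is sampled around either its personal best or its ring-neighborhood best, scaled by the distance between the two. The 50/50 mixing probability and the choice of anchor points are assumptions, not quoted from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def ccpso2_sample(pbest_i, lbest_i, p=0.5):
    """Sample a new position around the personal best (heavy-tailed Cauchy,
    favoring exploration) or the neighborhood best (Gaussian, favoring
    exploitation), scaled by how far apart the two attractors are."""
    scale = np.abs(pbest_i - lbest_i)
    if rng.random() < p:
        return pbest_i + rng.standard_cauchy(pbest_i.size) * scale
    return lbest_i + rng.standard_normal(lbest_i.size) * scale
```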
 
Over the last decade, a variety of evolutionary algorithms (EAs) have been proposed for solving multiobjective optimization problems. Especially the more recent multiobjective evolutionary algorithms (MOEAs) have been shown to be efficient and superior to earlier approaches. An important question, however, is whether we can expect such improvements to converge onto a specific efficient MOEA that behaves best on a large variety of problems. In this paper, we argue that the development of new MOEAs cannot converge onto a single most efficient MOEA, because the performance of MOEAs is itself multiobjective in character. While we point out the most important aspects for designing competent MOEAs, we also indicate the inherent multiobjective tradeoff in multiobjective optimization between proximity and diversity preservation. We discuss the impact of this tradeoff on the concepts and design of exploration and exploitation operators. We also present a general framework for competent MOEAs and show how current state-of-the-art MOEAs can be obtained by making choices within this framework. Furthermore, we show an example of how we can separate nondomination selection pressure from diversity preservation selection pressure and discuss the impact of changing the ratio between these components.
 
Based on evolutionary computation (EC) concepts, we developed an adaptive evolutionary planner/navigator (EP/N) as a novel approach to path planning and navigation. The EP/N is characterized by generality, flexibility, and adaptability. It unifies off-line planning and online planning/navigation processes in the same evolutionary algorithm which 1) accommodates different optimization criteria and changes in these criteria, 2) incorporates various types of problem-specific domain knowledge, and 3) enables good tradeoffs among near-optimality of paths, high planning efficiency, and effective handling of unknown obstacles. More importantly, the EP/N can self-tune its performance for different task environments and changes in such environments, mostly through adapting probabilities of its operators and adjusting paths constantly, even during a robot's motion toward the goal.
 
 
A simple optimization procedure for constraint-based problems which works without an objective function is described. The absence of an objective function makes the problem formulation particularly simple. The new method lends itself to parallel computation and is well suited for tasks where a family of solutions is required, trade-off situations have to be dealt with, or the design center has to be found.
 
Penalty functions are often used in constrained optimization. However, it is very difficult to strike the right balance between objective and penalty functions. This paper introduces a novel approach to balance objective and penalty functions stochastically, i.e., stochastic ranking, and presents a new view on penalty function methods in terms of the dominance of penalty and objective functions. Some of the pitfalls of naive penalty methods are discussed in these terms. The new ranking method is tested using a (μ, λ) evolution strategy on 13 benchmark problems. Our results show that suitable ranking alone (i.e., selection), without the introduction of complicated and specialized variation operators, is capable of improving the search performance significantly.
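The stochastic ranking idea lends itself to a short sketch: a probabilistic bubble sort in which adjacent individuals are compared by the objective with probability Pf (or whenever both are feasible) and by constraint violation otherwise. This Python version assumes the commonly cited Pf ≈ 0.45 and a penalty array phi holding constraint violations; variable names are illustrative.

```python
import random

def stochastic_rank(f, phi, pf=0.45):
    """Stochastic ranking as a probabilistic bubble sort over indices:
    compare by objective f when both individuals are feasible (phi == 0)
    or with probability pf; compare by violation phi otherwise."""
    n = len(f)
    idx = list(range(n))
    for _ in range(n):                       # at most n sweeps
        swapped = False
        for j in range(n - 1):
            a, b = idx[j], idx[j + 1]
            if (phi[a] == 0 and phi[b] == 0) or random.random() < pf:
                better_first = f[a] <= f[b]  # objective comparison
            else:
                better_first = phi[a] <= phi[b]  # penalty comparison
            if not better_first:
                idx[j], idx[j + 1] = b, a
                swapped = True
        if not swapped:                      # early exit: order is stable
            break
    return idx                               # best-ranked first
```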
 
The potential and effectiveness of the newly developed Pareto-based multiobjective evolutionary algorithms (MOEAs) for solving a real-world power system multiobjective nonlinear optimization problem are comprehensively discussed and evaluated in this paper. Specifically, the nondominated sorting genetic algorithm, the niched Pareto genetic algorithm, and the strength Pareto evolutionary algorithm (SPEA) have been developed and successfully applied to an environmental/economic electric power dispatch problem. A new procedure for quality measure is proposed in order to evaluate the different techniques. A feasibility check procedure has been developed and superimposed on the MOEAs to restrict the search to the feasible region of the problem space. A hierarchical clustering algorithm is also imposed to provide the power system operator with a representative and manageable Pareto-optimal set. Moreover, an approach based on fuzzy set theory is developed to extract one of the Pareto-optimal solutions as the best compromise solution. These multiobjective evolutionary algorithms have been individually examined and applied to the standard IEEE 30-bus six-generator test system. Several optimization runs have been carried out on different cases of problem complexity. The results of the MOEAs have been compared to those reported in the literature. The results confirm the potential and effectiveness of MOEAs compared to traditional multiobjective optimization techniques. In addition, the results demonstrate the superiority of SPEA as a promising multiobjective evolutionary algorithm for solving different power system multiobjective optimization problems.
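The fuzzy-set extraction of a best compromise solution mentioned above is commonly implemented with linear membership functions; the Python sketch below assumes that standard formulation (minimization objectives), which may differ in detail from the paper's.

```python
import numpy as np

def best_compromise(front):
    """Pick a best compromise from a Pareto front (rows = solutions,
    columns = minimized objectives): a linear membership mu in [0, 1]
    per objective, then the solution with the largest normalized total
    membership wins. A common fuzzy-set formulation, assumed here."""
    fmin, fmax = front.min(axis=0), front.max(axis=0)
    mu = (fmax - front) / (fmax - fmin + 1e-12)  # 1 = best per objective
    score = mu.sum(axis=1) / mu.sum()            # normalized membership
    return int(np.argmax(score))
```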
 
Multiple-objective metaheuristics, e.g., multiple-objective evolutionary algorithms, constitute one of the most active fields of multiple-objective optimization. Since 1985, a significant number of different methods have been proposed. However, only a few comparative studies of these methods have been performed on large-scale problems. We continue two comparative experiments on the multiple-objective 0/1 knapsack problem reported in the literature. We compare the performance of two multiple-objective genetic local search (MOGLS) algorithms to the best performers in the previous experiments, using the same test instances. The results of our experiment indicate that our MOGLS algorithm generates better approximations to the nondominated set than the other algorithms in the same number of function evaluations.
 
This paper introduces the ant colony system (ACS), a distributed algorithm that is applied to the traveling salesman problem (TSP). In the ACS, a set of cooperating agents called ants cooperate to find good solutions to TSPs. Ants cooperate using an indirect form of communication mediated by a pheromone they deposit on the edges of the TSP graph while building solutions. We study the ACS by running experiments to understand its operation. The results show that the ACS outperforms other nature-inspired algorithms such as simulated annealing and evolutionary computation, and we conclude by comparing ACS-3-opt, a version of the ACS augmented with a local search procedure, to some of the best performing algorithms for symmetric and asymmetric TSPs.
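The core ACS mechanisms can be sketched compactly: a pseudorandom-proportional state-transition rule, a local pheromone update applied while ants build tours, and a global update reserved for the best tour. The Python below uses parameter values (beta, q0, rho, alpha) that are typical in the ACS literature rather than taken from this listing.

```python
import numpy as np

rng = np.random.default_rng(0)

def acs_next_city(current, unvisited, tau, eta, beta=2.0, q0=0.9):
    """ACS state-transition rule: with probability q0 exploit the best edge
    (max tau * eta^beta); otherwise explore via roulette-wheel selection."""
    unvisited = np.asarray(unvisited)
    weights = tau[current, unvisited] * eta[current, unvisited] ** beta
    if rng.random() < q0:
        return int(unvisited[np.argmax(weights)])
    return int(rng.choice(unvisited, p=weights / weights.sum()))

def local_update(tau, i, j, rho=0.1, tau0=1e-4):
    """Local pheromone update applied as each ant crosses edge (i, j),
    which discourages the colony from converging prematurely."""
    tau[i, j] = tau[j, i] = (1 - rho) * tau[i, j] + rho * tau0

def global_update(tau, best_tour, best_len, alpha=0.1):
    """Global update: only the best-so-far tour deposits pheromone,
    in proportion to its quality (1 / tour length)."""
    for i, j in zip(best_tour, best_tour[1:] + best_tour[:1]):
        tau[i, j] = tau[j, i] = (1 - alpha) * tau[i, j] + alpha / best_len
```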
 
The particle swarm is an algorithm for finding optimal regions of complex search spaces through the interaction of individuals in a population of particles. This paper analyzes a particle's trajectory as it moves in discrete time (the algebraic view), then progresses to the view of it in continuous time (the analytical view). A five-dimensional depiction is developed, which describes the system completely. These analyses lead to a generalized model of the algorithm, containing a set of coefficients to control the system's convergence tendencies. Some results of the particle swarm optimizer, implementing modifications derived from the analysis, suggest methods for altering the original algorithm in ways that eliminate problems and increase the ability of the particle swarm to find optima of some well-studied test functions.
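The generalized model with convergence-controlling coefficients is commonly instantiated as the constriction-coefficient velocity update; below is a Python sketch under that reading, with phi1 = phi2 = 2.05 as an assumed (and widely used) setting so that phi > 4 and the coefficient damps the trajectory.

```python
import math
import numpy as np

def constricted_velocity(v, x, pbest, gbest, phi1=2.05, phi2=2.05,
                         rng=np.random.default_rng()):
    """Velocity update with a constriction coefficient chi derived from
    phi = phi1 + phi2 > 4, which prevents the particle's trajectory from
    exploding without an explicit velocity clamp."""
    phi = phi1 + phi2
    chi = 2.0 / abs(2.0 - phi - math.sqrt(phi * phi - 4.0 * phi))
    cognitive = phi1 * rng.random(x.size) * (pbest - x)
    social = phi2 * rng.random(x.size) * (gbest - x)
    return chi * (v + cognitive + social)   # chi is about 0.7298 here
```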
 
Schematic view of the situation in which the function space F is three-dimensional. The uniform prior over this space, ~1, lies along the diagonal. Different algorithms a give different vectors v lying in the cone surrounding the diagonal. A particular problem is represented by its prior ~p lying on the simplex. The algorithm that will perform best is the one in the cone having the largest inner product with ~p.
A framework is developed to explore the connection between effective optimization algorithms and the problems they are solving. A number of "no free lunch" (NFL) theorems are presented which establish that for any algorithm, any elevated performance over one class of problems is offset by performance over another class. These theorems result in a geometric interpretation of what it means for an algorithm to be well suited to an optimization problem. Applications of the NFL theorems to information-theoretic aspects of optimization and benchmark measures of performance are also presented. Other issues addressed include time-varying optimization problems and a priori "head-to-head" minimax distinctions between optimization algorithms, distinctions that result despite the NFL theorems' enforcing of a type of uniformity over all algorithms.
 
The premise behind all evolutionary methods is "survival of the fittest," and consequently, individuals require a quantitative fitness measure. This paper proposes a novel strategy for evaluating individuals' relative strengths and weaknesses, as well as representing these in the form of a binary string fitness characterization (BSFC); in addition, as is customary, an overall fitness value is assigned to each individual. Utilizing the BSFC, we demonstrate both novel population evaluation measures and a pairwise mating strategy, comparative partner selection (CPS), with the aim of evolving a population that promotes effective solutions by reducing population-wide weaknesses. This strategy is tested on six standard genetic programming benchmarking problems.
 
We propose the use of a new algorithm to solve multiobjective optimization problems. Our proposal adapts the well-known scatter search template for single-objective optimization to the multiobjective domain. The result is a hybrid metaheuristic algorithm called Archive-Based hYbrid Scatter Search (AbYSS), which follows the scatter search structure but uses mutation and crossover operators from evolutionary algorithms. AbYSS incorporates typical concepts from the multiobjective field, such as Pareto dominance, density estimation, and an external archive to store the nondominated solutions. We evaluate AbYSS with a standard benchmark including both unconstrained and constrained problems, and it is compared with two state-of-the-art multiobjective optimizers, NSGA-II and SPEA2. The results obtained indicate that, according to the benchmark and parameter settings used, AbYSS outperforms the other two algorithms as regards the diversity of the solutions, and it obtains very competitive results according to the convergence to the true Pareto fronts and the hypervolume metric.
 
One of the major obstacles to achieving robots capable of operating in real-world environments is enabling them to cope with a continuous stream of unanticipated situations. In previous work, it was demonstrated that a robot can autonomously generate self-models, and use those self-models to diagnose unanticipated morphological change such as damage. In this paper, it is shown that multiple physical quadrupedal robots with similar morphologies can share self-models in order to accelerate modeling. Further, it is demonstrated that quadrupedal robots which maintain separate self-modeling algorithms but swap self-models perform better than quadrupedal robots that rely on a shared self-modeling algorithm. This finding points the way toward more robust robot teams: a robot can diagnose and recover from unanticipated situations faster by drawing on the previous experiences of the other robots.
 
This paper presents a search method that combines elements from the evolutionary and local search paradigms through the systematic use of crossover operations, which are generally used for the structured exchange of genes between solutions in genetic algorithms. Here, crossover operations are utilized as a systematic means to generate several candidate solutions from two superior solutions. To test the effectiveness of the method, it has been applied to the resource-constrained project scheduling problem. The computational experiments show that the application of the method to this problem is promising.
 
A convergence acceleration operator (CAO) is described which enhances the search capability and the speed of convergence of the host multiobjective optimization algorithm. The operator acts directly in the objective space to suggest improvements to solutions obtained by a multiobjective evolutionary algorithm (MOEA). The suggested improved objective vectors are then mapped into the decision variable space and tested. This method improves upon prior work in a number of important respects, such as mapping technique and solution improvement. Further, the paper discusses implications for many-objective problems and studies the impact of the use of the CAO as the number of objectives increases. The CAO is incorporated with two leading MOEAs, the nondominated sorting genetic algorithm and the strength Pareto evolutionary algorithm, and tested. Results show that the hybridized algorithms consistently improve the speed of convergence of the original algorithm while maintaining the desired distribution of solutions. It is shown that the operator is a transferable component that can be hybridized with any MOEA.
 
An improved resource allocation scheme is proposed in this paper, which uses genetic algorithms (GAs) in conjunction with the recently developed plane cover multiple-access (PCMA) scheme in order to maximize the attainable capacity of packet-based wireless cellular networks. The studied problem has been proven to be in the class of nondeterministic polynomial (NP)-hard problems; therefore, the powerful search capability of the GA is a key factor in improving the performance of cellular resource allocation. Computer simulation results suggest that the proposed approach outperforms the "uniform" and the "greedy" algorithm-based "min" methods in terms of the number of serviced users.
 
Classification is one of the fundamental tasks of data mining. Most rule induction and decision tree algorithms perform a local, greedy search to generate classification rules that are often more complex than necessary. Evolutionary algorithms for pattern classification have recently received increased attention because they can perform global searches. In this paper, we propose a new approach for discovering classification rules by using gene expression programming (GEP), a new technique of genetic programming (GP) with linear representation. The antecedent of discovered rules may involve many different combinations of attributes. To guide the search process, we suggest a fitness function considering both the rule consistency gain and completeness. A multiclass classification problem is formulated as multiple two-class problems by using the one-against-all learning method. The covering strategy is applied to learn multiple rules, if applicable, for each class. Compact rule sets are subsequently evolved using a two-phase pruning method based on the minimum description length (MDL) principle and integration theory. Our approach is also noise tolerant and able to deal with both numeric and nominal attributes. Experiments with several benchmark data sets have shown up to a 20% improvement in validation accuracy compared with the C4.5 algorithm. Furthermore, the proposed GEP approach is more efficient and tends to generate shorter solutions than canonical tree-based GP classifiers.
 
A hybrid evolutionary technique is proposed for data mining tasks, which combines a principle inspired by the immune system, namely the clonal selection principle, with a more common, though very efficient, evolutionary technique, gene expression programming (GEP). The clonal selection principle regulates the immune response in order to successfully recognize and confront any foreign antigen, and at the same time allows the amelioration of the immune response across successive appearances of the same antigen. On the other hand, gene expression programming is the descendant of genetic algorithms and genetic programming and eliminates their main disadvantages, such as the genotype-phenotype coincidence, though it preserves their advantageous features. In order to perform the data mining task, the proposed algorithm introduces the notion of a data class antigen, which is used to represent a class of data; the produced rules are evolved by our clonal selection algorithm (CSA), which extends the recently proposed CLONALG algorithm. In CSA, among other new features, a receptor editing step has been incorporated. Moreover, the rules themselves are represented as antibodies that are coded as GEP chromosomes in order to exploit the flexibility and the expressiveness of such an encoding. The proposed hybrid technique is tested on a set of benchmark problems in comparison to GEP. In almost all problems considered, the results are very satisfactory and outperform conventional GEP in terms of both prediction accuracy and computational efficiency.
 
We prove some convergence properties for a class of ant colony optimization algorithms. In particular, we prove that for any small constant ε > 0 and for a sufficiently large number of algorithm iterations t, the probability of finding an optimal solution at least once is P*(t) ≥ 1 − ε, and that this probability tends to 1 for t → ∞. We also prove that, after an optimal solution has been found, it takes a finite number of iterations for the pheromone trails associated with the found optimal solution to grow higher than any other pheromone trail, and that, for t → ∞, any fixed ant will produce the optimal solution during the t-th iteration with probability P ≥ 1 − ε̂(τ_min, τ_max), where τ_min and τ_max are the minimum and maximum values that can be taken by the pheromone trails.
 
Two learning methods for acquiring position evaluation for small Go boards are studied and compared. In each case the function to be learned is a position-weighted piece counter, and only the learning method differs. The methods studied are temporal difference learning (TDL), using the self-play gradient-descent method, and coevolutionary learning, using an evolution strategy. The two approaches are compared with the hope of gaining a greater insight into the problem of searching for "optimal" zero-sum game strategies. Using tuned standard setups for each algorithm, it was found that the temporal-difference method learned faster, and in most cases also achieved a higher level of play than coevolution, provided that the gradient descent step size was chosen suitably. The performance of the coevolution method was found to be sensitive to the design of the evolutionary algorithm in several respects. Given the right configuration, however, coevolution achieved a higher level of play than TDL. Self-play results in optimal play against a copy of itself. A self-play player will prefer moves from which it is unlikely to lose even when it occasionally makes random exploratory moves. An evolutionary player forced to perform exploratory moves in the same way can achieve superior strategies to those acquired through self-play alone, because the evolutionary player is exposed to more varied game-play through playing against a diverse population of players.
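For a position-weighted piece counter, the self-play gradient-descent TD update has a particularly compact form. The Python sketch below assumes a tanh-squashed linear evaluation and a TD(0) target, which matches the description above in spirit but not necessarily in every detail of the paper's setup.

```python
import numpy as np

def evaluate(board, w):
    """Position-weighted piece counter: board is a vector in {-1, 0, +1}
    (one entry per intersection), w the learned weights; tanh squashes
    the value into (-1, 1)."""
    return np.tanh(w @ board)

def td0_update(w, board, next_board, alpha=0.01):
    """Gradient-descent TD(0) step from self-play: nudge w so the value of
    the current position tracks the value of the successor position."""
    v, v_next = evaluate(board, w), evaluate(next_board, w)
    grad = (1.0 - v * v) * board        # derivative of tanh(w . board) in w
    w += alpha * (v_next - v) * grad
    return w
```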
 
This paper presents a generalization of the graph-based genetic programming (GP) technique known as Cartesian genetic programming (CGP). We have extended CGP by utilizing automatic module acquisition, evolution, and reuse. To benchmark the new technique, we have tested it on various digital circuit problems, two symbolic regression problems, the lawnmower problem, and the hierarchical if-and-only-if problem. The results show the new modular method evolves solutions quicker than the original nonmodular method, and the speedup is more pronounced on larger problems. Also, the new modular method performs favorably when compared with other GP methods. Analysis of the evolved modules shows they often produce recognizable functions. Prospects for further improvements to the method are discussed.
 
This paper investigates how an evolutionary algorithm with an indirect encoding exploits the property of phenotypic regularity, an important design principle found in natural organisms and engineered designs. We present the first comprehensive study showing that such phenotypic regularity enables an indirect encoding to outperform direct encoding controls as problem regularity increases. Such an ability to produce regular solutions that can exploit the regularity of problems is an important prerequisite if evolutionary algorithms are to scale to high-dimensional real-world problems, which typically contain many regularities, both known and unrecognized. The indirect encoding in this case study is HyperNEAT, which evolves artificial neural networks (ANNs) in a manner inspired by concepts from biological development. We demonstrate that, in contrast to two direct encoding controls, HyperNEAT produces both regular behaviors and regular ANNs, which enables HyperNEAT to significantly outperform the direct encodings as regularity increases in three problem domains. We also show that the types of regularities HyperNEAT produces can be biased, allowing domain knowledge and preferences to be injected into the search. Finally, we examine the downside of a bias toward regularity. Even when a solution is mainly regular, some irregularity may be needed to perfect its functionality. This insight is illustrated by a new algorithm called HybrID that hybridizes indirect and direct encodings, which matched HyperNEAT's performance on regular problems yet outperformed it on problems with some irregularity. HybrID's ability to improve upon the performance of HyperNEAT raises the question of whether indirect encodings may ultimately excel not as stand-alone algorithms, but by being hybridized with a further process of refinement, wherein the indirect encoding produces patterns that exploit problem regularity and the refining process modifies that pattern to capture irregularities. This paper thus paints a more complete picture of indirect encodings than prior studies because it analyzes the impact of the continuum between irregularity and regularity on the performance of such encodings, and ultimately suggests a path forward that combines indirect encodings with a separate process of refinement.
 
In recent years, the concept of "autonomous mental development" (AMD) has been applied to the construction of artificial systems such as conversational agents, in order to resolve some of the difficulties involved in the manual definition of their knowledge bases and behavioral patterns. AMD is a new paradigm for developing autonomous machines, which are adaptive and flexible to the environment. Language development, a kind of mental development, is an important aspect of intelligent conversational agents. In this paper, we propose an intelligent conversational agent and its language development mechanism by putting together five promising techniques: Bayesian networks, pattern matching, finite-state machines, templates, and genetic programming (GP). Knowledge acquisition, implemented by finite-state machines and templates, and language learning by GP are used for language development. Several illustrations and usability tests show the usefulness of the proposed developmental conversational agent.
 
We propose an integrated technique of genetic programming (GP) and reinforcement learning (RL) to enable a real robot to adapt its actions to a real environment. Our technique does not require a precise simulator because learning is achieved through the real robot. In addition, our technique makes it possible for real robots to learn effective actions. Based on this proposed technique, we acquire common programs, using GP, which are applicable to various types of robots. Through this acquired program, we execute RL in a real robot. With our method, the robot can adapt to its own operational characteristics and learn effective actions. In this paper, we show experimental results from two different robots: a four-legged robot "AIBO" and a humanoid robot "HOAP-1." We present results showing that both effectively solved the box-moving task; the end result demonstrates that our proposed technique performs better than the traditional Q-learning method.
 
Differential evolution (DE) is an efficient and powerful population-based stochastic search technique for solving optimization problems over continuous space, which has been widely applied in many scientific and engineering fields. However, the success of DE in solving a specific problem crucially depends on appropriately choosing trial vector generation strategies and their associated control parameter values. Employing a trial-and-error scheme to search for the most suitable strategy and its associated parameter settings requires high computational costs. Moreover, at different stages of evolution, different strategies coupled with different parameter settings may be required in order to achieve the best performance. In this paper, we propose a self-adaptive DE (SaDE) algorithm, in which both trial vector generation strategies and their associated control parameter values are gradually self-adapted by learning from their previous experiences in generating promising solutions. Consequently, a more suitable generation strategy along with its parameter settings can be determined adaptively to match different phases of the search process/evolution. The performance of the SaDE algorithm is extensively evaluated (using codes available from P. N. Suganthan) on a suite of 26 bound-constrained numerical optimization problems and compares favorably with the conventional DE and several state-of-the-art parameter adaptive DE variants.
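The learning mechanism described above can be sketched as follows: success and failure counts per trial-vector generation strategy, accumulated over a learning period, are turned into a sampling distribution for choosing the next strategy. The epsilon floor and the exact normalization in this Python sketch are assumptions; the paper's update may differ in detail.

```python
import numpy as np

def update_strategy_probs(ns, nf, eps=0.01):
    """SaDE-style strategy adaptation: ns[k] / nf[k] count trial vectors of
    strategy k that did / did not survive selection over the recent
    learning period; strategies are then sampled in proportion to their
    empirical success rates, with eps keeping every strategy selectable."""
    rates = ns / (ns + nf + 1e-12) + eps
    return rates / rates.sum()

# Example: four strategies with success/failure counts from a learning period.
ns = np.array([30.0, 12.0, 5.0, 20.0])
nf = np.array([10.0, 25.0, 40.0, 15.0])
probs = update_strategy_probs(ns, nf)   # sampling distribution for strategies
strategy = np.random.default_rng().choice(len(probs), p=probs)
```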
 
A novel approach for the autonomous virulence adaptation (AVA) of competing populations in a coevolutionary optimization framework is presented. Previous work has demonstrated that setting an appropriate virulence, v, of populations accelerates coevolutionary optimization by avoiding detrimental periods of disengagement. However, since the likelihood of disengagement varies both between systems and over time, choosing the ideal value of v is problematic. The AVA technique presented here uses a machine learning approach to continuously tune v as system engagement varies. In a simple, abstract domain, AVA is shown to successfully adapt to the most productive values of v. Further experiments, in more complex domains of sorting networks and maze navigation, demonstrate AVA's efficiency over reduced virulence and the layered Pareto coevolutionary archive.
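Virulence here refers to a moderated fitness applied to a population's relative scores, often written in the coevolution literature (after Cartlidge and Bullock) as f(x, v) = 2x/v - (x/v)^2 for a relative score x in [0, 1]; AVA's contribution is tuning v online as engagement varies. A sketch of that commonly cited function, with the form assumed from the virulence literature rather than quoted from this listing:

```python
def virulence_fitness(x, v):
    """Moderated-virulence fitness: x in [0, 1] is an individual's relative
    score against the opposing population; v controls how strongly outright
    dominance is rewarded. v = 1.0 gives maximum virulence (fitness rises
    monotonically with x), while v = 0.75 rewards moderate dominance most,
    which helps keep the two populations engaged."""
    r = x / v
    return 2.0 * r - r * r
```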
 
The centrality of the decision maker (DM) is widely recognized in the multiple criteria decision-making community. This translates into emphasis on seamless human-computer interaction, and adaptation of the solution technique to the knowledge which is progressively acquired from the DM. This paper adopts the methodology of reactive search optimization (RSO) for evolutionary interactive multiobjective optimization. RSO follows the paradigm of “learning while optimizing,” through the use of online machine learning techniques as an integral part of a self-tuning optimization scheme. User judgments of pairs of solutions are used to build robust incremental models of the user utility function, with the objective of reducing the cognitive burden required from the DM to identify a satisficing solution. The technique of support vector ranking is used together with a k-fold cross-validation procedure to select the best kernel for the problem at hand during the utility function training procedure. Experimental results are presented for a series of benchmark problems.
 
Self-adapting: encoding aspect.
Average best fitness curves of the self-adaptive DE algorithm for selected benchmark functions; all results are means of 50 runs.
Evolutionary processes of DE for two selected functions. Parameter settings were the same as proposed in Section VII-B; results were averaged over 30 independent runs. The selected function problems are depicted in Figs. 3 and 4.
CR and F values produced by self-adaptive DE for selected functions; a dot is plotted whenever the best fitness value in a generation improves.
CR and F values produced by self-adaptive DE for two further functions; a dot is plotted whenever the best fitness value in a generation improves.
We describe an efficient technique for adapting the control parameter settings associated with differential evolution (DE). The DE algorithm has been used in many practical cases and has demonstrated good convergence properties. It has only a few control parameters, which are kept fixed throughout the entire evolutionary process. However, it is not an easy task to properly set control parameters in DE. We present a new version of the DE algorithm for obtaining self-adaptive control parameter settings that show good performance on numerical benchmark problems. The results show that our algorithm with self-adaptive control parameter settings is better than, or at least comparable to, the standard DE algorithm and evolutionary algorithms from the literature when considering the quality of the solutions obtained.
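The self-adaptation rule this line of work is known for is often summarized as: regenerate each individual's F and CR with small probabilities before mutation, and inherit them unchanged otherwise. A Python sketch with the commonly cited constants (tau1 = tau2 = 0.1, F drawn from [0.1, 1.0]), assumed here rather than quoted from the abstract:

```python
import random

def self_adapt_params(F, CR, tau1=0.1, tau2=0.1, Fl=0.1, Fu=0.9):
    """Each individual carries its own F and CR; before producing a trial
    vector they are regenerated with small probabilities tau1 and tau2,
    otherwise inherited, so good settings propagate with good solutions."""
    if random.random() < tau1:
        F = Fl + random.random() * Fu    # new F uniform in [0.1, 1.0]
    if random.random() < tau2:
        CR = random.random()             # new CR uniform in [0, 1]
    return F, CR
```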
 
Typical analog and radio frequency (RF) circuit sizing optimization problems are computationally hard and require the handling of several conflicting cost criteria. Many researchers have used sequential stochastic refinement methods to solve them, where the different cost criteria can either be combined into a single-objective function to find a unique solution, or they can be handled by multiobjective optimization methods to produce tradeoff solutions on the Pareto front. This paper presents a method for solving the problem by the former approach. We propose a systematic method for incorporating the tradeoff wisdom inspired by circuit domain knowledge in the formulation of the composite cost function. Key issues have been identified, and the problem has been divided into two parts: a) normalization of objective functions and b) assignment of weights to objectives in the cost function. A nonlinear, parameterized normalization strategy has been proposed and has been shown to be better than traditional linear normalization functions. Further, the designers' problem-specific knowledge is assembled in the form of a partially ordered set, which is used to construct a hierarchical cost graph for the problem. The scalar cost function is calculated based on this graph. Adaptive mechanisms have been introduced to dynamically change the structure of the graph to improve the chances of reaching the near-optimal solution. A correlated double sampling offset-compensated switched capacitor analog integrator circuit and an RF low-noise amplifier in an industry-standard 0.18 μm CMOS technology have been chosen for experimental study. Optimization results have been shown for both the traditional and the proposed methods. The results show significant improvement in both of the chosen design problems.
 
The Internet and World Wide Web are becoming more and more dynamic in terms of their content and use. Information retrieval (IR) efforts aim to keep up with this dynamic environment by designing intelligent systems which can deliver Web content in real time to various wired or wireless devices. Evolutionary and adaptive systems (EASs) are emerging as typical examples of such systems. This paper contains one of the first attempts to gather and evaluate the nature of current research on Web-based IR using EAS and proposes future research directions in parallel to developments on the Web environments.
 
The self-adaptation response for the (1, 10)-SA-ES with N = 30. Left: log-normal rule (57/58). Right: symmetric two-point rule (60).
Due to their flexibility in adapting to different fitness landscapes, self-adaptive evolutionary algorithms (SA-EAs) have been gaining popularity in the recent past. In this paper, we postulate the properties that SA-EA operators should have for successful applications in real-valued search spaces. Specifically, the population mean and variance of a number of SA-EA operators, such as various real-parameter crossover operators and self-adaptive evolution strategies, are calculated for this purpose. Simulation results are shown to verify the theoretical calculations. The postulations and population variance calculations explain why self-adaptive genetic algorithms and evolution strategies have shown similar performance in the past, and also suggest appropriate strategy parameter values, which must be chosen while applying and comparing different SA-EAs.
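The log-normal self-adaptation rule referenced in the figure caption above has a compact form: mutate the step size multiplicatively first, then mutate the object variables with the new step size, so selection acts on both together. A Python sketch with a single global step size and an assumed standard learning-rate choice:

```python
import numpy as np

def sa_es_mutation(x, sigma, rng=np.random.default_rng()):
    """Log-normal self-adaptation for an ES with one global step size:
    sigma is perturbed multiplicatively by exp(tau * N(0, 1)), and the
    offspring is then sampled with the new sigma."""
    n = x.size
    tau = 1.0 / np.sqrt(2.0 * n)             # a standard learning rate
    sigma_new = sigma * np.exp(tau * rng.standard_normal())
    x_new = x + sigma_new * rng.standard_normal(n)
    return x_new, sigma_new
```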
 
A variety of previous works exist on maintaining population diversity of genetic algorithms (GAs). Dual-population GA (DPGA) is a type of multipopulation GA (MPGA) that uses an additional population as a reservoir of diversity. The main population is similar to that of an ordinary GA and evolves to find good solutions. The reserve population evolves to maintain and provide diversity to the main population. While most MPGAs use migration as a means of information exchange between different populations, DPGA uses crossbreeding because the two populations have entirely different fitness functions. The reserve population cannot provide useful diversity to the main population unless the two maintain an appropriate distance. Therefore, DPGA adjusts the distance dynamically to achieve an appropriate balance between exploration and exploitation. The experimental results on various classes of problems using binary, real-valued, and order-based representations show that DPGA quite often outperforms not only the standard GAs but also other GAs having additional mechanisms of diversity preservation.
 
In this paper, an adaptive tradeoff model (ATM) is proposed for constrained evolutionary optimization. In this model, three main issues are considered: (1) the evaluation of infeasible solutions when the population contains only infeasible individuals; (2) balancing feasible and infeasible solutions when the population consists of a combination of feasible and infeasible individuals; and (3) the selection of feasible solutions when the population is composed of feasible individuals only. These issues are addressed by designing different tradeoff schemes for different stages of the search process, to obtain an appropriate tradeoff between the objective function and constraint violations. In addition, a simple evolution strategy (ES) is used as the search engine. By integrating ATM with ES, a generic constrained optimization evolutionary algorithm (ATMES) is derived. The new method is tested on 13 well-known benchmark test functions, and the empirical results suggest that it outperforms or performs similarly to other state-of-the-art techniques referred to in this paper in terms of the quality of the resulting solutions.
 
This paper presents a hybrid genetic algorithm (GA) with an adaptive application of genetic operators for solving the 3-matching problem (3MP), an NP-complete graph problem. In the 3MP, we search for the partition of a point set into minimal total cost triplets, where the cost of a triplet is the Euclidean length of the minimal spanning tree of the three points. The problem is a special case of grouping and facility location problems. One common problem with GAs applied to hard combinatorial optimization, like the 3MP, is how to incorporate problem-dependent local search operators into the GA efficiently in order to find high-quality solutions. Small instances of the problem can be solved exactly, but for large problems, we use local optimization. We introduce several general heuristic crossover and local hill-climbing operators, and apply adaptation to choose among them. Our GA combines these operators to form an effective problem solver. It is hybridized as it incorporates local search heuristics, and it is adaptive as the individual recombination/improvement operators are fired according to their online performance. Test results show that this approach gives approximately the same or even slightly better results than our previous, fine-tuned GA without adaptation. It is better than a grouping GA for the partitioning considered. The adaptive combination of operators eliminates a large set of parameters, making the method more robust, and it presents a convenient way to build a hybrid problem solver.
 
Search algorithms for Pareto optimization are designed to obtain multiple solutions, each offering a different trade-off of the problem objectives. To make the different solutions available at the end of an algorithm run, procedures are needed for storing them, one by one, as they are found. In a simple case, this may be achieved by placing each point that is found into an "archive" which maintains only nondominated points and discards all others. However, even a set of mutually nondominated points is potentially very large, necessitating a bound on the archive's capacity. But with such a bound in place, it is no longer obvious which points should be maintained and which discarded; we would like the archive to maintain a representative and well-distributed subset of the points generated by the search algorithm, and also that this set converges. To achieve these objectives, we propose an adaptive archiving algorithm, suitable for use with any Pareto optimization algorithm, which has various useful properties as follows. It maintains an archive of bounded size, encourages an even distribution of points across the Pareto front, is computationally efficient, and we are able to prove a form of convergence. The method proposed here maintains evenness, efficiency, and cardinality, and provably converges under certain conditions but not all. Finally, the notions underlying our convergence proofs support a new way to rigorously define what is meant by "good spread of points" across a Pareto front, in the context of grid-based archiving schemes. This leads to proofs and conjectures applicable to archive sizing and grid sizing in any Pareto optimization algorithm maintaining a grid-based archive.
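A bounded, grid-based archiving scheme of the kind discussed above can be sketched briefly: reject dominated candidates, evict archive members the candidate dominates, and when the archive is full, remove a member of the most crowded grid cell. This Python sketch conveys the idea only; the paper's algorithm includes further rules that its convergence proofs depend on, and the grid resolution and eviction tie-breaking here are assumptions.

```python
import numpy as np

def grid_key(obj, lo, hi, divisions=8):
    """Map an objective vector to its hyper-grid cell (all objectives
    assumed minimized and bounded by lo and hi)."""
    cell = ((obj - lo) / (hi - lo + 1e-12) * divisions).astype(int)
    return tuple(np.clip(cell, 0, divisions - 1))

def try_insert(archive, obj, capacity, lo, hi):
    """Bounded nondominated archive with grid-based crowding."""
    if any(np.all(a <= obj) and np.any(a < obj) for a in archive):
        return archive                           # candidate is dominated
    archive = [a for a in archive
               if not (np.all(obj <= a) and np.any(obj < a))]
    if len(archive) >= capacity:
        keys = [grid_key(a, lo, hi) for a in archive]
        counts = {k: keys.count(k) for k in keys}
        crowded = max(counts, key=counts.get)    # most crowded cell
        if grid_key(obj, lo, hi) == crowded:
            return archive                       # would not improve spread
        archive.pop(keys.index(crowded))         # evict from crowded cell
    archive.append(obj)
    return archive
```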
 
This paper proposes a novel optimization algorithm for image-space matching and three-dimensional space analysis, using an adapted scheme of evolutionary computation that employs the concept of symbiosis in a collective of homogeneous populations. It is applied to the automatic generation of disparity surfaces used for depth estimation in stereo vision. The global task of approximating the complete disparity surface is decomposed to a large number of smaller local problems, each solvable by a smaller processing unit. Coevolution is sustained in such a way as to counteract the arbitrary decomposition of the original super-problem, so that the local evolutions of all the subproblems become interlocked. This, in the long run, provides a consistent global solution, and it does so via an asynchronous and massively parallel architecture. The entire surface is partitioned to a set of adjoining patches represented by distinct species or populations, with phenotypes corresponding to different polynomial functionals. The credit assignment functions take into account both self and symbiotic terms in an adaptive and dynamic manner, in order to produce disparity patches that are fit within their own domain and at the same time fit in association with their symbionts. This persistent propagation of local interactions to a global scale throughout evolution generates a unified disparity surface composed of the many smaller patch surfaces.
 
Many different algorithms have been developed in the last few decades for solving complex real-world search and optimization problems. The main focus in this research has been on the development of a single universal genetic operator for population evolution that is always efficient for a diverse set of optimization problems. In this paper, we argue that significant advances to the field of evolutionary computation can be made if we embrace a concept of self-adaptive multimethod optimization, in which multiple different search algorithms are run concurrently and learn from each other through information exchange using a common population of points. We present an evolutionary algorithm, entitled A Multialgorithm Genetically Adaptive Method for Single Objective Optimization (AMALGAM-SO), that implements this concept of self-adaptive multimethod search. This method simultaneously merges the strengths of the covariance matrix adaptation (CMA) evolution strategy, genetic algorithm (GA), and particle swarm optimizer (PSO) for population evolution, and implements a self-adaptive learning strategy to automatically tune the number of offspring these three individual algorithms are allowed to contribute during each generation. Benchmark results in 10, 30, and 50 dimensions using synthetic functions from the special session on real-parameter optimization of CEC 2005 show that AMALGAM-SO obtains similar efficiencies to existing algorithms on relatively simple unimodal problems, but is superior for more complex higher dimensional multimodal optimization problems. The new search method scales well with increasing number of dimensions, converges in the close proximity of the global minimum for functions with noise-induced multimodality, and is designed to take full advantage of the power of distributed computer networks.
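The self-adaptive learning strategy described above amounts to reallocating offspring among the constituent algorithms according to their recent success. The Python sketch below illustrates that concept with a proportional rule and a minimum share; it is a sketch of the idea, not AMALGAM-SO's exact update.

```python
import numpy as np

def allocate_offspring(success, pop_size, min_share=0.05):
    """Self-adaptive multimethod allocation: each algorithm's share of the
    next generation's offspring is proportional to how many of its children
    survived selection, with a floor so no method is switched off entirely."""
    shares = success / max(success.sum(), 1)
    shares = np.maximum(shares, min_share)
    shares = shares / shares.sum()
    counts = np.floor(shares * pop_size).astype(int)
    counts[np.argmax(shares)] += pop_size - counts.sum()  # absorb rounding
    return counts

# e.g., CMA-ES, GA, and PSO contributed 18, 6, and 4 surviving offspring:
print(allocate_offspring(np.array([18.0, 6.0, 4.0]), pop_size=50))
```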
 
Top-cited authors
Kalyan Deb
  • Michigan State University
Qingfu Zhang
  • City University of Hong Kong
David H. Wolpert
  • Santa Fe Institute
Marco Dorigo
  • Université Libre de Bruxelles
Luca Maria Gambardella
  • University of Applied Sciences and Arts of Southern Switzerland