IEEE Transactions on Evolutionary Computation

Published by IEEE (Institute of Electrical and Electronics Engineers)

Articles


Numerical Solution of Stochastic Differential Equations

December 2008

·

4,749 Reads

In this paper we present an adaptive multi-element generalized polynomial chaos (ME-gPC) method, which can achieve hp-convergence in random space. ME-gPC is based on the decomposition of random space and generalized polynomial chaos (gPC). Using proper numerical schemes to maintain the local orthogonality on-the-fly, we perform gPC locally and adaptively. The key idea is to combine the h-version and p-version refinement of the polynomial chaos method. The adaptive ME-gPC shows good performance in dealing with problems involving long-term integration, large perturbations, and discontinuities. Benchmarks and applications of ME-gPC are presented.
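For readers unfamiliar with gPC, the expansion the abstract refers to takes the following standard truncated form (notation assumed here as an illustration, not copied from the paper):

```latex
% Minimal sketch: truncated gPC expansion in standard notation (an assumption, not from the paper)
u(\xi) \approx \sum_{i=0}^{P} \hat{u}_i \, \Phi_i(\xi),
\qquad
\mathbb{E}\left[\Phi_i(\xi)\,\Phi_j(\xi)\right] = \langle \Phi_i^2 \rangle \, \delta_{ij},
\qquad
P + 1 = \frac{(n+p)!}{n!\,p!}
```

for n random dimensions and polynomial order p. ME-gPC carries out such an expansion separately on each element of a partitioned random space (h-refinement) while raising the polynomial order locally (p-refinement), which is what the abstract calls hp-convergence.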



Figures (article preview): generation alternation model of DE; the proposed DEahcSPX algorithm and the adaptive LS scheme AHCXLS; the simplex crossover (SPX) operation; convergence curves of DE and DEahcSPX for selected functions (N = 30); convergence curves showing the sensitivity of DE and DEahcSPX to population size.
Iba, H.: Accelerating differential evolution using an adaptive local search. IEEE Trans. Evol. Comput. 12(1), 107-125

March 2008

·

2,241 Reads

We propose a crossover-based adaptive local search (LS) operation for enhancing the performance of the standard differential evolution (DE) algorithm. Incorporating LS heuristics is often very useful in designing an effective evolutionary algorithm for global optimization. However, determining a single LS length that can serve a wide range of problems is a critical issue. We present a LS technique to solve this problem by adaptively adjusting the length of the search, using a hill-climbing heuristic. The emphasis of this paper is to demonstrate how this LS scheme can improve the performance of DE. Experimenting with a wide range of benchmark functions, we show that the proposed new version of DE, with the adaptive LS, performs better than, or at least comparably to, the classic DE algorithm. Performance comparisons with other LS heuristics and with some other well-known evolutionary algorithms from the literature are also presented.
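The adaptive, SPX-based local search is specific to the paper; as background for how it plugs into DE, here is a minimal sketch of the classic DE/rand/1/bin generation it builds on (a generic illustration with assumed parameter values, not the authors' code):

```python
import numpy as np

def de_rand_1_bin_step(pop, fitness, func, F=0.5, CR=0.9, rng=np.random):
    """One generation of classic DE/rand/1/bin (the baseline that DEahcSPX enhances)."""
    P, N = pop.shape
    new_pop = pop.copy()
    new_fit = fitness.copy()
    for i in range(P):
        # mutation: three mutually distinct individuals, all different from i
        r1, r2, r3 = rng.choice([j for j in range(P) if j != i], 3, replace=False)
        mutant = pop[r1] + F * (pop[r2] - pop[r3])
        # binomial crossover, forcing at least one component from the mutant
        cross = rng.rand(N) < CR
        cross[rng.randint(N)] = True
        trial = np.where(cross, mutant, pop[i])
        # greedy one-to-one selection (minimization)
        f_trial = func(trial)
        if f_trial <= fitness[i]:
            new_pop[i], new_fit[i] = trial, f_trial
    return new_pop, new_fit
```

Called in a loop as `pop, fitness = de_rand_1_bin_step(pop, fitness, func)`, with `pop` of shape (P, N) and `fitness` the corresponding objective values; DEahcSPX additionally applies its adaptive local search around promising individuals each generation.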

Figures (article preview): a basic trap function; a complex trap function; absorption times for a (1+1)-EA on a trap function.
Bäck, T.: An Analysis of the Behavior of Simplified Evolutionary Algorithms on Trap Functions. IEEE Trans. Evol. Comput. 7(1), 11-22

March 2003

·

87 Reads

Methods are developed to numerically analyze an evolutionary algorithm (EA) that applies mutation and selection on a bit-string representation to find the optimum for a bimodal unitation function called a trap function. This research bridges part of the gap between the existing convergence velocity analysis of strictly unimodal functions and global convergence results assuming the limit of infinite time. As a main result of this analysis, a new so-called (1 : λ)-EA is proposed, which generates offspring using individual mutation rates p_i. While a more traditional EA using only one mutation rate is not able to find the global optimum of the trap function within an acceptable (nonexponential) time, our numerical investigations provide evidence that the new algorithm overcomes these limitations. The tools used for the analysis, based on absorbing Markov chains and the calculation of transition probabilities, are demonstrated to provide an intuitive and useful method for investigating the capabilities of EAs to bridge the gap between a local and a global optimum in bimodal search spaces.
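As a reference point for the bimodal landscape being analyzed, one common trap-function parameterization in terms of the unitation u(x) (the number of ones in the bit string) is shown below; the exact parameterization used in the paper may differ:

```latex
% One common trap-function parameterization in terms of the unitation u(x) (an assumption)
f(x) =
\begin{cases}
\dfrac{a}{z}\,\bigl(z - u(x)\bigr), & u(x) \le z,\\[2ex]
\dfrac{b}{\ell - z}\,\bigl(u(x) - z\bigr), & u(x) > z,
\end{cases}
```

with string length ℓ, slope-change point z, a local (deceptive) optimum of height a at u = 0, and the global optimum of height b at u = ℓ; the basin of the deceptive optimum covers most of the search space, which is what makes a single fixed mutation rate ineffective.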

Figure (article preview): representation of a process plan.
Zalzala, A.M.S.: Recent developments in evolutionary computation for manufacturing optimization: problems, solutions and comparisons. IEEE Transactions on Evolutionary Computation 4(2), 93-113

August 2000

·

699 Reads

The use of intelligent techniques in the manufacturing field has been growing over the last decades, due to the fact that most manufacturing optimization problems are combinatorial and NP-hard. This paper examines recent developments in the field of evolutionary computation for manufacturing optimization. Significant papers in various areas are highlighted, and comparisons of results are given wherever data are available. A wide range of problems is covered, from job shop and flow shop scheduling, to process planning and assembly line balancing.

Figures (article preview): concatenation of P1.ŷ, P2.ŷ, ..., PK.ŷ constituting ŷ; averaged best fitness values of four PSO variants (ring topology) on f1 of 2 to 100 dimensions; averaged best fitness values for sep-CMA-ES and CCPSO2 on shifted and shifted-rotated benchmark functions of 500 and 1000 dimensions.
Yao, X.: Cooperatively coevolving particle swarms for large scale optimization. IEEE Trans. Evol. Comput. 16(2), 210-224

April 2012

·

562 Reads

This paper presents a new cooperative coevolving particle swarm optimization (CCPSO) algorithm in an attempt to address the issue of scaling up particle swarm optimization (PSO) algorithms in solving large-scale optimization problems (up to 2000 real-valued variables). The proposed CCPSO2 builds on the success of an early CCPSO that employs an effective variable grouping technique, random grouping. CCPSO2 adopts a new PSO position update rule that relies on Cauchy and Gaussian distributions to sample new points in the search space, and a scheme to dynamically determine the coevolving subcomponent sizes of the variables. On high-dimensional problems (ranging from 100 to 2000 variables), the performance of CCPSO2 compared favorably against a state-of-the-art evolutionary algorithm, sep-CMA-ES, two existing PSO algorithms, and a cooperative coevolving differential evolution algorithm. In particular, CCPSO2 performed significantly better than sep-CMA-ES and the two existing PSO algorithms on the more complex multimodal problems (which more closely resemble real-world problems), though not as well as the existing algorithms on unimodal functions. Our experimental results and analysis suggest that CCPSO2 is a highly competitive optimization algorithm for solving large-scale and complex multimodal optimization problems.
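A minimal sketch of the Cauchy/Gaussian sampling idea described above is given below; the per-dimension width |pbest − lbest| and the 0.5 switching probability are illustrative assumptions, not necessarily the paper's exact settings:

```python
import numpy as np

def ccpso2_style_update(pbest, lbest, p_cauchy=0.5, rng=np.random):
    """Sketch of a Cauchy/Gaussian sampling position update in the spirit of CCPSO2.
    pbest: a particle's personal best position, lbest: its ring-neighborhood best.
    Switch probability and sampling width are assumptions for illustration."""
    spread = np.abs(pbest - lbest)                      # per-dimension sampling width
    use_cauchy = rng.rand(*pbest.shape) < p_cauchy      # choose distribution per dimension
    cauchy_sample = pbest + rng.standard_cauchy(pbest.shape) * spread
    gauss_sample = lbest + rng.standard_normal(pbest.shape) * spread
    return np.where(use_cauchy, cauchy_sample, gauss_sample)
```

The heavy-tailed Cauchy samples encourage occasional long jumps (useful on multimodal landscapes), while the Gaussian samples around the neighborhood best provide finer exploitation.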

Thierens, D.: The balance between proximity and diversity in multiobjective evolutionary algorithms. IEEE Transactions on Evolutionary Computation 7(2), 174-188

May 2003

·

160 Reads

Over the last decade, a variety of evolutionary algorithms (EAs) have been proposed for solving multiobjective optimization problems. Especially the more recent multiobjective evolutionary algorithms (MOEAs) have been shown to be efficient and superior to earlier approaches. An important question, however, is whether we can expect such improvements to converge onto a specific efficient MOEA that behaves best on a large variety of problems. In this paper, we argue that the development of new MOEAs cannot converge onto a single most efficient MOEA because the performance of MOEAs itself shows the characteristics of a multiobjective problem. While we point out the most important aspects for designing competent MOEAs in this paper, we also indicate the inherent tradeoff in multiobjective optimization between proximity and diversity preservation. We discuss the impact of this tradeoff on the concepts and design of exploration and exploitation operators. We also present a general framework for competent MOEAs and show how current state-of-the-art MOEAs can be obtained by making choices within this framework. Furthermore, we show an example of how we can separate nondomination selection pressure from diversity preservation selection pressure and discuss the impact of changing the ratio between these components.
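Both proximity and diversity preservation are defined relative to Pareto dominance; as a generic illustration (minimization form, not specific to the framework in the paper):

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization):
    a is no worse in every objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))
```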

Adaptive evolutionary planner/navigator for mobile robots. IEEE Transactions on Evolutionary Computation 1, 18-28

May 1997

·

128 Reads

Based on evolutionary computation (EC) concepts, we developed an adaptive evolutionary planner/navigator (EP/N) as a novel approach to path planning and navigation. The EP/N is characterized by generality, flexibility, and adaptability. It unifies off-line planning and online planning/navigation processes in the same evolutionary algorithm which 1) accommodates different optimization criteria and changes in these criteria, 2) incorporates various types of problem-specific domain knowledge, and 3) enables good tradeoffs among near-optimality of paths, high planning efficiency, and effective handling of unknown obstacles. More importantly, the EP/N can self-tune its performance for different task environments and changes in such environments, mostly through adapting probabilities of its operators and adjusting paths constantly, even during a robot's motion toward the goal.






Storn, R.: System Design by Constraint Adaptation and Differential Evolution. IEEE Transactions on Evolutionary Computation 3, 22-34

May 1999

·

45 Reads

A simple optimization procedure for constraint-based problems which works without an objective function is described. The absence of an objective function makes the problem formulation particularly simple. The new method lends itself to parallel computation and is well suited for tasks where a family of solutions is required, trade-off situations have to be dealt with, or the design center has to be found.

Yao, X.: Stochastic ranking for constrained evolutionary optimization. IEEE Trans. Evol. Comput. 4, 284-294

October 2000

·

1,940 Reads

Penalty functions are often used in constrained optimization. However, it is very difficult to strike the right balance between objective and penalty functions. This paper introduces a novel approach to balance objective and penalty functions stochastically, i.e., stochastic ranking, and presents a new view on penalty function methods in terms of the dominance of penalty and objective functions. Some of the pitfalls of naive penalty methods are discussed in these terms. The new ranking method is tested using a (μ, λ) evolution strategy on 13 benchmark problems. Our results show that suitable ranking alone (i.e., selection), without the introduction of complicated and specialized variation operators, is capable of improving the search performance significantly.
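A minimal sketch of the ranking idea, written as the bubble-sort-like sweep commonly used to describe stochastic ranking, is shown below; p_f = 0.45 is a commonly cited setting, and the procedure in the paper may differ in details:

```python
import random

def stochastic_rank(objective, penalty, p_f=0.45):
    """Sketch of stochastic ranking via bubble-sort-like sweeps.
    objective[i]: objective value; penalty[i]: constraint-violation measure (0 if feasible).
    With probability p_f (or when both neighbors are feasible) adjacent pairs are compared
    by objective, otherwise by penalty. Returns indices ordered from best to worst."""
    idx = list(range(len(objective)))
    n = len(idx)
    for _ in range(n):
        swapped = False
        for j in range(n - 1):
            a, b = idx[j], idx[j + 1]
            both_feasible = penalty[a] == 0 and penalty[b] == 0
            if both_feasible or random.random() < p_f:
                out_of_order = objective[a] > objective[b]
            else:
                out_of_order = penalty[a] > penalty[b]
            if out_of_order:
                idx[j], idx[j + 1] = idx[j + 1], idx[j]
                swapped = True
        if not swapped:
            break
    return idx
```

The ranking (rather than a weighted penalty term) is then used directly for selection, which is why no penalty coefficients have to be tuned.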

Abido, M.A.: Multiobjective evolutionary algorithms for electric power dispatch problem. IEEE Trans. Evol. Comput. 10(3), 315-329

July 2006

·

777 Reads

The potential and effectiveness of the newly developed Pareto-based multiobjective evolutionary algorithms (MOEA) for solving a real-world power system multiobjective nonlinear optimization problem are comprehensively discussed and evaluated in this paper. Specifically, nondominated sorting genetic algorithm, niched Pareto genetic algorithm, and strength Pareto evolutionary algorithm (SPEA) have been developed and successfully applied to an environmental/economic electric power dispatch problem. A new procedure for quality measure is proposed in this paper in order to evaluate different techniques. A feasibility check procedure has been developed and superimposed on MOEA to restrict the search to the feasible region of the problem space. A hierarchical clustering algorithm is also imposed to provide the power system operator with a representative and manageable Pareto-optimal set. Moreover, an approach based on fuzzy set theory is developed to extract one of the Pareto-optimal solutions as the best compromise one. These multiobjective evolutionary algorithms have been individually examined and applied to the standard IEEE 30-bus six-generator test system. Several optimization runs have been carried out on different cases of problem complexity. The results of MOEA have been compared to those reported in the literature. The results confirm the potential and effectiveness of MOEA compared to the traditional multiobjective optimization techniques. In addition, the results demonstrate the superiority of the SPEA as a promising multiobjective evolutionary algorithm to solve different power system multiobjective optimization problems.
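The fuzzy-set-based extraction of a best-compromise solution mentioned above is commonly formulated with linear membership functions of the following form (shown as an illustration; the paper's exact formulation is not reproduced here):

```latex
% Linear fuzzy membership for objective F_i of nondominated solution k, and its
% normalized aggregate over M solutions (a common formulation, shown as illustration)
\mu_i =
\begin{cases}
1, & F_i \le F_i^{\min},\\[1ex]
\dfrac{F_i^{\max} - F_i}{F_i^{\max} - F_i^{\min}}, & F_i^{\min} < F_i < F_i^{\max},\\[2ex]
0, & F_i \ge F_i^{\max},
\end{cases}
\qquad
\mu^{k} = \frac{\sum_{i=1}^{N_{\mathrm{obj}}} \mu_i^{k}}{\sum_{k=1}^{M} \sum_{i=1}^{N_{\mathrm{obj}}} \mu_i^{k}}
```

The nondominated solution maximizing the normalized membership μ^k is taken as the best compromise.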

Jaszkiewicz, A.: On the Performance of Multiple-Objective Genetic Local Search on the 0/1 Knapsack Problem - A Comparative Experiment. IEEE Trans. Evol. Comput. 6, 402-412

September 2002

·

216 Reads

Multiple-objective metaheuristics, e.g., multiple-objective evolutionary algorithms, constitute one of the most active fields of multiple-objective optimization. Since 1985, a significant number of different methods have been proposed. However, only a few comparative studies of the methods were performed on large-scale problems. We continue two comparative experiments on the multiple-objective 0/1 knapsack problem reported in the literature. We compare the performance of two multiple-objective genetic local search (MOGLS) algorithms to the best performers in the previous experiments using the same test instances. The results of our experiment indicate that our MOGLS algorithm generates better approximations to the nondominated set in the same number of function evaluations as the other algorithms.

Gambardella, L.M.: Ant Colony System: A cooperative learning approach to the Traveling Salesman Problem. IEEE Trans. Evol. Comput. 1, 53-66

May 1997

·

3,036 Reads

This paper introduces the ant colony system (ACS), a distributed algorithm that is applied to the traveling salesman problem (TSP). In the ACS, a set of cooperating agents called ants cooperate to find good solutions to TSPs. Ants cooperate using an indirect form of communication mediated by a pheromone they deposit on the edges of the TSP graph while building solutions. We study the ACS by running experiments to understand its operation. The results show that the ACS outperforms other nature-inspired algorithms such as simulated annealing and evolutionary computation, and we conclude by comparing ACS-3-opt, a version of the ACS augmented with a local search procedure, to some of the best performing algorithms for symmetric and asymmetric TSPs.
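A minimal sketch of the two rules most associated with ACS, the pseudo-random-proportional city choice and the local pheromone update, is given below; the parameter values (beta, q0, rho, tau0) are typical settings, not necessarily those used in the paper:

```python
import random

def acs_next_city(current, unvisited, tau, eta, beta=2.0, q0=0.9):
    """Sketch of the ACS pseudo-random-proportional rule for choosing the next city.
    tau[i][j] is pheromone on edge (i, j); eta[i][j] is a heuristic such as 1/distance."""
    scores = {j: tau[current][j] * (eta[current][j] ** beta) for j in unvisited}
    if random.random() < q0:                 # exploitation: pick the best-looking edge
        return max(scores, key=scores.get)
    total = sum(scores.values())             # biased exploration: roulette-wheel choice
    r, acc = random.random() * total, 0.0
    for j, s in scores.items():
        acc += s
        if acc >= r:
            return j
    return j                                 # numerical fallback

def acs_local_update(tau, i, j, rho=0.1, tau0=1e-4):
    """ACS local pheromone update, applied to an edge as soon as an ant crosses it."""
    tau[i][j] = (1.0 - rho) * tau[i][j] + rho * tau0
```

The local update gradually erodes pheromone on edges just used, which pushes subsequent ants toward unexplored edges; a separate global update reinforces the best tour found.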

Kennedy, J.: The Particle Swarm: Explosion, Stability and Convergence in a Multi-Dimensional Complex Space. IEEE Trans. Evol. Comput. 6, 58-73

March 2002

·

3,124 Reads

The particle swarm is an algorithm for finding optimal regions of complex search spaces through the interaction of individuals in a population of particles. This paper analyzes a particle's trajectory as it moves in discrete time (the algebraic view), then progresses to the view of it in continuous time (the analytical view). A five-dimensional depiction is developed, which describes the system completely. These analyses lead to a generalized model of the algorithm, containing a set of coefficients to control the system's convergence tendencies. Some results of the particle swarm optimizer, implementing modifications derived from the analysis, suggest methods for altering the original algorithm in ways that eliminate problems and increase the ability of the particle swarm to find optima of some well-studied test functions.
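The analysis summarized above is the origin of the constriction-coefficient form of the velocity update; a standard statement of it, shown for orientation with the usual notation, is:

```latex
% Constriction-coefficient velocity update (standard notation, shown for orientation)
v_{t+1} = \chi \Bigl( v_t + c_1 r_1 \,(p - x_t) + c_2 r_2 \,(g - x_t) \Bigr),
\qquad
\chi = \frac{2}{\bigl| 2 - \varphi - \sqrt{\varphi^2 - 4\varphi} \bigr|},
\quad \varphi = c_1 + c_2 > 4
```

where p is the personal best, g the neighborhood best, and r1, r2 are uniform random numbers; for example, φ = 4.1 gives χ ≈ 0.7298, so trajectories are damped without explicit velocity clamping.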

Figure (article preview): schematic view of a three-dimensional function space F; the uniform prior over this space lies along the diagonal, different algorithms a give different vectors v lying in the cone surrounding the diagonal, a particular problem is represented by its prior p lying on the simplex, and the best-performing algorithm is the one in the cone having the largest inner product with p.
Macready, W.G.: No Free Lunch Theorems for Optimization. IEEE Transactions on Evolutionary Computation 1(1), 67-82

May 1997

·

2,563 Reads

A framework is developed to explore the connection between effective optimization algorithms and the problems they are solving. A number of “no free lunch” (NFL) theorems are presented which establish that for any algorithm, any elevated performance over one class of problems is offset by performance over another class. These theorems result in a geometric interpretation of what it means for an algorithm to be well suited to an optimization problem. Applications of the NFL theorems to information-theoretic aspects of optimization and benchmark measures of performance are also presented. Other issues addressed include time-varying optimization problems and a priori “head-to-head” minimax distinctions between optimization algorithms, distinctions that result despite the NFL theorems' enforcing of a type of uniformity over all algorithms.

Nandi, A.K.: Binary String Fitness Characterization and Comparative Partner Selection in Genetic Programming. IEEE Trans. Evol. Comput. 12(6), 724-735

January 2009

·

26 Reads

The premise behind all evolutionary methods is “survival of the fittest,” and consequently, individuals require a quantitative fitness measure. This paper proposes a novel strategy for evaluating individuals' relative strengths and weaknesses, as well as representing these in the form of a binary string fitness characterization (BSFC); in addition, as customary, an overall fitness value is assigned to each individual. Utilizing the BSFC, we demonstrate both novel population evaluation measures and a pairwise mating strategy, comparative partner selection (CPS), with the aim of evolving a population that promotes effective solutions by reducing population-wide weaknesses. This strategy is tested with six standard genetic programming benchmarking problems.
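To make the idea concrete, the sketch below scores how well two parents' BSFC strings complement each other; this scoring rule is an illustrative assumption, not the paper's exact CPS formula:

```python
def complementarity_score(bsfc_a, bsfc_b):
    """Illustrative scoring (not the paper's exact CPS formula) of how well two parents'
    binary string fitness characterizations complement each other.
    '1' marks a strength on a sub-test, '0' a weakness."""
    covered = weaknesses = 0
    for a, b in zip(bsfc_a, bsfc_b):
        if a == '0' or b == '0':             # at least one parent is weak on this sub-test
            weaknesses += 1
            if a == '1' or b == '1':         # ...and the other parent covers it
                covered += 1
    return covered / weaknesses if weaknesses else 1.0
```

In a CPS-style mating scheme, pairs with higher complementarity would be more likely to mate, steering recombination toward covering population-wide weaknesses.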

AbYSS: Adapting Scatter Search to Multiobjective Optimization

September 2008

·

1,582 Reads

We propose the use of a new algorithm to solve multiobjective optimization problems. Our proposal adapts the well-known scatter search template for single-objective optimization to the multiobjective domain. The result is a hybrid metaheuristic algorithm called Archive-Based hYbrid Scatter Search (AbYSS), which follows the scatter search structure but uses mutation and crossover operators from evolutionary algorithms. AbYSS incorporates typical concepts from the multiobjective field, such as Pareto dominance, density estimation, and an external archive to store the nondominated solutions. We evaluate AbYSS with a standard benchmark including both unconstrained and constrained problems, and it is compared with two state-of-the-art multiobjective optimizers, NSGA-II and SPEA2. The results obtained indicate that, according to the benchmark and parameter settings used, AbYSS outperforms the other two algorithms as regards the diversity of the solutions, and it obtains very competitive results according to the convergence to the true Pareto fronts and the hypervolume metric.

Accelerating Self-Modeling in Cooperative Robot Teams

May 2009

·

34 Reads

One of the major obstacles to achieving robots capable of operating in real-world environments is enabling them to cope with a continuous stream of unanticipated situations. In previous work, it was demonstrated that a robot can autonomously generate self-models, and use those self-models to diagnose unanticipated morphological change such as damage. In this paper, it is shown that multiple physical quadrupedal robots with similar morphologies can share self-models in order to accelerate modeling. Further, it is demonstrated that quadrupedal robots which maintain separate self-modeling algorithms but swap self-models perform better than quadrupedal robots that rely on a shared self-modeling algorithm. This finding points the way toward more robust robot teams: a robot can diagnose and recover from unanticipated situations faster by drawing on the previous experiences of the other robots.

An Accelerating Two-Layer Anchor Search With Application to the Resource-Constrained Project Scheduling Problem

January 2011

·

15 Reads

This paper presents a search method that combines elements from evolutionary and local search paradigms by the systematic use of crossover operations, generally used as a structured exchange of genes between a series of solutions in genetic algorithms. Crossover operations are here utilized as a systematic means to generate several possible solutions from two superior solutions. To test the effectiveness of the method, it has been applied to the resource-constrained project scheduling problem. The computational experiments show that the application of the method to this problem is promising.
