Conference Paper

Exploiting gradient information in numerical multi-objective evolutionary optimization

DOI: 10.1145/1068009.1068138 Conference: Genetic and Evolutionary Computation Conference, GECCO 2005, Proceedings, Washington DC, USA, June 25-29, 2005
Source: DBLP

ABSTRACT Various multi-objective evolutionary algorithms (MOEAs) have obtained promising results on numerical multi-objective optimization problems. Their combination with gradient-based local search operators has, however, been studied only rarely. In the single-objective case it is known that the additional use of gradient information can be beneficial. In this paper we provide an analytical parametric description of the set of all non-dominated (i.e. most promising) directions in which a solution can be moved such that its objectives either improve or remain the same. Moreover, the parameters describing this set can be computed efficiently using only the gradients of the individual objectives. We use this result to hybridize an existing MOEA with a local search operator that moves a solution in a randomly chosen non-dominated improving direction. We test the resulting algorithm on a few well-known benchmark problems and compare the results with the same MOEA without local search and with the same MOEA hybridized with gradient-based techniques that use only one objective at a time. The results indicate that exploiting gradient information based on the non-dominated improving directions is superior to using the gradients of the objectives separately, and that, given enough evaluations, it can furthermore improve the results of MOEAs in which no local search is used.
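To make the construction concrete, here is a minimal sketch (not the paper's exact parametric description) for a bi-objective minimization problem: it samples a random convex combination of the normalized negative gradients and accepts it only if both directional derivatives are non-positive, so the move improves or preserves both objectives. The example objectives, the rejection-sampling loop, and the fixed step size are illustrative assumptions.

```python
# Hedged sketch: sample a direction that does not worsen either objective of a
# bi-objective minimization problem, using only the two gradients. This illustrates
# the idea of moving in a randomly chosen improving direction; it is not the
# closed-form parameterization derived in the paper.
import numpy as np

def f1(x):                       # illustrative objective 1
    return np.sum((x - 1.0) ** 2)

def f2(x):                       # illustrative objective 2
    return np.sum((x + 1.0) ** 2)

def grad_f1(x):
    return 2.0 * (x - 1.0)

def grad_f2(x):
    return 2.0 * (x + 1.0)

def sample_non_worsening_direction(x, rng, tries=100):
    """Draw random convex combinations of the normalized negative gradients and
    return the first one whose directional derivatives are non-positive for both
    objectives; return None if none is found (strongly conflicting gradients)."""
    g1, g2 = grad_f1(x), grad_f2(x)
    n1, n2 = np.linalg.norm(g1), np.linalg.norm(g2)
    if n1 == 0.0 or n2 == 0.0:   # one objective is already locally optimal
        return None
    for _ in range(tries):
        alpha = rng.uniform(0.0, 1.0)
        d = -(alpha * g1 / n1 + (1.0 - alpha) * g2 / n2)
        if g1 @ d <= 0.0 and g2 @ d <= 0.0:
            return d
    return None

rng = np.random.default_rng(0)
x = np.array([2.0, -0.5])
d = sample_non_worsening_direction(x, rng)
if d is not None:
    x_new = x + 0.1 * d          # fixed step size, for illustration only
    print(f1(x), f2(x), "->", f1(x_new), f2(x_new))
```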

    • "prove the performance of many-objective optimization problems . On the one hand, mathematical programming techniques (e.g., [4], [19]) allow—if gradient information is at hand— to compute a descent direction at every given non optimal point regardless of the size of the descent cone nor of the value of k, and hence it can be argued that the probability for improvement is one. On the other hand, the use of gradient information within a memetic strategy results in a certain additional cost [37] and in case the model is highly multimodal [i.e., problem 3)] the effect of the local search on the overall performance is questionable. "
    ABSTRACT: In this paper, we study the influence of the number of objectives of a continuous multiobjective optimization problem on its hardness for evolution strategies, which is of particular interest for many-objective optimization problems. To be more precise, we measure the hardness in terms of the evolution (or convergence) of the population toward the set of interest, the Pareto set. Previous related studies consider mainly the number of nondominated individuals within a population, which greatly improved the understanding of the problem and has led to possible remedies. However, in certain cases this ansatz is not sophisticated enough to understand all phenomena, and can even be misleading. In this paper, we suggest instead to consider the probability of improving the situation of the population, which can, to a certain extent, be measured by the sizes of the descent cones. As an example, we make some qualitative considerations on a general class of unimodal test problems and conjecture that these problems get harder by adding an objective, but that this difference is practically not significant, and we support this by some empirical studies. Further, we address the scalability in the number of objectives observed in the literature. That is, we try to extract the challenges for the treatment of many-objective problems with evolution strategies based on our observations and use them to explain recent advances in this field.
    IEEE Transactions on Evolutionary Computation 09/2011; 15(4):444-455. DOI: 10.1109/TEVC.2010.2064321
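The descent-direction argument quoted above can be illustrated with a small computation: for a point that is not Pareto-critical, the negative of the minimum-norm vector in the convex hull of the objective gradients is a descent direction for every objective. The sketch below assumes two objectives with given gradients (for k > 2 the same idea requires a small quadratic program over the simplex) and is an illustration in that spirit, not the method of the cited paper.

```python
# Hedged sketch: common descent direction for two objectives from their gradients.
# The negative of the minimum-norm point on the segment between g1 and g2 has a
# non-positive inner product with both gradients; if that point is (numerically)
# zero, the current solution is Pareto-critical. The example gradients are assumptions.
import numpy as np

def common_descent_direction(g1, g2, tol=1e-12):
    diff = g2 - g1
    denom = diff @ diff
    lam = 0.5 if denom < tol else float(np.clip((g2 @ diff) / denom, 0.0, 1.0))
    v = lam * g1 + (1.0 - lam) * g2      # minimum-norm point of conv{g1, g2}
    if v @ v < tol:
        return None                      # Pareto-critical: no common descent direction
    return -v

g1 = np.array([2.0, -3.0])               # gradient of objective 1 at some point
g2 = np.array([6.0, 1.0])                # gradient of objective 2 at the same point
d = common_descent_direction(g1, g2)
print(d, g1 @ d, g2 @ d)                 # both directional derivatives are <= 0
```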
    • "It should be noted that hybridizing evolutionary algorithms with mathematical programming techniques has been attempted in the past [14], [15], [16], [17]. Hybrid evolutionary algorithms are also often referred to as memetic algorithms owing to their use of local search techniques which are traditionally faster than a typical evolutionary algorithm. "
    ABSTRACT: In this paper, the performance assessment of the hybrid Archive-based Micro Genetic Algorithm (AMGA) on a set of bound-constrained synthetic test problems is reported. The hybrid AMGA proposed in this paper is a combination of a classical gradient-based single-objective optimization algorithm and an evolutionary multi-objective optimization algorithm. The gradient-based optimizer, a variant of the sequential quadratic programming (SQP) method, is used for fast local search; the Matlab implementation of SQP (provided by the fmincon optimization function) is used in this paper. The evolutionary multi-objective optimization algorithm AMGA is used as the global optimizer. A scalarization scheme based on weighted objectives is proposed, designed to facilitate the simultaneous improvement of all the objectives. The scalarization scheme also utilizes reference points as constraints to enable the algorithm to solve non-convex optimization problems. The gradient-based optimizer is used as the mutation operator of the evolutionary algorithm, and a suitable scheme to switch between the genetic mutation and the gradient-based mutation is proposed. The hybrid AMGA is designed to balance local and global search strategies so as to obtain a set of diverse non-dominated solutions as quickly as possible. The simulation results of the hybrid AMGA are reported on the bound-constrained test problems described in the CEC09 benchmark suite.
    2009 IEEE Congress on Evolutionary Computation (CEC '09); 06/2009
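As a rough illustration of the local search step described above, the sketch below minimizes a weighted-sum scalarization of a bi-objective problem with an SQP-type optimizer (scipy's SLSQP standing in for the Matlab fmincon call mentioned in the abstract). The objectives, weights, bounds, and starting point are illustrative assumptions, and the reference-point constraints of the actual scheme are omitted.

```python
# Hedged sketch: gradient-based local search on a weighted-sum scalarization,
# standing in for the SQP-based mutation operator described above. Objectives,
# weights, bounds, and the starting point are illustrative assumptions.
import numpy as np
from scipy.optimize import minimize

def f1(x):
    return np.sum((x - 1.0) ** 2)

def f2(x):
    return np.sum((x + 1.0) ** 2)

def scalarized_local_search(x0, weights, bounds):
    """Minimize w1*f1 + w2*f2 from x0 with SLSQP and return the improved point."""
    w1, w2 = weights
    result = minimize(lambda x: w1 * f1(x) + w2 * f2(x),
                      x0, method="SLSQP", bounds=bounds)
    return result.x

x0 = np.array([2.0, -0.5])               # candidate produced by the evolutionary part
bounds = [(-5.0, 5.0)] * len(x0)         # bound constraints of the test problem
x_new = scalarized_local_search(x0, weights=(0.7, 0.3), bounds=bounds)
print(f1(x0), f2(x0), "->", f1(x_new), f2(x_new))
```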
    • "For example, [12] and [22] have incorporated a Gaussian Random Field Metamodel into the algorithm and [20] has adopted an approximation strategy. Gradient based and/or a directional local search strategies have also been used as a surrogate assisted MOEA for problems with differentiable objectives [4] [7] [6] [3] [15]. "
    ABSTRACT: One of the major difficulties when applying Multiobjective Evolutionary Algorithms (MOEA) to real-world problems is the large number of objective function evaluations. Approximate (or surrogate) methods offer the possibility of reducing the number of evaluations without reducing solution quality. Artificial Neural Network (ANN) based models are one approach that has been used to approximate the future front from the currently available fronts with acceptable accuracy levels. However, the associated computational costs limit their effectiveness. In this work, we introduce a simple approach that has a comparatively smaller computational cost, and we have developed this model as a variation operator that can be used in any kind of multiobjective optimizer. When designing this model, we have considered the whole search procedure as a dynamic system that takes the available objective values in the current front as input and generates approximated design variables for the next front as output. Initial simulation experiments have produced encouraging results in comparison to NSGA-II. Our motivation was to increase the speed of the hosting optimizer. We have compared the performance of the algorithm with respect to the total number of function evaluations and the hypervolume metric. This variation operator has a worst-case complexity of O(nkN^3), where N is the population size, and n and k are the numbers of design variables and objectives, respectively.
    Genetic and Evolutionary Computation Conference (GECCO 2008), Atlanta, GA, USA, July 12-16, 2008
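The variation operator summarized above treats the search as a mapping from objective values on the current front to design variables for the next front. The sketch below illustrates that inverse-mapping idea with a plain linear least-squares model and scaled-down target objective vectors; both choices are illustrative assumptions and not the operator actually proposed in the cited paper.

```python
# Hedged sketch of an inverse-mapping variation operator: fit a simple affine model
# from objective vectors of the current front to their design variables, then query
# it at "improved" target objectives to propose offspring. The linear model and the
# 10%-shrunk targets are illustrative assumptions, not the cited paper's operator.
import numpy as np

def propose_offspring(F, X, shrink=0.9):
    """F: (N, k) objective values of the current front (minimization).
    X: (N, n) corresponding design variables.
    Returns (N, n) approximated design variables for scaled-down target objectives."""
    N = F.shape[0]
    A = np.hstack([F, np.ones((N, 1))])            # affine design matrix
    W, *_ = np.linalg.lstsq(A, X, rcond=None)      # X ~= A @ W
    A_target = np.hstack([shrink * F, np.ones((N, 1))])
    return A_target @ W

# Tiny usage example: N=4 front points, k=2 objectives, n=3 design variables.
F = np.array([[1.0, 4.0], [2.0, 3.0], [3.0, 2.0], [4.0, 1.0]])
X = np.random.default_rng(1).normal(size=(4, 3))
print(propose_offspring(F, X))
```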