Conference Paper

Exploiting gradient information in numerical multi-objective evolutionary optimization

DOI: 10.1145/1068009.1068138
Conference: Genetic and Evolutionary Computation Conference (GECCO 2005), Washington DC, USA, June 25-29, 2005
Source: DBLP


Various multi-objective evolutionary algorithms (MOEAs) have obtained promising results on numerical multi-objective optimization problems. Their combination with gradient-based local search operators has, however, been studied only rarely, even though in the single-objective case the additional use of gradient information is known to be beneficial. In this paper we provide an analytical parametric description of the set of all non-dominated (i.e., most promising) directions in which a solution can be moved such that its objectives either improve or remain the same. Moreover, the parameters describing this set can be computed efficiently using only the gradients of the individual objectives. We use this result to hybridize an existing MOEA with a local search operator that moves a solution in a randomly chosen non-dominated improving direction. We test the resulting algorithm on a few well-known benchmark problems and compare the results with the same MOEA without local search and with the same MOEA hybridized with gradient-based techniques that use only one objective at a time. The results indicate that exploiting gradient information based on the non-dominated improving directions is superior to using the gradients of the objectives separately, and that, given enough evaluations, it can furthermore improve on the results of the MOEA without local search.
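As a rough illustration of the idea (a minimal sketch, not the paper's exact parametric description), for two objectives one can draw a random convex combination of the normalized negative gradients and accept it only if both directional derivatives are non-positive, so that a small step along it leaves neither objective worse. The function names and the toy problem below are made up for the example.

```python
import numpy as np

def combined_improving_direction(g1, g2, rng, tries=10):
    """Sample a direction from the convex combinations of the normalized
    negative gradients and accept it only if it is non-worsening for both
    objectives (both directional derivatives <= 0). This is an illustrative
    stand-in for the paper's analytical description of the direction set."""
    u1 = g1 / np.linalg.norm(g1)
    u2 = g2 / np.linalg.norm(g2)
    for _ in range(tries):
        lam = rng.uniform(0.0, 1.0)
        d = -(lam * u1 + (1.0 - lam) * u2)
        if np.dot(d, g1) <= 0.0 and np.dot(d, g2) <= 0.0:
            return d / np.linalg.norm(d)
    return None  # gradients (nearly) opposed: x is close to locally Pareto-optimal

# Toy bi-objective problem: f1 = ||x - a||^2, f2 = ||x - b||^2 (made up for the example).
a, b = np.array([1.0, 0.0]), np.array([0.0, 1.0])
f1 = lambda x: float(np.sum((x - a) ** 2))
f2 = lambda x: float(np.sum((x - b) ** 2))
grad_f1 = lambda x: 2.0 * (x - a)
grad_f2 = lambda x: 2.0 * (x - b)

rng = np.random.default_rng(0)
x = np.array([2.0, 2.0])
d = combined_improving_direction(grad_f1(x), grad_f2(x), rng)
if d is not None:
    step = 0.25
    while step > 1e-8:                 # crude backtracking line search along d
        y = x + step * d
        if f1(y) <= f1(x) and f2(y) <= f2(x):
            x = y
            break
        step *= 0.5
print(x, f1(x), f2(x))
```

In the paper itself the direction is drawn directly from the analytically described set of non-dominated improving directions; the accept-or-retry check above merely emulates that guarantee numerically.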

    • "prove the performance of many-objective optimization problems . On the one hand, mathematical programming techniques (e.g., [4], [19]) allow—if gradient information is at hand— to compute a descent direction at every given non optimal point regardless of the size of the descent cone nor of the value of k, and hence it can be argued that the probability for improvement is one. On the other hand, the use of gradient information within a memetic strategy results in a certain additional cost [37] and in case the model is highly multimodal [i.e., problem 3)] the effect of the local search on the overall performance is questionable. "
    ABSTRACT: In this paper, we study the influence of the number of objectives of a continuous multiobjective optimization problem on its hardness for evolution strategies, which is of particular interest for many-objective optimization problems. To be more precise, we measure the hardness in terms of the evolution (or convergence) of the population toward the set of interest, the Pareto set. Previous related studies consider mainly the number of nondominated individuals within a population, which has greatly improved the understanding of the problem and led to possible remedies. However, in certain cases this ansatz is not sophisticated enough to understand all phenomena, and can even be misleading. In this paper, we suggest instead considering the probability of improving the situation of the population, which can, to a certain extent, be measured by the sizes of the descent cones. As an example, we make some qualitative considerations on a general class of unimodal test problems and conjecture that these problems get harder when an objective is added, but that this difference is practically not significant; we support this by some empirical studies. Further, we address the scalability in the number of objectives observed in the literature. That is, we try to extract the challenges for the treatment of many-objective problems with evolution strategies based on our observations and use them to explain recent advances in this field.
    IEEE Transactions on Evolutionary Computation, 15(4):444-455, 09/2011. DOI: 10.1109/TEVC.2010.2064321
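The size of the descent cone discussed in the abstract above can be illustrated with a small Monte Carlo sketch (an illustration, not the cited paper's analysis; dimensions and sample counts below are arbitrary): sample random unit directions at a point and count the fraction along which every objective decreases simultaneously. This fraction typically shrinks as objectives are added.

```python
import numpy as np

def descent_cone_fraction(gradients, n_samples=100_000, rng=None):
    """Monte Carlo estimate of the fraction of random unit directions d with
    grad_i . d < 0 for every objective i, i.e. the relative size of the cone
    of simultaneously improving directions at the point."""
    if rng is None:
        rng = np.random.default_rng(0)
    G = np.asarray(gradients)                         # shape (k, n)
    D = rng.standard_normal((n_samples, G.shape[1]))  # random directions
    D /= np.linalg.norm(D, axis=1, keepdims=True)     # project onto the unit sphere
    simultaneous_descent = np.all(D @ G.T < 0.0, axis=1)
    return simultaneous_descent.mean()

rng = np.random.default_rng(1)
n = 10                                   # decision-space dimension (arbitrary)
for k in range(2, 6):                    # number of objectives
    grads = rng.standard_normal((k, n))  # gradients at a random non-optimal point
    print(k, descent_cone_fraction(grads, rng=rng))
```

For random gradients the printed fraction roughly halves with each added objective, which matches the intuition that a random mutation direction is less likely to improve all objectives at once as k grows.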
    • "It should be noted that hybridizing evolutionary algorithms with mathematical programming techniques has been attempted in the past [14], [15], [16], [17]. Hybrid evolutionary algorithms are also often referred to as memetic algorithms owing to their use of local search techniques which are traditionally faster than a typical evolutionary algorithm. "
    ABSTRACT: In this paper, the performance assessment of the hybrid Archive-based Micro Genetic Algorithm (AMGA) on a set of bound-constrained synthetic test problems is reported. The hybrid AMGA proposed in this paper is a combination of a classical gradient-based single-objective optimization algorithm and an evolutionary multi-objective optimization algorithm. The gradient-based optimizer is used for a fast local search and is a variant of the sequential quadratic programming (SQP) method; the Matlab implementation of SQP (provided by the fmincon optimization function) is used in this paper. The evolutionary multi-objective optimization algorithm AMGA is used as the global optimizer. A scalarization scheme based on weighted objectives is proposed, designed to facilitate the simultaneous improvement of all the objectives. The scalarization scheme also utilizes reference points as constraints to enable the algorithm to solve non-convex optimization problems. The gradient-based optimizer is used as the mutation operator of the evolutionary algorithm, and a suitable scheme to switch between the genetic mutation and the gradient-based mutation is proposed. The hybrid AMGA is designed to balance local and global search strategies so as to obtain a set of diverse non-dominated solutions as quickly as possible. The simulation results of the hybrid AMGA are reported on the bound-constrained test problems described in the CEC09 benchmark suite.
    Proceedings of the IEEE Congress on Evolutionary Computation (CEC 2009); 06/2009
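The exact scalarization of the hybrid AMGA is not reproduced here; as a hedged sketch with toy objectives, a weighted-objective scalarization with the reference point imposed as an inequality constraint can be handed to an SQP-type solver. SciPy's SLSQP stands in below for the fmincon routine mentioned in the abstract; the problem, weights, and bounds are assumptions made for the example.

```python
import numpy as np
from scipy.optimize import minimize

# Toy bi-objective problem; stand-ins, not the CEC09 benchmark objectives.
def f(x):
    x = np.asarray(x, dtype=float)
    return np.array([np.sum((x - 1.0) ** 2), np.sum((x + 1.0) ** 2)])

def gradient_mutation(x0, weights, z_ref, bounds):
    """Local improvement of x0: minimize the weighted sum of the objectives,
    subject to not exceeding the reference point z_ref in any objective."""
    scalar = lambda x: float(np.dot(weights, f(x)))
    # SLSQP 'ineq' constraints must be >= 0: z_ref - f(x) >= 0 keeps every
    # objective at or below its reference value.
    cons = [{"type": "ineq", "fun": lambda x: z_ref - f(x)}]
    res = minimize(scalar, x0, method="SLSQP", bounds=bounds, constraints=cons)
    return res.x if res.success else np.asarray(x0)

x0 = np.array([0.5, -0.5])
z_ref = f(x0)                        # do not let any objective get worse than the parent's
bounds = [(-2.0, 2.0)] * len(x0)     # box constraints, as in a bound-constrained suite
weights = np.array([0.5, 0.5])       # equal weights, chosen arbitrarily here
print(gradient_mutation(x0, weights, z_ref, bounds))
```

Taking the parent's own objective vector as the reference point, as done here, ensures the weighted-sum step cannot produce a child dominated by its parent.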
    • "Again, exact derivatives are used, and some problems can be found if the objectives have different ranges, because the largest direction of simultaneous descent will be biased towards the objective with the largest range. • In [1], they analytically describe the complete set of nondominated simultaneously improving directions using the exact gradient of each objective function, and this set is considered as a multi-objective gradient. In order to use this information, at the end of a generation, a set of candidate solutions is determined. "
    ABSTRACT: In the field of single-objective optimization, hybrid variants of gradient-based methods and evolutionary algorithms have been shown to perform better than an evolutionary method by itself. This same idea has recently been used in Evolutionary Multiobjective Optimization (EMO), also obtaining very promising results. In most cases, gradient information is used along the whole process, which involves a high computational cost, mainly related to the computation of the required step lengths. In contrast, in this paper we propose the use of gradient information only at the beginning of the search process. We will show that this sort of scheme maintains results of good quality while considerably decreasing the computational cost. In our work, we adopt a steepest descent method to generate some nondominated points which are then used to seed the initial population of a multi-objective evolutionary algorithm (MOEA), which will spread them along the Pareto front. The MOEA adopted in our case is the NSGA-II, which is representative of the state of the art in the area. To validate our proposal, we adopt box-constrained continuous problems (the ZDT test suite). The required gradients are approximated using quadratic regressions. Our proposed approach performs a total of 2000 objective function evaluations, which is much lower than the number of evaluations normally adopted with the ZDT test suite in the specialized literature. Our results are compared with respect to the "pure" NSGA-II (i.e., without using gradient-based information) so that the potential benefit of these initial solutions fed into the population can be properly assessed.
    Proceedings of the IEEE Congress on Evolutionary Computation (CEC 2008), June 1-6, 2008, Hong Kong, China
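The quadratic-regression gradient approximation mentioned in the abstract above can be sketched generically as follows (the sampling radius, sample count, and test function are assumptions for the example, not the cited paper's settings): fit a full quadratic model to objective values sampled around a point and read the gradient estimate off the fitted linear coefficients.

```python
import numpy as np

def quadratic_regression_gradient(func, x0, radius=0.1, n_samples=None, rng=None):
    """Approximate grad f(x0) by least-squares fitting a full quadratic model
    f(x0 + d) ~ c0 + b.d + 0.5 d.A.d to samples in a box around x0;
    the gradient estimate at x0 is the fitted linear coefficient vector b."""
    if rng is None:
        rng = np.random.default_rng(0)
    x0 = np.asarray(x0, dtype=float)
    n = x0.size
    n_features = 1 + n + n * (n + 1) // 2          # constant + linear + quadratic terms
    if n_samples is None:
        n_samples = 2 * n_features                 # oversample for a stable fit
    deltas = rng.uniform(-radius, radius, size=(n_samples, n))
    # Design matrix columns: [1, d_1..d_n, d_i*d_j for i <= j]
    quad_cols = [deltas[:, i] * deltas[:, j] for i in range(n) for j in range(i, n)]
    X = np.column_stack([np.ones(n_samples), deltas] + quad_cols)
    y = np.array([func(x0 + d) for d in deltas])
    coefs, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coefs[1:1 + n]                          # linear part = gradient estimate at x0

# Sanity check on a smooth function with a known gradient (made-up test function).
f = lambda x: np.sum(x ** 2) + np.prod(x)
x0 = np.array([0.3, -0.7, 1.2])
print(quadratic_regression_gradient(f, x0))        # approximately 2*x0 + prod(x0)/x0
```

Such a regression needs at least (n+1)(n+2)/2 evaluations per point in n dimensions, which is one reason to restrict this kind of gradient-based local search to a small number of seed points, as the cited paper does.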