Conference Paper

Exploiting gradient information in numerical multi-objective evolutionary optimization.

DOI: 10.1145/1068009.1068138 Conference: Genetic and Evolutionary Computation Conference, GECCO 2005, Proceedings, Washington DC, USA, June 25-29, 2005
Source: DBLP

ABSTRACT Multi-objective evolutionary algorithms (MOEAs) have obtained promising results on a variety of numerical multi-objective optimization problems, but their combination with gradient-based local search operators has been limited to only a few studies. In the single-objective case, the additional use of gradient information is known to be beneficial. In this paper we provide an analytical parametric description of the set of all non-dominated (i.e. most promising) directions in which a solution can be moved such that its objectives either improve or remain the same. Moreover, the parameters describing this set can be computed efficiently using only the gradients of the individual objectives. We use this result to hybridize an existing MOEA with a local search operator that moves a solution in a randomly chosen non-dominated improving direction. We test the resulting algorithm on a few well-known benchmark problems and compare the results with the same MOEA without local search and with the same MOEA using gradient-based techniques that consider only one objective at a time. The results indicate that exploiting gradient information based on the non-dominated improving directions is superior to using the gradients of the objectives separately, and that, given enough evaluations, it can furthermore improve on MOEAs in which no local search is used.
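The core idea can be illustrated with a simplified sketch (this is not the paper's exact parameterization): for minimization, any direction d with ∇f_i · d ≤ 0 for every objective f_i is, to first order, an improving direction. One simple way to sample such a direction is to draw random convex weights over the negated, normalized objective gradients and verify the dot-product condition. The function name and the resampling scheme below are illustrative assumptions.

```python
import numpy as np

def improving_direction(grads, rng, max_tries=100):
    """Sample a unit direction d with grad_i . d <= 0 for every objective,
    i.e. a first-order improving (or non-worsening) direction for
    minimization. Simplified illustration, not the paper's exact result."""
    unit_grads = [g / np.linalg.norm(g) for g in grads]
    for _ in range(max_tries):
        w = rng.dirichlet(np.ones(len(unit_grads)))        # random convex weights
        d = -sum(wi * gi for wi, gi in zip(w, unit_grads)) # negated combination
        if all(np.dot(g, d) <= 0 for g in grads):          # no objective worsens
            return d / np.linalg.norm(d)
    return None  # gradients (near-)anti-parallel: point is close to Pareto-optimal

# Toy bi-objective problem: f1 = ||x - a||^2, f2 = ||x - b||^2
a, b = np.array([1.0, 0.0]), np.array([0.0, 1.0])
x = np.array([2.0, 2.0])
g1, g2 = 2 * (x - a), 2 * (x - b)   # analytical gradients

rng = np.random.default_rng(0)
d = improving_direction([g1, g2], rng)

# A small step along d decreases both objectives.
x_new = x + 1e-3 * d
assert np.sum((x_new - a) ** 2) < np.sum((x - a) ** 2)
assert np.sum((x_new - b) ** 2) < np.sum((x - b) ** 2)
```

In a hybrid MOEA, such a direction would be used by the local search operator to move a candidate solution a short step toward the Pareto front without trading one objective against another.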

  • ABSTRACT: Two techniques for the numerical treatment of multi-objective optimization problems---a continuation method and a particle swarm optimizer---are combined in order to unite their particular advantages. Continuation methods can be applied very efficiently to perform the search along the Pareto set, even for high-dimensional models, but are of local nature. In contrast, many multi-objective particle swarm optimizers tend to have slow convergence, but instead accomplish the 'global task' well. An algorithm which combines these two techniques is proposed, some convergence results for continuous models are provided, possible realizations are discussed, and finally some numerical results are presented indicating the strength of this novel approach.
    Engineering Optimization 05/2008; 40(5):383-402.
  • ABSTRACT: Metamodel-assisted evolutionary algorithms are low-cost optimization methods for CPU-demanding problems. Memetic algorithms combine global and local search methods, aiming at improving the quality of promising solutions. This article proposes a metamodel-assisted memetic algorithm which combines and extends the capabilities of the aforementioned techniques. Herein, metamodels undertake a dual role: they perform a low-cost pre-evaluation of population members during the global search and the gradient-based refinement of promising solutions. This significantly reduces the number of calls to the evaluation tool and overcomes the need for computing the objective function gradients. In multi-objective problems, the selection of individuals for refinement is based on domination and distance criteria. During refinement, a scalar strength function is maximized and this proves to be beneficial in constrained optimization. The proposed metamodel-assisted memetic algorithm employs principles of Lamarckian learning and is demonstrated on mathematical and engineering applications.
    Engineering Optimization 01/2009; 41(10):909-923.
  • ABSTRACT: This paper presents a multiobjective formulation of the United States Navy's Sailor Assignment Problem (SAP) and examines the performance of two widely-used multiobjective evolutionary algorithms (MOEAs) on large instances of this problem. The performance of the algorithms is examined with respect to both solution quality and diversity, and the algorithms are shown to provide inadequate diversity along the Pareto front. A domain-specific local improvement operator is introduced into the MOEAs, producing significant performance increases over the evolutionary algorithms alone. This hybrid MOEA approach is shown to provide greater diversity along the Pareto front of the sailor assignment problem.
