Conference Paper

Exploiting gradient information in numerical multi-objective evolutionary optimization.

DOI: 10.1145/1068009.1068138
Conference: Genetic and Evolutionary Computation Conference (GECCO 2005), Proceedings, Washington DC, USA, June 25-29, 2005
Source: DBLP

ABSTRACT Various multi-objective evolutionary algorithms (MOEAs) have obtained promising results on numerical multi-objective optimization problems, but their combination with gradient-based local search operators has been limited to only a few studies. In the single-objective case it is known that the additional use of gradient information can be beneficial. In this paper we provide an analytical parametric description of the set of all non-dominated (i.e., most promising) directions in which a solution can be moved such that each objective either improves or remains the same. Moreover, the parameters describing this set can be computed efficiently using only the gradients of the individual objectives. We use this result to hybridize an existing MOEA with a local search operator that moves a solution in a randomly chosen non-dominated improving direction. We test the resulting algorithm on a few well-known benchmark problems and compare the results with those of the same MOEA without local search and of the same MOEA with gradient-based techniques that use only one objective at a time. The results indicate that exploiting gradient information based on the non-dominated improving directions is superior to using the gradients of the objectives separately, and that, given enough evaluations, it can furthermore improve the results of MOEAs in which no local search is used.
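The key construction above, moving a solution in a direction that, to first order, worsens no objective, can be illustrated numerically from the individual gradients alone. The sketch below is a simplified rejection-sampling stand-in, not the paper's exact parametric description: it draws random convex combinations of the negated objective gradients and keeps a direction only if the first-order test (gradient of each objective dotted with the direction is at most zero) holds for every objective. Function and variable names are our own.

```python
import numpy as np

def improving_directions_sample(grads, rng, n_tries=100):
    """Sample a direction from the convex cone spanned by the negated
    objective gradients and accept it only if, to first order, it does
    not worsen any objective (grad_i . d <= 0 for all i).

    A simplified illustrative stand-in for the paper's analytical
    parametric description of non-dominated improving directions.
    """
    G = np.stack(grads)                             # one gradient per row
    for _ in range(n_tries):
        alpha = rng.dirichlet(np.ones(len(grads)))  # random convex weights
        d = -alpha @ G                              # candidate combined direction
        if np.all(G @ d <= 1e-12):                  # improving (or neutral) for every objective
            return d / np.linalg.norm(d)
    return None                                     # no accepted direction found
```

For conflicting but non-opposing gradients (e.g. two orthogonal gradients) every convex combination passes the test; for nearly opposing gradients the point is close to locally Pareto-optimal and the sampler will typically return `None`.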

  •
    ABSTRACT: Production planning optimization for mineral processing is important for the utilization of non-renewable raw mineral resources. This paper presents a nonlinear multiobjective programming model for mineral processing production planning (MPPP) that optimizes five production indices: iron concentrate output, concentrate grade, concentration ratio, metal recovery, and production cost. A gradient-based hybrid operator is proposed in two evolutionary algorithms, the gradient-based NSGA-II (G-NSGA-II) and the gradient-based SPEA2 (G-SPEA2), for MPPP optimization. The gradient-based part of the hybrid operator is normalized as a strictly convex cone combination of the negated gradient directions of the individual objectives, and moves each selected point along a common descent direction of the objective functions toward the Pareto front, reducing the number of unproductive crossover and mutation trials. Two theorems are established that reveal a descent direction improving all objective functions. Experiments on standard test problems, namely ZDT1-3, CONSTR, SRN, and TNK, demonstrate that under a short running-time limit the proposed algorithms improve the chance of minimizing all objectives compared to pure evolutionary algorithms on multiobjective optimization problems with differentiable objective functions. Computational experiments on an MPPP application case indicate that the proposed algorithms achieve better production indices than NSGA-II, T-NSGA-FD, T-NSGA-SP, and SPEA2 when the number of generations is small. These results also show that the proposed hybrid operators outperform pure gradient-based operators in both the spread and the diversity of the obtained non-dominated solutions.
    IEEE Transactions on Evolutionary Computation, 09/2011.
  •
    ABSTRACT: In this paper we study the influence of the number of objectives of a continuous multiobjective optimization problem on its hardness for evolution strategies, which is of particular interest for many-objective optimization problems. More precisely, we measure hardness in terms of the evolution (or convergence) of the population toward the set of interest, the Pareto set. Previous related studies consider mainly the number of non-dominated individuals within a population, which has greatly improved the understanding of the problem and led to possible remedies. In certain cases, however, this ansatz is not sophisticated enough to explain all phenomena, and can even be misleading. We suggest instead considering the probability of improving the situation of the population, which can, to a certain extent, be measured by the sizes of the descent cones. As an example, we make some qualitative observations on a general class of unimodal test problems and conjecture that these problems get harder when an objective is added, but that the difference is of little practical significance, and we support this conjecture with empirical studies. Further, we address the scalability in the number of objectives observed in the literature: we try to extract the challenges that many-objective problems pose for evolution strategies based on our observations, and use them to explain recent advances in this field.
    IEEE Transactions on Evolutionary Computation, 09/2011.
  •
    ABSTRACT: In multi-objective optimization the hypervolume indicator measures the size of the space within a reference set that is dominated by a set of $\mu$ points. It is a common performance indicator for judging the quality of Pareto front approximations. Since it does not require a priori knowledge of the Pareto front, it can also be used in a straightforward manner to guide the search for finite approximations to the Pareto front in multi-objective optimization algorithm design. In this paper we discuss properties of the gradient of the hypervolume indicator at vectors that represent approximation sets of the Pareto front. An expression relating this gradient to the objective function values at the solutions in the approximation set and their partial derivatives is described for an arbitrary number of objectives $m \geq 2$, together with an algorithm that computes the gradient field efficiently from this information. We show that in the bi-objective and tri-objective cases these algorithms are asymptotically optimal, with time complexity in $\Theta(\mu d + \mu \log \mu)$, where $d$ is the dimension of the search space and $\mu$ is the number of points in the approximation set. For four objective functions the time complexity is shown to be in $\mathcal{O}(\mu d + \mu^2)$. The tight computation schemes reveal fundamental structural properties of this gradient field that can be used to identify its zeros. This paves the way for the formulation of stopping conditions and candidates for optimal approximation sets in multi-objective optimization.
    01/2014;
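In the bi-objective case the hypervolume indicator discussed in the last entry above, and its gradient with respect to the objective vectors, have a simple closed form. The sketch below is our own minimal illustration, not the paper's general $m \geq 2$ algorithm; it assumes minimization, a mutually non-dominated input set, and a reference point that is worse than every point in both objectives (function names are ours).

```python
import numpy as np

def hv2d(points, ref):
    """Hypervolume (minimization) dominated by a mutually non-dominated
    2-D point set, relative to reference point `ref`."""
    P = points[np.argsort(points[:, 0])]    # sort by f1 ascending (f2 descending)
    a = np.append(P[:, 0], ref[0])          # f1 coordinates, closed off by the reference
    return float(np.sum((a[1:] - a[:-1]) * (ref[1] - P[:, 1])))

def hv2d_grad(points, ref):
    """Analytic gradient of hv2d with respect to each objective vector:
    each partial derivative depends only on a point's immediate neighbour
    (or the reference point) in the sorted order."""
    order = np.argsort(points[:, 0])
    P = points[order]
    n = len(P)
    g = np.empty_like(P, dtype=float)
    for i in range(n):
        left_f2 = ref[1] if i == 0 else P[i - 1, 1]       # f2 of left neighbour (or ref)
        right_f1 = ref[0] if i == n - 1 else P[i + 1, 0]  # f1 of right neighbour (or ref)
        g[i, 0] = P[i, 1] - left_f2     # dHV/df1_i  (negative: decreasing f1 grows HV)
        g[i, 1] = P[i, 0] - right_f1    # dHV/df2_i
    out = np.empty_like(points, dtype=float)
    out[order] = g                      # restore the caller's point order
    return out
```

For example, for the set {(1, 3), (2, 1)} with reference point (4, 4) the dominated area is 7, and moving the first point's f1 down by h adds a strip of height 1, matching the partial derivative of -1.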
