# Nikolaus Hansen

Inria (National Institute for Research in Computer Science and Control)

PhD

## About

- 205 Publications
- 60,550 Reads


- 23,984 Citations

## Publications

Publications (205)

We present concepts and recipes for the anytime performance assessment for benchmarking optimization algorithms in a blackbox scenario. We consider runtime—oftentimes measured in number of blackbox evaluations needed to reach a target quality—to be a universally measurable cost for solving a problem. Starting from the graph that depicts the sol...
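The runtime notion described above can be illustrated with a minimal sketch: count black-box evaluations until a target quality is reached. All names below (`runtime_to_target`, the sphere function, the uniform sampler) are made up for illustration; this is not the COCO implementation.

```python
import random

def runtime_to_target(f, sample, target, budget=10_000, seed=0):
    """Number of black-box evaluations until f(x) <= target,
    or None if the budget is exhausted first."""
    rng = random.Random(seed)
    for evals in range(1, budget + 1):
        if f(sample(rng)) <= target:
            return evals
    return None

# Pure random search on a 3-D sphere function as the measured "algorithm":
sphere = lambda x: sum(xi * xi for xi in x)
uniform = lambda rng: [rng.uniform(-5, 5) for _ in range(3)]
rt = runtime_to_target(sphere, uniform, target=1.0)
```

Runtimes collected this way over many targets and problem instances are what empirical runtime distributions are built from.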

Evolution Strategies (ESs) are stochastic derivative-free optimization algorithms whose most prominent representative, the CMA-ES algorithm, is widely used to solve difficult numerical optimization problems. We provide the first rigorous investigation of the linear convergence of step-size adaptive ESs involving a population and recombination, two...

We consider stochastic algorithms derived from methods for solving deterministic optimization problems, especially comparison-based algorithms derived from stochastic approximation algorithms with a constant step-size. We develop a methodology for proving geometric convergence of the parameter sequence $\{\theta_n\}_{n\geq 0}$ of such algorithms. We employ the ordi...

Several test function suites are being used for numerical benchmarking of multiobjective optimization algorithms. While they have some desirable properties, like well-understood Pareto sets and Pareto fronts of various shapes, most of the currently used functions possess characteristics that are arguably under-represented in real-world problems such...

Scaling-invariant functions preserve the order of points when the points are scaled by the same positive scalar (usually with respect to a unique reference point). Composites of strictly monotonic functions with positively homogeneous functions are scaling-invariant with respect to zero. We prove in this paper that also the reverse is true for larg...
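The defining property of scaling invariance can be checked empirically. The sketch below uses illustrative names and, as an example, the exponential of the 1-norm, a strictly increasing function composed with a positively homogeneous function, hence scaling-invariant with respect to zero.

```python
import math
import random

def order_preserved_under_scaling(f, pairs, rhos):
    """Check empirically that f(rho*x) <= f(rho*y) iff f(x) <= f(y)
    for all given point pairs and scalars rho > 0 (reference point 0)."""
    for x, y in pairs:
        base = f(x) <= f(y)
        for rho in rhos:
            scaled = f([rho * xi for xi in x]) <= f([rho * yi for yi in y])
            if scaled != base:
                return False
    return True

# exp(||x||_1): strictly monotonic composed with positively homogeneous.
f = lambda x: math.exp(sum(abs(xi) for xi in x))
rng = random.Random(2)
pairs = [([rng.uniform(-3, 3) for _ in range(4)],
          [rng.uniform(-3, 3) for _ in range(4)]) for _ in range(50)]
```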

Evolution Strategies (ES) are stochastic derivative-free optimization algorithms whose most prominent representative, the CMA-ES algorithm, is widely used to solve difficult numerical optimization problems. We provide the first rigorous investigation of the linear convergence of step-size adaptive ES involving a population and recombination, two in...

The paper Comparing Results of 31 Algorithms from the Black-Box Optimization Benchmarking BBOB-2009 received the 2020 SIGEVO Impact Award for ten-year impact, which was announced at the ACM GECCO 2020 conference. The work compares the performance of 31 algorithms that had been benchmarked with the Comparing Continuous Optimizer platform (COCO) for t...

Scaling-invariant functions preserve the order of points when the points are scaled by the same positive scalar (with respect to a unique reference point). Composites of strictly monotonic functions with positively homogeneous functions are scaling-invariant with respect to zero. We prove in this paper that the reverse is true for large classes of...

Benchmarking of optimization solvers is an important and compulsory task for performance assessment that in turn can help in improving the design of algorithms. It is a repetitive and tedious task. Yet, this task has been greatly automatized in the past ten years with the development of the Comparing Continuous Optimizers platform (COCO). In this c...

This paper introduces a variant of the Covariance Matrix Adaptation Evolution Strategy (CMA-ES), denoted as gl-CMA-ES, that utilizes the Graphical Lasso regularization. Our goal is to efficiently solve partially separable optimization problems of a certain class by performing stochastic search with a search model parameterized by a sparse precision...

We introduce two suites of mixed-integer benchmark problems to be used for analyzing and comparing black-box optimization algorithms. They contain problems of diverse difficulties that are scalable in the number of decision variables. The bbob-mixint suite is designed by partially discretizing the established BBOB (Black-Box Optimization Benchmarki...

We explore the arguably simplest way to build an effective surrogate fitness model in continuous search spaces. The model complexity is linear or diagonal-quadratic or full quadratic, depending on the number of available data. The model parameters are computed from the Moore-Penrose pseudoinverse. The model is used as a surrogate fitness for CMA-ES...
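The linear-algebra core of such a surrogate can be sketched in a few lines: a generic least-squares fit of a full quadratic model via the Moore-Penrose pseudoinverse. This is a simplified illustration, not the paper's weighted-regression variant with adaptive model complexity; all names are illustrative.

```python
import numpy as np

def quadratic_features(X):
    """Map points to [1, x_i, x_i*x_j (i<=j)] features of a full quadratic model."""
    n = X.shape[1]
    cols = [np.ones(len(X))] + [X[:, i] for i in range(n)]
    cols += [X[:, i] * X[:, j] for i in range(n) for j in range(i, n)]
    return np.column_stack(cols)

def fit_surrogate(X, y):
    """Least-squares fit of a full quadratic model via the pseudoinverse."""
    Z = quadratic_features(X)
    w = np.linalg.pinv(Z) @ y
    return lambda x: quadratic_features(np.atleast_2d(np.asarray(x, float))) @ w

# With enough noise-free data, the surrogate reproduces a quadratic exactly:
rng = np.random.default_rng(1)
X = rng.normal(size=(30, 2))
y = 1.0 + X[:, 0] - 2 * X[:, 1] + 3 * X[:, 0]**2 + X[:, 0] * X[:, 1] + 0.5 * X[:, 1]**2
model = fit_surrogate(X, y)
```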

Uniform Random Search is considered the simplest of all randomized search strategies and thus a natural baseline in benchmarking. Yet, in continuous domain it has its search domain width as a parameter that potentially has a strong effect on its performance. In this paper, we investigate this effect on the well-known 24 functions from the bbob test...
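A minimal sketch makes the width parameter concrete; with the optimum at the origin, a domain that is too wide wastes budget on poor samples. Function and variable names are illustrative.

```python
import random

def uniform_random_search(f, dim, width, budget, seed=0):
    """Return the best f-value found by sampling `budget` points
    uniformly in the hypercube [-width/2, width/2]^dim."""
    rng = random.Random(seed)
    best = float("inf")
    for _ in range(budget):
        x = [rng.uniform(-width / 2, width / 2) for _ in range(dim)]
        best = min(best, f(x))
    return best

sphere = lambda x: sum(xi * xi for xi in x)
narrow = uniform_random_search(sphere, dim=5, width=10, budget=1000)
wide = uniform_random_search(sphere, dim=5, width=100, budget=1000)
```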

We introduce an acceleration for covariance matrix adaptation evolution strategies (CMA-ES) by means of adaptive diagonal decoding (dd-CMA). This diagonal acceleration endows the default CMA-ES with the advantages of separable CMA-ES without inheriting its drawbacks. Technically, we introduce a diagonal matrix D that expresses coordinate-wise vari...

We introduce an acceleration for covariance matrix adaptation evolution strategies (CMA-ES) by means of adaptive diagonal decoding (dd-CMA). This diagonal acceleration endows the default CMA-ES with the advantages of separable CMA-ES without inheriting its drawbacks. Technically, we introduce a diagonal matrix D that expresses coordinate-wise varia...

We present a framework to build a multiobjective algorithm from single-objective ones. This framework addresses the p × n-dimensional problem of finding p solutions in an n-dimensional search space, maximizing an indicator by dynamic subspace optimization. Each single-objective algorithm optimizes the indicator function given p − 1 fixed solutions....

The bbob-largescale test suite, containing 24 single-objective functions in continuous domain, extends the well-known single-objective noiseless bbob test suite, which has been used since 2009 in the BBOB workshop series, to large dimension. The core idea is to make the rotational transformations R, Q in search space that appear in the bbob test su...

A platform for comparing continuous optimizers in a black-box setting.
URL: https://github.com/numbbo/coco

In this paper we analyze theoretical properties of bi-objective convex-quadratic problems. We give a complete description of their Pareto set and prove the convexity of their Pareto front. We show that the Pareto set is a line segment when both Hessian matrices are proportional. We then propose a novel set of convex-quadratic test problems, describ...


We develop a methodology to prove geometric convergence of the parameter sequence $\{\theta_n\}_{n\geq 0}$ of a stochastic algorithm. The convergence is measured via a function $\Psi$ that is similar to a Lyapunov function. Important algorithms that motivate the introduction of this methodology are stochastic algorithms deriving from optimization m...

In the context of numerical constrained optimization, we investigate stochastic algorithms, in particular evolution strategies, handling constraints via augmented Lagrangian approaches. In those approaches, the original constrained problem is turned into an unconstrained one and the function optimized is an augmented Lagrangian whose parameters are...

Quality gain is the expected relative improvement of the function value in a single step of a search algorithm. Quality gain analysis reveals the dependencies of the quality gain on the parameters of a search algorithm, based on which one can derive the optimal values for the parameters. In this paper, we investigate evolution strategies with weigh...

In this paper, we investigate a new approach for adapting population size in the CMA-ES. This method is based on tracking the information in each slot of S successive iterations to decide whether we should increase, decrease, or keep the population size in the next slot of S iterations. The information which we collect is the non-decrease of the m...

Numerical benchmarking of multiobjective optimization algorithms is an important task needed to understand and recommend algorithms. So far, two main approaches to assessing algorithm performance have been pursued: using set quality indicators, and the (empirical) attainment function and its higher-order moments as a generalization of empirical cum...

We investigate the evolution strategy with weighted recombination on a general convex quadratic function. We derive the asymptotic quality gain in the limit of the dimension going to infinity, and derive the optimal recombination weights and the optimal step-size. This work is an extension of the previous work that has derived the asymptotic...

We analyze linear convergence of an evolution strategy for constrained optimization with an augmented Lagrangian constraint handling approach. We study the case of multiple active linear constraints and use a Markov chain approach---used to analyze randomized optimization algorithms in the unconstrained case---to establish linear convergence under...

We focus on a variant of covariance matrix adaptation evolution strategy (CMA-ES) with a restricted covariance matrix model, namely VkD-CMA, which is aimed at reducing the internal time complexity and the adaptation time in terms of function evaluations. We tackle a shortcoming of VkD-CMA—the model of the restricted covariance matrices needs to...

We consider the problem of minimizing a function f subject to a single inequality constraint $g(\mathbf{x}) \le 0$, in a black-box scenario. We present a covariance matrix adaptation evolution strategy using an adaptive augmented Lagrangian method to handle the constraint. We show that our algorithm is an instance of a general framework that allo...
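The constraint-handling idea can be sketched with a generic textbook (PHR-type) augmented Lagrangian for a single inequality constraint; this is not necessarily the paper's exact formulation or its parameter-update scheme, and all names are illustrative.

```python
def augmented_lagrangian(f, g, gamma, omega):
    """PHR-type augmented Lagrangian for one inequality g(x) <= 0;
    gamma is the multiplier estimate, omega > 0 the penalty factor.
    A generic sketch, not the paper's exact adaptive scheme."""
    def h(x):
        gx = g(x)
        if gamma + omega * gx >= 0:
            return f(x) + gamma * gx + 0.5 * omega * gx ** 2
        return f(x) - gamma ** 2 / (2 * omega)
    return h

# Example: minimize ||x||^2 subject to x[0] >= 1, i.e. g(x) = 1 - x[0] <= 0.
f = lambda x: sum(xi * xi for xi in x)
g = lambda x: 1 - x[0]
h = augmented_lagrangian(f, g, gamma=1.0, omega=10.0)
```

The unconstrained function h can then be handed to any black-box optimizer while gamma and omega are adapted between iterations.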

We propose a general methodology to construct large-scale testbeds for the benchmarking of continuous optimization algorithms. Our approach applies an orthogonal transformation on raw functions that involve only a linear number of operations. The orthogonal transformation is sampled from a parametrized family of transformations that are the product...

We address the question of linear convergence of evolution strategies on constrained optimization problems. In particular, we analyze a (1+1)-ES with an augmented Lagrangian constraint handling approach on functions defined on a continuous domain, subject to a single linear inequality constraint. We identify a class of functions for which it is pos...

The S-metric-Selection Evolutionary Multi-objective Optimization Algorithm (SMS-EMOA) is one of the best-known indicator-based multi-objective optimization algorithms. It employs the S-metric or hypervolume indicator in its (steady-state) selection by deleting in each iteration the solution that has the smallest contribution to the hypervolume indi...
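For a two-objective minimization problem, the hypervolume contribution used in this selection reduces to a rectangle area per front point. The sketch below illustrates the deletion criterion only, not the full SMS-EMOA; names are illustrative and ties are broken by list order.

```python
def hv_contributions_2d(front, ref):
    """Hypervolume contribution of each point of a 2-D non-dominated front
    under minimization; ref must be dominated by every point."""
    pts = sorted(front)  # ascending in f1, hence descending in f2
    contrib = []
    for i, (x, y) in enumerate(pts):
        right_x = pts[i + 1][0] if i + 1 < len(pts) else ref[0]
        upper_y = pts[i - 1][1] if i > 0 else ref[1]
        contrib.append((right_x - x) * (upper_y - y))
    return pts, contrib

# Steady-state selection deletes the point with the smallest contribution:
pts, c = hv_contributions_2d([(2, 2), (1, 4), (4, 1)], ref=(6, 6))
worst = pts[c.index(min(c))]
```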

The Comparing Continuous Optimizers platform COCO has become a standard for benchmarking numerical (single-objective) optimization algorithms effortlessly. In 2016, COCO has been extended towards multi-objective optimization by providing a first bi-objective test suite. To provide a baseline, we benchmark a pure random search on this bi-objective f...

In this paper, we benchmark a variant of the well-known NSGA-II algorithm of Deb et al. on the bi-objective bbob-biobj test suite of the Comparing Continuous Optimizers platform COCO. To this end, we employ the implementation of MATLAB's gamultiobj toolbox with its default settings and a population size of 100.

Pure random search is undeniably the simplest stochastic search algorithm for numerical optimization. Essentially the only thing to be determined to implement the algorithm is its sampling space, the influence of which on the performance on the bi-objective bbob-biobj test suite of the COCO platform is investigated here. It turns out that th...

The unbounded population multi-objective covariance matrix adaptation evolution strategy (UP-MO-CMA-ES) aims at maximizing the total hypervolume covered by all evaluated points. It adds all non-dominated solutions found to its population and employs Gaussian mutations with adaptive covariance matrices to also solve ill-conditioned problems. A novel...

In this paper, we benchmark the Regularity Model-Based Multiobjective Estimation of Distribution Algorithm RM-MEDA of Zhang et al. on the bi-objective bbob-biobj test suite of the Comparing Continuous Optimizers (COCO) platform. It turns out that, starting from about 200 times dimension many function evaluations, RM-MEDA shows...

We propose a novel variant of the covariance matrix adaptation evolution strategy (CMA-ES) using a covariance matrix parameterized with a smaller number of parameters. The motivation of a restricted covariance matrix is twofold. First, it requires less internal time and space complexity, which is desired when optimizing a function on a high dimension...

In this paper, we consider comparison-based adaptive stochastic algorithms for solving numerical optimisation problems. We consider a specific subclass of algorithms that we call comparison-based step-size adaptive randomized search (CB-SARS), where the state variables at a given iteration are a vector of the search space and a positive parameter,...

We present an any-time performance assessment for benchmarking numerical optimization algorithms in a black-box scenario, applied within the COCO benchmarking platform. The performance assessment is based on runtimes measured in number of objective function evaluations to reach one or several quality indicator target values. We argue that runtime i...

This document details the rationales behind assessing the performance of numerical black-box optimizers on multi-objective problems within the COCO platform and in particular on the biobjective test suite bbob-biobj. The evaluation is based on a hypervolume of all non-dominated solutions in the archive of candidate solutions and measures the runtim...

The bbob-biobj test suite contains 55 bi-objective functions in continuous domain which are derived from combining functions of the well-known single-objective noiseless bbob test suite. Besides giving the actual function definitions and presenting their (known) properties, this documentation also aims at giving the rationale behind our approach in...

We present an experimental setup and procedure for benchmarking numerical optimization algorithms in a black-box scenario. This procedure can be applied with the COCO benchmarking platform. We describe initialization of and input to the algorithm and touch upon the relevance of termination and restarts. We introduce the concept of recommendations f...

COCO is a platform for Comparing Continuous Optimizers in a black-box setting. It aims at automatizing the tedious and repetitive task of benchmarking numerical optimization algorithms to the greatest possible extent. We present the rationale behind the development of the platform as a general proposition for a guideline towards better benchmarking...

This paper analyzes a (1, λ)-Evolution Strategy, a randomized comparison-based adaptive search algorithm, optimizing a linear function with a linear constraint. The algorithm uses resampling to handle the constraint. Two cases are investigated: first the case where the step-size is constant, and second the case where the step-size is adapted using...
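The resampling constraint-handling mechanism is simple to state in code: draw Gaussian offspring and discard infeasible ones until a feasible sample appears. This sketch shows the sampling step only (not the step-size adaptation analyzed in the paper); names are illustrative.

```python
import random

def sample_feasible(mean, sigma, is_feasible, rng, max_tries=1000):
    """Draw an isotropic Gaussian offspring around `mean` with step-size
    `sigma`, resampling until the constraint is satisfied."""
    for _ in range(max_tries):
        x = [m + sigma * rng.gauss(0, 1) for m in mean]
        if is_feasible(x):
            return x
    raise RuntimeError("no feasible sample found within max_tries")

rng = random.Random(0)
# A single linear constraint x[0] <= 2, as in the analyzed setting:
x = sample_feasible([0.0, 0.0], 1.0, lambda x: x[0] <= 2.0, rng)
```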

Algorithm benchmarking plays a vital role in designing new optimization algorithms and in recommending efficient and robust algorithms for practical purposes. So far, two main approaches have been used to compare algorithms in the evolutionary multiobjective optimization (EMO) field: (i) displaying empirical attainment functions and (ii) reporting...

Evolution strategies (ES) are evolutionary algorithms that date back to the 1960s and that are most commonly applied to black-box optimization problems in continuous search spaces. Inspired by biological evolution, their original formulation is based on the application of mutation, recombination and selection in populations of candidate solutions....

We derive a stochastic search procedure for parameter optimization from two first principles: (1) imposing the least prior assumptions, namely by maximum entropy sampling, unbiasedness and invariance; (2) exploiting all available information under the constraints imposed by (1). We additionally require that two of the most basic functions can be so...

Step-size adaptation for randomised search algorithms like evolution strategies is a crucial feature for their performance. The adaptation must, depending on the situation, sustain a large diversity or entertain fast convergence to the desired optimum. The assessment of step-size adaptation mechanisms is therefore non-trivial and often done in too...

We propose a novel natural gradient based stochastic search algorithm, VD-CMA, for the optimization of high dimensional numerical functions. The algorithm is comparison-based and hence invariant to monotonic transformations of the objective function. It adapts a multivariate normal distribution with a restricted covariance matrix with twice the dim...
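The restricted covariance matrix D(I + vv^T)D, with D diagonal, admits sampling in linear time via an explicit matrix square root, (I + vv^T)^(1/2) = I + b·v̂v̂^T with b = sqrt(1 + ||v||^2) - 1. The sketch below shows only this sampling step, not the natural-gradient adaptation of the parameters; names are illustrative.

```python
import numpy as np

def sample_vd(mean, sigma, d, v, rng):
    """Draw one sample from N(mean, sigma^2 * D (I + v v^T) D), D = diag(d),
    in O(n) time, using the rank-one square-root identity above."""
    z = rng.standard_normal(len(mean))
    s = float(v @ v)
    if s > 0:
        vhat = v / np.sqrt(s)
        z = z + (np.sqrt(1.0 + s) - 1.0) * (vhat @ z) * vhat
    return mean + sigma * d * z
```

No n-by-n matrix is ever formed, which is what makes the model attractive in high dimension.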

The Covariance Matrix Adaptation Evolution Strategy (CMA-ES) is widely accepted as a robust derivative-free continuous optimization algorithm for non-linear and non-convex optimization problems. CMA-ES is well known to be almost parameterless, meaning that only one hyper-parameter, the population size, is proposed to be tuned by the user. In this p...

In this paper, we consider *comparison-based* adaptive stochastic algorithms for solving numerical optimisation problems. We consider a specific subclass of algorithms called comparison-based step-size adaptive randomized search (CB-SARS), where the state variables at a given iteration are a vector of the search space and a positive parameter,...

This paper analyses a $(1,\lambda)$-Evolution Strategy, a randomised comparison-based adaptive search algorithm, on a simple constrained optimisation problem. The algorithm uses resampling to handle the constraint and optimizes a linear function with a linear constraint. Two cases are investigated: first the case where the step-size is constant, and...


In the context of unconstrained numerical optimization, this paper investigates the global linear convergence of a simple probabilistic derivative-free optimization algorithm (DFO). The algorithm samples a candidate solution from a standard multivariate normal distribution scaled by a step-size and centered in the current solution. This solution is...

In this paper, we consider *comparison-based* adaptive stochastic algorithms for solving numerical optimisation problems. We consider a specific subclass of algorithms called comparison-based step-size adaptive randomized search (CB-SARS), where the state variables at a given iteration are a vector of the search space and a positive parameter, the step-size, typically controlling the overal...

Success-rule-based step-size adaptation, namely the one-fifth success rule, has been shown to be effective for single parent evolution strategies (ES), e.g. the (1+1)-ES. The success rule remains feasible in non-elitist single parent strategies, where the target success rate must be roughly inversely proportional to the population size. This success rul...

This paper evaluates the performance of a variant of the local-meta-model CMA-ES (lmm-CMA) in the BBOB 2013 expensive setting. The lmm-CMA is a surrogate variant of the CMA-ES algorithm. Function evaluations are saved by building, with weighted regression, full quadratic meta-models to estimate the candidate solutions' function values. The quality...

Six population-based methods for real-valued black box optimization are thoroughly compared in this article. One of them, Nelder-Mead simplex search, is rather old, but still a popular technique of direct search. The remaining five, POEMS, G3PCX, Cauchy ...

This paper deals with a statistical model fitting procedure for non-stationary time series. This procedure selects the parameters of a piecewise autoregressive model using the Minimum Description Length principle. The existing chromosome representation of the piecewise autoregressive model and its corresponding optimisation algorithm are improved....

The Information-Geometric Optimization (IGO) has been introduced as a unified framework for stochastic search algorithms. Given a parametrized family of probability distributions on the search space, the IGO turns an arbitrary optimization problem on the search space into an optimization problem on the parameter space of the probability distributio...

This paper investigates two variants of the well-known Covariance Matrix Adaptation Evolution Strategy (CMA-ES). Active covariance matrix adaptation allows for negative weights in the covariance matrix update rule such that bad steps are (actively) taken into account when updating the covariance matrix of the sample distribution. On the other hand,...

Mirrored mutations and active covariance matrix adaptation are two recent ideas to improve the well-known covariance matrix adaptation evolution strategy (CMA-ES)---a state-of-the-art algorithm for numerical optimization. It turns out that both mechanisms can be implemented simultaneously. In this paper, we investigate the impact of mirrored mutati...