An optimal adaptive algorithm for the approximation of concave functions

Université de Montréal, Montréal, Quebec, Canada
Mathematical Programming (Impact Factor: 1.8). 07/2006; 107(3):357-366. DOI: 10.1007/s10107-003-0502-7
Source: DBLP


Motivated by the study of parametric convex programs, we consider the approximation of concave functions by piecewise affine functions.
Using dynamic programming, we derive a procedure for selecting the knots at which an oracle provides the function value and
one supergradient. The procedure is adaptive in that the choice of each knot depends on the knots chosen before it.
It is also optimal in that the approximation error, in the integral sense, is minimized in the worst case.
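
For intuition, the following is a minimal Python sketch of a greedy sandwich-style scheme that consumes the same oracle information (one function value and one supergradient per knot): for a concave function, tangent lines form an upper envelope and chords a lower one, and each new knot is placed where the gap between the two is largest. The greedy splitting rule and all names here are illustrative assumptions; the paper's procedure instead chooses the knots by dynamic programming to be worst-case optimal, which this sketch does not reproduce.

    # Illustrative greedy sandwich-style scheme; NOT the paper's DP-optimal
    # procedure. For a concave f, each tangent line is an upper bound and
    # each chord is a lower bound, so the gap between them brackets the error.
    def approximate_concave(f, g, a, b, n_knots):
        """f: value oracle; g: supergradient oracle; [a, b]: interval."""
        knots = [(a, f(a), g(a)), (b, f(b), g(b))]
        while len(knots) < n_knots:
            best = None
            for (x0, f0, g0), (x1, f1, g1) in zip(knots, knots[1:]):
                if abs(g0 - g1) < 1e-12:        # affine piece: zero error here
                    continue
                # Intersection of the tangent lines at x0 and x1.
                xs = (f1 - f0 + g0 * x0 - g1 * x1) / (g0 - g1)
                ys = f0 + g0 * (xs - x0)
                yc = f0 + (f1 - f0) * (xs - x0) / (x1 - x0)   # chord at xs
                err = 0.5 * (ys - yc) * (x1 - x0)  # area between chord and tangents
                if best is None or err > best[0]:
                    best = (err, xs)
            if best is None:                    # f is piecewise affine: exact already
                break
            x_new = best[1]
            knots.append((x_new, f(x_new), g(x_new)))
            knots.sort(key=lambda t: t[0])
        return [x for x, _, _ in knots]

    # Example: f(x) = -x^2 on [0, 1], with supergradient g(x) = -2x.
    print(approximate_concave(lambda x: -x * x, lambda x: -2 * x, 0.0, 1.0, 5))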

  • ABSTRACT: In this paper we propose two new line search methods for convex functions. Unlike existing methods, these new methods exploit the convexity of the function. The first method is an improved version of the golden section method. For the second method, it is proven that after two evaluations the objective gap is at least halved. The practical efficiency of the methods is shown by applying them to a real-life bus and buffer size optimization problem and to several classes of convex functions.
    SIAM Journal on Optimization 01/2007; 18(1):338-363. DOI:10.1137/04061115X · 1.83 Impact Factor
  • ABSTRACT: In this paper, piecewise linear upper and lower bounds for univariate convex functions are derived that are based only on function value information. These upper and lower bounds can be used to approximate univariate convex functions (a sketch of such value-only bounds appears after this list). Furthermore, new Sandwich algorithms are proposed that iteratively add new input data points in a systematic way until a desired accuracy of the approximation is obtained. We show that our new algorithms, which use only function-value evaluations, converge quadratically under certain conditions on the derivatives. Under other conditions, linear convergence can be shown. Some numerical examples that illustrate the usefulness of the algorithms, including a strategic investment model, are given.
    INFORMS Journal on Computing 02/2007; 23(4). DOI: 10.2139/ssrn.1012289 · 1.08 Impact Factor
  • ABSTRACT: We derive worst-case bounds, with respect to the L_p norm, on the error achieved by algorithms that approximate a concave function of a single variable through evaluation of the function and a subgradient at a fixed number of points to be determined. We prove that, for p larger than 1, adaptive algorithms outperform passive ones. Next, for the uniform norm, we propose an improvement of the Sandwich algorithm, based on a dynamic programming formulation of the problem.
    Journal of Optimization Theory and Applications 05/2014; 161(2). DOI:10.1007/s10957-013-0410-9 · 1.51 Impact Factor
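
To make the value-only bounds in the second entry above concrete, here is a minimal Python sketch based on the standard convexity fact that secant slopes are nondecreasing: the chord over a sample interval is an upper bound on a convex function there, while the secants of the two neighbouring intervals, extended inward, are lower bounds. The function name and signature are assumptions for illustration, not code from the cited paper, and its knot-selection rules are not reproduced.

    import bisect

    # Illustrative value-only bounds for a univariate convex function sampled
    # at sorted points xs with values fs; no derivative information is used.
    def convex_bounds(xs, fs, x):
        """Return (lower, upper) bounds on the convex function at x."""
        i = bisect.bisect_right(xs, x) - 1
        i = max(0, min(i, len(xs) - 2))         # work on interval [xs[i], xs[i+1]]
        slope = lambda j: (fs[j + 1] - fs[j]) / (xs[j + 1] - xs[j])
        upper = fs[i] + slope(i) * (x - xs[i])  # chord is an upper bound
        lows = []
        if i > 0:                               # left secant, extended rightward
            lows.append(fs[i] + slope(i - 1) * (x - xs[i]))
        if i + 2 < len(xs):                     # right secant, extended leftward
            lows.append(fs[i + 1] + slope(i + 1) * (x - xs[i + 1]))
        lower = max(lows) if lows else float("-inf")  # two samples: no lower info
        return lower, upper

    # Example: f(x) = x^2 sampled at four points; bounds at x = 0.7 (true value 0.49).
    pts = [0.0, 0.5, 1.0, 1.5]
    print(convex_bounds(pts, [t * t for t in pts], 0.7))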