Warren L Hare

University of British Columbia - Okanagan · Department of Mathematics and Statistics

Ph.D.

About

121
Publications
36,141
Reads
2,094
Citations
Citations since 2016: 67 Research Items · 1,699 Citations
[Chart: citations per year, 2016–2022]
Additional affiliations
January 2012 - present
University of British Columbia - Okanagan
Position
  • University of British Columbia
Education
September 2000 - April 2004
Simon Fraser University
Field of study
  • Mathematics (Optimization)
September 1998 - March 2000
University of Alberta
Field of study
  • Mathematics (Optimization)

Publications

Publications (121)
Preprint
Full-text available
Advancements in radiation therapy technologies are characterized by personalized treatments plans and increased conformality of the radiation dose to the tumour. Gel dosimeters are a potential tool for measuring these complex dose distributions. Here we develop a method to reduce the storage size of optical CT system matrices through use of polar c...
Preprint
Full-text available
This work presents a novel matrix-based method for constructing an approximation Hessian using only function evaluations. The method requires less computational power than interpolation-based methods and is easy to implement in matrix-based programming languages such as MATLAB. As only function evaluations are required, the method is suitable for u...
Preprint
Full-text available
The centred simplex gradient (CSG) is a popular gradient approximation technique in derivative-free optimization. Its computation requires a perfectly symmetric set of sample points and is known to provide an accuracy of O($\Delta^2$) where $\Delta$ is the radius of the sampling set. In this paper, we consider the situation where the set of sample...
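For reference, the centred simplex gradient described above can be sketched in a few lines over the symmetric coordinate sample set {x0 ± Δe_i}; the test function below is illustrative and not taken from the paper:

```python
import numpy as np

def centred_simplex_gradient(f, x0, delta):
    """Centred simplex gradient over the symmetric sample set
    {x0 +/- delta * e_i}: central differences along each coordinate,
    accurate to O(delta^2) for smooth f."""
    x0 = np.asarray(x0, dtype=float)
    g = np.empty_like(x0)
    for i in range(x0.size):
        e = np.zeros_like(x0)
        e[i] = delta
        g[i] = (f(x0 + e) - f(x0 - e)) / (2.0 * delta)
    return g

# On a quadratic, central differences are exact up to rounding:
f = lambda x: x[0]**2 + 3.0 * x[1]**2
g = centred_simplex_gradient(f, [1.0, 2.0], 1e-3)
# true gradient at (1, 2) is [2, 12]
```

The paper's contribution concerns sample sets that are not perfectly symmetric, which this naive sketch does not handle.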
Preprint
Full-text available
Nonconvex minimization algorithms often benefit from the use of second-order information as represented by the Hessian matrix. When the Hessian at a critical point possesses negative eigenvalues, the corresponding eigenvectors can be used to search for further improvement in the objective function value. Computing such eigenpairs can be computation...
Article
A new convex quadratically-constrained quadratic programming (QCQP) model is proposed for modeling side-slope volumes in the minimization of earthwork operations to compute the vertical alignment of a resource road while satisfying design and safety constraints. The new QCQP model is convex but nonlinear; it is compared to a state-of-the-art mixed...
Article
Full-text available
We consider the problem of minimizing an objective function that is provided by an oracle. We assume that while the optimization problem seeks a real-valued solution, the oracle is capable of accepting complex-valued input and returning complex-valued output. We explore using complex-variables in order to approximate gradients and Hessians within a...
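The abstract builds on the classic complex-step derivative idea; a minimal sketch of that classic technique (the test function is invented for illustration):

```python
import numpy as np

def complex_step_gradient(f, x0, h=1e-20):
    """Complex-step gradient estimate: for real-analytic f,
    Im(f(x + i*h*e_k)) / h approximates df/dx_k to O(h^2)
    with no subtractive cancellation, so h can be tiny."""
    x0 = np.asarray(x0, dtype=complex)
    g = np.empty(x0.size)
    for k in range(x0.size):
        xh = x0.copy()
        xh[k] += 1j * h  # perturb one coordinate along the imaginary axis
        g[k] = f(xh).imag / h
    return g

f = lambda x: np.exp(x[0]) * np.sin(x[1])
g = complex_step_gradient(f, [0.0, 0.0])
# exact gradient at the origin is [0, 1]
```

Unlike finite differences, no subtraction of nearly equal values occurs, which is why such a small step size is usable.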
Article
Full-text available
This paper presents a new algorithm to build feasible solutions to a MILP formulation of the vertical alignment problem in road design. This MILP involves a large number of special ordered set of type 2 variables used to describe piecewise linear functions. The principle of the algorithm is to successively solve LPs adapted from the MILP by replaci...
Article
Full-text available
This manuscript defines a bi-objective optimization model to find road profiles that optimize the road construction cost and the vehicle operating costs, specifically the fuel consumption. The research implements and validates the formula for the fuel consumption cost. It further presents and examines a variety of well-known methods: three classic...
Article
Full-text available
Optical computed tomography (CT) is one of the leading modalities for imaging gel dosimeters. There exist many prototype designs, as well as some commercial optical CT scanners that have showcased the value that gel dosimeters can provide to improve 3D dose verification for radiation treatments. However, due to factors including image accuracy, sca...
Preprint
Full-text available
The properties of positive bases make them a useful tool in derivative-free optimization (DFO) and an interesting concept in mathematics. The notion of the \emph{cosine measure} helps to quantify the quality of a positive basis. It provides information on how well the vectors in the positive basis uniformly cover the space considered. The number of...
Preprint
Full-text available
Model-based methods are popular in derivative-free optimization (DFO). In most of them, a single model function is built to approximate the objective function. This is generally based on the assumption that the objective function is one blackbox. However, some real-life and theoretical problems show that the objective function may consist of severa...
Article
Full-text available
Changing weather patterns may impose increased risk to the creditworthiness of financial institutions in the agriculture sector. To reduce the credit risk caused by climate change, financial institutions need to update their agricultural lending portfolios to consider climate change scenarios. In this paper we introduce a framework to compute the o...
Article
Full-text available
Purpose: A system matrix can be built in order to account for the refractions in an optical computed tomography (CT) system. In order to utilize this system matrix, iterative methods are employed to solve the image reconstruction problem. The purpose of this study is to compare potential iterative algorithms to solve this image reconstruction prob...
Preprint
Full-text available
Purpose: A system matrix can be built in order to account for the refractions in an optical computed tomography (CT) system. In order to utilize this system matrix, iterative methods are employed to solve the image reconstruction problem. The purpose of this study is to compare potential algorithms to solve this image reconstruction...
Preprint
Full-text available
This work investigates the asymptotic behaviour of the gradient approximation method called the generalized simplex gradient (GSG). This method has an error bound that at first glance seems to tend to infinity as the number of sample points increases, but with some careful construction, we show that this is not the case. For functions in finite dim...
Article
Full-text available
Using the Moore–Penrose pseudoinverse this work generalizes the gradient approximation technique called the centred simplex gradient to allow sample sets containing any number of points. This approximation technique is called the generalized centred simplex gradient. We develop error bounds and, under a full-rank condition, show that the error boun...
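As a rough illustration of the construction described above, the generalized centred simplex gradient solves the least-squares system via the Moore–Penrose pseudoinverse; the sample set and test function below are invented for the example, not taken from the paper:

```python
import numpy as np

def generalized_centred_simplex_gradient(f, x0, S):
    """Generalized centred simplex gradient: S is an n x m matrix whose
    columns are sample directions (any number m of them). Solves the
    least-squares system S^T g = deltas via the pseudoinverse."""
    x0 = np.asarray(x0, dtype=float)
    deltas = np.array([(f(x0 + S[:, j]) - f(x0 - S[:, j])) / 2.0
                       for j in range(S.shape[1])])
    return np.linalg.pinv(S.T) @ deltas

# Over-determined sample set: 3 directions in R^2.
f = lambda x: 4.0 * x[0] + x[1]
S = 1e-2 * np.array([[1.0, 0.0, 1.0],
                     [0.0, 1.0, 1.0]])
g = generalized_centred_simplex_gradient(f, [0.0, 0.0], S)
# gradient of this linear f is [4, 1]
```

When S^T has full column rank, the pseudoinverse solution coincides with the ordinary least-squares fit, matching the full-rank condition mentioned in the abstract.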
Article
Full-text available
Variational Analysis studies mathematical objects under small variations. With regards to optimization, these objects are typified by representations of first-order or second-order information (gradients, subgradients, Hessians, etc). On the other hand, Derivative-Free Optimization studies algorithms for continuous optimization that do not use firs...
Article
Full-text available
Optical computed tomography (CT) is one of the leading modalities for imaging gel dosimeters for 3D radiation dosimetry. There exist multiple scanner designs that have showcased excellent 3D dose verification capabilities of optical CT gel dosimetry. However, due to multiple experimental and reconstruction based factors there is currently no single...
Article
Full-text available
We consider the question of numerically approximating the derivative of a smooth function using only function evaluations. In particular, we examine the regression gradient, the generalized simplex gradient and the generalized centered simplex gradient, three numerical techniques based on using function values at a collection of sample points to co...
Article
Full-text available
Originally developed in 1954, positive bases and positive spanning sets have been found to be a valuable concept in derivative-free optimization (DFO). The quality of a positive basis (or positive spanning set) can be quantified via the cosine measure and convergence properties of certain DFO algorithms are intimately linked to the value of this me...
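For intuition, the cosine measure mentioned above can be approximated by brute force in R^2, assuming the standard definition cm(D) = min over unit u of max over d in D of (u·d)/||d||; this naive grid search is not the algorithm from the paper:

```python
import numpy as np

def cosine_measure_2d(D, n_angles=200000):
    """Grid-search approximation of the cosine measure of a set of
    directions D (rows) in R^2: for each unit vector u on a fine grid
    of the circle, find the best-aligned direction in D, then take
    the worst such u."""
    D = np.asarray(D, dtype=float)
    Dn = D / np.linalg.norm(D, axis=1, keepdims=True)  # normalize rows
    thetas = np.linspace(0.0, 2.0 * np.pi, n_angles, endpoint=False)
    U = np.column_stack([np.cos(thetas), np.sin(thetas)])  # unit vectors
    return np.min(np.max(U @ Dn.T, axis=1))

# Maximal positive basis {e1, e2, -e1, -e2} in R^2:
# its cosine measure is known to be sqrt(2)/2.
D = [[1, 0], [0, 1], [-1, 0], [0, -1]]
cm = cosine_measure_2d(D)
```

A positive cosine measure certifies that D positively spans the space: every direction makes an angle of less than 90 degrees with at least one vector in D.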
Preprint
Full-text available
Using the Moore--Penrose pseudoinverse, this work generalizes the gradient approximation technique called centred simplex gradient to allow sample sets containing any number of points. This approximation technique is called the \emph{generalized centred simplex gradient}. We develop error bounds and, under a full-rank condition, show that the error...
Article
Inter-story isolation (ISI) is a passive control technique that involves the placement of seismic isolation devices between stories. Multiple isolation layers installed at different story levels can reduce the response of the isolation devices, while maintaining control of the primary building. A multi-objective optimization (MOO) study is conducte...
Preprint
Full-text available
Originally developed in 1954, positive bases and positive spanning sets have been found to be a valuable concept in derivative-free optimization (DFO). The quality of a positive basis (or positive spanning set) can be quantified via the {\em cosine measure} and convergence properties of certain DFO algorithms are intimately linked to the value of t...
Article
Full-text available
When building a road, it is critical to select a vertical alignment which ensures design and safety constraints. Finding such a vertical alignment is not necessarily a feasible problem, and the models describing it generally involve a large number of variables and constraints. This paper is dedicated to rapidly proving the feasibility or the infeas...
Preprint
Full-text available
In this paper, we examine the framework to estimate financial risk called conditional-value-at-risk (CVaR) and examine models to optimize portfolios by minimizing CVaR. We note that total risk can be a function of multiple risk factors combined in a linear or nonlinear forms. We demonstrate that, when using CVaR, several common nonlinear models can...
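In its simplest discrete form, the empirical CVaR the abstract refers to reduces to a tail average; this toy illustration is not the portfolio model from the paper:

```python
import numpy as np

def cvar(losses, alpha=0.95):
    """Empirical conditional value-at-risk at level alpha: the average
    of the worst (1 - alpha) fraction of the sampled losses."""
    losses = np.sort(np.asarray(losses, dtype=float))
    k = int(np.ceil((1.0 - alpha) * losses.size))  # tail sample count
    return losses[-k:].mean()

# 100 equally likely losses 1..100 at alpha = 0.9:
# the worst 10% are {91, ..., 100}, so CVaR is their mean, 95.5.
losses = np.arange(1, 101)
c = cvar(losses, alpha=0.9)
```

On an empirical distribution this tail average agrees with the Rockafellar–Uryasev minimization formula, which is the form usually embedded in portfolio optimization models.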
Preprint
Full-text available
Portfolio optimization is the process of choosing the best investment decision across a set of financial instruments or assets. Investors seek to maximize their (expected) returns, but higher expected return usually means taking on more risk. So, investors are faced with a trade-off between risk and expected return. Most researchers have considered...
Preprint
Full-text available
Originally developed in 1954, positive bases and positive spanning sets have been found to be a valuable concept in derivative-free optimization (DFO). The quality of a positive basis (or positive spanning set) can be quantified via the cosine measure and convergence properties of certain DFO algorithms are intimately linked to the value of this me...
Preprint
Full-text available
This paper presents a new algorithm to build feasible solutions to a MILP formulation of the vertical alignment problem in road design. This MILP involves a large number of SOS2 variables used to describe piece-wise linear functions. The principle of the algorithm is to successively solve LPs adapted from the MILP by replacing the SOS2 constraints...
Article
Full-text available
Multi-fidelity algorithms for solving the horizontal alignment problem in road design are considered. A multi-fidelity surrogate model is built and quantile regression is used to understand its accuracy at various fidelity levels. Two algorithms are compared: a generalized pattern search algorithm with adaptive precision control, and a trust-region...
Preprint
Prox-regularity is a generalization of convexity that includes all C2, lower-C2, strongly amenable and primal-lower-nice functions. The study of prox-regular functions provides insight on a broad spectrum of important functions. Parametrically prox-regular (para-prox-regular) functions are a further extension of this family, produced by adding a pa...
Preprint
The NC-proximal average is a parametrized function used to continuously transform one proper, lsc, prox-bounded function into another. Until now, it has been defined for two functions. The purpose of this article is to redefine it so that any finite number of functions may be used. The layout generally follows that of [11], extending those results...
Article
Full-text available
In Variational Analysis, VU-theory provides a set of tools that is helpful for understanding and exploiting the structure of nonsmooth functions. The theory takes advantage of the fact that at any point, the space can be separated into two orthogonal subspaces: one that describes the direction of nonsmoothness of the function, and the other on whic...
Preprint
Full-text available
Engineering design problems are often multi-objective in nature, which means trade-offs are required between conflicting objectives. In this study, we examine the multi-objective algorithms for the optimal design of reinforced concrete structures. We begin with a review of multi-objective optimization approaches in general and then present a more f...
Article
Engineering design problems are often multi-objective in nature, which means trade-offs are required between conflicting objectives. In this study, we examine the multi-objective algorithms for the optimal design of reinforced concrete structures. We begin with a review of multi-objective optimization approaches in general and then present a more f...
Article
Full-text available
Proximal gradient methods have been found to be highly effective for solving minimization problems with non-negative constraints or L1-regularization. Under suitable nondegeneracy conditions, it is known that these algorithms identify the optimal sparsity pattern for these types of problems in a finite number of iterations. However, it is not known...
Preprint
Full-text available
The $\mathcal{VU}$-algorithm is a superlinearly convergent method for minimizing nonsmooth, convex functions. At each iteration, the algorithm works with a certain $\mathcal{V}$-space and its orthogonal $\mathcal{U}$-space, such that the nonsmoothness of the objective function is concentrated on its projection onto the $\mathcal{V}$-space, and on the $\math...
Preprint
Full-text available
We consider the question of numerically approximating the derivative of a smooth function using only function evaluations. In particular, we examine the regression gradient, the generalized simplex gradient and the generalized centered simplex gradient, three numerical techniques based on using function values at a collection of sample points to co...
Article
Full-text available
We consider the challenge of numerically comparing optimization algorithms that employ random-restarts under the assumption that only limited test data is available. We develop a bootstrapping technique to estimate the incumbent solution of the optimization problem over time as a stochastic process. The asymptotic properties of the estimator are ex...
Preprint
Full-text available
In Variational Analysis, VU-theory provides a set of tools that is helpful for understanding and exploiting the structure of nonsmooth functions. The theory takes advantage of the fact that at any point, the space can be separated into two orthogonal subspaces, one that describes the direction of nonsmoothness of the function, and the other on whic...
Preprint
Full-text available
Simplex gradients, essentially the gradient of a linear approximation, are a popular tool in derivative-free optimization (DFO). In 2015, a product rule, a quotient rule and a sum rule for simplex gradients were introduced by Regis [14]. Unfortunately, those calculus rules only work under a restrictive set of assumptions. The purpose of this paper...
Article
Full-text available
The subdifferential of a function is a generalization for nonsmooth functions of the concept of gradient. It is frequently used in variational analysis, particularly in the context of nonsmooth optimization. The present work proposes algorithms to reconstruct a polyhedral subdifferential of a function from the computation of finitely many direction...
Article
Full-text available
Locating proximal points is a component of numerous minimization algorithms. This work focuses on developing a method to find the proximal point of a convex function at a point, given an inexact oracle. Our method assumes that exact function values are at hand, but exact subgradients are either not available or not useful. We use approximate subgra...
Chapter
Full-text available
Derivative-free optimization (DFO) is the mathematical study of the optimization algorithms that do not use derivatives. One branch of DFO focuses on model-based DFO methods, where an approximation of the objective function is used to guide the optimization algorithm. Historically, model-based DFO has often assumed that the objective function is sm...
Chapter
For a full appreciation of the material in this book, it is assumed that the reader has followed courses in Multivariate Calculus and Linear Algebra. This chapter contains definitions, notation, and results that, although fairly common in mathematics literature, are not necessarily covered in standard multivariate calculus and linear algebra course...
Chapter
Section 1.4 enumerated some common features of target applications of BBO. A recurring difficulty in such applications originates from the fact that the evaluation of the objective and constraint functions is computationally expensive. In some situations, a surrogate optimization problem is available. That is, a problem that is considerably cheaper...
Chapter
While the line-search-based method of Chapter 10 is quick and easy to understand, it does not exploit the full power of a model function. In essence, the model-based descent algorithm for unconstrained optimization only uses gradient approximations to confirm descent directions. This can be seen in the convergence analysis of Chapter 10, where only...
Chapter
As mentioned in Chapter 6, the generalised pattern search (GPS) algorithm is an advancement on the CS algorithm first introduced in Chapter 3. The GPS algorithm follows the same basic logic behind the CS algorithm, namely it iteratively searches a collection of points seeking improvement over the incumbent solution. In order to increase the flexibil...
Chapter
In 1965, John Nelder and Roger Mead published a short six-page note on a new method for unconstrained minimisation. The resulting algorithm has become one of the most widely applied and researched optimization algorithms in the world. Originally, Nelder and Mead titled their method the “Simplex Method”, as the method hinged around using function e...
Chapter
Chapter 3 introduced a first practical DFO algorithm for unconstrained optimization, the coordinate search (CS) algorithm. While it was proven to converge to the first order in some circumstance (see Theorem 3.4), it was also noted that the algorithm can fail on very simple nondifferentiable convex functions (see Example 3.3). The CS algorithm is a...
Chapter
Through this book, and in all areas of optimization, it should always be remembered that whenever derivatives are available, they should be used. Derivatives, or more generally gradients, provide powerful information about a function. In unconstrained optimization, descent directions and first order optimality hinge on these basic objects of multiv...
Chapter
Chapter 3 proposed the coordinate search (CS) algorithm for unconstrained optimization. The algorithm worked based on local exploration around the incumbent solution in the positive and negative orthogonal coordinate directions. In Chapter 7, this algorithm was expanded to allow a broader use of search directions, resulting in the generalised patte...
Chapter
In this chapter, we explore some naive approaches to solving BBO problems, and explain why they are not acceptable. This will lead to a better understanding of why DFO is preferred for real applications and provide some foundational material that is used throughout this book.
Chapter
In this introductory chapter, we present a high-level description of optimization, blackbox optimization, and derivative-free optimization. We introduce some basic optimization notation used throughout this book, and some of the standard classifications of optimization problems. We end with three examples where blackbox optimization problems have a...
Chapter
Throughout this book, we have considered the general problem of minimising a multivariate objective function f over a constraint set \(\Omega \subseteq \mathbb{R}^{n}\). Let us now be more specific about the nature of the variables and of the constraint set.
Chapter
Optimization algorithms for blackbox functions can be broadly split into two categories: heuristic and non-heuristic. A heuristic is any approach that, while supported by some argument of why it should succeed, does not include a guarantee of success. In the framework of blackbox optimization, we take this statement to mean that the algorithm does...
Chapter
Model-based methods in DFO proceed from the idea that if it is possible to build a “good” model of the true objective function, then information from the model can be used to guide the optimization. In Chapter 9, we studied several methods for constructing model functions using only objective function evaluations. We also defined fully linear, as a...
Chapter
There are situations in which the optimization problem is driven by more than one objective function. Typically, these objectives are conflicting; for example, one may wish to maximise the solidity of a structure, while minimising its weight. In such a situation, one desires to take into account the relative tradeoffs between pairs of solutions.
Article
Full-text available
Benchmarking of optimization algorithms is a complicated task that involves many subtle considerations to yield a fair and unbiased evaluation. In this paper, we systematically review the benchmarking process of optimization algorithms, and discuss the challenges of fair comparison. We provide suggestions for each step of the comparison process and...
Article
Full-text available
Derivative-free optimization (DFO) is the mathematical study of the optimization algorithms that do not use derivatives. One branch of DFO focuses on model-based DFO methods, where an approximation of the objective function is used to guide the optimization algorithm. Proving convergence of such methods often applies an assumption that the approxim...
Article
Full-text available
The VU-algorithm is a superlinearly convergent method for minimizing nonsmooth convex functions. At each iteration, the algorithm separates R^n into the V-space and the orthogonal U-space, such that the nonsmoothness of the objective function is concentrated on its projection onto the V-space, and on the U-space the projection is smooth. This struc...
Article
Full-text available
Computing explicitly the \(\varepsilon \)-subdifferential of a proper function amounts to computing the level set of a convex function, namely the conjugate minus a linear function. The resulting theoretical algorithm is applied to the class of (convex univariate) piecewise linear–quadratic functions for which existing numerical libraries allow...
Article
Full-text available
The vertical alignment optimization problem for road design aims to generate a vertical alignment of a new road with a minimum cost, while satisfying safety and design constraints. We present a new model called multi-haul quasi network flow (MH-QNF) for vertical alignment optimization that improves the accuracy and reliability of previous mixed int...
Book
This book is designed as a textbook, suitable for self-learning or for teaching an upper-year university course on derivative-free and blackbox optimization. The book is split into 5 parts and is designed to be modular; any individual part depends only on the material in Part I. Part I of the book discusses what is meant by Derivative-Free and Bla...
Article
Full-text available
Positive bases, which play a key role in understanding derivative free optimization methods that use a direct search framework, are positive spanning sets that are positively linearly independent. The cardinality of a positive basis in $\mathbb{R}^n$ has been established to be between $n+1$ and $2n$ (with both extremes existing). The lower bound is immedia...
Article
Full-text available
Optimization of three-dimensional road alignments is a nonlinear non-convex optimization problem. The development of models that fully optimize a three-dimensional road alignment problem is challenging due to numerous factors involved and complexities in the geometric specification of the alignment. In this study, we developed a novel bi-objective...
Article
Full-text available
Many theoretical and experimental studies have used heuristic methods to investigate the dynamic behaviour of the passive coupling of adjacent structures. However, few papers have used optimization techniques with guaranteed convergence in order to increase the efficiency of the passive coupling of adjacent structures. In this paper, the combined p...
Article
Full-text available
Finding an optimal alignment connecting two end-points in a specified corridor is a complex problem that requires solving three interrelated sub-problems, namely the horizontal alignment, vertical alignment and earthwork optimization problems. In this research, we developed a novel bi-level optimization model combining those three problems. In the...
Article
Full-text available
Derivative-Free optimization (DFO) focuses on designing methods to solve optimization problems without the analytical knowledge of gradients of the objective function. There are two main families of DFO methods: model-based methods and direct search methods. In model-based DFO methods, a model of the objective function is constructed using only obj...
Article
Full-text available
This is a discussion of the paper "Modeling an Augmented Lagrangian for Improved Blackbox Constrained Optimization," (Gramacy, R.~B., Gray, G.~A., Digabel, S.~L., Lee, H.~K.~H., Ranjan, P., Wells, G., and Wild, S.~M., Technometrics, 61, 1--38, 2015).
Article
Full-text available
This paper addresses the problem of finding multiple near-optimal, spatially-dissimilar paths that can be considered as alternatives in the decision making process, for finding optimal corridors in which to construct a new road. We further consider combinations of techniques for reducing the costs associated with the computation and increasing the...