Gabriel Jarry-Bolduc
UBC · Department of Mathematics

PhD Candidate in mathematics at UBC

About

Publications: 16
Reads: 1,112
Citations: 28
Education

  • September 2017 - May 2022 · University of British Columbia · Mathematics (PhD)
  • September 2016 - May 2017 · University of Maine · Field of study: Mathematics
  • September 2014 - April 2016

Publications (16)
Article
Full-text available
An explicit formula based on matrix algebra to approximate the diagonal entries of a Hessian matrix with any number of sample points is introduced. When the derivative-free technique called generalized centered simplex gradient is used to approximate the gradient, then the formula can be computed for only one additional function evaluation. An erro...
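A minimal sketch of the special case over the coordinate directions, assuming the standard second-order central difference; the paper's formula covers arbitrary sample sets via matrix algebra, and the function name below is illustrative only.

    import numpy as np

    def diag_hessian_sketch(f, x0, h=1e-4):
        # Central second differences along each coordinate direction.
        # The points x0 +/- h*e_i are exactly the ones a centred simplex
        # gradient over the coordinate directions already evaluates,
        # so f(x0) is the single additional function evaluation.
        x0 = np.asarray(x0, dtype=float)
        fx = f(x0)
        diag = np.empty(x0.size)
        for i in range(x0.size):
            e = np.zeros(x0.size)
            e[i] = h
            diag[i] = (f(x0 + e) - 2.0 * fx + f(x0 - e)) / h**2
        return diag

    # f(x, y) = x^2 * y has Hessian diagonal (2y, 0); at (1, 2) this is (4, 0).
    print(diag_hessian_sketch(lambda x: x[0]**2 * x[1], [1.0, 2.0]))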
Preprint
Full-text available
The properties of positive bases make them a useful tool in derivative-free optimization (DFO) and an interesting concept in mathematics. The notion of the cosine measure helps to quantify the quality of a positive basis. It provides information on how well the vectors in the positive basis uniformly cover the space considered. The number of...
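For intuition, the cosine measure of a set D of nonzero vectors is cm(D) = min over unit vectors u of max over d in D of (u·d)/‖d‖. A hedged numerical sketch follows (a sampling estimator, not the paper's method; sampling u can only overestimate the true minimum):

    import numpy as np

    def cosine_measure_estimate(D, n_samples=200_000, seed=0):
        # Estimate cm(D) by sampling random unit vectors u and taking the
        # smallest observed value of max over d of cos(angle between u and d).
        rng = np.random.default_rng(seed)
        Dn = D / np.linalg.norm(D, axis=0)        # normalize columns of D
        U = rng.standard_normal((D.shape[0], n_samples))
        U /= np.linalg.norm(U, axis=0)            # random unit vectors
        return (Dn.T @ U).max(axis=0).min()

    # Minimal positive basis {e1, e2, -e1-e2} in R^2; cm = cos(67.5 deg) ~ 0.3827.
    D = np.array([[1.0, 0.0, -1.0],
                  [0.0, 1.0, -1.0]])
    print(cosine_measure_estimate(D))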
Preprint
Full-text available
Model-based methods are popular in derivative-free optimization (DFO). In most of them, a single model function is built to approximate the objective function. This is generally based on the assumption that the objective function is one blackbox. However, some real-life and theoretical problems show that the objective function may consist of severa...
Preprint
Full-text available
An explicit formula to approximate the diagonal entries of the Hessian is introduced. When the derivative-free technique called generalized centered simplex gradient is used to approximate the gradient, then the formula can be computed for only one additional function evaluation. An error bound is introduced and provides information on the f...
Preprint
Full-text available
This work investigates the asymptotic behaviour of the gradient approximation method called the generalized simplex gradient (GSG). This method has an error bound that at first glance seems to tend to infinity as the number of sample points increases, but with some careful construction, we show that this is not the case. For functions in finite dim...
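An illustrative experiment (with an assumed test function, not the paper's construction): the generalized simplex gradient built from forward differences and the pseudoinverse remains accurate as the number of random sample points grows.

    import numpy as np

    rng = np.random.default_rng(1)
    f = lambda x: np.sin(x[0]) + x[1]**2        # test function; gradient at 0 is (1, 0)
    x0 = np.zeros(2)
    for m in (2, 8, 32, 128, 512):
        S = 1e-3 * rng.standard_normal((2, m))           # m sample directions
        d = np.array([f(x0 + s) - f(x0) for s in S.T])   # forward differences
        g = np.linalg.pinv(S.T) @ d                      # generalized simplex gradient
        print(m, np.linalg.norm(g - np.array([1.0, 0.0])))  # error stays small as m grows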
Article
Full-text available
Using the Moore–Penrose pseudoinverse this work generalizes the gradient approximation technique called the centred simplex gradient to allow sample sets containing any number of points. This approximation technique is called the generalized centred simplex gradient. We develop error bounds and, under a full-rank condition, show that the error boun...
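A minimal sketch of this construction, reading the pseudoinverse as a least-squares solve; the function name below is illustrative.

    import numpy as np

    def gcsg(f, x0, S):
        # Generalized centred simplex gradient: least-squares solution of
        # S^T g = delta, where delta holds the centred differences and the
        # matrix S (n x m) may have any number m of columns (directions).
        x0 = np.asarray(x0, dtype=float)
        delta = np.array([(f(x0 + s) - f(x0 - s)) / 2.0 for s in S.T])
        return np.linalg.pinv(S.T) @ delta

    # Three directions in R^2 for f(x, y) = x^2 + 3y at (1, 2):
    f = lambda x: x[0]**2 + 3.0 * x[1]
    S = 1e-4 * np.array([[1.0, 0.0, 1.0],
                         [0.0, 1.0, 1.0]])
    print(gcsg(f, [1.0, 2.0], S))   # ~ (2, 3); centred differences are exact on quadratics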
Preprint
Full-text available
This work introduces the nested-set Hessian approximation, a second-order approximation method that can be used in any derivative-free optimization routine that requires such information. It is built on the foundation of the generalized simplex gradient and proved to have an error bound that is on the order of the maximal radius of the two sets use...
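A hedged sketch of the two-set idea, assuming the forward-difference generalized simplex gradient as the building block; the precise scheme and its error analysis are the paper's, and the names below are illustrative.

    import numpy as np

    def gsg(f, x0, S):
        # Generalized simplex gradient: least squares on forward differences.
        d = np.array([f(x0 + s) - f(x0) for s in S.T])
        return np.linalg.pinv(S.T) @ d

    def nested_set_hessian(f, x0, S, T):
        # The outer set S differences inner-set simplex gradients (over T)
        # taken at x0 and at each shifted point x0 + s_i.
        g0 = gsg(f, x0, T)
        G = np.array([gsg(f, x0 + s, T) - g0 for s in S.T])   # m x n
        return np.linalg.pinv(S.T) @ G

    # f(x, y) = x^2 + x*y has constant Hessian [[2, 1], [1, 0]]:
    f = lambda x: x[0]**2 + x[0] * x[1]
    S = T = 1e-3 * np.eye(2)
    print(nested_set_hessian(f, np.zeros(2), S, T))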
Article
Full-text available
We consider the question of numerically approximating the derivative of a smooth function using only function evaluations. In particular, we examine the regression gradient, the generalized simplex gradient and the generalized centered simplex gradient, three numerical techniques based on using function values at a collection of sample points to co...
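Of the three, the regression gradient can be sketched as an ordinary least-squares fit of an affine model over the sample points (an illustrative implementation, not the paper's code):

    import numpy as np

    def regression_gradient(f, X):
        # Fit f(x) ~ a + g.x over the rows of X by least squares; return g.
        X = np.asarray(X, dtype=float)
        A = np.hstack([np.ones((X.shape[0], 1)), X])   # design matrix [1, x^T]
        y = np.array([f(x) for x in X])
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        return coef[1:]

    # Fifty sample points near the origin for f(x, y) = 2x - y:
    rng = np.random.default_rng(0)
    X = 1e-2 * rng.standard_normal((50, 2))
    print(regression_gradient(lambda x: 2.0 * x[0] - x[1], X))   # ~ (2, -1)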
Article
Full-text available
A simplex, the convex hull of a set of n+1 affinely independent points, is a useful tool in derivative-free optimization. The term uniform simplex was used by Audet and Hare (Derivative-free and blackbox optimization. Springer series in operations research and financial engineering, Springer, Cham, 2017). The purpose of this paper is to provide...
Article
Full-text available
Originally developed in 1954, positive bases and positive spanning sets have been found to be a valuable concept in derivative-free optimization (DFO). The quality of a positive basis (or positive spanning set) can be quantified via the cosine measure and convergence properties of certain DFO algorithms are intimately linked to the value of this me...
Preprint
Full-text available
Using the Moore–Penrose pseudoinverse, this work generalizes the gradient approximation technique called centred simplex gradient to allow sample sets containing any number of points. This approximation technique is called the generalized centred simplex gradient. We develop error bounds and, under a full-rank condition, show that the error...
Preprint
Full-text available
Originally developed in 1954, positive bases and positive spanning sets have been found to be a valuable concept in derivative-free optimization (DFO). The quality of a positive basis (or positive spanning set) can be quantified via the cosine measure and convergence properties of certain DFO algorithms are intimately linked to the value of t...
Preprint
Full-text available
Originally developed in 1954, positive bases and positive spanning sets have been found to be a valuable concept in derivative-free optimization (DFO). The quality of a positive basis (or positive spanning set) can be quantified via the cosine measure and convergence properties of certain DFO algorithms are intimately linked to the value of this me...
Preprint
Full-text available
We consider the question of numerically approximating the derivative of a smooth function using only function evaluations. In particular, we examine the regression gradient, the generalized simplex gradient and the generalized centered simplex gradient, three numerical techniques based on using function values at a collection of sample points to co...
Preprint
Full-text available
Simplex gradients, essentially the gradient of a linear approximation, are a popular tool in derivative-free optimization (DFO). In 2015, a product rule, a quotient rule and a sum rule for simplex gradients were introduced by Regis [14]. Unfortunately, those calculus rules only work under a restrictive set of assumptions. The purpose of this paper...
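For flavour, a numeric check of the product-rule identity for simplex gradients (a sketch under simplifying assumptions; the precise conditions are those stated by Regis and relaxed in the paper):

    import numpy as np

    def gsg(f, x0, S):
        # Generalized simplex gradient (forward differences, least squares).
        d = np.array([f(x0 + s) - f(x0) for s in S.T])
        return np.linalg.pinv(S.T) @ d

    # Product rule: grad_S(f*g)(x) ~ f(x) * grad_S(g)(x) + g(x) * grad_S(f)(x).
    f = lambda x: x[0] + 2.0 * x[1]
    g = lambda x: 3.0 * x[0] - x[1]
    x0 = np.array([1.0, -1.0])
    S = 1e-5 * np.eye(2)
    lhs = gsg(lambda x: f(x) * g(x), x0, S)
    rhs = f(x0) * gsg(g, x0, S) + g(x0) * gsg(f, x0, S)
    print(lhs, rhs)   # agree to O(1e-5) here; exact equality needs the stated assumptions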
