Jorge Nocedal’s research while affiliated with Northwestern University and other places


Publications (149)


Analysis of a new algorithm for one-dimensional minimization
  • Article

January 1979 · Computing · Jorge Nocedal · 10 Reads · 14 Citations

Davidon has recently introduced a new approach to optimization using the idea of nonlinear scaling. In this paper we study the algorithm that results when applying his ideas to the one-dimensional case. We show that the algorithm is locally convergent with Q-order equal to 2 and compare it with the method of cubic interpolation.
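The baseline the paper compares against, cubic interpolation, fits a cubic to the function and derivative values at two points and moves to the minimizer of that cubic. The following Python sketch shows one such interpolation step; the helper name and the test function are illustrative assumptions, not taken from the article.

    import math

    def cubic_interpolation_step(x0, f0, g0, x1, f1, g1):
        # One 1-D minimization step by cubic interpolation: fit a cubic to
        # (f, f') at x0 and x1 and return the minimizer of that cubic
        # (standard line-search interpolation formula).
        d1 = g0 + g1 - 3.0 * (f0 - f1) / (x0 - x1)
        d2 = math.copysign(1.0, x1 - x0) * math.sqrt(d1 * d1 - g0 * g1)
        return x1 - (x1 - x0) * (g1 + d2 - d1) / (g1 - g0 + 2.0 * d2)

    # Illustrative use on f(x) = x**4 - 3*x, whose minimizer is (3/4)**(1/3) ~ 0.9086.
    f = lambda x: x**4 - 3.0 * x
    g = lambda x: 4.0 * x**3 - 3.0
    print(cubic_interpolation_step(0.5, f(0.5), g(0.5), 1.5, f(1.5), g(1.5)))

A single step from the bracket [0.5, 1.5] already lands near 0.917, illustrating the fast local behavior against which the Q-order-2 method is compared.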







On the Convergence of Successive Linear Programming Algorithms

25 Reads · 6 Citations

We analyze the global convergence properties of a class of penalty methods for nonlinear programming. These methods include successive linear programming approaches, and more specifically the SLP-EQP approach presented in (1). Every iteration requires the solution of two trust region subproblems involving linear and quadratic models, respectively. The interaction between the trust regions of these subproblems requires careful consideration. It is shown under mild assumptions that there exists an accumulation point which is a critical point for the penalty function.
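To make the linear-model subproblem concrete, the sketch below minimizes a linear model of the l1 penalty function over an l-infinity trust region by rewriting it as a linear program, which is the kind of LP an SLP-type step must solve. The data (g, A, c), the penalty parameter nu, and the use of scipy.optimize.linprog are illustrative assumptions, not the implementation of the paper.

    import numpy as np
    from scipy.optimize import linprog

    def slp_subproblem(g, A, c, nu, delta):
        # Minimize g.T d + nu * || c + A d ||_1  subject to  ||d||_inf <= delta.
        # Slack variables r, s >= 0 with c + A d = r - s turn the l1 term into
        # nu * sum(r + s), giving a standard-form LP.
        m, n = A.shape
        cost = np.concatenate([g, nu * np.ones(m), nu * np.ones(m)])
        A_eq = np.hstack([A, -np.eye(m), np.eye(m)])
        b_eq = -c
        bounds = [(-delta, delta)] * n + [(0, None)] * (2 * m)
        res = linprog(cost, A_eq=A_eq, b_eq=b_eq, bounds=bounds, method="highs")
        return res.x[:n]

    # Tiny illustrative instance: two variables, one linearized equality constraint.
    g = np.array([1.0, -2.0])
    A = np.array([[1.0, 1.0]])
    c = np.array([0.5])
    print(slp_subproblem(g, A, c, nu=10.0, delta=1.0))

With a penalty parameter this large the LP drives the linearized constraint to zero and returns d = (-1, 0.5); in an SLP-EQP method the quadratic-model (EQP) phase would then refine such a step.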


Figures: Fig. 1, performance in CPU time on 95 unconstrained problems; Fig. 2, performance in function evaluations on 187 unconstrained problems; Fig. 3, CPU-time comparison on 27 equality constrained problems; Fig. 5, CPU-time comparison on 67 constrained problems; Fig. 6, function-evaluation comparison on 258 constrained problems.
Assessing the potential of interior point methods for nonlinear optimization
  • Article
  • Full-text available

94 Reads · 11 Citations


A Line Search Penalty Method for Nonlinear Optimization

20 Reads · 2 Citations

Line search algorithms for nonlinear programming must include safeguards to enjoy global convergence properties. This paper describes an exact penalization approach that extends the class of problems that can be solved with line search SQP methods. In the new algorithm, the penalty parameter is adjusted at every iteration to ensure sufficient progress toward linear feasibility and to promote acceptance of the step. A trust region is used to assist in the determination of the penalty parameter (but not in the step computation). It is shown that the algorithm enjoys favorable global convergence properties. Numerical experiments illustrate the behavior of the algorithm in various difficult situations.
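A minimal sketch of two ingredients the abstract refers to, the exact (l1) penalty function and an Armijo backtracking line search on it, is given below; the penalty parameter is assumed to have been chosen already, and the toy problem and function names are illustrative, not the paper's algorithm.

    import numpy as np

    def phi(x, nu, f, c):
        # Exact l1 penalty (merit) function: phi(x; nu) = f(x) + nu * ||c(x)||_1.
        return f(x) + nu * np.sum(np.abs(c(x)))

    def backtracking(x, d, nu, f, c, D_phi, alpha=1.0, rho=0.5, eta=1e-4):
        # Armijo backtracking on phi along d; D_phi is (an upper bound on) the
        # directional derivative of phi at x along d and must be negative.
        while phi(x + alpha * d, nu, f, c) > phi(x, nu, f, c) + eta * alpha * D_phi:
            alpha *= rho
        return alpha

    # Illustrative problem: min x1^2 + x2^2  subject to  x1 + x2 - 1 = 0.
    f = lambda x: x[0] ** 2 + x[1] ** 2
    c = lambda x: np.array([x[0] + x[1] - 1.0])
    x = np.array([2.0, 0.0])
    d = np.array([-1.5, 0.5])   # a step that satisfies the linearized constraint
    nu = 1.0                    # penalty parameter, assumed already adjusted
    # When c + grad(c).d = 0, the directional derivative of phi along d is
    # grad(f).d - nu * ||c(x)||_1.
    D_phi = np.array([2 * x[0], 2 * x[1]]) @ d - nu * np.sum(np.abs(c(x)))
    alpha = backtracking(x, d, nu, f, c, D_phi)
    print(alpha, x + alpha * d)

On this toy instance the full step alpha = 1 already satisfies the relaxed sufficient-decrease test, so no backtracking occurs.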


Citations (83)


... Various algorithms have been designed to solve deterministic equality-constrained optimization problems (see [6,11] for further references), while recent research has focused on developing stochastic optimization algorithms. There has been a growing interest in adapting line search and trust region methods in stochastic framework for unconstrained optimization problems [1-5, 9, 10, 12, 14, 20, 22, 24, 26-28], but significantly fewer algorithms have been proposed to solve stochastic equality-constrained optimization problems (see [6] for further references and [7,13,15,34,37]). ...

Reference:

IPAS: An Adaptive Sample Size Method for Weighted Finite Sum Problems with Linear Equality Constraints
Constrained Optimization in the Presence of Noise
  • Citing Article
  • August 2023

SIAM Journal on Optimization

... Deterministic version of this condition has been used in [2] for the analysis of a proximal inexact trust-region algorithm. A stochastic version imposed in expectation was used in [4] and an alternative, that is meant to be more practical, is suggested in [60]. Further variants for general constrained optimization are proposed in [8]. ...

Constrained and composite optimization via adaptive sampling methods
  • Citing Article
  • May 2023

IMA Journal of Numerical Analysis

... In this paper, we focus on noise-aware algorithms for solving such problems, i.e., algorithms that exploit information about the noise and that are adaptive. In the unconstrained and bounded noise setting, several noise-aware algorithms that leverage noise-level dependent constants (e.g., ϵ f and ϵ g ) to evaluate the acceptability of steps within line search [5,6,29,48] or trust region [2,12,30,44] methods have been proposed. A natural extension of these algorithms to the constrained setting assumes bounded noise in the objective function and associated derivatives, and possibly in the constraint functions. ...

A trust region method for noisy unconstrained optimization
  • Citing Article
  • March 2023

Mathematical Programming

... Our findings indicate a general superiority of newly developed methods over the basic version of IGD (inexact gradient descent) method without momentum. As discussed in [20,45], IGD in general outperforms other well-developed methods in derivative-free optimization including FMINSEARCH, i.e., the Nelder-Mead simplex-based method from [25], the implicit filtering algorithms [10], and the random gradient-free algorithm for smooth optimization proposed by Nesterov and Spokoiny [34]. As a consequence, IGDm can be recommended as a preferable optimizer for derivative-free smooth (convex and nonconvex) optimization problems. ...

On the numerical performance of finite-difference-based methods for derivative-free optimization
  • Citing Article
  • September 2022

Optimization Methods and Software

... Finally, it would be interesting to compare our methods with recent results on adaptive finite-difference methods [35], which automatically adjust the finite-difference interval to balance truncation error and measurement error, making them suitable for noisy derivativefree optimization. We keep these questions for further research. ...

Adaptive Finite-Difference Interval Estimation for Noisy Derivative-Free Optimization
  • Citing Article
  • August 2022

SIAM Journal on Scientific Computing
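The trade-off mentioned in the citing excerpt above, truncation error versus measurement (noise) error, has a well-known textbook resolution for forward differences: with noise of size eps_f and |f''| bounded by L, the interval h = 2*sqrt(eps_f / L) roughly balances the two error sources. The sketch below applies that classical rule; it is an illustration under those assumptions, not the adaptive estimation procedure of the cited article.

    import math
    import random

    def forward_difference(f, x, eps_f, curvature_bound):
        # Forward-difference estimate of f'(x) with a noise-aware interval:
        # truncation error ~ curvature_bound * h / 2, measurement error ~ 2 * eps_f / h,
        # and h = 2 * sqrt(eps_f / curvature_bound) balances the two.
        h = 2.0 * math.sqrt(eps_f / curvature_bound)
        return (f(x + h) - f(x)) / h

    # Illustrative noisy function: f(x) = x^2 plus uniform noise of size about 1e-6.
    noisy_f = lambda x: x * x + 1e-6 * (2.0 * random.random() - 1.0)
    print(forward_difference(noisy_f, 1.0, eps_f=1e-6, curvature_bound=2.0))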

... Byrd et al. [3] have proposed a stochastic quasi-Newton method in limited memory form through subsampled Hessian-vector products. Shi et al. [23] have proposed practical extensions of the BFGS and L-BFGS methods for nonlinear optimization that are capable of dealing with noise by employing a new linesearch technique. Xie et al. [24] have considered the convergence analysis of quasi-Newton methods when there are (bounded) errors in both function and gradient evaluations, and established conditions under which an Armijo-Wolfe linesearch on the noisy function yields sufficient decrease in the true objective function. ...

A Noise-Tolerant Quasi-Newton Algorithm for Unconstrained Optimization
  • Citing Article
  • March 2022

SIAM Journal on Optimization
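The line search techniques mentioned in the citing excerpt above relax the usual sufficient-decrease test so that steps are not rejected merely because of noise in the function values. One common form of such a relaxation, shown below as a sketch and not as the exact condition of the cited articles, adds a slack of 2*eps_f to the Armijo inequality.

    def relaxed_armijo(f_x, f_new, slope, alpha, eps_f, c1=1e-4):
        # Noise-tolerant sufficient-decrease test: the ordinary Armijo condition
        # f_new <= f_x + c1 * alpha * slope is relaxed by 2 * eps_f, where eps_f
        # bounds the noise in the observed function values.
        return f_new <= f_x + c1 * alpha * slope + 2.0 * eps_f

    # Illustrative check: a tiny step whose observed increase is smaller than the
    # noise level eps_f = 1e-6 is still accepted.
    print(relaxed_armijo(f_x=1.0, f_new=1.0 + 5e-7, slope=-1.0, alpha=1e-7, eps_f=1e-6))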

... In this paper, we focus on noise-aware algorithms for solving such problems, i.e., algorithms that exploit information about the noise and that are adaptive. In the unconstrained and bounded noise setting, several noise-aware algorithms that leverage noise-level dependent constants (e.g., ϵ f and ϵ g ) to evaluate the acceptability of steps within line search [5,6,29,48] or trust region [2,12,30,44] methods have been proposed. A natural extension of these algorithms to the constrained setting assumes bounded noise in the objective function and associated derivatives, and possibly in the constraint functions. ...

Analysis of the BFGS Method with Errors
  • Citing Article
  • January 2020

SIAM Journal on Optimization

... In this paper, we focus on noise-aware algorithms for solving such problems, i.e., algorithms that exploit information about the noise and that are adaptive. In the unconstrained and bounded noise setting, several noise-aware algorithms that leverage noise-level dependent constants (e.g., ϵ f and ϵ g ) to evaluate the acceptability of steps within line search [5,6,29,48] or trust region [2,12,30,44] methods have been proposed. A natural extension of these algorithms to the constrained setting assumes bounded noise in the objective function and associated derivatives, and possibly in the constraint functions. ...

Derivative-Free Optimization of Noisy Functions via Quasi-Newton Methods
  • Citing Article
  • March 2018

SIAM Journal on Optimization

... Due to the importance of machine learning and deep learning, [29] and [30] analyze quasi-Newton methods performance in these fields. Also, [31] and [32] seek to determine a suitable batch selection method for training machine learning models. ...

A Progressive Batching L-BFGS Method for Machine Learning

Jorge Nocedal · [...]
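Progressive batching methods of the kind cited above grow the sample (batch) size as the iterations proceed, typically by testing whether the variance of the sampled gradient is small relative to its norm. The sketch below implements a generic sample-size ("norm") test in that spirit; the threshold theta, the array layout, and the function name are illustrative assumptions, not the exact test of the cited article.

    import numpy as np

    def next_batch_size(per_example_grads, theta=0.9):
        # per_example_grads has shape (batch_size, n_params): one gradient per
        # sampled example. If the sample variance of the averaged gradient is
        # large relative to its squared norm, return a larger suggested batch
        # size; otherwise keep the current one.
        b, _ = per_example_grads.shape
        g = per_example_grads.mean(axis=0)
        var = per_example_grads.var(axis=0, ddof=1).sum()   # trace of sample covariance
        if var / b > theta ** 2 * np.dot(g, g):
            return int(np.ceil(var / (theta ** 2 * np.dot(g, g))))
        return b

    # Illustrative call with synthetic per-example gradients.
    rng = np.random.default_rng(0)
    grads = rng.normal(loc=1.0, scale=8.0, size=(32, 10))
    print(next_batch_size(grads))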