Article · Publisher preview available

Convergence analysis of a nonmonotone projected gradient method for multiobjective optimization problems


Abstract

In this work we consider an extension of the projected gradient method (PGM) to constrained multiobjective problems. The projected gradient scheme for multiobjective optimization proposed by Graña Drummond and Iusem and analyzed by Fukuda and Graña Drummond is extended to include a nonmonotone line search based on the average of the successive previous function values instead of the traditional Armijo-like rules. Under standard assumptions, stationarity of the accumulation points is established. Moreover, under standard convexity assumptions, we prove full convergence to weakly Pareto optimal solutions of any sequence produced by the proposed algorithm.
Optimization Letters (2019) 13:1365–1379
https://doi.org/10.1007/s11590-018-1353-8
ORIGINAL PAPER
N. S. Fazzio¹ · M. L. Schuverdt¹
Received: 30 January 2018 / Accepted: 3 November 2018 / Published online: 16 November 2018
© Springer-Verlag GmbH Germany, part of Springer Nature 2018
Keywords Multiobjective optimization · Projected gradient methods · Nonmonotone line search · Global convergence
1 Introduction
We will consider the constrained multiobjective optimization problem (MOP) of the form:
Minimize $F(x)$ subject to $x \in C$,   (1)
where $F: \mathbb{R}^n \to \mathbb{R}^r$, $F(x) = (F_1(x), \dots, F_r(x))$, is a continuously differentiable vector-valued function and $C \subseteq \mathbb{R}^n$ is a closed and convex set.
In a multicriteria setting there are many optimality definitions. Throughout this
paper, we are interested in the Pareto and weak Pareto optimality concepts. A feasible
point $x^*$ of problem (1) is called a Pareto optimum or an efficient solution [19] if there is no $x \in C$ such that $F(x) \le F(x^*)$ and $F(x) \ne F(x^*)$. A point $x^* \in C$ is said to be a weak Pareto optimum or a weakly efficient solution if there is no $x \in C$ such that $F(x) < F(x^*)$.
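As a point of reference, the average-based nonmonotone projected gradient step described in the abstract can be sketched as follows. This is a minimal illustrative sketch, not the authors' exact algorithm: it assumes a box-shaped feasible set C = {x : lb <= x <= ub}, a Zhang–Hager-style moving average as the reference value, and illustrative names and parameters (direction, nonmonotone_pg_step, sigma, eta) that are not the paper's notation.

import numpy as np
from scipy.optimize import minimize

def direction(x, grads, lb, ub):
    # Direction subproblem: min_{t, v} t + 0.5*||v||^2
    # subject to grad_i^T v <= t for every objective i and lb <= x + v <= ub.
    n = x.size

    def obj(z):                       # z = (t, v)
        return z[0] + 0.5 * z[1:] @ z[1:]

    cons = [{"type": "ineq", "fun": lambda z, g=g: z[0] - g @ z[1:]} for g in grads]
    bounds = [(None, None)] + list(zip(lb - x, ub - x))
    res = minimize(obj, np.zeros(n + 1), method="SLSQP", bounds=bounds, constraints=cons)
    return res.x[1:], res.x[0]        # direction v and optimal value t (t <= 0)

def nonmonotone_pg_step(F, jacF, x, lb, ub, Cbar, Q, eta=0.85, sigma=1e-4):
    # One projected-gradient step with an average-based (nonmonotone) line search.
    # F(x): vector of objective values; jacF(x): r x n matrix of objective gradients.
    grads = jacF(x)
    v, t = direction(x, grads, lb, ub)
    if t > -1e-10:                    # x is (approximately) stationary
        return x, Cbar, Q
    alpha = 1.0
    # Accept alpha when every objective lies below its moving average Cbar_i
    # plus the usual Armijo decrease term.
    while not np.all(F(x + alpha * v) <= Cbar + sigma * alpha * (grads @ v)):
        alpha *= 0.5                  # backtracking
    x_new = x + alpha * v
    Q_new = eta * Q + 1.0             # Zhang-Hager-style average update
    Cbar_new = (eta * Q * Cbar + F(x_new)) / Q_new
    return x_new, Cbar_new, Q_new

Starting from Cbar = F(x0) and Q = 1, taking eta = 0 recovers the usual monotone Armijo-like rule, while eta close to 1 allows occasional increases of the objective values.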
✉ M. L. Schuverdt
schuverd@mate.unlp.edu.ar
N. S. Fazzio
nadiafazzio@gmail.com
1 CONICET, Department of Mathematics, FCE, University of La Plata, La Plata, Argentina
... In addition to the aforementioned techniques, classical derivative-based methods have been extended to solve MOPs: the steepest descent method [20,21], initially proposed for multicriteria optimization and later for vector optimization, converges to a critical point of the objective function; the projected gradient method [59], developed for constrained MOPs, converges to critical points and weakly efficient points for nonconvex and convex MOPs, respectively. Other classical derivative-based methods, such as the projected gradient method [60,61,62,63,64,65], Newton's method [22], quasi-Newton methods [23,24,25,26,27,28], conjugate gradient methods [29,30], and proximal gradient methods [31,32], have also been extended from scalar optimization to MOPs. ...
... Drummond and Iusem [59] showed that the projected gradient method converges to critical points and weakly efficient points for nonconvex and convex MOPs, respectively. Further developments related to projected gradient methods can be found in the literature [60,61,62,63,64,65] and references therein. To the best of the authors' knowledge, there is no projected gradient method developed for uncertain constrained multiobjective optimization problems. ...
... If x* is an efficient (weakly efficient) solution, then F(x*) is called a non-dominated (weakly non-dominated) point, and the set of efficient solutions and the set of non-dominated points (Pareto front) are called the efficient set and the non-dominated set, respectively. For further details of these definitions, see references [60,61,62,63,64,65]. In uncertain multiobjective optimization, input data that are uncertain affect how the optimization problem is formulated. ...
Preprint
Full-text available
Numerous real-world applications of uncertain multiobjective optimization problems (UMOPs) can be found in science, engineering, business, and management. Robust optimization is a relatively new field for handling the solution of uncertain optimization problems. In the current study, an extended version of the projected gradient method (PGM) for deterministic smooth multiobjective optimization problems (MOPs) is presented as a PGM for UMOPs. An objective-wise worst-case cost (OWWC) type robust counterpart is considered, and the PGM is used to solve a UMOP through its OWWC. A projected gradient descent algorithm is developed from the theoretical findings. It is demonstrated that the sequence generated by the projected gradient descent algorithm converges to a weak Pareto optimal solution of the robust counterpart, which is a robust weak Pareto optimal solution of the UMOP. Under a few reasonable assumptions, full convergence of the projected gradient descent algorithm is also established. Finally, numerical tests are presented to validate the proposed method.
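For orientation, the objective-wise worst-case robust counterpart referred to above is commonly written as follows; the uncertainty sets $\mathcal{U}_i$ below are a standard assumption of this sketch and are not quoted from the preprint:

$$\min_{x \in C} \; \Big( \sup_{u_1 \in \mathcal{U}_1} F_1(x, u_1), \ \dots, \ \sup_{u_r \in \mathcal{U}_r} F_r(x, u_r) \Big),$$

so that each objective is replaced by its worst case over its own uncertainty set before the resulting deterministic multiobjective problem is solved.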
... In recent years, several methods have been obtained by extending well-known optimization algorithms for single-objective optimization. Notable examples include the steepest descent method proposed in [15], projected gradient methods [11,18,19,1], Newton's method proposed in [14], proximal-point methods [3,2], trust-region methods [36,5], and also nonmonotone line search methods [13,29]. ...
... Various selections for $\nu_k$ result in different nonmonotone step-size rules. Specifically, considering suitable choices for $\nu_k$, our method encompasses instances of the methods proposed in [11,13,29]. Assuming that the sequence $\{\nu_k\}_{k\ge 0}$ is summable, and that the ith objective function has Hölder continuous gradient with constant $H_i$ and smoothness parameter $\theta_i \in (0, 1]$, we show that the proposed method takes no more than $\mathcal{O}\left(\epsilon^{-\left(1+\frac{1}{\theta_{\min}}\right)}\right)$ iterations to find an $\epsilon$-approximate Pareto critical point of a problem with m objectives and $\theta_{\min} = \min_{i=1,\dots,m}\{\theta_i\}$. ...
... • an instance of the nonmonotone projected gradient method proposed in [13], when $m_k = 0$ for all $k \ge 0$; and ...
Preprint
Full-text available
In this work we propose a general nonmonotone line-search method for nonconvex multiobjective optimization problems with convex constraints. At the kth iteration, the degree of nonmonotonicity is controlled by a vector $\nu_k$ with nonnegative components. Different choices for $\nu_k$ lead to different nonmonotone step-size rules. Assuming that the sequence $\{\nu_k\}_{k\ge 0}$ is summable, and that the ith objective function has Hölder continuous gradient with smoothness parameter $\theta_i \in (0,1]$, we show that the proposed method takes no more than $\mathcal{O}\left(\epsilon^{-\left(1+\frac{1}{\theta_{\min}}\right)}\right)$ iterations to find an $\epsilon$-approximate Pareto critical point for a problem with m objectives and $\theta_{\min} = \min_{i=1,\dots,m}\{\theta_i\}$. In particular, this complexity bound applies to the methods proposed by Drummond and Iusem (Comput. Optim. Appl. 28: 5--29, 2004), by Fazzio and Schuverdt (Optim. Lett. 13: 1365--1379, 2019), and by Mita, Fukuda and Yamashita (J. Glob. Optim. 75: 63--90, 2019). The generality of our approach also allows the development of new methods for multiobjective optimization. As an example, we propose a new nonmonotone step-size rule inspired by the Metropolis criterion. Preliminary numerical results illustrate the benefit of nonmonotone line searches and suggest that our new rule is particularly suitable for multiobjective problems in which at least one of the objectives has many non-global local minimizers.
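One natural schematic reading of the role of $\nu_k$ (an assumption made here purely for illustration, not a formula quoted from the preprint) is as a componentwise additive relaxation of the Armijo-type condition:

$$F_i(x^k + \alpha d^k) \le F_i(x^k) + \sigma \alpha \langle \nabla F_i(x^k), d^k \rangle + \nu_{k,i}, \qquad i = 1, \dots, m,$$

so that summability of $\{\nu_k\}_{k\ge 0}$ bounds the total amount of nonmonotone increase allowed along the iterations.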
... This line of research was initiated by Fliege and Svaiter in 2000 with the extension of the steepest descent method [23] (see also [32]). Since then, several methods have been studied, including Newton [12,22,28,31,51], quasi-Newton [1,33,34,39,42,44,46,47,48], conjugate gradient [27,29,37], conditional gradient [2,10], projected gradient [3,20,24,25,30], and proximal methods [5,8,9,11,13]. ...
... On the other hand, the superlinear convergence rate depends on whether $r_j^k$ satisfies Assumption 4.2. Next, we explore suitable choices for the multiplier $\mu^k$ in (20) to ensure that $r_j^k$ satisfies the aforementioned assumption. In what follows, we will assume that Assumption 4.1 holds. ...
... Furthermore, since $\{x^k\} \subset U$, by (20) and using continuity arguments, there exists a constant $c > 0$ such that ...
Article
Full-text available
We propose a modified BFGS algorithm for multiobjective optimization problems with global convergence, even in the absence of convexity assumptions on the objective functions. Furthermore, we establish a local superlinear rate of convergence of the method under usual conditions. Our approach employs Wolfe step sizes and ensures that the Hessian approximations are updated and corrected at each iteration to address the lack of a convexity assumption. Numerical results show that the introduced modifications preserve the practical efficiency of the BFGS method.
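The abstract does not spell out the correction of the Hessian approximations. As a generic illustration of how such updates are commonly safeguarded when convexity is not assumed, the sketch below uses standard Powell damping (named explicitly as such; it is not necessarily the authors' modification):

import numpy as np

def damped_bfgs_update(B, s, y, theta=0.2):
    # Powell-damped BFGS update: if the curvature condition s^T y >= theta * s^T B s
    # fails, y is replaced by a convex combination with B s, so that the updated
    # matrix stays symmetric positive definite without assuming convexity.
    Bs = B @ s
    sBs = s @ Bs
    sy = s @ y
    if sy < theta * sBs:
        phi = (1.0 - theta) * sBs / (sBs - sy)
        y = phi * y + (1.0 - phi) * Bs
        sy = s @ y                     # recompute curvature with the damped vector
    return B - np.outer(Bs, Bs) / sBs + np.outer(y, y) / sy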
... Apart from scalarization methods, several classical derivative-based methods for scalar optimization have been extended to multiobjective optimization. These derivative-based methods for multiobjective optimization include the steepest descent method [6,31], Newton's method [7], quasi-Newton methods [9,10,32,33,34,35], conjugate gradient methods [36,37], projected gradient methods [1,38,39,40,41], and proximal gradient methods [42,43]. Apart from the DMOP, we define an uncertain multiobjective optimization problem as follows: ...
Preprint
Full-text available
In this article, we extend our previous work (Applicable Analysis, 2024, pp. 1-25) on the steepest descent method for uncertain multiobjective optimization problems. While that study established local convergence, it did not address global convergence and the rate of convergence of the steepest descent algorithm. To bridge this gap, we provide rigorous proofs for both global convergence and the linear convergence rate of the steepest descent algorithm. Global convergence analysis strengthens the theoretical foundation of the steepest descent method for uncertain multiobjective optimization problems, offering deeper insights into its efficiency and robustness across a broader class of optimization problems. These findings enhance the method's practical applicability and contribute to the advancement of robust optimization techniques.
... The concept of iterative methods for solving MOPs was first introduced in Fliege and Svaiter (2000). Since then, several authors have expanded upon this area, including the development of Newton's method (Fliege et al. 2009), the quasi-Newton method (Ansary and Panda 2015; Lai et al. 2020; Mahdavi-Amiri and Salehi Sadaghiani 2020; Morovati et al. 2018; Povalej 2014; Qu et al. 2011), the conjugate gradient method (Gonçalves and Prudente 2020; Lucambio Pérez and Prudente 2018), the projected gradient method (Cruz et al. 2011; Drummond and Iusem 2004; Fazzio and Schuverdt 2019; Fukuda and Drummond 2011; Fukuda and Graña Drummond 2013; Zhao and Yao 2022), and the proximal gradient method (Bonnel et al. 2005; Ceng et al. 2010). Convergence properties are a common characteristic of these methods. ...
Preprint
Full-text available
This paper introduces a nonlinear conjugate gradient method (NCGM) for addressing the robust counterpart of uncertain multiobjective optimization problems (UMOPs). Here, the robust counterpart is defined as the minimum across objective-wise worst-case scenarios. Scalarization techniques for solving the robust counterparts of UMOPs have some drawbacks, such as the need to pre-specify and restrict weights and the fact that the relative importance of the objective functions is unknown beforehand. The NCGM is free from any a priori chosen scalars or ordering information on the objective functions, as required in scalarization methods. With the help of the NCGM, we determine the critical point for the robust counterpart of the UMOP, which is the robust critical point for the UMOP. To tackle this robust counterpart using the NCGM, the approach involves constructing and solving a subproblem to determine a descent direction. Subsequently, a new direction is derived based on parameter selection methods such as Fletcher-Reeves, conjugate descent, Dai-Yuan, Polak-Ribière-Polyak, and Hestenes-Stiefel. An Armijo-type inexact line search is employed to identify an appropriate step length. Utilizing the descent direction and step length, a sequence is generated, and convergence of the proposed method is established. The effectiveness of the proposed method is verified and compared against an existing method using a set of test problems.
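For reference, the scalar prototypes of the conjugate gradient parameters named above are the standard formulas below (with $g_k = \nabla f(x_k)$, $y_k = g_{k+1} - g_k$ and search direction $d_k$); their vector-valued extensions in the preprint differ in the details:

$$\beta_k^{FR} = \frac{\|g_{k+1}\|^2}{\|g_k\|^2}, \quad \beta_k^{CD} = -\frac{\|g_{k+1}\|^2}{d_k^T g_k}, \quad \beta_k^{DY} = \frac{\|g_{k+1}\|^2}{d_k^T y_k}, \quad \beta_k^{PRP} = \frac{g_{k+1}^T y_k}{\|g_k\|^2}, \quad \beta_k^{HS} = \frac{g_{k+1}^T y_k}{d_k^T y_k}.$$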
... • the projected gradient search direction is computed using an exogenous sequence $\{\beta_k\}$, $\beta_k > 0$, as suggested in [34], • nonmonotone line searches are considered. ...
Article
Full-text available
Based on the recently introduced Scaled Positive Approximate Karush–Kuhn–Tucker condition for single-objective problems, we derive a sequential necessary optimality condition for multiobjective problems with equality and inequality constraints as well as additional abstract set constraints. These necessary sequential optimality conditions for multiobjective problems are subject to the same requirements as ordinary (pointwise) optimality conditions: we show that the updated Scaled Positive Approximate Karush–Kuhn–Tucker condition is necessary for a local weak Pareto point of the problem. Furthermore, we propose a variant of the classical Augmented Lagrangian method for multiobjective problems. Our theoretical framework does not require any scalarization. We also discuss the convergence properties of our algorithm with regard to feasibility and global optimality without any convexity assumption. Finally, some numerical results are given to illustrate the practical viability of the method.
... However, nonmonotone (NM) line searches are still a rather recent and insufficiently explored research topic in multiobjective descent methods. In the multiobjective setting, NM line search algorithms were previously considered, for example, in Chen et al. (2023), Fazzio and Schuverdt (2019), Mita et al. (2019), Qu et al. (2017) and Upadhayay et al. (2022). ...
Article
Full-text available
This study analyzes the conditional gradient method for constrained multiobjective optimization problems, also known as the Frank–Wolfe method. We assume that the objectives are continuously differentiable and that the constraint set is convex and compact. We employ an average-type nonmonotone line search, which takes the average of the recent objective function values. The asymptotic convergence properties without convexity assumptions on the objective functions are established. We prove that every limit point of the sequence of iterates obtained by the proposed method is a Pareto critical point. An iteration-complexity bound is provided regardless of the convexity assumption on the objective functions. The effectiveness of the suggested approach is demonstrated by applying it to several benchmark test problems. In addition, the efficiency of the proposed algorithm in generating approximations of the entire Pareto front is compared to the existing Hager–Zhang conjugate gradient method, the steepest descent method, the monotone conditional gradient method, and a nonmonotone conditional gradient method. For the empirical comparison, we utilize two commonly used performance metrics: the inverted generational distance and the hypervolume indicator.
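For reference, in the notation of problem (1) the conditional gradient (Frank–Wolfe) search direction for a constrained MOP is obtained from the standard linearized subproblem

$$v^k \in \operatorname*{arg\,min}_{v \in C} \ \max_{i=1,\dots,r} \langle \nabla F_i(x^k), v - x^k \rangle, \qquad d^k = v^k - x^k,$$

after which the nonmonotone (average-type) line search is carried out along $d^k$. This is the standard form of the subproblem; the cited article may differ in the details.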
... Recently, a nonmonotone line search approach was developed for multiobjective optimization by Qu et al. [33], which requires a decrease of the function value relative to the componentwise maximum of a prespecified number of recent function values. Also, Fazzio and Schuverdt [16] introduced another nonmonotone line search scheme, which guarantees a reduction in the function value relative to an average of a prespecified number of recent function values. Moreover, Mita et al. [31] used maximum-, average-, and hybrid-type combinations of a prespecified number of recent function values in the nonmonotone line search strategy for multiobjective optimization. ...
Article
Full-text available
In order to give more weight to recent information, a forgetting factor is embedded in the nonmonotone line search technique for minimization of a multiobjective problem with respect to the partial order induced by a closed, convex, and pointed cone. The method is shown to be globally convergent without convexity assumptions on the objective function. Moreover, to improve the behavior of the classical steepest descent method, an accelerated scheme is presented. Finally, the computational advantages of the algorithms are demonstrated on a class of standard test problems.
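Schematically, the two reference values contrasted in the snippet above are the standard componentwise analogues of the scalar nonmonotone rules (the forgetting factor of this article is not reproduced here):

$$\text{max-type:}\quad R_i^k = \max_{0 \le j \le m(k)} F_i(x^{k-j}), \qquad \text{average-type:}\quad C_i^{k+1} = \frac{\eta_k Q_k C_i^k + F_i(x^{k+1})}{Q_{k+1}}, \quad Q_{k+1} = \eta_k Q_k + 1,$$

and in either case the step size must satisfy $F_i(x^k + \alpha d^k) \le R_i^k$ (or $C_i^k$) $+ \,\sigma \alpha \langle \nabla F_i(x^k), d^k \rangle$ for every objective $i$.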
Article
In this work we propose a general nonmonotone line-search method for nonconvex multiobjective optimization problems with convex constraints. At the kth iteration, the degree of nonmonotonicity is controlled by a vector $\nu_k$ with nonnegative components. Different choices for $\nu_k$ lead to different nonmonotone step-size rules. Assuming that the sequence $\{\nu_k\}_{k\ge 0}$ is summable, and that the ith objective function has Hölder continuous gradient with smoothness parameter $\theta_i \in (0,1]$, we show that the proposed method takes no more than $\mathcal{O}\left(\epsilon^{-\left(1+\frac{1}{\theta_{\min}}\right)}\right)$ iterations to find an $\epsilon$-approximate Pareto critical point for a problem with m objectives and $\theta_{\min} = \min_{i=1,\dots,m}\{\theta_i\}$. In particular, this complexity bound applies to the methods proposed by Drummond and Iusem (Comput Optim Appl 28:5–29, 2004), by Fazzio and Schuverdt (Optim Lett 13:1365–1379, 2019), and by Mita, Fukuda and Yamashita (J Glob Optim 75:63–90, 2019). The generality of our approach also allows the development of new methods for multiobjective optimization. As an example, we propose a new nonmonotone step-size rule inspired by the Metropolis criterion. Preliminary numerical results illustrate the benefit of nonmonotone line searches and suggest that our new rule is particularly suitable for multiobjective problems in which at least one of the objectives has many non-global local minimizers.
Article
Full-text available
We present a rigorous and comprehensive survey on extensions to the multicriteria setting of three well-known scalar optimization algorithms. Multiobjective versions of the steepest descent, the projected gradient and the Newton methods are analyzed in detail. At each iteration, the search directions of these methods are computed by solving real-valued optimization problems and, in order to guarantee an adequate objective value decrease, Armijo-like rules are implemented by means of a backtracking procedure. Under standard assumptions, convergence to Pareto (weak Pareto) optima is established. For the Newton method, superlinear convergence is proved and, assuming Lipschitz continuity of the objectives' second derivatives, it is shown that the rate is quadratic.
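The Armijo-like rule mentioned in this survey takes, in its standard multiobjective form, the componentwise shape

$$F_i(x^k + \alpha d^k) \le F_i(x^k) + \sigma \alpha \langle \nabla F_i(x^k), d^k \rangle, \qquad i = 1, \dots, r,$$

with $\sigma \in (0,1)$ and $\alpha$ chosen by backtracking over $\{1, 1/2, 1/4, \dots\}$ until the condition holds for every objective.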
Article
Full-text available
In 2004, Graña Drummond and Iusem proposed an extension of the projected gradient method for constrained vector optimization problems. In this method, an Armijo-like rule, implemented with a backtracking procedure, is used to determine the step lengths. The authors showed only stationarity of all cluster points and, for another version of the algorithm (with exogenous step lengths), under some additional assumptions, they proved convergence to weakly efficient solutions. In this work, first we correct a slight mistake in the proof of a certain continuity result in that 2004 article, and then we extend its convergence analysis. Indeed, under some reasonable hypotheses, for convex objective functions with respect to the ordering cone, we establish full convergence to optimal points of any sequence produced by the projected gradient method with an Armijo-like rule, no matter how poor the initial guesses may be.
Article
Full-text available
In this work, we propose an inexact projected gradient-like method for solving smooth constrained vector optimization problems. In the unconstrained case, we retrieve the steepest descent method introduced by Graña Drummond and Svaiter. In the constrained setting, the method we present extends the exact one proposed by Graña Drummond and Iusem, since it admits relative errors on the search directions. At each iteration, a decrease of the objective value is obtained by means of an Armijo-like rule. The convergence results of this new method extend those obtained by Fukuda and Graña Drummond for the exact version. For partial orders induced by both pointed and nonpointed cones, under some reasonable hypotheses, global convergence to weakly efficient points of all sequences generated by the inexact projected gradient method is established for convex (with respect to the ordering cone) objective functions. In the convergence analysis we also establish a connection between the so-called weighting method and the one we propose.
Article
This paper proposes two nonmonotone gradient algorithms for a class of vector optimization problems with a convex objective function. We establish both the global and local convergence results for the new algorithms. We then apply the new algorithms to a portfolio optimization problem under multi-criteria considerations.
Article
Recently, a new nonlinear conjugate gradient scheme was developed which satisfies the descent condition $g_k^T d_k \le -\tfrac{7}{8}\|g_k\|^2$ and which is globally convergent whenever the line search fulfills the Wolfe conditions. This article studies the convergence behavior of the algorithm; extensive numerical tests and comparisons with other methods for large-scale unconstrained optimization are given.
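For completeness, the (standard, single-objective) Wolfe conditions referred to in this abstract require the step size $\alpha_k$ to satisfy

$$f(x_k + \alpha_k d_k) \le f(x_k) + c_1 \alpha_k \nabla f(x_k)^T d_k, \qquad \nabla f(x_k + \alpha_k d_k)^T d_k \ge c_2 \nabla f(x_k)^T d_k,$$

with constants $0 < c_1 < c_2 < 1$.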
Article
Consider the sequence obtained by applying the gradient projection method to the problem of minimizing a continuously differentiable functional over a closed convex subset of a real Hilbert space. In this paper we show that any cluster point of this sequence must be a constrained stationary point.
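In the scalar setting of this last abstract, the gradient projection iteration and the notion of constrained stationarity are the classical ones:

$$x_{k+1} = P_C\big(x_k - \alpha_k \nabla f(x_k)\big), \qquad x^* \text{ is stationary if and only if } P_C\big(x^* - \nabla f(x^*)\big) = x^*,$$

equivalently, $\langle \nabla f(x^*), x - x^* \rangle \ge 0$ for all $x \in C$.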