Article

Linear Convergence of Subgradient Algorithm for Convex Feasibility on Riemannian Manifolds


Abstract

We study the convergence of the subgradient algorithm for solving convex feasibility problems on Riemannian manifolds, which was first proposed and analyzed by Bento and Melo [J. Optim. Theory Appl., 152 (2012), pp. 773-785]. The linear convergence of this subgradient algorithm for convex feasibility problems satisfying the Slater condition on Riemannian manifolds is established, and some step-size rules are suggested for finite-convergence purposes, motivated by the work of De Pierro and Iusem [Appl. Math. Optim., 17 (1988), pp. 225-235]. As a by-product, a convergence result for this algorithm is obtained for convex feasibility problems without the Slater condition. These results extend and/or improve the corresponding known ones in both Euclidean space and Riemannian manifolds.
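For orientation, the iteration under study can be sketched as follows (a hedged reconstruction from the abstract, with the feasibility sets written as sublevel sets of geodesically convex functions, which is the standard formulation; the concrete step-size constants analyzed in the paper may differ). The problem is to find x ∈ C = C_1 ∩ ... ∩ C_m with C_i = {x ∈ M : f_i(x) ≤ 0}; at iteration k one picks a control index i_k (cyclic or most-violated), takes a subgradient g_k ∈ ∂f_{i_k}(x_k), and moves along a geodesic:

    x_{k+1} = \exp_{x_k}(-t_k g_k),   t_k = \max\{ f_{i_k}(x_k), 0 \} / \|g_k\|^2,

a De Pierro-Iusem-type step; the Slater condition (a point where all f_i are strictly negative) is what drives the linear and finite convergence rates.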


... Ferreira and Oliveira [13] first considered and studied the subgradient method for convex optimization problems on complete Riemannian manifolds with nonnegative sectional curvature. By using the technique proposed in [14,15], Wang [11] extended the subgradient method to Riemannian manifolds whose sectional curvature is bounded from below. ...
... Thus {x_k} has a cluster point in C^*. Hence, in view of Theorem 2.8, the sequence {x_k} converges to a solution of problem (14). [19] for Riemannian manifolds of negative sectional curvature. ...
... We consider the minimization problem (14) where the component functions are defined on a Riemannian manifold whose sectional curvature is nonpositive and show that the incremental subgradient method proposed in [19] cannot be applied. Example 4.6: Let P^n and P^n_{++} be the set of symmetric matrices and the set of positive definite symmetric matrices, respectively. ...
... For example, nonlinear Riemannian conjugate gradient methods have been widely studied in [15,28,31,32] for unconstrained optimization. First-order methods [6,36] and proximal point algorithms [11,22] have been reported for unconstrained/constrained Riemannian optimization. Riemannian stochastic gradient methods were proposed in [7,17,33] for Riemannian stochastic optimization. ...
... In contrast to [5], this paper tries to consider a Riemannian optimization problem with complicated constraints, such as the intersection of many convex sets [1,6,36], the set of minimizers of a convex function [11,22], and the intersection of sublevel sets of convex functions [36]. The problem is a hierarchical constrained optimization problem with three stages, as follows. ...
Article
Full-text available
This paper considers a stochastic optimization problem over the fixed point sets of quasinonexpansive mappings on Riemannian manifolds. The problem enables us to consider Riemannian hierarchical optimization problems over complicated sets, such as the intersection of many closed convex sets, the set of all minimizers of a nonsmooth convex function, and the intersection of sublevel sets of nonsmooth convex functions. We focus on adaptive learning rate optimization algorithms, which adapt step-sizes (referred to as learning rates in the machine learning field) to find optimal solutions quickly. We then propose a Riemannian stochastic fixed point optimization algorithm, which combines fixed point approximation methods on Riemannian manifolds with the adaptive learning rate optimization algorithms. We also give convergence analyses of the proposed algorithm for nonsmooth convex and smooth nonconvex optimization. The analysis results indicate that, with small constant step-sizes, the proposed algorithm approximates a solution to the problem. Consideration of the case in which step-size sequences are diminishing demonstrates that the proposed algorithm solves the problem with a guaranteed convergence rate. This paper also provides numerical comparisons that demonstrate the effectiveness of the proposed algorithms with formulas based on the adaptive learning rate optimization algorithms, such as Adam and AMSGrad.
... More precisely, several optimization problems are quite difficult to pose in linear spaces and require a Riemannian manifold framework, for example, the human spine [16], eigenvalue optimization problems [17], and center of mass problems on Riemannian manifolds [18]. Many non-convex and constrained optimization problems from Euclidean space can be treated as convex and unconstrained ones on Riemannian manifolds; see, for example, [11,12,[19][20][21][22][23][24][25] and the references therein. The subgradient method was first considered and studied by Ferreira and Oliveira [26] in the framework of Riemannian manifolds, and further extensively studied in [22]. ...
... Moreover, the subgradient method (2) was first considered and investigated in [21] to study convex feasibility problems on a complete Riemannian manifold with nonnegative sectional curvature. It was further improved and modified in [25] for manifolds with lower bounded sectional curvature. To solve convex optimization problems on Riemannian manifolds with sectional curvature bounded from below, Wang [24] extended the subgradient algorithm (2) from Euclidean space to the Riemannian manifold by employing the dynamic stepsize rule (iii) and established a convergence result. ...
... This together with (25) implies that ...
Article
To find the optimal solution of a sum of geodesic quasi-convex functions, we introduce a new path incremental quasi-subgradient method in the setting of a Riemannian manifold whose sectional curvature is nonnegative. To study the convergence analysis of the proposed algorithm, some auxiliary results related to geodesic quasi-convex functions and an existence result for a Greenberg-Pierskalla quasi-subgradient of the geodesic quasi-convex function in the setting of Riemannian manifolds are established. The convergence result of the proposed algorithm with the dynamic step size is presented in the case when the optimal solution is unknown. To demonstrate practical applicability, we show that the proposed method can be used to find a solution of the (geodesic) quasi-convex feasibility problems and the sum of ratio problems in the setting of Riemannian manifolds.
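A hedged sketch of the incremental structure described above (generic notation, not the paper's exact "path" variant): to minimize F = f_1 + ... + f_m, one outer iteration sweeps through the components,

    y_0 = x_k;   y_i = \exp_{y_{i-1}}(-t_k g_i / \|g_i\|),  with g_i a (Greenberg-Pierskalla quasi-)subgradient of f_i at y_{i-1},  i = 1, ..., m;   x_{k+1} = y_m,

where normalizing the direction is a common choice for quasi-subgradient methods, and the dynamic step size t_k is updated from an estimate of the unknown optimal value.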
... Therefore, a different approach is necessary". This question was partially answered by Wang et al. [55,56] who extended the results in [10] to Riemannian manifolds with sectional curvature bounded from below, i.e., K ≥ −δ with δ > 0. ...
... In Hilbert space, compactness of one of the sets is a sufficient condition for bounded regularity, which is the key to the proof in [3]; see [6]. The relationship between these concepts is not known in the nonlinear setting; • We extend to a more general setting some results by Dykstra [29] (in Euclidean space), Boyle and Dykstra [13], and Bauschke [5] (in Hilbert spaces), among other related works; • Finally, motivated by the question raised by Bento and Melo [10], partially answered by Wang et al. [55,56], we prove the convergence of the gradient method for solving convex feasibility problems on Hadamard manifolds. ...
... Under the Slater condition Bento and Melo [10] analyzed the convergence of a subgradient method for solving a convex feasibility problem in Riemannian manifolds with nonnegative sectional curvature. Their results were extended to Riemannian manifolds with sectional curvature bounded from below by Wang et al. [55,56]. Bačák et al. [3] proved the convergence of von Neumann's algorithm (alternating projection method) in CAT(0) spaces (also known as Hadamard spaces), which include Hadamard manifolds, among others. ...
Preprint
Full-text available
We establish weak and strong convergence results of the sequence generated by the alternating projection method to a finite collection of closed and convex sets in CAT(0) spaces using both random and cyclic projection. We also answer an open question on solving convex feasibility problems on Hadamard manifolds using the gradient method. To this end, we extend some results on subdifferential inclusion and a relation between the concepts of Kurdyka-Łojasiewicz inequality and error bounds to the context of Hadamard manifolds.
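As a hedged illustration of the cyclic projection scheme discussed above (in a Hadamard manifold the metric projection onto each closed convex set is single-valued, so it can be treated as a black-box callable; all names below are placeholders, not the authors' code):

import math  # not strictly needed; kept only if a tolerance in terms of machine eps is wanted

def cyclic_projections(x0, projections, dist, tol=1e-10, max_iter=10000):
    """Cyclic alternating projections: sweep through the sets, projecting in turn.

    projections : list of callables, each mapping a point to its metric
                  projection onto one closed convex set.
    dist        : callable returning the (Riemannian) distance between two points.
    Stops when one full sweep moves the iterate by less than `tol`.
    """
    x = x0
    for _ in range(max_iter):
        x_prev = x
        for project in projections:   # one cyclic sweep over C_1, ..., C_N
            x = project(x)
        if dist(x_prev, x) < tol:
            return x
    return x

In the Euclidean special case this can be tested with, e.g., projections onto two intersecting half-planes; selecting the next set at random instead of cyclically corresponds to the random-projection results mentioned in the abstract.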
... For example, nonlinear Riemannian conjugate gradient methods have been widely studied in [15,28,31,32] for unconstrained optimization. First-order methods [6,36] and proximal point algorithms [11,22] have been reported for unconstrained/constrained Riemannian optimization. Riemannian stochastic gradient methods were proposed in [7,17,33] for Riemannian stochastic optimization. ...
... In contrast to [5], this paper tries to consider a Riemannian optimization problem with complicated constraints, such as the intersection of many convex sets [1,6,36], the set of minimizers of a convex function [11,22], and the intersection of sublevel sets of convex functions [36]. The problem is a hierarchical constrained optimization problem with three stages, as follows. ...
Preprint
This paper considers a stochastic optimization problem over the fixed point sets of quasinonexpansive mappings on Riemannian manifolds. The problem enables us to consider Riemannian hierarchical optimization problems over complicated sets, such as the intersection of many closed convex sets, the set of all minimizers of a nonsmooth convex function, and the intersection of sublevel sets of nonsmooth convex functions. We focus on adaptive learning rate optimization algorithms, which adapt step-sizes (referred to as learning rates in the machine learning field) to find optimal solutions quickly. We then propose a Riemannian stochastic fixed point optimization algorithm, which combines fixed point approximation methods on Riemannian manifolds with the adaptive learning rate optimization algorithms. We also give convergence analyses of the proposed algorithm for nonsmooth convex and smooth nonconvex optimization. The analysis results indicate that, with small constant step-sizes, the proposed algorithm approximates a solution to the problem. Consideration of the case in which step-size sequences are diminishing demonstrates that the proposed algorithm solves the problem with a guaranteed convergence rate. This paper also provides numerical comparisons that demonstrate the effectiveness of the proposed algorithms with formulas based on the adaptive learning rate optimization algorithms, such as Adam and AMSGrad.
... They are intended to provide theoretical support for efficient computational implementations of algorithms. Works on this subject include, but are not limited to [1,2,32,42,45,48,55,57,59,60,73,81,87]. The Riemannian machinery from the theoretical point of view allows, by the introduction of a suitable metric, a nonconvex Euclidean problem to be seen as a Riemannian convex problem. ...
... The step-size rule of Strategy 18.18 was introduced in [66] and has been used in several papers, including [11,14,81]. Remark 18.19: Since t ↦ tanh(t)/t is decreasing on (0, +∞), for any d > d_0 we choose 0 < α < 2 tanh(√|κ| d)/(√|κ| d) < 2 tanh(√|κ| d_0)/(√|κ| d_0) in Strategy 18.18. ...
... As in the Euclidean context, the subgradient method is quite simple and possesses nice convergence properties. After this work, the subgradient method in the Riemannian setting has been studied in different contexts; see, for instance, [9,11,37,42,79,81,[91][92][93]. In [11] the subgradient method was introduced to solve convex feasibility problems on complete Riemannian manifolds with non-negative sectional curvatures. ...
... For this purpose, the concepts and techniques of optimization from Euclidean space have frequently been extended to the Riemannian context in recent years. Papers dealing with this subject include [23,24,37,38,26,41,42]. ...
... Our analysis of the method is presented with three different finite procedures to determine the step size, namely, Lipschitz step size, adaptive step size, and Armijo's step size. Note that we use a recent inequality established in [37,38]. Numerical ...
... We recall some concepts and basic properties about convexity in the Riemannian context. For more details see, for example, [36,30,37]. ...
Article
Full-text available
The gradient method for minimizing a differentiable convex function on Riemannian manifolds with lower bounded sectional curvature is analyzed in this paper. An analysis of the method with three different finite procedures for determining the step size (namely, Lipschitz step size, adaptive step size, and Armijo’s step size) is presented. The first procedure requires that the objective function has Lipschitz continuous gradient, which is not necessary for the other approaches. Convergence of the whole sequence to a minimizer, without any level set boundedness assumption, is proved. The iteration-complexity bound for functions with Lipschitz continuous gradient is also presented. Numerical experiments are provided to illustrate the effectiveness of the method in this new setting and certify the theoretical results. In particular, we consider the problem of finding the Riemannian center of mass and the so-called Karcher mean. Our numerical experiences indicate that the adaptive step size is a promising scheme that is worth considering.
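A minimal sketch of the Armijo backtracking procedure named in this abstract, with the manifold accessed only through user-supplied maps (all names are placeholders and this is not the authors' implementation; the exponential map could equally be replaced by a retraction):

def armijo_gradient_step(f, grad_f, exp_map, inner, x,
                         alpha0=1.0, beta=0.5, sigma=1e-4, max_backtracks=50):
    """One Riemannian gradient step with Armijo backtracking.

    f       : objective, point -> float
    grad_f  : Riemannian gradient, point -> tangent vector at that point
    exp_map : (point, tangent vector) -> point
    inner   : (point, v, w) -> float, the Riemannian metric at the point
    Tangent vectors are assumed to support scalar multiplication (e.g. NumPy arrays).
    """
    g = grad_f(x)
    g_norm_sq = inner(x, g, g)
    alpha = alpha0
    for _ in range(max_backtracks):
        x_trial = exp_map(x, -alpha * g)
        if f(x_trial) <= f(x) - sigma * alpha * g_norm_sq:   # sufficient decrease
            return x_trial, alpha
        alpha *= beta                                        # backtrack
    return x, 0.0                                            # no acceptable step found

The Lipschitz step size mentioned in the abstract would instead fix alpha from a known Lipschitz constant of the gradient, and the adaptive variant estimates such a constant on the fly.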
... concepts and techniques of optimization from Euclidean space to the Riemannian context have been quite frequent in recent years. Papers dealing with this subject include, but are not limited to [11,17,18,20,27,29]. ...
... In order to deal with non-smooth convex optimization problems on complete Riemannian manifolds with non-negative sectional curvature, [12] extended and analysed the subgradient method which, as in the Euclidean context, is quite simple and possesses nice convergence properties. After this pioneering work, the subgradient method in the Riemannian setting has been studied in different contexts; see, for instance, [1,4,14,27,29]. In [4], the subgradient method was introduced to solve convex feasibility problems on complete Riemannian manifolds with non-negative sectional curvatures, and recently in [27,29] this method has been analysed in manifolds with lower bounded sectional curvatures and significant improvements were introduced. More recently, an asymptotic analysis of the subgradient method with exogenous step-size and dynamic step-size for convex optimization was considered in the context of manifolds with lower bounded sectional curvatures [28]. ...
... After this pioneering work, the subgradient method in the Riemannian setting has been studied in different contexts; see, for instance, [1,4,14,27,29]. In [4], the subgradient method was introduced to solve convex feasibility problems on complete Riemannian manifolds with non-negative sectional curvatures, and recently in [27,29] this method has been analysed in manifolds with lower bounded sectional curvatures and significant improvements were introduced. More recently, an asymptotic analysis of the subgradient method with exogenous step-size and dynamic step-size for convex optimization was considered in the context of manifolds with lower bounded sectional curvatures [28]. ...
Article
Full-text available
The subgradient method for convex optimization problems on complete Riemannian manifolds with lower bounded sectional curvature is analysed in this paper. Iteration-complexity bounds of the subgradient method with exogenous step-size and Polyak's step-size are established, completing and improving recent results on the subject.
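For reference, the two step-size rules named here are the standard ones; written for a subgradient g_k ∈ ∂f(x_k) and the iteration x_{k+1} = \exp_{x_k}(-t_k g_k), this background (not a quotation of the paper's constants) reads:

    exogenous step size: t_k chosen a priori, e.g. t_k = c / \sqrt{k+1}, often applied to the normalized direction g_k / \|g_k\|;
    Polyak step size:    t_k = (f(x_k) - f^*) / \|g_k\|^2, which presupposes knowledge of the optimal value f^*.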
... concepts and techniques of optimization from Euclidean space to the Riemannian context have been quite frequent in recent years. Papers dealing with this subject include, but are not limited to [11,17,18,20,27,29]. ...
... In order to deal with non-smooth convex optimization problems on complete Riemannian manifolds with non-negative sectional curvature, [12] extended and analysed the subgradient method which, as in the Euclidean context, is quite simple and possesses nice convergence properties. After this pioneering work, the subgradient method in the Riemannian setting has been studied in different contexts; see, for instance, [1,4,14,27,29]. In [4], the subgradient method was introduced to solve convex feasibility problems on complete Riemannian manifolds with non-negative sectional curvatures, and recently in [27,29] this method has been analysed in manifolds with lower bounded sectional curvatures and significant improvements were introduced. More recently, an asymptotic analysis of the subgradient method with exogenous step-size and dynamic step-size for convex optimization was considered in the context of manifolds with lower bounded sectional curvatures [28]. ...
... After this pioneering work, the subgradient method in the Riemannian setting has been studied in different contexts; see, for instance, [1,4,14,27,29]. In [4], the subgradient method was introduced to solve convex feasibility problems on complete Riemannian manifolds with non-negative sectional curvatures, and recently in [27,29] this method has been analysed in manifolds with lower bounded sectional curvatures and significant improvements were introduced. More recently, an asymptotic analysis of the subgradient method with exogenous step-size and dynamic step-size for convex optimization was considered in the context of manifolds with lower bounded sectional curvatures [28]. ...
Preprint
Full-text available
The subgradient method for convex optimization problems on complete Riemannian manifolds with lower bounded sectional curvature is analyzed in this paper. Iteration-complexity bounds of the subgradient method with exogenous step-size and Polyak's step-size are established, completing and improving recent results on the subject.
... For this purpose, extensions of concepts and techniques of optimization from Euclidean space to the Riemannian context have been quite frequent in recent years. Papers dealing with this subject include, but are not limited to [21,22,24,35,36,38,39]. ...
... The analysis of the method is presented with three different finite procedures for determining the stepsize, namely, Lipschitz stepsize, adaptive stepsize and Armijo's stepsize. It should be noted that we use a recent inequality established in [35,36]. Numerical experiments are provided to illustrate the effectiveness of the method in this new setting and certify the obtained theoretical results. ...
... We proceed to recall some concepts and basic properties about convexity in the Riemannian context. For more details see, for example, [28,34,35]. For any two points p, q ∈ M, Γ_pq denotes the set of all geodesic segments γ : [0, 1] → M with γ(0) = p and γ(1) = q. ...
Preprint
Full-text available
The gradient method for minimizing a differentiable convex function on Riemannian manifolds with lower bounded sectional curvature is analyzed in this paper. The analysis of the method is presented with three different finite procedures for determining the stepsize, namely, Lipschitz stepsize, adaptive stepsize and Armijo's stepsize. The first procedure requires that the objective function has Lipschitz continuous gradient, which is not necessary for the other approaches. Convergence of the whole sequence to a minimizer, without any level set boundedness assumption, is proved. An iteration-complexity bound for functions with Lipschitz continuous gradient is also presented. Numerical experiments are provided to illustrate the effectiveness of the method in this new setting and certify the obtained theoretical results. In particular, we consider the problem of finding the Riemannian center of mass and the so-called Karcher mean. Our numerical experiences indicate that the adaptive stepsize is a promising scheme that is worth considering.
... Optimization methods in the Riemannian setting have received considerable research attention in recent years; see, for example, [1][2][3][4]. An advantage of this approach is the possibility to transform some Euclidean non-convex problems into Riemannian convex problems, by introducing a suitable metric, and thus enabling the modification of numerical methods for the purpose of finding a global minimizer; see [5][6][7][8]. ...
... In order to deal with non-smooth convex optimization problems on Riemannian manifolds, the authors of [15] proposed and analyzed a subgradient method that is considerably simple and possesses desirable convergence properties. Since then, the subgradient method in the Riemannian setting has been studied in different contexts; see, for instance, [1,[3][4][5]. One of the most interesting optimization methods is the proximal point method, which was first proposed in the linear context by [16] and extensively studied by [17]. ...
... Proposition 2.1: Let γ_1 and γ_2 be geodesic segments such that γ_1(0) = γ_2(0) and γ_1 is minimal. Then, letting ...
Article
Full-text available
This paper considers optimization problems on Riemannian manifolds and analyzes iteration-complexity for gradient and subgradient methods on manifolds with non-negative curvature. By using tools from the Riemannian convex analysis and exploring directly the tangent space of the manifold, we obtain different iteration-complexity bounds for the aforementioned methods, complementing and improving related results. Moreover, we also establish iteration-complexity bound for the proximal point method on Hadamard manifolds.
... The early works dealing with this issue include [3,4,5,6]. In recent years there has been increasing interest in the development of geometric optimization algorithms which exploit the differential structure of the nonlinear manifold; papers published on this topic include, but are not limited to, [7,8,9,10,11,12,13,14,15,16,17,18]. In this paper, instead of focusing on problems of finding singularities of gradient vector fields on Riemannian manifolds, which includes finding local minimizers, we consider the more general problem of finding singularities of vector fields. ...
... Since [9, Lemma 2.4, item (iv)] implies that lim_{p→p̄} ‖Z(p) − P_{p̄ p} Z(p̄)‖ = 0, we have from the last equality that (12) holds and the proof is concluded. ...
Preprint
In this paper we study Newton's method for finding a singularity of a differentiable vector field defined on a Riemannian manifold. Under the assumption of invertibility of the covariant derivative of the vector field at its singularity, we establish that the method is well defined in a suitable neighborhood of this singularity. Moreover, we also show that the sequence generated by Newton's method converges to the solution with a superlinear rate.
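For context, the iteration analyzed can be written as follows (standard Riemannian Newton notation, offered as a hedged sketch): given a vector field X on M with covariant derivative ∇X, at the current iterate x_k one solves a linear system on the tangent space and follows a geodesic,

    ∇X(x_k) v_k = -X(x_k),   v_k ∈ T_{x_k}M,   x_{k+1} = \exp_{x_k}(v_k);

invertibility of ∇X at the singularity is what makes v_k well defined for iterates near it.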
... • Motivated by the question raised by Bento and Melo [13], partially answered by Wang et al. [48,49], about how to solve convex feasibility problems on Hadamard manifolds, we use the concept of Kurdyka-Łojasiewicz inequality and error bounds to analyze the convergence of the gradient method for solving convex feasibility problems on Hadamard manifolds. In [13], a convergence result is obtained if the intersection of the sets has a non-empty interior (Slater condition). ...
... Under the Slater condition, Bento and Melo [13] analyzed the convergence of a subgradient method for solving a convex feasibility problem in Riemannian manifolds with nonnegative sectional curvature. Their results were extended to Riemannian manifolds with sectional curvature bounded from below by Wang et al. [48,49]. Bačák et al. [6] proved the convergence of von Neumann's algorithm (alternating projection method) in complete CAT(0) spaces (also known as Hadamard spaces), which include Hadamard manifolds, among others. ...
Article
Full-text available
This paper studies the interplay between the concepts of error bounds and the Kurdyka–Łojasiewicz (KL) inequality on Hadamard manifolds. To this end, we extend some properties and existence results of a solution for differential inclusions on Hadamard manifolds. As a second contribution, we show how the KL inequality can be used to obtain the convergence of the gradient method for solving convex feasibility problems on Hadamard manifolds. The convergence results of the alternating projection method are also established for cyclic and random projections on Hadamard manifolds and, more generally, CAT(0) spaces.
... Each element of ∂f (p) is called a subgradient of f at p. We remark that ∂f (p) is nonempty for all p ∈ int(domf ); see [23,Prop. 2.5]. ...
... The following result is proved in [23,Prop. 2.5]. ...
Preprint
Full-text available
In this paper, we extend a recently established subgradient method for the computation of Riemannian metrics that optimizes certain singular value functions associated with dynamical systems. This extension is threefold. First, we introduce a projected subgradient method which results in Riemannian metrics whose parameters are confined to a compact convex set and we can thus prove that a minimizer exists; second, we allow inexact subgradients and study the effect of the errors on the computed metrics; and third, we analyze the subgradient algorithm for three different choices of step sizes: constant, exogenous and Polyak. The new methods are illustrated by application to dimension and entropy estimation of the Hénon map.
... Moreover, intrinsic Riemannian structures enable new research directions that aid in developing competitive optimization algorithms; see [18,20,22,23,26,27]. More optimization concepts and techniques in the Riemannian context are available in [21,25,[28][29][30][31][32][33][34] and the bibliographies therein. ...
... The following results are important for the next sections. The proofs, which are omitted here, follow the same ideas as those presented in the proof of [30, Lemma 3.2], with some minor technical adjustments required to adapt them to our goals. To simplify notation, for κ < 0 we define κ̄ := |κ|. ...
Article
Full-text available
The steepest descent method for multiobjective optimization on Riemannian manifolds with lower bounded sectional curvature is analyzed. The aim of this study is twofold. First, an asymptotic analysis of the method is presented with three different finite procedures for determining the stepsize: Lipschitz, adaptive, and Armijo-type stepsizes. Second, by assuming the Lipschitz continuity of a Jacobian, iteration-complexity bounds for the method with these three stepsize strategies are presented. In addition, some examples that satisfy the hypotheses of the main theoretical results are provided. Finally, the aforementioned examples are presented through numerical experiments.
... Moreover, intrinsic Riemannian structures can also open up new research directions that aid in developing competitive optimization algorithms; see [18,20,22,23,26,27]. More concepts and techniques of optimization in the Riemannian context can be found in [21,25,[28][29][30][31][32][33][34] and the bibliographies therein. ...
... The next result plays an important role in the next sections. Its proof, which is omitted here, follows the same ideas as those presented in the proof of [30, Lemma 3.2], with some minor technical adjustments needed to adapt it to our goals. To simplify notation throughout the paper, for κ < 0 we define κ̄ := |κ|. ...
Preprint
Full-text available
The steepest descent method for multiobjective optimization on Riemannian manifolds with lower bounded sectional curvature is analyzed in this paper. The aim of the paper is twofold. First, an asymptotic analysis of the method is presented with three different finite procedures for determining the stepsize, namely, Lipschitz stepsize, adaptive stepsize and Armijo-type stepsize. The second aim is to present, by assuming that the Jacobian of the objective function is componentwise Lipschitz continuous, iteration-complexity bounds for the method with these three stepsize strategies. In addition, some examples are presented to emphasize the importance of working in this new context. Numerical experiments are provided to illustrate the effectiveness of the method in this new setting and certify the obtained theoretical results.
... The early works dealing with this issue include [3,4,5,6]. In recent years there has been increasing interest in the development of geometric optimization algorithms which exploit the differential structure of the nonlinear manifold; papers published on this topic include, but are not limited to, [7,8,9,10,11,12,13,14,15,16,17,18]. In this paper, instead of focusing on problems of finding singularities of gradient vector fields on Riemannian manifolds, which includes finding local minimizers, we consider the more general problem of finding singularities of vector fields. ...
... Since [9, Lemma 2.4, item (iv)] implies that lim_{p→p̄} ‖Z(p) − P_{p̄ p} Z(p̄)‖ = 0, we have from the last equality that (12) holds and the proof is concluded. ...
Article
Full-text available
In this paper we study Newton's method for finding a singularity of a differentiable vector field defined on a Riemannian manifold. Under the assumption of invertibility of the covariant derivative of the vector field at its singularity, we establish that the method is well defined in a suitable neighborhood of this singularity. Moreover, we also show that the sequence generated by Newton's method converges to the solution with a superlinear rate.
... Considering that the sectional curvature of a Riemannian manifold is bounded from above by a certain nonnegative constant and using the theory of CAT(k) spaces, [27] and [41] extended the convergence of the proximal point method to solve VIP and equilibrium problems in arbitrary Riemannian manifolds, respectively. Then, following those ideas, some researchers extended other methods, such as the gradient and subgradient methods, to solve convex minimization problems; see [14,39,42]. ...
Article
Full-text available
This paper studies the convergence of the proximal point method for quasiconvex functions in finite-dimensional complete Riemannian manifolds. We prove initially that, in the general case, when the objective function is proper and lower semicontinuous, each accumulation point of the sequence generated by the method, if it exists, is a limiting critical point of the function. Then, under the assumptions that the sectional curvature of the manifold is bounded above by some nonnegative constant and the objective function is quasiconvex, we analyze two cases. When the constant is zero, the global convergence of the algorithm to a limiting critical point is assured, and if it is positive, we prove local convergence for a class of quasiconvex functions, which includes Lipschitz functions.
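For context, the proximal point iteration referred to here takes, on a Riemannian manifold M with distance d, the form

    x_{k+1} ∈ argmin_{x ∈ M} { f(x) + (λ_k / 2) d(x, x_k)^2 },   λ_k > 0

(conventions for placing the regularization parameter vary); the quasiconvex analysis summarized in this abstract concerns this scheme under the stated curvature bounds.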
... As previously mentioned, there has been a significant increase in the number of works focusing on concepts and techniques of nonlinear programming and convex analysis in the Riemannian setting, see [2,51]. In addition to the theoretical issues addressed, which have an interest of their own, the Riemannian machinery provides support to design efficient algorithms to solve optimization problems in this setting; see, for instance, [1,20,25,31,33,34,36,45,52,56] and references therein. ...
Article
Full-text available
In this paper, we propose a Riemannian version of the difference of convex algorithm (DCA) to solve a minimization problem involving the difference of convex (DC) function. The equivalence between the classical and simplified Riemannian versions of the DCA is established. We also prove that under mild assumptions the Riemannian version of the DCA is well defined and every cluster point of the sequence generated by the proposed method, if any, is a critical point of the objective DC function. Some duality relations between the DC problem and its dual are also established. To illustrate the algorithm’s effectiveness, some numerical experiments are presented.
... Recently, Sun et al. [23] proposed some accelerated SMCG methods. More advances on SMCG methods can be found in [2,24,25]. Compared to CG methods, SMCG methods have the following characteristics: ...
Article
Full-text available
Subspace minimization conjugate gradient (SMCG) methods are a class of quite efficient iterative methods for unconstrained optimization and have received increasing attention recently. The search directions of SMCG methods are generated by minimizing an approximate model with the approximate matrix over the two-dimensional subspace spanned by the current gradient and the latest step. The main drawback of SMCG methods is that a parameter in the search directions must be determined when calculating the search directions. The parameter is crucial to SMCG methods and is difficult to determine properly. An alternative solution for this drawback might be to exploit a new way to derive SMCG methods independent of this parameter. The projection technique has been used successfully to derive conjugate gradient directions such as the Dai–Kou conjugate gradient direction (Dai and Kou in SIAM J Optim 23(1):296–320, 2013). Motivated by the above two observations, we use a projection technique to derive a new SMCG method independent of the parameter. More specifically, we project the search direction of the memoryless quasi-Newton method into the above two-dimensional subspace and derive a new search direction, which is proved to be descent. Remarkably, the proposed method without any line search enjoys the finite termination property for two-dimensional strictly convex quadratic functions. An adaptive scaling factor in the search direction is exploited based on the finite termination property. The proposed method does not need to determine the parameter and can be regarded as an extension of the Dai–Kou conjugate gradient method. The global convergence of the proposed method is established under suitable assumptions. Numerical comparisons on the 147 test functions from the CUTEst library indicate that the proposed method is very promising.
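The two-dimensional subspace idea summarized above can be written compactly (standard SMCG background, hedged rather than this paper's exact model): the search direction d_k solves

    min_{d ∈ Ω_k}  g_k^T d + (1/2) d^T B_k d,   Ω_k = span{ g_k, s_{k-1} },

with g_k the current gradient, s_{k-1} = x_k − x_{k-1} the latest step, and B_k an approximation of the Hessian; writing d = μ g_k + ν s_{k-1} reduces the subproblem to a 2×2 linear system in (μ, ν). The projection technique of this paper instead projects a memoryless quasi-Newton direction onto Ω_k, avoiding the problematic model parameter.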
... But in the setting of Riemannian manifolds, constrained, non-convex and non-quasi-convex optimization problems can be formulated as unconstrained, convex and quasi-convex, respectively, in the geodesic sense with an appropriate Riemannian metric, see, e.g. [1,[12][13][14][15][16][17][18][19][20][21][22][23] and the references therein. Over the last two decades, several types of optimization problems have been described and analysed in the framework of Riemannian manifolds, such as center of mass problems, eigenvalue problems, the human spine, invariant subspace computations, constrained minimization problems, and boundary value problems, etc., see, e.g. ...
Article
This paper is committed to studying Karush-Kuhn-Tucker (in short, KKT) type necessary and sufficient optimality conditions for non-smooth quasi-convex (in the geodesic sense) optimization problems on Riemannian manifolds. Recently, Ansari et al. [Ansari QH, Babu F, Zeeshan M. Incremental quasi-subgradient method for minimizing geodesic quasi-convex function on Riemannian manifolds with applications. Numer Funct Anal Optim. 2022;42(13):1492–1521. doi: 10.1080/01630563.2021.2001823] defined the quasi-subdifferential on Riemannian manifolds and established existence results for the quasi-subdifferential. We provide several auxiliary results for the quasi-subdifferential in the current study. We offer KKT optimality conditions for quasi-convex optimization problems on Riemannian manifolds with or without the Slater constraint qualification. To verify the suggested outcomes, we formulate numerical examples. In addition, we also state our results in Euclidean spaces, which are original and distinct from earlier findings in Euclidean spaces.
... Subsequently, a lot of modified BB methods [17][18][19] were proposed. More advances on gradient methods can be found in [20][21][22][23][24][25][26]. ...
Article
As we know, the stepsize is extremely crucial to gradient methods. A new type of stepsize is introduced for the gradient method in this paper, which is generated by minimizing the norm of the approximate model of the gradient along the line of the negative gradient direction. Based on the retard technique, we present a new gradient method by adaptively minimizing the approximate model of the objective function and the norm of the approximate model of the gradient along the line of the negative gradient direction for strictly convex quadratic minimization. The convergence of the proposed method is established. The numerical experiments on four groups of convex quadratic minimization problems illustrate that the proposed method is very promising. We also extend the new gradient method for convex quadratic minimization to general unconstrained optimization by incorporating a nonmonotone line search. The convergence of the resulting method is established. The numerical experiments on the 147 test functions from the CUTEst library indicate that the resulting method is superior to some efficient gradient methods including the BBQ method (SIAM J. Optim. 31(4), 3068-3096, 2021) and is competitive with two famous conjugate gradient software packages CGOPT (1.0) and CG_DESCENT (5.0).
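For context on the methods this abstract compares against: the classical Barzilai-Borwein step sizes, which underlie the modified BB methods cited above, are

    α_k^{BB1} = s_{k-1}^T s_{k-1} / s_{k-1}^T y_{k-1},   α_k^{BB2} = s_{k-1}^T y_{k-1} / y_{k-1}^T y_{k-1},

with s_{k-1} = x_k − x_{k-1} and y_{k-1} = g_k − g_{k-1}; the step size introduced in this paper differs in that it minimizes the norm of an approximate gradient model along the negative gradient direction.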
... Moreover, assuming a Slater-type qualification condition, a variant of the algorithm presented by Bento and Melo [5], which generates a sequence with the finite convergence property, i.e., a feasible point is obtained after a finite number of iterations, has been analysed. Wang et al. [30,31] then studied the convergence of the subgradient algorithm for solving convex feasibility problems in Riemannian manifolds, which was first proposed and analysed by Bento and Melo [5]. The linear convergence of the subgradient algorithm for solving convex feasibility problems with the Slater condition in Riemannian manifolds is established. ...
Article
Full-text available
Bilevel programming problems are often reformulated using the Karush-Kuhn-Tucker conditions for the lower-level problem, resulting in a mathematical program with complementarity constraints (MPCC). First, we present the KKT reformulation of bilevel optimization problems on Riemannian manifolds. Moreover, we show that global optimal solutions of the MPCC correspond to global optimal solutions of the bilevel problem on Riemannian manifolds provided the lower-level convex problem satisfies Slater's constraint qualification. However, the correspondence between local solutions of the bilevel problem and those of its MPCC is not a complete equivalence. We also show by examples that these correspondences can fail if Slater's constraint qualification fails to hold at the lower-level convex problem. In addition, M- and C-type optimality conditions for the bilevel problem on Riemannian manifolds are given.
... In particular, solving the convex feasibility problem for some models of Hadamard manifolds. Wang et al. [39] proved that this method converges linearly to C_1 ∩ ··· ∩ C_N. Furthermore, they have observed the influence of the curvature on the rate of convergence. ...
Article
Full-text available
In this paper, we provide a necessary and sufficient condition under which the method of alternating projections on Hadamard spaces converges strongly. This result is new even in the context of Hilbert spaces. In particular, we found the circumstance under which the iteration of a point by projections converges strongly and we answer partially the main question that motivated Bruck’s paper (J Math Anal Appl 88:319–322, 1982). We apply this condition to generalize Prager’s theorem for Hadamard manifolds and generalize Sakai’s theorem for a larger class of the sequences with full measure with respect to Bernoulli measure. In particular, we answer to a long-standing open problem concerning the convergence of the successive projection method (Aleyner and Reich in J Convex Anal 16:633–640, 2009). Furthermore, we study the method of alternating projections for a nested decreasing sequence of convex sets on Hadamard manifolds, and we obtain an alternative proof of the convergence of the proximal point method.
... As aforementioned, the number of works dealing with concepts and techniques of nonlinear programming and convex analysis in the Riemannian scenario has also increased, see [2,55]. In addition to the theoretical issues addressed, which have an interest of their own, the Riemannian machinery provides support to design efficient algorithms to solve optimization problems in this setting; papers on this subject include [1,20,26,[36][37][38]41,49,57,62] and references therein. In this sense, the concept of the conjugate of a convex function was recently presented in the Riemannian setting, which is an important tool in convex analysis and plays an important role in the theory of duality on Riemannian manifolds, see [8,9]. ...
Preprint
Full-text available
In this paper, we propose a Riemannian version of the difference of convex algorithm (DCA) to solve a minimization problem involving the difference of convex (DC) function. We establish the equivalence between the classical and simplified Riemannian versions of the DCA. We also prove that, under mild assumptions, the Riemannian version of the DCA is well-defined, and every cluster point of the sequence generated by the proposed method, if any, is a critical point of the objective DC function. Additionally, we establish some duality relations between the DC problem and its dual. To illustrate the effectiveness of the algorithm, we present some numerical experiments.
... Recently, some important notions, techniques and approaches in Euclidean spaces have been extended to Riemannian manifold settings; see, e.g., [12,21,22,26,28,40] and the references therein. As pointed out in [3], such extensions are natural and, in general, nontrivial; and enjoy some important advantages; see, e.g., [1,33,34,41,24] for more details. ...
Preprint
We study the convergence issue for inexact descent algorithms (employing general step sizes) for multiobjective optimization on general Riemannian manifolds (without curvature constraints). Under the assumption of local convexity/quasi-convexity, local/global convergence results are established. On the other hand, without the assumption of local convexity/quasi-convexity, but under a Kurdyka-Łojasiewicz-like condition, local/global linear convergence results are presented, which seem new even in the Euclidean space setting and improve sharply the corresponding results in [24] in the case when the multiobjective optimization is reduced to the scalar case. Finally, for the special case when the inexact descent algorithm employs the Armijo rule, our results improve sharply/extend the corresponding ones in [3,2,38].
... Recently, Wang et al. [48] proposed a new approach for convergence of subgradient method dealing with both positive and negative sectional curvature as long as it is bounded from below; see also Wang et al. [49]. This approach was used by Ferreira et al. [15] to obtain iteration-complexity of the gradient method in Riemannian manifolds with lower bounded sectional curvature, and hence, (16) ...
Article
Full-text available
We study the convergence of a modified proximal point method for DC functions in Hadamard manifolds. We use the iteration computed by the proximal point method for DC function extended to the Riemannian context by Souza and Oliveira (J Glob Optim 63:797–810, 2015) to define a descent direction which improves the convergence of the method. Our method also accelerates the classical proximal point method for convex functions. We illustrate our results with some numerical experiments.
... Computing the Riemannian center of mass is just one of the many optimization problems arising in various applications which are posed on manifolds and require a manifold structure (not necessarily a linear structure). Therefore, a large number of classical notions and methods have been extended to problems formulated on Riemannian manifolds as well as on other spaces with nonlinear structure, e.g., [12,[14][15][16][17][18][19][20][21]. As far as we know, one of the first proposals dealing with the steepest descent method for continuously differentiable functions in the Riemannian setting was presented by Luenberger [22] and later by Gabay [23], both in the particular case where M is the inverse image of a regular value. ...
Article
Full-text available
In this paper, we perform the steepest descent method for computing the Riemannian center of mass on Hadamard manifolds. To this end, we extend convergence of the method to the Hadamard setting for continuously differentiable (possibly nonconvex) functions which satisfy the Kurdyka–Łojasiewicz property. Some numerical experiments computing L^1 and L^2 centers of mass in the context of positive definite symmetric matrices are presented using two different stepsize rules.
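For concreteness (standard background, not the paper's exact formulation): the L^2 center of mass of points p_1, ..., p_N with weights w_i is a minimizer of x ↦ (1/2) Σ_i w_i d(x, p_i)^2, whose Riemannian gradient, where the data lie in a suitable convex neighborhood, is −Σ_i w_i \exp_x^{-1}(p_i); a steepest descent step therefore reads

    x_{k+1} = \exp_{x_k}( t_k Σ_i w_i \exp_{x_k}^{-1}(p_i) ),

and the L^1 case replaces d(x, p_i)^2 by d(x, p_i), giving the Riemannian median.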
... Due to such applications, interest in the development of optimization tools as well as mathematical programming methods to Riemannian settings has increased significantly. Papers published on this topic include, but are not limited to, [2,3,7,18,19,24,27,33,47,48,50,[54][55][56]. ...
Article
The aim of this paper is to present an extragradient method for the variational inequality associated to a point-to-set vector field in Hadamard manifolds and to study its convergence properties. In order to present our method, the concept of ϵ-enlargement of maximal monotone vector fields is used and its lower semicontinuity is established in order to obtain the convergence of the method in this new context.
... From this starting point, the monotonicity of vector fields in the Riemannian framework has awakened the interest of many researchers; see for instance [9][10][11][12][13][14]. Indeed, during the last decades optimization problems on Riemannian manifolds have become very popular and various algorithms have been extended to the Riemannian setting for solving different types of problems such as gradient method [15][16][17], subgradient method [18,19], Newton's method [20,21], proximal point method [22][23][24]; convex feasibility problem [25,26]; variational inequality problem [14,27]; trust-region problem [28]; weak sharp minima [29]. The reason is that many optimization problems arising in various applications are posed on manifolds and require a manifold structure (not necessarily with linear structure), such as geometric models for the human spine [21], some eigenvalue optimization problems [30], and so on. ...
Article
Full-text available
We study some conditions for a monotone bifunction to be maximally monotone by using a corresponding vector field associated to the bifunction and vice versa. This approach allows us to establish existence of solutions to equilibrium problems in Hadamard manifolds obtained by perturbing the equilibrium bifunction.
... However, in many practical applications the natural structure of the data is modeled as constrained optimization problems, where the constraints are non-linear and non-convex; more specifically, the constraints are Riemannian manifolds; see [1, 6, 8-13, 22, 25, 30, 31, 33, 35, 43, 45]. Due to such applications, interest in the development of optimization tools as well as mathematical programming methods in Riemannian settings has increased significantly; papers published on this topic include, but are not limited to, [2,3,7,18,19,23,26,32,46,47,49,[52][53][54]. ...
Preprint
The aim of this paper is to present an extragradient method for the variational inequality associated to a point-to-set vector field in Hadamard manifolds and to study its convergence properties. In order to present our method, the concept of ϵ-enlargement of maximal monotone vector fields is used and its lower semicontinuity is established in order to obtain the convergence of the method in this new context.
Article
Full-text available
Composite optimization problems on Riemannian manifolds arise in applications such as sparse principal component analysis and dictionary learning. Recently, Huang and Wei introduced a Riemannian proximal gradient method (Huang and Wei in MP 194:371–413, 2022) and an inexact Riemannian proximal gradient method (Wen and Ke in COA 85:1–32, 2023), utilizing the retraction mapping to address these challenges. They established the sublinear convergence rate of the Riemannian proximal gradient method under the retraction convexity and a geometric condition on retractions, as well as the local linear convergence rate of the inexact Riemannian proximal gradient method under the Riemannian Kurdyka-Łojasiewicz property. In this paper, we demonstrate the linear convergence rate of the Riemannian proximal gradient method and the linear convergence rate of the proximal gradient method proposed in Chen et al. (SIAM J Opt 30:210–239, 2020) under strong retraction convexity. Additionally, we provide a counterexample that violates the geometric condition on retractions, which is crucial for establishing the sublinear convergence rate of the Riemannian proximal gradient method.
Article
Full-text available
In this paper, we extend a recently established subgradient method for the computation of Riemannian metrics that optimizes certain singular value functions associated with dynamical systems. This extension is threefold. First, we introduce a projected subgradient method which results in Riemannian metrics whose parameters are confined to a compact convex set and we can thus prove that a minimizer exists; second, we allow inexact subgradients and study the effect of the errors on the computed metrics; and third, we analyze the subgradient algorithm for three different choices of step sizes: constant, exogenous, and Polyak. The new methods are illustrated by application to dimension and entropy estimation of the Hénon map.
Preprint
We study the convergence issue for the gradient algorithm (employing general step sizes) for optimization problems on general Riemannian manifolds (without curvature constraints). Under the assumption of local convexity/quasi-convexity (resp. weak sharp minima), local/global convergence (resp. linear convergence) results are established. As an application, the linear convergence properties of the gradient algorithm employing constant step sizes and Armijo step sizes for finding the Riemannian L^p (p ∈ [1,+∞)) centers of mass are explored, respectively, which in particular extend and/or improve the corresponding results in [Afsari2013].
Article
In this paper, we propose and analyse a path-based incremental target level algorithm for minimizing a constrained convex optimization problem on complete Riemannian manifolds with lower bounded sectional curvature, where the objective function consists of the sum of a large number of component functions. This algorithm extends, to the context of Riemannian manifolds, an incremental subgradient method employing a version of the dynamic stepsize rule. Some convergence results and iteration-complexity bounds of the algorithm are established.
Article
Subgradient algorithms for convex optimization on Riemannian manifolds with sectional curvature bounded from below are studied, and convergence results for the algorithms (employing diminishing step sizes and dynamic step sizes) are established. Some numerical experiments are provided to illustrate the convergence performance of the algorithms.
Article
Full-text available
In this paper, we extend the proximal point algorithm for vector optimization from the Euclidean space to the Riemannian context. Under suitable assumptions on the objective function, the well-definedness and full convergence of the method to a weakly efficient point are proved.
Article
Full-text available
Bearing in mind the notion of monotone vector field on Riemannian manifolds, see [12-16], we study the set of their singularities and, for a particular class of manifolds, develop an extragradient-type algorithm convergent to singularities of such vector fields. In particular, our method can be used for solving nonlinear constrained optimization problems in Euclidean space, with a convex objective function and the constraint set a constant curvature Hadamard manifold. Our paper shows how tools of convex analysis on Riemannian manifolds can be used to solve some nonconvex constrained problems in a Euclidean space.
Article
Full-text available
Monotone vector fields on Riemannian manifolds will be introduced. Their first order characterizations will be given. The connection with one parameter transformation groups, the Lie derivative and conformal vector fields will be outlined.
Article
Full-text available
The maximal monotonicity notion in Banach spaces is extended to Riemannian manifolds of nonpositive sectional curvature, Hadamard manifolds, and proved to be equivalent to the upper semicontinuity. We consider the problem of finding a singularity of a multivalued vector field in a Hadamard manifold and present a general proximal point method to solve that problem, which extends the known proximal point algorithm in Euclidean spaces. We prove that the sequence generated by our method is well defined and converges to a singularity of a maximal monotone vector field, whenever it exists. Applications in minimization problems with constraints, minimax problems and variational inequality problems, within the framework of Hadamard manifolds, are presented.
Article
Full-text available
We study infinitesimal properties of nonsmooth (nondifferentiable) functions on smooth manifolds. The eigenvalue function of a matrix on the manifold of symmetric matrices gives a natural example of such a nonsmooth function. A subdifferential calculus for lower semicontinuous functions is developed here for studying constrained optimization problems, nonclassical problems of calculus of variations, and generalized solutions of first-order partial differential equations on manifolds. We also establish criteria for monotonicity and invariance of functions and sets with respect to solutions of differential inclusions.
Article
Full-text available
In this paper we consider the minimization problem with constraints. We will show that if the set of constraints is a Riemannian manifold of nonpositive sectional curvature, and the objective function is convex in this manifold, then the proximal point method in Euclidean space is naturally extended to solve that class of problems. We will prove that the sequence generated by our method is well defined and converges to a minimizer. In particular we show how tools of Riemannian geometry, more specifically convex analysis on Riemannian manifolds, can be used to solve a nonconvex constrained problem in Euclidean space.
Article
Full-text available
The relationship between monotonicity and accretivity on Riemannian manifolds is studied in this paper, and both concepts are proved to be equivalent in Hadamard manifolds. As a consequence, an iterative method is obtained for approximating singularities of Lipschitz continuous, strongly monotone mappings. We also establish the equivalence between the strong convexity of functions and the strong monotonicity of their subdifferentials on Riemannian manifolds. These results are then applied to solve the minimization of convex functions on Riemannian manifolds.
Article
Full-text available
We consider spectral functions f ∘ λ where f is any permutation-invariant mapping from C^n to R, and λ is the eigenvalue map from the set of n × n complex matrices to C^n, ordering the eigenvalues lexicographically. For example, if f is the function "maximum real part", then f ∘ λ is the spectral abscissa, while if f is "maximum modulus", then f ∘ λ is the spectral radius. Both these spectral functions are continuous, but they are neither convex nor Lipschitz. For our analysis, we use the notion of subgradient extensively analyzed in Variational Analysis, R.T. Rockafellar and R. J.-B. Wets (Springer, 1998). We show that a necessary condition for Y to be a subgradient of an eigenvalue function f ∘ λ at X is that Y* commutes with X. We also give a number of other necessary conditions for Y based on the Schur form and the Jordan form of X. In the case of the spectral abscissa, we refine these conditions, and we precisely identify the case where subdifferential regularity holds. We conclude by introducing the notion of a semistable program: maximize a linear function on the set of square matrices subject to linear equality constraints together with the constraint that the real parts of the eigenvalues of the solution matrix are non-positive. Semistable programming is a nonconvex generalization of semidefinite programming. Using our analysis, we derive a necessary condition for a local maximizer of a semistable program, and we give a generalization of the complementarity condition familiar from semidefinite programming.
Article
Full-text available
Firmly nonexpansive mappings are introduced in Hadamard manifolds, a particular class of Riemannian manifolds with nonpositive sectional curvature. The resolvent of a set-valued vector field is defined in this setting and, by means of this concept, a strong relationship between monotone vector fields and firmly nonexpansive mappings is established. This fact is then used to prove that the resolvent of a maximal monotone vector field has full domain. The Yosida approximation of a set-valued vector field is also introduced, analyzing its properties, from which the asymptotic behavior of the resolvent is studied. Regarding the singularities of a set-valued monotone vector field, existence results are proved under a certain boundary condition. As a consequence, the existence of fixed points for continuous pseudo-contractive mappings is obtained. Keywords: Hadamard manifold, firmly nonexpansive mapping, resolvent, Yosida approximation, maximal monotone vector field, pseudo-contractive mapping.
Article
Full-text available
We establish the existence and uniqueness results for variational inequality problems on Riemannian manifolds and solve completely the open problem proposed in [S.Z. Németh, Variational inequalities on Hadamard manifolds, Nonlinear Anal. 52 (2003) 1491–1498]. Also the relationships between the constrained optimization problem and the variational inequality problems as well as the projections on Riemannian manifolds are studied.
Article
Full-text available
In this paper, a subgradient-type algorithm for solving the convex feasibility problem on Riemannian manifolds is proposed and analysed. The sequence generated by the algorithm converges to a solution of the problem, provided the sectional curvature of the manifold is non-negative. Moreover, assuming a Slater-type qualification condition, we analyse a variant of the first algorithm which generates a sequence with the finite convergence property, i.e., a feasible point is obtained after a finite number of iterations. Some examples motivating the application of the algorithm to feasibility problems that are nonconvex in the usual sense are considered.
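To make the flavour of such a scheme concrete, here is a minimal sketch (ours, not the authors' implementation) of cyclic subgradient steps on the unit sphere, with the convex sets taken to be geodesic balls; the Polyak-type step size and all helper names are assumptions made for illustration, and the iterates are assumed to stay in a region where the distance functions are geodesically convex:

```python
import numpy as np

def exp_sphere(x, v):
    """Exponential map on the unit sphere."""
    nv = np.linalg.norm(v)
    if nv < 1e-12:
        return x
    return np.cos(nv) * x + np.sin(nv) * v / nv

def log_sphere(x, y):
    """Inverse exponential map on the unit sphere."""
    theta = np.arccos(np.clip(x @ y, -1.0, 1.0))
    if theta < 1e-12:
        return np.zeros_like(x)
    w = y - np.cos(theta) * x
    return theta * w / np.linalg.norm(w)

def dist(x, y):
    return np.arccos(np.clip(x @ y, -1.0, 1.0))

def cyclic_subgradient_feasibility(x0, centers, radii, sweeps=100):
    """Cyclic subgradient steps for the feasibility problem
    d(x, c_i) - r_i <= 0 (geodesic balls) on the unit sphere."""
    x = x0 / np.linalg.norm(x0)
    for k in range(sweeps * len(centers)):
        i = k % len(centers)
        f = dist(x, centers[i]) - radii[i]
        if f > 1e-10:                                   # constraint i is violated
            g = -log_sphere(x, centers[i]) / dist(x, centers[i])  # unit subgradient of d(., c_i)
            x = exp_sphere(x, -f * g)                   # Polyak-type step f / ||g||^2 with ||g|| = 1
    return x

centers = [np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])]
radii = [1.0, 1.0]
x = cyclic_subgradient_feasibility(np.array([0.0, 0.0, 1.0]), centers, radii)
print(x, [dist(x, c) for c in centers])                 # both distances should be <= 1
```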
Article
Full-text available
Existence and location of Nash equilibrium points are studied for a large class of finite families of payoff functions whose domains are not necessarily convex in the usual sense. The geometric idea is to embed these non-convex domains into suitable Riemannian manifolds, thereby regaining certain geodesic convexity properties. By using recent non-smooth analysis on Riemannian manifolds and a variational inequality for acyclic sets, an efficient location result for Nash equilibrium points is given. Some examples show the applicability of our results.
Article
Full-text available
We prove that if two smooth manifolds intersect transversally, then the method of alternating projections converges locally at a linear rate. We bound the speed of convergence in terms of the angle between the manifolds, which in turn we relate to the modulus of metric regularity for the intersection problem, a natural measure of conditioning. We discuss a variety of problem classes where the projections are computationally tractable, and we illustrate the method numerically on a problem of finding a low-rank solution of a matrix equation.
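The following toy sketch is only loosely in the spirit of the paper's low-rank matrix example (the set of matrices of rank at most r is not itself a smooth manifold, and all names and parameters here are ours): it alternates projections between a rank constraint and an affine data constraint.

```python
import numpy as np

def proj_rank(X, r):
    """Project onto matrices of rank at most r via truncated SVD."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    s[r:] = 0.0
    return (U * s) @ Vt

def proj_data(X, mask, M):
    """Project onto the affine set {X : X agrees with M on the observed entries}."""
    Y = X.copy()
    Y[mask] = M[mask]
    return Y

rng = np.random.default_rng(0)
M = rng.standard_normal((20, 2)) @ rng.standard_normal((2, 20))   # rank-2 ground truth
mask = rng.random(M.shape) < 0.6                                   # observed entries
X = np.where(mask, M, 0.0)
for _ in range(500):                                               # alternate the two projections
    X = proj_rank(proj_data(X, mask, M), 2)
print(np.linalg.norm(X[mask] - M[mask]))                           # mismatch on the observed entries
```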
Article
Full-text available
In this paper we develop new Newton and conjugate gradient algorithms on the Grassmann and Stiefel manifolds. These manifolds represent the constraints that arise in such areas as the symmetric eigenvalue problem, nonlinear eigenvalue problems, electronic structure computations, and signal processing. In addition to the new algorithms, we show how the geometrical framework gives penetrating new insights allowing us to create, understand, and compare algorithms. The theory proposed here provides a taxonomy for numerical linear algebra algorithms that provides a top-level mathematical view of previously unrelated algorithms. It is our hope that developers of new algorithms and perturbation theories will benefit from the theory, methods, and examples in this paper. (Authors' note: the condensed-matter interest is in new methods for minimizing Kohn-Sham orbitals under orthonormality constraints and in "geometrically correct" generalizations and extensions of the analytically continued functional approach, Phys. Rev. Lett. 69, 1077 (1992); the problem of orthonormality constraints is quite general, and the methods discussed are applicable in a wide range of fields.)
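The sketch below is not the Newton or conjugate gradient scheme developed in the paper; it is a plain Riemannian gradient iteration, added only to illustrate the tangent-space projection and QR-based retraction machinery on the Stiefel manifold for the symmetric eigenvalue problem (all names, the step size, and the iteration count are ours):

```python
import numpy as np

def stiefel_gradient_step(A, X, step):
    """One projected-gradient step for f(X) = trace(X^T A X) on the Stiefel manifold,
    using the tangent-space projection and a QR-based retraction."""
    G = 2.0 * A @ X                                   # Euclidean gradient of f
    rgrad = G - X @ ((X.T @ G + G.T @ X) / 2.0)       # Riemannian gradient (tangent projection)
    Q, R = np.linalg.qr(X - step * rgrad)             # retract back onto the manifold
    return Q * np.sign(np.diag(R))                    # normalize column signs

rng = np.random.default_rng(1)
B = rng.standard_normal((10, 10))
A = (B + B.T) / 2.0                                   # symmetric test matrix
X = np.linalg.qr(rng.standard_normal((10, 3)))[0]     # random point on St(10, 3)
for _ in range(2000):
    X = stiefel_gradient_step(A, X, step=0.05)
print(np.trace(X.T @ A @ X))                          # approaches the sum of the 3 smallest eigenvalues
print(np.sum(np.sort(np.linalg.eigvalsh(A))[:3]))
```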
Article
Under the assumption that the sectional curvature of the manifold is bounded from below, we establish a convergence result for the cyclic subgradient projection algorithm for the convex feasibility problem presented in a paper by Bento and Melo on Riemannian manifolds (J Optim Theory Appl 152, 773-785, 2012). If, additionally, we assume that a Slater-type condition is satisfied, then we further show that, without changing the step size, this algorithm terminates in a finite number of iterations. Clearly, our results extend the corresponding ones due to Bento and Melo and, in particular, we partially solve the open problem proposed in the paper by Bento and Melo.
Article
We consider variational inequality problems for set-valued vector fields on general Riemannian manifolds. Existence of solutions, convexity of the solution set, and convergence of the proximal point algorithm for variational inequality problems with set-valued mappings on Riemannian manifolds are established. Applications to convex optimization problems on Riemannian manifolds are provided.
Article
By using recently developed theory which extends the idea of weak convergence into CAT(0) space we prove the convergence of the alternating projection method for convex closed subsets of a CAT(0) space. Given the right notion of weak convergence it turns out that the generalization of the well-known results in Hilbert spaces is straightforward and allows the use of the method in a nonlinear setting. As an application, we use the alternating projection method to minimize convex functionals on a CAT(0) space.
Article
We establish a maximum principle for viscosity subsolutions and supersolutions of equations of the form $u_t + F(t, d_x u) = 0$, $u(0, x) = u_0(x)$, where $u_0 : M \to \mathbb{R}$ is a bounded uniformly continuous function, $M$ is a Riemannian manifold, and $F : [0, \infty) \times T^*M \to \mathbb{R}$. This yields uniqueness of the viscosity solutions of such Hamilton–Jacobi equations.
Article
The convex feasibility problem in image recovery is discussed. In the conventional approach, an optimality criterion is introduced to define a unique solution, and computational tractability imposes that many constraints be left out of the recovery process. In the feasibility approach, any image which satisfies all the constraints arising from the data and a priori knowledge is an acceptable solution. The field originated in the early 1970s with the formulation of tomographic reconstruction and band-limited extrapolation problems as affine feasibility problems, and it has benefited from a regained interest in the convex feasibility problem on the part of several groups of researchers, as well as from efficient parallel alternatives to the rudimentary POCS algorithm. The lack of a general-purpose, globally convergent method for solving nonconvex feasibility problems seems to be an insurmountable obstacle.
Article
One kind of L-average Lipschitz condition is introduced for covariant derivatives of sections on Riemannian manifolds. A convergence criterion of Newton's method and the radii of the uniqueness balls of the singular points for sections on Riemannian manifolds, which are independent of the curvatures, are established under the assumption that the covariant derivatives of the sections satisfy this kind of L-average Lipschitz condition. Some applications to special cases, including Kantorovich's condition and the γ-condition as well as Smale's α-theory, are provided. In particular, the result due to Ferreira and Svaiter (Kantorovich's theorem on Newton's method in Riemannian manifolds, J. Complexity 18 (2002) 304-329) is extended, while the results due to Dedieu, Priouret and Malajovich (Newton's method on Riemannian manifolds: covariant alpha theory, IMA J. Numer. Anal. 23 (2003) 395-419) are improved significantly. Moreover, the corresponding results due to Alvarez, Bolte and Munier (A unifying local convergence result for Newton's method in Riemannian manifolds, Found. Comput. Math., to appear) for vector fields and mappings on Riemannian manifolds are also extended.
Article
In this paper, we study the rate of convergence of the cyclic projection algorithm applied to finitely many semi-algebraic convex sets. We establish an explicit convergence rate estimate which relies on the maximum degree of the polynomials that generate the semi-algebraic convex sets and the dimension of the underlying space. We achieve our results by exploiting the algebraic structure of the semi-algebraic convex sets.
Article
In radiation therapy one is confronted with the task of formulating a treatment plan which delivers a specified dose to a tumour but avoids irreparable damage to surrounding uninvolved structures. Radiation therapy treatment planning (RTTP) involves an inverse and a forward problem. The inverse problem is to devise a treatment plan, i.e. a radiation beam configuration and beam weighting, which provides a specified dose distribution to the delineated region. The forward problem is to calculate the dose distribution within the patient that results from the weighted radiation beam configuration. Since no analytic closed-form mathematical formulation of the forward operator exists, the inverse problem actually calls for computerised inversion of data. This inversion is achieved by constructing a fully discretised model that leads to a system of linear inequalities. These inequalities are solved either by a row-action method or a block-Cimmino algorithm which allows the assignment of weights within each block of inequalities.
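As a minimal Euclidean sketch of the simultaneous (Cimmino-type) idea for a system of linear inequalities, with toy constraints and names of our own choosing (no clinical data or block structure is modelled), each sweep averages the projections onto the violated half-spaces:

```python
import numpy as np

def cimmino_inequalities(A, b, x0, iters=2000, relax=1.0):
    """Simultaneous (Cimmino-type) projections onto the half-spaces a_i^T x <= b_i:
    average the projections onto the violated half-spaces and relax."""
    x = np.asarray(x0, dtype=float)
    m = A.shape[0]
    row_norm2 = np.sum(A * A, axis=1)
    for _ in range(iters):
        viol = np.maximum(A @ x - b, 0.0)               # positive part of each violation
        x = x - relax * (A.T @ (viol / row_norm2)) / m  # averaged projection step
    return x

# toy constraints: x1 + x2 <= 1, x1 >= 0, x2 >= 0
A = np.array([[1.0, 1.0], [-1.0, 0.0], [0.0, -1.0]])
b = np.array([1.0, 0.0, 0.0])
x = cimmino_inequalities(A, b, x0=[2.0, -1.0])
print(x, A @ x - b)                                     # residuals should be (approximately) nonpositive
```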
Article
A unified proof is given of the maximum principle for optimal control with various kinds of constraints by using a multiplier rule on metric spaces.
Article
A cyclically controlled method of subgradient projections (CSP) for the convex feasibility problem of solving convex inequalities is presented. The features of this method make it an efficient tool in handling huge and sparse problems. A particular application to an image reconstruction problem of emission computerized tomography is mentioned.
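A minimal Euclidean sketch of a CSP-type sweep over convex inequalities is given below; the constraint choices, function names, and unit relaxation parameter are ours and serve only to illustrate the subgradient projection step:

```python
import numpy as np

def csp(x0, funcs, grads, sweeps=200, relax=1.0):
    """Cyclic subgradient projections for the convex inequalities g_i(x) <= 0."""
    x = np.asarray(x0, dtype=float)
    for _ in range(sweeps):
        for g, dg in zip(funcs, grads):
            gi = g(x)
            if gi > 0.0:                               # constraint is violated
                t = dg(x)                              # a subgradient of g at x
                x = x - relax * gi / (t @ t) * t       # subgradient projection step
    return x

c = np.array([3.0, 0.0])
funcs = [lambda x: (x - c) @ (x - c) - 4.0,            # disc of radius 2 around (3, 0)
         lambda x: x[0] - 2.0]                         # half-plane x1 <= 2
grads = [lambda x: 2.0 * (x - c),
         lambda x: np.array([1.0, 0.0])]
x = csp(np.array([-1.0, 1.0]), funcs, grads)
print(x, [g(x) for g in funcs])                        # both values should be (about) <= 0
```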
Article
We give several unifying results, interpretations, and examples regarding the convergence of the von Neumann alternating projection algorithm for two arbitrary closed convex nonempty subsets of a Hilbert space. Our research is formulated within the framework of Fejér monotonicity, convex and set-valued analysis. We also discuss the case of finitely many sets.
Article
The Newton method for estimating a critical point of a real function is formulated in a coordinate free manner on an arbitrary Lie group. Convergence proofs for the numerical method are given. An application of the general approach to computing the eigenvalues of a symmetric matrix is given, and the resultant algorithm is compared with the classical shifted QR algorithm. Properties of the method described suggest that it is of interest for certain computations in online and adaptive environments.
Article
We present an iterative technique for finding zeroes of vector fields on Riemannian manifolds. As a special case we obtain a “nonlinear averaging algorithm” that computes the centroid of a mass distribution μ supported in a set of small enough diameter D in a Riemannian manifold M. We estimate the convergence rate of our general algorithm and the more special Riemannian averaging algorithm. The algorithm is also used to provide a constructive proof of Karcher's theorem on the existence and local uniqueness of the center of mass, under a somewhat stronger requirement than Karcher's on D. Another corollary of our results is a proof of convergence, for a fairly large open set of initial conditions, of the “GPA algorithm” used in statistics to average points in a shape-space, and a quantitative explanation of why the GPA algorithm converges rapidly in practice; see [D. Groisser, On the convergence of some Procrustean averaging algorithms, Preprint, 2003]. We also show that a mass distribution in M with support Q has a unique center of mass in a (suitably defined) convex hull of Q.
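Schematically (in our notation, not the paper's exact statement), the Riemannian averaging iteration for points $p_1, \dots, p_n$ with weights $w_i$ summing to one reads

\[
x_{k+1} \;=\; \exp_{x_k}\!\Big( \sum_{i=1}^{n} w_i \, \exp_{x_k}^{-1}(p_i) \Big),
\]

i.e., a gradient-type step for the weighted sum of squared Riemannian distances; when the data lie in a sufficiently small ball, the fixed point of this map is the (local) center of mass.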
Article
This is the first paper dealing with the study of weak sharp minima for constrained optimization problems on Riemannian manifolds, which are important in many applications. We consider the notions of local weak sharp minima, boundedly weak sharp minima, and global weak sharp minima for such problems and establish their complete characterizations in the case of convex problems on finite-dimensional Riemannian manifolds and Hadamard manifolds. A number of the results obtained in this paper are also new for the case of conventional problems in finite-dimensional Euclidean spaces. Our methods involve appropriate tools of variational analysis and generalized differentiation on Riemannian and Hadamard manifolds developed and efficiently implemented in this paper.
Article
We study the problem of finding the global Riemannian center of mass of a set of data points on a Riemannian manifold. Specifically, we investigate the convergence of constant step-size gradient descent algorithms for solving this problem. The challenge is that often the underlying cost function is neither globally differentiable nor convex, and despite this one would like to have guaranteed convergence to the global minimizer. After some necessary preparations we state a conjecture which we argue is the best (in a sense described) convergence condition one can hope for. The conjecture specifies conditions on the spread of the data points, step-size range, and the location of the initial condition (i.e., the region of convergence) of the algorithm. These conditions depend on the topology and the curvature of the manifold and can be conveniently described in terms of the injectivity radius and the sectional curvatures of the manifold. For manifolds of constant nonnegative curvature (e.g., the sphere and the rotation group in R^3) we show that the conjecture holds true (we do this by proving and using a comparison theorem which seems to be of a different nature from the standard comparison theorems in Riemannian geometry). For manifolds of arbitrary curvature we prove convergence results which are weaker than the conjectured one (but still superior over the available results). We also briefly study the effect of the configuration of the data points on the speed of convergence.
Article
To study a geometric model of the human spine we are led to finding a constrained minimum of a real valued function defined on a product of special orthogonal groups. To take advantage of its Lie group structure we consider Newton's method on this manifold. Comparisons between measured spines and computed spines show the pertinence of this approach.
Article
The connection between the crystallographic phase problem and the feasible set approach is explored. It is argued that solving the crystallographic phase problem is formally equivalent to a feasible set problem using a statistical operator interpretable via a log-likelihood functional, projection onto the non-convex set of experimental structure factors coupled with a phase-extension constraint and mapping onto atomic positions. In no way does this disagree with or dispute any of the existing statistical relationships available in the literature; instead it expands understanding of how the algorithms work. Making this connection opens the door to the application of a number of well developed mathematical tools in functional analysis. Furthermore, a number of known results in image recovery can be exploited both to optimize existing algorithms and to develop new and improved algorithms.
Article
Solving a convex set theoretic image recovery problem amounts to finding a point in the intersection of closed and convex sets in a Hilbert space. The projection onto convex sets (POCS) algorithm, in which an initial estimate is sequentially projected onto the individual sets according to a periodic schedule, has been the most prevalent tool to solve such problems. Nonetheless, POCS has several shortcomings: it converges slowly, it is ill-suited for implementation on parallel processors, and it requires the computation of exact projections at each iteration. We propose a general parallel projection method (EMOPSP) that overcomes these shortcomings. At each iteration of EMOPSP, a convex combination of subgradient projections onto some of the sets is formed and the update is obtained via relaxation. The relaxation parameter may vary over an iteration-dependent, extrapolated range that extends beyond the interval [0,2] used in conventional projection methods. EMOPSP not only generalizes existing projection-based schemes, but it also converges very efficiently thanks to its extrapolated relaxations. Theoretical convergence results are presented as well as numerical simulations.
Article
Explains set theoretic estimation, which is governed by the notion of feasibility and produces solutions whose sole property is to be consistent with all information arising from the observed data and a priori knowledge. Each piece of information is associated with a set in the solution space, and the intersection of these sets, the feasibility set, represents the acceptable solutions. The practical use of the set theoretic framework stems from the existence of efficient techniques for finding these solutions. Many scattered problems in systems science and signal processing have been approached in set theoretic terms over the past three decades. The author synthesizes a single, general framework from these various approaches, examines its fundamental philosophy, goals, and analytical techniques, and relates it to conventional methods.
Article
Due to their extraordinary utility and broad applicability in many areas of classical mathematics and modern physical sciences (most notably, computerized tomography), algorithms for solving convex feasibility problems continue to receive great attention. To unify, generalize, and review some of these algorithms, a very broad and flexible framework is investigated. Several crucial new concepts which allow a systematic discussion of questions on behaviour in general Hilbert spaces and on the quality of convergence are brought out. Numerous examples are given.