Article

Line Search For Generalized Alternating Projections


Abstract

This paper is about line search for the generalized alternating projections (GAP) method. This method is a generalization of the von Neumann alternating projections method, where instead of performing alternating projections, relaxed projections are alternated. The method can be interpreted as an averaged iteration of a nonexpansive mapping. Therefore, a recently proposed line search method for such algorithms is applicable to GAP. We evaluate this line search and show situations when the line search can be performed with little additional cost. We also present a variation of the basic line search for GAP - the projected line search. We prove its convergence and show that the line search condition is convex in the step length parameter. We show that almost all convex optimization problems can be solved using this approach and numerical results show superior performance with both the standard and the projected line search, sometimes by several orders of magnitude, compared to the nominal method.
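The GAP iteration described in the abstract alternates relaxed projections and then averages. The following sketch is illustrative only; the parameter names mirror the relaxation parameters α, α1, α2 discussed below, and the two example sets are ours, not from the paper:

```python
import numpy as np

def gap_step(x, proj_U, proj_V, alpha=1.0, alpha1=1.5, alpha2=1.5):
    """One GAP iteration: averaged composition of relaxed projections,
    S = (1 - alpha) I + alpha * T2 T1 with T_i = (1 - alpha_i) I + alpha_i P_i."""
    y = (1 - alpha1) * x + alpha1 * proj_U(x)   # relaxed projection onto U
    z = (1 - alpha2) * y + alpha2 * proj_V(y)   # relaxed projection onto V
    return (1 - alpha) * x + alpha * z          # averaging step

# Example: U = x-axis, V = line y = x in R^2, so U ∩ V = {0}
proj_U = lambda p: np.array([p[0], 0.0])
proj_V = lambda p: np.full(2, p.mean())

x = np.array([3.0, -1.0])
for _ in range(100):
    x = gap_step(x, proj_U, proj_V)
# x approaches the intersection U ∩ V = {0}
```

With α1 = α2 = 1.5 the iteration here contracts quickly; the nominal method's rate is what the line search in the paper aims to improve.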


... The generalized alternating projections (GAP) [25] for two closed, convex and nonempty sets U and V, with U ∩ V ≠ ∅, is then defined by the iteration x_{k+1} := S x_k, ...
... The operator S is averaged and the iterates converge to the fixed-point set fix S under the following assumption, see e.g. [25] where these results are collected. Assume that α1, α2 ∈ (0, 2] and that either of the following holds: A1. α1, α2 ∈ (0, 2); A2. α ∈ (0, 1) with either α1 = 2 or α2 = 2; A3. ...
... Remark 4: For linear subspaces U, V, under Assumption 1 case A1 or A2, we have fix S = U ∩ V, see e.g. [25]. For case A3 we have fix S = (V ∩ U) + (V⊥ ∩ U⊥), see [17]. ...
Article
Generalized alternating projections is an algorithm that alternates relaxed projections onto a finite number of sets to find a point in their intersection. We consider the special case of two linear subspaces, for which the algorithm reduces to a matrix iteration. For convergent matrix iterations, the asymptotic rate is linear and determined by the magnitude of the subdominant eigenvalue. In this paper, we show how to select the three algorithm parameters to optimize this magnitude, and hence the asymptotic convergence rate. The obtained rate depends on the Friedrichs angle between the subspaces and is considerably better than known rates for other methods such as alternating projections and Douglas-Rachford splitting. We also present an adaptive scheme that, online, estimates the Friedrichs angle and updates the algorithm parameters based on this estimate. A numerical example is provided that supports our theoretical claims and shows very good performance for the adaptive method.
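The quantity driving these rates, the Friedrichs angle, can be computed from the principal angles between the subspaces via an SVD. The helper name and the example subspaces below are ours, for illustration:

```python
import numpy as np

def friedrichs_angle_cos(A, B):
    """Cosine of the Friedrichs angle between range(A) and range(B):
    the largest principal-angle cosine strictly less than 1."""
    Qa, _ = np.linalg.qr(A)
    Qb, _ = np.linalg.qr(B)
    s = np.linalg.svd(Qa.T @ Qb, compute_uv=False)  # sorted descending
    c = s[s < 1 - 1e-10]          # drop cosines equal to 1 (shared directions)
    return c[0] if c.size else 0.0

# Two planes in R^3 sharing the z-axis, tilted by 60 degrees
A = np.array([[1.0, 0.0], [0.0, 0.0], [0.0, 1.0]])                # span{e1, e3}
B = np.array([[np.cos(np.pi/3), 0.0], [np.sin(np.pi/3), 0.0], [0.0, 1.0]])
cF = friedrichs_angle_cos(A, B)   # cos θF = 0.5 here
```

For comparison with the rates quoted below: alternating projections converges like cos²θF and Douglas-Rachford like cos θF.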
... End users participating in the transactive market have to bid their preferred amount of electricity and consumption modification based on the market-clearing price [24]. Transactive control has been used in commercial and residential buildings for thermostatically controlled loads (TCL), heating, ventilation, etc., which can be accomplished through demand response (DR) programs [25][26][27][28][29]. Figure 6a shows the TE-based concepts in control, which involve DR approaches and price reaction. Under DR approaches, we have time of use, direct load control, and real-time pricing; under price reaction, market price signal, locational marginal price, and shadow price signal. ...
... Transactive energy based control and management [19][20][21][22][23][24][25][26][27][28][29][44][45][46][47][48][49][50][51][52][53][54] Transactive control, transactive network management, P2P markets. Transactive-based control: A transactive control uses market data and local information in a fully decentralized environment to smooth network fluctuations and maintain network balance. ...
Article
Full-text available
Transactive energy is a highly effective technique for peers to exchange and trade energy resources. Several interconnected blocks, such as generation businesses, prosumers, the energy market, energy service providers, transmission and distribution networks, and so on, make up a transactive energy framework. By incorporating the prosumers concept and digitalization into energy systems at the transmission and distribution levels, transactive energy systems have the exciting potential to reduce transmission losses, lower electric infrastructure costs, increase reliability, increase local energy use, and lower customers’ electricity bills at the transmission and distribution levels. This article provides a state-of-the-art review of transactive energy concepts, primary drivers, architecture, the energy market, control and management, network management, new technologies, and the flexibility of the power system, which will help researchers comprehend the various concepts involved.
... The rate of linear convergence of these methods is known to be the cosine of the Friedrichs angle between the subspaces for DR [4], and the squared cosine of this angle for AP [10]. Several relaxations and generalizations of these methods have been proposed, such as the relaxed and the partial relaxed alternating projections (RAP, PRAP) [1,7,23], the generalized alternating projections (GAP) [15,17], the relaxed averaged alternating reflections (RAAR) [21], and the generalized Douglas-Rachford (GDR) [13], among others. We note that AAMR can also be seen as a modified version of DR, since both methods coincide when α = 1/2 and β = 1 in (2). ...
... Then, by (15), the induction hypothesis (13), and (17), we obtain ...
Article
Full-text available
The averaged alternating modified reflections (AAMR) method is a projection algorithm for finding the closest point in the intersection of convex sets to any arbitrary point in a Hilbert space. This method can be seen as an adequate modification of the Douglas–Rachford method that yields a solution to the best approximation problem. In this paper we consider the particular case of two subspaces in a Euclidean space. We obtain the rate of linear convergence of the AAMR method in terms of the Friedrichs angle between the subspaces and the parameters defining the scheme, by studying the linear convergence rates of the powers of matrices. We further optimize the value of these parameters in order to get the minimal convergence rate, which turns out to be better than the one of other projection methods. Finally, we provide some numerical experiments that demonstrate the theoretical results.
Preprint
Full-text available
A preprint version of the above AAMR article is also available, with identical abstract.
... Algorithm 3 is clearly more efficient than a direct application of the ADMM algorithm of [29] to the decomposed primal-dual pair of (vectorized) SDPs (13)-(14). In fact, the cost of the conic projection (39b) is the same for both algorithms, but the sequence of block eliminations and applications of the matrix inversion lemma we have described greatly reduces the cost of the affine projection step: we only need to invert/factorize an m × m matrix, instead of the (n 2 + 2n d + m + 1) × (n 2 + 2n d + m + 1) matrix Q (as we noted before, n 2 + 2n d + m + 1 is usually very large). ...
... We remark that the current implementation of our algorithms is sequential, but many steps can be carried out in parallel, so further computational gains may be achieved by taking full advantage of distributed computing architectures. Besides, it would be interesting to integrate some acceleration techniques (e.g., [14,37]) that are promising to improve the convergence performance of ADMM in practice. ...
Article
Full-text available
We employ chordal decomposition to reformulate a large and sparse semidefinite program (SDP), either in primal or dual standard form, into an equivalent SDP with smaller positive semidefinite (PSD) constraints. In contrast to previous approaches, the decomposed SDP is suitable for the application of first-order operator-splitting methods, enabling the development of efficient and scalable algorithms. In particular, we apply the alternating directions method of multipliers (ADMM) to solve decomposed primal- and dual-standard-form SDPs. Each iteration of such ADMM algorithms requires a projection onto an affine subspace, and a set of projections onto small PSD cones that can be computed in parallel. We also formulate the homogeneous self-dual embedding (HSDE) of a primal-dual pair of decomposed SDPs, and extend a recent ADMM-based algorithm to exploit the structure of our HSDE. The resulting HSDE algorithm has the same leading-order computational cost as those for the primal or dual problems only, with the advantage of being able to identify infeasible problems and produce an infeasibility certificate. All algorithms are implemented in the open-source MATLAB solver CDCS. Numerical experiments on a range of large-scale SDPs demonstrate the computational advantages of the proposed methods compared to common state-of-the-art solvers.
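The "set of projections onto small PSD cones that can be computed in parallel" is the standard eigenvalue-clipping projection. The sketch below is illustrative, not the CDCS implementation:

```python
import numpy as np

def proj_psd(S):
    """Euclidean projection of a symmetric matrix onto the PSD cone:
    eigendecompose and clip negative eigenvalues to zero."""
    w, V = np.linalg.eigh(S)
    return (V * np.clip(w, 0, None)) @ V.T   # V diag(max(w,0)) V^T

# After chordal decomposition, each ADMM iteration projects many small
# blocks; the loop below is embarrassingly parallel across blocks.
blocks = [np.array([[1.0, 2.0], [2.0, -3.0]]), -np.eye(3)]
projected = [proj_psd(S) for S in blocks]
```

Because each block is small, these eigendecompositions are cheap, which is the source of the scalability claimed in the abstract.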
Article
Full-text available
Recently, several authors have shown local and global convergence rate results for Douglas–Rachford splitting under strong monotonicity, Lipschitz continuity, and cocoercivity assumptions. Most of these focus on the convex optimization setting. In the more general monotone inclusion setting, Lions and Mercier showed a linear convergence rate bound under the assumption that one of the two operators is strongly monotone and Lipschitz continuous. We show that this bound is not tight, meaning that no problem from the considered class converges exactly with that rate. In this paper, we present tight global linear convergence rate bounds for that class of problems. We also provide tight linear convergence rate bounds under the assumptions that one of the operators is strongly monotone and cocoercive, and that one of the operators is strongly monotone and the other is cocoercive. All our linear convergence results are obtained by proving the stronger property that the Douglas–Rachford operator is contractive.
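For intuition, the Douglas-Rachford iteration for two sets can be sketched in its textbook feasibility form; this is a generic illustration, not the specific monotone operators analyzed in this paper:

```python
import numpy as np

def douglas_rachford(z, proj_A, proj_B, iters=100):
    """Douglas-Rachford splitting for two sets:
    z+ = z + P_B(2 P_A z - z) - P_A z; P_A z tends to a point in A ∩ B."""
    for _ in range(iters):
        xa = proj_A(z)
        xb = proj_B(2 * xa - z)
        z = z + xb - xa
    return proj_A(z)

# Two lines in R^2 meeting only at the origin
PA = lambda p: np.array([p[0], 0.0])   # projection onto the x-axis
PB = lambda p: np.full(2, p.mean())    # projection onto the line y = x
x = douglas_rachford(np.array([3.0, 2.0]), PA, PB)
# x approaches the intersection point, the origin
```

For these two subspaces the linear rate is the cosine of the Friedrichs angle (here cos 45°), consistent with the rates cited elsewhere on this page.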
Book
Full-text available
This book provides a largely self-contained account of the main results of convex analysis and optimization in Hilbert space. A concise exposition of related constructive fixed point theory is presented, that allows for a wide range of algorithms to construct solutions to problems in optimization, equilibrium theory, monotone inclusions, variational inequalities, best approximation theory, and convex feasibility. The book is accessible to a broad audience, and reaches out in particular to applied scientists and engineers, to whom these tools have become indispensable.
Article
Full-text available
We introduce a first order method for solving very large cone programs to modest accuracy. The method uses an operator splitting method, the alternating directions method of multipliers, to solve the homogeneous self-dual embedding, an equivalent feasibility problem involving finding a nonzero point in the intersection of a subspace and a cone. This approach has several favorable properties. Compared to interior-point methods, first-order methods scale to very large problems, at the cost of lower accuracy. Compared to other first-order methods for cone programs, our approach finds both primal and dual solutions when available and certificates of infeasibility or unboundedness otherwise; it does not rely on any explicit algorithm parameters; and the per-iteration cost of the method is the same as applying the splitting method to the primal or dual alone. We discuss efficient implementation of the method in detail, including direct and indirect methods for computing projection onto the subspace, scaling the original problem data, and stopping criteria. We provide a reference implementation called SCS, which can solve large problems to modest accuracy quickly and is parallelizable across multiple processors. We conclude with numerical examples illustrating the efficacy of the technique; in particular, we demonstrate speedups of several orders of magnitude over state-of-the-art interior-point solvers.
Article
Full-text available
We present an O(√nL)-iteration homogeneous and self-dual linear programming (LP) algorithm. The algorithm possesses the following features:
• It solves the linear programming problem without any regularity assumption concerning the existence of optimal, feasible, or interior feasible solutions.
• It can start at any positive primal-dual pair, feasible or infeasible, near the central ray of the positive orthant (cone), and it does not use any big-M penalty parameter or lower bound.
• Each iteration solves a system of linear equations whose dimension is almost the same as that solved in the standard (primal-dual) interior-point algorithms.
• If the LP problem has a solution, the algorithm generates a sequence that approaches feasibility and optimality simultaneously; if the problem is infeasible or unbounded, the algorithm will correctly detect infeasibility for at least one of the primal and dual problems.
Article
Full-text available
Many mathematical and applied problems can be reduced to finding some common point of a system (finite or infinite) of convex sets. Usually each of the sets is such that it is not difficult to find the projection of any point on to this set. In this paper we shall consider various methods of finding points from the intersection of sets, using projection on to a separate set as an elementary operation. The strong convergence of the sequences obtained in this way is proved. Applications are given to various problems, including the problem of best approximation and problems of optimal control. Particular attention is paid in the latter case to problems with restrictions on the phase coordinates.
Book
This reference text, now in its second edition, offers a modern unifying presentation of three basic areas of nonlinear analysis: convex analysis, monotone operator theory, and the fixed point theory of nonexpansive operators. Taking a unique comprehensive approach, the theory is developed from the ground up, with the rich connections and interactions between the areas as the central focus, and it is illustrated by a large number of examples. The Hilbert space setting of the material offers a wide range of applications while avoiding the technical difficulties of general Banach spaces. The authors have also drawn upon recent advances and modern tools to simplify the proofs of key results making the book more accessible to a broader range of scholars and users. Combining a strong emphasis on applications with exceptionally lucid writing and an abundance of exercises, this text is of great value to a large audience including pure and applied mathematicians as well as researchers in engineering, data science, machine learning, physics, decision sciences, economics, and inverse problems. The second edition of Convex Analysis and Monotone Operator Theory in Hilbert Spaces greatly expands on the first edition, containing over 140 pages of new material, over 270 new results, and more than 100 new exercises. It features a new chapter on proximity operators including two sections on proximity operators of matrix functions, in addition to several new sections distributed throughout the original chapters. Many existing results have been improved, and the list of references has been updated. Heinz H. Bauschke is a Full Professor of Mathematics at the Kelowna campus of the University of British Columbia, Canada. Patrick L. Combettes, IEEE Fellow, was on the faculty of the City University of New York and of Université Pierre et Marie Curie – Paris 6 before joining North Carolina State University as a Distinguished Professor of Mathematics in 2016.
Article
Many popular first order algorithms for convex optimization, such as forward-backward splitting, Douglas-Rachford splitting, and the alternating direction method of multipliers (ADMM), can be formulated as averaged iteration of a nonexpansive mapping. In this paper we propose a line search for averaged iteration that preserves the theoretical convergence guarantee, while often accelerating practical convergence. We discuss several general cases in which the additional computational cost of the line search is modest compared to the savings obtained.
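A minimal sketch of the line-search idea: along the fixed-point residual direction, try longer steps and keep one only if it helps. The acceptance test below (residual decrease) is our simplification; the exact condition in the paper differs:

```python
import numpy as np

def averaged_with_linesearch(T, x0, alpha=0.5, alphas=(2.0, 6.0, 18.0), iters=50):
    """Averaged iteration x+ = x + a (Tx - x) with a simple line search:
    try longer steps along the residual and keep the best candidate that
    still shrinks ||Tx - x|| compared to the nominal step."""
    x = x0
    for _ in range(iters):
        r = T(x) - x                       # fixed-point residual
        x_next = x + alpha * r             # nominal averaged step
        best = np.linalg.norm(T(x_next) - x_next)
        for a in alphas:                   # candidate longer steps
            cand = x + a * alpha * r
            res = np.linalg.norm(T(cand) - cand)
            if res < best:
                best, x_next = res, cand
        x = x_next
    return x

# Example: T is the composition of projections onto two lines in R^2
PU = np.array([[1.0, 0.0], [0.0, 0.0]])    # projection onto x-axis
PV = np.full((2, 2), 0.5)                  # projection onto line y = x
T = lambda x: PU @ (PV @ x)
x = averaged_with_linesearch(T, np.array([4.0, 1.0]))
# x approaches the unique fixed point, the origin
```

Because a candidate is accepted only when the residual shrinks, the scheme never does worse than the nominal averaged step, which is the spirit of the guarantee-preserving line search in the abstract.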
Article
CVXPY is a domain-specific language for convex optimization embedded in Python. It allows the user to express convex optimization problems in a natural syntax that follows the math, rather than in the restrictive standard form required by solvers. CVXPY makes it easy to combine convex optimization with high-level features of Python such as parallelism and object-oriented design. CVXPY is available at http://www.cvxpy.org/ under the GPL license, along with documentation and examples.
Article
Properties of compositions and convex combinations of averaged nonexpansive operators are investigated and applied to the design of new fixed point algorithms in Hilbert spaces. An extended version of the forward-backward splitting algorithm for finding a zero of the sum of two monotone operators is obtained.
Article
Splitting algorithms for the sum of two monotone operators. We study two splitting algorithms for (stationary and evolution) problems involving the sum of two monotone operators. These algorithms are well known in the linear case and are here extended to the case of multivalued monotone operators. We prove the convergence of these algorithms, we give some applications to the obstacle problem and to minimization problems; and finally we present numerical computations comparing these algorithms to some other classical methods.
Article
In various numerical problems one is confronted with the task of solving a system of linear inequalities: (1.1) (i = 1, …, m) assuming, of course, that the above system is consistent. Sometimes one has, in addition, to minimize a given linear form l(x). Thus, in linear programming one obtains a problem of the latter type.
Article
Determining fixed points of nonexpansive mappings is a frequent problem in mathematics and the physical sciences. An algorithm for finding common fixed points of nonexpansive mappings in Hilbert space, essentially due to Halpern, is analyzed. The main theorem extends Wittmann's recent work and partially generalizes a result by Lions. Algorithms of this kind have been applied to the convex feasibility problem.
Article
In this paper we consider an iterative method of finding the common point of convex sets. This method can be regarded as a generalization of the methods discussed in [1-4]. Apart from problems which can be reduced to finding some point of the intersection of convex sets, the method considered can be applied to the approximate solution of problems in linear and convex programming.
Article
Due to their extraordinary utility and broad applicability in many areas of classical mathematics and modern physical sciences (most notably, computerized tomography), algorithms for solving convex feasibility problems continue to receive great attention. To unify, generalize, and review some of these algorithms, a very broad and flexible framework is investigated. Several crucial new concepts which allow a systematic discussion of questions on behaviour in general Hilbert spaces and on the quality of convergence are brought out. Numerous examples are given. 1991 M.R. Subject Classification. Primary 47H09, 49M45, 65-02, 65J05, 90C25; Secondary 26B25, 41A65, 46C99, 46N10, 47N10, 52A05, 52A41, 65F10, 65K05, 90C90, 92C55. Key words and phrases. Angle between two subspaces, averaged mapping, Cimmino's method, computerized tomography, convex feasibility problem, convex function, convex inequalities, convex programming, convex set, Fejér monotone sequence, firmly nonexpansive mapping, H...
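The elementary operation throughout this line of work, projecting onto one set at a time, gives the classical cyclic projections scheme for the convex feasibility problem. A sketch for halfspaces; the helper names and example constraints are illustrative:

```python
import numpy as np

def cyclic_projections(x, projections, iters=200):
    """Cyclic (sequential) projections onto a list of convex sets,
    a basic scheme for the convex feasibility problem."""
    for _ in range(iters):
        for P in projections:
            x = P(x)
    return x

def halfspace_proj(a, b):
    """Projection onto the halfspace {x : a^T x <= b}."""
    a = np.asarray(a, float)
    def P(x):
        v = a @ x - b
        return x if v <= 0 else x - (v / (a @ a)) * a
    return P

# Feasibility for three halfspaces in R^2
projs = [halfspace_proj([1, 0], 1),     # x1 <= 1
         halfspace_proj([0, 1], 1),     # x2 <= 1
         halfspace_proj([-1, -1], 0)]   # x1 + x2 >= 0
x = cyclic_projections(np.array([5.0, -4.0]), projs)
# x converges to a point satisfying all three constraints
```

Each inner projection has a closed form, which is exactly the "elementary operation" assumption made by the frameworks surveyed above.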
J. Nocedal and S. Wright. Numerical Optimization. Springer Series in Operations Research and Financial Engineering. Springer, New York, NY, 2nd edition, 2006.
P. Giselsson, M. Fält, and S. Boyd. Line search for averaged operator iteration. Available: http://arxiv.org/abs/1603.06772v2, 2016.