Article

Variable-step preconditioned conjugate gradient method for partial symmetric eigenvalue problems

Article
Full-text available
In two previous papers by Neymeyr [Linear Algebra Appl. 322 (1–3) (2001) 61; 322 (1–3) (2001) 87], a sharp, but cumbersome, convergence rate estimate was proved for a simple preconditioned eigensolver, which computes the smallest eigenvalue together with the corresponding eigenvector of a symmetric positive definite matrix, using a preconditioned gradient minimization of the Rayleigh quotient. In the present paper, we discover and prove a much shorter and more elegant (but still sharp in decisive quantities) convergence rate estimate of the same method that also holds for a generalized symmetric definite eigenvalue problem. The new estimate is simple enough to stimulate a search for a more straightforward proof technique that could be helpful to investigate such a practically important method as the locally optimal block preconditioned conjugate gradient eigensolver.
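The method this estimate concerns is a preconditioned gradient descent on the Rayleigh quotient. A minimal NumPy sketch of that idea, assuming a symmetric positive definite A and a preconditioner T approximating its inverse (the toy matrix, the diagonal preconditioner, and all names are illustrative, not the paper's notation):

```python
import numpy as np

def preconditioned_gradient_eig(A, T, x, iters=200):
    """Preconditioned gradient minimization of the Rayleigh quotient:
    step against the preconditioned residual T(Ax - lam*x) at each
    iteration. A is symmetric positive definite, T approximates inv(A)."""
    x = x / np.linalg.norm(x)
    for _ in range(iters):
        lam = x @ A @ x              # Rayleigh quotient (x has unit norm)
        r = A @ x - lam * x          # gradient direction of the quotient
        x = x - T @ r                # preconditioned gradient step
        x = x / np.linalg.norm(x)
    return x @ A @ x, x

# Toy problem: A = diag(1..4), T = inv(diag(A)); smallest eigenvalue is 1.
A = np.diag([1.0, 2.0, 3.0, 4.0])
T = np.diag(1.0 / np.diag(A))
lam, v = preconditioned_gradient_eig(A, T, np.ones(4))
```

With a good preconditioner the contraction factor per step is bounded independently of the mesh size, which is the point of the convergence rate estimates surveyed here.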
Article
Full-text available
The following estimate for the Rayleigh--Ritz method is proved: |λ̃ − λ|·|(ũ, u)| ≤ ‖Aũ − λ̃ũ‖·sin∠{u; Ũ}, ‖u‖ = 1. Here A is a bounded self-adjoint operator in a real Hilbert/Euclidean space, {λ, u} one of its eigenpairs, Ũ a trial subspace for the Rayleigh--Ritz method, and {λ̃, ũ} a Ritz pair. This inequality makes it possible to analyze the fine structure of the error of the Rayleigh--Ritz method; in particular, it shows that |(ũ, u)| ≤ Cε², if an eigenvector u is close to the trial subspace with accuracy ε and a Ritz vector ũ is an ε-approximation to another eigenvector, with a different eigenvalue. Generalizations of the estimate to the cases of eigenspaces and invariant subspaces are suggested, and estimates of approximation of eigenspaces and invariant subspaces are proved.
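The inequality in question is |λ̃ − λ|·|(ũ, u)| ≤ ‖Aũ − λ̃ũ‖·sin∠{u; Ũ} with ‖u‖ = 1, and it holds for every Ritz pair and every eigenpair. A small NumPy experiment checking it over a random trial subspace (a sketch; the matrix and subspace are arbitrary test data, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 8, 3

# Symmetric matrix with known eigenpairs: A = Q diag(1..n) Q^T.
Q = np.linalg.qr(rng.standard_normal((n, n)))[0]
A = Q @ np.diag(np.arange(1.0, n + 1)) @ Q.T

# Orthonormal trial subspace U and its Ritz pairs.
U = np.linalg.qr(rng.standard_normal((n, m)))[0]
ritz_vals, C = np.linalg.eigh(U.T @ A @ U)
ritz_vecs = U @ C

lam, u = 1.0, Q[:, 0]                          # exact eigenpair, ||u|| = 1
sin_angle = np.linalg.norm(u - U @ (U.T @ u))  # sin of angle(u, span U)

# Count violations of the estimate over all Ritz pairs (should be zero).
violations = sum(
    abs(lt - lam) * abs(ut @ u)
    > np.linalg.norm(A @ ut - lt * ut) * sin_angle + 1e-12
    for lt, ut in zip(ritz_vals, ritz_vecs.T)
)
```

The key observation behind the proof is that the Ritz residual Aũ − λ̃ũ is orthogonal to the trial subspace, so pairing it with u only "sees" the component of u outside Ũ.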
Article
Full-text available
A short survey of some results on preconditioned iterative methods for symmetric eigenvalue problems is presented. The survey is by no means complete and reflects the author's personal interests and biases, with emphasis on the author's own contributions. The author surveys most of the important theoretical results and ideas that have appeared in the Soviet literature, adding references to work published in the Western literature mainly to preserve the integrity of the topic. The aim of this paper is to introduce a systematic classification of preconditioned eigensolvers, separating the choice of a preconditioner from the choice of an iterative method. A formal definition of a preconditioned eigensolver is given. Recent developments in the area, in particular on Davidson's method, are mainly ignored. Domain decomposition methods for eigenproblems are included in the framework of preconditioned eigensolvers.
Article
Full-text available
We describe new algorithms of the Locally Optimal Block Preconditioned Conjugate Gradient (LOBPCG) Method for symmetric eigenvalue problems, based on a local optimization of a three-term recurrence, and suggest several other new methods. To be able to compare numerically different methods in the class, with different preconditioners, we propose a common system of model tests, using random preconditioners and initial guesses. As the "ideal" control algorithm, we advocate the standard preconditioned conjugate gradient method for finding an eigenvector as an element of the null-space of the corresponding homogeneous system of linear equations under the assumption that the eigenvalue is known. We recommend that every new preconditioned eigensolver be compared with this "ideal" algorithm on our model test problems in terms of the speed of convergence, the cost of each iteration, and memory requirements. We provide such a comparison for our LOBPCG method. Numerical results establish that our algo...
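A single-vector sketch of the locally optimal recurrence the abstract describes: each step performs a Rayleigh--Ritz procedure on the three-dimensional span of the current iterate, the preconditioned residual, and the previous direction. The toy matrix, the Jacobi-style preconditioner, and all names here are illustrative assumptions, not the paper's code:

```python
import numpy as np

def lobpcg_single(A, T, x, iters=60):
    """Locally optimal PCG for the smallest eigenpair of a symmetric A:
    Rayleigh--Ritz on span{x, T r, p} at every iteration."""
    x = x / np.linalg.norm(x)
    p = None
    for _ in range(iters):
        lam = x @ A @ x
        w = T @ (A @ x - lam * x)           # preconditioned residual
        cols = [x, w] if p is None else [x, w, p]
        Q, _ = np.linalg.qr(np.column_stack(cols))
        vals, vecs = np.linalg.eigh(Q.T @ A @ Q)
        y = Q @ vecs[:, 0]                  # locally optimal new iterate
        p = y - x * (x @ y)                 # next "conjugate" direction
        x = y / np.linalg.norm(y)
    return x @ A @ x, x

A = np.diag(np.arange(1.0, 11.0))           # eigenvalues 1..10
T = np.diag(1.0 / np.diag(A))               # toy Jacobi preconditioner
lam, v = lobpcg_single(A, T, np.ones(10))
```

Production implementations use a block of vectors and more careful basis handling; this sketch only shows the three-term local optimization itself.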
Article
Full-text available
We consider the problem of computing a modest number of the smallest eigenvalues along with orthogonal bases for the corresponding eigenspaces of a symmetric positive definite operator A defined on a finite dimensional real Hilbert space V. In our applications, the dimension of V is large and the cost of inverting A is prohibitive. In this paper, we shall develop an effective parallelizable technique for computing these eigenvalues and eigenvectors utilizing subspace iteration and preconditioning for A. Estimates will be provided which show that the preconditioned method converges linearly when used with a uniform preconditioner under the assumption that the approximating subspace is close enough to the span of desired eigenvectors. 1. Introduction. In this paper, we shall be concerned with computing a modest number of the smallest eigenvalues and their corresponding eigenvectors of a large symmetric ill-conditioned system. More explicitly, let A be a symmetric and positive definit...
Article
Full-text available
We show that a modification of preconditioned gradient-type iterative methods for partial generalized eigenvalue problems makes it possible to implement them in a subspace. We propose such methods and estimate their convergence rate. We also describe iterative methods for finding a group of eigenvalues, propose preconditioners, suggest a practical way of computing the initial guess, and consider a model example. These methods are most effective for finding minimal eigenvalues of simple discretizations of elliptic operators with piecewise constant coefficients in domains composed of rectangles or parallelepipeds. The iterative process is carried out on the interfaces between the subdomains. Its rate of convergence does not decrease when the mesh gets finer, and each iteration has a quite modest cost. This process is effective and parallelizable. Key words: eigenvalue problem, iterations in a subspace, preconditioner. AMS(MOS) subject classifications: 65F35. 1. Introduction. We first...
Article
Preface Acknowledgements 1. Direct solution methods 2. Theory of matrix eigenvalues 3. Positive definite matrices, Schur complements, and generalized eigenvalue problems 4. Reducible and irreducible matrices and the Perron-Frobenius theory for nonnegative matrices 5. Basic iterative methods and their rates of convergence 6. M-matrices, convergent splittings, and the SOR method 7. Incomplete factorization preconditioning methods 8. Approximate matrix inverses and corresponding preconditioning methods 9. Block diagonal and Schur complement preconditionings 10. Estimates of eigenvalues and condition numbers for preconditioned matrices 11. Conjugate gradient and Lanczos-type methods 12. Generalized conjugate gradient methods 13. The rate of convergence of the conjugate gradient method Appendices.
Article
A new incomplete factorization method is proposed, differing from previous ones by the way in which the diagonal entries of the triangular factors are defined. A comparison is given with the dynamic modified incomplete factorization methods of Axelsson–Barker and Beauwens, and with the relaxed incomplete Cholesky method of Axelsson and Lindskog. Theoretical arguments show that the new method is at least as robust as both previous ones, while numerical experiments made in the discrete PDE context show an effective improvement in many practical circumstances, particularly for anisotropic problems.
Article
To precondition large sparse linear systems resulting from the discretization of second-order elliptic partial differential equations, many recent works focus on the so-called algebraic multilevel methods. These are based on a block incomplete factorization process applied to the system matrix partitioned in hierarchical form. They have been shown to be both robust and efficient in several circumstances, leading to iterative solution schemes of optimal order of computational complexity. Now, although the procedure is essentially algebraic, previous works generally focus on a specific context and consider schemes that use classical grid hierarchies with characteristic mesh sizes h, 2h, 4h, etc. Therefore, these methods require some extra information besides the matrix of the linear system and lack robustness in some situations where semi-coarsening would be desirable. In this paper, we develop a general method that can be applied in a black box fashion to a wide class of problems, ranging from 2D model Poisson problems to 3D singularly perturbed convection–diffusion equations. It is based on an automatic coarsening process similar to the one used in the AMG method, and on coarse grid matrices computed according to a simple and cheap aggregation principle. Numerical experiments illustrate the efficiency and the robustness of the proposed approach. Copyright © 2002 John Wiley & Sons, Ltd.
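The aggregation principle mentioned above can be illustrated in a few lines: a prolongation P that is piecewise constant over the aggregates, and a coarse matrix computed as the Galerkin product Pᵀ A P. This is a sketch on a 1D Poisson matrix with pairwise aggregation, not the paper's actual coarsening algorithm:

```python
import numpy as np

def aggregation_coarse_matrix(A, aggregates):
    """Plain aggregation coarsening: column j of P is the indicator
    vector of aggregate j, and the coarse matrix is A_c = P^T A P."""
    n = A.shape[0]
    nc = max(aggregates) + 1
    P = np.zeros((n, nc))
    P[np.arange(n), aggregates] = 1.0
    return P.T @ A @ P, P

# 1D Poisson matrix (tridiag(-1, 2, -1)) and pairwise aggregates {0,1}, {2,3}, ...
n = 8
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
Ac, P = aggregation_coarse_matrix(A, np.arange(n) // 2)
```

For this model problem the coarse matrix again has the tridiag(-1, 2, -1) stencil, which is why plain aggregation is so cheap: no interpolation weights need to be computed.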
Article
The discretization of eigenvalue problems for partial differential operators is a major source of matrix eigenvalue problems having very large dimensions, but only some of the smallest eigenvalues together with the eigenvectors are to be determined. Preconditioned inverse iteration (a “matrix-free” method) derives from the well-known inverse iteration procedure in such a way that the associated system of linear equations is solved approximately by using a (multigrid) preconditioner. A new convergence analysis for preconditioned inverse iteration is presented. The preconditioner is assumed to satisfy some bound for the spectral radius of the error propagation matrix resulting in a simple geometric setup. In this first part the case of poorest convergence depending on the choice of the preconditioner is analyzed. In the second part the dependence on all initial vectors having a fixed Rayleigh quotient is considered. The given theory provides sharp convergence estimates for the eigenvalue approximations showing that multigrid eigenvalue/vector computations can be done with comparable efficiency as known from multigrid methods for boundary value problems.
Article
Algebraic multigrid methods are designed for the solution of (sparse) linear systems of equations using multigrid principles. In contrast to standard multigrid methods, AMG does not take advantage of the origin of a particular system of equations at hand, nor does it exploit any underlying geometrical situation. Fully automatically and based solely on algebraic information contained in the given matrix, AMG constructs a sequence of “grids” and corresponding operators. A special AMG algorithm will be presented. For a wide range of problems (including certain problems which do not have a continuous background) this algorithm yields an iterative method which exhibits a convergence behavior typical for multigrid methods.
Conference Paper
In the early eighties the direct application of a multigrid technique for solving the partial eigenvalue problem of computing a few of the smallest eigenvalues and their corresponding eigenvectors of a differential operator was proposed by A. Brandt, S. McCormick and J. Ruge [SIAM J. Sci. Stat. Comput. 4, 244–260 (1983; Zbl 0517.65083)]. In the present paper an experimental study of the method for model linear elasticity problems is carried out. Based on these results we give some practical advice for a good choice of multigrid-related parameters.
Chapter
The state of the art in algebraic multigrid (AMG) methods is discussed. The interaction between the relaxation process and the coarse grid correction necessary for proper behavior of the solution process is discussed in detail. Sufficient conditions on relaxation and interpolation for the convergence of the V-cycle are given. The relaxation used in AMG, what smoothing means in an algebraic setting, and how it relates to the existing theory are considered. Some properties of the coarse grid operator are discussed, and results on two-level and multilevel convergence are given. Details of an algorithm particularly suited for problems obtained by discretizing a single elliptic, second-order partial differential equation are given. Results of experiments with such problems using both finite difference and finite element discretizations are presented.
Article
The aim of this paper is to provide a convergence analysis for a preconditioned subspace iteration, which is designed to determine a modest number of the smallest eigenvalues and the corresponding invariant subspace of eigenvectors of a large, symmetric positive definite matrix. The algorithm is built upon a subspace implementation of preconditioned inverse iteration, i.e. the well-known inverse iteration procedure, where the occurring system of linear equations is approximately solved by using a preconditioner. This step is followed by a Rayleigh-Ritz projection so that preconditioned inverse iteration is always applied to the Ritz vectors of the current subspace of approximate eigenvectors. The given theory provides sharp convergence estimates for the Ritz values and is mainly built on arguments exploiting the geometry underlying preconditioned inverse iteration.
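A block NumPy sketch of the scheme described here: a preconditioned inverse-iteration step on the whole block, then re-orthogonalization and a Rayleigh--Ritz projection. The matrix, the preconditioner, and all names are illustrative assumptions, not the paper's algorithm verbatim:

```python
import numpy as np

def pinvit_subspace(A, T, X, iters=100):
    """Preconditioned subspace inverse iteration with Rayleigh--Ritz:
    X holds the current block of approximate eigenvectors."""
    X, _ = np.linalg.qr(X)
    for _ in range(iters):
        R = A @ X - X @ (X.T @ A @ X)          # block residual
        X, _ = np.linalg.qr(X - T @ R)         # preconditioned step, reorthonormalize
        vals, C = np.linalg.eigh(X.T @ A @ X)  # Rayleigh--Ritz projection
        X = X @ C                              # rotate to Ritz vectors
    return vals, X

A = np.diag(np.arange(1.0, 11.0))              # eigenvalues 1..10
T = np.diag(1.0 / np.diag(A))                  # toy Jacobi preconditioner
rng = np.random.default_rng(1)
vals, X = pinvit_subspace(A, T, rng.standard_normal((10, 3)))
```

Applying the step to the Ritz vectors rather than to an arbitrary basis of the subspace is exactly what makes the per-vector convergence estimates of the paper applicable.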
Article
A new multilevel preconditioner is proposed for the iterative solution of linear systems whose coefficient matrix is a symmetric M-matrix arising from the discretization of a second order elliptic PDE. It is based on a recursive block incomplete factorization of the matrix partitioned in a two-by-two block form, in which the submatrix related to the fine grid nodes is approximated by a MILU factorization, and the Schur complement computed from a diagonal approximation of the same submatrix. A general algebraic analysis proves optimal order convergence under mild assumptions related to the quality of the approximations of the Schur complement on the one hand, and of the fine grid submatrix on the other hand. This analysis does not require a red-black ordering of the fine grid nodes, nor a transformation of the matrices in hierarchical form. Considering more specifically 5-point finite difference approximations of 2D problems, we prove that the spectrum of the preconditioned system is...
Il'in, About one iterative method for solving the partial eigenvalue problem
  • A. V. Gavrilin
Preconditioned iterative methods in a subspace
  • N. S. Bakhvalov
  • A. V. Knyazev