
Abstract

We study the solution of the linear least-squares problem min_x ∥b−Ax∥_2, where the matrix A ∈ ℝ^{m×n} (m ≥ n) has rank n and is large and sparse. We assume that A is available as a matrix, not as an operator. The preconditioning of this problem is difficult because A does not have the properties of differential problems that make standard preconditioners effective. Incomplete Cholesky techniques applied to the normal equations do not produce a well-conditioned problem. We attempt to bypass the ill-conditioning by finding an n × n nonsingular submatrix B of A that reduces the Euclidean norm of AB^{−1}. We use B to precondition a symmetric quasi-definite linear system that has the same solution as the original least-squares problem and whose condition number is independent of the condition number of A. We illustrate the performance of our approach on some standard test problems and show that it is competitive with other approaches.
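
The approach can be sketched in a few lines of SciPy. This is only an illustration, not the paper's method: the basis B is taken naively as the first n rows of a specially built test matrix (the paper's contribution is precisely a smarter selection that reduces ∥AB^{−1}∥), and LSQR on the right-preconditioned operator stands in for the symmetric quasi-definite solve.

    import numpy as np
    import scipy.sparse as sp
    import scipy.sparse.linalg as spla

    # Hypothetical full-rank test problem (the identity-dominated top block
    # keeps B = A[:n, :] nonsingular for this illustration).
    rng = np.random.default_rng(0)
    m, n = 400, 100
    A = (sp.random(m, n, density=0.05, random_state=rng)
         + sp.vstack([10.0 * sp.identity(n), sp.csr_matrix((m - n, n))])).tocsc()
    b = rng.standard_normal(m)

    # Placeholder basis choice; the paper searches for a B making ||A B^{-1}|| small.
    B = A[:n, :].tocsc()
    lu = spla.splu(B)                      # sparse LU factorization of B

    # Right-preconditioned operator A B^{-1}: solve min ||b - (A B^{-1}) y||, then recover x.
    Aop = spla.LinearOperator(
        (m, n),
        matvec=lambda y: A @ lu.solve(y),
        rmatvec=lambda r: lu.solve(A.T @ r, trans="T"),   # (A B^{-1})^T r = B^{-T} A^T r
    )
    y = spla.lsqr(Aop, b)[0]
    x = lu.solve(y)                        # x = B^{-1} y solves the original problem
    print("residual:", np.linalg.norm(b - A @ x))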
References
Article
Iterative methods are often suitable for solving least-squares problems min_x ∥b−Ax∥_2, where A is large and sparse. The use of the conjugate gradient method with a nonsingular square submatrix of A as preconditioner was first suggested by Läuchli in 1961. This conjugate gradient method has recently been extended by Yuan to generalized least-squares problems. In this paper we consider the problem of finding a suitable submatrix and its LU factorization for a sparse rectangular matrix A. We give three algorithms based on the sparse LU factorization algorithm of Gilbert and Peierls. Numerical results are given, which indicate that our preconditioners can be effective.
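
SciPy exposes no Gilbert–Peierls-style sparse LU for rectangular matrices, so the sketch below illustrates the submatrix-selection step with a dense stand-in: QR with column pivoting on A^T, which ranks rows by the new direction each contributes. The function name and the whole selection heuristic are illustrative assumptions, not the paper's three algorithms.

    import numpy as np
    from scipy.linalg import qr

    def select_basis_rows(A, n):
        # Dense stand-in for a sparse-LU-based selection: QR with column
        # pivoting on A^T tends to pick rows forming a well-conditioned block.
        _, _, piv = qr(A.T, pivoting=True, mode="economic")
        return np.sort(piv[:n])

    rng = np.random.default_rng(0)
    A = rng.standard_normal((200, 30))
    rows = select_basis_rows(A, 30)
    print(np.linalg.cond(A[rows, :]))   # condition number of the selected submatrix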
Article
Pseudoskeleton approximation and some other problems require knowledge of a sufficiently well-conditioned submatrix of a large-scale matrix. The quality of a submatrix can be measured by the modulus of its determinant, also known as its volume. In this paper we discuss a search algorithm for the maximum-volume submatrix, which has already proved useful in several matrix and tensor approximation algorithms. We investigate the behavior of this algorithm on random matrices and present some of its applications, including the maximization of a bivariate functional.
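
The core of such a search can be sketched in a few lines. This is a simplified rendering that recomputes the coefficient matrix from scratch each sweep; practical implementations start from a pivoted-LU guess and use rank-one updates instead.

    import numpy as np

    def maxvol(A, tol=1.01, max_iters=200):
        # Greedy search for n rows of the tall m x n matrix A whose submatrix
        # has quasi-maximal volume (modulus of determinant).
        m, n = A.shape
        rows = np.arange(n)                 # naive start; assumes A[:n] is nonsingular
        for _ in range(max_iters):
            C = A @ np.linalg.inv(A[rows])  # coefficients of all rows in the current basis
            i, j = np.unravel_index(np.argmax(np.abs(C)), C.shape)
            if abs(C[i, j]) <= tol:         # no swap can grow the volume by more than tol
                break
            rows[j] = i                     # swapping in row i multiplies |det| by |C[i, j]|
        return np.sort(rows)

    rng = np.random.default_rng(0)          # random matrices, as studied in the paper
    A = rng.standard_normal((500, 20))
    rows = maxvol(A)
    print(abs(np.linalg.det(A[rows])))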
Chapter
In the introduction (cf. Section 1.1) we already indicated that there is a strong connection between the Krylov subspace K_n(A; r_0) and the space Π_{n−1} of all polynomials of degree not exceeding n − 1. In particular, any basis of the polynomial space generates a basis for the Krylov subspace. In this section we will discuss two distinguished examples of this connection.
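
For reference, the connection in question is K_n(A; r_0) = span{r_0, A r_0, …, A^{n−1} r_0} = {p(A) r_0 : p ∈ Π_{n−1}}: whenever K_n(A; r_0) has full dimension n, any basis p_0, …, p_{n−1} of Π_{n−1} yields the basis p_0(A) r_0, …, p_{n−1}(A) r_0 of the Krylov subspace.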
Chapter
Parameter-dependent schemes require some a priori information about the underlying problem. In this chapter we show how to estimate the spectrum of a given symmetric indefinite matrix and how to approximate its eigenvalue distribution.
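
As an illustrative stand-in for such an estimate (not the chapter's procedure), a few Lanczos iterations already bracket the spectrum of a symmetric indefinite matrix; with SciPy:

    import numpy as np
    import scipy.sparse as sp
    import scipy.sparse.linalg as spla

    rng = np.random.default_rng(0)
    M = sp.random(300, 300, density=0.02, random_state=rng)
    M = (M + M.T).tocsc()                     # symmetric, generally indefinite

    # Extremal eigenvalue estimates via the Lanczos-based solver.
    lam_max = spla.eigsh(M, k=1, which="LA", return_eigenvectors=False)[0]
    lam_min = spla.eigsh(M, k=1, which="SA", return_eigenvectors=False)[0]
    print(f"spectrum contained in [{lam_min:.3f}, {lam_max:.3f}]")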
Article
It is shown how the maximal-volume concept from interpolation theory can be formulated for matrix approximation problems using low-rank matrices.
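
In matrix terms: if I and J index rows and columns whose intersection block has large volume, the resulting pseudoskeleton (cross) approximation is A ≈ A[:, J] (A[I, J])^{−1} A[I, :], exact when A has rank |I| = |J|. A minimal NumPy rendering:

    import numpy as np

    def cross_approximation(A, rows, cols):
        C = A[:, cols]                    # selected columns
        G = A[np.ix_(rows, cols)]         # intersection block: want its volume large
        R = A[rows, :]                    # selected rows
        return C @ np.linalg.solve(G, R)  # A ~ C G^{-1} R

    rng = np.random.default_rng(0)
    A = rng.standard_normal((60, 5)) @ rng.standard_normal((5, 40))   # exact rank 5
    idx = np.arange(5)                    # naive index choice; a maxvol search is safer
    print(np.linalg.norm(A - cross_approximation(A, idx, idx)))       # ~ 0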
Article
Two incomplete orthogonal decomposition methods (Gram–Schmidt and Givens) for the solution of A^T A x = b are investigated. These two methods are applied to four small examples, and their numerical efficiencies are compared with that of the incomplete Cholesky conjugate gradient method.
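
SciPy ships neither incomplete Gram–Schmidt nor incomplete Givens factorizations, so the sketch below only illustrates the kind of solve being compared, substituting an incomplete LU of A^T A as the preconditioner for conjugate gradients on the normal equations:

    import numpy as np
    import scipy.sparse as sp
    import scipy.sparse.linalg as spla

    rng = np.random.default_rng(0)
    m, n = 300, 80
    A = (sp.random(m, n, density=0.05, random_state=rng)
         + sp.vstack([10.0 * sp.identity(n), sp.csr_matrix((m - n, n))])).tocsc()
    b = rng.standard_normal(m)

    AtA = (A.T @ A).tocsc()                  # SPD because A has full column rank
    ilu = spla.spilu(AtA, drop_tol=1e-4)     # incomplete factorization as preconditioner
    M = spla.LinearOperator(AtA.shape, ilu.solve)
    x, info = spla.cg(AtA, A.T @ b, M=M)
    print(info, np.linalg.norm(A.T @ (b - A @ x)))   # info == 0 signals convergence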
Article
Symmetric quasi-definite matrices arise in numerous applications, notably in interior point methods in mathematical programming. Several authors have derived various properties of these matrices. This article provides a list of some previously known properties and adds a number of others that are believed to be new.
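
For concreteness: a symmetric quasi-definite matrix has the block form K = [H, A; A^T, −G] with H and G symmetric positive definite, and a central known property (due to Vanderbei) is that every symmetric permutation of K admits an LDL^T factorization. A small numerical check with SciPy, using arbitrary blocks:

    import numpy as np
    from scipy.linalg import ldl

    rng = np.random.default_rng(0)
    n1, n2 = 4, 3
    A = rng.standard_normal((n1, n2))
    K = np.block([[np.eye(n1), A],
                  [A.T, -np.eye(n2)]])          # symmetric quasi-definite

    L, D, perm = ldl(K)
    print(np.allclose(L @ D @ L.T, K))          # the factorization reproduces K
    print(np.count_nonzero(np.linalg.eigvalsh(K) > 0) == n1)  # inertia: n1 pos., n2 neg.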