Article

Quasiseparable Hessenberg reduction of real diagonal plus low rank matrices and applications


Abstract

We present a novel algorithm to perform the Hessenberg reduction of an n×n matrix A of the form A = D + UV^*, where D is diagonal with real entries and U and V are n×k matrices with k ≤ n. The algorithm has a cost of O(n²k) arithmetic operations and is based on the quasiseparable matrix technology. Applications are shown to solving polynomial eigenvalue problems, and some numerical experiments are reported in order to analyze the stability of the approach.
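The structure in the abstract can be illustrated with a small dense-arithmetic sketch. The reduction below is a plain O(n³) Householder similarity reduction, used only as a reference for what "Hessenberg form of A = D + UV^*" means; it is not the paper's O(n²k) quasiseparable algorithm, and all names are illustrative:

```python
import numpy as np

def to_hessenberg(A):
    """Reduce A to upper Hessenberg form by Householder similarity
    transformations (dense O(n^3) reference, not the fast algorithm)."""
    H = A.astype(complex).copy()
    n = H.shape[0]
    Q = np.eye(n, dtype=complex)
    for j in range(n - 2):
        x = H[j + 1:, j].copy()
        if np.linalg.norm(x[1:]) == 0:
            continue
        # complex Householder vector v: P = I - 2 v v^* maps x to a multiple of e_1
        phase = x[0] / abs(x[0]) if x[0] != 0 else 1.0
        v = x.copy()
        v[0] += phase * np.linalg.norm(x)
        v /= np.linalg.norm(v)
        # similarity transform H <- P H P, accumulating Q <- Q P
        H[j + 1:, :] -= 2.0 * np.outer(v, v.conj() @ H[j + 1:, :])
        H[:, j + 1:] -= 2.0 * np.outer(H[:, j + 1:] @ v, v.conj())
        Q[:, j + 1:] -= 2.0 * np.outer(Q[:, j + 1:] @ v, v.conj())
    return H, Q

rng = np.random.default_rng(0)
n, k = 8, 2
D = np.diag(rng.standard_normal(n))                   # real diagonal
U = rng.standard_normal((n, k)) + 1j * rng.standard_normal((n, k))
V = rng.standard_normal((n, k)) + 1j * rng.standard_normal((n, k))
A = D + U @ V.conj().T                                # A = D + U V^*

H, Q = to_hessenberg(A)
```

One can check that Q H Q^* reproduces A and that H has only one nonzero subdiagonal; the point of the paper is that exploiting the diagonal-plus-rank-k structure lets the same form be reached in O(n²k) operations.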


... More recently methods for diagonal plus low rank matrices have been used in combination with interpolation techniques in order to solve generalized nonlinear eigenvalue problems [1,7]. Standard Hessenberg reduction algorithms for rank structured matrices [12,14] are both theoretically and practically ineffective as the size of the correction increases, since their complexities depend quadratically or even cubically on k, or they suffer from possible instabilities [8]. The aim of this paper is to describe a novel efficient reduction scheme which attains the cost of O(n²k) arithmetic operations (ops). ...
Preprint
We develop two fast algorithms for Hessenberg reduction of a structured matrix A = D + UV^H, where D is a real or unitary n×n diagonal matrix and U, V ∈ ℂ^{n×k}. The proposed algorithm for the real case exploits a two-stage approach by first reducing the matrix to a generalized Hessenberg form and then completing the reduction by annihilation of the unwanted subdiagonals. It is shown that the novel method requires O(n²k) arithmetic operations and is significantly faster than other reduction algorithms for rank structured matrices. The method is then extended to the unitary plus low rank case by using a block analogue of the CMV form of unitary matrices. It is shown that a block Lanczos-type procedure for the block tridiagonalization of Re(D) induces a structured reduction of A to a block staircase CMV-type shape. Then, we present a numerically stable method for performing this reduction using unitary transformations and we show how to generalize the subdiagonal elimination to this shape, while still being able to provide a condensed representation for the reduced matrix. In this way the complexity still remains linear in k and, moreover, the resulting algorithm can be adapted to deal efficiently with block companion matrices.
... The general case of unitary plus low rank matrices has been recently addressed in [8]. Extensions for efficiently handling corrections with larger rank, necessary to deal with block companion matrices, have been recently presented in [4,12,29]. ...
Preprint
Full-text available
Hermitian and unitary matrices are two representatives of the class of normal matrices whose full eigenvalue decomposition can be stably computed in quadratic computing complexity. Recently, fast and reliable eigensolvers dealing with low rank perturbations of unitary and Hermitian matrices were proposed. These structured eigenvalue problems appear naturally when computing roots, via confederate linearizations, of polynomials expressed in, e.g., the monomial or Chebyshev basis. Often, however, it is not known beforehand whether or not a matrix can be written as the sum of a Hermitian or unitary matrix plus a low rank perturbation. We propose necessary and sufficient conditions characterizing the class of Hermitian or unitary plus low rank matrices. The number of singular values deviating from 1 determines the rank of a perturbation to bring a matrix to unitary form. A similar condition holds for Hermitian matrices; the eigenvalues of the skew-Hermitian part differing from 0 dictate the rank of the perturbation. We prove that these relations are linked via the Cayley transform. Based on these conditions we are able to identify the closest Hermitian and unitary plus low rank matrix in Frobenius and spectral norm, and a practical Lanczos iteration to detect the low rank perturbation is presented. Numerical tests prove that this straightforward algorithm is robust with respect to noise.
... Another advantage of this representation is that any matrix in the form "diagonal plus low-rank" can be reduced to Hessenberg form H by means of Givens rotations with a low number of arithmetic operations, provided that the diagonal is real. Moreover, the function p(x) = det(xI − H) as well as the Newton correction p(x)/p'(x) can be computed in O(nm²) operations [8]. This fact can be used to implement the Aberth iteration in O(n²m³) ops instead of the O(nm⁴ + n²m³) of [5]. ...
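The Newton correction mentioned in this snippet can be checked on a small dense example using the log-derivative identity p'(x)/p(x) = tr((xI − H)^{-1}). The O(n³) check below only illustrates the quantity involved; it is not the O(nm²) structured computation of [8]:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 6
H = np.triu(rng.standard_normal((n, n)), -1)    # a random upper Hessenberg matrix
x = 0.7 + 0.3j                                  # evaluation point

# Newton correction p(x)/p'(x) for p(x) = det(xI - H), using
# d/dx log det(xI - H) = trace((xI - H)^{-1})
newton = 1.0 / np.trace(np.linalg.inv(x * np.eye(n) - H))

# cross-check against the characteristic polynomial's coefficients
c = np.poly(H)                                  # monic coefficients of det(xI - H)
p = np.polyval(c, x)
dp = np.polyval(np.polyder(c), x)
```

Here `newton` and `p / dp` agree to machine precision; the quasiseparable representation lets this quantity be formed without ever building the dense resolvent.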
Article
A new class of linearizations and ℓ-ifications for matrix polynomials of degree n is proposed. The ℓ-ifications in this class have the form where D is a block diagonal matrix polynomial with blocks of size m, W is an matrix polynomial and , for a suitable integer q. The blocks can be chosen a priori, subject to some restrictions. Under additional assumptions on the blocks the matrix polynomial is a strong ℓ-ification, i.e., the reversed polynomial of defined by is an ℓ-ification of . The eigenvectors of the matrix polynomials and are related by means of explicit formulas. Some practical examples of ℓ-ifications are provided. A strategy for choosing in such a way that is a well conditioned linearization of is proposed. Some numerical experiments that validate the theoretical results are reported.
Article
Full-text available
Novel, new, interesting mathematics. Explore active areas of research in applied mathematics and topics in linear and matrix algebra. Reviews and surveys, taxonomies. Approximate and exact solution, bootstrap sample, clustering, constrained optimization, convex optimization, dimension reduction, eigensolver, high dimension solution, image classification, fault detection, fault detection and sensors, fuzzy logic, fuzzy programming, fast algorithm, forecasting with predictive model, hierarchical tree, image analysis, image and vision, iterative control, linear matrix inequality, linear programming, machine learning, massively parallel, neural net, parameter estimation, parameters linear and nonlinear, parameter uncertainty, pattern analysis, periodic solutions, predictive model for control, principal component analysis, sentiment detection, rank, slow fast decomposition, time interval, statistics, variable detection, variable selection, variance, variance and covariance.
Article
Hermitian and unitary matrices are two representatives of the class of normal matrices whose full eigenvalue decomposition can be stably computed in quadratic computing complexity once the matrix has been reduced, for instance, to tridiagonal or Hessenberg form. Recently, fast and reliable eigensolvers dealing with low‐rank perturbations of unitary and Hermitian matrices have been proposed. These structured eigenvalue problems appear naturally when computing roots, via confederate linearizations, of polynomials expressed in, for example, the monomial or Chebyshev basis. Often, however, it is not known beforehand whether or not a matrix can be written as the sum of a Hermitian or unitary matrix plus a low‐rank perturbation. In this paper, we give necessary and sufficient conditions characterizing the class of Hermitian or unitary plus low‐rank matrices. The number of singular values deviating from 1 determines the rank of a perturbation to bring a matrix to unitary form. A similar condition holds for Hermitian matrices; the eigenvalues of the skew‐Hermitian part differing from 0 dictate the rank of the perturbation. We prove that these relations are linked via the Cayley transform. Then, based on these conditions, we identify the closest Hermitian or unitary plus rank k matrix to a given matrix A, in Frobenius and spectral norm, and give a formula for their distance from A. Finally, we present a practical iteration to detect the low‐rank perturbation. Numerical tests prove that this straightforward algorithm is effective.
Article
Full-text available
We say that an m×m matrix polynomial P(x) = ∑_{i=0}^n P_i x^i is equivalent to an mq×mq matrix polynomial A(x), and write A(x) ≈ P(x), if there exist mq×mq matrix polynomials E(x), F(x) such that det E(x) and det F(x) are nonzero constants and E(x)A(x)F(x) = I_{m(q−1)} ⊕ P(x). Given P(x) of degree n, we provide an mq×mq matrix polynomial A(x) such that: A(x) ≈ P(x); A^#(x) ≈ P^#(x), where P^#(x) = x^n P(x^{−1}) is the reversed polynomial of P(x); and A(x) has the form A(x) = D(x) + [I_m, …, I_m]^t [W_1(x), …, W_q(x)], where D(x) is a diagonal matrix defined by D(x) = diag(b_1(x)I_m, …, b_{q−1}(x)I_m, b_q(x)P_n + sI_m), the polynomials b_1(x), …, b_q(x) are any co-prime monic polynomials of degree d_1, …, d_q, respectively, while W_1(x), …, W_q(x) are matrix polynomials of degree less than d_1, …, d_q, where d_1 + ⋯ + d_q = n and s is a constant which makes b_q(x)P_n + sI_m nonsingular modulo b_i(x), i = 1, …, q−1. An explicit expression of the eigenvectors of A(x) as functions of the eigenvalues is proven. For b_i(x) = (x − β_i)I_m, i = 1, …, n, the matrix polynomial A(x) is a linear pencil of the form diagonal plus low-rank. Numerical experiments show that for suitable choices of β_1, …, β_n, obtained by means of the generalized Pellet theorem and the use of tropical roots, the eigenvalue problem for A(x) is much better conditioned than the eigenvalue problem for P(x).
Article
The general properties and mathematical structures of semiseparable matrices were presented in volume 1 of Matrix Computations and Semiseparable Matrices. In volume 2, Raf Vandebril, Marc Van Barel, and Nicola Mastronardi discuss the theory of structured eigenvalue and singular value computations for semiseparable matrices. These matrices have hidden properties that allow the development of efficient methods and algorithms to accurately compute the matrix eigenvalues. This thorough analysis of semiseparable matrices explains their theoretical underpinnings and contains a wealth of information on implementing them in practice. Many of the routines featured are coded in Matlab and can be downloaded from the Web for further exploration. © 2008 The Johns Hopkins University Press. All rights reserved.
Article
We present an algorithm for the solution of polynomial equations and secular equations of the form S(x) = 0, with S(x) = ∑_{i=1}^n a_i/(x − b_i) − 1, which provides guaranteed approximation of the roots with any desired number of digits. It relies on the combination of two different strategies for dealing with the precision of the floating point computation: the strategy used in the package MPSolve of D. Bini and G. Fiorentino [D.A. Bini, G. Fiorentino, Design, analysis and implementation of a multi-precision polynomial rootfinder, Numer. Algorithms 23 (2000) 127–173] and the strategy used in the package Eigensolve of S. Fortune [S. Fortune, An iterated eigenvalue algorithm for approximating the roots of univariate polynomials, J. Symbolic Comput. 33 (5) (2002) 627–646]. The algorithm is based on the Ehrlich–Aberth (EA) iteration and on several results introduced in the paper. In particular, we extend the concept and the properties of root-neighborhoods from polynomials to secular functions, provide perturbation results for the roots, obtain an effective stop condition for the EA iteration and guaranteed a posteriori error bounds. We provide an implementation, released in the package MPSolve 3.0, based on the GMP library. From the many numerical experiments it turns out that our code is generally much faster than MPSolve 2.0 and than the package Eigensolve. For certain polynomials, like the Mandelbrot or the partition polynomials, the acceleration is dramatic. The algorithm exploits the parallel architecture of the computing platform.
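The Ehrlich–Aberth iteration at the core of MPSolve can be sketched in a few lines. This is a naive fixed-precision toy (function name and starting strategy are illustrative choices), without the root-neighborhoods, multiprecision management, or guaranteed stop conditions that the paper is actually about:

```python
import numpy as np

def ehrlich_aberth(c, sweeps=100):
    """Toy Ehrlich-Aberth iteration for the roots of the polynomial
    with coefficients c (highest degree first)."""
    c = np.asarray(c, dtype=complex)
    n = len(c) - 1
    dc = np.polyder(c)
    # initial guesses on a circle whose radius is the Cauchy bound
    r = 1.0 + np.max(np.abs(c[1:] / c[0]))
    z = r * np.exp(2j * np.pi * (np.arange(n) + 0.5) / n)
    for _ in range(sweeps):
        w = np.polyval(c, z) / np.polyval(dc, z)      # Newton corrections
        rep = np.array([np.sum(1.0 / (z[i] - np.delete(z, i)))
                        for i in range(n)])           # Aberth repulsion terms
        z = z - w / (1.0 - w * rep)
    return z

roots = ehrlich_aberth([1.0, 0.0, 0.0, -8.0])         # p(x) = x^3 - 8
```

The repulsion term is what lets all approximations converge simultaneously to distinct roots; MPSolve layers guaranteed error bounds and adaptive precision on top of this basic scheme.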
Article
In this paper we propose a method for computing the roots of a monic matrix polynomial. To this end we compute the eigenvalues of the corresponding block companion matrix C. This is done by implementing the QR algorithm in such a way that it exploits the rank structure of the matrix. Because of this structure, we can represent the matrix in Givens-weight representation. A bulge-chasing technique similar to that of (S. Chandrasekaran, M. Gu, J. Xia, and J. Zhu. A fast QR algorithm for companion matrices. Operator Theory: Advances and Applications, 179:111–143, 2007) is used during the QR iteration. For practical usage, the matrix C has to be brought into Hessenberg form before the QR iteration starts. During the QR iteration and the transformation to Hessenberg form, the property of the matrix being unitary plus low rank numerically deteriorates. A method to restore this property is used.
Book
Theoretical background Perturbation theory Error analysis Solution of linear algebraic equations Hermitian matrices Reduction of a general matrix to condensed form Eigenvalues of matrices of condensed forms The LR and QR algorithms Iterative methods Bibliography Index.
Article
Given the n×n matrix polynomial P(x) = ∑_{i=0}^k P_i x^i, we consider the associated polynomial eigenvalue problem. This problem, viewed in terms of computing the roots of the scalar polynomial det P(x), is treated in polynomial form rather than in matrix form by means of the Ehrlich–Aberth iteration. The main computational issues are discussed, namely, the choice of the starting approximations needed to start the Ehrlich–Aberth iteration, the computation of the Newton correction, the halting criterion, and the treatment of eigenvalues at infinity. We arrive at an effective implementation which provides more accurate approximations to the eigenvalues with respect to the methods based on the QZ algorithm. The case of polynomials having special structures, like palindromic, Hamiltonian, symplectic, etc., where the eigenvalues have special symmetries in the complex plane, is considered. A general way to adapt the Ehrlich–Aberth iteration to structured matrix polynomials is introduced. Numerical experiments which confirm the effectiveness of this approach are reported.
Article
The QR algorithm is one of the classical methods to compute the eigendecomposition of a matrix. If it is applied on a dense n × n matrix, this algorithm requires O(n3) operations per iteration step. To reduce this complexity for a symmetric matrix to O(n), the original matrix is first reduced to tridiagonal form using orthogonal similarity transformations. In the report (Report TW360, May 2003) a reduction from a symmetric matrix into a similar semiseparable one is described. In this paper a QR algorithm to compute the eigenvalues of semiseparable matrices is designed where each iteration step requires O(n) operations. Hence, combined with the reduction to semiseparable form, the eigenvalues of symmetric matrices can be computed via intermediate semiseparable matrices, instead of tridiagonal ones. The eigenvectors of the intermediate semiseparable matrix will be computed by applying inverse iteration to this matrix. This will be achieved by using an O(n) system solver, for semiseparable matrices. A combination of the previous steps leads to an algorithm for computing the eigenvalue decompositions of semiseparable matrices. Combined with the reduction of a symmetric matrix towards semiseparable form, this algorithm can also be used to calculate the eigenvalue decomposition of symmetric matrices. The presented algorithm has the same order of complexity as the tridiagonal approach, but has larger lower order terms. Numerical experiments illustrate the complexity and the numerical accuracy of the proposed method.
Article
In this paper we design a fast new algorithm for reducing an N × N quasiseparable matrix to upper Hessenberg form via a sequence of N − 2 unitary transformations. The new reduction is especially useful when it is followed by the QR algorithm to obtain a complete set of eigenvalues of the original matrix. In particular, it is shown that in a number of cases some recently devised fast adaptations of the QR method for quasiseparable matrices can benefit from using the proposed reduction as a preprocessing step, yielding lower cost and a simplification of implementation.
Article
The QR iteration method for tridiagonal matrices is at the heart of one classical method to solve the general eigenvalue problem. In this paper we consider the more general class of quasiseparable matrices, which includes not only tridiagonal but also companion, comrade, unitary Hessenberg and semiseparable matrices. A fast QR iteration method exploiting the Hermitian quasiseparable structure (and thus generalizing the classical tridiagonal scheme) is presented. The algorithm is based on an earlier work [6], and it applies to the general case of Hermitian quasiseparable matrices of an arbitrary order. The algorithm operates on generators (i.e., a linear set of parameters defining the quasiseparable matrix), and the storage and the cost of one iteration are only linear. The results of some numerical experiments are presented. An application of this method to solve the general eigenvalue problem via quasiseparable matrices will be analyzed separately elsewhere.
Article
Given N approximations to the zeros of an Nth-degree polynomial, N circular regions in the complex z-plane are determined whose union contains all the zeros, and each connected component of this union consisting of K such circular regions contains exactly K zeros. The bounds for the zeros provided by these circular regions are not excessively pessimistic; that is, whenever the approximations are sufficiently well separated and sufficiently close to the zeros of this polynomial, the radii of these circular regions are shown to overestimate the errors by at most a modest factor simply related to the configuration of the approximations. A few numerical examples are included.
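A sketch of this kind of bound, assuming the classical Gerschgorin-type result that, for a monic degree-N polynomial, disks of radius N·|W_i| centred at the approximations z_i cover all the zeros, where W_i is the Weierstrass correction; the exact constant N is an assumption of this sketch, and the function name is illustrative:

```python
import numpy as np

def inclusion_radii(c, z):
    """Radii N*|W_i| of circular regions centred at the approximations z_i,
    where W_i = p(z_i) / prod_{j != i} (z_i - z_j) is the Weierstrass
    correction of the monic polynomial with coefficient vector c."""
    z = np.asarray(z, dtype=complex)
    n = len(z)
    W = np.array([np.polyval(c, z[i]) / np.prod(z[i] - np.delete(z, i))
                  for i in range(n)])
    return n * np.abs(W)

# approximations to the zeros of p(x) = x^2 - 1
z = np.array([1.01, -0.99])
r = inclusion_radii([1.0, 0.0, -1.0], z)
```

For well-separated approximations close to the true zeros, as in this example, the radii overestimate the actual errors only by a modest factor, consistent with the claim above.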
Article
In linear algebra structured matrices are of great interest because they consume less storage than arbitrary matrices and the computational cost of algorithms involving structured matrices is less than for dense, non-structured ones. Several problems can also be translated into similar problems with structured matrices. Diagonal-plus-semiseparable matrices form a class of such structured matrices. First we look for a suitable definition of this class of matrices and a corresponding representation. Other definitions used in the literature are discussed and the study of diagonal-plus-semiseparable matrices is motivated. In Part I of this thesis a reduction algorithm that transforms any symmetric matrix into a similar diagonal-plus-semiseparable one is presented which has a Lanczos-Ritz convergence behavior and performs a kind of nested subspace iteration at each step. It has the advantage that the diagonal can be chosen freely. Part II focuses on two basic problems in numerical linear algebra: the solution of linear systems and the symmetric eigenvalue problem. First, two fast algorithms for solving diagonal-plus-semiseparable linear systems are constructed. Next, three different techniques for solving the symmetric eigenvalue problem of diagonal-plus-semiseparable matrices are presented: an implicit QR-algorithm, three different divide-and-conquer algorithms and a Cholesky-LR-algorithm. The latter is only applicable when the symmetric diagonal-plus-semiseparable matrix is positive definite. In a last part we introduce two higher rank extensions of semiseparable matrices together with a suitable representation. Any symmetric matrix can be transformed into a similar higher-order semiseparable one and this reduction algorithm has a block-Lanczos-Ritz behavior combined with a kind of nested subspace iteration at each step.
Numerical experiments are included and the software is made freely available on the internet. Doctoral thesis, Faculteit Ingenieurswetenschappen (Faculty of Engineering).
Book
Yuli Eidelman, Israel Gohberg, and Iulian Haimovici. Separable Type Representations of Matrices and Fast Algorithms. Vol. 1: Basics, Completion Problems, Multiplication and Inversion Algorithms. Volume 234 of Operator Theory: Advances and Applications. Birkhäuser/Springer, Basel, 2014.