Article

A note on representations for the inverses of tridiagonal matrices

Authors:
  • Universidade Federal de Mato Grosso do Sul, Campo Grande, Brazil

Abstract

SHARE LINK: http://www.tandfonline.com/eprint/NwUsJppIFNuZu8ccKVky/full
In this note, we propose an explicit representation, based on nested sums, for the entries of the inverses of general nonsingular tridiagonal matrices. Its equivalence with other particular representations, based on combinatorial expressions or on continued fractions, is considered. In addition, an analytical representation for the entries of the finite sections of the resolvent of Jacobi matrices, in terms of the related orthogonal polynomials, is given.
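The note's nested-sum formula itself is not reproduced here, but a minimal sketch of one of the "particular representations" it is compared with, the classical closed form for the inverse entries in terms of leading and trailing principal minors, may help fix notation. The recurrences θ and φ below are the standard ones, and the test matrix is illustrative, not taken from the paper.

```python
import numpy as np

def tridiag_inverse(a, b, c):
    """Entrywise inverse of the tridiagonal matrix with diagonal a (length n),
    superdiagonal b and subdiagonal c (length n-1), via the classical
    principal-minor recurrences (theta: leading minors, phi: trailing minors)."""
    n = len(a)
    theta = np.zeros(n + 1)             # theta[k] = det of the leading k x k block
    theta[0], theta[1] = 1.0, a[0]
    for k in range(2, n + 1):
        theta[k] = a[k - 1] * theta[k - 1] - b[k - 2] * c[k - 2] * theta[k - 2]
    phi = np.zeros(n + 2)               # phi[k] = det of the trailing block starting at row k
    phi[n + 1], phi[n] = 1.0, a[n - 1]
    for k in range(n - 1, 0, -1):
        phi[k] = a[k - 1] * phi[k + 1] - b[k - 1] * c[k - 1] * phi[k + 2]
    inv = np.zeros((n, n))
    for i in range(1, n + 1):
        for j in range(1, n + 1):
            if i <= j:
                inv[i - 1, j - 1] = (-1) ** (i + j) * np.prod(b[i - 1:j - 1]) * theta[i - 1] * phi[j + 1] / theta[n]
            else:
                inv[i - 1, j - 1] = (-1) ** (i + j) * np.prod(c[j - 1:i - 1]) * theta[j - 1] * phi[i + 1] / theta[n]
    return inv

# Illustrative check on a random diagonally dominant (hence nonsingular) example
rng = np.random.default_rng(0)
n = 6
a, b, c = 4.0 + rng.random(n), rng.random(n - 1), rng.random(n - 1)
A = np.diag(a) + np.diag(b, 1) + np.diag(c, -1)
assert np.allclose(tridiag_inverse(a, b, c), np.linalg.inv(A))
```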


... Several methods and techniques have been developed to solve linear difference equations with variable coefficients (see, for instance, [1,7,12,14–16,20], and references therein). This topic continues to attract much attention due to its many applications in mathematics and the applied sciences (see, for instance, [1,2,6,7]). In [20], Popenda gave explicit formulas for the general solutions of homogeneous and non-homogeneous second-order linear difference equations, with arbitrarily varying coefficients, using a direct computation. ...
... He extends his results on the explicit solution of a second-order linear homogeneous equation to a linear difference equation of unbounded order with variable coefficients in [15], and to the nonhomogeneous difference equation with variable coefficients in [14]. In addition to these studies, the solutions of second-order linear homogeneous difference equations with variable coefficients were recently exhibited in [1,2] under representations based on nested-sum and determinantal approaches. ...
... Recently, for linear difference equations of the second order with variable coefficients, expressions of type (11) and (12) for the determinantal solutions have also been described using nested sums in [1,2]. To make this connection with the determinantal solutions of Equation (4), let us consider the following coefficients ...
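As a small illustration of the objects discussed in these excerpts, the sketch below iterates a second-order linear difference equation x_{n+1} = a(n) x_n + b(n) x_{n-1} with hypothetical variable coefficients and checks the result against the product of 2×2 companion (transfer) matrices; the nested-sum and determinantal representations of [1,2] themselves are not reproduced.

```python
import numpy as np

# Illustrative (hypothetical) variable coefficients, not taken from the cited papers.
a = lambda n: 1.0 + 1.0 / (n + 1)
b = lambda n: -0.5 / (n + 2)

def iterate(x0, x1, N):
    """Direct iteration of x_{n+1} = a(n) x_n + b(n) x_{n-1}."""
    xs = [x0, x1]
    for n in range(1, N):
        xs.append(a(n) * xs[n] + b(n) * xs[n - 1])
    return xs

def via_companion(x0, x1, N):
    """Same solution obtained from products of 2x2 companion (transfer) matrices."""
    v = np.array([x1, x0], dtype=float)           # state (x_n, x_{n-1})
    out = [x0, x1]
    for n in range(1, N):
        M = np.array([[a(n), b(n)], [1.0, 0.0]])  # companion matrix of step n
        v = M @ v
        out.append(v[0])
    return out

assert np.allclose(iterate(1.0, 2.0, 10), via_companion(1.0, 2.0, 10))
```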
Article
Full-text available
In this paper we establish some explicit formulas for (q,k)-Fibonacci–Pell sequences via linear difference equations of order 2 with variable coefficients, and explore some of their new properties. More precisely, our results are based on two approaches, namely the determinantal and the nested-sums approaches, and the close relations between them. As applications, we investigate the q-analogue Cassini identities and examine a pair of Rogers–Ramanujan type identities.
... We recall that, when the coefficients a_j(n) (0 ≤ j ≤ r − 1) of the linear recurrence equation (1) are constants, namely a_j(n) = a_j, it is well known in the literature that the explicit solutions are expressed either in a combinatorial form, in terms of the coefficients a_j (0 ≤ j ≤ r − 1), or in an analytic form, using the roots of the associated characteristic polynomial P(z) = z^r − a_0 z^{r−1} − ⋯ − a_{r−1} (see, for example, [11,12], and references therein). However, when the coefficients a_j(n) (0 ≤ j ≤ r − 1) are variable, several methods have been elaborated in the literature for exhibiting formulas for the solutions of Eq. (1). For instance, Kittapa gave solutions of higher-order homogeneous linear difference equations with variable coefficients in terms of a single matrix determinant (see [14]). ...
... Malik established a closed form of the solutions of (1), in terms of the coefficients, for homogeneous linear difference equations with variable coefficients (see [17]). In [1], solutions of second-order homogeneous linear difference equations are studied via the determinantal approach and nested sums, whereas some explicit expressions for the solutions of homogeneous linear difference equations with periodic coefficients are established in [4]. ...
... and θ_j^(1), γ_j^(1) are given as in (12)–(14). Second, suppose that λ_1 = λ_2 = λ. ...
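For the constant-coefficient case recalled in these excerpts, here is a minimal sketch (with illustrative coefficients, assuming the two characteristic roots are distinct) of the analytic solution obtained from the roots of the characteristic polynomial; the repeated-root case λ_1 = λ_2 = λ needs the extra term n λ^n and is not covered.

```python
import numpy as np

# Sketch for r = 2 with constant coefficients: x_{n+2} = a0 x_{n+1} + a1 x_n.
# If P(z) = z^2 - a0 z - a1 has distinct roots l1 != l2, then
#     x_n = alpha * l1**n + beta * l2**n,
# with alpha, beta fixed by the initial values x_0, x_1.
a0, a1 = 1.0, 1.0              # illustrative (Fibonacci-type) coefficients
x0, x1 = 0.0, 1.0

l1, l2 = np.roots([1.0, -a0, -a1])                       # roots of the characteristic polynomial
alpha, beta = np.linalg.solve([[1.0, 1.0], [l1, l2]], [x0, x1])
analytic = lambda n: alpha * l1**n + beta * l2**n

x = [x0, x1]
for n in range(20):
    x.append(a0 * x[-1] + a1 * x[-2])                    # direct iteration
assert np.allclose(x, [analytic(n) for n in range(22)])
```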
Article
Full-text available
In the present study, we are interested in solving the nonhomogeneous second-order linear difference equation with periodic coefficients of period p ≥ 2, by bringing in two new approaches that enable us to provide both analytic and combinatorial solutions to this family of equations. First, we get around the problem by converting this kind of equation into an equivalent family of nonhomogeneous linear difference equations of order p with constant coefficients. Second, we propose new expressions for the solutions of this family of equations, using our techniques for calculating the powers of products of companion matrices and some properties of generalized Fibonacci sequences. The special case p = 2 is studied in detail, and to enhance the effectiveness of our approaches, some numerical examples are discussed.
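A minimal sketch of the reduction idea for p = 2, restricted to the homogeneous case and with hypothetical coefficients (the paper's general construction via powers of products of companion matrices is not reproduced): the one-period transfer matrix Q is constant, so by Cayley–Hamilton the subsampled sequence obeys a constant-coefficient recurrence with coefficients tr(Q) and −det(Q).

```python
import numpy as np

# x_{n+1} = a(n) x_n + b(n) x_{n-1} with coefficients of period 2 (hypothetical values).
a = {0: 2.0, 1: -1.0}
b = {0: 0.5, 1: 3.0}

x = [1.0, 1.0]                            # x_0, x_1
for n in range(1, 41):
    x.append(a[n % 2] * x[n] + b[n % 2] * x[n - 1])

M = lambda n: np.array([[a[n % 2], b[n % 2]], [1.0, 0.0]])   # companion matrix of step n
Q = M(2) @ M(1)                           # transfer matrix over one period (constant)
t, d = np.trace(Q), np.linalg.det(Q)

# Cayley-Hamilton: Q^2 = t Q - d I, hence x_{2k+3} = t x_{2k+1} - d x_{2k-1}.
y = [x[2 * k + 1] for k in range(20)]     # subsample at odd indices
assert all(np.isclose(y[k + 1], t * y[k] - d * y[k - 1]) for k in range(1, 19))
```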
... The inversion of tridiagonal matrices, and in particular of Jacobi matrices, has attracted the attention of several researchers in both past and recent times [1–13], since it appears in numerous problems of both theoretical and applied mathematics. It is well known that a Jacobi matrix is symmetric and plays a fundamental role in the theory of orthogonal polynomials. ...
Article
Full-text available
We show that using Dunford-Taylor’s integral, a classical tool of functional analysis, it is possible to derive an expression for the inverse of a general non-singular complex-valued tridiagonal matrix. The special cases of Jacobi’s symmetric and Toeplitz (in particular symmetric Toeplitz) matrices are included. The proposed method does not require the knowledge of the matrix eigenvalues and relies only on the relevant invariants which are determined, in a computationally effective way, by means of a dedicated recursive procedure. The considered technique has been validated through several test cases with the aid of the computer algebra program Mathematica©.
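A generic numerical illustration of the underlying Dunford–Taylor (Cauchy) integral, discretized with the trapezoid rule on a circle that encloses the spectrum but not the origin. This is not the paper's recursive, eigenvalue-free procedure; the test matrix and contour below are illustrative and rely on a simple Gershgorin bound.

```python
import numpy as np

# A^{-1} = (1 / (2*pi*i)) * integral over Gamma of (1/z) (z I - A)^{-1} dz,
# valid when Gamma encloses spec(A) and 1/z is analytic inside Gamma (0 outside).
n = 8
A = np.diag(4.0 * np.ones(n)) + np.diag(-np.ones(n - 1), 1) + np.diag(-np.ones(n - 1), -1)

# Gershgorin: spec(A) is contained in [2, 6]; the circle |z - 4| = 3 encloses it, 0 stays outside.
center, radius, m = 4.0, 3.0, 256
theta = 2 * np.pi * np.arange(m) / m
z = center + radius * np.exp(1j * theta)      # contour points
dz = 1j * radius * np.exp(1j * theta)         # dz/dtheta

Ainv = np.zeros((n, n), dtype=complex)
for zk, dzk in zip(z, dz):                    # trapezoid rule on the periodic integrand
    Ainv += (1.0 / zk) * np.linalg.inv(zk * np.eye(n) - A) * dzk
Ainv /= 1j * m                                # combines the 2*pi/m weight with 1/(2*pi*i)

assert np.allclose(Ainv.real, np.linalg.inv(A)) and np.allclose(Ainv.imag, 0)
```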
... Block-tridiagonal matrices [1,2] generalize the well-known tridiagonal ones [1,3–5]. These block matrices are of use in both theory and applications [6–9]. ...
Article
After a short overview, improvements (based on the Kronecker product) are proposed for the eigenvalues of (N × N) block-Toeplitz tridiagonal (block-TT) matrices with (K × K) matrix entries, which are common in applications. Some extensions of the spectral properties of Toeplitz-tridiagonal matrices are pointed out. The eigenvalues of diagonalizable symmetric and skew-symmetric block-TT matrices are studied. Moreover, if a certain matrix square root is well defined, it is proved that every block-TT matrix with commuting matrix entries is isospectral to a related symmetric block-TT one. Further insight into the eigenvalues of hierarchical Hermitian block-TT matrices, of use in the solution of PDEs, is also achieved.
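For orientation, a minimal sketch (a generic construction assumed here, not the paper's spectral results) of how a block-Toeplitz tridiagonal matrix is assembled from its K×K blocks with Kronecker products; the blocks below are illustrative.

```python
import numpy as np

def block_tt(A, B, C, N):
    """(N*K) x (N*K) block-TT matrix with blocks A on the block diagonal,
    B on the block superdiagonal and C on the block subdiagonal."""
    S = np.diag(np.ones(N - 1), 1)                 # N x N upper shift matrix
    return np.kron(np.eye(N), A) + np.kron(S, B) + np.kron(S.T, C)

A = np.array([[2.0, 1.0], [1.0, 3.0]])             # illustrative 2 x 2 blocks
B = np.array([[0.0, -1.0], [0.0, 0.0]])
C = B.T
T = block_tt(A, B, C, N=4)                         # 8 x 8 block-TT matrix
assert np.allclose(T[0:2, 2:4], B) and np.allclose(T[2:4, 0:2], C)
```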
Article
In this work, we explore an extension of the Omega calculus in the context of matrix analysis introduced recently by Neto [Matrix analysis and Omega calculus. SIAM Rev. 2020;62(1):264–280]. We obtain Omega representations of analytic functions of three important classes of matrices: companion, tridiagonal, and triangular. Our representation recovers the main results of Chen and Louck [The combinatorial power of the companion matrix. Linear Algebra Appl. 1996;232:261–278] on the powers of the companion matrix. Furthermore, we generalize previous work on the powers of tridiagonal matrices due to Gutiérrez–Gutiérrez in [Powers of tridiagonal matrices with constant diagonals. Appl Math Comput. 2008;206(2):885–891], Öteleş and Akbulak [Positive integer powers of certain complex tridiagonal matrices. Appl Math Comput. 2013;219(21):10448–10455], and triangular matrices following Shur [A simple closed form for triangular matrix powers. Electron J Linear Algebra. 2011;22:1000–1003].
Article
Full-text available
Block tridiagonal matrices arise in applied mathematics, physics, and signal processing. Many applications require knowledge of the eigenvalues and eigenvectors of block tridiagonal matrices. In this paper, we derive explicit formulas for the characteristic polynomials and eigenvalues of a class of block tridiagonal matrices. We apply the results to determine the characteristic polynomial of some block Toeplitz symmetric tridiagonal matrices and give their eigenvalues explicitly. In particular, when the blocks are diagonal matrices, the eigenvalues are calculated explicitly.
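A hedged sketch of the diagonal-block case mentioned at the end of the abstract: with diagonal blocks, the block-Toeplitz tridiagonal matrix decouples into K scalar tridiagonal Toeplitz matrices, whose eigenvalues are known in closed form. The sizes and entries below are illustrative, not the paper's examples.

```python
import numpy as np

# With diagonal blocks D = diag(d) on the block diagonal and E = diag(e) on both
# block off-diagonals, the eigenvalues are d_m + 2 e_m cos(k*pi/(N+1)),
# m = 1..K, k = 1..N (one scalar tridiagonal Toeplitz matrix per diagonal position).
N, K = 5, 3
d = np.array([4.0, 6.0, 9.0])                       # hypothetical diagonal of D
e = np.array([1.0, 0.5, 2.0])                       # hypothetical diagonal of E

S = np.diag(np.ones(N - 1), 1)                      # N x N upper shift
T = np.kron(np.eye(N), np.diag(d)) + np.kron(S + S.T, np.diag(e))

explicit = np.sort([d[m] + 2 * e[m] * np.cos(k * np.pi / (N + 1))
                    for m in range(K) for k in range(1, N + 1)])
assert np.allclose(np.sort(np.linalg.eigvalsh(T)), explicit)
```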
Article
SHARE LINK: https://www.tandfonline.com/eprint/RyiHkpFibVkCJQrHruKX/full
Abstract: The nice inversion properties of Toeplitz–Hessenberg matrices arising from a Hessenbergian representation of the Catalan numbers encourage us to provide explicit inversion formulas (in terms of basic arithmetical operations involving entries of the original matrix) for nonsingular Toeplitz–Hessenberg matrices. Our approach is based on an elementary matrix inflation method, combined with the nested-sum formulas. These explicit inversion formulas are then extended to apply to the entries of the inverse of every nonsingular properly Hessenberg matrix.
Article
To improve on the shortcomings observed in recently introduced symbolic algorithms for related matrices, a reliable numerical solver is proposed for computing the solution of the matrix linear equation. The coefficient is a nonsingular bordered tridiagonal matrix. Its particular structure is exploited through an incomplete or full Givens reduction, depending on the singularity of the associated tridiagonal part. An adapted back substitution and the Sherman–Morrison formula can then be applied. In particular, the inverse of the matrix is computed. Moreover, for a wide range of such matrices, the solution of the corresponding vector linear equation can be computed efficiently. Numerical comparisons illustrate the results.
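A minimal sketch of the Sherman–Morrison step only, assuming the bordered matrix has been written as a rank-one update A + u vᵀ of a nonsingular tridiagonal matrix A; the paper's Givens reduction and its handling of a singular tridiagonal part are not reproduced, and all data below are random illustrative values.

```python
import numpy as np

# (A + u v^T)^{-1} f = y - (v^T y / (1 + v^T z)) z,  with y = A^{-1} f, z = A^{-1} u.
rng = np.random.default_rng(1)
n = 6
A = np.diag(4 + rng.random(n)) + np.diag(rng.random(n - 1), 1) + np.diag(rng.random(n - 1), -1)
u, v, f = rng.random(n), rng.random(n), rng.random(n)

y = np.linalg.solve(A, f)                 # in practice: an O(n) tridiagonal solver
z = np.linalg.solve(A, u)
x = y - (v @ y) / (1.0 + v @ z) * z       # Sherman-Morrison update (assumes 1 + v^T z != 0)

assert np.allclose((A + np.outer(u, v)) @ x, f)
```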
Article
Full-text available
We give explicit inverses of tridiagonal 2-Toeplitz and 3-Toeplitz matrices which generalize some well-known results concerning the inverse of a tridiagonal Toeplitz matrix.
Article
Full-text available
In this paper some results are reviewed concerning the characterization of inverses of symmetric tridiagonal and block tridiagonal matrices as well as results concerning the decay of the elements of the inverses. These results are obtained by relating the elements of inverses to elements of the Cholesky decompositions of these matrices. This gives explicit formulas for the elements of the inverse and gives rise to stable algorithms to compute them. These expressions also lead to bounds for the decay of the elements of the inverse for problems arising from discretization schemes.
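A small numerical illustration of the decay phenomenon for a symmetric, diagonally dominant tridiagonal matrix; the Cholesky-based formulas and the explicit bounds of the paper are not reproduced, and the constant-coefficient test matrix is illustrative.

```python
import numpy as np

# For this M-matrix-type example, |(A^{-1})_{ij}| decays roughly like rho**|i-j|
# away from the diagonal for some 0 < rho < 1.
n = 20
A = np.diag(3.0 * np.ones(n)) + np.diag(-np.ones(n - 1), 1) + np.diag(-np.ones(n - 1), -1)
Ainv = np.linalg.inv(A)

row = np.abs(Ainv[0])                       # first row of the inverse
ratios = row[1:] / row[:-1]                 # successive ratios along the row
print(ratios[:5])                           # roughly constant: geometric decay
assert np.all(row[1:] < row[:-1])           # entries shrink monotonically off the diagonal
```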
Article
Full-text available
In this work, the sign distribution of all entries of the inverses of general tridiagonal H-matrices is presented. In addition, some computable upper and lower bounds for the entries of the inverses of diagonally dominant tridiagonal matrices are obtained. Based on the sign distribution, these bounds greatly improve some well-known results due to Ostrowski (1952) [23], Shivakumar and Ji (1996) [26], Nabben (1999) [21,22], and more recently Peluso and Politi (2001) [24] and Peluso and Popolizio (2008) [25], among others. It is also shown that the inverse of a general tridiagonal matrix may be described by 2n − 2 parameters ({θ_k}_{k=2}^{n} and {φ_k}_{k=1}^{n−1}) instead of the 2n + 2 parameters used by El-Mikkawy (2004) [3], El-Mikkawy and Karawia (2006) [4] and Huang and McColl (1997) [10]. Based on these results, a new symbolic algorithm for finding the inverse of a tridiagonal matrix without imposing any restrictive conditions is presented, which improves some recent results. Finally, several applications to preconditioning, the numerical solution of differential equations and birth–death processes, together with numerical tests, are given.
Article
Full-text available
We obtain explicit formulas for the entries of the inverse of a nonsingular and irreducible tridiagonal k-Toeplitz matrix A. The proof is based on results from the theory of orthogonal polynomials, and it is shown that the entries of the inverse of such a matrix are given in terms of Chebyshev polynomials of the second kind. We also compute the characteristic polynomial of A, which enables us to state some conditions for the existence of A^{-1}. Our results also extend known results for the case when the residue mod k of the order of A is equal to 0 or k − 1 (Numer. Math., 10 (1967), pp. 153–161).
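A hedged sketch of the simplest special case (an ordinary symmetric tridiagonal Toeplitz matrix, i.e. k = 1), where the inverse entries are indeed given by Chebyshev polynomials of the second kind; the general k-Toeplitz formulas of the paper are not reproduced, and the matrix below is illustrative.

```python
import numpy as np

# For T with diagonal a and off-diagonals b, set x = a / (2b).  Then
#     (T^{-1})_{ij} = (-1)**(i+j) * U_{i-1}(x) * U_{n-j}(x) / (b * U_n(x)),  i <= j,
# where U_m is the Chebyshev polynomial of the second kind (and symmetry gives i > j).
def cheb_u(m, x):
    u_prev, u = 1.0, 2.0 * x                 # U_0, U_1
    if m == 0:
        return u_prev
    for _ in range(m - 1):
        u_prev, u = u, 2.0 * x * u - u_prev  # three-term recurrence
    return u

n, a, b = 7, 3.0, 1.0
x = a / (2 * b)
T = np.diag(a * np.ones(n)) + np.diag(b * np.ones(n - 1), 1) + np.diag(b * np.ones(n - 1), -1)

Tinv = np.empty((n, n))
for i in range(1, n + 1):
    for j in range(1, n + 1):
        p, q = min(i, j), max(i, j)          # the formula is symmetric in i, j
        Tinv[i - 1, j - 1] = (-1) ** (i + j) * cheb_u(p - 1, x) * cheb_u(n - q, x) / (b * cheb_u(n, x))
assert np.allclose(Tinv, np.linalg.inv(T))
```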
Article
Full-text available
In this paper, we consider a general tridiagonal matrix and give explicit formulas for the elements of its inverse. For this purpose, starting from the usual continued fraction, we define the backward continued fraction of a real number and give some basic results on backward continued fractions. We give the relationships between the usual and backward continued fractions. Then we reobtain the LU factorization and the determinant of a tridiagonal matrix. Furthermore, we give an efficient and fast computational method to obtain the elements of the inverse of a tridiagonal matrix by backward continued fractions. Comparing earlier results with ours on the elements of the inverse of a tridiagonal matrix, it is seen that our method is more convenient, efficient and fast.
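A minimal sketch of the continued-fraction viewpoint using only the standard LU pivots of a tridiagonal matrix (each pivot is a finite continued fraction in the entries, and the determinant is their product); the paper's backward continued fractions are not reproduced, and the matrix below is illustrative.

```python
import numpy as np

# LU pivots: d_1 = a_1, d_i = a_i - b_{i-1} c_{i-1} / d_{i-1}, and det(A) = d_1 * ... * d_n.
rng = np.random.default_rng(2)
n = 6
a, b, c = 4 + rng.random(n), rng.random(n - 1), rng.random(n - 1)
A = np.diag(a) + np.diag(b, 1) + np.diag(c, -1)

d = [a[0]]
for i in range(1, n):
    d.append(a[i] - b[i - 1] * c[i - 1] / d[i - 1])   # continued-fraction pivot

assert np.isclose(np.prod(d), np.linalg.det(A))
```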
Article
We solve the following physically motivated problem: to determine all finite Jacobi matrices J and corresponding indices i, j such that the Green's function ⟨e_j, (zI − J)^{−1} e_i⟩ is proportional to an arbitrary prescribed function f(z). Our approach is via probability distributions and orthogonal polynomials. We introduce what we call the auxiliary polynomial of a solution in order to factor the map (J, i, j) ↦ [⟨e_j, (zI − J)^{−1} e_i⟩] (where square brackets denote the equivalence class consisting of scalar multiples). This enables us to construct the solution set as a fibration over a connected, semi-algebraic coordinate base. The end result is a wealth of explicit constructions for Jacobi matrices. These reveal precise geometric information about the solution set, and provide the basis for new existence theorems.
Article
We consider several nested sums, and show how binomial coefficients, Stirling numbers of the second kind and Gaussian binomial coefficients can be written as nested sums. We use this to find the rate of growth for diagonals of Stirling numbers of the second kind, as well as another proof of a known identity for Gaussian binomial coefficients.
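As a small, hedged illustration of the kind of identity involved (a standard one, not necessarily the paper's exact formulation), the binomial coefficient written and evaluated as a nested sum over strictly increasing indices:

```python
from math import comb

# C(n, k) = sum over 1 <= i_1 < i_2 < ... < i_k <= n of 1,
# evaluated by recursing one summation index at a time.
def nested_binomial(n, k, lower=1):
    if k == 0:
        return 1                           # empty nest of sums
    return sum(nested_binomial(n, k - 1, i + 1) for i in range(lower, n + 1))

assert all(nested_binomial(n, k) == comb(n, k) for n in range(8) for k in range(n + 1))
```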
Article
Expansions of higher transcendental functions in a small parameter are needed in many areas of science. For certain classes of functions, this can be achieved by algebraic means. These algebraic tools are based on nested sums and can be formulated as algorithms suitable for implementation on a computer. Examples, such as expansions of generalized hypergeometric functions or Appell functions, are discussed. As a further application, we give the general solution of a two-loop integral, the so-called C-topology, in terms of multiple nested sums. In addition, we discuss some important properties of nested sums; in particular, we show that they satisfy a Hopf algebra.
Article
SHARE LINK: http://www.tandfonline.com/eprint/zyJ8tqbTqbWu6MJbnBZr/full
Abstract: Nested sums applied to a general three-term recurrence relation permit us to give compact representations of orthogonal polynomial sequences that satisfy such a linear recurrence. We illustrate this model on particular examples of classical as well as nonclassical orthogonal polynomials. Some related nontrivial combinatorial identities are also obtained.
Article
A formula for the inverse of a general tridiagonal matrix is given in terms of the principal minors.
Article
SHARE LINK: http://www.tandfonline.com/eprint/Fg7bcjuc7VXs2E4m2pCY/full
Abstract: For a large class of discrete matrix difference equations, many qualitative problems remain unsolved. The companion matrix factorization is applied here to the shift matrices associated with linear non-autonomous area-preserving maps. It permits us to introduce second-order linear difference equations, which provide a faster computation of the transition matrices than numerical algorithms based on the standard product of matrices. In addition, compact representations for the main elements of these discrete planar systems can be provided using the well-known solutions of linear difference equations. Some properties and applications of current interest are presented.
Article
In this paper we give a complete analysis for general tridiagonal matrix inversion for both non-block and block cases, and provide some very simple analytical formulae which immediately lead to closed forms for some special cases such as symmetric or Toeplitz tridiagonal matrices.
Article
The described algorithms enable one to find all solutions of parameterized linear difference equations within ΠΣ-fields, a very general class of difference fields. These algorithms can be applied to a very general class of multisums, for instance, for proving identities and simplifying expressions.
Article
This paper presents a simple algorithm for inverting nonsymmetric tridiagonal matrices that leads immediately to closed forms when they exist. Ukita's theorem is extended to characterize the class of matrices that have tridiagonal inverses.
Article
In this paper, explicit formulae for the elements of the inverse of a general tridiagonal matrix are presented by first extending results on the explicit solution of a second-order linear homogeneous difference equation with variable coefficients to the nonhomogeneous case, and then applying these extended results to a boundary value problem. A formula for the characteristic polynomial is obtained in the process. We also establish a connection between the matrix inverse and orthogonal polynomials. In addition, the case of a cyclic tridiagonal system is discussed.
Article
In this paper, we present an eigendecomposition of a tridiagonal matrix. The powers and the inverse of the tridiagonal matrix are then derived. As a consequence, we obtain some relations satisfied by the entries of the inverse and of the powers of a tridiagonal matrix.
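A hedged sketch of the classical special case (a symmetric tridiagonal Toeplitz matrix), whose eigendecomposition is explicit and immediately yields powers and the inverse; the paper's more general eigendecomposition is not reproduced, and the matrix below is illustrative.

```python
import numpy as np

# Eigenpairs of T = tridiag(b, a, b):
#     lambda_k = a + 2 b cos(k*pi/(n+1)),   v_k(j) = sqrt(2/(n+1)) sin(j*k*pi/(n+1)),
# so T^p = V diag(lambda**p) V^T and T^{-1} = V diag(1/lambda) V^T.
n, a, b = 6, 3.0, 1.0
T = np.diag(a * np.ones(n)) + np.diag(b * np.ones(n - 1), 1) + np.diag(b * np.ones(n - 1), -1)

k = np.arange(1, n + 1)
lam = a + 2 * b * np.cos(k * np.pi / (n + 1))
V = np.sqrt(2.0 / (n + 1)) * np.sin(np.outer(np.arange(1, n + 1), k) * np.pi / (n + 1))

assert np.allclose(V @ np.diag(lam**3) @ V.T, np.linalg.matrix_power(T, 3))   # powers
assert np.allclose(V @ np.diag(1.0 / lam) @ V.T, np.linalg.inv(T))            # inverse
```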
Article
In the current work, the authors present a symbolic algorithm for finding the inverse of any general nonsingular tridiagonal matrix. The algorithm is mainly based on the work presented in [Y. Huang, W.F. McColl, Analytic inversion of general tridiagonal matrices, J. Phys. A 30 (1997) 7919–7933] and [M.E.A. El-Mikkawy, A fast algorithm for evaluating nth order tridiagonal determinants, J. Comput. Appl. Math. 166 (2004) 581–584]. It removes all cases where the numeric algorithm in [Y. Huang, W.F. McColl, Analytic inversion of general tridiagonal matrices, J. Phys. A 30 (1997) 7919–7933] fails. The symbolic algorithm is suited for implementation using Computer Algebra Systems (CAS) such as MACSYMA, MAPLE and MATHEMATICA. An illustrative example is given.