
Fraction-free matrix factors: New forms for LU and QR factors


Abstract

Gaussian elimination and LU factoring have been greatly studied from the algorithmic point of view, but much less from the point of view of the best output format. In this paper, we give new output formats for fraction-free LU factoring and for QR factoring. The formats and the algorithms used to obtain them are valid for any matrix system in which the entries are taken from an integral domain, not just for integer matrix systems. After discussing the new output format of LU factoring, the complexity analysis for the fraction-free algorithm and fraction-free output is given. Our new output format contains smaller entries than previously suggested forms, and it avoids the gcd computations required by some other partially fraction-free computations. As applications of our fraction-free algorithm and format, we demonstrate how to construct a fraction-free QR factorization and how to solve linear systems within a given domain.
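The fraction-free elimination underlying this kind of LU format is Bareiss's single-step algorithm, in which every division by the previous pivot is exact over the integers. A minimal Python sketch, assuming nonzero leading principal minors so no pivoting is needed:

```python
def bareiss_eliminate(a):
    """Fraction-free (Bareiss) Gaussian elimination over the integers.

    Assumes all leading principal minors are nonzero, so no pivoting
    is needed and every division below is exact.
    """
    m = [row[:] for row in a]          # work on a copy
    n = len(m)
    prev = 1                           # previous pivot, p_0 = 1
    for k in range(n - 1):
        for i in range(k + 1, n):
            for j in range(k + 1, n):
                # single-step Bareiss update; the division by the
                # previous pivot is exact (Sylvester's identity)
                m[i][j] = (m[k][k] * m[i][j] - m[i][k] * m[k][j]) // prev
            m[i][k] = 0
        prev = m[k][k]
    return m
```

In the result, the diagonal entries are the leading principal minors of the input, so the last diagonal entry is the determinant.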
... This strategy, however, led to the entries in the L and U matrices becoming very large, and an alternative form was presented in Zhou and Jeffrey [26], and is described below. Similarly, fraction-free Gram-Schmidt orthogonalization and QR factorization were studied in Erlingsson et al. [10] and Zhou and Jeffrey [26]. Further extensions have addressed fraction-free full-rank factoring of non-invertible matrices and fraction-free computation of the Moore-Penrose inverse [15]. ...
... In its standard form, this decomposition requires algebraic extensions to the domain of A, but a fraction-free form is possible. The modified form given in [26] is QD⁻¹R, and is proved below in Theorem 15. In [10], an exact-division algorithm for a fraction-free Gram-Schmidt orthogonal basis for the columns of a matrix A was given, but a complete fraction-free decomposition was not considered. ...
... In [10], an exact-division algorithm for a fraction-free Gram-Schmidt orthogonal basis for the columns of a matrix A was given, but a complete fraction-free decomposition was not considered. We now show that the algorithms in [10] and in [26] both lead to a systematic common factor in their results. We begin by considering a fraction-free form of the Cholesky decomposition of a symmetric matrix. ...
Article
Full-text available
We consider LU and QR matrix decompositions using exact computations. We show that fraction-free Gauß–Bareiss reduction leads to triangular matrices having a non-trivial number of common row factors. We identify two types of common factors: systematic and statistical. Systematic factors depend on the reduction process, independent of the data, while statistical factors depend on the specific data. We relate the existence of row factors in the LU decomposition to factors appearing in the Smith–Jacobson normal form of the matrix. For statistical factors, we identify some of the mechanisms that create them and give estimates of the frequency of their occurrence. Similar observations apply to the common factors in a fraction-free QR decomposition. Our conclusions are tested experimentally.
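The common row factors this abstract describes can be observed directly by taking the gcd of each row of a fraction-free triangular factor; `row_common_factors` is an illustrative helper name, not from the paper:

```python
from math import gcd
from functools import reduce

def row_common_factors(m):
    # gcd of the entries in each row of an integer matrix;
    # a value > 1 signals a common row factor that could be divided out
    return [reduce(gcd, (abs(x) for x in row), 0) for row in m]
```

Applied to the U factor from a Bareiss reduction, any entry greater than 1 in the returned list exhibits a systematic or statistical factor of the kind studied in the article.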
... The first proposals were based on inflating the initial data until all divisions were guaranteed exact, see for example Lee and Saunders [12]; Nakos et al. [14]; Corless and Jeffrey [4]. This strategy, however, led to the entries in the L and U matrices becoming very large, and an alternative form was presented in Zhou and Jeffrey [18]. Fraction-free Gram-Schmidt orthogonalization and QR factorization were similarly studied by Erlingsson et al. [5]; Zhou and Jeffrey [18]. ...
... This strategy, however, led to the entries in the L and U matrices becoming very large, and an alternative form was presented in Zhou and Jeffrey [18]. Fraction-free Gram-Schmidt orthogonalization and QR factorization were similarly studied by Erlingsson et al. [5]; Zhou and Jeffrey [18]. Further extensions have addressed fraction-free full-rank factoring of noninvertible matrices and fraction-free computation of the Moore-Penrose inverse [10]. ...
... A fraction-free QR decomposition, which is based on the LD⁻¹U decomposition, was given in Zhou and Jeffrey [18]. In this section, we present a refined version of this algorithm (see Theorem 12). ...
... It is clear that index reduction algorithms based on pure symbolic matrix factorization represent a viable alternative to classic SA techniques. However, the feasibility of symbolic matrix factorization is strongly tied to the performance of the symbolic computation kernel and its capabilities [25]. Large expressions can lead to severe performance degradation of the kernel. ...
... This toolkit, integrated into the Maple® environment, is dedicated to symbolic linear algebra tasks. It builds upon the original research outlined in [25] and encompasses a collection of functionalities for symbolic full-pivoting LU, FFLU, QR, and GJ factorizations. Importantly, the LAST package is intended to be used in tandem with the previously presented LEM package [40], contributing to the mitigation of expression swell. ...
Article
Full-text available
We present an algorithm for the index reduction of first-order differential-algebraic equations. The proposed approach can be applied to generic differential-algebraic equations and exploits neither a priori knowledge nor ad hoc techniques to leverage the specific formulation of the system. The index reduction is performed only by using symbolic manipulation and linear algebra techniques. It is based on the successive separation of the differential and algebraic equations of the system and the subsequent differentiation of the algebraic part. Improved symbolic matrix factorization is used to perform the differential-algebraic equations partitioning, ensure numerical stability, and limit the expression swell of the reduced-index system. The effectiveness of the algorithm is validated through symbolic-numerical examples on a wide range of systems, including physical systems, engineering applications, and "artificial" differential-algebraic equations with specific properties. The proposed symbolic index reduction algorithm is implemented in Maple as part of an open-source library.
... We note that Zhou and Jeffrey [20] were the first to present an exact thin QR factorization called fraction-free QR. The contributions highlighted above significantly expand on their work, as we develop both thin and standard QR factorizations, prove that they have properties analogous to traditional QR factorizations, and show how to use them to solve both full-rank and rank-deficient linear systems. ...
... The contributions highlighted above significantly expand on their work, as we develop both thin and standard QR factorizations, prove that they have properties analogous to traditional QR factorizations, and show how to use them to solve both full-rank and rank-deficient linear systems. Thus, another contribution of this work is to generalize the ideas of [20] and relate them to the broader linear algebra body of knowledge. ...
Article
Full-text available
QR factorization is a key tool in mathematics, computer science, operations research, and engineering. This paper presents the roundoff-error-free (REF) QR factorization framework comprising integer-preserving versions of the standard and the thin QR factorizations and associated algorithms to compute them. Specifically, the standard REF QR factorization factors a given matrix A ∈ ℤ^(m×n) as A = QDR, where Q ∈ ℤ^(m×m) has pairwise orthogonal columns, D is a diagonal matrix, and R ∈ ℤ^(m×n) is an upper trapezoidal matrix; notably, the entries of Q and R are integral, while the entries of D are reciprocals of integers. In the thin REF QR factorization, Q ∈ ℤ^(m×n) also has pairwise orthogonal columns, and R ∈ ℤ^(n×n) is also an upper triangular matrix. In contrast to traditional (i.e., floating-point) QR factorizations, every operation used to compute these factors is integral; thus, REF QR is guaranteed to be an exact orthogonal decomposition. Importantly, the bit-length of every entry in the REF QR factorizations (and within the algorithms to compute them) is bounded polynomially. Notable applications of our REF QR factorizations include finding exact least squares or exact basic solutions, x ∈ ℚ^n, to any given full column rank or rank deficient linear system Ax = b, respectively. In addition, our exact factorizations can be used as a subroutine within exact and/or high-precision quadratic programming. Altogether, REF QR provides a framework to obtain exact orthogonal factorizations of any rational matrix (as any rational/decimal matrix can be easily transformed into an integral matrix).
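The existence of an integral Q with pairwise orthogonal columns can be illustrated with a naive integer-preserving Gram-Schmidt sketch. This omits the exact divisions that REF QR uses to keep entries polynomially bounded, so entries grow quickly; the function name is illustrative, not from the paper:

```python
def integer_orthogonalize(cols):
    """Integer-preserving Gram-Schmidt on a list of integer vectors.

    Returns vectors that are pairwise orthogonal and integral: each new
    vector is scaled by p.p before the p-component is subtracted, so no
    division is ever needed.  Unlike REF QR, no exact divisions are
    performed to control entry growth.
    """
    def dot(u, v):
        return sum(x * y for x, y in zip(u, v))

    qs = []
    for a in cols:
        q = list(a)
        for p in qs:
            pp, pq = dot(p, p), dot(p, q)
            # q := pp*q - pq*p is integral and orthogonal to p,
            # and stays orthogonal to all earlier vectors
            q = [pp * x - pq * y for x, y in zip(q, p)]
        qs.append(q)
    return qs
```

For example, the columns (1,1,0) and (1,0,1) orthogonalize to (1,1,0) and (1,-1,2), whose dot product is zero.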
... Although known earlier, fraction-free methods for exact matrix computations became popular after Bareiss's study of Gaussian elimination [1]. Extensions to related topics, such as LU factoring, were considered in [9,10,15]. Gram-Schmidt orthogonalization and QR factoring were studied by [3], under the more descriptive name of exact division. Recent studies have looked at extending fraction-free LU factoring to non-invertible matrices [7] and rank profiling [2], and more generally to areas such as the Euclidean algorithm, and the Berlekamp-Massey algorithm [8]. ...
... A fraction-free (exact division) algorithm for Gram-Schmidt orthogonalization was described by [3]. An algorithm based on LU factoring has been described in [13,15]. The two approaches yield the same results. ...
Article
We consider exact matrix decomposition by Gauss-Bareiss reduction. We investigate two aspects of the process: common row and column factors and the influence of pivoting strategies. We identify two types of common factors: systematic and statistical. Systematic factors depend on the process, while statistical factors depend on the specific data. We show that existing fraction-free QR (Gram-Schmidt) algorithms create a common factor in the last column of Q. We relate the existence of row factors in LU decomposition to factors appearing in the Smith normal form of the matrix. For statistical factors, we identify mechanisms and give estimates of the frequency. Our conclusions are tested by experimental data. For pivoting strategies, we compare the sizes of output factors obtained by different strategies. We also comment on timing differences.
... This package is a Maple® toolbox for symbolic linear algebra. It is based on the original works in [1,28] and offers a set of routines for symbolic full-pivoting LU, a Fraction-Free LU (FFLU), QR decomposition, and Gauss-Jordan (GJ) factorizations. The package LAST is designed to be used in conjunction with the LEM package [22] to limit the expression swell. ...
Conference Paper
In this work, a framework for the automatic symbolic index reduction and numerical integration of generic differential-algebraic equation systems is presented. The proposed approach does not exploit any a priori knowledge of the specific differential-algebraic equation system formulation. The index reduction is performed only with the aid of linear algebra techniques. Hierarchical representation of expressions is conveniently used to limit expression swell and ensure the numerical stability of the solution. The effectiveness of the algorithm is validated through symbolic and numerical experiments on a multi-body model.
... The Last package, a Maple® toolbox for symbolic linear algebra [19], builds upon the work presented in [40]. It offers routines for symbolic full-pivoting LU, Fraction-Free LU (FFLU), QR decomposition, and Gauss-Jordan (GJ) factorizations. ...
Article
Full-text available
Structural mechanics is pivotal in comprehending how structures respond to external forces and imposed displacements. Typically, the analysis of structures is performed numerically using the direct stiffness method, which is an implementation of the finite element method. This method is commonly associated with the numerical solution of large systems of equations. However, the underlying theory can also be conveniently used to perform the analysis of structures either symbolically or in a hybrid symbolic-numerical fashion. This approach is useful to mitigate the computational burden as the obtained partial or full symbolic solution can be simplified and used to generate lean code for efficient simulations. Nonetheless, the symbolic direct stiffness method is also useful for model reduction purposes, as it allows the derivation of small-scale models that can be used for diminishing simulation time. Despite the mentioned advantages, symbolic computation carries intrinsically complex operations. In particular, the symbolic solution of large linear systems of equations is hard to compute, and it may not always be available due to software capabilities. This paper introduces a toolbox named TrussMe-Fem, whose implementation is based on the direct stiffness method. TrussMe-Fem leverages Maple®'s symbolic computation and Matlab®'s numerical capabilities for symbolic and hybrid symbolic-numerical analyses and solutions of structures. Efficient code generation is also possible by exploiting the simplification of the problem's expressions. The challenges posed by symbolic computation on the solution of large linear systems are addressed by introducing novel routines for the symbolic matrix factorization with the hierarchical representation of large expressions. For this purpose, the TrussMe-Fem toolbox optionally uses the Lem and Last Maple® packages, which are also available as open-source software.
Article
We describe ECLES (Editing by Constrained LEast Squares), a general method for interactive local editing of objects that are defined by a list of parameters subject to a set of linear or affine constraints. The method is intended for situations where each edit action affects only a small set of the parameters; some of which (the “anchors”) are to be set to new given values, whereas the rest (the “derived” ones) are to be modified so as to preserve the constraints. We use exact integer arithmetic in order to reliably determine solvability, to detect and eliminate redundancies in the set of constraints, and to ensure that the solution exactly satisfies the constraints. We also use constrained least squares to choose a suitable solution when the constraints allow multiple solutions. Unlike the usual finite element approach, the method allows editing of any set of anchors with any sufficiently large set of derived parameters. As an example of application, we show how the method can be used to edit smooth (C¹) deformations of geometric mesh models.
Article
Full-text available
This report brings together many different aspects of Gauss elimination. The basic Gauss elimination (GE) algorithm is a fundamental tool of linear algebra computation for solving systems, computing determinants and determining the rank of a matrix. All of these are discussed in varying contexts. These include different arithmetic or algebraic settings, such as integer arithmetic or polynomial rings, as well as conventional real (floating-point) arithmetic. These have effects on both accuracy and complexity analyses of the algorithm. These, too, are covered here. The impact of modern parallel computer architecture on GE is also included. Finally, GE is considered within the context of 'noisy' matrices. The effect of the noise in matrix entries on the effective rank of the matrix is the central aspect considered here.
Article
Full-text available
The Turing factorization is a generalization of the standard LU factoring of a square matrix. Among other advantages, it allows us to meet demands that arise in a symbolic context. For a rectangular matrix A, the generalized factors are written PA = LDUR, where R is the row-echelon form of A. For matrices with symbolic entries, the LDUR factoring is superior to the standard reduction to row-echelon form, because special case information can be recorded in a natural way. Special interest attaches to the continuity properties of the factors, and it is shown that conditions for discontinuous behaviour can be given using the factor D. We show that this is important, for example, in computing the Moore-Penrose inverse of a matrix containing symbolic entries. We also give a separate generalization of LU factoring to fraction-free Gaussian elimination.
Article
Recent methods for handling matrix problems over an integral domain are investigated from a unifying point of view. Emphasized are symbolic matrix inversion and numerically exact methods for solving Ax = b. New proofs are given for the theory of the multistep method. A proof for the existence and an algorithm for the exact solution of Tx = b, where T is a finite Toeplitz matrix, is given. This algorithm reduces the number of required single precision multiplications by a factor of order n over the corresponding Gaussian elimination method. The use of residue arithmetic is enhanced by a new termination process. The matrix inversion problem with elements in the ring of polynomials is reduced to operations over a Galois field. It is shown that interpolation methods are equivalent to congruence methods with linear modulus and that the Chinese remainder theorem over GF(x − p_k) is the Lagrange interpolation formula. With regard to the numerical problem of exact matrix inversion, the One- and Two-step Elimination methods are critically compared with the methods using modular or residue arithmetic. Formulas for estimating maximum requirements for storage and timing of the salient parts of the algorithms are developed. The results of a series of recent tests, using existing codes, standard matrices and matrices with random elements are reported and summarized in tabular form. The paper concludes that the two-step elimination method be used for the inversion problem of numeric matrices, and in particular when a black-box approach to the matrix inversion problem is attempted such as in commercial time sharing systems. It is recommended that the inversion problem of matrices with elements over the polynomial ring be reduced to the numeric inversion problem with subsequent interpolation. An extensive Reference list is added.
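As a baseline for the exact solution methods surveyed here, a linear system with integer or rational entries can be solved exactly with rational-arithmetic Gaussian elimination; the fraction-free methods in this literature exist precisely to avoid the rational arithmetic this sketch relies on. A minimal version, assuming a square nonsingular A:

```python
from fractions import Fraction

def exact_solve(A, b):
    """Solve Ax = b exactly over the rationals.

    Assumes A is square and nonsingular; uses Fraction arithmetic so the
    result is exact, at the cost of gcd work hidden inside Fraction.
    """
    n = len(A)
    # augmented matrix [A | b] with exact rational entries
    M = [[Fraction(x) for x in row] + [Fraction(b[i])]
         for i, row in enumerate(A)]
    for k in range(n):
        # find a nonzero pivot (one exists since A is nonsingular)
        p = next(i for i in range(k, n) if M[i][k] != 0)
        M[k], M[p] = M[p], M[k]
        for i in range(k + 1, n):
            f = M[i][k] / M[k][k]
            for j in range(k, n + 1):
                M[i][j] -= f * M[k][j]
    # back substitution
    x = [Fraction(0)] * n
    for i in range(n - 1, -1, -1):
        s = sum(M[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (M[i][n] - s) / M[i][i]
    return x
```

For instance, 2x + y = 3 and x + 3y = 5 give the exact solution x = 4/5, y = 7/5.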
Book
Computer algebra systems are now ubiquitous in all areas of science and engineering. This highly successful textbook, widely regarded as the 'bible of computer algebra', gives a thorough introduction to the algorithmic basis of the mathematical engine in computer algebra systems. Designed to accompany one- or two-semester courses for advanced undergraduate or graduate students in computer science or mathematics, its comprehensiveness and reliability has also made it an essential reference for professionals in the area. Special features include: detailed study of algorithms including time analysis; implementation reports on several topics; complete proofs of the mathematical underpinnings; and a wide variety of applications (among others, in chemistry, coding theory, cryptography, computational logic, and the design of calendars and musical scales). A great deal of historical information and illustration enlivens the text. In this third edition, errors have been corrected and much of the Fast Euclidean Algorithm chapter has been renovated.
Article
A method is developed which permits integer-preserving elimination in systems of linear equations, AX = B, such that (a) the magnitudes of the coefficients in the transformed matrices are minimized, and (b) the computational efficiency is considerably increased in comparison with the corresponding ordinary (single-step) Gaussian elimination. The algorithms presented can also be used for the efficient evaluation of determinants and their leading minors. Explicit algorithms and flow charts are given for the two-step method. The method should also prove superior to the widely used fraction-producing Gaussian elimination when A is nearly singular.
Article
An improved algorithm for computing the minors of a (large) sparse matrix of polynomials is described, with emphasis on efficiency and optimal ordering. A possible application to polynomial resultant computation is discussed.
Article
This paper extends the ideas behind Bareiss's fraction-free Gauss elimination algorithm in a number of directions. First, in the realm of linear algebra, algorithms are presented for fraction-free LU "factorization" of a matrix and for fraction-free algorithms for both forward and back substitution. These algorithms are valid not just for integer computation but also for any matrix system where the entries are taken from a unique factorization domain such as a polynomial ring. The second part of the paper introduces the application of the fraction-free formulation to resultant algorithms for solving systems of polynomial equations. In particular, the use of fraction-free polynomial arithmetic and triangularization algorithms in computing the Dixon resultant of a polynomial system is discussed.
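The fraction-free LU idea in the LD⁻¹U output format discussed in this literature can be sketched as follows: run Bareiss elimination, record each column as it is about to be zeroed as a column of L, and build D from consecutive pivot products. This is an illustration assuming integer entries and nonzero leading principal minors (no pivoting), not a reproduction of any one paper's algorithm:

```python
from fractions import Fraction

def fflu(a):
    """Fraction-free LU in the L * D^{-1} * U output format.

    With pivots p_1, ..., p_n (the leading principal minors), the
    factors satisfy A = L * D^{-1} * U, where
    D = diag(p_1, p_1*p_2, ..., p_{n-1}*p_n) and L, U are integral.
    """
    m = [row[:] for row in a]
    n = len(m)
    L = [[0] * n for _ in range(n)]
    prev = 1                            # p_0 = 1
    for k in range(n):
        for i in range(k, n):
            L[i][k] = m[i][k]           # column k before it is zeroed
        for i in range(k + 1, n):
            for j in range(k + 1, n):
                # Bareiss update; division by the previous pivot is exact
                m[i][j] = (m[k][k] * m[i][j] - m[i][k] * m[k][j]) // prev
            m[i][k] = 0
        prev = m[k][k]
    U = m
    piv = [U[k][k] for k in range(n)]
    D = [piv[k] * (piv[k - 1] if k else 1) for k in range(n)]
    return L, D, U
```

The identity A = L D⁻¹ U can be checked with exact rational arithmetic, e.g. by reconstructing sum_k L[i][k] * (1/D[k]) * U[k][j] with Fraction.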