Computing matrix functions by matrix Bernoulli series

Emilio Defez, Javier Ibáñez, Jesús Peinado, Pedro Alonso-Jordá, and José M. Alonso
Universitat Politècnica de València, Spain

Abstract

Bernoulli polynomials and Bernoulli numbers have been extensively used in several areas of mathematics (especially in number theory) because they appear in many mathematical formulas: in the remainder term of the Euler–Maclaurin quadrature rule, in the Taylor series expansions of the trigonometric functions tan(x), csc(x), and cot(x), and in the Taylor series expansion of the hyperbolic function tanh(x). They also appear in the well-known exact expression for the values of the Riemann zeta function at even integers.
Introduction
The computation of matrix functions has received considerable attention in recent years because of its many applications in different areas of science and technology. Among all matrix functions, the matrix exponential $e^A$, $A \in \mathbb{C}^{n\times n}$, stands out due to both its applications in the solution of systems of differential equations and the difficulty of its computation; see [1].
Among the methods proposed to approximate the matrix exponential, two families are fundamental:
those based on rational Padé approximants [2], and
those based on polynomial approximations, using either Taylor series expansions [3] or series expansions of Hermite matrix polynomials [4].
As demonstrated in recent years, polynomial approximations are generally more efficient than approximations based on Padé algorithms, since they are more accurate despite a slightly higher cost. Polynomial approximations exploit the basic scaling-and-squaring property, based on the relation
$$e^A = \left(e^{A/2^s}\right)^{2^s}.$$
For a given matrix $A$, the algorithm determines a scaling factor $s$ and computes an approximation $P_m(A/2^s)$ of $e^{A/2^s}$, so that
$$e^A \approx \left(P_m(A/2^s)\right)^{2^s}.$$
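As an illustration of this scaling-and-squaring scheme, the following MATLAB sketch applies a generic polynomial approximant supplied as a function handle Pm; the threshold theta and the norm-based choice of s are illustrative assumptions, not the selection rules of any particular algorithm.

```matlab
% Minimal sketch of scaling and squaring with a generic polynomial
% approximant Pm (function handle). Illustrative only: theta is an assumed
% bound inside which Pm is accurate enough.
function E = expm_scaling_squaring(A, Pm, theta)
    s = max(0, ceil(log2(norm(A, 1) / theta)));  % scaling factor
    E = Pm(A / 2^s);                             % approximate exp(A/2^s)
    for k = 1:s
        E = E * E;                               % undo the scaling by squaring
    end
end
```

Here Pm could be, for instance, a truncated Taylor or Bernoulli sum of fixed degree m.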
Bernoulli polynomials and Bernoulli numbers have been extensively used in several areas of mathematics, such as number theory, and they appear in many mathematical formulas: in the remainder term of the Euler–Maclaurin quadrature rule [5, p. 63], in the Taylor series expansions of the trigonometric functions tan(x), csc(x), and cot(x) [5, pp. 116-117], and in the Taylor series expansion of the hyperbolic function tanh(x) [5, p. 125]. They are also employed in the well-known exact expression for the values of the Riemann zeta function at even integers:
$$\zeta(2k) = \sum_{i \ge 1} \frac{1}{i^{2k}} = \frac{(-1)^{k-1} B_{2k} (2\pi)^{2k}}{2(2k)!}, \qquad k \ge 1.$$
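For example, taking $k = 1$ and $B_2 = 1/6$ in this identity recovers Euler's classical result:
$$\zeta(2) = \frac{(-1)^{0}\, B_2\, (2\pi)^2}{2 \cdot 2!} = \frac{\tfrac{1}{6} \cdot 4\pi^2}{4} = \frac{\pi^2}{6}.$$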
Moreover, they are also used to solve other problems, such as initial value problems [6] and boundary value problems [7]. An excellent survey on Bernoulli polynomials and their applications can be found in [8].
This paper presents a new series expansion of the matrix exponential in terms of Bernoulli matrix polynomials, which shows that polynomial approximations of the matrix exponential are, in most cases, more accurate and less computationally expensive than those based on Padé approximants.
Our proposal
On Bernoulli matrix polynomials
Bernoulli polynomials $B_m(x)$ are defined in [5, p. 588] as the coefficients of the generating function
$$g(x,t) = \frac{t\, e^{tx}}{e^t - 1} = \sum_{m \ge 0} \frac{B_m(x)}{m!}\, t^m, \qquad |t| < 2\pi, \qquad (1)$$
where $g(x,t)$ is a holomorphic function of the variable $t$ in $\mathbb{C}$, with a removable singularity at $t = 0$.
A Bernoulli polynomial $B_m(x)$ has the explicit expression
$$B_m(x) = \sum_{k=0}^{m} \binom{m}{k} B_k\, x^{m-k},$$
where the Bernoulli numbers are defined by $B_m = B_m(0)$. It follows that the Bernoulli numbers satisfy
$$\frac{z}{e^z - 1} = \sum_{k \ge 0} \frac{B_k}{k!}\, z^k, \qquad |z| < 2\pi,$$
where
$$B_k = -\sum_{i=0}^{k-1} \binom{k}{i} \frac{B_i}{k+1-i}, \qquad k \ge 1,$$
with $B_0 = 1$.
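A direct MATLAB transcription of this recurrence might look as follows (an illustrative sketch in double precision; it is not code from the paper, and symbolic arithmetic would be preferable for large k):

```matlab
% Bernoulli numbers B_0,...,B_m from the recurrence above (illustrative).
function B = bernoulli_numbers(m)
    B = zeros(1, m + 1);
    B(1) = 1;                                     % B_0 = 1 (1-based indexing)
    for k = 1:m
        acc = 0;
        for i = 0:k-1
            acc = acc + nchoosek(k, i) * B(i + 1) / (k + 1 - i);
        end
        B(k + 1) = -acc;                          % B_k
    end
end
```

For instance, bernoulli_numbers(4) returns the values corresponding to $B_0, \ldots, B_4 = 1, -1/2, 1/6, 0, -1/30$.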
Note that $B_3 = B_5 = \cdots = B_{2k+1} = 0$ for $k \ge 1$. Thus, for a matrix $A \in \mathbb{C}^{n\times n}$, we define the $m$-th Bernoulli matrix polynomial by the expression
$$B_m(A) = \sum_{k=0}^{m} \binom{m}{k} B_k\, A^{m-k}.$$
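A straightforward, unoptimized evaluation of $B_m(A)$, reusing the bernoulli_numbers helper sketched above (again an illustrative sketch, not the evaluation scheme of the actual implementation):

```matlab
% m-th Bernoulli matrix polynomial B_m(A) by direct evaluation (illustrative).
function P = bernoulli_matrix_poly(A, m)
    B = bernoulli_numbers(m);            % scalar Bernoulli numbers B_0..B_m
    n = size(A, 1);
    P = zeros(n);
    Ak = eye(n);                         % A^(m-k), built up from A^0
    for k = m:-1:0                       % sum_{k=0}^{m} C(m,k) B_k A^{m-k}
        P = P + nchoosek(m, k) * B(k + 1) * Ak;
        Ak = Ak * A;
    end
end
```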
In this way, the exponential of the matrix $A$ can be computed as
$$e^{At} = \frac{e^t - 1}{t} \sum_{k \ge 0} \frac{B_k(A)\, t^k}{k!}, \qquad |t| < 2\pi, \qquad (2)$$
where $B_k(A)$ is the $k$-th Bernoulli matrix polynomial. If we take $s$ as the scaling value of the matrix $A$ and $t = 1$ in (2), then we can compute the matrix exponential approximation as
$$e^{A/2^s} \approx (e - 1) \sum_{k=0}^{m} \frac{B_k(A/2^s)}{k!}. \qquad (3)$$
To use this formula in practice for approximating the matrix exponential, we must determine, for a given matrix $A$, the scaling factor $s$ and the degree $m$ of the approximation (3).
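Putting the pieces together, a rough sketch of the whole procedure could read as follows. The fixed order m and the simple norm-based choice of s are assumptions made only for illustration; the actual expber code selects m and s from error bounds and evaluates the truncated series far more efficiently (e.g., via Paterson-Stockmeyer).

```matlab
% Rough sketch of approximating exp(A) with the truncated Bernoulli series (3)
% followed by repeated squaring. Illustrative only.
function E = expm_bernoulli_sketch(A, m, theta)
    s  = max(0, ceil(log2(norm(A, 1) / theta)));  % simple scaling choice (assumption)
    As = A / 2^s;
    E  = zeros(size(A));
    for k = 0:m
        E = E + bernoulli_matrix_poly(As, k) / factorial(k);
    end
    E = (exp(1) - 1) * E;                         % (e - 1) * sum_{k=0}^m B_k(A/2^s)/k!
    for k = 1:s
        E = E * E;                                % undo the scaling
    end
end
```

A call such as expm_bernoulli_sketch(A, 20, 1) can be compared against MATLAB's expm(A) to check the derivation.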
References

[1] C. Van Loan. A study of the matrix exponential, Numerical Analysis Report, Tech. rep., Manchester Institute for Mathematical Sciences, University of Manchester (2006).
[2] G. A. Baker, P. R. Graves-Morris. Padé Approximants, Encyclopedia of Mathematics and its Applications, Cambridge University Press, 1996.
[3] J. Sastre, J. Ibáñez, E. Defez, P. Ruiz. New scaling-squaring Taylor algorithms for computing the matrix exponential, SIAM Journal on Scientific Computing 37 (1) (2015) A439-A455.
[4] J. Sastre, J. Ibáñez, E. Defez, P. Ruiz. Efficient orthogonal matrix polynomial based method for computing matrix exponential, Applied Mathematics and Computation 217 (14) (2011) 6451-6463.
[5] F. W. Olver, D. W. Lozier, R. F. Boisvert, C. W. Clark. NIST Handbook of Mathematical Functions, Cambridge University Press, 2010.
[6] E. Tohidi, K. Erfani, M. Gachpazan, S. Shateyi. A new Tau method for solving nonlinear Lane-Emden type equations via Bernoulli operational matrix of differentiation, Journal of Applied Mathematics 2013 (2013).
[7] A. W. Islam, M. A. Sharif, E. S. Carlson. Numerical investigation of double diffusive natural convection of CO2 in a brine saturated geothermal reservoir, Geothermics 48 (2013) 101-111.
[8] O. Kouba. Lecture Notes, Bernoulli Polynomials and Applications, arXiv preprint arXiv:1309.7560 (2013).
[9] A. H. Al-Mohy, N. J. Higham. A new scaling and squaring algorithm for the matrix exponential, SIAM Journal on Matrix Analysis and Applications 31 (3) (2009).
[10] P. Ruiz, J. Sastre, J. Ibáñez, E. Defez. High performance computing of the matrix exponential, Journal of Computational and Applied Mathematics 291 (2016) 370-379.
[11] N. J. Higham. The Test Matrix Toolbox for MATLAB, Numerical Analysis Report No. 237, Manchester, England, 1993.
[12] T. G. Wright. EigTool, version 2.1, web.comlab.ox.ac.uk/pseudospectra/eigtool, 2009.
[13] N. J. Higham. Functions of Matrices: Theory and Computation, SIAM, Philadelphia, PA, USA, 2008.
Numerical Experiments
We have compared the following three routines for computing the matrix exponential:
expade: a MATLAB function based on the Padé rational approximation of the matrix exponential [9];
exptay: a MATLAB function based on the Taylor series evaluated by means of the Paterson-Stockmeyer method [10] (a generic sketch of this evaluation scheme is given below);
expber: the new MATLAB function based on the Bernoulli series presented in this paper.
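For reference, a generic Paterson-Stockmeyer evaluation of a matrix polynomial looks roughly as follows; this is an illustrative sketch with hypothetical names, not the implementation of exptay or expber [10].

```matlab
% Paterson-Stockmeyer evaluation of P(X) = sum_{i=0}^{m} c(i+1) * X^i
% using about 2*sqrt(m) matrix products (illustrative sketch).
function P = polyvalm_ps(c, X)
    m = numel(c) - 1;                     % polynomial degree
    q = max(1, floor(sqrt(m)));           % block size ~ sqrt(m)
    n = size(X, 1);
    Xp = cell(q + 1, 1);                  % Xp{i+1} = X^i for i = 0..q
    Xp{1} = eye(n);
    for i = 1:q
        Xp{i + 1} = Xp{i} * X;
    end
    P = zeros(n);
    for j = floor(m / q):-1:0             % Horner recurrence in X^q
        blk = zeros(n);
        for i = 0:min(q - 1, m - j * q)   % coefficients of the j-th block
            blk = blk + c(j * q + i + 1) * Xp{i + 1};
        end
        P = P * Xp{q + 1} + blk;
    end
end
```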
The test set consisted of 256 matrices of size 128 × 128 taken from the following four classes:
100 diagonalizable matrices with real and complex eigenvalues.
100 non-diagonalizable complex matrices with eigenvalues of multiplicity greater than 1.
40 real matrices from the function matrix of the Matrix Computation Toolbox [11].
16 matrices taken from the Eigtool MATLAB package [12].
The "exact" matrix exponential has been computed using a MATLAB symbolic version of a scaled Taylor Paterson-Stockmeyer approximation, with 256-decimal-digit arithmetic, trying several orders $m$ and/or scaling parameters $s$, all of them higher than the ones used by expade and exptay, respectively; we checked that the relative differences between these reference values were small enough. The accuracy of each algorithm was tested by computing the relative error
$$E = \frac{\|\exp(A) - \tilde{Y}\|_1}{\|\exp(A)\|_1},$$
where $\tilde{Y}$ is the computed solution and $\exp(A)$ is the exact one. We also used the MATLAB function funm_condest1 to estimate the condition number of the matrix exponential in the 1-norm.
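In MATLAB terms, the error measurement amounts to something like the following snippet (the small example matrix is an assumption, and the symbolic reference requires the Symbolic Math Toolbox):

```matlab
% Illustrative computation of the normwise relative error in the 1-norm.
A    = [1 2; 0 3];                     % small example matrix (assumption)
Y    = expm(A);                        % solution computed by the routine under test
expA = double(expm(sym(A)));           % reference exponential via symbolic arithmetic
E    = norm(expA - Y, 1) / norm(expA, 1)
```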
Table 1 shows the percentage of cases in which the relative errors of expber are, respectively, lower than, greater
than, or equal to the relative errors of the other algorithms under test.
E(expber) < E(expade): 91.41%        E(expber) < E(exptay): 67.97%
E(expber) > E(expade):  8.59%        E(expber) > E(exptay): 30.47%
E(expber) = E(expade):  0.00%        E(expber) = E(exptay):  1.56%

Table 1: Relative error comparison of expber with expade and exptay, respectively.
Figure 2 shows some results of this test:
Figure 2 a) shows the normwise relative errors. The solid line is the function $\kappa_{\exp} u$, where $\kappa_{\exp}$ is the condition number of the matrix exponential function [13, Chapter 3] and $u = 2^{-53}$ is the unit roundoff in double precision floating-point arithmetic.
In the performance profile (Fig. 2 b)), the $\alpha$ coordinate varies between 1 and 5 in steps of 0.1, and the $p$ coordinate is the probability that the considered algorithm has a relative error lower than or equal to $\alpha$ times the smallest error over all the methods (a sketch of this computation is given right after this list).
The ratios of relative errors (Fig. 2 c)) are presented in decreasing order with respect to E(expber)/E(exptay)
and E(expber)/E(expade).
Figure 2 d) shows the ratio of the computational cost measured in the number of matrix products.
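The performance profile itself can be computed with a few lines of MATLAB; in the sketch below, the error array E is filled with random placeholder data purely for illustration and is not the paper's plotting code.

```matlab
% Sketch of the performance profile of Fig. 2 b). E(i,j) is the relative
% error of method j on test matrix i (random placeholder data here).
E = 10.^(-16 + 3 * rand(256, 3));
alphas = 1:0.1:5;
Emin = min(E, [], 2);                              % best error per matrix
p = zeros(numel(alphas), size(E, 2));
for i = 1:numel(alphas)
    p(i, :) = mean(E <= alphas(i) * Emin, 1);      % fraction within alpha * best
end
plot(alphas, p); xlabel('\alpha'); ylabel('p');
```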
[Fig. 2: Results of the test. a) Normwise relative errors, together with the curve $\kappa_{\exp} u$. b) Performance profile ($p$ versus $\alpha$). c) Ratios of relative errors E(expber)/E(exptay) and E(expber)/E(expade). d) Ratios of matrix products M(expber)/M(exptay) and M(expber)/M(expade).]
The new code reuses the MEX files previously developed for exptay; thus, both exptay and expber are able to exploit an NVIDIA GPU available in the node.
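The actual GPU path relies on those MEX files (CUDA). As a rough illustration of why matrix-product-dominated codes benefit from the GPU, the same products can also be offloaded in plain MATLAB with gpuArray; this requires the Parallel Computing Toolbox and a supported NVIDIA device, and is not the implementation used in the experiments.

```matlab
% Illustration only: matrix products, the dominant cost of the polynomial
% methods, map directly onto the GPU via gpuArray.
A  = randn(4000);          % example size from Table 2
Ag = gpuArray(A);          % transfer the operand to the GPU once
Pg = Ag * Ag;              % the product runs on the GPU
P  = gather(Pg);           % bring the result back to the CPU
```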
Table 2 shows the performance of exptay and expber for large matrices on both the CPU and the GPU. The CPU consists of two processors with 20 cores each (Intel Xeon E5-2698 v4 @ 2.20 GHz); the GPU is an NVIDIA Tesla P100-SXM2.
n            1000  1500  2000  2500  3000  3500  4000  4500  5000  5500  6000
exptay CPU   0.23  0.51  0.90  1.88  2.54  4.11  5.17  7.62 10.29 13.22 17.30
exptay GPU   0.11  0.18  0.27  0.53  0.83  1.22  1.73  2.29  2.89  3.47  4.71
expber CPU   0.23  0.85  1.06  1.92  2.78  4.02  5.48  7.50 10.11 14.12 17.00
expber GPU   0.10  0.27  0.29  0.57  0.89  1.35  1.81  2.37  3.11  3.51  4.47

Table 2: Execution time (seconds) for large matrices on CPU and GPU.
Conclusions
All three implementations, i.e. expade, exptay, and expber, have similar numerical stability. The functions based on polynomial approximations are more accurate than the one based on Padé approximants, with the new function expber being slightly more accurate than our former code exptay.
In terms of matrix products, expber and exptay perform exactly the same, and both have a 7% lower computational cost than expade.
The executions carried out in our test showed that the execution times of exptay and expber are very similar, as expected.
Using our GPU implementation, both routines exptay and expber perform better than their CPU versions on the target machine used in the experiments. Over the whole range (n = 1000, ..., 6000), the speed-up ranges from 2.1 to 3.8.
Acknowledgements
This work has been partially supported by Spanish Ministerio de Economía y Competitividad and European Regional
Development Fund (ERDF) grants TIN2017-89314-P and by the Programa de Apoyo a la Investigación y Desarrollo
2018 of the Universitat Politècnica de València (PAID-06-18) grants SP20180016.