
Received: 30 April 2020
DOI: 10.1002/mma.7041
SPECIAL ISSUE PAPER
On Bernoulli series approximation for the matrix cosine
Emilio Defez¹ | Javier Ibáñez² | José M. Alonso² | Pedro Alonso-Jordá³
1Instituto de Matemática Multidisciplinar,
Universitat Politècnica de València,
Valencia, Spain
2Instituto de Instrumentación para
Imagen Molecular, Universitat Politècnica
de València, Valencia, Spain
3Department of Information Systems and
Computation, Universitat Politècnica de
València, Valencia, Spain
Correspondence
Pedro Alonso-Jordá, Department of
Information Systems and Computation,
Universitat Politècnica de València,
Camino de Vera s/n, 46022 Valencia,
Spain.
Email: palonso@upv.es
Communicated by: M. Tosun
Funding information
Spanish Ministerio de Economía y Competitividad and European Regional Development Fund, Grant/Award Number: TIN2017-89314-P; Universitat Politècnica de València, Grant/Award Number: SP20180016
This paper presents a new series expansion based on Bernoulli matrix polynomials to approximate the matrix cosine function. An approximation based on this series is not a straightforward exercise, since there exist different options for implementing such a solution. We explore these options and include a thorough comparison of performance and accuracy in the experimental results section, showing the benefits and downsides of each one. A comparison with the Padé approximation is also included. The algorithms have been implemented in MATLAB and in CUDA for NVIDIA GPUs.
KEYWORDS
matrix exponential and similar functions of matrices, polynomials and matrices
MSC CLASSIFICATION
65F60; 68W10
1 INTRODUCTION AND NOTATION
In recent years, the study of matrix functions has been the subject of increasing attention due to its usefulness in various areas of science and engineering, where it raises new and interesting problems alongside those already well known. Of all matrix functions, it is certainly the matrix exponential that attracts most of the attention because of its connection with systems of first-order linear differential equations

$$Y'(t) = A\,Y(t), \qquad Y(0) = Y_0, \qquad A \in \mathbb{C}^{r\times r},$$

whose solution is given by $Y(t) = e^{At}Y_0$, where $\mathbb{C}^{r\times r}$ denotes the set of all complex square matrices of size $r$. The
hyperbolic matrix functions are applied in the study of communicability analysis in complex networks [1-3] and also in the solution of coupled hyperbolic systems of partial differential equations [4]. In particular, the sine and cosine trigonometric matrix functions have proven especially useful for solving systems of second-order linear differential equations of the form

$$\frac{d^2}{dt^2}Y(t) + A^2\,Y(t) = 0, \qquad Y(0) = Y_0, \qquad Y'(0) = Y'_0, \qquad A \in \mathbb{C}^{r\times r}.$$
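For nonsingular $A$, the solution of this system can be written in terms of the matrix cosine and sine (a standard identity, stated here for reference):

$$Y(t) = \cos(At)\,Y_0 + A^{-1}\sin(At)\,Y'_0,$$

which is what makes accurate and efficient algorithms for the matrix cosine of direct practical interest.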
... Taking into account the precomputed values $\Theta_{m_k}$ of Tables 1-3, Algorithm 4 computes the most appropriate values of the polynomial order m and the scaling parameter s. In fact, Algorithm 4 is an improvement on Algorithm 4 of [38], where it is explained in further detail. The main difference with respect to [38] is that the new code can be used to determine the values of m and s independently of the nature of the error, covering relative, absolute, forward and backward errors. This is accommodated by means of Line 8 in Algorithm 4, which takes into account that the first non-zero term occupies position $m_i$ for the relative backward error series, or $m_i + 1$ for the absolute/relative forward or absolute backward error ones. Additionally, $\alpha_m$ is approximated as $\alpha_m \approx \|A^m\|^{1/m}$, as justified in [38]. In line 18, $p_{m_i}$ is the highest-order coefficient of the approximating polynomial. ...
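As a loose MATLAB sketch of the pattern this snippet describes (not the authors' exact Algorithm 4; the candidate orders m_cand, the thresholds Theta, and the scaling rule here are assumptions):

    function [m, s] = select_order_scaling(A, m_cand, Theta)
    % Pick the smallest candidate order whose precomputed error threshold
    % covers alpha_m ~ norm(A^m,1)^(1/m); otherwise take the largest order
    % and scale the argument by 2^(-s).
        for k = 1:numel(m_cand)
            mk    = m_cand(k);
            alpha = norm(A^mk, 1)^(1/mk);   % exact-norm variant, no estimator
            if alpha <= Theta(k)
                m = mk; s = 0;              % order mk suffices without scaling
                return
            end
        end
        m = m_cand(end);
        s = max(0, ceil(log2(alpha / Theta(end))));
    end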
Article
Full-text available
This paper presents three different alternatives to evaluate the matrix hyperbolic cosine using Bernoulli matrix polynomials, comparing them from the point of view of accuracy and computational complexity. The first two alternatives are derived from two different Bernoulli series expansions of the matrix hyperbolic cosine, while the third one is based on the approximation of the matrix exponential by means of Bernoulli matrix polynomials. We carry out an analysis of the absolute and relative forward errors incurred in the approximations, deriving suitable values for the matrix polynomial degree and the scaling factor to be used. Finally, we use a comprehensive matrix testbed to perform a thorough comparison of the alternative approximations, also taking into account other current state-of-the-art approaches. The most accurate and efficient options are identified as a result.
... • We denote y = 3 if the evaluation of m and s uses a norm estimation similar to the one given in reference [14]. ...
... • We denote y = 4 if the evaluation of m and s uses a different algorithm for the norm estimation; see reference [14] for more details. ...
... • We denote y = 5 if the evaluation of m and s is made without norm estimation (computing the norms exactly); see [14]. ... (A sketch of these variants follows below.)
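For illustration only, the estimator-based variants (y = 3, 4) can be sketched in MATLAB with the built-in 1-norm estimator normest1 applied to a matrix-power operator, avoiding the explicit formation of A^m; the helper name alpha_est and its wiring are assumptions, not code from reference [14]:

    function alpha = alpha_est(A, m)
    % Estimate alpha_m = norm(A^m, 1)^(1/m) without forming A^m explicitly.
        n = size(A, 1);
        alpha = normest1(@afun)^(1/m);
        function y = afun(flag, x)
            % Operator protocol expected by normest1.
            switch flag
                case 'dim',      y = n;                               % problem size
                case 'real',     y = isreal(A);                       % real operator?
                case 'notransp', y = x; for k = 1:m, y = A  * y; end  % A^m * x
                case 'transp',   y = x; for k = 1:m, y = A' * y; end  % (A^m)' * x
            end
        end
    end

The y = 5 variant instead computes norm(A^m, 1) exactly, trading m - 1 extra matrix products for an exact value.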
Conference Paper
Full-text available
Hyperbolic matrix functions cosh(A) and sinh(A) emerge in various areas of science and technology, and their computation has attracted significant attention due to their usefulness in the solution of systems of second-order linear differential equations. In this work, we introduce Bernoulli matrix polynomial series expansions for the hyperbolic matrix cosine function cosh(A) in order to obtain accurate and powerful methods for its computation.
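Purely to illustrate the kind of expansion involved, the following MATLAB sketch approximates cosh(A) from a truncated Bernoulli series, using the generating-function identity at t = 1, namely e^X = (e - 1) * sum_{n>=0} B_n(X)/n!. This is a naive instructional version, not the methods introduced in that work; all function names are hypothetical:

    function C = coshm_bernoulli(A, m)
    % Truncated Bernoulli-series approximation of cosh(A) = (e^A + e^(-A))/2.
        C = (expm_bernoulli(A, m) + expm_bernoulli(-A, m)) / 2;
    end

    function E = expm_bernoulli(X, m)
    % e^X ~ (e-1) * sum_{n=0}^{m} B_n(X)/n!, with the double sum reordered so
    % that the coefficient of X^j is (1/j!) * sum_{k=0}^{m-j} B_k/k!.
        b = bernoulli_numbers(m);
        r = size(X, 1);
        E = zeros(r);
        P = eye(r);                                   % P holds X^j
        for j = 0:m
            s = sum(b(1:m-j+1) ./ factorial(0:m-j)');
            E = E + (s / factorial(j)) * P;
            if j < m, P = P * X; end
        end
        E = (exp(1) - 1) * E;
    end

    function b = bernoulli_numbers(m)
    % Bernoulli numbers B_0..B_m via the standard recurrence.
        b = zeros(m+1, 1); b(1) = 1;
        for n = 1:m
            s = 0;
            for k = 0:n-1
                s = s + nchoosek(n+1, k) * b(k+1);
            end
            b(n+1) = -s / (n+1);
        end
    end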
... Other algorithms have been developed for computing the matrix cosine based on Taylor series [6], [35], [36], with improvements on the error bounds or the cost of evaluation of the approximating polynomials. There are also algorithms for evaluating the matrix cosine based on approximating functions other than Taylor and Padé approximants, for example, algorithms based on Bernoulli matrix polynomials [10] and Hermite matrix polynomials [11]. ...
Conference Paper
Full-text available
Bernoulli polynomials and Bernoulli numbers have been extensively used in several areas of mathematics (especially in number theory) because they appear in many mathematical formulas: in the remainder term of the Euler-Maclaurin quadrature rule, in the Taylor series expansions of the trigonometric functions tan(x), csc(x) and cot(x), and in the Taylor series expansion of the hyperbolic function tanh(x). They also appear in the well-known exact expression for the even integer values of the Riemann zeta function.
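For reference, the generating function behind these appearances and the zeta identity mentioned above are the standard formulas

$$\frac{t\,e^{xt}}{e^t - 1} = \sum_{n=0}^{\infty} B_n(x)\,\frac{t^n}{n!}, \qquad \zeta(2n) = \frac{(-1)^{n+1}\,(2\pi)^{2n}}{2\,(2n)!}\,B_{2n},$$

where $B_n(x)$ are the Bernoulli polynomials and $B_n = B_n(0)$ the Bernoulli numbers; for $n = 1$ the second identity recovers $\zeta(2) = \pi^2/6$.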
Article
Full-text available
In this work we introduce a new method to compute the matrix cosine. It is based on recent matrix polynomial evaluation methods for the Taylor approximation and on a mixed forward and backward error analysis. These matrix polynomial evaluation methods make it possible to evaluate the Taylor polynomial approximation of the matrix cosine function more efficiently than with the Paterson-Stockmeyer method. A sequential Matlab implementation of the new algorithm is provided, giving better efficiency and accuracy than state-of-the-art algorithms. Moreover, we provide a Matlab implementation that can use NVIDIA GPUs easily and efficiently.
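For context, the Paterson-Stockmeyer scheme that serves as the baseline here can be sketched in a few lines of MATLAB; this is the generic textbook method (the function name polyvalm_ps is hypothetical), not the new evaluation methods of the paper:

    function P = polyvalm_ps(c, A)
    % Evaluate p(A) = c(1)*I + c(2)*A + ... + c(m+1)*A^m by the
    % Paterson-Stockmeyer scheme: Horner in A^s over blocks of degree < s.
        m = numel(c) - 1;
        s = max(1, ceil(sqrt(m)));                % block size ~ sqrt(m)
        r = size(A, 1);
        pow = cell(s+1, 1); pow{1} = eye(r);
        for j = 1:s, pow{j+1} = pow{j} * A; end   % I, A, ..., A^s
        q = floor(m/s);
        P = zeros(r);
        for i = q:-1:0
            B = zeros(r);
            for j = 0:min(s-1, m - i*s)           % block of coefficients c_{is+j}
                B = B + c(i*s + j + 1) * pow{j+1};
            end
            if i == q, P = B; else, P = P * pow{s+1} + B; end
        end
    end

The cost is roughly (s - 1) + floor(m/s) matrix products, minimized near s = sqrt(m); the evaluation schemes proposed in works like this one reduce that count further.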
Article
Full-text available
In this work we introduce new rational-polynomial Hermite matrix expansions which allow us to obtain a new accurate and efficient method for computing the matrix cosine. This method is compared with other state-of-the-art methods for computing the matrix cosine, including a method based on Padé approximants, showing far superior efficiency and higher accuracy. The algorithm implemented on the basis of this method can also be executed on either one or two NVIDIA GPUs, which demonstrates its great computational capacity.
Article
Full-text available
This paper presents an implementation of one of the most up-to-date algorithms proposed to compute the matrix trigonometric functions sine and cosine. The method used is based on Taylor series approximations and makes intensive use of matrix multiplications. To accelerate matrix products, our application can use from one to four NVIDIA GPUs through the NVIDIA cublas and cublasXt libraries. The application, implemented in C++, can be used from the Matlab command line thanks to the mex files provided. We experimentally assess our implementation on modern, very high-performance NVIDIA GPUs.
Article
Full-text available
In this article, we apply the method of lines (MOL) for solving time-fractional diffusion equations (TFDEs). The use of MOL yields a system of fractional differential equations with initial values. The solution of this system can be obtained in the form of a Mittag–Leffler matrix function. A direct method that computes the Mittag–Leffler matrix analytically from its eigenvalues and eigenvectors is discussed. The direct approach is applied to one-, two-, and three-dimensional TFDEs with Dirichlet, Neumann, and periodic boundary conditions.
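A minimal MATLAB sketch of the direct eigendecomposition approach described here might look as follows, assuming A is diagonalizable and using a plain truncated series for the scalar Mittag-Leffler function (adequate only for moderate arguments; the function name mlm_direct is hypothetical):

    function F = mlm_direct(A, alpha, nterms)
    % Matrix Mittag-Leffler function E_alpha(A) via eigendecomposition.
    % The scalar series E_alpha(z) = sum_{k>=0} z^k / gamma(alpha*k + 1)
    % is truncated after nterms terms.
        if nargin < 3, nterms = 100; end
        [V, D] = eig(A);
        k = (0:nterms)';
        E = arrayfun(@(z) sum(z.^k ./ gamma(alpha*k + 1)), diag(D));
        F = V * diag(E) / V;                 % f(A) = V f(D) V^{-1}
    end

For alpha = 1 this reduces to the matrix exponential, which gives a quick sanity check against expm.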
Article
Full-text available
This paper presents a new family of methods for evaluating matrix polynomials more efficiently than the state-of-the-art Paterson–Stockmeyer method. Examples of the application of the methods to the Taylor polynomial approximation of matrix functions like the matrix exponential and matrix cosine are given. Their efficiency is compared with that of the best existing evaluation schemes for general polynomial and rational approximations, and also with a recent method based on mixed rational and polynomial approximants. For many years, the Paterson–Stockmeyer method has been considered the most efficient general method for the evaluation of matrix polynomials. In this paper we show that this statement is no longer true. Moreover, for many years rational approximations have been considered more efficient than polynomial approximations, although recently it has been shown that often this is not the case in the computation of the matrix exponential and matrix cosine. In this paper we show that in fact polynomial approximations provide a higher order of approximation than the state-of-the-art computational methods for rational approximations for the same cost in terms of matrix products.
Article
Full-text available
In this article, we apply the method of lines (MOL) for solving the heat equation. The use of MOL yields a system of first-order differential equations with initial values. The solution of this system can be obtained in the form of an exponential matrix function. Two approaches can be applied to this problem. The first approach is the approximation of the exponential matrix by Taylor expansion, Padé and limit approximations. This approach leads to various explicit and implicit finite difference methods with different stability regions, order of accuracy up to six in space, and superlinear convergence in time. The second approach is a direct method which computes the exponential matrix analytically from its eigenvalues and eigenvectors. The direct approach has been applied to one-, two- and three-dimensional heat equations with Dirichlet, Neumann, Robin and periodic boundary conditions.
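As a concrete instance of this setup, a minimal MATLAB sketch of the method of lines for the 1-D heat equation with Dirichlet boundaries, solved via the exponential matrix, could read (grid size and test data are assumptions):

    % 1-D heat equation u_t = u_xx on [0,1], homogeneous Dirichlet boundaries.
    n  = 99;  h = 1/(n+1);  x = (1:n)' * h;        % interior grid points
    e  = ones(n, 1);
    L  = spdiags([e -2*e e], -1:1, n, n) / h^2;    % finite-difference Laplacian
    u0 = sin(pi * x);                              % initial condition
    t  = 0.1;
    u  = expm(t * full(L)) * u0;                   % exact semi-discrete solution
    % For this u0 the exact PDE solution is exp(-pi^2*t)*sin(pi*x),
    % which makes a convenient accuracy check.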
Article
Full-text available
The computation of matrix trigonometric functions has received remarkable attention in the last decades due to its usefulness in the solution of systems of second order linear differential equations. Several state-of-the-art algorithms have been provided recently for computing these matrix functions. In this work, we present two efficient algorithms based on Taylor series with forward and backward error analysis for computing the matrix cosine. A MATLAB implementation of the algorithms is compared to state-of-the-art algorithms, with excellent performance in both accuracy and cost.
Article
Full-text available
Several existing algorithms for computing the matrix cosine employ polynomial or rational approximations combined with scaling and use of a double angle formula. Their derivations are based on forward error bounds. We derive new algorithms for computing the matrix cosine, the matrix sine, and both simultaneously that are backward stable in exact arithmetic and behave in a forward stable manner in floating point arithmetic. Our new algorithms employ both Padé approximants of sin(x) and new rational approximants to cos(x) and sin(x) obtained from Padé approximants to e^x. The amount of scaling and the degree of the approximants are chosen to minimize the computational cost subject to backward stability in exact arithmetic. Numerical experiments show that the new algorithms have backward and forward errors that rival or surpass those of existing algorithms and are particularly favorable for triangular matrices.
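The scaling and double-angle recovery these algorithms share is easy to illustrate: after approximating cos(A/2^s) by any polynomial or rational method, the result is squared back up with cos(2X) = 2 cos(X)^2 - I. A hedged MATLAB sketch of just the recovery step (the function name is hypothetical):

    function C = cos_double_angle(C_scaled, s)
    % Recover cos(A) from an approximation C_scaled of cos(A / 2^s)
    % by applying the double angle formula cos(2X) = 2*cos(X)^2 - I, s times.
        C = C_scaled;
        I = eye(size(C));
        for k = 1:s
            C = 2 * C * C - I;
        end
    end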
Article
The analysis of the structural organization of the interaction network of a complex system is central to understanding its functioning. Here, we focus on the analysis of the bipartivity of graphs. We first introduce a mathematical approach to quantify bipartivity and show its implementation in general and random graphs. Then, we tackle the analysis of the transportation networks of European airlines from the point of view of their bipartivity and observe significant differences between traditional and low-cost carriers. Bipartivity also shows that alliances and major mergers of traditional airlines provide a way to reduce bipartivity, which, in turn, is closely related to an increase in transportation efficiency.
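As an illustration of how hyperbolic matrix functions enter this kind of analysis, an Estrada-style spectral bipartivity measure (stated here as an assumption about the exact definition used in the article) compares even closed walks to all closed walks, equaling 1 exactly for bipartite graphs; in MATLAB:

    A  = [0 1 1; 1 0 1; 1 1 0];            % adjacency matrix of a triangle
    Ch = (expm(A) + expm(-A)) / 2;         % matrix hyperbolic cosine
    beta = trace(Ch) / trace(expm(A))      % < 1: an odd cycle is not bipartite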