
Accurate approximation of the hyperbolic matrix cosine using Bernoulli matrix polynomials


Abstract

The hyperbolic matrix functions cosh(A) and sinh(A) emerge in various areas of science and technology, and their computation has attracted significant attention due to their usefulness in the solution of systems of second-order linear differential equations. In this work, we introduce Bernoulli matrix polynomial series expansions for the hyperbolic matrix cosine cosh(A) in order to obtain accurate and powerful methods for its computation.
Juan Ramón Torregrosa, Juan Carlos Cortés, Antonio Hervás, Antoni Vidal, Elena López-Navarro
Modelling for Engineering
& Human Behaviour 2021
València, July 14th-16th, 2021
This book includes the extended abstracts of papers presented at the XXIII Edition of the Mathematical Modelling Conference Series at the Institute for Multidisciplinary Mathematics, Mathematical Modelling in Engineering & Human Behaviour.
I.S.B.N.: 978-84-09-36287-5
November 30th, 2021
Report any problems with this document to imm@imm.upv.es.
Edited by: I.U. de Matemàtica Multidisciplinar, Universitat Politècnica de València.
J.R. Torregrosa, J.-C. Cortés, J.A. Hervás, A. Vidal-Ferràndiz and E. López-Navarro
Contents
Density-based uncertainty quantification in a generalized Logistic-type model ..... 1
Combined and updated H-matrices ..... 7
Solving random fractional second-order linear equations via the mean square Laplace transform ..... 13
Conformable fractional iterative methods for solving nonlinear problems ..... 19
Construction of totally nonpositive matrices associated with a triple negatively realizable ..... 24
Modeling excess weight in Spain by using deterministic and random differential equations ..... 31
A new family for solving nonlinear systems based on weight functions Kalitkin-Ermankov type ..... 36
Solving random free boundary problems of Stefan type ..... 42
Modeling one species population growth with delay ..... 48
On a Ermakov–Kalitkin scheme based family of fourth order ..... 54
A new mathematical structure with applications to computational linguistics and specialized text translation ..... 60
Accurate approximation of the hyperbolic matrix cosine using Bernoulli matrix polynomials ..... 67
Full probabilistic analysis of random first-order linear differential equations with Dirac delta impulses appearing in control ..... 74
Some advances in Relativistic Positioning Systems ..... 79
A Graph-Based Algorithm for the Inference of Boolean Networks ..... 84
Stability comparison of self-accelerating parameter approximation on one-step iterative methods ..... 90
Mathematical modelling of kidney disease stages in patients diagnosed with diabetes mellitus II ..... 96
The effect of the memory on the spread of a disease through the environment ..... 101
Improved pairwise comparison transitivity using strategically selected reduced information ..... 106
Contingency plan selection under interdependent risks ..... 111
Some techniques for solving the random Burgers' equation ..... 117
Probabilistic analysis of a class of impulsive linear random differential equations via density functions ..... 122
Probabilistic evolution of the bladder cancer growth considering transurethral resection ..... 127
Study of a symmetric family of anomalies to approach the elliptical two body problem with special emphasis in the semifocal case ..... 132
Advances in the physical approach to personality dynamics ..... 136
A Laplacian approach to the Greedy Rank-One Algorithm for a class of linear systems ..... 143
Using STRESS to compute the agreement between computed image quality measures and observer scores: advantages and open issues ..... 149
Probabilistic analysis of the random logistic differential equation with stochastic jumps ..... 156
Introducing a new parametric family for solving nonlinear systems of equations ..... 162
Optimization of the cognitive processes involved in the learning of university students in a virtual classroom ..... 167
Parametric family of root-finding iterative methods ..... 175
Subdirect sums of matrices. Definitions, methodology and known results ..... 180
On the dynamics of a predator-prey metapopulation on two patches ..... 186
Prognostic Model of Cost/Effectiveness in the therapeutic Pharmacy Treatment of Lung Cancer in a University Hospital of Spain: Discriminant Analysis and Logit ..... 192
Stability, bifurcations, and recovery from perturbations in a mean-field semiarid vegetation model with delay ..... 197
The random variable transformation method to solve some randomized first-order linear control difference equations ..... 202
Acoustic modelling of large aftertreatment devices with multimodal incident sound fields ..... 208
Solving non homogeneous linear second order difference equations with random initial values: Theory and simulations ..... 216
A realistic proposal to considerably improve the energy footprint and energy efficiency of a standard house of social interest in Chile ..... 224
Multiobjective Optimization of Impulsive Orbital Trajectories ..... 230
Mathematical Modeling about Emigration/Immigration in Spain: Causes, magnitude, consequences ..... 236
New scheme with memory for solving nonlinear problems ..... 241
SP_N Neutron Noise Calculations ..... 246
Analysis of a reinterpretation of grey models applied to measuring laboratory equipment uncertainties ..... 252
An Optimal Eighth Order Derivative-Free Scheme for Multiple Roots of Non-linear Equations ..... 257
A population-based study of COVID-19 patient's survival prediction and the potential biases in machine learning ..... 262
A procedure for detection of border communities using convolution techniques ..... 267
12
Accurate approximation of the hyperbolic matrix cosine using Bernoulli matrix polynomials
E. Defez(*),1, J.J. Ibáñez(*), J.M. Alonso(†), J. Peinado(‡) and J. Sastre(§)
(*) Instituto Universitario de Matemática Multidisciplinar,
(†) Instituto de Instrumentación para Imagen Molecular,
(‡) Departamento de Sistemas Informáticos y Computación,
(§) Instituto de Telecomunicaciones y Aplicaciones Multimedia,
Universitat Politècnica de València,
Camí de Vera s/n, Valencia, Spain.
1 Introduction and motivation
The evaluation of matrix functions plays an important and relevant role in many scientific applications, because matrix functions have proven to be an efficient tool in applications such as reduced order models [1], [2, pp. 275–303], image denoising [3] and graph neural networks [4], among others.
Among the different matrix functions, we must highlight the hyperbolic matrix functions. Their computation has received remarkable attention in the last decades due to their usefulness in the solution of systems of partial differential problems, see references [5, 6] for example. For this reason, several algorithms have been provided recently for computing these matrix functions, looking for high precision in the approximation and economy of computational cost, see [7, pp. 403–407], [8–11] and references therein.
Also, the generalization of some known classical special functions to the matrix framework is important from both the theoretical and the applied points of view. These new extensions (Laguerre, Hermite, Chebyshev, Jacobi matrix polynomials, etc.) have proved to be very useful in various fields such as physics, engineering, statistics and telecommunications. Recently, the Bernoulli polynomials B_n(x), which are defined in [12] as the coefficients of the generating function

g(x, t) = \frac{t\, e^{tx}}{e^t - 1} = \sum_{n \ge 0} \frac{B_n(x)}{n!}\, t^n, \qquad |t| < 2\pi, \qquad (1)

and which have the explicit expression

B_n(x) = \sum_{k=0}^{n} \binom{n}{k} B_k\, x^{n-k}, \qquad (2)

where the Bernoulli numbers B_n = B_n(0) satisfy the explicit recurrence

B_0 = 1, \qquad B_k = -\sum_{i=0}^{k-1} \binom{k}{i} \frac{B_i}{k+1-i}, \qquad k \ge 1, \qquad (3)

1 edefez@imm.upv.es
have been generalized to the matrix framework in [13]: for a matrix A \in \mathbb{C}^{r \times r}, the nth Bernoulli matrix polynomial is defined by the expression

B_n(A) = \sum_{k=0}^{n} \binom{n}{k} B_k\, A^{n-k}. \qquad (4)

These matrix polynomials give rise to the series expansion

e^{At} = \left( \frac{e^t - 1}{t} \right) \sum_{n \ge 0} \frac{B_n(A)\, t^n}{n!}, \qquad |t| < 2\pi. \qquad (5)
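As a concrete illustration, the recurrence (3) and the matrix polynomial (4) can be sketched in a few lines of Python. This is a minimal, unoptimized sketch (exact rational arithmetic for the Bernoulli numbers, explicit matrix powers); the algorithms discussed below are implemented in MATLAB and use far more efficient evaluation schemes.

```python
import numpy as np
from fractions import Fraction
from math import comb

def bernoulli_numbers(m):
    """Bernoulli numbers B_0..B_m via the recurrence (3), in exact arithmetic."""
    B = [Fraction(1)]
    for k in range(1, m + 1):
        B.append(-sum(Fraction(comb(k, i)) * B[i] / (k + 1 - i) for i in range(k)))
    return B

def bernoulli_matrix_poly(n, A):
    """B_n(A) = sum_{k=0}^n C(n,k) B_k A^{n-k}, formula (4), for a square NumPy array A."""
    B = bernoulli_numbers(n)
    powers = [np.eye(A.shape[0])]          # A^0, A^1, ..., A^n
    for _ in range(n):
        powers.append(powers[-1] @ A)
    out = np.zeros_like(A, dtype=float)
    for k in range(n + 1):
        out += comb(n, k) * float(B[k]) * powers[n - k]
    return out
```

For instance, `bernoulli_numbers(4)` returns [1, -1/2, 1/6, 0, -1/30], and `bernoulli_matrix_poly(2, A)` evaluates B_2(A) = A^2 - A + (1/6) I.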
To obtain practical approximations of the matrix exponential using the expansion (5), take "s" as the scaling parameter of the matrix A and "m" as the degree of the approximation; then

e^{A/2^s} \approx (e - 1) \sum_{n=0}^{m} \frac{B_n(A/2^s)}{n!}. \qquad (6)
The use of expansion (5) to approximate the matrix exponential, with good results in precision and computational cost, can be found in [13]. For a matrix A \in \mathbb{C}^{r \times r}, using expression (5) we obtain

\cosh(A) = \sinh(1) \sum_{n \ge 0} \frac{B_{2n}(A)}{(2n)!} + (\cosh(1) - 1) \sum_{n \ge 0} \frac{B_{2n+1}(A)}{(2n+1)!}. \qquad (7)
Note that, unlike the Taylor (and Hermite) polynomials, which are even or odd depending on the parity of the polynomial degree n, the Bernoulli matrix polynomials do not satisfy this property, so the development of cosh(A) needs all the Bernoulli polynomials (and not just the even-indexed ones). We can also obtain, for C \in \mathbb{C}^{r \times r}, the expression

\cosh(C) = \sinh(1) \sum_{n \ge 0} \frac{2^{2n}\, B_{2n}\!\left( \frac{1}{2}(C + I) \right)}{(2n)!}. \qquad (8)
The objective of this work is to present algorithms, based on the approximations (7) and (8) of the matrix hyperbolic cosine, that try to be the most precise at the lowest computational cost.
2 The proposed Algorithms
From (7) one gets the approximation

\cosh(A) \approx \sinh(1) \sum_{n=0}^{m} \frac{B_{2n}(A)}{(2n)!} + (\cosh(1) - 1) \sum_{n=0}^{m} \frac{B_{2n+1}(A)}{(2n+1)!}, \qquad (9)
and from (8) one gets the alternative approximation

\cosh(C) \approx \sinh(1) \sum_{n=0}^{m} \frac{2^{2n}\, B_{2n}\!\left( \frac{1}{2}(C + I) \right)}{(2n)!}. \qquad (10)
We compare, in practice, algorithms based on the approximations (9)-(10). As different variants are used, we establish the following identification code, coshmber_x_y, where the argument x is chosen according to the following criteria:
x = 1 if formula (9) is used directly.
Numerical test 1
E(coshmber_1_3) < E(coshmber_1_4):  1.23%    E(coshmber_1_3) < E(coshmber_1_5):  0.61%
E(coshmber_1_3) > E(coshmber_1_4): 40.49%    E(coshmber_1_3) > E(coshmber_1_5):  0.00%
E(coshmber_1_3) = E(coshmber_1_4): 58.28%    E(coshmber_1_3) = E(coshmber_1_5): 99.39%
Table 1: Errors in test 1
x = 2 if formula (10) is used directly.
x = 3 if formula (10) is used, but the terms with odd powers have been removed.
On the other hand, the argument y \in {3, 4, 5} is chosen according to the following criteria:
y = 3 if the evaluation of m and s uses a norm estimation similar to the one given in reference [14].
y = 4 if the evaluation of m and s uses another algorithm for the norm estimation, see reference [14] for more details.
y = 5 if the evaluation of m and s is made without norm estimation (computing the norms), see [14].
Our algorithms have been compared with the algorithm funmcosh, which uses funm, the MATLAB function for computing matrix functions such as the matrix hyperbolic cosine. All computations were carried out in MATLAB 2020b.
Matrices and numerical tests
For the numerical experiments, a set of 153 test matrices of size 128 x 128 has been selected: 60 diagonalizable (Hadamard matrices), 60 non-diagonalizable, 39 from the toolbox [15] and 13 from EigTool [16]. We have performed a series of experiments to determine the best algorithm choice. First, we carried out the following tests:
Test 1: we compare coshmber_1_3, coshmber_1_4 and coshmber_1_5.
Test 2: we compare coshmber_2_3, coshmber_2_4 and coshmber_2_5.
Test 3: we compare coshmber_3_3, coshmber_3_4 and coshmber_3_5.
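As a hypothetical illustration of the diagonalizable part of such a battery (the exact construction used in the experiments is not detailed here, so the recipe below is an assumption for illustration only), one can build symmetric test matrices with a known spectrum from Sylvester Hadamard matrices, for which H^{-1} = H^T / 2^k:

```python
import numpy as np

def sylvester_hadamard(k):
    """Hadamard matrix of size 2^k via the Sylvester construction."""
    H = np.array([[1.0]])
    for _ in range(k):
        H = np.block([[H, H], [H, -H]])
    return H

def diagonalizable_test_matrix(k, rng):
    """A = H diag(d) H^T / 2^k with random d: diagonalizable, with known cosh(A)."""
    H = sylvester_hadamard(k)
    d = rng.uniform(-2, 2, size=2 ** k)
    A = (H * d) @ H.T / 2 ** k   # H diag(d) H^{-1}, since H^{-1} = H^T / 2^k
    return A, d, H
```

Since the eigendecomposition is known, cosh(A) = H diag(cosh(d)) H^T / 2^k gives an exact reference against which a computed approximation can be measured.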
Analysis of results of test 1
We compare the algorithms coshmber_1_3, coshmber_1_4 and coshmber_1_5, obtaining the results in Table 1. With respect to the computational cost, the total number of matrix products of each algorithm was: coshmber_1_3 (1940), coshmber_1_4 (1872) and coshmber_1_5 (1939).
Among the three proposed algorithms, we choose coshmber_1_4, because E(coshmber_1_3) > E(coshmber_1_4) in 40.49% of the cases and its number of matrix products is 1872; therefore, coshmber_1_4 has the lowest computational cost. Regarding errors, algorithms coshmber_1_3 and coshmber_1_5 are practically identical.
[Figure: (a) Performance profile of test 1. (b) Pie charts of test 1 — lowest relative error rate: 27%, 45%, 27%; highest relative error rate: 38%, 23%, 39% (legend order: coshmber_1_3, coshmber_1_4, coshmber_1_5).]
Numerical test 2
E(coshmber_2_3) < E(coshmber_2_4): 23.93%    E(coshmber_2_3) < E(coshmber_2_5):  0.61%
E(coshmber_2_3) > E(coshmber_2_4): 17.18%    E(coshmber_2_3) > E(coshmber_2_5):  0.00%
E(coshmber_2_3) = E(coshmber_2_4): 58.90%    E(coshmber_2_3) = E(coshmber_2_5): 99.39%
Table 2: Errors in test 2
Analysis of results of test 2
We compare the algorithms coshmber_2_3, coshmber_2_4 and coshmber_2_5, obtaining the results in Table 2. With respect to the computational cost, the total number of matrix products of each algorithm was: coshmber_2_3 (1940), coshmber_2_4 (1872) and coshmber_2_5 (1939).
[Figure: (a) Performance profile of test 2. (b) Pie charts of test 2 — lowest relative error rate: 34%, 32%, 34%; highest relative error rate: 32%, 35%, 33% (legend order: coshmber_2_3, coshmber_2_4, coshmber_2_5).]
Among the three proposed algorithms, we choose coshmber_2_3, because E(coshmber_2_3) < E(coshmber_2_4) in 23.93% of the cases, despite the fact that it has a higher computational cost (its number of matrix products is 1940). Regarding errors, algorithms coshmber_2_3 and coshmber_2_5 are practically identical.
Analysis of results of test 3
We compare the algorithms coshmber_3_3, coshmber_3_4 and coshmber_3_5, obtaining the results in Table 3. With respect to the computational cost, the total number of matrix products of each algorithm was: coshmber_3_3 (1435), coshmber_3_4 (1336) and coshmber_3_5 (1325).
Numerical test 3
E(coshmber_3_3) < E(coshmber_3_4):  0.61%    E(coshmber_3_3) < E(coshmber_3_5):  0.61%
E(coshmber_3_3) > E(coshmber_3_4): 64.42%    E(coshmber_3_3) > E(coshmber_3_5):  0.00%
E(coshmber_3_3) = E(coshmber_3_4): 34.97%    E(coshmber_3_3) = E(coshmber_3_5): 99.39%
Table 3: Errors in test 3
[Figure: (a) Performance profile of test 3. (b) Pie charts of test 3 — lowest relative error rate: 21%, 58%, 21%; highest relative error rate: 42%, 15%, 43% (legend order: coshmber_3_3, coshmber_3_4, coshmber_3_5).]
Among the three proposed algorithms, we choose coshmber_3_4, because E(coshmber_3_3) > E(coshmber_3_4) in 64.42% of the cases and it has a lower computational cost (its number of matrix products is 1336). Regarding errors, algorithms coshmber_3_3 and coshmber_3_5 are practically identical.
Analysis of results with MATLAB function funmcosh (Numerical test 4)
Finally, we compare the selected algorithms coshmber_1_4, coshmber_2_3 and coshmber_3_4 with the MATLAB function funmcosh; see Table 4. With respect to the computational cost, the total number of matrix products of each algorithm was: funmcosh (2282), coshmber_1_4 (1872), coshmber_2_3 (1940) and coshmber_3_4 (1336).
[Figure: (a) Performance profile of test 4. (b) Pie charts of test 4 — lowest relative error rate: 2%, 16%, 38%, 43%; highest relative error rate: 90%, 3%, 5%, 2% (legend order: funmcosh, coshmber_1_4, coshmber_2_3, coshmber_3_4).]
In general, the relative error improvements over the MATLAB function funmcosh exceed 94% in all cases. Among the algorithms coshmber_1_4, coshmber_2_3 and coshmber_3_4, we choose coshmber_3_4 because it has the lowest computational cost (its total number of matrix products is 1336).
Numerical test 4
E(funmcosh) < E(coshmber_1_4):  1.84%
E(funmcosh) > E(coshmber_1_4): 96.32%
E(funmcosh) = E(coshmber_1_4):  1.84%
E(funmcosh) < E(coshmber_2_3):  3.68%
E(funmcosh) > E(coshmber_2_3): 94.48%
E(funmcosh) = E(coshmber_2_3):  1.84%
E(funmcosh) < E(coshmber_3_4):  0.61%
E(funmcosh) > E(coshmber_3_4): 97.55%
E(funmcosh) = E(coshmber_3_4):  1.84%
Table 4: Errors in test 4
3 Conclusions
In this work, different variants of algorithms have been presented to calculate the matrix hyperbolic cosine, based on the new Bernoulli matrix polynomial series expansions (7) and (8). These algorithms have been tested on a battery of test matrices in order to select the best variants, both in terms of computational cost and in terms of approximation error. The best selection (algorithm coshmber_3_4) is based on formula (10) with the odd-power terms removed, and on the evaluation of m and s using the norm-estimation algorithm given in reference [14].
References
[1] V. Druskin, A. V. Mamonov, M. Zaslavsky, Multiscale S-fraction reduced-order models for massive wavefield simulations. Multiscale Modeling & Simulation, 15(1):445–475, 2017.
[2] A. Frommer, V. Simoncini, Matrix functions. In Model Order Reduction: Theory, Research Aspects and Applications, Springer, New York (USA), 2008.
[3] V. May, Y. Keller, N. Sharon, Y. Shkolnisky, An algorithm for improving nonlocal means operators via low-rank approximation. IEEE Transactions on Image Processing, 25(3):1340–1353, 2016.
[4] R. Levie, F. Monti, X. Bresson, M. M. Bronstein, CayleyNets: Graph convolutional neural networks with complex rational spectral filters. IEEE Transactions on Signal Processing, 67(1):97–109, 2018.
[5] L. Jódar, E. Navarro, J. Martín, Exact and analytic-numerical solutions of strongly coupled mixed diffusion problems. Proceedings of the Edinburgh Mathematical Society, 43:269–293, 2000.
[6] L. Jódar, E. Navarro, A. Posso, M. Casabán, Constructive solution of strongly coupled continuous hyperbolic mixed problems. Applied Numerical Mathematics, 47(3):477–492, 2003.
[7] M. Fontes, M. Günther, N. Marheineke (Eds.), Progress in Industrial Mathematics at ECMI 2012. Mathematics in Industry, vol. 19, Springer-Verlag, Berlin/Heidelberg, 2014.
[8] E. Defez, J. Sastre, J. Ibáñez, J. Peinado, M. Tung, A method to approximate the hyperbolic sine of a matrix. International Journal of Complex Systems in Science, 4(1):41–45, 2014.
[9] E. Defez, J. Sastre, J. Ibáñez, J. Peinado, Solving engineering models using hyperbolic matrix functions. Applied Mathematical Modelling, 40(4):2837–2844, 2016.
[10] N. Higham, P. Kandolf, Computing the action of trigonometric and hyperbolic matrix functions. SIAM Journal on Scientific Computing, 39(2):A613–A627, 2017.
[11] A. H. Al-Mohy, A truncated Taylor series algorithm for computing the action of trigonometric and hyperbolic matrix functions. SIAM Journal on Scientific Computing, 40(3):A1696–A1713, 2018.
[12] F. W. Olver, D. W. Lozier, R. F. Boisvert, C. W. Clark, NIST Handbook of Mathematical Functions. Cambridge University Press, 2010.
[13] E. Defez, J. Ibáñez, P. Alonso-Jordá, J.M. Alonso, J. Peinado, On Bernoulli matrix polynomials and matrix exponential approximation. Journal of Computational and Applied Mathematics, In Press, 2020.
[14] E. Defez, J. Ibáñez, J.M. Alonso, P. Alonso-Jordá, On Bernoulli series approximation for the matrix cosine. Mathematical Methods in the Applied Sciences, In Press, 2020.
[15] N.J. Higham, The Test Matrix Toolbox for MATLAB. Numerical Analysis Report No. 237, The University of Manchester, England, 1993.
[16] T.G. Wright, EigTool, Version 2.1, March 2009. Available online at: http://www.comlab.ox.ac.uk/pseudospectra/eigtool/.