## No full-text available

To read the full-text of this research, you can request a copy directly from the author.

... Figure 3.4 shows the convergence rate of the Extended Krylov Subspace Method, together with our asymptotic estimate, where the optimal parameter a has once again been determined numerically. The agreement is satisfactory, considering that on this problem the method visibly adapts to the spectrum [35]. ...

For large square matrices A and functions f, the numerical approximation of the action of f(A) on a vector v has received considerable attention in the last two decades. In this paper we investigate the extended Krylov subspace method, a technique that was recently proposed to approximate f(A)v for A symmetric. We provide a new theoretical analysis of the method, which improves the original result for A symmetric, and gives a new estimate for A nonsymmetric. Numerical experiments confirm that the new error estimates correctly capture the linear asymptotic convergence rate of the approximation. By using recent algorithmic improvements, we also show that the method is computationally competitive with respect to other enhancement techniques. Copyright © 2009 John Wiley & Sons, Ltd.
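The extended Krylov subspace idea in the abstract above can be sketched as follows: project A onto a basis built from both powers of A and of A⁻¹ applied to v, then evaluate f on the small projected matrix. This is a minimal illustrative sketch, not the paper's implementation; the function name `extended_krylov_fAv`, the simple alternation scheme, and the test matrix are assumptions made here.

```python
import numpy as np
from scipy.linalg import expm, lu_factor, lu_solve

def extended_krylov_fAv(A, v, f_small, m):
    """Approximate f(A) v via Galerkin projection onto the extended
    Krylov subspace span{v, A v, A^{-1} v, A^2 v, A^{-2} v, ...}.
    f_small evaluates f on the small, dense projected matrix."""
    n = A.shape[0]
    lu = lu_factor(A)                  # one factorization serves all A^{-1} solves
    V = np.zeros((n, 2 * m))
    beta = np.linalg.norm(v)
    V[:, 0] = v / beta
    cols = 1
    for j in range(1, 2 * m):
        # alternate a multiplication by A with a solve with A
        w = A @ V[:, j - 1] if j % 2 else lu_solve(lu, V[:, j - 1])
        # modified Gram-Schmidt against the current basis
        for k in range(cols):
            w -= (V[:, k] @ w) * V[:, k]
        nw = np.linalg.norm(w)
        if nw < 1e-12:                 # subspace became invariant; stop early
            break
        V[:, j] = w / nw
        cols += 1
    V = V[:, :cols]
    H = V.T @ (A @ V)                  # projected matrix V^T A V
    e1 = np.zeros(cols); e1[0] = 1.0
    return beta * (V @ (f_small(H) @ e1))   # f(A) v ~ beta * V f(H) e1

# usage: approximate exp(A) v for a small SPD test matrix
rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.standard_normal((60, 60)))
A = Q @ np.diag(np.linspace(0.5, 5.0, 60)) @ Q.T
v = rng.standard_normal(60)
approx = extended_krylov_fAv(A, v, expm, 10)
exact = expm(A) @ v
```

The point of the construction is that a single LU factorization of A is reused for every inverse step, so enlarging the subspace costs one multiplication or one triangular solve per step.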

... Then we derive a general convergence estimate for the RKSM approximation. We expect the Galerkin approximation associated with the rational Krylov subspace to be better than ADI, when using the same poles, because of the projection process, which allows the method to improve adaptation to the spectrum [34]. However, as we have already seen in Theorem 3.4, the two methods may be equivalent under certain choices of the shifts. ...

For large scale problems, an effective approach for solving the algebraic Lyapunov equation consists of projecting the problem onto a significantly smaller space and then solving the reduced order matrix equation. Although Krylov subspaces have been used for a long time, only more recent developments have shown that rational Krylov subspaces can be a competitive alternative to the classical and very popular alternating direction implicit (ADI) recurrence. In this paper we develop a convergence analysis of the rational Krylov subspace method (RKSM) based on the Kronecker product formulation and on potential theory. Moreover, we propose new enlightening relations between this approach and the ADI method. Our results provide solid theoretical ground for recent numerical evidence of the superiority of RKSM over ADI when the involved parameters cannot be computed optimally, as is the case in many practical application problems.
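The projection approach described in the abstract above can be sketched in a few lines: build a rational Krylov basis from shifted solves, solve the small projected Lyapunov equation, and lift the solution back. This is a hedged illustration under assumptions made here (the function name `rksm_lyapunov`, single-vector right-hand side, simple geometric shifts, a 1D Laplacian test matrix), not the paper's algorithm.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def rksm_lyapunov(A, b, shifts):
    """Galerkin projection of A X + X A^T + b b^T = 0 onto the
    rational Krylov subspace spanned by b and (A - s I)^{-1} steps."""
    n = A.shape[0]
    V = np.zeros((n, len(shifts) + 1))
    V[:, 0] = b / np.linalg.norm(b)
    cols = 1
    for s in shifts:
        w = np.linalg.solve(A - s * np.eye(n), V[:, cols - 1])
        for k in range(cols):                       # Gram-Schmidt
            w -= (V[:, k] @ w) * V[:, k]
        nw = np.linalg.norm(w)
        if nw > 1e-12:
            V[:, cols] = w / nw
            cols += 1
    V = V[:, :cols]
    H = V.T @ A @ V                                 # reduced matrix
    g = V.T @ b
    Y = solve_continuous_lyapunov(H, -np.outer(g, g))   # small dense equation
    return V @ Y @ V.T                              # low-rank X ~ V Y V^T

# usage: stable 1D Laplacian-like matrix, positive shifts mirroring its spectrum
n = 100
A = -2.0 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)
b = np.ones(n)
shifts = np.geomspace(1e-3, 4.0, 6)
X = rksm_lyapunov(A, b, shifts)
res = np.linalg.norm(A @ X + X @ A.T + np.outer(b, b))
rel_res = res / np.linalg.norm(np.outer(b, b))
```

The shifts here are fixed in advance for simplicity; the abstract's point is precisely that RKSM remains effective even when such parameters are not chosen optimally.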

... We want to point out that, in many cases, for instance when dealing with discretizations of elliptic operators, the RAM exhibits very fast convergence and the a priori error bounds turn out to be pessimistic. As is well known, this often occurs in the application of Krylov subspace methods and is due to their "good" adaptation to the spectrum (see [18]). Thus, in order to detect the actual behavior of the RAM, a posteriori error estimates could be more suitable. ...

In this paper we analyze the convergence of some commonly used Krylov subspace methods for computing the action of matrix Mittag-Leffler functions. As is well known, such functions find application in the solution of fractional differential equations. We illustrate the theoretical results by some numerical experiments.

The paper deals with the application of the restricted-denominator rational Krylov method, recently discussed in I. Moret and P. Novati [BIT 44, No. 3, 595–615 (2004; Zbl 1075.65062)] and J. van den Eshof and M. Hochbruck [SIAM J. Sci. Comput. 27, No. 4, 1438–1457 (2006; Zbl 1105.65051)], to the computation of the action of the so-called φ-functions, which play a fundamental role in several modern exponential integrators. The analysis here presented is devoted in particular to the construction of error estimates of easy practical use.

We consider restricted rational Lanczos approximations to matrix functions representable by some integral forms. A convergence analysis that stresses the effectiveness of the proposed method is developed. Error estimates are derived. Numerical experiments are presented. Copyright © 2008 John Wiley & Sons, Ltd.

Rational Arnoldi is a powerful method for approximating functions of large sparse matrices times a vector. The selection of asymptotically optimal parameters for this method is crucial for its fast convergence. We present and investigate a novel strategy for the automated parameter selection when the function to be approximated is of Cauchy–Stieltjes (or Markov) type, such as the matrix square root or the logarithm. The performance of this approach is demonstrated by numerical examples involving symmetric and nonsymmetric matrices. These examples suggest that our black-box method performs at least as well as, and typically better than, the standard rational Arnoldi method with parameters manually optimized for a given matrix.
