Commutator-free Cayley methods
S. Maslovskaya, C. Offen, S. Ober-Blöbaum, P. Singh, B. Wembe
Abstract
Differential equations posed on quadratic matrix Lie groups arise in the context of classical mechanics and quantum dynamical systems. Lie group numerical integrators preserve the constants of motion defining the Lie group. Thus, they respect important physical laws of the dynamical system, such as unitarity and energy conservation in the context of quantum dynamical systems, for instance. In this article we develop a high-order commutator-free Lie group integrator for non-autonomous differential equations evolving on quadratic Lie groups. Instead of matrix exponentials, which are expensive to evaluate and need to be approximated by appropriate rational functions in order to preserve the Lie group structure, the proposed method is obtained as a composition of Cayley transforms, which naturally respect the structure of quadratic Lie groups while being computationally efficient to evaluate. Unlike Cayley–Magnus methods, the proposed method is also free from nested matrix commutators.
Keywords. Lie group integrators, Cayley transform, Magnus expansion, Non-
autonomous, Commutator-free methods.
1 Introduction
In this article we are concerned with non-autonomous linear ordinary and partial differential equations,
\[
\dot{Y}(t) = A(t)\,Y(t), \qquad Y(t_0) = Y_0, \qquad t \in [t_0, t_f], \tag{1}
\]
where the solution $Y(t) \in G$ evolves on Lie groups of the form
\[
G = \{ X \in \mathrm{GL}_n(\mathbb{C}) : XJX^* = J \}, \tag{2}
\]
where $\mathrm{GL}_n(\mathbb{C})$ is the group of $n \times n$ non-singular complex matrices and where $J \in \mathrm{GL}_n(\mathbb{C})$ is a given matrix (see for instance [5, 10, 20]). The condition $Y(t) \in G$ for all $t$ is fulfilled if and only if $\dot{Y}(t) = A(t)Y(t)$ is tangential to the Lie group $G$. This is the case if and only if $A(t)$ takes values in the Lie algebra $\mathfrak{g} = \{ \Omega \in \mathbb{C}^{n \times n} : \Omega J + J\Omega^* = 0 \}$ of $G$.
Differential equations of this form describe a wide range of physical systems with numerous practical applications. Some well known examples are the symplectic group $\mathrm{Sp}_n(\mathbb{C})$, with $J = \begin{pmatrix} 0 & I \\ -I & 0 \end{pmatrix}$, related to the investigation of Hamiltonian systems, and the Lorentz group $\mathrm{SO}_{3,1}(\mathbb{R}) = \mathrm{O}_{3,1}(\mathbb{R}) \cap \mathrm{SL}_4(\mathbb{R})$, with $J = \mathrm{diag}(1,1,1,-1)$, related to the study of special relativity. Another important example is the unitary group $\mathrm{U}_n(\mathbb{C})$, with $J = I$, which is germane to the investigation of quantum systems and has received particular attention over the last decades. Lie groups of this form were termed quadratic Lie groups in [24].
In the design of numerical schemes for (1), a key requirement is often that the approximate solution respects the conservation laws of the underlying physical system. However, preservation of the complex quadratic invariant of motion $YJY^* - J = 0$ can be challenging: if $J = I$ (unitary group), for instance, then Gauss–Legendre collocation methods are the only Runge–Kutta methods that preserve the invariant [12]. Rather than seeking an
integrator that can preserve the conserved quantities defining G, which broadly defines the
field of geometric numerical integration [20, 5], Lie group methods [27] are a narrower class
of methods that exploit the intrinsic geometric Lie group structure. By ensuring that the
approximate solution evolves on the Lie group, these methods ensure that the symmetries
of the system and the corresponding conservation laws are respected.
Concerns such as computational accuracy, processing time, and memory usage, which
motivate algorithm development for ODEs and PDEs more broadly, also remain of paramount
importance when solving ODEs and PDEs on Lie groups such as (1). Quantum optimal
control algorithms [4, 9, 14, 15, 42] for instance, require repeated integration of the under-
lying quantum differential systems, which needs to be fast and accurate while preserving
the geometric properties of the system such as unitarity and conservation of energy. The
aim of this work is to develop numerical integrators for non-autonomous linear differential
equations of the form (1) which, in a context such as that of quantum optimal control, are
accurate and fast, while ensuring that the numerical solution evolves on the Lie group (2).
A prominent tool in the design of Lie group methods for solving (1) has been the
Magnus expansion [27, 34]. The general idea of the Magnus approach is to write the
solution as $Y(t) = e^{\Omega(t)}Y_0$ and expand $\Omega(t)$ into an infinite series,
\[
\Omega(t) = \int_0^t A(t_1)\,\mathrm{d}t_1 + \frac{1}{2}\int_0^t \int_0^{t_1} [A(t_1), A(t_2)]\,\mathrm{d}t_2\,\mathrm{d}t_1 + \cdots,
\]
involving nested commutators of A(t) at different times. Truncating the series at a specified
order and computing its exponential yields an approximation of the solution to that order.
The resulting numerical schemes, although very accurate, face many practical diffi-
culties due to the presence of nested commutators in the Magnus expansion – the prior
computation of these commutators can be very expensive, the number of commutators
increases rapidly with the order of accuracy required [38], they reduce sparsity [3], and they can alter the structure sufficiently to make Magnus-based schemes infeasible without substantial alterations [11].
A very versatile technique for overcoming this difficulty is presented by the so-called
commutator-free or quasi-Magnus methods [1, 8, 10]. While the derivation of these methods
also starts from the Magnus expansion, they utilise the Baker-Campbell-Hausdorff (BCH)
formula [18, 37] to approximate the exponential of the Magnus expansion by a product of
multiple exponentials,
\[
e^{\Omega(t)} \approx e^{S_1} e^{S_2} \cdots e^{S_K}, \tag{3}
\]
where each exponent has a much simpler form – specifically, the exponents $S_k$ feature no commutators – while achieving the same accuracy, see for instance [1, 8] where the $S_k$ are obtained as linear combinations or integrals of $A(t)$.
Another crucial bottleneck in Magnus based methods as well as their commutator-free
counterparts is the computation of the matrix exponential, which can be prohibitively
expensive [36]. While Krylov subspace methods lead to very efficient Magnus–Lanczos
solvers for small time-steps [25, 31], polynomial approximations do not respect the Lie
group structure of (1). Where geometric numerical integration is required, rational ap-
proximations to the exponential must be utilised instead [30].
The most well known among rational approximants to the exponential are the degree $(n, n)$ (i.e. diagonal) Padé approximants. The $(1,1)$ Padé approximant, called the Cayley transform, preserves the mapping from the Lie algebra to the Lie group for quadratic Lie groups of the form (2), and thus is suitable for applications to (1). It leads to the well known Crank–Nicolson method, which is a second-order method. The fourth-order Magnus expansion as well as fourth-order commutator-free methods require each exponential, $e^{\Omega(t)}$ or $e^{S_k}$ in (3), to be computed with the degree $(2,2)$ Padé approximant, while sixth-order methods need to be paired with the degree $(3,3)$ Padé approximant. Since a degree $(n, n)$ approximant involves $n$ linear system solves, the requirement of high-order rational approximants leads to an $n$-fold increase in the cost of the overall scheme.
Keeping the eventual approximation of the exponential by a rational function in mind,
Cayley–Magnus methods [13, 24] develop an alternative to the Magnus series by seeking
an expansion whose Cayley transform directly provides a high-order approximation to
the solution of (1). Since the Cayley transform is a degree (1,1) rational method, this
approach circumvents the n-fold scaling of traditional Magnus based approaches. However,
much like the Magnus expansion, this new expansion called the Cayley–Magnus expansion
also features commutators, and its application involves very similar challenges due to their
presence. To the best of our knowledge, there is no commutator-free alternative for the
Cayley–Magnus methods.
In this work we propose a new approach which combines the commutator-free approach with the Cayley–Magnus expansion to derive high-order schemes that avoid both nested commutators and matrix exponential computations. The resulting schemes have close parallels to (3), with the exponentials being replaced by the significantly cheaper Cayley transforms. By design, the schemes respect the Lie group structure of (1) for quadratic Lie groups of the form (2).
The rest of the article is organized as follows. In Section 2 we introduce some notation and definitions. Moreover, we recall results on Cayley–Magnus and Legendre expansions that we will need to build a fourth-order scheme based on the Cayley transform. Section 3 is the core of the paper: we first present a variant of the Cayley–BCH formula derived up to order four; the Cayley–BCH expansion is then used to derive a new fourth-order commutator-free Lie group integrator for quadratic Lie groups such as $\mathrm{SO}_n(\mathbb{R})$ or $\mathrm{U}_n(\mathbb{C})$. Section 4 contains numerical experiments that demonstrate the effectiveness of the proposed approach.
2 Preparation
2.1 Cayley transforms for quadratic Lie groups
Consider the matrix differential equation
\[
\dot{Y}(t) = A(t)\,Y(t), \qquad Y(t_0) = Y_0, \qquad Y(t) \in G, \qquad t \in [t_0, t_f], \tag{4}
\]
where $A(\cdot)$ is a Lipschitz-continuous operator taking its values in $\mathfrak{g}$. If $G$ is a Lie group and $\mathfrak{g}$ its Lie algebra, then the motion evolves on $G$, i.e. $Y(t) \in G$ for all $t$, provided that $A(\cdot)$ takes values in $\mathfrak{g}$ and $Y_0 \in G$. The article focuses on numerical methods for differential equations on quadratic Lie groups, which are matrix Lie groups of the form
\[
G = \{ A \in \mathrm{GL}_n(\mathbb{C}) : AJA^* = J \}, \tag{5}
\]
for an invertible matrix $J \in \mathrm{GL}_n(\mathbb{C})$. Here $A^* = \bar{A}^\top$ denotes the conjugate transpose. The Lie algebra of $G$ is given as
\[
\mathfrak{g} = \{ \Omega \in \mathbb{C}^{n \times n} : \Omega J + J\Omega^* = 0 \}.
\]
An important example of (5) is the unitary group $\mathrm{U}_n(\mathbb{C})$, with $J = I$ (where $I$ is the identity matrix), which occurs in the context of quantum systems. Its Lie algebra consists of skew-Hermitian matrices. Another example is the (complex) symplectic group $\mathrm{Sp}_{2n}(\mathbb{C})$, with $J = \begin{pmatrix} 0 & I \\ -I & 0 \end{pmatrix}$. Its Lie algebra consists of Hamiltonian matrices.
Let $c \in \mathbb{C} \setminus \{0\}$. For $\Omega \in \mathbb{C}^{n \times n}$ with $c^{-1} \notin \sigma(\Omega)$ the $c$-Cayley transform of $\Omega$ is defined as
\[
\mathrm{Cay}(\Omega, c) = (I - c\,\Omega)^{-1}(I + c^*\Omega).
\]
Here $\sigma(M)$ denotes the spectrum of $M$. The inverse $c$-Cayley transform is given as
\[
\mathrm{Cay}^{-1}(A, c) = -\frac{1}{c^*}\left(I + \frac{c}{c^*}A\right)^{-1}(I - A) \qquad \text{for } -\frac{c^*}{c} \notin \sigma(A).
\]
Indeed, the $c$-Cayley transform constitutes a diffeomorphism $\mathrm{Cay}_c \colon \tilde{\mathfrak{g}} \xrightarrow{\ \sim\ } \widetilde{G}$ between $\tilde{\mathfrak{g}} = \{ \Omega \in \mathfrak{g} : c^{-1} \notin \sigma(\Omega) \}$ and $\widetilde{G} = \{ A \in G : -\frac{c^*}{c} \notin \sigma(A) \}$.
Since Cayley transforms respect the Lie group structure for quadratic Lie groups, so do their products, i.e. the product $\mathrm{Cay}(\Omega, c_1)\,\mathrm{Cay}(\Omega, c_2)\cdots\mathrm{Cay}(\Omega, c_n)$ resides in the Lie group $\widetilde{G} = \{ A \in G : -\frac{c_k^*}{c_k} \notin \sigma(A),\ k = 1, \ldots, n \}$, provided $\Omega \in \tilde{\mathfrak{g}} = \{ \Omega \in \mathfrak{g} : c_k^{-1} \notin \sigma(\Omega),\ k = 1, \ldots, n \}$. Thus Cayley transforms are natural building blocks for rational approximations that respect quadratic Lie groups. Indeed, all unitary rational approximations (relevant to quantum dynamics, where $G = \mathrm{U}_n(\mathbb{C})$), including higher-order diagonal Padé approximations, can be obtained as compositions of Cayley transforms [30].
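To make the structure preservation concrete, the following minimal Python sketch (not part of the paper's implementation; names are illustrative) applies the $c$-Cayley transform, realized through a single linear solve, to a random skew-Hermitian matrix and checks that the result is unitary up to round-off.

```python
import numpy as np

def cayley(Omega, c=0.5):
    """c-Cayley transform Cay(Omega, c) = (I - c*Omega)^{-1} (I + conj(c)*Omega)."""
    n = Omega.shape[0]
    I = np.eye(n, dtype=complex)
    return np.linalg.solve(I - c * Omega, I + np.conj(c) * Omega)

rng = np.random.default_rng(0)
n = 6
M = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
Omega = M - M.conj().T            # skew-Hermitian: Omega J + J Omega^* = 0 with J = I

U = cayley(Omega)
print(np.linalg.norm(U.conj().T @ U - np.eye(n)))   # unitarity defect, at round-off level
```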
We will use the Cayley transform as a cheap alternative to the surjective matrix exponential $\exp \colon \mathfrak{g} \to G$ to design Lie group integrators. Other approaches employing the Cayley transform to solve system (4) can be found for example in [24, 33]. These approaches are generally based on the following result. In the following, the $1/2$-Cayley transform will simply be referred to as the Cayley transform and will be denoted by $\mathrm{Cay}(A)$.
Lemma 2.1. Let $Y(t)$ be the solution of system (4), with $-1 \notin \sigma(Y(t)Y_0^{-1})$ for any $t \in [t_0, t_f]$. Then $Y(t)$ can be written in the form $Y(t) = \mathrm{Cay}(\Omega(t))\,Y_0$, where the matrix $\Omega \in \tilde{\mathfrak{g}}$ satisfies
\[
\dot{\Omega}(t) = A(t) - \frac{1}{2}[\Omega, A(t)] - \frac{1}{4}\Omega A(t)\Omega, \qquad \Omega(t_0) = \Omega_0, \tag{6}
\]
with the Lie bracket (commutator) of two matrices $A$ and $B$ defined by $[A, B] = AB - BA$.
Proof. To simplify the notation and without losing any generality, we consider $Y_0 = I$. So, $Y = \mathrm{Cay}(\Omega)$ implies $\left(I - \frac{\Omega}{2}\right)Y = I + \frac{\Omega}{2}$. Differentiation of this relation leads to
\[
-\frac{\dot{\Omega}}{2}Y + \left(I - \frac{\Omega}{2}\right)\dot{Y} = \frac{\dot{\Omega}}{2}, \quad\text{i.e.}\quad \dot{\Omega} = 2\left(I - \frac{\Omega}{2}\right)\dot{Y}\,(I + Y)^{-1}, \quad\text{i.e.}\quad \dot{\Omega} = 2\left(I - \frac{\Omega}{2}\right)A(t)\,Y\,(I + Y)^{-1}. \tag{7}
\]
Moreover,
\[
(I + Y)\,Y^{-1} = \left(I + \left(I - \frac{\Omega}{2}\right)^{-1}\left(I + \frac{\Omega}{2}\right)\right)\left(I + \frac{\Omega}{2}\right)^{-1}\left(I - \frac{\Omega}{2}\right)
= \left(I + \frac{\Omega}{2}\right)^{-1}\left(I - \frac{\Omega}{2}\right) + I
= \left(I + \frac{\Omega}{2}\right)^{-1}\left(I - \frac{\Omega}{2} + I + \frac{\Omega}{2}\right)
= 2\left(I + \frac{\Omega}{2}\right)^{-1}. \tag{8}
\]
Plugging relation (8) into equation (7), one obtains $\dot{\Omega} = \left(I - \frac{\Omega}{2}\right)A(t)\left(I + \frac{\Omega}{2}\right)$, which allows us to conclude (6).
Remark 1. In numerical time-stepping methods, the time step $\delta t$ can always be made sufficiently small such that $\Omega(t)$ is close to the zero matrix and thus $\Omega(t) \in \tilde{\mathfrak{g}}$, i.e. $2 \notin \sigma(\Omega(t))$ for all $t \in [t_0, t_0 + \delta t]$. While this could force small time-steps in a general setting, in the context of quantum systems one has $\tilde{\mathfrak{g}} = \tilde{\mathfrak{u}}_n(\mathbb{C}) = \mathfrak{u}_n(\mathbb{C}) = \mathfrak{g}$, since skew-Hermitian matrices have purely imaginary spectrum and the solution evolves on the unitary matrix group, so that the condition $\Omega \in \tilde{\mathfrak{u}}_n(\mathbb{C})$ is automatically satisfied.
According to Lemma 2.1, solving system (4) is equivalent to solving system (6), but now considering time-stepping with a small time step $\delta t$ in order to guarantee the existence of $\Omega \in \tilde{\mathfrak{g}}$. As the Lie algebra is characterised by the linear constraint $AJ + JA^* = 0$, we can apply any Runge–Kutta method to (6) and obtain a Lie group structure-preserving integrator, since Runge–Kutta methods preserve linear constraints. This approach is suggested in [20, IV.8.3], for instance. However, it involves the repeated computation of matrix commutators, which can be costly in high dimensions. So next, we focus on system (6) and present the Magnus expansion for the Cayley transform, developed by Iserles [24], which will be used later to derive our methods.
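As an illustration of this remark, the following sketch (assuming only that A_of_t returns a matrix in $\mathfrak{g}$) integrates (6) over one step with the classical RK4 method starting from $\Omega(t_n) = 0$ and then applies the Cayley transform; since the right-hand side of (6) maps $\mathfrak{g}$ into $\mathfrak{g}$, all stage values stay in the linear space $\mathfrak{g}$ and the update remains on $G$. This is not one of the schemes proposed below, merely the textbook approach mentioned above.

```python
import numpy as np

def cay(Om):
    I = np.eye(Om.shape[0], dtype=complex)
    return np.linalg.solve(I - Om / 2, I + Om / 2)

def f(Om, A):
    """Right-hand side of (6): A - [Om, A]/2 - Om A Om / 4."""
    return A - 0.5 * (Om @ A - A @ Om) - 0.25 * Om @ A @ Om

def rk4_cayley_step(A_of_t, t, dt, Y):
    """Integrate (6) over [t, t + dt] with classical RK4 from Om = 0, then update Y = Cay(Om) Y."""
    Om0 = np.zeros_like(Y)
    k1 = f(Om0, A_of_t(t))
    k2 = f(Om0 + 0.5 * dt * k1, A_of_t(t + 0.5 * dt))
    k3 = f(Om0 + 0.5 * dt * k2, A_of_t(t + 0.5 * dt))
    k4 = f(Om0 + dt * k3, A_of_t(t + dt))
    Om = Om0 + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    return cay(Om) @ Y
```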
2.2 The Cayley–Magnus expansion
We now focus on system (6) given in Lemma 2.1, i.e.
\[
\dot{\Omega}(t) = A(t) - \frac{1}{2}[\Omega, A(t)] - \frac{1}{4}\Omega A(t)\Omega, \qquad \Omega(t_0) = \Omega_0.
\]
In this section we recall the results of [24] for expanding Ω as a Cayley–Magnus series.
For this, let $\Omega(t)$ denote the solution of (6) to the initial value $\Omega(0) = 0$. We consider as an ansatz the formal series
\[
\Omega(t) = \sum_{m=1}^{\infty} \Omega_m(t), \tag{9}
\]
where $\Omega_m(t)$ denotes an expression consisting of $m$ iterated integrals over polynomials of degree $m$ in $A$. Substituting this into (6) and integrating over $[0, t]$ with $\Omega(0) = 0$ leads to
\[
\begin{aligned}
\sum_{m=1}^{\infty} \Omega_m
&= \int_0^t A(\xi)\,\mathrm{d}\xi - \frac{1}{2}\int_0^t \left[\sum_{m=1}^{\infty}\Omega_m(\xi), A(\xi)\right]\mathrm{d}\xi
- \frac{1}{4}\int_0^t \left[\sum_{m=1}^{\infty}\Omega_m(\xi)\right] A(\xi) \left[\sum_{m=1}^{\infty}\Omega_m(\xi)\right]\mathrm{d}\xi \\
&= \int_0^t A(\xi)\,\mathrm{d}\xi - \frac{1}{2}\int_0^t \sum_{m=2}^{\infty}[\Omega_{m-1}(\xi), A(\xi)]\,\mathrm{d}\xi
- \frac{1}{4}\sum_{m=3}^{\infty}\sum_{k=1}^{m-2}\int_0^t \Omega_{m-k-1}(\xi)\,A(\xi)\,\Omega_k(\xi)\,\mathrm{d}\xi \\
&= \int_0^t A(\xi)\,\mathrm{d}\xi - \frac{1}{2}\int_0^t [\Omega_1(\xi), A(\xi)]\,\mathrm{d}\xi
- \sum_{m=3}^{\infty}\left[\frac{1}{2}\int_0^t [\Omega_{m-1}(\xi), A(\xi)]\,\mathrm{d}\xi + \frac{1}{4}\sum_{k=1}^{m-2}\int_0^t \Omega_{m-k-1}(\xi)\,A(\xi)\,\Omega_k(\xi)\,\mathrm{d}\xi\right].
\end{aligned}
\]
Now the $\Omega_j$ can be determined recursively as follows:
\[
\begin{aligned}
\Omega_1(t) &= \int_0^t A(\xi)\,\mathrm{d}\xi, \\
\Omega_2(t) &= -\frac{1}{2}\int_0^t [\Omega_1(\xi), A(\xi)]\,\mathrm{d}\xi, \\
\Omega_m(t) &= -\frac{1}{2}\int_0^t [\Omega_{m-1}(\xi), A(\xi)]\,\mathrm{d}\xi - \frac{1}{4}\sum_{k=1}^{m-2}\int_0^t \Omega_{m-k-1}(\xi)\,A(\xi)\,\Omega_k(\xi)\,\mathrm{d}\xi, \qquad m \geq 3.
\end{aligned}
\]
For a combinatorial description of the expressions $\Omega_m$ using the language of trees and for a discussion of convergence properties of the series $\sum_{m=1}^{\infty} \Omega_m$ we refer to [24]. The first terms of this expansion are explicitly given by
\[
\begin{aligned}
\Omega_1(t) &= \int_0^t A(\xi)\,\mathrm{d}\xi, \qquad
\Omega_2(t) = -\frac{1}{2}\int_0^t\int_0^{\xi_1} [A(\xi_2), A(\xi_1)]\,\mathrm{d}\xi_2\,\mathrm{d}\xi_1, \\
\Omega_3(t) &= \frac{1}{4}\int_0^t\int_0^{\xi_1}\int_0^{\xi_2} [[A(\xi_3), A(\xi_2)], A(\xi_1)]\,\mathrm{d}\xi_3\,\mathrm{d}\xi_2\,\mathrm{d}\xi_1
- \frac{1}{4}\int_0^t \left[\int_0^{\xi_1} A(\xi_2)\,\mathrm{d}\xi_2\right] A(\xi_1) \left[\int_0^{\xi_1} A(\xi_2)\,\mathrm{d}\xi_2\right]\mathrm{d}\xi_1.
\end{aligned}
\]
Lemma 2.2. Truncating the Cayley–Magnus expansion (9) at a given order $p$, i.e. setting
\[
\Omega(t) \approx \Omega^{[p]}(t) = \sum_{m=1}^{p} \Omega_m(t), \tag{10}
\]
leads to an approximation of order $p$.
Remark 2. Only a few integral terms in each $\Omega_m$ are relevant to obtain an approximation of order $p$. Thus, by considering only the relevant terms, we can considerably reduce the number of terms to be computed (there are fewer terms than in the exponential Magnus expansion, see again [24] for more explanation). Moreover, an approximation of order $p$ for $\Omega(t)$ leads to an approximation of order $p$ for $Y(t)$, i.e. $Y(t) = \mathrm{Cay}(\Omega^{[p]}(t))\,Y_0 + \mathcal{O}(t^{p+1})$ (see for instance [13]).
2.3 Cayley–Magnus and Legendre expansion
A starting point for deriving a commutator-free higher-order scheme is to consider a Legendre expansion of $A(\cdot)$, following the strategy in [1]. We will first introduce the Legendre expansion of the matrix $A$ and expand $\Omega$ in terms of the Legendre expansion. The shifted Legendre polynomials $P_n(x)$ are defined for $n = 0, 1, 2, \ldots$ through the recurrence
\[
P_0(x) = 1, \qquad P_1(x) = 2x - 1, \qquad P_{n+1}(x) = \frac{2n+1}{n+1}\,(2x-1)\,P_n(x) - \frac{n}{n+1}\,P_{n-1}(x). \tag{11}
\]
By definition, $P_n(x)$ is a polynomial of degree $n$ satisfying $P_n(1-x) = (-1)^n P_n(x)$, i.e. it is symmetric (antisymmetric) with respect to $x = 1/2$ for even (odd) $n$. The first terms are explicitly given by
\[
P_2(x) = 6x^2 - 6x + 1, \qquad P_3(x) = 20x^3 - 30x^2 + 12x - 1, \qquad P_4(x) = 70x^4 - 140x^3 + 90x^2 - 20x + 1.
\]
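The recurrence (11), including the $(2x-1)$ factor, can be sanity-checked numerically against the explicit polynomials above; the following minimal sketch (purely illustrative, not part of the proposed schemes) builds the coefficient arrays with NumPy.

```python
import numpy as np
from numpy.polynomial.polynomial import polymul

def shifted_legendre(n):
    """Coefficients (ascending powers of x) of the shifted Legendre polynomial P_n via (11)."""
    P = [np.array([1.0]), np.array([-1.0, 2.0])]       # P0 = 1, P1 = 2x - 1
    for k in range(1, n):
        term = (2 * k + 1) / (k + 1) * polymul([-1.0, 2.0], P[k])        # (2x - 1) P_k
        term -= (k / (k + 1)) * np.pad(P[k - 1], (0, len(term) - len(P[k - 1])))
        P.append(term)
    return P[n]

print(shifted_legendre(2))   # [  1.  -6.   6.]             i.e. 6x^2 - 6x + 1
print(shifted_legendre(3))   # [ -1.  12. -30.  20.]        i.e. 20x^3 - 30x^2 + 12x - 1
print(shifted_legendre(4))   # [  1. -20.  90. -140.  70.]  i.e. 70x^4 - 140x^3 + 90x^2 - 20x + 1
```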
For a given time step $\delta t$, the matrix $A$ can be expanded on the interval $[0, \delta t]$ in a series of Legendre polynomials given by (see also [1])
\[
A(t) = \frac{1}{\delta t}\sum_{k=1}^{N} A_k\,P_{k-1}\!\left(\frac{t}{\delta t}\right) + \mathcal{O}(\delta t^{N+1}), \qquad 0 \leq t \leq \delta t, \tag{12}
\]
where $P_k$, $k = 0, 1, 2, \ldots$ are the shifted Legendre polynomials defined in (11) and where the coefficients $A_k$ are given by
\[
A_k = (2k-1)\int_0^{\delta t} A(t)\,P_{k-1}\!\left(\frac{t}{\delta t}\right)\mathrm{d}t = (2k-1)\,\delta t \int_0^1 A(x\,\delta t)\,P_{k-1}(x)\,\mathrm{d}x.
\]
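In practice the coefficients $A_k$ can be approximated by quadrature; the following sketch is illustrative only (the proposed schemes will instead use the two-point Gauss–Legendre rule (14)) and evaluates the formula above for a matrix-valued function A, with an assumed number of quadrature nodes n_quad.

```python
import numpy as np
from numpy.polynomial.legendre import leggauss, legval

def legendre_coefficients(A, t0, dt, N, n_quad=6):
    """Approximate A_k = (2k-1) * dt * int_0^1 A(t0 + x*dt) P_{k-1}(x) dx  (k = 1..N)
    by Gauss-Legendre quadrature; the P_k are the shifted Legendre polynomials."""
    x, w = leggauss(n_quad)                  # nodes and weights on [-1, 1]
    x, w = 0.5 * (x + 1.0), 0.5 * w          # mapped to [0, 1]
    coeffs = []
    for k in range(1, N + 1):
        c = np.zeros(k); c[-1] = 1.0         # selects the degree-(k-1) Legendre polynomial
        P = legval(2.0 * x - 1.0, c)         # shifted Legendre P_{k-1} evaluated on [0, 1]
        Ak = (2 * k - 1) * dt * sum(wi * Pi * A(t0 + xi * dt) for xi, wi, Pi in zip(x, w, P))
        coeffs.append(Ak)
    return coeffs
```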
Plugging the Legendre expansion (12) into the Cayley–Magnus expansion, one can express $\Omega(t)$ with respect to the coefficients $A_k$. The first three terms read
\[
\Omega_1(\delta t) = \frac{1}{\delta t}\sum_{n=1}^{N}\int_0^{\delta t} A_n\,P_{n-1}\!\left(\frac{\xi}{\delta t}\right)\mathrm{d}\xi + \mathcal{O}(\delta t^{N+1})
= \sum_{n=1}^{N}\left\{\int_0^1 P_{n-1}(x)\,\mathrm{d}x\right\} A_n + \mathcal{O}(\delta t^{N+1})
= A_1,
\]
since $\int_0^1 P_k(x)\,\mathrm{d}x = 0$ for $k \geq 1$ (by orthogonality with $P_0$) and $\int_0^1 P_0(x)\,\mathrm{d}x = 1$,
\[
\begin{aligned}
\Omega_2(\delta t) &= -\frac{1}{2}\int_0^{\delta t}\int_0^{\xi_1} [A(\xi_2), A(\xi_1)]\,\mathrm{d}\xi_2\,\mathrm{d}\xi_1 \\
&= -\frac{1}{2\,\delta t^2}\int_0^{\delta t}\int_0^{\xi_1}\left[\sum_{n=1}^{N} A_n P_{n-1}\!\left(\frac{\xi_2}{\delta t}\right), \sum_{k=1}^{N} A_k P_{k-1}\!\left(\frac{\xi_1}{\delta t}\right)\right]\mathrm{d}\xi_2\,\mathrm{d}\xi_1 \\
&= -\frac{1}{2}\sum_{n,k=1}^{N}\left\{\int_0^1\int_0^{x_1} P_{n-1}(x_2)\,P_{k-1}(x_1)\,\mathrm{d}x_2\,\mathrm{d}x_1\right\}[A_n, A_k] + \mathcal{O}(\delta t^{N+1}),
\end{aligned}
\]
\[
\begin{aligned}
\Omega_3(\delta t) &= \frac{1}{4}\int_0^{\delta t}\int_0^{\xi_1}\int_0^{\xi_2} [[A(\xi_3), A(\xi_2)], A(\xi_1)]\,\mathrm{d}\xi_3\,\mathrm{d}\xi_2\,\mathrm{d}\xi_1
- \frac{1}{4}\int_0^{\delta t}\left\{\int_0^{\xi_1} A(\xi_2)\,\mathrm{d}\xi_2\right\} A(\xi_1)\left\{\int_0^{\xi_1} A(\xi_2)\,\mathrm{d}\xi_2\right\}\mathrm{d}\xi_1 \\
&= \frac{1}{4}\sum_{n,m,k=1}^{N}\left\{\int_0^1\int_0^{x_1}\int_0^{x_2} P_{n-1}(x_3)\,P_{m-1}(x_2)\,P_{k-1}(x_1)\,\mathrm{d}x_3\,\mathrm{d}x_2\,\mathrm{d}x_1\right\}[[A_n, A_m], A_k] \\
&\quad - \frac{1}{4}\sum_{n,m,k=1}^{N}\left(\int_0^1\left\{\int_0^{x_1} P_{n-1}(x_2)\,\mathrm{d}x_2\right\} P_{m-1}(x_1)\left\{\int_0^{x_1} P_{k-1}(x_2)\,\mathrm{d}x_2\right\}\mathrm{d}x_1\right) A_n A_m A_k + \mathcal{O}(\delta t^{N+1}).
\end{aligned}
\]
Next, we will denote by $\Omega_k^{[N]}(\delta t)$ the truncation of $\Omega_k(\delta t)$ to the first $N$ terms of the Legendre coefficients.

Proposition 2.1. For a given time-step $\delta t$, the first three terms of the Cayley–Magnus expansion combined with a Legendre expansion of $A$ truncated at $N = 2$ are given by
\[
\Omega_1(\delta t) = \Omega_1^{[2]}(\delta t) = A_1, \qquad
\Omega_2^{[2]}(\delta t) = -\frac{1}{6}[A_1, A_2],
\]
\[
\Omega_3^{[2]}(\delta t) = -\frac{1}{12}A_1^3 - \frac{1}{120}A_2^3 + \frac{1}{60}A_1 A_2^2 - \frac{1}{30}A_2 A_1 A_2 + \frac{1}{60}A_2^2 A_1.
\]
Moreover, we have
\[
\Omega(\delta t) = \Omega_1(\delta t) + \Omega_2^{[2]}(\delta t) + \Omega_3^{[2]}(\delta t) + \mathcal{O}(\delta t^4). \tag{13}
\]
Remark 3. For any $n \in \mathbb{N}^*$, $A_n$ is a term of order $\delta t^n$. This can easily be seen by comparing the Legendre expansion with an expansion $A(t) = \sum_{m \geq 1} a_m t^{m-1}$ in powers of $t$. Since we are looking to build a fourth-order scheme, according to the expressions of $\Omega_1$, $\Omega_2$ and $\Omega_3$, only the first two terms of this expansion, i.e. $A_1$ and $A_2$, will be relevant for us.

The approximation in (13) is a third-order approximation when the integrals defining $A_1$ and $A_2$ are computed exactly. However, the order increases to four when the two-point Gauss–Legendre quadrature is used to approximate these integrals, i.e. the quadrature error exactly cancels the leading error term of the Cayley expansion. This latter point has been discussed by Iserles [24]. Thus, taking
\[
\mathcal{A}_1 = A\!\left(t_n + \Big(\tfrac{1}{2} - \tfrac{\sqrt{3}}{6}\Big)\delta t\right), \qquad
\mathcal{A}_2 = A\!\left(t_n + \Big(\tfrac{1}{2} + \tfrac{\sqrt{3}}{6}\Big)\delta t\right),
\]
and
\[
A_1 = \frac{\delta t}{2}\left(\mathcal{A}_1 + \mathcal{A}_2\right), \qquad A_2 = \frac{\sqrt{3}\,\delta t}{2}\left(\mathcal{A}_2 - \mathcal{A}_1\right), \tag{14}
\]
one has the following result.
Proposition 2.2. Given a time step $\delta t$, the following approximation holds,
\[
\Omega(t_n + \delta t) = A_1 - \frac{1}{6}[A_1, A_2] - \frac{1}{12}A_1^3 + \mathcal{O}(\delta t^5), \tag{15}
\]
with $A_1$ and $A_2$ defined in (14).

Proof. From Proposition 2.1, we have
\[
\Omega(\delta t) = \Omega_1(\delta t) + \Omega_2^{[2]}(\delta t) + \Omega_3^{[2]}(\delta t) + \mathcal{O}(\delta t^4).
\]
However, apart from $A_1^3$, all the other terms in $\Omega_3^{[2]}$ are of order $\delta t^5$ or higher, since $A_k$ is of order $\delta t^k$ (see Remark 3), so we first obtain
\[
\Omega(\delta t) = A_1 - \frac{1}{6}[A_1, A_2] - \frac{1}{12}A_1^3 + \mathcal{O}(\delta t^4).
\]
Approximating $A_1$ and $A_2$ by the Gauss–Legendre quadrature as defined in (14), $\Omega_3$ becomes symmetric so that the order of the error is even, see [24, Sec. 4.2]. Thus the order increases to $\delta t^5$.
Using the Gauss–Legendre quadrature given above to compute $A_1$ and $A_2$, the Cayley–Magnus Time-propagator (CMT) scheme (15) is already a fourth-order approximation scheme, as desired (see also [24]). However, it still contains commutators, which can make the implementation expensive for large systems, can reduce sparsity, and may complicate the structure of the problem. The aim of the following section is to use an idea similar to the exponential commutator-free method developed in [1], in our case using the Cayley transform. To this end, we require a BCH-type formula for the Cayley transform.
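Before turning to the commutator-free variant, the following is a minimal sketch of one step of the CMT scheme (15), assuming A is a function returning a matrix in $\mathfrak{g}$; the Cayley transform is realized through a single linear solve. This is only an illustration, not the authors' implementation.

```python
import numpy as np

def cay(Om):
    I = np.eye(Om.shape[0], dtype=complex)
    return np.linalg.solve(I - Om / 2, I + Om / 2)

def cmt_step(A, tn, dt, Y):
    """One step of the Cayley-Magnus time-propagator (15):
    Y_{n+1} = Cay(A1 - [A1, A2]/6 - A1^3/12) Y_n, with A1, A2 from (14)."""
    c = np.sqrt(3.0) / 6.0
    G1 = A(tn + (0.5 - c) * dt)          # Gauss-Legendre evaluations of A
    G2 = A(tn + (0.5 + c) * dt)
    A1 = 0.5 * dt * (G1 + G2)
    A2 = 0.5 * np.sqrt(3.0) * dt * (G2 - G1)
    Om = A1 - (A1 @ A2 - A2 @ A1) / 6.0 - (A1 @ A1 @ A1) / 12.0
    return cay(Om) @ Y
```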
3 Commutator-free Cayley scheme
3.1 BCH-formula for Cayley transform
For the derivation of the fourth-order commutator-free Cayley methods, we need to express the
product of three Cayley transforms as a single Cayley transform up to order four accuracy. While
this can be obtained by two applications of the Cayley-BCH formula developed by Iserles and
Zanna [28], for the sake of concreteness we show the derivation explicitly up to order four.
Proposition 3.1. Let $A, B, C \in \mathfrak{g}$ be three matrices in a neighborhood of $0$. Then the following formula holds
\[
\mathrm{Cay}(A)\,\mathrm{Cay}(B)\,\mathrm{Cay}(C) = \mathrm{Cay}(\Omega(A, B, C)), \tag{16}
\]
with
\[
\begin{aligned}
\Omega(A, B, C) &= A + B + C + \frac{1}{2}\big([A, B] + [A, C] + [B, C]\big) + \frac{1}{4}[[A, B], C] \\
&\quad - \frac{1}{4}(ACB + BCA) - \frac{1}{4}(ABA + ACA + BAB + BCB + CAC + CBC) + F(A, B, C),
\end{aligned} \tag{17}
\]
where $F$ is a series of homogeneous polynomials in $A$, $B$, $C$ of degrees $m$ with $m \geq 4$.
Proof. First, let us observe that for a small enough neighborhood of $0$ one has $2 \notin \sigma(A) \cup \sigma(B) \cup \sigma(C)$, so that all Cayley transforms are well defined. Now, writing $\mathrm{Cay}(A)\,\mathrm{Cay}(B)\,\mathrm{Cay}(C) = \mathrm{Cay}(\Omega(A, B, C))$, we get
\[
\begin{aligned}
&\mathrm{Cay}(A)\,\mathrm{Cay}(B)\,\mathrm{Cay}(C) = \left(I - \frac{\Omega(A,B,C)}{2}\right)^{-1}\left(I + \frac{\Omega(A,B,C)}{2}\right) \\
&\implies \left(I - \frac{\Omega(A,B,C)}{2}\right)\mathrm{Cay}(A)\,\mathrm{Cay}(B)\,\mathrm{Cay}(C) = I + \frac{\Omega(A,B,C)}{2} \\
&\implies \frac{\Omega(A,B,C)}{2}\big(I + \mathrm{Cay}(A)\,\mathrm{Cay}(B)\,\mathrm{Cay}(C)\big) = \mathrm{Cay}(A)\,\mathrm{Cay}(B)\,\mathrm{Cay}(C) - I \\
&\implies \Omega(A, B, C) = -2\left(I - \left(I - \tfrac{A}{2}\right)^{-1}\!\left(I + \tfrac{A}{2}\right)\left(I - \tfrac{B}{2}\right)^{-1}\!\left(I + \tfrac{B}{2}\right)\left(I - \tfrac{C}{2}\right)^{-1}\!\left(I + \tfrac{C}{2}\right)\right) \\
&\qquad\qquad\qquad\quad \cdot \left(I + \left(I - \tfrac{A}{2}\right)^{-1}\!\left(I + \tfrac{A}{2}\right)\left(I - \tfrac{B}{2}\right)^{-1}\!\left(I + \tfrac{B}{2}\right)\left(I - \tfrac{C}{2}\right)^{-1}\!\left(I + \tfrac{C}{2}\right)\right)^{-1}.
\end{aligned}
\]
A series expansion of this last relation leads to the desired result (details of the computation can be found in Appendix B).
Remark 4. For a fourth-order scheme, the terms of the formula obtained in Proposition 3.1, ignoring $F(A, B, C)$, are sufficient. However, one needs to compute more terms if a scheme of order higher than four is sought.

The Cayley transform versions of the usual BCH- and sBCH-formulas [28] can then be deduced from Proposition 3.1 by setting $C = 0$ and $C = A$, respectively.
Corollary 3.1 (BCH-formula). Considering $A, B \in \mathfrak{g}$ in a neighborhood of $0$, one has
\[
\mathrm{Cay}(A)\,\mathrm{Cay}(B) = \mathrm{Cay}(\Omega(A, B)), \tag{18}
\]
with
\[
\Omega = A + B + \frac{1}{2}[A, B] - \frac{1}{4}(ABA + BAB) + F(A, B), \tag{19}
\]
where $F$ is a series of homogeneous polynomials in $A$, $B$ of degrees $m$ with $m \geq 4$.
Remark 5. If we consider the general $c$-Cayley transform, then
\[
\mathrm{Cay}_{c_1}(A)\,\mathrm{Cay}_{c_2}(B) = \mathrm{Cay}(\Omega)
\]
leads to
\[
\begin{aligned}
\Omega &= \frac{2x_1}{|c_1|^2}A + \frac{2x_2}{|c_2|^2}B + \frac{2x_1x_2}{|c_1|^2|c_2|^2}[A, B]
- \frac{2x_1x_2}{|c_1|^2|c_2|^2}\left(\frac{x_1}{|c_1|^2}ABA + \frac{x_2}{|c_2|^2}BAB\right) \\
&\quad + 2i\left(\frac{y_1}{c_1|c_1|^4}A^3 + \frac{y_2}{c_2|c_2|^4}B^3 + \frac{x_1x_2y_2}{|c_1|^2|c_2|^4}B^2A - \frac{x_1x_2y_1}{|c_1|^4|c_2|^2}A^2B - \frac{2x_1x_2y_2}{|c_1|^2|c_2|^4}AB^2
- \frac{y_1}{|c_1|^4}A^2 - \frac{y_2}{|c_2|^4}B^2\right) + F(A, B),
\end{aligned}
\]
with $c_1 = x_1 + iy_1$, $c_2 = x_2 + iy_2$ and where $F(A, B)$ is as in the previous corollary.
Corollary 3.2 (sBCH-formula). Considering $A, B \in \mathfrak{g}$ in a neighborhood of $0$, one has
\[
\mathrm{Cay}(A)\,\mathrm{Cay}(B)\,\mathrm{Cay}(A) = \mathrm{Cay}(\Omega(A, B)), \tag{20}
\]
where
\[
\Omega = 2A + B - \frac{1}{2}\left(A^2B + BA^2 + BAB + A^3\right) + F(A, B), \tag{21}
\]
where $F$ is a series of homogeneous polynomials in $A$ and $B$ of degree $m$ with $m \geq 4$.
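The truncated formula (17) can be verified numerically: for matrices of size $\varepsilon$ the defect between the product of Cayley transforms and the Cayley transform of the truncated $\Omega$ should scale like $\varepsilon^4$. The following sketch (illustrative only, with helper names chosen here) performs this check for random skew-Hermitian matrices.

```python
import numpy as np

def cay(M):
    I = np.eye(M.shape[0], dtype=complex)
    return np.linalg.solve(I - M / 2, I + M / 2)

def bch3(A, B, C):
    """Right-hand side of (17) truncated after the third-order terms."""
    br = lambda X, Y: X @ Y - Y @ X
    return (A + B + C
            + 0.5 * (br(A, B) + br(A, C) + br(B, C))
            + 0.25 * br(br(A, B), C)
            - 0.25 * (A @ C @ B + B @ C @ A)
            - 0.25 * (A @ B @ A + A @ C @ A + B @ A @ B
                      + B @ C @ B + C @ A @ C + C @ B @ C))

rng = np.random.default_rng(1)
n, eps = 5, 1e-2
def small_skew():
    M = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    return eps * (M - M.conj().T)

A, B, C = small_skew(), small_skew(), small_skew()
err = np.linalg.norm(cay(A) @ cay(B) @ cay(C) - cay(bch3(A, B, C)))
print(err)   # should scale like eps**4 as eps is decreased
```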
3.2 Commutator-Free Cayley Time-propagator scheme (CFCT)
In this section, in analogy to the commutator-free quasi-Magnus integrators, we seek a fourth-order approximation of the form
\[
Y(\delta t) \approx Y_1 := \mathrm{Cay}(\Omega_1(\delta t))\,\mathrm{Cay}(\Omega_2(\delta t))\,\mathrm{Cay}(\Omega_3(\delta t))\,Y_0 = \mathrm{Cay}(\widetilde{\Omega}(\delta t))\,Y_0, \tag{22}
\]
with $\Omega_i = \sum_{k=1}^{2} \alpha_{i,k} A_k$, $i = 1, 2, 3$, where the $A_k$ are the Gauss–Legendre coefficients of $A$ defined in (14) and $\alpha_{i,k} \in \mathbb{R}$. Recall that the Lie algebra $\mathfrak{g}$ has the structure of a real vector space. If $A \in \mathfrak{g}$ then the Gauss–Legendre coefficients are elements of $\mathfrak{g}$. Since we are seeking a Lie group structure-preserving scheme, we seek real coefficients $\alpha_{i,k}$ (rather than complex coefficients) such that $\Omega_i \in \mathfrak{g}$ is guaranteed. We want to find, if they exist, coefficients $\alpha_{i,k}$, $k = 1, 2$ and $i = 1, 2, 3$, such that approximation (22) is of fourth order. Using the Cayley–BCH formula from Proposition 3.1, we get
\[
\begin{aligned}
\widetilde{\Omega} &= \Omega_1 + \Omega_2 + \Omega_3
- \frac{1}{4}\big(\Omega_1\Omega_2\Omega_1 + \Omega_1\Omega_3\Omega_1 + \Omega_2\Omega_1\Omega_2 + \Omega_2\Omega_3\Omega_2 + \Omega_3\Omega_1\Omega_3 + \Omega_3\Omega_2\Omega_3\big) \\
&\quad - \frac{1}{4}\big(\Omega_1\Omega_3\Omega_2 + \Omega_2\Omega_3\Omega_1\big)
+ \frac{1}{2}\big([\Omega_1, \Omega_2] + [\Omega_1, \Omega_3] + [\Omega_2, \Omega_3]\big)
+ \frac{1}{4}[[\Omega_1, \Omega_2], \Omega_3] \\
&= \sum_{i=1}^{3}\sum_{j=1}^{2} \alpha_{ij} A_j
+ \frac{1}{2}\sum_{i,j=1}^{2}\big(\alpha_{1i}\alpha_{2j} + \alpha_{1i}\alpha_{3j} + \alpha_{2i}\alpha_{3j}\big)[A_i, A_j] \\
&\quad - \frac{1}{4}\sum_{i,j,k=1}^{2}\big(\alpha_{1i}\alpha_{3j}\alpha_{2k} + \alpha_{2i}\alpha_{3j}\alpha_{1k} + \alpha_{1i}\alpha_{2j}\alpha_{1k} + \alpha_{1i}\alpha_{3j}\alpha_{1k} + \alpha_{2i}\alpha_{1j}\alpha_{2k} + \alpha_{2i}\alpha_{3j}\alpha_{2k} \\
&\qquad\qquad\qquad + \alpha_{3i}\alpha_{1j}\alpha_{3k} + \alpha_{3i}\alpha_{2j}\alpha_{3k} + \alpha_{2i}\alpha_{1j}\alpha_{3k} + \alpha_{3i}\alpha_{1j}\alpha_{2k} - \alpha_{1i}\alpha_{2j}\alpha_{3k} - \alpha_{3i}\alpha_{2j}\alpha_{1k}\big)A_i A_j A_k.
\end{aligned}
\]
On the other hand, from (15) one has
\[
\Omega(\delta t) = A_1(\delta t) - \frac{1}{6}[A_1(\delta t), A_2(\delta t)] - \frac{1}{12}A_1^3(\delta t) + \mathcal{O}(\delta t^5).
\]
Equating the two expressions for $\Omega(\delta t)$ and $\widetilde{\Omega}$, we obtain nonlinear relations for the coefficients. A solution is provided by
\[
\alpha_{31} = \alpha_{11}, \qquad \alpha_{21} = 1 - 2\alpha_{11}, \qquad \alpha_{12} = -\alpha_{32} = \alpha_{11} - \alpha_{11}^2, \qquad \alpha_{22} = 0, \qquad\text{with}\quad \alpha_{11} = \frac{2^{1/3}}{3} + \frac{2^{2/3}}{6} + \frac{2}{3}. \tag{23}
\]
We obtain the following final scheme
\[
Y_1 = \mathrm{Cay}\big(\alpha_{11}A_1(\delta t) + \alpha_{12}A_2(\delta t)\big)\,\mathrm{Cay}\big(\alpha_{21}A_1(\delta t)\big)\,\mathrm{Cay}\big(\alpha_{11}A_1(\delta t) - \alpha_{12}A_2(\delta t)\big)\,Y_0. \tag{24}
\]
Proposition 3.2. For a given time-step $\delta t$, we have
\[
Y(\delta t) = \mathrm{Cay}\big(\alpha_{11}A_1(\delta t) + \alpha_{12}A_2(\delta t)\big)\,\mathrm{Cay}\big(\alpha_{21}A_1(\delta t)\big)\,\mathrm{Cay}\big(\alpha_{11}A_1(\delta t) - \alpha_{12}A_2(\delta t)\big)\,Y_0 + \mathcal{O}(\delta t^5),
\]
with coefficients $\alpha_{ij}$ as in (23).

Remark 6. To simplify notation, we have considered only the interval $[t_0, t_0 + \delta t]$. However, since the operator $A$ is non-autonomous, the coefficients $A_k$ will naturally depend on $t_n$ for a given discretization $n = 1, \ldots, N$. So, the final scheme reads
\[
Y_{n+1} = \mathrm{Cay}\big(\alpha_{11}A_1(t_n, \delta t) + \alpha_{12}A_2(t_n, \delta t)\big)\,\mathrm{Cay}\big(\alpha_{21}A_1(t_n, \delta t)\big)\,\mathrm{Cay}\big(\alpha_{11}A_1(t_n, \delta t) - \alpha_{12}A_2(t_n, \delta t)\big)\,Y_n
\]
for a given time grid $t_0, t_1, \ldots, t_N$.
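A minimal sketch of one step of the resulting CFCT scheme, assuming A is a function returning the matrix $A(t) \in \mathfrak{g}$, reads as follows; only the coefficients $\alpha_{11}, \alpha_{12}, \alpha_{21}$ from (23) enter. This is an illustration under the stated assumptions, not the authors' implementation.

```python
import numpy as np

# Coefficients (23)
a11 = 2 ** (1 / 3) / 3 + 2 ** (2 / 3) / 6 + 2 / 3
a21 = 1 - 2 * a11
a12 = a11 - a11 ** 2

def cay(Om):
    I = np.eye(Om.shape[0], dtype=complex)
    return np.linalg.solve(I - Om / 2, I + Om / 2)

def cfct_step(A, tn, dt, Y):
    """One step of the commutator-free Cayley scheme (24)."""
    c = np.sqrt(3.0) / 6.0
    G1 = A(tn + (0.5 - c) * dt)
    G2 = A(tn + (0.5 + c) * dt)
    A1 = 0.5 * dt * (G1 + G2)                 # Gauss-Legendre combinations (14)
    A2 = 0.5 * np.sqrt(3.0) * dt * (G2 - G1)
    return cay(a11 * A1 + a12 * A2) @ cay(a21 * A1) @ cay(a11 * A1 - a12 * A2) @ Y
```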
Remark 7. In contrast to the exponential commutator-free method [1], we cannot expect a fourth-order scheme with the product of only two Cayley transforms. Indeed, if we write $Y_1 = \mathrm{Cay}(\Omega_1(\delta t))\,\mathrm{Cay}(\Omega_2(\delta t))\,Y_0 = \mathrm{Cay}(\widetilde{\Omega}(\delta t))\,Y_0$, then the coefficients $\alpha_{i,k}$ turn out to be complex. Complex coefficients, however, are not compatible with the structure of the Lie algebra $\mathfrak{g}$, which is a vector space over $\mathbb{R}$.
4 Examples
In this section we consider two examples to illustrate our results. The first one, a driven two-level quantum system, is a classical and well known system in quantum computing. It is a good starting point since it has also been considered in the context of the exponential commutator-free approach [1]. So, we will be able to compare our scheme with the fourth-order commutator-free exponential time-propagator CFET4:2 derived in [1] for a system where we have an analytical solution. In the second example we consider the Schrödinger equation with a time-dependent Hamiltonian in dimension one.

Time-dependent Schrödinger equations with explicitly time-dependent potentials occur naturally in quantum optimal control, where the potential generally contains a time-dependent control function (laser profile, magnetic field, etc.). Several approaches based on splitting methods have been developed for this type of problem (see for instance [6, 25, 39]). The interest of our approach is to propose an alternative to the use of exponentials for integrating the system. Indeed, in the context of quantum optimal control, the search for a solution involves an optimization process that usually requires a significant number of system integrations. The use of Cayley transforms instead of exponentials therefore makes sense, given that the geometric properties of the system are preserved while the solution is computed more cheaply.

For the first example, we consider a magnetic field such that we can analytically express the solution, which is taken as the reference solution. In the second example, we will consider CFET4:2 from [1] as the reference solution.
4.1 A driven two-level system
For our first test problem, we consider an example from [1], a driven two-level system, realized for instance by a spin $1/2$ in a magnetic field $\vec{B}(t) = (B_x(t), B_y(t), B_z(t))$. In the eigenbasis of the $z$-component of angular momentum, the Hamiltonian operator is defined by
\[
H(t) = \frac{1}{2}\begin{pmatrix} B_z(t) & B_x(t) - iB_y(t) \\ B_x(t) + iB_y(t) & -B_z(t) \end{pmatrix}.
\]
With the magnetic field $\vec{B}(t) = (2V\cos(2\omega t),\, 2V\sin(2\omega t),\, 2\Delta)$ the system is periodically driven and the propagator can be analytically expressed using Floquet theory. In this particular case, the Hamiltonian becomes
\[
H(t) = \begin{pmatrix} \Delta & V e^{-2i\omega t} \\ V e^{2i\omega t} & -\Delta \end{pmatrix},
\]
where $\Delta, V, \omega \in \mathbb{R}$. The system to solve is given by
\[
\dot{Y}(t) = A(t)\,Y(t), \qquad Y(t_0) = I, \qquad A(t) = -iH(t), \tag{25}
\]
with $Y(t) \in \mathrm{U}_2(\mathbb{C}) \subset \mathbb{C}^{2\times 2}$ and $t \in [0, T]$. The exact solution is given by
\[
Y(t) = \begin{pmatrix}
e^{-i\omega t}\left(\cos(\Lambda t) - i\,\frac{\Delta - \omega}{\Lambda}\sin(\Lambda t)\right) & -i\,\frac{V}{\Lambda}\,e^{-i\omega t}\sin(\Lambda t) \\[4pt]
-i\,\frac{V}{\Lambda}\,e^{i\omega t}\sin(\Lambda t) & e^{i\omega t}\left(\cos(\Lambda t) + i\,\frac{\Delta - \omega}{\Lambda}\sin(\Lambda t)\right)
\end{pmatrix},
\]
with $\Lambda = \sqrt{(\Delta - \omega)^2 + V^2}$. Notice that, in accordance with Floquet theory for periodically driven systems, $Y(\pi n/\omega, 0) = Y(\pi/\omega, 0)^n$ for any given $n \in \mathbb{N}$. Also, the transition probability spin up $\leftrightarrow$ spin down,
\[
P(t) = |Y_{21}(t, 0)|^2 = \left(\frac{V}{\Lambda}\right)^2 \sin^2(\Lambda t),
\]
is typical for a Breit–Wigner resonance.
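For completeness, a minimal sketch of the problem setup used in this experiment (the Hamiltonian as displayed above, and the exact Floquet propagator for error measurement); names and structure are illustrative and not taken from the authors' code.

```python
import numpy as np

Delta, V, omega = 0.5, 0.5, 1.0
Lam = np.sqrt((Delta - omega) ** 2 + V ** 2)

def A(t):
    """A(t) = -i H(t) for the periodically driven two-level system (25)."""
    H = np.array([[Delta, V * np.exp(-2j * omega * t)],
                  [V * np.exp(2j * omega * t), -Delta]], dtype=complex)
    return -1j * H

def Y_exact(t):
    """Exact propagator Y(t, 0) from Floquet theory."""
    c, s = np.cos(Lam * t), np.sin(Lam * t)
    return np.array([
        [np.exp(-1j * omega * t) * (c - 1j * (Delta - omega) / Lam * s),
         -1j * V / Lam * np.exp(-1j * omega * t) * s],
        [-1j * V / Lam * np.exp(1j * omega * t) * s,
         np.exp(1j * omega * t) * (c + 1j * (Delta - omega) / Lam * s)]])
```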
For the numerical simulations, we solve the system (25) using different numerical schemes. The first one is the Commutator-Free Exponential Time-propagator (denoted CFET4:2) from [1, Prop. 4.2], which is the exponential counterpart of the main method developed in this work. We also use the Cayley–Magnus Time-propagator (denoted CMT), given by equation (15) and already obtained by Iserles in [24], which contains nested commutators. Finally, we use the main method derived in this paper, namely the Commutator-Free Cayley Time-propagator (denoted CFCT), given by
\[
Y(t_{n+1}) \approx Y^{[\mathrm{CFCT}]}_{n+1} = \mathrm{Cay}\big(\alpha_{11}A_1(t_n, \delta t) + \alpha_{12}A_2(t_n, \delta t)\big)\,\mathrm{Cay}\big(\alpha_{21}A_1(t_n, \delta t)\big)\,\mathrm{Cay}\big(\alpha_{11}A_1(t_n, \delta t) - \alpha_{12}A_2(t_n, \delta t)\big)\,Y_n.
\]
We propagate until $T = 20\pi/\omega$, taking $\omega = 1$ and $\Delta = V = 0.5$. For the error analysis we consider the Euclidean norm in $\mathbb{C}^2$. The solutions as well as the error and the total energy during the propagation are displayed in Figure 1. Conservation of the norm (also displayed in the same figure) ensures the preservation of the transition probability $P(t)$ by the numerical scheme, which is not the case when using a classical scheme such as the Runge–Kutta method RK45.
4.2 Linear time-dependent Schrödinger equation
We consider the one-dimensional time-dependent Schrödinger equation,
\[
i\,\partial_t \phi(x, t) = H(x, t)\,\phi(x, t), \qquad \phi(x, 0) = \phi_0(x), \qquad t \in [0, T],\ x \in D = [-L, L], \tag{26}
\]
where the time-dependent Hamiltonian $H(x, t)$ is of the form
\[
H(x, t) = \partial_x^2 + V(x, t),
\]
with the potential $V(x, t) = V_0(x) + u(t)\,x$ containing an external potential (with a fixed control term $u(t)$ in this example). In this first case, we consider the internal potential $V_0$ and the laser profile $u(t)$ to be given by
\[
V_0(x) = x^4 - 10x^2, \qquad u(t) = c\sin(\omega t), \qquad c = -10^2, \qquad \omega = 5\pi.
\]
Figure 1: (Left) Projection of the solution of system (25) along the first axis, together with the error obtained for CFET4:2, CMT and CFCT, taking $T = 20\pi/\omega$, $\omega = 1$ and $\Delta = V = 0.5$. (Right) Illustration of the norm conservation during the propagation for CFET4:2, CMT and CFCT. The loss of this property when using a classical integrator such as RK45 is clearly visible.
We consider as initial state a Gaussian wave-packet,
\[
\phi_0(x) = e^{-\frac{(x - x_0)^2}{2\sigma^2}}, \qquad \sigma = 0.5, \qquad x_0 = -2.
\]
Following spatial discretisation, we have to solve the following system of ODEs,
\[
\partial_t \phi(t) = A(t)\,\phi(t), \qquad \phi(0) = \phi_0, \qquad A(t) = -iH(t),
\]
where $H(t)$ is now a matrix representation of the Hamiltonian. Specifically, we use a Fourier spectral discretization on an equispaced grid $-L = x_0, \ldots, x_N = L$, after imposing periodic boundary conditions.
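A minimal sketch of such a spatial discretization (illustrative only; the experiments in the paper use the expsolve package) builds a dense matrix representation of $A(t) = -iH(t)$ with the Hamiltonian as written above; the grid size N and the dense DFT matrices are chosen here for clarity, not efficiency.

```python
import numpy as np

L, N = 10.0, 256
x = np.linspace(-L, L, N, endpoint=False)           # periodic grid
k = 2 * np.pi * np.fft.fftfreq(N, d=2 * L / N)      # Fourier wave numbers

def V0(x):
    return x ** 4 - 10 * x ** 2

def u(t, c=-100.0, omega=5 * np.pi):
    return c * np.sin(omega * t)

def A(t):
    """Dense matrix representation of A(t) = -i H(t), H = d^2/dx^2 + V(x, t),
    via a Fourier spectral second-derivative operator (for illustration only)."""
    F = np.fft.fft(np.eye(N), axis=0)                # DFT matrix
    Finv = np.fft.ifft(np.eye(N), axis=0)            # inverse DFT matrix
    D2 = Finv @ np.diag(-(k ** 2)) @ F               # second-derivative operator
    H = D2 + np.diag(V0(x) + u(t) * x)
    return -1j * H
```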
We implement both CMT (with commutators) and CFCT (commutator-free) and compare both with the reference solution (which is given here by CFET4:2). For a time discretization $0 = t_0, \ldots, t_M = T$, CMT and CFCT respectively read
\[
\phi^{[\mathrm{CMT}]}_{n+1} = \mathrm{Cay}\Big(A_1(t_n, \delta t) - \frac{1}{6}[A_1(t_n, \delta t), A_2(t_n, \delta t)] - \frac{1}{12}A_1^3(t_n, \delta t)\Big)\,\phi^{[\mathrm{CMT}]}_n,
\]
\[
\phi^{[\mathrm{CFCT}]}_{n+1} = \mathrm{Cay}\big(\alpha_{11}A_1 + \alpha_{12}A_2\big)\,\mathrm{Cay}\big(\alpha_{21}A_1\big)\,\mathrm{Cay}\big(\alpha_{11}A_1 - \alpha_{12}A_2\big)\,\phi^{[\mathrm{CFCT}]}_n,
\]
with $\alpha_{11}, \alpha_{12}, \alpha_{21}$ defined in (23), where
\[
A_1 = \frac{\delta t}{2}\left(\mathcal{A}_1 + \mathcal{A}_2\right), \qquad A_2 = \frac{\sqrt{3}\,\delta t}{2}\left(\mathcal{A}_2 - \mathcal{A}_1\right),
\]
and
\[
\mathcal{A}_1 = A\!\left(t_n + \Big(\tfrac{1}{2} - \tfrac{\sqrt{3}}{6}\Big)\delta t\right), \qquad
\mathcal{A}_2 = A\!\left(t_n + \Big(\tfrac{1}{2} + \tfrac{\sqrt{3}}{6}\Big)\delta t\right).
\]
Numerical simulations. We check the conservation of the state norm $\|\phi(\cdot, t)\|_{L^2(D)}$ for each of these two methods and compute the error with respect to CFET4:2, see Figure 2. In addition, we compute the energy change during propagation, also shown in Figure 2. Note that, since we have a time-dependent Hamiltonian, the energy is no longer conserved during propagation. In Figure 3 one can see the blow-up of the energy when using the RK45 scheme. The implementation of these methods uses the expsolve package [40], which is utilized for initializing the Hamiltonian, computing observables such as the energy, and computing $L^2$ norms and inner products (using the l2norm and l2inner methods).
Figure 2: (Left) Solution of system (26) together with the error obtained for CMT and CFCT (taking CFET4:2 as reference solution), propagated until $T = 2$. (Right) Illustration of the norm conservation and of the change of the energy during the propagation for CFET4:2, CMT and CFCT. Again, there is no conservation of the norm along the propagation when using the classical integrator RK45.
A Details on the computation of coefficients
Writing out the expansion of $\widetilde{\Omega}$ from Section 3.2 and equating it with relation (15), one arrives at the system
\[
\begin{aligned}
&\alpha_{11} + \alpha_{21} + \alpha_{31} = 1, \\
&\alpha_{12} + \alpha_{22} + \alpha_{32} = 0, \\
&\alpha_{11}\alpha_{22} + \alpha_{11}\alpha_{32} + \alpha_{21}\alpha_{32} - \alpha_{12}\alpha_{21} - \alpha_{12}\alpha_{31} - \alpha_{22}\alpha_{31} = -\frac{1}{3}, \\
&2\alpha_{11}\alpha_{21}\alpha_{31} + \alpha_{11}^2\alpha_{21} + \alpha_{11}\alpha_{21}^2 + \alpha_{11}^2\alpha_{31} + \alpha_{11}\alpha_{31}^2 + \alpha_{21}^2\alpha_{31} + \alpha_{21}\alpha_{31}^2 = \frac{1}{3}, \\
&2\alpha_{11}\alpha_{22}\alpha_{31} + \alpha_{11}\alpha_{12}\alpha_{21} + \alpha_{11}\alpha_{12}\alpha_{31} + \alpha_{11}\alpha_{21}\alpha_{22} + \alpha_{11}\alpha_{31}\alpha_{32} + \alpha_{21}\alpha_{22}\alpha_{31} + \alpha_{21}\alpha_{31}\alpha_{32} = 0, \\
&2\alpha_{11}\alpha_{21}\alpha_{32} + \alpha_{11}^2\alpha_{22} + \alpha_{11}^2\alpha_{32} + \alpha_{12}\alpha_{21}^2 + \alpha_{21}^2\alpha_{32} + \alpha_{12}\alpha_{31}^2 + \alpha_{22}\alpha_{31}^2 + 2\alpha_{12}\alpha_{21}\alpha_{31} - 2\alpha_{11}\alpha_{22}\alpha_{31} = 0.
\end{aligned}
\]
Figure 3: Illustration of the energy blow-up when using RK45 to solve example 2.
We solved this system for real-valued coefficients $(\alpha_{11}, \alpha_{12}, \alpha_{21}, \alpha_{22}, \alpha_{31}, \alpha_{32})$ satisfying the required conditions given in Section 3 using a computer algebra system (Maple).
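The reported solution (23) can be checked directly against these order conditions; the following sketch (illustrative only) substitutes the coefficients into the six equations and prints the largest residual, which should be at round-off level.

```python
import numpy as np

# Coefficients (23)
a11 = 2 ** (1 / 3) / 3 + 2 ** (2 / 3) / 6 + 2 / 3
a31, a21 = a11, 1 - 2 * a11
a12 = a11 - a11 ** 2
a32, a22 = -a12, 0.0

# Residuals of the six order conditions above
residuals = [
    a11 + a21 + a31 - 1,
    a12 + a22 + a32,
    a11*a22 + a11*a32 + a21*a32 - a12*a21 - a12*a31 - a22*a31 + 1/3,
    2*a11*a21*a31 + a11**2*a21 + a11*a21**2 + a11**2*a31 + a11*a31**2
        + a21**2*a31 + a21*a31**2 - 1/3,
    2*a11*a22*a31 + a11*a12*a21 + a11*a12*a31 + a11*a21*a22 + a11*a31*a32
        + a21*a22*a31 + a21*a31*a32,
    2*a11*a21*a32 + a11**2*a22 + a11**2*a32 + a12*a21**2 + a21**2*a32
        + a12*a31**2 + a22*a31**2 + 2*a12*a21*a31 - 2*a11*a22*a31,
]
print(np.max(np.abs(residuals)))   # should be at round-off level
```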
B Details on the proof of Proposition 3.1
Recall that one has
\[
\begin{aligned}
\Omega(A, B, C) &= -2\left(I - \left(I - \tfrac{A}{2}\right)^{-1}\!\left(I + \tfrac{A}{2}\right)\left(I - \tfrac{B}{2}\right)^{-1}\!\left(I + \tfrac{B}{2}\right)\left(I - \tfrac{C}{2}\right)^{-1}\!\left(I + \tfrac{C}{2}\right)\right) \\
&\qquad \cdot \left(I + \left(I - \tfrac{A}{2}\right)^{-1}\!\left(I + \tfrac{A}{2}\right)\left(I - \tfrac{B}{2}\right)^{-1}\!\left(I + \tfrac{B}{2}\right)\left(I - \tfrac{C}{2}\right)^{-1}\!\left(I + \tfrac{C}{2}\right)\right)^{-1}.
\end{aligned}
\]
Taylor expansion of this relation gives
\[
\begin{aligned}
\Omega &= -2\left(I - \left(I + \tfrac{A}{2} + \tfrac{A^2}{4} + \tfrac{A^3}{8}\right)\left(I + \tfrac{A}{2}\right)\left(I + \tfrac{B}{2} + \tfrac{B^2}{4} + \tfrac{B^3}{8}\right)\left(I + \tfrac{B}{2}\right)\left(I + \tfrac{C}{2} + \tfrac{C^2}{4} + \tfrac{C^3}{8}\right)\left(I + \tfrac{C}{2}\right)\right) \\
&\qquad \cdot \left(I + \left(I + \tfrac{A}{2} + \tfrac{A^2}{4} + \tfrac{A^3}{8}\right)\left(I + \tfrac{A}{2}\right)\left(I + \tfrac{B}{2} + \tfrac{B^2}{4} + \tfrac{B^3}{8}\right)\left(I + \tfrac{B}{2}\right)\left(I + \tfrac{C}{2} + \tfrac{C^2}{4} + \tfrac{C^3}{8}\right)\left(I + \tfrac{C}{2}\right)\right)^{-1} + F(A, B, C) \\
&= -2\left(I - \left(I + A + \tfrac{A^2}{2} + \tfrac{A^3}{4}\right)\left(I + B + \tfrac{B^2}{2} + \tfrac{B^3}{4}\right)\left(I + C + \tfrac{C^2}{2} + \tfrac{C^3}{4}\right)\right) \\
&\qquad \cdot \left(I + \left(I + A + \tfrac{A^2}{2} + \tfrac{A^3}{4}\right)\left(I + B + \tfrac{B^2}{2} + \tfrac{B^3}{4}\right)\left(I + C + \tfrac{C^2}{2} + \tfrac{C^3}{4}\right)\right)^{-1} + F(A, B, C) \\
&= \left(A + B + C + AB + AC + BC + ABC + \tfrac{A^2}{2} + \tfrac{B^2}{2} + \tfrac{C^2}{2} + \tfrac{A^3}{4} + \tfrac{B^3}{4} + \tfrac{C^3}{4} + \tfrac{A^2B}{2} + \tfrac{AB^2}{2} + \tfrac{A^2C}{2} + \tfrac{AC^2}{2} + \tfrac{B^2C}{2} + \tfrac{BC^2}{2}\right) \\
&\qquad \cdot \left(I + \tfrac{A}{2} + \tfrac{B}{2} + \tfrac{C}{2} + \tfrac{AB}{2} + \tfrac{AC}{2} + \tfrac{BC}{2} + \tfrac{ABC}{2} + \tfrac{A^2}{4} + \tfrac{B^2}{4} + \tfrac{C^2}{4} + \tfrac{A^2B}{4} + \tfrac{AB^2}{4} + \tfrac{A^2C}{4} + \tfrac{AC^2}{4} + \tfrac{B^2C}{4} + \tfrac{BC^2}{4} + \tfrac{A^3}{8} + \tfrac{B^3}{8} + \tfrac{C^3}{8}\right)^{-1} \\
&\qquad + F(A, B, C) \\
&= \left(A + B + C + AB + AC + BC + ABC + \tfrac{A^2}{2} + \tfrac{B^2}{2} + \tfrac{C^2}{2} + \tfrac{A^3}{4} + \tfrac{B^3}{4} + \tfrac{C^3}{4} + \tfrac{A^2B}{2} + \tfrac{AB^2}{2} + \tfrac{A^2C}{2} + \tfrac{AC^2}{2} + \tfrac{B^2C}{2} + \tfrac{BC^2}{2}\right) \\
&\qquad \cdot \left(I - \tfrac{A}{2} - \tfrac{B}{2} - \tfrac{C}{2} - \tfrac{AB}{2} - \tfrac{AC}{2} - \tfrac{BC}{2} - \tfrac{A^2}{4} - \tfrac{B^2}{4} - \tfrac{C^2}{4} + \left(\tfrac{A}{2} + \tfrac{B}{2} + \tfrac{C}{2} + \tfrac{AB}{2} + \tfrac{AC}{2} + \tfrac{BC}{2} + \tfrac{A^2}{4} + \tfrac{B^2}{4} + \tfrac{C^2}{4}\right)^2\right) + F(A, B, C) \\
&= \left(A + B + C + AB + AC + BC + ABC + \tfrac{A^2}{2} + \tfrac{B^2}{2} + \tfrac{C^2}{2} + \tfrac{A^3}{4} + \tfrac{B^3}{4} + \tfrac{C^3}{4} + \tfrac{A^2B}{2} + \tfrac{AB^2}{2} + \tfrac{A^2C}{2} + \tfrac{AC^2}{2} + \tfrac{B^2C}{2} + \tfrac{BC^2}{2}\right) \\
&\qquad \cdot \left(I - \tfrac{A}{2} - \tfrac{B}{2} - \tfrac{C}{2} - \tfrac{AB}{4} - \tfrac{AC}{4} - \tfrac{BC}{4} + \tfrac{BA}{4} + \tfrac{CA}{4} + \tfrac{CB}{4}\right) + F(A, B, C) \\
&= A + B + C + \frac{1}{2}\big([A, B] + [A, C] + [B, C]\big) + \frac{1}{4}[[A, B], C] - \frac{1}{4}(ACB + BCA) \\
&\qquad - \frac{1}{4}(ABA + ACA + BAB + BCB + CAC + CBC) + F(A, B, C),
\end{aligned}
\]
with the terms in $F(A, B, C)$ being of at least fourth order. Since we assumed that $A$, $B$ and $C$ are in a neighborhood of zero, the series converges. Moreover, the Neumann series for $(I - A/2)^{-1}$ converges and the term $(I + \mathrm{Cay}(A)\,\mathrm{Cay}(B)\,\mathrm{Cay}(C))^{-1}$ becomes
\[
\begin{aligned}
&\left(2I + \left(A + B + C + AB + AC + BC + ABC + \tfrac{A^2}{2} + \tfrac{B^2}{2} + \tfrac{C^2}{2} + \tfrac{A^2B}{2} + \tfrac{AB^2}{2} + \tfrac{A^2C}{2} + \tfrac{AC^2}{2} + \tfrac{B^2C}{2} + \tfrac{BC^2}{2} + \tfrac{A^3}{4} + \tfrac{B^3}{4} + \tfrac{C^3}{4}\right) + \widetilde{F}(A, B, C)\right)^{-1} \\
&= \frac{1}{2}\left(I + \frac{1}{2}\left(A + B + C + AB + AC + BC + ABC + \tfrac{A^2}{2} + \tfrac{B^2}{2} + \tfrac{C^2}{2} + \tfrac{A^2B}{2} + \tfrac{AB^2}{2} + \tfrac{A^2C}{2} + \tfrac{AC^2}{2} + \tfrac{B^2C}{2} + \tfrac{BC^2}{2} + \tfrac{A^3}{4} + \tfrac{B^3}{4} + \tfrac{C^3}{4}\right) + \widetilde{F}(A, B, C)\right)^{-1},
\end{aligned}
\]
where $\widetilde{F}(A, B, C)$ contains terms related to truncated terms of the series expansions of $(I - \frac{A}{2})^{-1}$, $(I - \frac{B}{2})^{-1}$, and $(I - \frac{C}{2})^{-1}$. Once again, these converge and $\widetilde{F}(A, B, C)$ is small. Thus, $\frac{1}{2}(I + \mathrm{Cay}(A)\,\mathrm{Cay}(B)\,\mathrm{Cay}(C))$ is a small perturbation of the identity such that the Neumann series for $(I + \mathrm{Cay}(A)\,\mathrm{Cay}(B)\,\mathrm{Cay}(C))^{-1}$ converges.
References
[1] A. Alvermann, H. Fehske, High-order commutator-free exponential time-propagation
of driven quantum systems, J. Comput. Phys., 230 (2011), pp. 5930-5956,
DOI:10.1016/j.jcp.2011.04.006.
[2] W. Auzinger, T. Kassebacher, O. Koch and M. Thalhammer, Adaptive splitting methods for nonlinear Schrödinger equations in the semiclassical regime, Numer. Algor. 72 (2016), pp. 1–35, DOI:10.1007/s11075-015-0032-4.
[3] W. Auzinger, J. Dubois, K. Held, H. Hofstätter, T. Jawecki, A. Kauch, O. Koch, K. Kropielnicka, P. Singh, C. Watzenböck, Efficient Magnus-type integrators for solar energy conversion in Hubbard models, Journal of Computational Mathematics and Data Science, 2 (2022), 100018, DOI:10.1016/j.jcmds.2021.100018.
[4] J. Baum, R. Tycko, A. Pines, Broadband population inversion by phase modulated
pulses, J. Chem. Phys. 79 (1983), no. 9, pp. 4643–4644, DOI:10.1063/1.446381.
[5] S. Blanes, and F. Casas, A concise introduction to geometric numerical integration
(1st ed.), Chapman and Hall/CRC (2016). DOI:10.1201/b21563
[6] S. Blanes, F. Casas, and A. Murua, Splitting methods for differential equations,
arXiv:2401.01722 (2024), DOI:10.48550/arXiv.2401.01722.
[7] S. Blanes, F. Casas, J. A. Oteo, J. Ros, The Magnus expansion and some of its ap-
plications, Physics Reports 470 (2009), no. 151, DOI:10.1016/j.physrep.2008.11.001.
[8] S. Blanes, P.C. Moan, Fourth- and sixth-order commutator-free Magnus integrators
for linear and non-linear dynamical systems, Applied Numerical Mathematics, 56
(2006), no. 12, pp. 1519-1537, DOI:10.1016/j.apnum.2005.11.004.
[9] C. Brif, R. Chakrabarti and H. Rabitz, Control of quantum phenomena: past, present and future, New J. Phys. 12 (2010), 075008, DOI:10.1088/1367-2630/12/7/075008.
[10] E. Celledoni, E. Çokaj, A. Leone, D. Murari and B. Owren, Lie group integrators for mechanical systems, International Journal of Computer Mathematics, 99 (2022), no. 1, pp. 58-88, DOI:10.1080/00207160.2021.1966772.
[11] G. Chen, M. Foroozandeh, C. Budd, P. Singh, Quantum simulation of highly-oscillatory many-body Hamiltonians for near-term devices, arXiv:2312.08310, DOI:10.48550/arXiv.2312.08310.
[12] L. Dieci, R.D. Russell, E. Van Vleck, Unitary Integrators and Applications to Con-
tinuous Orthonormalization Techniques, SIAM Journal on Numerical Analysis, 31
(1994), no. 1, pp. 261-281, DOI:10.1137/0731014.
[13] F. Diele, L. Lopez and R. Peluso, The Cayley transform in the numerical solution
of unitary differential systems, Advances in Computational Mathematics 8(1998),
pp 317–334, DOI:10.1023/A:1018908700358.
[14] M. Foroozandeh, P. Singh, Optimal Control of Spins by Analytical Lie Algebraic Derivatives, Automatica 129 (2021), no. 109611, DOI:10.1016/j.automatica.2021.109611.
[15] S.J. Glaser, U. Boscain, T. Calarco, C.P. Koch, W. Köckenberger, R. Kosloff, I. Kuprov, B. Luy, S. Schirmer, T. Schulte-Herbrüggen, D. Sugny, F.K. Wilhelm, Training Schrödinger's cat: quantum optimal control, Eur. Phys. J. D 69 (2015), no. 12, DOI:10.1140/epjd/e2015-60464-1.
[16] S. Güttel, Rational Krylov approximation of matrix functions: Numerical methods and optimal pole selection, GAMM-Mitt., 36 (2013), pp. 8–31, DOI:10.1002/gamm.201310002.
[17] V. Grimm, Resolvent Krylov subspace approximation to operator functions, BIT.
Numerical Mathematics 52 (2012), pp. 639-659, DOI:10.1007/s10543-011-0367-8.
[18] B.C. Hall, Lie Groups, Lie Algebras, and Representations, Graduate Texts in Math-
ematics, Springer, New York (2003), DOI:10.1007/978-3-319-13467-3.
[19] P. Hänggi, Driven quantum systems, in: T. Dittrich, P. Hänggi, G.-L. Ingold, B. Kramer, G. Schön, W. Zwerger (Eds.), Quantum Transport and Dissipation, Wiley-VCH, Weinheim, 1997, pp. 249–286, https://books.google.de/books.
[20] E. Hairer, C. Lubich, G. Wanner, Geometric Numerical Integration, Springer, Berlin
(2006), DOI:10.1007/3-540-30666-8
[21] M. Hochbruck and C. Lubich, On Magnus Integrators for Time-Dependent Schrödinger Equations, SIAM Journal on Numerical Analysis, 41 (2003), no. 3, DOI:10.1137/S0036142902403875.
[22] M. Hochbruck and C. Lubich, On Krylov Subspace Approximations to the Matrix Exponential Operator, SIAM Journal on Numerical Analysis, 34 (1997), no. 5, DOI:10.1137/S0036142995280572.
[23] U. Hohenester, P.K. Rekdal, A. Borzì, J. Schmiedmayer, Optimal quantum control of Bose-Einstein condensates in magnetic microtraps, Physical Review A, 75, Issue 2, id. 023602, DOI:10.1103/PhysRevA.90.033628.
[24] A. Iserles, On Cayley-transform methods for the discretization of Lie-group equations, Found. Comput. Math. 1 (2001), pp. 129-160, DOI:10.1007/s102080010003.
[25] A. Iserles, K. Kropielnicka, and P. Singh, Magnus–Lanczos Methods with Simplified Commutators for the Schrödinger Equation with a Time-Dependent Potential, SIAM Journal on Numerical Analysis, 56 (2018), no. 3, DOI:10.1137/17M1149833.
[26] P. Bader, A. Iserles, K. Kropielnicka, and P. Singh, Efficient methods for linear Schrödinger equation in the semiclassical regime with time-dependent potential, Proc. R. Soc. A 472: 20150733 (2016), DOI:10.1098/rspa.2015.0733.
[27] A. Iserles, H.Z. Munthe-Kaas, S.P. Nørsett and A. Zanna, Lie-group methods, Acta Numerica 9 (2000), pp. 215-365, DOI:10.1017/S0962492900002154.
[28] A. Iserles and A. Zanna, On the Dimension of Certain Graded Lie Algebras Arising in Geometric Integration of Differential Equations, LMS Journal of Computation and Mathematics, 3 (2000), pp. 44-75, DOI:10.1112/S1461157000000206.
[29] K. Ito and K. Kunisch, Optimal bilinear control of an abstract Schrödinger equation, SIAM Journal on Control and Optimization, 46 (2007), Iss. 1, DOI:10.1137/05064254X.
[30] T. Jawecki, P. Singh, Unitary rational best approximations to the exponential function, arXiv:2312.13809.
[31] K. Kormann, S. Holmgren, H.O. Karlsson, Accurate time propagation for the Schrödinger equation with an explicitly time-dependent Hamiltonian, J. Chem. Phys. 128 (2008), no. 18, 184101, DOI:10.1063/1.2916581.
[32] R.I. McLachlan and G.R.W. Quispel, Splitting methods, Acta Numerica, 11 (2002), pp. 341-434, DOI:10.1017/S0962492902000053.
[33] A. Marthinsen and B. Owren, Quadrature methods based on the Cayley
transform, Applied Numerical Mathematics, 39 (2001), no. 3–4, pp. 403-413,
DOI:10.1016/S0168-9274(01)00087-3.
[34] W. Magnus, On the exponential solution of differential equations for a linear operator, Commun. Pure Appl. Math. 7 (1954), pp. 649-673, DOI:10.1002/cpa.3160070404.
[35] D.E. Manolopoulos, Derivation and reflection properties of a transmission-free absorb-
ing potential, J. Chem. Phys. 117 (2002), pp. 9552–9559, DOI:10.1063/1.1517042.
[36] C. Moler and C. Van Loan, Nineteen Dubious Ways to Compute the Exponential of a Matrix, Twenty-Five Years Later, SIAM Review 45 (2003), pp. 3-49, DOI:10.1137/S00361445024180.
[37] J.A. Oteo, The Baker–Campbell–Hausdorff formula and nested commutator identi-
ties, J. Math. Phys. 32 (1991), pp. 419–424, DOI:10.1063/1.529428.
[38] H. Munthe-Kaas and B. Owren, Computations in a free Lie algebra, Phil. Trans. R. Soc. A. 357 (1999), pp. 957–981, DOI:10.1098/rsta.1999.0361.
[39] P. Singh, Sixth-order schemes for laser–matter interaction in the Schrödinger equation, J. Chem. Phys. 150 (2019), no. 15, 154111, DOI:10.1063/1.5065902.
[40] P. Singh and D. Goodacre, expsolve: A differentiable numerical algorithms package for
computational quantum mechanics, Zenodo (2024). DOI:10.5281/zenodo.13121390.
[41] Y. Saad, Iterative methods for sparse linear systems, Society for Industrial and
Applied Mathematics, 2003.
[42] E. Van Reeth, H. Rafiney, M. Tesch, S.J. Glaser, D. Sugny, Optimizing mri contrast
with b1 pulses using optimal control theory, in: Proc. IEEE Int. Symp. Biomed.
Imaging, 2016, pp. 310–313, DOI:10.1109/ISBI.2016.7493271
[43] AJ. Wathen, Preconditioning, Acta Numerica. 24 (2015), pp. 329-376.
DOI:10.1017/S0962492915000021.