Commutator-free Cayley methods
S. Maslovskaya, C. Offen, S. Ober-Blöbaum, P. Singh, B. Wembe
Abstract
Differential equations posed on quadratic matrix Lie groups arise in the context of classical mechanics and quantum dynamical systems. Lie group numerical integrators preserve the constants of motion defining the Lie group. Thus, they respect important physical laws of the dynamical system, such as unitarity and energy conservation in the context of quantum dynamical systems, for instance. In this article we develop a high-order commutator-free Lie group integrator for non-autonomous differential equations evolving on quadratic Lie groups. Instead of matrix exponentials, which are expensive to evaluate and need to be approximated by appropriate rational functions in order to preserve the Lie group structure, the proposed method is obtained as a composition of Cayley transforms, which naturally respect the structure of quadratic Lie groups while being computationally efficient to evaluate. Unlike Cayley–Magnus methods, the method is also free from nested matrix commutators.
Keywords. Lie group integrators, Cayley transform, Magnus expansion, Non-autonomous, Commutator-free methods.
1 Introduction
In this article we are concerned with non-autonomous linear ordinary and partial differential equations,
$$\dot Y(t) = A(t)\,Y(t), \qquad Y(t_0) = Y_0, \qquad t \in [t_0, t_f], \tag{1}$$
where the solution $Y(t) \in G$ evolves on Lie groups of the form
$$G = \{X \in \mathrm{GL}_n(\mathbb{C}) : X^* J X = J\}, \tag{2}$$
where $\mathrm{GL}_n(\mathbb{C})$ is the group of $n \times n$ non-singular complex matrices and where $J \in \mathrm{GL}_n(\mathbb{C})$ is a given matrix (see for instance [5, 10, 20]). The condition $Y(t) \in G$ for all $t$ is fulfilled if and only if $\dot Y(t) = A(t)Y(t)$ is tangential to the Lie group $G$. This is the case if and only if $A(t)$ takes values in the Lie algebra $\mathfrak{g} = \{\Omega \in \mathbb{C}^{n\times n} : \Omega^* J + J\Omega = 0\}$ of $G$.
Differential equations of this form describe a wide range of physical systems with numerous practical applications. Some well known examples are the symplectic group $\mathrm{Sp}_n(\mathbb{C})$, with $J = \begin{pmatrix} 0 & I \\ -I & 0\end{pmatrix}$, related to the investigation of Hamiltonian systems, and the Lorentz group $\mathrm{SO}_{3,1}(\mathbb{R}) = \mathrm{O}_{3,1}(\mathbb{R}) \cap \mathrm{SL}_4(\mathbb{R})$, with $J = \mathrm{diag}(1,1,1,-1)$, related to the study of special relativity. Another important example is the unitary group $\mathrm{U}_n(\mathbb{C})$, with $J = I$, which is germane to the investigation of quantum systems and has received particular attention over the last decades. Lie groups of this form were termed quadratic Lie groups in [24].
In the design of numerical schemes for (1), a key requirement is often that the approximate solution respects the conservation laws of the underlying physical system. However, preservation of the complex quadratic invariant of motion $Y^* J Y - J = 0$ can be challenging: If $J = I$ (unitary group), for instance, then Gauss–Legendre collocation methods are the only Runge–Kutta methods that preserve the invariant [12]. Rather than seeking an
integrator that can preserve the conserved quantities defining G, which broadly defines the
field of geometric numerical integration [20, 5], Lie group methods [27] are a narrower class
of methods that exploit the intrinsic geometric Lie group structure. By ensuring that the
approximate solution evolves on the Lie group, these methods ensure that the symmetries
of the system and the corresponding conservation laws are respected.
Concerns such as computational accuracy, processing time, and memory usage, which
motivate algorithm development for ODEs and PDEs more broadly, also remain of paramount
importance when solving ODEs and PDEs on Lie groups such as (1). Quantum optimal
control algorithms [4, 9, 14, 15, 42], for instance, require repeated integration of the underlying quantum differential systems, which needs to be fast and accurate while preserving
the geometric properties of the system such as unitarity and conservation of energy. The
aim of this work is to develop numerical integrators for non-autonomous linear differential
equations of the form (1) which, in a context such as that of quantum optimal control, are
accurate and fast, while ensuring that the numerical solution evolves on the Lie group (2).
A prominent tool in the design of Lie group methods for solving (1) has been the Magnus expansion [27, 34]. The general idea of the Magnus approach is to write the solution as $Y(t) = e^{\Omega(t)}Y_0$ and expand $\Omega(t)$ into an infinite series,
$$\Omega(t) = \int_0^t A(t_1)\,\mathrm{d}t_1 + \frac{1}{2}\int_0^t\!\!\int_0^{t_1} [A(t_1), A(t_2)]\,\mathrm{d}t_2\,\mathrm{d}t_1 + \cdots,$$
involving nested commutators of $A(t)$ at different times. Truncating the series at a specified order and computing its exponential yields an approximation of the solution to that order.
The resulting numerical schemes, although very accurate, face many practical difficulties due to the presence of nested commutators in the Magnus expansion: the prior computation of these commutators can be very expensive, the number of commutators increases rapidly with the order of accuracy required [38], they reduce sparsity [3], and they can alter the structure sufficiently to make Magnus-based schemes infeasible without substantial alterations [11].
A very versatile technique for overcoming this difficulty is presented by the so-called commutator-free or quasi-Magnus methods [1, 8, 10]. While the derivation of these methods also starts from the Magnus expansion, they utilise the Baker–Campbell–Hausdorff (BCH) formula [18, 37] to approximate the exponential of the Magnus expansion by a product of multiple exponentials,
$$e^{\Omega(t)} \approx e^{S_1} e^{S_2} \cdots e^{S_K}, \tag{3}$$
where each exponent has a much simpler form: specifically, the exponents $S_k$ feature no commutators while achieving the same accuracy; see for instance [1, 8], where the $S_k$ are obtained as linear combinations or integrals of $A(t)$.
Another crucial bottleneck in Magnus-based methods as well as their commutator-free counterparts is the computation of the matrix exponential, which can be prohibitively expensive [36]. While Krylov subspace methods lead to very efficient Magnus–Lanczos solvers for small time-steps [25, 31], polynomial approximations do not respect the Lie group structure of (1). Where geometric numerical integration is required, rational approximations to the exponential must be utilised instead [30].
The most well known among rational approximants to the exponential are degree $(n, n)$ (i.e. diagonal) Padé approximants. The $(1,1)$ Padé approximant, called the Cayley transform, preserves the mapping from the Lie algebra to the Lie group for quadratic Lie groups of the form (2), and thus is suitable for applications to (1). It leads to the well known Crank–Nicolson method, which is a second-order method. The fourth-order Magnus expansion as well as fourth-order commutator-free methods require each exponential, $e^{\Omega(t)}$ or $e^{S_k}$ in (3), to be computed with the degree $(2,2)$ Padé approximant, while sixth-order methods need to be paired with the degree $(3,3)$ Padé approximant. Since a degree $(n, n)$ approximant involves $n$ linear equation solves, the requirement of high-order rational approximants leads to an $n$-fold increase in the cost of the overall scheme.
Keeping the eventual approximation of the exponential by a rational function in mind, Cayley–Magnus methods [13, 24] develop an alternative to the Magnus series by seeking an expansion whose Cayley transform directly provides a high-order approximation to the solution of (1). Since the Cayley transform is a degree $(1,1)$ rational method, this approach circumvents the $n$-fold scaling of traditional Magnus-based approaches. However, much like the Magnus expansion, this new expansion, called the Cayley–Magnus expansion, also features commutators, and its application involves very similar challenges due to their presence. To the best of our knowledge, there is no commutator-free alternative to the Cayley–Magnus methods.
In this work we propose a new approach which combines the commutator-free approach with the Cayley–Magnus expansion to derive high-order schemes which avoid both nested commutators and matrix exponential computations. The resulting schemes have close parallels to (3), with the exponentials being replaced by the significantly cheaper Cayley transforms. The schemes respect the Lie group structure of (1) for quadratic Lie groups of the form (2) by design.
The rest of the article is organized as follows. In Section 2 we introduce some notation and definitions. Moreover, we recall results on the Cayley–Magnus and Legendre expansions that we will need to build a fourth-order scheme based on the Cayley transform. Section 3 is the core of the paper: We first present a variant of the Cayley–BCH formula derived up to order four; the Cayley–BCH expansion is then used to derive a new fourth-order commutator-free Lie group integrator for quadratic Lie groups such as $\mathrm{SO}_n(\mathbb{R})$ or $\mathrm{U}_n(\mathbb{C})$. Section 4 contains numerical experiments that demonstrate the effectiveness of the proposed approach.
2 Preparation
2.1 Cayley transforms for quadratic Lie groups
Consider the matrix differential equation
$$\dot Y(t) = A(t)\,Y(t), \qquad Y(t_0) = Y_0, \qquad Y(t) \in G, \qquad t \in [t_0, t_f], \tag{4}$$
where $A(\cdot)$ is a Lipschitz-continuous operator taking its values in $\mathfrak{g}$. If $G$ is a Lie group and $\mathfrak{g}$ its Lie algebra, then the motion evolves on $G$, i.e. $Y(t) \in G$ for all $t$, provided that $A(\cdot)$ takes values in $\mathfrak{g}$ and $Y_0 \in G$. The article focuses on numerical methods for differential equations on quadratic Lie groups, which are matrix Lie groups of the form
$$G = \{A \in \mathrm{GL}_n(\mathbb{C}) : A^* J A = J\}, \tag{5}$$
for an invertible matrix $J \in \mathrm{GL}_n(\mathbb{C})$. Here $A^* = \bar{A}^\top$ denotes the conjugate transpose. The Lie algebra of $G$ is given as
$$\mathfrak{g} = \{\Omega \in \mathbb{C}^{n\times n} : \Omega^* J + J\Omega = 0\}.$$
An important example of (5) is the unitary group $\mathrm{U}_n(\mathbb{C})$, with $J = I$ (where $I$ is the identity matrix), which occurs in the context of quantum systems. Its Lie algebra consists of skew-Hermitian matrices. Another example is the (complex) symplectic group $\mathrm{Sp}_{2n}(\mathbb{C})$, with $J = \begin{pmatrix} 0 & I \\ -I & 0 \end{pmatrix}$. Its Lie algebra consists of Hamiltonian matrices.
Let $c \in \mathbb{C}\setminus\{0\}$. For $\Omega \in \mathbb{C}^{n\times n}$ with $\bar c^{-1} \notin \sigma(\Omega)$ the $c$-Cayley transform of $\Omega$ is defined as
$$\mathrm{Cay}(\Omega, c) = (I - \bar c\,\Omega)^{-1}(I + c\,\Omega).$$
Here $\sigma(M)$ denotes the spectrum of $M$. The inverse $c$-Cayley transform is given as
$$\mathrm{Cay}^{-1}(A, c) = -\frac{1}{c}\Big(I + \frac{\bar c}{c}A\Big)^{-1}(I - A) \qquad \text{for } -\frac{c}{\bar c} \notin \sigma(A).$$
Indeed, the $c$-Cayley transform constitutes a diffeomorphism $\mathrm{Cay}_c : \tilde{\mathfrak{g}} \to \tilde G$ between
$$\tilde{\mathfrak{g}} = \{\Omega \in \mathfrak{g} : \bar c^{-1} \notin \sigma(\Omega)\} \qquad\text{and}\qquad \tilde G = \{A \in G : -c/\bar c \notin \sigma(A)\}.$$
Since Cayley transforms respect the Lie group structure for quadratic Lie groups, so do their products, i.e. the product $\mathrm{Cay}(\Omega, c_1)\,\mathrm{Cay}(\Omega, c_2)\cdots\mathrm{Cay}(\Omega, c_n)$ resides in the Lie group $\tilde G = \{A \in G : -c_k/\bar c_k \notin \sigma(A),\ k = 1,\dots,n\}$, provided $\Omega \in \tilde{\mathfrak{g}} = \{\Omega \in \mathfrak{g} : \bar c_k^{-1} \notin \sigma(\Omega),\ k = 1,\dots,n\}$. Thus Cayley transforms are natural building blocks for rational approximations that respect quadratic Lie groups. Indeed, all unitary rational approximations (relevant to quantum dynamics, where $G = \mathrm{U}_n(\mathbb{C})$), including higher-order diagonal Padé approximations, can be obtained as compositions of Cayley transforms [30].
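For illustration, the following minimal sketch (not from the article; NumPy-based, with an arbitrary matrix size, random seed and choice of transform parameters) checks numerically that a product of $c$-Cayley transforms of a skew-Hermitian matrix satisfies the quadratic constraint defining the unitary group:

```python
import numpy as np

def cay(Om, c=0.5):
    """c-Cayley transform (I - conj(c) Om)^{-1} (I + c Om), as written above."""
    I = np.eye(Om.shape[0])
    return np.linalg.solve(I - np.conj(c) * Om, I + c * Om)

rng = np.random.default_rng(0)
n = 4
J = np.eye(n)                                    # quadratic group with J = I, i.e. U_n(C)
W = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
Om = W - W.conj().T                              # skew-Hermitian, so Om lies in the Lie algebra

Y = cay(Om)                                      # 1/2-Cayley transform
for c in (0.3, 0.5 + 0.1j):                      # compose further c-Cayley transforms
    Y = Y @ cay(Om, c)

print(np.linalg.norm(Y.conj().T @ J @ Y - J))    # constraint Y* J Y = J holds to round-off
```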
We will use the Cayley transform as a cheap alternative to the surjective matrix exponential $\exp : \mathfrak{g} \to G$ to design Lie group integrators. Other approaches employing the Cayley transform to solve system (4) can be found for example in [24, 33]. These approaches are generally based on the following result. In the following, the $1/2$-Cayley transform will simply be referred to as the Cayley transform and will be denoted by $\mathrm{Cay}(A)$.
Lemma 2.1. Let $Y(t)$ be the solution of system (4) with $-1 \notin \sigma(Y(t)Y_0^{-1})$ for any $t \in [t_0, t_f]$. Then $Y(t)$ can be written in the form $Y(t) = \mathrm{Cay}(\Omega(t))\,Y_0$, where the matrix $\Omega \in \tilde{\mathfrak{g}}$ satisfies
$$\dot\Omega(t) = A(t) - \frac{1}{2}[\Omega, A(t)] - \frac{1}{4}\,\Omega\, A(t)\,\Omega, \qquad \Omega(t_0) = \Omega_0, \tag{6}$$
with the Lie bracket (commutator) of two matrices $A$ and $B$ defined by $[A, B] = A\cdot B - B\cdot A$.
Proof. To simplify the notation and without losing any generality, we consider $Y_0 = I$. So, $Y = \mathrm{Cay}(\Omega) \iff \big(I - \frac{\Omega}{2}\big)Y = I + \frac{\Omega}{2}$. Differentiation of this relation leads to
$$-\frac{\dot\Omega}{2}Y + \Big(I - \frac{\Omega}{2}\Big)\dot Y = \frac{\dot\Omega}{2}, \quad\text{i.e.}\quad \dot\Omega = 2\Big(I - \frac{\Omega}{2}\Big)\dot Y\,(I + Y)^{-1}, \quad\text{i.e.}\quad \dot\Omega = 2\Big(I - \frac{\Omega}{2}\Big)A(t)\,Y\,(I + Y)^{-1}. \tag{7}$$
Moreover,
$$(I + Y)\,Y^{-1} = \bigg(I + \Big(I - \frac{\Omega}{2}\Big)^{-1}\Big(I + \frac{\Omega}{2}\Big)\bigg)\Big(I + \frac{\Omega}{2}\Big)^{-1}\Big(I - \frac{\Omega}{2}\Big) = \Big(I + \frac{\Omega}{2}\Big)^{-1}\Big(I - \frac{\Omega}{2}\Big) + I = \Big(I + \frac{\Omega}{2}\Big)^{-1}\Big(I - \frac{\Omega}{2} + I + \frac{\Omega}{2}\Big) = 2\,\Big(I + \frac{\Omega}{2}\Big)^{-1}. \tag{8}$$
Plugging relation (8) into equation (7), one obtains $\dot\Omega = \big(I - \frac{\Omega}{2}\big)A(t)\big(I + \frac{\Omega}{2}\big)$, which allows us to conclude (6).
Remark 1. In numerical time-stepping methods, the time step $\delta t$ can always be made sufficiently small such that $\Omega(t)$ is close to the zero matrix and hence $\Omega(t) \in \tilde{\mathfrak{g}}$, i.e. $2 \notin \sigma(\Omega(t))$ for all $t \in [t_0, t_0 + \delta t]$. While this could force small time-steps in a general setting, in the context of quantum systems one has $\tilde{\mathfrak{g}} = \tilde{\mathfrak{u}}_n(\mathbb{C}) = \mathfrak{u}_n(\mathbb{C}) = \mathfrak{g}$ since the solution evolves on the unitary matrix group, so that the condition $\Omega \in \tilde{\mathfrak{u}}_n(\mathbb{C})$ is automatically satisfied.
According to Lemma 2.1, solving system (4) is equivalent to solving system (6), but now considering time-stepping with a small time step $\delta t$ in order to guarantee the existence of $\Omega \in \tilde{\mathfrak{g}}$. As the Lie algebra is characterised by the linear constraint $\Omega^* J + J\Omega = 0$, we can apply any Runge–Kutta method to (6) and obtain a Lie group structure-preserving integrator, since Runge–Kutta methods preserve linear constraints. This approach is suggested in [20, IV.8.3], for instance. However, it involves the repeated computation of matrix commutators, which can be costly in high dimensions. So next, we focus on system (6) and present the Magnus expansion for the Cayley transform, developed by Iserles [24], which will be used later to derive our methods.
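As a simple illustration of this strategy (a sketch only, not the method proposed below; NumPy-based, with an arbitrary skew-Hermitian $A(t)$ and step size), one can integrate (6) on each step with the classical explicit RK4 method and map back with the Cayley transform; since the right-hand side of (6) stays in the Lie algebra, the reconstructed solution stays on the group up to round-off:

```python
import numpy as np

def cay(Om):
    I = np.eye(Om.shape[0])
    return np.linalg.solve(I - 0.5 * Om, I + 0.5 * Om)

def f(t, Om, A):
    """Right-hand side of (6): A - (1/2)[Om, A] - (1/4) Om A Om."""
    At = A(t)
    return At - 0.5 * (Om @ At - At @ Om) - 0.25 * Om @ At @ Om

def rk4_step(t, Om, dt, A):
    k1 = f(t, Om, A)
    k2 = f(t + dt / 2, Om + dt / 2 * k1, A)
    k3 = f(t + dt / 2, Om + dt / 2 * k2, A)
    k4 = f(t + dt, Om + dt * k3, A)
    return Om + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

# illustrative skew-Hermitian A(t) = -i H(t) with H(t) Hermitian
rng = np.random.default_rng(1)
H0 = rng.standard_normal((4, 4)); H0 = H0 + H0.T
H1 = rng.standard_normal((4, 4)); H1 = H1 + H1.T
A = lambda t: -1j * (H0 + np.cos(t) * H1)

dt, Y = 0.01, np.eye(4, dtype=complex)
for k in range(100):                               # restart Omega at 0 on every step
    Om = rk4_step(k * dt, np.zeros((4, 4), dtype=complex), dt, A)
    Y = cay(Om) @ Y

print(np.linalg.norm(Y.conj().T @ Y - np.eye(4)))  # unitarity error ~ machine precision
```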
2.2 The Cayley–Magnus expansion
We now focus on system (6) given in Lemma 2.1, i.e.
$$\dot\Omega(t) = A(t) - \frac{1}{2}[\Omega, A(t)] - \frac{1}{4}\,\Omega\, A(t)\,\Omega, \qquad \Omega(t_0) = 0.$$
In this section we recall the results of [24] for expanding $\Omega$ as a Cayley–Magnus series. For this, let $\Omega(t)$ denote the solution of (6) to the initial value $\Omega(0) = 0$. We consider as an ansatz the formal series
$$\Omega(t) = \sum_{m=1}^{\infty}\Omega_m(t), \tag{9}$$
where $\Omega_m(t)$ denotes an expression consisting of $m$ iterated integrals over polynomials of degree $m$ in $A$. Substituting this into (6) and integrating over $[0, t]$ with $\Omega(0) = 0$ leads to
$$\begin{aligned}
\sum_{m=1}^{\infty}\Omega_m &= \int_0^t A(\xi)\,\mathrm{d}\xi - \frac{1}{2}\int_0^t\Big[\sum_{m=1}^{\infty}\Omega_m(\xi), A(\xi)\Big]\,\mathrm{d}\xi - \frac{1}{4}\int_0^t\Big[\sum_{m=1}^{\infty}\Omega_m(\xi)\Big]A(\xi)\Big[\sum_{m=1}^{\infty}\Omega_m(\xi)\Big]\,\mathrm{d}\xi\\
&= \int_0^t A(\xi)\,\mathrm{d}\xi - \frac{1}{2}\int_0^t\sum_{m=2}^{\infty}[\Omega_{m-1}(\xi), A(\xi)]\,\mathrm{d}\xi - \frac{1}{4}\sum_{m=3}^{\infty}\sum_{k=1}^{m-2}\int_0^t\Omega_{m-k-1}(\xi)\,A(\xi)\,\Omega_k(\xi)\,\mathrm{d}\xi\\
&= \int_0^t A(\xi)\,\mathrm{d}\xi - \frac{1}{2}\int_0^t[\Omega_1(\xi), A(\xi)]\,\mathrm{d}\xi - \sum_{m=3}^{\infty}\bigg[\frac{1}{2}\int_0^t[\Omega_{m-1}(\xi), A(\xi)]\,\mathrm{d}\xi + \frac{1}{4}\sum_{k=1}^{m-2}\int_0^t\Omega_{m-k-1}(\xi)\,A(\xi)\,\Omega_k(\xi)\,\mathrm{d}\xi\bigg].
\end{aligned}$$
Now $\Omega_j$ can be determined recursively as follows:
$$\begin{aligned}
\Omega_1(t) &= \int_0^t A(\xi)\,\mathrm{d}\xi,\\
\Omega_2(t) &= -\frac{1}{2}\int_0^t[\Omega_1(\xi), A(\xi)]\,\mathrm{d}\xi,\\
\Omega_m(t) &= -\frac{1}{2}\int_0^t[\Omega_{m-1}(\xi), A(\xi)]\,\mathrm{d}\xi - \frac{1}{4}\sum_{k=1}^{m-2}\int_0^t\Omega_{m-k-1}(\xi)\,A(\xi)\,\Omega_k(\xi)\,\mathrm{d}\xi, \qquad\text{for } m \ge 3.
\end{aligned}$$
For a combinatorial description of the expressions $\Omega_m$ using the language of trees and for a discussion of convergence properties of the series $\sum_{m=1}^{\infty}\Omega_m$ we refer to [24]. The first
terms of this expansion are explicitly given by
$$\begin{aligned}
\Omega_1(t) &= \int_0^t A(\xi)\,\mathrm{d}\xi, \qquad \Omega_2(t) = -\frac{1}{2}\int_0^t\!\!\int_0^{\xi_1}[A(\xi_2), A(\xi_1)]\,\mathrm{d}\xi_2\,\mathrm{d}\xi_1,\\
\Omega_3(t) &= \frac{1}{4}\int_0^t\!\!\int_0^{\xi_1}\!\!\int_0^{\xi_2}[[A(\xi_3), A(\xi_2)], A(\xi_1)]\,\mathrm{d}\xi_3\,\mathrm{d}\xi_2\,\mathrm{d}\xi_1 - \frac{1}{4}\int_0^t\bigg[\int_0^{\xi_1}A(\xi_2)\,\mathrm{d}\xi_2\bigg]A(\xi_1)\bigg[\int_0^{\xi_1}A(\xi_2)\,\mathrm{d}\xi_2\bigg]\,\mathrm{d}\xi_1.
\end{aligned}$$
Lemma 2.2. Truncating the Cayley–Magnus expansion (9) at a given order $p$, i.e. setting
$$\Omega(t) \approx \Omega^{[p]}(t) = \sum_{m=1}^{p}\Omega_m(t), \tag{10}$$
leads to a $p$-th order approximation.
Remark 2. Only a few integral terms in each $\Omega_m$ are relevant to obtain a $p$-th order approximation. Thus, by considering only the relevant terms, we can considerably reduce the number of terms to be computed (there are fewer terms than in the exponential Magnus expansion, see again [24] for more explanation). Moreover, an approximation of order $p$ for $\Omega(t)$ will lead to an approximation of order $p$ for $Y(t)$, i.e. $Y(t) = \mathrm{Cay}(\Omega^{[p]}(t))\,Y_0 + \mathcal{O}(t^{p+1})$ (see for instance [13]).
2.3 Cayley–Magnus and Legendre expansion
A starting point for deriving a commutator-free higher-order scheme is to consider a Legendre expansion of $A(\cdot)$, following the strategy in [1]. We will first introduce the Legendre expansion of the matrix $A$ and expand $\Omega$ in terms of the Legendre expansion. The shifted Legendre polynomials $P_n(x)$ are defined for $n = 0, 1, 2, \dots$ through the recurrence
$$P_0(x) = 1, \qquad P_1(x) = 2x - 1, \qquad P_{n+1}(x) = \frac{2n+1}{n+1}\,(2x-1)\,P_n(x) - \frac{n}{n+1}\,P_{n-1}(x). \tag{11}$$
By definition, $P_n(x)$ is a polynomial of degree $n$, symmetric or antisymmetric with respect to $x = 1/2$ according to the parity of $n$. The first terms are explicitly given by
$$P_2(x) = 6x^2 - 6x + 1, \qquad P_3(x) = 20x^3 - 30x^2 + 12x - 1, \qquad P_4(x) = 70x^4 - 140x^3 + 90x^2 - 20x + 1.$$
For a given time step $\delta t$, the matrix $A$ can be expanded on the interval $[0, \delta t]$ in a series of Legendre polynomials given by (see also [1])
$$A(t) = \frac{1}{\delta t}\sum_{k=1}^{N}A_k\,P_{k-1}\Big(\frac{t}{\delta t}\Big) + \mathcal{O}(\delta t^{N+1}), \qquad 0 \le t \le \delta t, \tag{12}$$
where $P_k$, $k = 0, 1, 2, \dots$, are the Legendre polynomials defined in (11) and where the coefficients $A_k$ are given by
$$A_k = (2k-1)\int_0^{\delta t}A(t)\,P_{k-1}\Big(\frac{t}{\delta t}\Big)\,\mathrm{d}t = (2k-1)\,\delta t\int_0^1 A(x\,\delta t)\,P_{k-1}(x)\,\mathrm{d}x.$$
Plugging the Legendre expansion (12) into the Cayley–Magnus expansion, one can express $\Omega(t)$ with respect to the coefficients $A_k$. The first three terms read
$$\begin{aligned}
\Omega_1(\delta t) &= \frac{1}{\delta t}\sum_{n=1}^{N}\int_0^{\delta t}A_n\,P_{n-1}\Big(\frac{\xi}{\delta t}\Big)\,\mathrm{d}\xi + \mathcal{O}(\delta t^{N+1})\\
&= \sum_{n=1}^{N}\Big\{\int_0^1 P_{n-1}(x)\,\mathrm{d}x\Big\}A_n + \mathcal{O}(\delta t^{N+1})\\
&= A_1, \qquad\text{since } \int_0^1 P_k(x)\,\mathrm{d}x = 0 \text{ for } k \ge 1 \text{ (orthogonality to } P_0\text{) and } \int_0^1 P_0(x)\,\mathrm{d}x = 1,\\[4pt]
\Omega_2(\delta t) &= -\frac{1}{2}\int_0^{\delta t}\!\!\int_0^{\xi_1}[A(\xi_2), A(\xi_1)]\,\mathrm{d}\xi_2\,\mathrm{d}\xi_1\\
&= -\frac{1}{2\,\delta t^2}\int_0^{\delta t}\!\!\int_0^{\xi_1}\Big[\sum_{n=1}^{N}A_n P_{n-1}\Big(\frac{\xi_2}{\delta t}\Big), \sum_{k=1}^{N}A_k P_{k-1}\Big(\frac{\xi_1}{\delta t}\Big)\Big]\,\mathrm{d}\xi_2\,\mathrm{d}\xi_1\\
&= -\frac{1}{2}\sum_{n,k=1}^{N}\Big\{\int_0^1\!\!\int_0^{x_1}P_{n-1}(x_2)\,P_{k-1}(x_1)\,\mathrm{d}x_2\,\mathrm{d}x_1\Big\}[A_n, A_k] + \mathcal{O}(\delta t^{N+1}),\\[4pt]
\Omega_3(\delta t) &= \frac{1}{4}\int_0^{\delta t}\!\!\int_0^{\xi_1}\!\!\int_0^{\xi_2}[[A(\xi_3), A(\xi_2)], A(\xi_1)]\,\mathrm{d}\xi_3\,\mathrm{d}\xi_2\,\mathrm{d}\xi_1 - \frac{1}{4}\int_0^{\delta t}\Big\{\int_0^{\xi_1}A(\xi_2)\,\mathrm{d}\xi_2\Big\}A(\xi_1)\Big\{\int_0^{\xi_1}A(\xi_2)\,\mathrm{d}\xi_2\Big\}\,\mathrm{d}\xi_1\\
&= \frac{1}{4}\sum_{n,m,k=1}^{N}\Big\{\int_0^1\!\!\int_0^{x_1}\!\!\int_0^{x_2}P_{n-1}(x_3)\,P_{m-1}(x_2)\,P_{k-1}(x_1)\,\mathrm{d}x_3\,\mathrm{d}x_2\,\mathrm{d}x_1\Big\}\,[[A_n, A_m], A_k]\\
&\quad - \frac{1}{4}\sum_{n,m,k=1}^{N}\Big\{\int_0^1\Big\{\int_0^{x_1}P_{n-1}(x_2)\,\mathrm{d}x_2\Big\}P_{m-1}(x_1)\Big\{\int_0^{x_1}P_{k-1}(x_2)\,\mathrm{d}x_2\Big\}\,\mathrm{d}x_1\Big\}\,A_n A_m A_k + \mathcal{O}(\delta t^{N+1}).
\end{aligned}$$
Next, we will denote by $\Omega_k^{[N]}(\delta t)$ the truncation of $\Omega_k(\delta t)$ up to the first $N$ terms of the Legendre coefficients.
Proposition 2.1. For a given time-step $\delta t$, the first three terms of the Cayley–Magnus expansion combined with a Legendre expansion of $A$ truncated at $N = 2$ are given by
$$\Omega_1(\delta t) = \Omega_1^{[2]}(\delta t) = A_1, \qquad \Omega_2^{[2]}(\delta t) = -\frac{1}{6}[A_1, A_2],$$
$$\Omega_3^{[2]}(\delta t) = -\frac{1}{12}A_1^3 - \frac{1}{120}A_2^3 + \frac{1}{60}A_1A_2^2 - \frac{1}{30}A_2A_1A_2 + \frac{1}{60}A_2^2A_1.$$
Moreover, we have
$$\Omega(\delta t) = \Omega_1(\delta t) + \Omega_2^{[2]}(\delta t) + \Omega_3^{[2]}(\delta t) + \mathcal{O}(\delta t^4). \tag{13}$$
Remark 3. For any $n \in \mathbb{N}$, $A_n$ is a term of order $\delta t^n$. This can be easily seen by comparing the Legendre expansion with an expansion $A(t) = \sum_{m\ge 1}a_m t^{m-1}$ in powers of $t$. Since we are looking to build a fourth-order scheme, according to the expressions of $\Omega_1$, $\Omega_2$ and $\Omega_3$, only the first two terms of this expansion, i.e. $A_1$ and $A_2$, will be relevant for us.
The approximation in (13) is a third-order approximation when considering the exact integrals to compute $A_1$ and $A_2$. However, as it turns out, the order is increased to four when we consider specifically the Gauss–Legendre quadrature to approximate these integrals, i.e. the quadrature error exactly cancels the leading error term of the Cayley expansion. This latter point has been discussed by Iserles [24]. Thus, taking
$$\mathcal{A}_1 = A\Big(t_n + \Big(\frac{1}{2} - \frac{\sqrt{3}}{6}\Big)\delta t\Big), \qquad \mathcal{A}_2 = A\Big(t_n + \Big(\frac{1}{2} + \frac{\sqrt{3}}{6}\Big)\delta t\Big),$$
and
$$A_1 = \frac{\delta t}{2}(\mathcal{A}_1 + \mathcal{A}_2), \qquad A_2 = \frac{\sqrt{3}\,\delta t}{2}(\mathcal{A}_2 - \mathcal{A}_1), \tag{14}$$
one has the following result.
Proposition 2.2. Given a time step $\delta t$, the following approximation holds,
$$\Omega(t_n + \delta t) = A_1 - \frac{1}{6}[A_1, A_2] - \frac{1}{12}A_1^3 + \mathcal{O}(\delta t^5), \tag{15}$$
with $A_1$ and $A_2$ defined in (14).
Proof. From Proposition 2.1, we have
$$\Omega(\delta t) = \Omega_1(\delta t) + \Omega_2^{[2]}(\delta t) + \Omega_3^{[2]}(\delta t) + \mathcal{O}(\delta t^4).$$
However, apart from $A_1^3$, all the other terms in $\Omega_3^{[2]}$ are of order $\delta t^5$ or higher, since $A_k$ is of order $\delta t^k$ (see Remark 3), so we first obtain
$$\Omega(\delta t) = A_1(\delta t) - \frac{1}{6}[A_1(\delta t), A_2(\delta t)] - \frac{1}{12}A_1^3 + \mathcal{O}(\delta t^4).$$
Approximating $A_1$ and $A_2$ by the Gauss–Legendre quadrature as defined in (14), $\Omega_3$ becomes symmetric so that the order of the error will be even, see [24, Sec. 4.2]. Thus the order increases to $\delta t^5$.
Using the Gauss–Legendre quadrature given above to compute $A_1$ and $A_2$, the Cayley–Magnus time-propagator (CMT) scheme (15) is already a fourth-order approximation scheme as desired (see also [24]). However, it still contains commutators, which can make the implementation expensive for large systems, can reduce sparsity, and may be more structurally complicated. The aim in the following section is to use an idea similar to the exponential commutator-free method developed in [1], using in our case the Cayley transform. To this end, we require a BCH-type formula for the Cayley transform.
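For later reference, one step of the CMT scheme (15) can be sketched as follows (a minimal NumPy sketch in the notation of (14); the function and variable names are illustrative and not taken from the article's implementation):

```python
import numpy as np

def cay(Om):
    I = np.eye(Om.shape[0])
    return np.linalg.solve(I - 0.5 * Om, I + 0.5 * Om)

def cmt_step(A, tn, dt, Yn):
    """One step of the Cayley-Magnus time-propagator (15)."""
    s = np.sqrt(3.0) / 6.0
    A1s, A2s = A(tn + (0.5 - s) * dt), A(tn + (0.5 + s) * dt)     # Gauss-Legendre samples
    A1 = dt / 2.0 * (A1s + A2s)                                   # coefficients (14)
    A2 = np.sqrt(3.0) * dt / 2.0 * (A2s - A1s)
    Om = A1 - (A1 @ A2 - A2 @ A1) / 6.0 - (A1 @ A1 @ A1) / 12.0   # expansion (15)
    return cay(Om) @ Yn
```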
3 Commutator-free Cayley scheme
3.1 BCH-formula for Cayley transform
For the derivation of the fourth-order commutator-free Cayley methods, we need to express the product of three Cayley transforms as a single Cayley transform up to fourth-order accuracy. While this can be obtained by two applications of the Cayley–BCH formula developed by Iserles and Zanna [28], for the sake of concreteness we show the derivation explicitly up to order four.
Proposition 3.1. Let $A, B, C \in \mathfrak{g}$ be three matrices in a neighborhood of $0$; then the following formula holds,
$$\mathrm{Cay}(A)\,\mathrm{Cay}(B)\,\mathrm{Cay}(C) = \mathrm{Cay}(\Omega(A, B, C)), \tag{16}$$
with
$$\begin{aligned}
\Omega(A, B, C) ={}& A + B + C + \frac{1}{2}\big([A, B] + [A, C] + [B, C]\big) + \frac{1}{4}[[A, B], C]\\
&- \frac{1}{4}(ACB + BCA) - \frac{1}{4}(ABA + ACA + BAB + BCB + CAC + CBC) + F(A, B, C),
\end{aligned} \tag{17}$$
where $F$ is a series of homogeneous polynomials in $A$, $B$, $C$ of degrees $m$ with $m \ge 4$.
Proof. First, let us observe that for a small enough neighborhood of $0$, one has $2 \notin \sigma(A)\cup\sigma(B)\cup\sigma(C)$. Now, assuming that $\mathrm{Cay}(A)\,\mathrm{Cay}(B)\,\mathrm{Cay}(C) = \mathrm{Cay}(\Omega(A, B, C))$, we get
$$\begin{aligned}
\mathrm{Cay}(A)\,\mathrm{Cay}(B)\,\mathrm{Cay}(C) &= \Big(I - \frac{\Omega(A,B,C)}{2}\Big)^{-1}\Big(I + \frac{\Omega(A,B,C)}{2}\Big),\\
\Big(I - \frac{\Omega(A,B,C)}{2}\Big)\,\mathrm{Cay}(A)\,\mathrm{Cay}(B)\,\mathrm{Cay}(C) &= I + \frac{\Omega(A,B,C)}{2},\\
\frac{\Omega(A,B,C)}{2}\big(I + \mathrm{Cay}(A)\,\mathrm{Cay}(B)\,\mathrm{Cay}(C)\big) &= \mathrm{Cay}(A)\,\mathrm{Cay}(B)\,\mathrm{Cay}(C) - I,\\
\Omega(A,B,C) &= -2\bigg(I - \Big(I - \frac{A}{2}\Big)^{-1}\Big(I + \frac{A}{2}\Big)\Big(I - \frac{B}{2}\Big)^{-1}\Big(I + \frac{B}{2}\Big)\Big(I - \frac{C}{2}\Big)^{-1}\Big(I + \frac{C}{2}\Big)\bigg)\\
&\quad\cdot\bigg(I + \Big(I - \frac{A}{2}\Big)^{-1}\Big(I + \frac{A}{2}\Big)\Big(I - \frac{B}{2}\Big)^{-1}\Big(I + \frac{B}{2}\Big)\Big(I - \frac{C}{2}\Big)^{-1}\Big(I + \frac{C}{2}\Big)\bigg)^{-1}.
\end{aligned}$$
A series expansion$^1$ of this last relation leads to the desired result.
$^1$Details of the computation can be found in the appendix.
Remark 4. If we want a fourth-order scheme, the terms of the formula obtained in Proposition 3.1 when ignoring $F(A, B, C)$ are enough. However, one needs to compute more terms if we are aiming at more than fourth order.
The Cayley transform version of the usual BCH- and sBCH-formula [28] can then be deduced from Proposition 3.1 by setting $C = 0$ and $C = A$, respectively.
Corollary 3.1 (BCH-formula). Considering $A, B \in \mathfrak{g}$ in a neighborhood of $0$, one has
$$\mathrm{Cay}(A)\,\mathrm{Cay}(B) = \mathrm{Cay}(\Omega(A, B)), \tag{18}$$
with
$$\Omega = A + B + \frac{1}{2}[A, B] - \frac{1}{4}(ABA + BAB) + F(A, B), \tag{19}$$
where $F$ is a series of homogeneous polynomials in $A$, $B$ of degrees $m$ with $m \ge 4$.
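The fourth-order accuracy of (18)–(19) is easy to verify numerically; the small sketch below (assuming NumPy; the matrices and scaling factors are arbitrary choices) compares $\mathrm{Cay}(A)\,\mathrm{Cay}(B)$ with the Cayley transform of the truncated expansion and observes the error decaying like the fourth power of the scaling parameter:

```python
import numpy as np

def cay(X):
    I = np.eye(X.shape[0])
    return np.linalg.solve(I - 0.5 * X, I + 0.5 * X)

rng = np.random.default_rng(2)
A0 = rng.standard_normal((5, 5))
B0 = rng.standard_normal((5, 5))

for eps in (1e-1, 1e-2, 1e-3):
    A, B = eps * A0, eps * B0
    # truncation of (19): terms of F are dropped
    Om = A + B + 0.5 * (A @ B - B @ A) - 0.25 * (A @ B @ A + B @ A @ B)
    err = np.linalg.norm(cay(A) @ cay(B) - cay(Om))
    print(eps, err)          # err shrinks roughly like eps**4
```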
Remark 5. If we consider the general Cayley transform, then
$$\mathrm{Cay}_{c_1}(A)\,\mathrm{Cay}_{c_2}(B) = \mathrm{Cay}(\Omega)$$
leads to
$$\begin{aligned}
\Omega ={}& \frac{2x_1}{|c_1|^2}A + \frac{2x_2}{|c_2|^2}B + \frac{2x_1x_2}{|c_1|^2|c_2|^2}[A, B] - \frac{2x_1x_2}{|c_1|^2|c_2|^2}\Big(\frac{x_1}{|c_1|^2}ABA + \frac{x_2}{|c_2|^2}BAB\Big)\\
&+ 2i\Big(\frac{y_1}{c_1|c_1|^4}A^3 + \frac{y_2}{c_2|c_2|^4}B^3 + \frac{x_1x_2y_2}{|c_1|^2|c_2|^4}B^2A - \frac{x_1x_2y_1}{|c_1|^4|c_2|^2}A^2B - \frac{2x_1x_2y_2}{|c_1|^2|c_2|^4}AB^2\\
&\qquad - \frac{y_1}{|c_1|^4}A^2 - \frac{y_2}{|c_2|^4}B^2\Big) + F(A, B),
\end{aligned}$$
with $c_1 = x_1 + iy_1$, $c_2 = x_2 + iy_2$ and where $F(A, B)$ is as in the previous corollary.
Corollary 3.2 (sBCH-formula). Considering $A, B \in \mathfrak{g}$ in a neighborhood of $0$, one has
$$\mathrm{Cay}(A)\,\mathrm{Cay}(B)\,\mathrm{Cay}(A) = \mathrm{Cay}(\Omega(A, B)), \tag{20}$$
where
$$\Omega = 2A + B - \frac{1}{2}(A^2B + BA^2 + BAB + A^3) + F(A, B), \tag{21}$$
where $F$ is a series of homogeneous polynomials in $A$ and $B$ of degree $m$ with $m \ge 4$.
3.2 Commutator-Free Cayley Time-propagator scheme (CFCT)
In this section, in analogy to the commutator-free quasi-Magnus integrators, we seek a fourth-order approximation of the form
$$Y(\delta t) \approx Y_1 := \mathrm{Cay}(\Omega_1(\delta t))\,\mathrm{Cay}(\Omega_2(\delta t))\,\mathrm{Cay}(\Omega_3(\delta t))\,Y_0 = \mathrm{Cay}(\widetilde\Omega(\delta t))\,Y_0, \tag{22}$$
with $\Omega_i = \sum_{k=1}^{2}\alpha_{i,k}A_k$, $i = 1, 2, 3$, where the $A_k$ are the Gauss–Legendre coefficients of $A$ defined in (14) and $\alpha_{i,k} \in \mathbb{R}$. Recall that the Lie algebra $\mathfrak{g}$ has the structure of a real vector space. If $A \in \mathfrak{g}$ then the Gauss–Legendre coefficients are elements of $\mathfrak{g}$. Since we are seeking a Lie group structure-preserving scheme, we seek real coefficients $\alpha_{i,k}$ (rather than complex coefficients) such that $\Omega_i \in \mathfrak{g}$ is guaranteed. We want to find, if they exist, coefficients $\alpha_{i,k}$, $k = 1, 2$ and $i = 1, 2, 3$, such that the approximation (22) is of fourth order. Using the Cayley–BCH formula from Proposition 3.1, we get
$$\begin{aligned}
\widetilde\Omega ={}& \Omega_1 + \Omega_2 + \Omega_3 - \frac{1}{4}\big(\Omega_1\Omega_2\Omega_1 + \Omega_1\Omega_3\Omega_1 + \Omega_2\Omega_1\Omega_2 + \Omega_2\Omega_3\Omega_2 + \Omega_3\Omega_1\Omega_3 + \Omega_3\Omega_2\Omega_3\big)\\
&- \frac{1}{4}\big(\Omega_1\Omega_3\Omega_2 + \Omega_2\Omega_3\Omega_1\big) + \frac{1}{2}\big([\Omega_1,\Omega_2] + [\Omega_1,\Omega_3] + [\Omega_2,\Omega_3]\big) + \frac{1}{4}[[\Omega_1,\Omega_2],\Omega_3]\\
={}& \sum_{i=1}^{3}\sum_{j=1}^{2}\alpha_{ij}A_j + \frac{1}{2}\sum_{i,j=1}^{2}\big(\alpha_{1i}\alpha_{2j} + \alpha_{1i}\alpha_{3j} + \alpha_{2i}\alpha_{3j}\big)[A_i, A_j]\\
&- \frac{1}{4}\sum_{i,j,k=1}^{2}\big(\alpha_{1i}\alpha_{3j}\alpha_{2k} + \alpha_{2i}\alpha_{3j}\alpha_{1k} + \alpha_{1i}\alpha_{2j}\alpha_{1k} + \alpha_{1i}\alpha_{3j}\alpha_{1k} + \alpha_{2i}\alpha_{1j}\alpha_{2k} + \alpha_{2i}\alpha_{3j}\alpha_{2k} + \alpha_{3i}\alpha_{1j}\alpha_{3k}\\
&\qquad\quad + \alpha_{3i}\alpha_{2j}\alpha_{3k} + \alpha_{2i}\alpha_{1j}\alpha_{3k} + \alpha_{3i}\alpha_{1j}\alpha_{2k} - \alpha_{1i}\alpha_{2j}\alpha_{3k} - \alpha_{3i}\alpha_{2j}\alpha_{1k}\big)A_iA_jA_k.
\end{aligned}$$
On the other hand, from (15) one has
$$\Omega(\delta t) = A_1(\delta t) - \frac{1}{6}[A_1(\delta t), A_2(\delta t)] - \frac{1}{12}A_1^3(\delta t) + \mathcal{O}(\delta t^5).$$
Equating the two expressions for $\Omega(\delta t)$ and $\widetilde\Omega$, we obtain nonlinear relations for the coefficients. A solution is provided by
$$\alpha_{31} = \alpha_{11}, \qquad \alpha_{21} = 1 - 2\alpha_{11}, \qquad \alpha_{12} = -\alpha_{32} = \alpha_{11} - \alpha_{11}^2, \qquad \alpha_{22} = 0, \qquad\text{with } \alpha_{11} = \frac{2^{1/3}}{3} + \frac{2^{2/3}}{6} + \frac{2}{3}. \tag{23}$$
We obtain the following final scheme:
$$Y_1 = \mathrm{Cay}\big(\alpha_{11}A_1(\delta t) + \alpha_{12}A_2(\delta t)\big)\,\mathrm{Cay}\big(\alpha_{21}A_1(\delta t)\big)\,\mathrm{Cay}\big(\alpha_{11}A_1(\delta t) - \alpha_{12}A_2(\delta t)\big)\,Y_0. \tag{24}$$
Proposition 3.2. For a given time-step $\delta t$, we have
$$Y(\delta t) = \mathrm{Cay}\big(\alpha_{11}A_1(\delta t) + \alpha_{12}A_2(\delta t)\big)\,\mathrm{Cay}\big(\alpha_{21}A_1(\delta t)\big)\,\mathrm{Cay}\big(\alpha_{11}A_1(\delta t) - \alpha_{12}A_2(\delta t)\big)\,Y_0 + \mathcal{O}(\delta t^5),$$
with coefficients $\alpha_{ij}$ as in (23).
Remark 6. To simplify notation, we have considered only the interval $[t_0, t_0 + \delta t]$. However, since the operator $A$ is non-autonomous, the coefficients $A_k$ will naturally depend on $t_n$ for a given discretization $n = 1, \dots, N$. So, the final scheme reads
$$Y_{n+1} = \mathrm{Cay}\big(\alpha_{11}A_1(t_n, \delta t) + \alpha_{12}A_2(t_n, \delta t)\big)\,\mathrm{Cay}\big(\alpha_{21}A_1(t_n, \delta t)\big)\,\mathrm{Cay}\big(\alpha_{11}A_1(t_n, \delta t) - \alpha_{12}A_2(t_n, \delta t)\big)\,Y_n$$
for a given time grid $t_0 < t_1 < \dots < t_N$.
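A minimal sketch of one step of the resulting CFCT scheme, in the notation of Remark 6 (NumPy-based; the helper names are illustrative and not taken from the article's implementation):

```python
import numpy as np

# coefficients (23)
a11 = 2.0 ** (1 / 3) / 3 + 2.0 ** (2 / 3) / 6 + 2.0 / 3
a21 = 1.0 - 2.0 * a11
a12 = a11 - a11 ** 2            # and a31 = a11, a32 = -a12, a22 = 0

def cay(Om):
    I = np.eye(Om.shape[0])
    return np.linalg.solve(I - 0.5 * Om, I + 0.5 * Om)

def cfct_step(A, tn, dt, Yn):
    """One step of the commutator-free Cayley time-propagator (24)."""
    s = np.sqrt(3.0) / 6.0
    A1s, A2s = A(tn + (0.5 - s) * dt), A(tn + (0.5 + s) * dt)   # Gauss-Legendre samples
    A1 = dt / 2.0 * (A1s + A2s)                                  # coefficients (14)
    A2 = np.sqrt(3.0) * dt / 2.0 * (A2s - A1s)
    Y = cay(a11 * A1 - a12 * A2) @ Yn                            # rightmost factor acts first
    Y = cay(a21 * A1) @ Y
    return cay(a11 * A1 + a12 * A2) @ Y
```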
Remark 7. In contrast to the exponential commutator-free method [1], we cannot expect a fourth-order scheme with a product of only two Cayley transforms. Indeed, if we write $Y_1 = \mathrm{Cay}(\Omega_1(\delta t))\,\mathrm{Cay}(\Omega_2(\delta t))\,Y_0 = \mathrm{Cay}(\widetilde\Omega(\delta t))\,Y_0$, then the coefficients $\alpha_{i,k}$ will be complex. Complex coefficients, however, are not compatible with the structure of the Lie algebra $\mathfrak{g}$, which is a vector space over $\mathbb{R}$.
4 Examples
In this section we consider two examples to illustrate our results. The first one, a driven two-level quantum system, is a classical and well known system in quantum computing. It is a good starting point since it has also been considered in the context of the exponential commutator-free approach [1]. Thus, we will be able to compare our scheme with the fourth-order commutator-free exponential time-propagator CFET4:2 derived in [1] for a system where we have an analytical solution. In the second example we consider the Schrödinger equation with a time-dependent Hamiltonian in dimension one.
Time-dependent Schrödinger equations with explicitly time-dependent potentials occur naturally in quantum optimal control, where the potential generally contains a time-dependent control function (laser profile, magnetic field, etc.). Several approaches based on splitting methods have been developed for this type of problem (see for instance [6, 25, 39]). The interest of our approach is to propose an alternative to the use of exponentials to integrate the solution. Indeed, in the context of quantum optimal control, the search for a solution involves an optimization process that usually requires a significant number of system integrations. The use of Cayley transforms instead of exponentials therefore makes sense, given that the geometric properties of the system are preserved while the solution is computed more cheaply.
For the first example, we consider a magnetic field such that we can analytically express the solution, which is taken as the reference solution. In the second example, we will consider CFET4:2 from [1] as the reference solution.
4.1 A driven two-level system
For our first test problem, we consider an example from [1] which is a driven two-level system, realized for instance by a spin $1/2$ in a magnetic field $\vec B(t) = (B_x(t), B_y(t), B_z(t))$. In the eigenbasis of the $z$-component of angular momentum, the Hamiltonian operator is defined by
$$H(t) = \frac{1}{2}\begin{pmatrix} B_z(t) & B_x(t) - iB_y(t)\\ B_x(t) + iB_y(t) & -B_z(t)\end{pmatrix}.$$
With the magnetic field $\vec B(t) = (2V\cos(2\omega t),\, 2V\sin(2\omega t),\, 2\Delta)$ the system is periodically driven and the propagator can be analytically expressed using Floquet theory. In this particular case, the Hamiltonian becomes
$$H(t) = \begin{pmatrix}\Delta & V e^{-2i\omega t}\\ V e^{2i\omega t} & -\Delta\end{pmatrix},$$
where $\Delta, V, \omega \in \mathbb{R}$. The system to solve is given by
$$\dot Y(t) = A(t)\,Y(t), \qquad Y(t_0) = I, \qquad A(t) = -iH(t), \tag{25}$$
with $Y(t) \in \mathrm{U}_2(\mathbb{C}) \subset \mathbb{C}^{2\times 2}$ and $t \in [0, T]$. The exact solution is given by
$$Y(t) = \begin{pmatrix} e^{-i\omega t}\big(\cos(\Lambda t) - i\frac{\Delta-\omega}{\Lambda}\sin(\Lambda t)\big) & -i\frac{V}{\Lambda}\,e^{-i\omega t}\sin(\Lambda t)\\[3pt] -i\frac{V}{\Lambda}\,e^{i\omega t}\sin(\Lambda t) & e^{i\omega t}\big(\cos(\Lambda t) + i\frac{\Delta-\omega}{\Lambda}\sin(\Lambda t)\big)\end{pmatrix},$$
with $\Lambda = \sqrt{(\Delta - \omega)^2 + V^2}$. Notice that, in accordance with Floquet theory for periodically driven systems, $Y(\pi n/\omega, 0) = Y(\pi/\omega, 0)^n$ for any given $n \in \mathbb{N}$. Also, the transition probability spin up $\to$ spin down,
$$P(t) = |Y_{21}(t, 0)|^2 = \Big(\frac{V}{\Lambda}\Big)^2\sin^2(\Lambda t),$$
is typical for a Breit–Wigner resonance.
For the numerical simulations, we solve the system (25) using different numerical schemes. The first one is the commutator-free exponential time-propagator (denoted as CFET4:2) from [1, Prop. 4.2], which is the exponential counterpart of the main method developed in this work. We also use the Cayley–Magnus time-propagator (denoted as CMT), given by equation (15) and already obtained by Iserles in [24], which contains nested commutators. Finally we use the main method derived in this paper, namely the commutator-free Cayley time-propagator (denoted as CFCT), given by
$$Y(t_{n+1}) \approx Y^{[\mathrm{CFCT}]}_{n+1} = \mathrm{Cay}\big(\alpha_{11}A_1(t_n, \delta t) + \alpha_{12}A_2(t_n, \delta t)\big)\,\mathrm{Cay}\big(\alpha_{21}A_1(t_n, \delta t)\big)\,\mathrm{Cay}\big(\alpha_{11}A_1(t_n, \delta t) - \alpha_{12}A_2(t_n, \delta t)\big)\,Y_n.$$
We propagate until $T = 20\pi/\omega$, taking $\omega = 1$, $\Delta = V = 0.5$. For the error analysis we consider the Euclidean norm in $\mathbb{C}^2$. The solutions as well as the error and the total energy during the propagation are displayed in Figure 1. Conservation of the norm (also displayed in the same figure) ensures the preservation of the transition probability $P(t)$ by the numerical scheme, which is not the case when using a classical scheme such as the Runge–Kutta method RK45.
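A sketch of this experiment, reusing the cfct_step helper from the sketch after Remark 6 (the Hamiltonian, exact propagator and parameter values follow the formulas above; the number of time steps is an arbitrary choice):

```python
import numpy as np

Delta = V = 0.5
omega = 1.0

def H(t):
    return np.array([[Delta, V * np.exp(-2j * omega * t)],
                     [V * np.exp(2j * omega * t), -Delta]])

A = lambda t: -1j * H(t)

Lam = np.sqrt((Delta - omega) ** 2 + V ** 2)
def Y_exact(t):
    c, s, d = np.cos(Lam * t), np.sin(Lam * t), (Delta - omega) / Lam
    return np.array([[np.exp(-1j * omega * t) * (c - 1j * d * s), -1j * V / Lam * np.exp(-1j * omega * t) * s],
                     [-1j * V / Lam * np.exp(1j * omega * t) * s,  np.exp(1j * omega * t) * (c + 1j * d * s)]])

T = 20 * np.pi / omega
N = 2000
dt = T / N
Y = np.eye(2, dtype=complex)
for n in range(N):
    Y = cfct_step(A, n * dt, dt, Y)

print(np.linalg.norm(Y - Y_exact(T)))              # discretisation error (scales like dt**4)
print(np.linalg.norm(Y.conj().T @ Y - np.eye(2)))  # unitarity preserved to round-off
```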
4.2 Linear time-dependent Schrödinger equation
We consider the one-dimensional time-dependent Schrödinger equation,
$$i\,\partial_t\varphi(x, t) = H(x, t)\,\varphi(x, t), \qquad \varphi(x, 0) = \varphi_0(x), \qquad t \in [0, T], \quad x \in D = [-L, L], \tag{26}$$
where the time-dependent Hamiltonian $H(x, t)$ is of the form
$$H(x, t) = -\partial_x^2 + V(x, t),$$
with the potential $V(x, t) = V_0(x) + u(t)\,x$ containing an external potential (with a fixed control term $u(t)$ in this example). Here, we consider the internal potential $V_0$ and the laser profile $u(t)$ to be given by
$$V_0(x) = x^4 - 10x^2, \qquad u(t) = c\sin(\omega t), \qquad c = 10^{-2}, \quad \omega = 5\pi.$$
Figure 1: (Left) Projection of the solution of system (25) along the first axis, together with the error obtained for CFET4:2, CMT and CFCT, taking $T = 20\pi/\omega$, $\omega = 1$ and $\Delta = V = 0.5$. (Right) Illustration of the norm conservation during the propagation for CFET4:2, CMT and CFCT. We can clearly see the loss of this property when using a classical integrator such as RK45.
We consider as initial state a Gaussian wave-packet,
$$\varphi_0(x) = e^{-\frac{(x - x_0)^2}{2\sigma^2}}, \qquad \sigma = 0.5, \quad x_0 = -2.$$
Following spatial discretisation, we have to solve the following system of ODEs,
$$\partial_t\varphi(t) = A(t)\,\varphi(t), \qquad \varphi(0) = \varphi_0, \qquad A(t) = -iH(t),$$
where $H(t)$ is now a matrix representation of the Hamiltonian. Specifically, we use a Fourier spectral discretization on an equispaced grid $-L = x_0, \dots, x_N = L$, after imposing periodic boundary conditions.
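A possible realisation of this set-up is sketched below with a dense Fourier differentiation matrix (a NumPy-only sketch; the grid size and box length are illustrative choices, and the article's implementation relies on the expsolve package [40] instead):

```python
import numpy as np

L, N = 5.0, 128                                   # box [-L, L], number of grid points
x = -L + 2 * L * np.arange(N) / N                 # equispaced periodic grid
k = np.fft.fftfreq(N, d=2 * L / N) * 2 * np.pi    # Fourier wave numbers

# second-derivative operator via the FFT, realised as a dense matrix for simplicity
F = np.fft.fft(np.eye(N), axis=0)
D2 = np.real(np.fft.ifft(-(k[:, None] ** 2) * F, axis=0))

V0 = x ** 4 - 10 * x ** 2                         # internal double-well potential
u = lambda t: 1e-2 * np.sin(5 * np.pi * t)        # laser profile u(t)

def A(t):
    """A(t) = -i H(t) with H(t) = -d^2/dx^2 + V0(x) + u(t) x."""
    return -1j * (-D2 + np.diag(V0 + u(t) * x))

phi0 = np.exp(-((x + 2.0) ** 2) / (2 * 0.5 ** 2))  # Gaussian wave packet, x0 = -2, sigma = 0.5
phi0 = phi0 / np.linalg.norm(phi0)
```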
We implement both CMT (with commutators) and CFCT (commutator-free) and compare both with the reference solution (which is given here by CFET4:2). For a time discretization $0 = t_0, \dots, t_M = T$, CMT and CFCT respectively read
$$\varphi^{[\mathrm{CMT}]}_{n+1} = \mathrm{Cay}\Big(A_1(t_n, \delta t) - \frac{1}{6}[A_1(t_n, \delta t), A_2(t_n, \delta t)] - \frac{1}{12}A_1^3(t_n, \delta t)\Big)\,\varphi^{[\mathrm{CMT}]}_n,$$
$$\varphi^{[\mathrm{CFCT}]}_{n+1} = \mathrm{Cay}\big(\alpha_{11}A_1 + \alpha_{12}A_2\big)\,\mathrm{Cay}\big(\alpha_{21}A_1\big)\,\mathrm{Cay}\big(\alpha_{11}A_1 - \alpha_{12}A_2\big)\,\varphi^{[\mathrm{CFCT}]}_n,$$
with $\alpha_{11}, \alpha_{12}, \alpha_{21}, \alpha_{22}, \alpha_{31}, \alpha_{32}$ defined in (23), where
$$A_1 = \frac{\delta t}{2}(\mathcal{A}_1 + \mathcal{A}_2), \qquad A_2 = \frac{\sqrt{3}\,\delta t}{2}(\mathcal{A}_2 - \mathcal{A}_1),$$
and
$$\mathcal{A}_1 = A\Big(t_n + \Big(\frac{1}{2} - \frac{\sqrt{3}}{6}\Big)\delta t\Big), \qquad \mathcal{A}_2 = A\Big(t_n + \Big(\frac{1}{2} + \frac{\sqrt{3}}{6}\Big)\delta t\Big).$$
Numerical simulations. We check the conservation of the state norm $\|\varphi(\cdot, t)\|_{L^2(D)}$ for each of these two methods and compute the error with respect to CFET4:2, see Figure 2. In addition, we compute the energy change during the propagation, Figure 2. Note that, since we have a time-dependent Hamiltonian, the energy is no longer conserved during the propagation. In Figure 3 one can see the blow-up of the energy when considering the RK45 scheme.
The implementation of these methods is done using the expsolve package [40], which is utilized for initializing the Hamiltonian, computing observables such as the energy, and computing the $L^2$ norms and inner products (using the l2norm and l2inner methods).
Figure 2: (Left) Solution of system (26) together with the error obtained for CMT and CFCT (taking CFET4:2 as reference solution), propagated until $T = 2$. (Right) Illustration of the norm conservation and the change of the energy during the propagation for CFET4:2, CMT and CFCT. Again, there is no conservation of the norm along the propagation when using the classical integrator RK45.
A Details on the computation of coefficients
Writing down all the order conditions obtained by equating the expansion of $\widetilde\Omega$ in Section 3.2 with relation (15), one arrives at the system
$$\begin{aligned}
&\alpha_{11} + \alpha_{21} + \alpha_{31} = 1,\\
&\alpha_{12} + \alpha_{22} + \alpha_{32} = 0,\\
&\alpha_{11}\alpha_{22} + \alpha_{11}\alpha_{32} + \alpha_{21}\alpha_{32} - \alpha_{12}\alpha_{21} - \alpha_{12}\alpha_{31} - \alpha_{22}\alpha_{31} = -\frac{1}{3},\\
&2\alpha_{11}\alpha_{21}\alpha_{31} + \alpha_{11}^2\alpha_{21} + \alpha_{11}\alpha_{21}^2 + \alpha_{11}^2\alpha_{31} + \alpha_{11}\alpha_{31}^2 + \alpha_{21}^2\alpha_{31} + \alpha_{21}\alpha_{31}^2 = \frac{1}{3},\\
&2\alpha_{11}\alpha_{22}\alpha_{31} + \alpha_{11}\alpha_{12}\alpha_{21} + \alpha_{11}\alpha_{12}\alpha_{31} + \alpha_{11}\alpha_{21}\alpha_{22} + \alpha_{11}\alpha_{31}\alpha_{32} + \alpha_{21}\alpha_{22}\alpha_{31} + \alpha_{21}\alpha_{31}\alpha_{32} = 0,\\
&2\alpha_{11}\alpha_{21}\alpha_{32} + \alpha_{11}^2\alpha_{22} + \alpha_{11}^2\alpha_{32} + \alpha_{12}\alpha_{21}^2 + \alpha_{21}^2\alpha_{32} + \alpha_{12}\alpha_{31}^2 + \alpha_{22}\alpha_{31}^2 + 2\alpha_{12}\alpha_{21}\alpha_{31} - 2\alpha_{11}\alpha_{22}\alpha_{31} = 0.
\end{aligned}$$
Figure 3: Illustration of the energy blow-up when using RK45 to solve example 2.
We solved this system using a computer algebra system (Maple) for real-valued coefficients $(\alpha_{11}, \alpha_{12}, \alpha_{21}, \alpha_{22}, \alpha_{31}, \alpha_{32})$ satisfying the required conditions given in Section 3.
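The reported solution (23) can also be checked by substituting it into the system above; a small numerical sketch (assuming NumPy):

```python
import numpy as np

a11 = 2.0 ** (1 / 3) / 3 + 2.0 ** (2 / 3) / 6 + 2.0 / 3
a21, a31 = 1.0 - 2.0 * a11, a11
a12, a22, a32 = a11 - a11 ** 2, 0.0, -(a11 - a11 ** 2)

residuals = [
    a11 + a21 + a31 - 1.0,
    a12 + a22 + a32,
    a11 * a22 + a11 * a32 + a21 * a32 - a12 * a21 - a12 * a31 - a22 * a31 + 1.0 / 3.0,
    2 * a11 * a21 * a31 + a11**2 * a21 + a11 * a21**2 + a11**2 * a31
      + a11 * a31**2 + a21**2 * a31 + a21 * a31**2 - 1.0 / 3.0,
    2 * a11 * a22 * a31 + a11 * a12 * a21 + a11 * a12 * a31 + a11 * a21 * a22
      + a11 * a31 * a32 + a21 * a22 * a31 + a21 * a31 * a32,
    2 * a11 * a21 * a32 + a11**2 * a22 + a11**2 * a32 + a12 * a21**2 + a21**2 * a32
      + a12 * a31**2 + a22 * a31**2 + 2 * a12 * a21 * a31 - 2 * a11 * a22 * a31,
]
print(np.max(np.abs(residuals)))   # ~1e-16 if (23) satisfies the order conditions
```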
B Details on the proof of Proposition 3.1
Recall that one has
$$\begin{aligned}
\Omega(A, B, C) ={}& -2\bigg(I - \Big(I - \frac{A}{2}\Big)^{-1}\Big(I + \frac{A}{2}\Big)\Big(I - \frac{B}{2}\Big)^{-1}\Big(I + \frac{B}{2}\Big)\Big(I - \frac{C}{2}\Big)^{-1}\Big(I + \frac{C}{2}\Big)\bigg)\\
&\cdot\bigg(I + \Big(I - \frac{A}{2}\Big)^{-1}\Big(I + \frac{A}{2}\Big)\Big(I - \frac{B}{2}\Big)^{-1}\Big(I + \frac{B}{2}\Big)\Big(I - \frac{C}{2}\Big)^{-1}\Big(I + \frac{C}{2}\Big)\bigg)^{-1}.
\end{aligned}$$
Taylor expansion of this relation gives
$$\begin{aligned}
\Omega ={}& -2\bigg(I - \Big(I + \frac{A}{2} + \frac{A^2}{4} + \frac{A^3}{8}\Big)\Big(I + \frac{A}{2}\Big)\Big(I + \frac{B}{2} + \frac{B^2}{4} + \frac{B^3}{8}\Big)\Big(I + \frac{B}{2}\Big)\Big(I + \frac{C}{2} + \frac{C^2}{4} + \frac{C^3}{8}\Big)\Big(I + \frac{C}{2}\Big)\bigg)\\
&\cdot\bigg(I + \Big(I + \frac{A}{2} + \frac{A^2}{4} + \frac{A^3}{8}\Big)\Big(I + \frac{A}{2}\Big)\Big(I + \frac{B}{2} + \frac{B^2}{4} + \frac{B^3}{8}\Big)\Big(I + \frac{B}{2}\Big)\Big(I + \frac{C}{2} + \frac{C^2}{4} + \frac{C^3}{8}\Big)\Big(I + \frac{C}{2}\Big)\bigg)^{-1} + F(A, B, C)\\
={}& -2\bigg(I - \Big(I + A + \frac{A^2}{2} + \frac{A^3}{4}\Big)\Big(I + B + \frac{B^2}{2} + \frac{B^3}{4}\Big)\Big(I + C + \frac{C^2}{2} + \frac{C^3}{4}\Big)\bigg)\\
&\cdot\bigg(I + \Big(I + A + \frac{A^2}{2} + \frac{A^3}{4}\Big)\Big(I + B + \frac{B^2}{2} + \frac{B^3}{4}\Big)\Big(I + C + \frac{C^2}{2} + \frac{C^3}{4}\Big)\bigg)^{-1} + F(A, B, C)\\
={}& \Big(A + B + C + AB + AC + BC + ABC + \frac{A^2}{2} + \frac{B^2}{2} + \frac{C^2}{2} + \frac{A^3}{4} + \frac{B^3}{4} + \frac{C^3}{4} + \frac{A^2B}{2} + \frac{AB^2}{2} + \frac{A^2C}{2} + \frac{AC^2}{2} + \frac{B^2C}{2} + \frac{BC^2}{2}\Big)\\
&\cdot\bigg(I + \Big(\frac{A}{2} + \frac{B}{2} + \frac{C}{2} + \frac{AB}{2} + \frac{AC}{2} + \frac{BC}{2} + \frac{ABC}{2} + \frac{A^2}{4} + \frac{B^2}{4} + \frac{C^2}{4} + \frac{A^2B}{4} + \frac{AB^2}{4} + \frac{A^2C}{4} + \frac{AC^2}{4} + \frac{B^2C}{4} + \frac{BC^2}{4} + \frac{A^3}{8} + \frac{B^3}{8} + \frac{C^3}{8}\Big)\bigg)^{-1}\\
&\quad + F(A, B, C)\\
={}& \Big(A + B + C + AB + AC + BC + ABC + \frac{A^2}{2} + \frac{B^2}{2} + \frac{C^2}{2} + \frac{A^3}{4} + \frac{B^3}{4} + \frac{C^3}{4} + \frac{A^2B}{2} + \frac{AB^2}{2} + \frac{A^2C}{2} + \frac{AC^2}{2} + \frac{B^2C}{2} + \frac{BC^2}{2}\Big)\\
&\cdot\bigg(I - \frac{A}{2} - \frac{B}{2} - \frac{C}{2} - \frac{AB}{2} - \frac{AC}{2} - \frac{BC}{2} - \frac{A^2}{4} - \frac{B^2}{4} - \frac{C^2}{4} + \Big(\frac{A}{2} + \frac{B}{2} + \frac{C}{2} + \frac{AB}{2} + \frac{AC}{2} + \frac{BC}{2} + \frac{A^2}{4} + \frac{B^2}{4} + \frac{C^2}{4}\Big)^2\bigg) + F(A, B, C)\\
={}& \Big(A + B + C + AB + AC + BC + ABC + \frac{A^2}{2} + \frac{B^2}{2} + \frac{C^2}{2} + \frac{A^3}{4} + \frac{B^3}{4} + \frac{C^3}{4} + \frac{A^2B}{2} + \frac{AB^2}{2} + \frac{A^2C}{2} + \frac{AC^2}{2} + \frac{B^2C}{2} + \frac{BC^2}{2}\Big)\\
&\cdot\Big(I - \frac{A}{2} - \frac{B}{2} - \frac{C}{2} - \frac{AB}{4} - \frac{AC}{4} - \frac{BC}{4} + \frac{BA}{4} + \frac{CA}{4} + \frac{CB}{4}\Big) + F(A, B, C)\\
={}& A + B + C + \frac{1}{2}\big([A, B] + [A, C] + [B, C]\big) + \frac{1}{4}[[A, B], C] - \frac{1}{4}(ACB + BCA)\\
&\quad - \frac{1}{4}(ABA + ACA + BAB + BCB + CAC + CBC) + F(A, B, C),
\end{aligned}$$
with the terms in $F(A, B, C)$ being of at least fourth order. Since we assumed that $A$, $B$ and $C$ are in a neighborhood of zero, the series converges. Moreover, the Neumann series for $(I - A/2)^{-1}$ converges and the term $(I + \mathrm{Cay}(A)\,\mathrm{Cay}(B)\,\mathrm{Cay}(C))^{-1}$ becomes
$$\begin{aligned}
&\bigg(2I + \Big(A + B + C + AB + AC + BC + ABC + \frac{A^2}{2} + \frac{B^2}{2} + \frac{C^2}{2} + \frac{A^2B}{2} + \frac{AB^2}{2} + \frac{A^2C}{2} + \frac{AC^2}{2} + \frac{B^2C}{2} + \frac{BC^2}{2} + \frac{A^3}{4} + \frac{B^3}{4} + \frac{C^3}{4}\Big) + \widetilde F(A, B, C)\bigg)^{-1}\\
&= \frac{1}{2}\bigg(I + \frac{1}{2}\Big(A + B + C + AB + AC + BC + ABC + \frac{A^2}{2} + \frac{B^2}{2} + \frac{C^2}{2} + \frac{A^2B}{2} + \frac{AB^2}{2} + \frac{A^2C}{2} + \frac{AC^2}{2} + \frac{B^2C}{2} + \frac{BC^2}{2} + \frac{A^3}{4} + \frac{B^3}{4} + \frac{C^3}{4}\Big) + \frac{1}{2}\widetilde F(A, B, C)\bigg)^{-1},
\end{aligned}$$
where $\widetilde F(A, B, C)$ contains terms related to the truncated terms of the series expansions of $(I - \frac{A}{2})^{-1}$, $(I - \frac{B}{2})^{-1}$, and $(I - \frac{C}{2})^{-1}$. Once again, these converge and $\widetilde F(A, B, C)$ is small. Thus, $\frac{1}{2}\big(I + \mathrm{Cay}(A)\,\mathrm{Cay}(B)\,\mathrm{Cay}(C)\big)$ is a small perturbation of the identity such that the Neumann series for $\big(I + \mathrm{Cay}(A)\,\mathrm{Cay}(B)\,\mathrm{Cay}(C)\big)^{-1}$ converges.
References
[1] A. Alvermann, H. Fehske, High-order commutator-free exponential time-propagation
of driven quantum systems, J. Comput. Phys., 230 (2011), pp. 5930-5956,
DOI:10.1016/j.jcp.2011.04.006.
[2] W. Auzinger, T. Kassebacher, O. Koch and M. Thalhammer, Adaptive splitting methods for nonlinear Schrödinger equations in the semiclassical regime, Numer. Algor. 72 (2016), pp. 1–35, DOI:10.1007/s11075-015-0032-4.
[3] W. Auzinger, J. Dubois, K. Held, H. Hofstätter, T. Jawecki, A. Kauch, O. Koch, K. Kropielnicka, P. Singh, C. Watzenböck, Efficient Magnus-type integrators for solar energy conversion in Hubbard models, Journal of Computational Mathematics and Data Science, 2 (2022), 100018, DOI:10.1016/j.jcmds.2021.100018.
[4] J. Baum, R. Tycko, A. Pines, Broadband population inversion by phase modulated
pulses, J. Chem. Phys. 79 (1983), no. 9, pp. 4643–4644, DOI:10.1063/1.446381.
[5] S. Blanes, and F. Casas, A concise introduction to geometric numerical integration
(1st ed.), Chapman and Hall/CRC (2016). DOI:10.1201/b21563
[6] S. Blanes, F. Casas, and A. Murua, Splitting methods for differential equations,
arXiv:2401.01722 (2024), DOI:10.48550/arXiv.2401.01722.
[7] S. Blanes, F. Casas, J. A. Oteo, J. Ros, The Magnus expansion and some of its ap-
plications, Physics Reports 470 (2009), no. 151, DOI:10.1016/j.physrep.2008.11.001.
[8] S. Blanes, P.C. Moan, Fourth- and sixth-order commutator-free Magnus integrators
for linear and non-linear dynamical systems, Applied Numerical Mathematics, 56
(2006), no. 12, pp. 1519-1537, DOI:10.1016/j.apnum.2005.11.004.
[9] C. Brif, R. Chakrabarti and H. Rabitz Control of quantum phenomena: past, present
and future New J. Phys. 12 (2010), 075008, DOI:10.1088/1367-2630/12/7/075008.
[10] E. Celledoni, E. Çokaj, A. Leone, D. Murari and B. Owren, Lie group integrators for mechanical systems, International Journal of Computer Mathematics, 99 (2022), no. 1, pp. 58–88, DOI:10.1080/00207160.2021.1966772.
[11] G. Chen, M. Foroozandeh, C. Budd, P. Singh, Quantum simulation of highly-
oscillatory many-body Hamiltonians for near-term devices arXiv:2312.08310
DOI:10.48550/arXiv.2312.08310.
[12] L. Dieci, R.D. Russell, E. Van Vleck, Unitary Integrators and Applications to Con-
tinuous Orthonormalization Techniques, SIAM Journal on Numerical Analysis, 31
(1994), no. 1, pp. 261-281, DOI:10.1137/0731014.
[13] F. Diele, L. Lopez and R. Peluso, The Cayley transform in the numerical solution
of unitary differential systems, Advances in Computational Mathematics 8(1998),
pp 317–334, DOI:10.1023/A:1018908700358.
[14] M. Foroozandeh, P. Singh Optimal Control of Spins by Analyti-
cal Lie Algebraic Derivatives Automatica 129 (2021), no. 109611
DOI:10.1016/j.automatica.2021.109611.
[15] S.J. Glaser, U. Boscain, T. Calarco, C.P. Koch, W. Köckenberger, R. Kosloff, I. Kuprov, B. Luy, S. Schirmer, T. Schulte-Herbrüggen, D. Sugny, F.K. Wilhelm, Training Schrödinger's cat: quantum optimal control, Eur. Phys. J. D 69 (2015), no. 12, DOI:10.1140/epjd/e2015-60464-1.
[16] S. G¨uttel, Rational Krylov approximation of matrix functions: Numerical
methods and optimal pole selection, GAMM-Mitt., 36 (2013), pp. 8–31,
DOI:10.1002/gamm.201310002
[17] V. Grimm, Resolvent Krylov subspace approximation to operator functions, BIT.
Numerical Mathematics 52 (2012), pp. 639-659, DOI:10.1007/s10543-011-0367-8.
[18] B.C. Hall, Lie Groups, Lie Algebras, and Representations, Graduate Texts in Math-
ematics, Springer, New York (2003), DOI:10.1007/978-3-319-13467-3.
[19] P. Hänggi, Driven quantum systems, in: T. Dittrich, P. Hänggi, G.-L. Ingold, B. Kramer, G. Schön, W. Zwerger (Eds.), Quantum Transport and Dissipation, Wiley-VCH, Weinheim, 1997, pp. 249–286, https://books.google.de/books.
[20] E. Hairer, C. Lubich, G. Wanner, Geometric Numerical Integration, Springer, Berlin (2006), DOI:10.1007/3-540-30666-8.
[21] M. Hochbruck and C. Lubich, On Magnus Integrators for Time-Dependent Schrödinger Equations, SIAM Journal on Numerical Analysis, 41 (2003), no. 3, DOI:10.1137/S0036142902403875.
[22] M. Hochbruck and C. Lubich, On Krylov Subspace Approximations to the Matrix Exponential Operator, SIAM Journal on Numerical Analysis, 34 (1997), no. 5, DOI:10.1137/S0036142995280572.
[23] U. Hohenester, P.K. Rekdal, A. Borzì, J. Schmiedmayer, Optimal quantum control of Bose-Einstein condensates in magnetic microtraps, Physical Review A, 75, Issue 2, 023602, DOI:10.1103/PhysRevA.90.033628.
[24] A. Iserles, On Cayley-transform methods for the discretization of Lie-group equations, Found. Comput. Math. 1 (2001), pp. 129–160, DOI:10.1007/s102080010003.
[25] A. Iserles, K. Kropielnicka, and P. Singh, Magnus–Lanczos Methods with Simplified Commutators for the Schrödinger Equation with a Time-Dependent Potential, SIAM Journal on Numerical Analysis, 56 (2018), no. 3, DOI:10.1137/17M1149833.
[26] P. Bader, A. Iserles, K. Kropielnicka, and P. Singh, Efficient methods for linear Schrödinger equation in the semiclassical regime with time-dependent potential, Proc. R. Soc. A 472:20150733 (2016), DOI:10.1098/rspa.2015.0733.
[27] A. Iserles, H.Z. Munthe-Kaas, S.P. Nørsett and A. Zanna, Lie-group methods, Acta Numerica 9 (2000), pp. 215–365, DOI:10.1017/S0962492900002154.
[28] A. Iserles and A. Zanna, On the Dimension of Certain Graded Lie Algebras Arising
in Geometric Integration of Differential Equations. LMS Journal of Computation and
Mathematics, 3(2000), pp. 44-75. DOI:10.1112/S1461157000000206.
[29] K. Ito and K. Kunisch Optimal bilinear control of an abstract Schr¨odinger
equation SIAM Journal on Control and OptimizationVol. 46 (2007), Iss. 1,
DOI:10.1137/05064254X.
[30] T. Jawecki, P. Singh, Unitary rational best approximations to the exponential func-
tion, arxiv:2312.13809.
[31] K. Kormann, S. Holmgren, H.O. Karlsson, Accurate time propagation for the Schrödinger equation with an explicitly time-dependent Hamiltonian, J. Chem. Phys. 128 (2008), no. 18, 184101, DOI:10.1063/1.2916581.
[32] R.I. McLachlan and G.R.W. Quispel, Splitting methods, Acta Numerica 11 (2002), pp. 341–434, DOI:10.1017/S0962492902000053.
[33] A. Marthinsen and B. Owren, Quadrature methods based on the Cayley
transform, Applied Numerical Mathematics, 39 (2001), no. 3–4, pp. 403-413,
DOI:10.1016/S0168-9274(01)00087-3.
[34] W. Magnus, On the exponential solution of differential equations for a linear operator
Commun. Pure Appl. Math. 7(1954), pp. 649-673, DOI:10.1002/cpa.3160070404.
[35] D.E. Manolopoulos, Derivation and reflection properties of a transmission-free absorb-
ing potential, J. Chem. Phys. 117 (2002), pp. 9552–9559, DOI:10.1063/1.1517042.
[36] C. Moler and C. Van Loan, Nineteen Dubious Ways to Compute the Exponential of a Matrix, Twenty-Five Years Later, SIAM Review 45 (2003), pp. 3–49, DOI:10.1137/S00361445024180.
[37] J.A. Oteo, The Baker–Campbell–Hausdorff formula and nested commutator identi-
ties, J. Math. Phys. 32 (1991), pp. 419–424, DOI:10.1063/1.529428.
[38] H. Munthe–Kaas and B. Owren Computations in a free Lie algebra Phil. Trans. R.
Soc. A. 357 (1999), pp. 957–981 DOI:10.1098/rsta.1999.0361.
[39] P. Singh, Sixth-order schemes for laser–matter interaction in the Schrödinger equation, J. Chem. Phys. 150 (2019), no. 15, 154111, DOI:10.1063/1.5065902.
[40] P. Singh and D. Goodacre, expsolve: A differentiable numerical algorithms package for
computational quantum mechanics, Zenodo (2024). DOI:10.5281/zenodo.13121390.
[41] Y. Saad, Iterative methods for sparse linear systems, Society for Industrial and
Applied Mathematics, 2003.
[42] E. Van Reeth, H. Rafiney, M. Tesch, S.J. Glaser, D. Sugny, Optimizing MRI contrast with B1 pulses using optimal control theory, in: Proc. IEEE Int. Symp. Biomed. Imaging, 2016, pp. 310–313, DOI:10.1109/ISBI.2016.7493271.
[43] A.J. Wathen, Preconditioning, Acta Numerica 24 (2015), pp. 329–376, DOI:10.1017/S0962492915000021.