Lindblad engineering for quantum Gibbs state preparation
under the eigenstate thermalization hypothesis
Eric Brunner,1, ∗ Luuk Coopmans,1 Gabriel Matos,1, 2
Matthias Rosenkranz,1, † Frederic Sauvage,1 and Yuta Kikuchi3, 4
1Quantinuum, Partnership House, Carlisle Place, London SW1P 1BX, United Kingdom
2Quantinuum, 17 Beaumont St., Oxford OX1 2NA, United Kingdom
3Quantinuum K.K., Otemachi Financial City Grand Cube 3F, 1-9-2 Otemachi, Chiyoda-ku, Tokyo, Japan
4Interdisciplinary Theoretical and Mathematical Sciences Program (iTHEMS), RIKEN, Wako, Saitama 351-0198, Japan
∗ eric.brunner@quantinuum.com
† matthias.rosenkranz@quantinuum.com
Building upon recent progress in Lindblad engineering for quantum Gibbs state preparation algo-
rithms, we propose a simplified protocol that is shown to be efficient under the eigenstate thermal-
ization hypothesis (ETH). The ETH reduces circuit overhead of the Lindblad simulation algorithm
and ensures a fast convergence toward the target Gibbs state. Moreover, we show that the realized
Lindblad dynamics exhibits resilience against depolarizing noise, opening up the path to a first
demonstration on quantum computers. We complement our claims with numerical studies of the
algorithm’s convergence in various regimes of the mixed-field Ising model. In line with our predic-
tions, we observe a mixing time scaling polynomially with system size when the ETH is satisfied.
In addition, we assess the impact of algorithmic and hardware-induced errors on the algorithm’s
performance by carrying out quantum circuit simulations of our Lindblad simulation protocol with
realistic noise models. This work bridges the gap between recent theoretical advances in Gibbs state
preparation algorithms and their eventual quantum hardware implementation.
I. INTRODUCTION
The simulation of quantum systems is poised to be one
of the most promising applications of quantum comput-
ing. This task typically requires the accurate preparation
of relevant initial quantum states. In particular, Gibbs
states at an inverse temperature β and for a quantum (non-commuting) Hamiltonian H,

σ_β = e^{−βH} / Tr[e^{−βH}],  (1)
play an essential role in understanding the thermal
properties of quantum many-body systems. Further-
more, Gibbs states serve as key resources in a variety
of quantum algorithms for optimization problems, ma-
chine learning tasks, and more (see [1,2] and references
therein). Such demands have led to significant research
efforts to develop quantum algorithms for Gibbs state
preparation [3–15].
Inspired by the successful application of Markov chain
Monte Carlo algorithms to a wide range of practical
computational problems, including classical Gibbs sam-
pling [16,17], the authors in [5,7,18,19] investigated
their quantum analogue for sampling from Gibbs states of
quantum Hamiltonians. Inspired by thermalization phe-
nomena occurring in nature [20], there has also been a
broad interest in utilizing dissipative quantum dynamics,
described under the Markov assumption by the Lindblad
equation [21], to prepare quantum thermal states on a
quantum computer [3, 22–34]. The underlying mechanism is the existence of an (often unique) steady state, which can be made close to a target state through a specific design of the underlying Lindbladian [35–37].
Recently, the authors of [28,29] have resolved several
outstanding obstacles in the engineering of such specific
Lindbladians, which are both efficiently simulatable and
have the desired Gibbs state as their unique steady state.
From a complexity theory perspective, low-
temperature Gibbs states are hard to prepare even
with a quantum computer in the worst case [38]. Still,
it is anticipated that quantum algorithms simulating
Lindblad dynamics can efficiently prepare certain Gibbs
states of interest [3941] that would otherwise be hard
to sample from. The efficiency of these algorithms is
assessed in terms of the resources required to implement
the Lindblad evolution as a quantum circuit and the
mixing time of the dynamics. The latter characterizes
the time for any initial state to converge close to the
steady state of the Lindbladian.
Despite remarkable advances in the design and simu-
lation of adequate Lindbladians, a gap remains between
theoretical results in idealized scenarios and their prac-
tical application. Such practical considerations include
the noise resilience of the proposed algorithms or nu-
merical studies of their convergence. We address this
gap by proposing a version of the Gibbs state prepa-
ration algorithms [28–31, 34] with reduced quantum re-
source requirements, potentially facilitating a near-term
hardware demonstration. We provide a comprehensive
numerical investigation of convergence characteristics of
Lindblad dynamics in various settings, establishing the
crucial roles played by both the jump operators and the
system Hamiltonian. In addition, we analyze algorithmic
errors and the noise susceptibility of our proposal—both
analytically and based on simulations of the correspond-
ing quantum circuits.
In Sec. II we recall the necessary background regard-
ing Lindbladians, their mixing time and steady states.
In Sec. III and IV we introduce our variant of a quan-
tum Gibbs state preparation algorithm. Crucial to our
algorithm is the compliance of the system Hamiltonian
H and the selected jump operators with the eigenstate
thermalization hypothesis (ETH) [42–45]. As presented
in Sec. III, the ETH guarantees fast and accurate con-
vergence of the Lindblad evolution towards the desired
Gibbs state, as already observed in [46] for a slightly
different setup. We provide a circuit implementation of
our algorithm in Sec. IV, together with a detailed anal-
ysis of the algorithmic errors incurred. In Sec. V we
complement our theoretical investigations with extensive
numerical simulations of the Lindblad dynamics for a
1D mixed-field Ising model, in an idealized case with-
out noise. Specifically, we investigate the influence of
the ETH and the locality and number of jump operators
on the convergence characteristics of the evolution. In
Sec. VI we study errors induced by noise. Unlike unitary
dynamics—which always converges towards the maxi-
mally mixed state under the influence of finite depolar-
izing noise—Lindblad dynamics has a nontrivial steady
state under this noise model. Furthermore, we show
that for stochastic noise, errors in the prepared state are
smaller than expected. This showcases some inherent ro-
bustness of our approach to noise, which is crucial for
a potential demonstration of this class of algorithms on
quantum hardware. These findings are corroborated with
quantum circuit simulations, taking into account a local
depolarizing noise model, in Sec. VII. In this section we
also investigate numerically the effect of our algorithm’s
main parameters on its performance, and study the in-
terplay between algorithmic and noise-induced errors for
this noise model.
II. PRELIMINARIES
In this section, we briefly summarize the concepts of
Lindblad dynamics, steady states, detailed balance and
mixing time. These form the basis for the discussions in
the subsequent sections. The Lindblad equation,
dρ
dt(t) = L[ρ(t)],(2)
describes the dissipative dynamics of an open quantum
system, which is formally solved as ρ(t)=etL[ρ(0)] for
an initial state ρ(0). The Lindbladian Lis the generator
of the dynamics and can be written as follows:
L[ρ] = i[G, ρ] + X
aA
γaLaρLa1
2{LaLa, ρ}.(3)
The first term in Eq. (3), i[G, ρ], captures the coherent
(unitary) part of the dynamics. Gis Hermitian and in
the present work we set G=H, where His the system
Hamiltonian of the Gibbs state σβin Eq. (1). The re-
maining terms are responsible for the dissipation along
the evolution: the transition term PaAγaLaρLaand
the decay term 1
2PaAγa{LaLa, ρ}. The set of in-
dices Aselects suitable Lindblad operators Lathat drive
the dissipation and γaare transition weights.
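For the small-system numerics referred to later in the paper, the Lindbladian (3) can be represented as a matrix acting on vectorized density matrices. The following minimal NumPy sketch (ours, not the paper's implementation; names and the column-stacking convention are our choices) illustrates this construction.

```python
import numpy as np

def lindbladian_superoperator(G, jump_ops, gammas):
    """Matrix of Eq. (3) acting on column-stacked density matrices:
    vec(L[rho]) = L_super @ rho.flatten(order="F")."""
    d = G.shape[0]
    I = np.eye(d)
    # coherent part: -i[G, rho]  ->  -i (I (x) G - G^T (x) I)
    L_super = -1j * (np.kron(I, G) - np.kron(G.T, I))
    for gamma, L in zip(gammas, jump_ops):
        LdL = L.conj().T @ L
        # transition term  L rho L^dag  ->  conj(L) (x) L
        L_super += gamma * np.kron(L.conj(), L)
        # decay term  -1/2 {L^dag L, rho}
        L_super -= 0.5 * gamma * (np.kron(I, LdL) + np.kron(LdL.T, I))
    return L_super
```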
Quantum detailed balance—a generalization of the
classical notion of detailed balance of Markov chains—
controls the steady state of the Lindblad dynamics. A steady state σ of L is defined by e^{tL}[σ] = σ for all t ≥ 0, or, equivalently, L[σ] = 0. Throughout this work, we assume that the Lindblad dynamics has a full-rank and unique steady state, denoted ρ_∞. The latter can be ensured by choosing the coherent term and Lindblad operators such that only multiples of the identity commute with them [47, Theorem 3].
We adopt the Kubo–Martin–Schwinger (KMS) inner product to define quantum detailed balance.¹ For a full-rank density matrix σ, the KMS inner product is defined as the weighted scalar product [49, 51–53],

⟨X, Y⟩_σ := Tr[X^† σ^{1/2} Y σ^{1/2}],  (4)

for bounded operators X, Y. We denote by L^† the adjoint of L with respect to the Hilbert–Schmidt inner product, i.e. ⟨X, L[Y]⟩_HS = ⟨L^†[X], Y⟩_HS with ⟨X, Y⟩_HS := Tr[X^† Y]. Then, we say that the Lindbladian L obeys the σ-detailed balance (σ-DB) condition if L^† is self-adjoint with respect to the KMS inner product (4),

⟨X, L^†[Y]⟩_σ = ⟨L^†[X], Y⟩_σ  (5)

for all bounded operators X, Y. For a Lindbladian L that obeys the σ-DB condition, we have for all X

⟨L[σ], X⟩_HS = ⟨I, L^†[X]⟩_σ = ⟨L^†[I], X⟩_σ = 0,  (6)

where I is the identity operator. We used the σ-DB condition (5) in the second equality and L^†[I] = 0 in the last equality. When a Lindbladian obeys the σ-DB condition, the corresponding Lindblad dynamics converges to the steady state σ in the infinite time limit [54, Proposition 7.5].
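As an illustration of the definitions above, the short sketch below (ours; operator inputs are assumed to be dense NumPy arrays) evaluates the KMS inner product (4) and the residual of the σ-DB condition (5) for the Heisenberg-picture generator L^†.

```python
import numpy as np

def kms_inner(X, Y, sigma):
    """KMS inner product <X, Y>_sigma = Tr[X^dag sigma^{1/2} Y sigma^{1/2}], Eq. (4)."""
    w, V = np.linalg.eigh(sigma)
    sqrt_sigma = (V * np.sqrt(w)) @ V.conj().T
    return np.trace(X.conj().T @ sqrt_sigma @ Y @ sqrt_sigma)

def lindblad_dagger(X, G, jump_ops, gammas):
    """Heisenberg-picture (Hilbert-Schmidt adjoint) generator L^dag[X]."""
    out = 1j * (G @ X - X @ G)
    for g, L in zip(gammas, jump_ops):
        LdL = L.conj().T @ L
        out += g * (L.conj().T @ X @ L - 0.5 * (LdL @ X + X @ LdL))
    return out

def db_violation(G, jump_ops, gammas, sigma, X, Y):
    """Residual of the sigma-DB condition, Eq. (5); zero for exact detailed balance."""
    lhs = kms_inner(X, lindblad_dagger(Y, G, jump_ops, gammas), sigma)
    rhs = kms_inner(lindblad_dagger(X, G, jump_ops, gammas), Y, sigma)
    return abs(lhs - rhs)
```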
To prepare the quantum Gibbs state σ_β (1), we wish to engineer a Lindbladian L that satisfies the σ_β-DB condition. When this is only approximately satisfied, the steady state ρ_∞ of L inevitably deviates from the target σ_β. In such a case it is important to control the convergence accuracy ∥ρ_∞ − σ_β∥_1 of the steady state compared to the Gibbs state, as quantified here via the trace distance. Furthermore, to use this construction algorithmically, the speed of convergence and the efficient implementability of the Lindblad dynamics are crucial (see Sec. III). We quantify the convergence speed via the mixing time t_mix, i.e. the time it takes to approach the steady state under the Lindblad dynamics from any initial state ρ in trace distance:

t_mix(ϵ) := inf{ t ≥ 0 | ∀ρ: ∥e^{tL}[ρ] − ρ_∞∥_1 ≤ ϵ }.  (7)

¹ Several other notions exist in the literature [48–50].
Contribution and relation to prior works
Our protocol builds upon recent advances in Lind-
blad engineering for quantum Gibbs state preparation,
with the aim to reduce circuit complexity and facilitate
a potential near-term hardware demonstration. We fol-
low [28], whose authors proposed an efficiently implementable quantum algorithm to prepare Gibbs states by simulating the Lindblad evolution (2) with a carefully designed L which is approximately σ_β-DB. Key to their algorithm is the use of Lindblad operators in the form of a filtered operator Fourier transform (8), which controls the trade-off between convergence accuracy and required resources for preparing the target Gibbs state. In subsequent work, the authors of [29] proposed G = G_CKG, Eq. (A9), as the coherent part of the Lindbladian and showed that it satisfies the σ_β-DB condition exactly while being efficiently implementable. Despite a scalable implementation (i.e. circuit complexity polynomial in system size), its overhead likely remains prohibitive for an implementation on near-term quantum hardware.
In our contribution, we simplify the construction of [28] by using a discrete set of Lindblad operators [31], instead of the continuous set resulting from the Gaussian and Metropolis-like transition weights used in [29] (see App. A 2). In addition, we avoid the implementation of G_CKG by resorting to the ETH, assuming that our underlying system Hamiltonian exhibits quantum chaotic behavior. Under the ETH, we show that the σ_β-DB condition is satisfied on average, and upper-bound the convergence accuracy ∥ρ_∞ − σ_β∥_1 for our Lindbladian L, which is only approximately detailed balanced. Moreover, we show that the Lindblad dynamics exhibits a mixing time bounded by a polynomial in the number of qubits n, assuming the spectral norm of the Hamiltonian ∥H∥ is bounded by a polynomial in n [26, 46, 55]. Note that, very recently, the authors of [56] showed that the mixing time for non-commuting local lattice Hamiltonians is bounded by log(n) above a constant critical temperature, improving over bounds based on the spectral gap of the Lindbladian. Here, we tailor the spectral gap analysis of [46] to our Lindbladian (3).
Implementation of the evolution under the full Lind-
bladian (3) is simplified by employing the single-ancilla
protocol of [32] (similarly used in [30] for tasks of ground-
state preparations), combined with a randomized ap-
proach in which we select only a single Lindblad oper-
ator (8) in each time step of the dynamics. Such a ran-
domized method was also proposed in [34]. In compari-
son to their method, the chaoticity of the system Hamil-
tonian allows us to use simple Pauli-product operators as
jump operators. In turn, [34] proposed n-qubit unitary
2-designs to construct their jump operators, which is scal-
able, but, nevertheless, requires deeper circuits than our
local jump operators in practice. On the other hand, the
authors of [55] proposed unitary 1-design jump operators
and proved a bound on the mixing time similar to ours for
random sparse Hamiltonians. While their choice of jump
operators is similar to ours, they use the quantum Gibbs
sampling algorithm in [29] without the simplifications we
propose here.
The algorithmic design choices made in this work po-
sition our approach as a candidate for a near-term hard-
ware demonstration of Lindblad-simulation-based Gibbs
state preparation algorithms. In addition, and in con-
trast to most prior works, we provide a comprehensive
numerical analysis and full circuit simulations to assess
the convergence characteristics, algorithmic errors and
noise-robustness of our protocol. See [57] for a recent numerical investigation of a Lindblad-based ground-state preparation protocol.
III. LINDBLAD ENGINEERING WITH ETH
In this section we introduce our protocol in more detail
and discuss basic aspects of the ETH. The ETH will allow
us to derive analytical bounds on the accuracy and the
speed of convergence towards the target Gibbs state for
our Lindbladian (3), which is only approximately detailed
balanced.
A. Lindblad operators
We consider Lindblad operators {L_a}_{a∈A} given in the form of a filtered operator Fourier transform (OFT) [31],

L_a = ∫_{−∞}^{∞} dt g(t) A_a(t) = Σ_{ν∈B_H} η_ν A^a_ν,  (8)

with A_a(t) := e^{iHt} A_a e^{−iHt}, η_ν := ∫_{−∞}^{∞} dt e^{iνt} g(t) and A^a_ν := Σ_{E_i−E_j=ν} Π_i A_a Π_j, where Π_i is the projector onto the eigenstate of H corresponding to the energy E_i. We denote with B_H := {E_i − E_j | E_i, E_j ∈ spec[H]} the set of Bohr frequencies. The set of jump operators {A_a}_{a∈A} can be chosen arbitrarily as long as it satisfies {√γ_a A_a}_{a∈A} = {√γ_a A_a^†}_{a∈A}. In our case, we will always consider Hermitian jump operators, such that this condition is fulfilled by construction. Furthermore, to guarantee uniqueness of the steady state we require that only multiples of the identity commute with the jump operators and Hamiltonian [47, Theorem 3]. We choose a filter function

g(t) = (Δ_E^2 / (2π^3))^{1/4} e^{−Δ_E^2 t^2 + iβΔ_E^2 t/2},  (9)

which satisfies the normalization condition ∫_{−∞}^{∞} dt |g(t)|^2 = 1. This choice selects energy transitions in (8) that decrease the energy by roughly βΔ_E^2/2. For our analytical and numerical studies, we choose Δ_E = 2, as discussed in App. C 1. This choice ensures that the energy transitions induced by the jump operators are well within the energy window defined by the Fourier transform of g. Noting η_{−ν} = e^{βν/2} η_ν [see Eq. (A6)] and using {√γ_a A_a}_{a∈A} = {√γ_a A_a^†}_{a∈A}, one can readily show that Σ_a γ_a σ_β^{−1/2} L_a σ_β^{1/2} [·] σ_β^{1/2} L_a^† σ_β^{−1/2} = Σ_a γ_a L_a^† [·] L_a. It then follows that the transition term in (3) satisfies the σ_β-DB condition (5) as

Σ_{a∈A} γ_a ⟨X, L_a^† Y L_a⟩_{σ_β}
  = Σ_{a∈A} γ_a ⟨σ_β^{−1/2} L_a σ_β^{1/2} X σ_β^{1/2} L_a^† σ_β^{−1/2}, Y⟩_{σ_β}
  = Σ_{a∈A} γ_a ⟨L_a^† X L_a, Y⟩_{σ_β}.  (10)

It remains to assess under which conditions the decay term in Eq. (3) obeys the σ_β-DB condition.
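For reference, the OFT (8) can be evaluated exactly in the eigenbasis of H once the Bohr-frequency weights η_ν are known. The sketch below (ours) does this for a generic filter; the Gaussian η shown is the Fourier transform of Eq. (9) up to an overall normalization, which we omit since a constant rescaling of L_a can be absorbed into γ_a.

```python
import numpy as np

def filtered_jump_operator(H, A, eta):
    """Eq. (8): L = sum_nu eta(nu) A_nu, built exactly in the eigenbasis of H."""
    E, V = np.linalg.eigh(H)
    A_eig = V.conj().T @ A @ V          # matrix elements <E_i|A|E_j>
    nu = E[:, None] - E[None, :]        # Bohr frequencies E_i - E_j
    L_eig = eta(nu) * A_eig             # weight each transition by eta(nu)
    return V @ L_eig @ V.conj().T       # back to the computational basis

def make_eta(beta, dE):
    """Gaussian eta from Fourier-transforming Eq. (9) (up to normalization):
    peaked at nu = -beta*dE**2/2, i.e. energy-lowering transitions."""
    return lambda nu: np.exp(-(nu + beta * dE**2 / 2.0) ** 2 / (4.0 * dE**2))
```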
B. Convergence under the ETH
The ETH [44, 45] states that, for a given Hamiltonian H with eigenbasis {|E_i⟩}, the matrix elements of a local observable A are expressed as

⟨E_i|A|E_j⟩ = A(E_i) δ_ij + f(Ē_ij, ν_ij) / √(D(Ē_ij)) · R_ij,  (11)

with Ē_ij := (E_i + E_j)/2 and ν_ij := E_i − E_j. A(E) and f(E, ν) are smooth functions of E and ν. The density of states D(E) is defined by D(E) = Σ_{E_i∈spec[H]} δ̃(E − E_i), where δ̃(E − E_i) is a smeared delta function so that D(E) becomes a smooth function of E. R is a Hermitian matrix with entries R_ij whose real and imaginary parts are independent random variables and which satisfy E_R[R_ij] = 0 and E_R[|R_ij|^2] = 1, where E_R denotes the average over R. We assume that the diagonal vanishes, R_ii = 0, for all i. See App. C for details.

In the following, we assume that for given jump operators the Lindbladian L, Eq. (3), is well-approximated by a random realization of L with jump operators modeled according to Eq. (11). The strategy is to show that the ETH-averaged Lindbladian E_R L respects the σ_β-DB condition and that the distance between the average E_R L and a single realization of L is bounded by a sufficiently small quantity with high probability. Via this bound we obtain upper bounds on the mixing time and the convergence accuracy for our Lindbladian L.
a. Detailed balance  The decay term in Eq. (3) obeys the σ_β-DB condition under the ETH average. To see this, we convert the decay term to,

Σ_{a∈A} γ_a ⟨X, {L_a^† L_a, Y}⟩_{σ_β}
  = Σ_{a∈A} γ_a ⟨σ_β^{−1/2} L_a^† L_a σ_β^{1/2} X + X σ_β^{1/2} L_a^† L_a σ_β^{−1/2}, Y⟩_{σ_β}  (12)

for all bounded operators X, Y. According to the ETH, the matrix elements of the jump operators A_a take the form (11), which leads to,

E_R[σ_β^{−1/2} L_a^† L_a σ_β^{1/2}]
  = Σ_{ν,ν′} η_ν^* η_{ν′} e^{−β(ν−ν′)/2} E_R[(A^a_ν)^† A^a_{ν′}]
  =^{ETH} E_R[L_a^† L_a],  (13)

where the last equality follows from E_R[(A^a_ν)^† A^a_{ν′}] ∝ δ_{ν,ν′}. Combining Eqs. (12) and (13), we find

E_R[ Σ_{a∈A} γ_a ⟨X, {L_a^† L_a, Y}⟩_{σ_β} ] = E_R[ Σ_{a∈A} γ_a ⟨{L_a^† L_a, X}, Y⟩_{σ_β} ].  (14)

From Eqs. (10) and (14) we conclude that the Gibbs state σ_β is the steady state of the averaged dissipative Lindbladian, i.e. neglecting the coherent term −i[G, ρ] in Eq. (3)—which in our case is generated by the system Hamiltonian, G = H. Note that H commutes with σ_β, which implies that the coherent term −i[H, ρ] does not affect the steady state. Thus, we find that the Gibbs state σ_β is the steady state of the averaged Lindbladian E_R L,

E_R L[σ_β] = 0.  (15)
b. Mixing time  For a Lindbladian L with steady state ρ_∞ and spectral gap Δ_L, the mixing time is bounded via (see e.g. [58, Eq. (104)])

t_mix(ϵ) ≤ (1/Δ_L) log( 2∥ρ_∞^{−1/2}∥ / ϵ ).  (16)

The spectral norm ∥ρ_∞^{−1/2}∥ typically scales exponentially in the number of qubits n. Thus, we obtain a polynomial dependence of the mixing time on the number of qubits n if the spectral gap of L is lower-bounded by 1/poly(n).

In App. C 3, we show that E_R L effectively reduces to a classical Markov chain on the spectrum of H [46]. This allows us to employ the well-developed framework of Markov chain conductance to derive an inverse-polynomial lower bound Δ_{E_R L} ≥ Ω(β^{−3}/n). According to Eq. (16), this implies a bound on the mixing time of E_R L polynomial in n and β. To obtain a similar bound for L (3), we show in App. C 4 that the distance between the Lindbladians L and E_R L is bounded by O(β/√|A|). As proven in App. C 5, see Eq. (C80), it follows that

t_mix(ϵ) ≤ O( nβ^3 (β∥H∥ + log(1/ϵ)) ),  (17)

with high probability, provided a sufficiently large number of jump operators, |A| = Ω(n^2 β^6). Typically considered Hamiltonians have polynomially bounded norm ∥H∥, for example ∥H∥ = O(n^k) for k-local Hamiltonians, or ∥H∥ = O(n) for geometrically local Hamiltonians.
c. Convergence accuracy of the steady state  We again invoke the result on the distance between L and E_R L. As proven in App. C 5, see Eq. (C90), the trace distance between the Gibbs state σ_β and the steady state of L, ρ_∞, is bounded by

∥ρ_∞ − σ_β∥_1 ≤ O( ϵ + (nβ^2/√|A|) (β∥H∥ + log(1/ϵ)) ).  (18)

Thus, a sufficiently large number of jump operators |A| = Ω̃(n^2 β^6 ∥H∥^2 / ϵ^2) leads to a steady state that is ϵ-close to the Gibbs state, i.e., ∥ρ_∞ − σ_β∥_1 ≲ ϵ. In the next section we will see that |A| does not enter the complexity scaling of our quantum protocol for the Lindblad simulation. Therefore, in principle, we can choose |A| as large as allowed by the considered jump operator model.
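The quantities entering Eqs. (16)–(18) can be checked directly for small n. The sketch below (ours; it assumes a Lindbladian superoperator matrix in the column-stacking convention, e.g. as built in the Sec. II sketch) extracts the steady state ρ_∞, the spectral gap, and the trace distance to σ_β.

```python
import numpy as np
from scipy.linalg import eig

def gibbs_state(H, beta):
    w, V = np.linalg.eigh(H)
    p = np.exp(-beta * (w - w.min()))
    p /= p.sum()
    return (V * p) @ V.conj().T

def steady_state_and_gap(L_super, dim):
    """Steady state = right eigenvector with eigenvalue ~ 0;
    gap = minus the largest real part among the remaining eigenvalues."""
    vals, vecs = eig(L_super)
    order = np.argsort(-vals.real)
    rho = vecs[:, order[0]].reshape(dim, dim, order="F")   # unvectorize (column stacking)
    rho = rho / np.trace(rho)
    rho = 0.5 * (rho + rho.conj().T)                        # enforce Hermiticity
    gap = -vals.real[order[1]]
    return rho, gap

def trace_distance(a, b):
    return 0.5 * np.abs(np.linalg.eigvalsh(a - b)).sum()
```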
IV. RANDOMIZED SINGLE-ANCILLA
PROTOCOL
So far we have studied the convergence properties of
the Lindblad dynamics. In this section we describe how
this evolution can be implemented as a quantum circuit.
In Sec. VII we perform full circuit simulations and inves-
tigate the algorithmic errors of our protocol. We adopt a
single-ancilla protocol as in [30, 32] and combine it with a randomized scheme [34, 59] to simulate the dynamics under the Lindbladian (3) with G = H and γ_a = γ p_a. The parameter γ ≥ 0 controls the strength of the dissipation and ensures that the probabilities p_a satisfy 0 ≤ p_a ≤ 1 together with Σ_a p_a = 1. Instead of applying the full Lindbladian (3), at each evolution time step we apply a single Lindblad operator L_a sampled with probability p_a. We further factorize the coherent and the dissipative parts of the time evolution. Hence, starting from an initial state ρ(0), the state prepared after M evolution steps of length δt each, is given by

( ∏_{i=1}^{M} e^{δt γ D_{a_i}} ∘ U_{δt} )[ρ(0)],  (19)

where we have defined

D_a[ρ] := L_a ρ L_a^† − ½{L_a^† L_a, ρ},  (20)
U_{δt}[ρ] := e^{−iδtH} ρ e^{iδtH},  (21)

and {a_1, …, a_M} is the set of labels of the randomly sampled Lindblad operators. The total evolution time is t = Mδt. By taking the average over the random sampling of Lindblad operators we find,

Σ_a p_a e^{δt γ D_a} ∘ U_{δt}[ρ] = ρ + δt L[ρ] + O(δt^2) = e^{δt L}[ρ] + O(δt^2).  (22)
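At the density-matrix level (i.e. before Trotterization and dilation), one trajectory of Eq. (19) can be simulated as below. This is our own illustrative sketch: the dissipative step uses the exact channel exponential rather than the circuit of Sec. IV, and all names are ours.

```python
import numpy as np
from scipy.linalg import expm

def dissipator_super(L, dim):
    """Superoperator of D_a, Eq. (20), in the column-stacking convention."""
    I = np.eye(dim)
    LdL = L.conj().T @ L
    return np.kron(L.conj(), L) - 0.5 * (np.kron(I, LdL) + np.kron(LdL.T, I))

def randomized_trajectory(H, lindblad_ops, probs, rho0, dt, gamma, n_steps, rng):
    """One trajectory of Eq. (19): a coherent step followed by a dissipative step
    generated by a single, randomly drawn Lindblad operator."""
    dim = H.shape[0]
    U = expm(-1j * dt * H)
    channels = [expm(dt * gamma * dissipator_super(L, dim)) for L in lindblad_ops]
    rho = rho0.copy()
    for _ in range(n_steps):
        rho = U @ rho @ U.conj().T                     # unitary part, Eq. (21)
        a = rng.choice(len(lindblad_ops), p=probs)     # sample one jump operator
        rho = (channels[a] @ rho.flatten(order="F")).reshape(dim, dim, order="F")
    return rho
```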
[Figure 1: circuit diagram — an ancilla qubit |0⟩_anc coupled via V_{a_i}(δt) to the system register, which evolves from ρ(0) under e^{−iHδt}; the block is repeated for i = 1, …, M, producing ρ(Mδt).]
FIG. 1. The quantum circuit for simulating Lindblad evolution (19). The ancilla qubit on the top is traced out by discarding the measurement results.
It has been further shown that the fluctuations of the individual trajectories (19) around the average are suppressed for sufficiently many steps M [34], such that, for large M, a single trajectory describes the average evolution (22) sufficiently well.

To implement each dissipative Lindblad evolution e^{δt γ D_a}[ρ], we note that, for the dilation [24]

K_a := |1⟩⟨0|_anc ⊗ L_a + |0⟩⟨1|_anc ⊗ L_a^†,  (23)

the following identity holds,

Tr_anc[ e^{−i√(δtγ) K_a} (|0⟩⟨0|_anc ⊗ ρ) e^{i√(δtγ) K_a} ] = e^{δt γ D_a}[ρ] + O((δtγ)^2).  (24)

Implementation of the left-hand side of the identity only requires introducing a single ancilla qubit and Hamiltonian simulation.
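The dilation identity (24) is easy to verify numerically for a single step. The sketch below (ours) builds K_a from Eq. (23), evolves |0⟩⟨0|_anc ⊗ ρ for a time √(δtγ) as reconstructed above, and traces out the ancilla; the result should agree with e^{δtγD_a}[ρ] up to the stated error.

```python
import numpy as np
from scipy.linalg import expm

def dilated_step(L, rho, dt, gamma):
    """Single-ancilla dilation of Eqs. (23)-(24)."""
    d = L.shape[0]
    ket0 = np.array([[1.0], [0.0]])
    ket1 = np.array([[0.0], [1.0]])
    K = np.kron(ket1 @ ket0.T, L) + np.kron(ket0 @ ket1.T, L.conj().T)   # Eq. (23)
    U = expm(-1j * np.sqrt(gamma * dt) * K)
    big = U @ np.kron(ket0 @ ket0.T, rho) @ U.conj().T
    # partial trace over the ancilla (first tensor factor)
    big = big.reshape(2, d, 2, d)
    return np.einsum("adae->de", big)
```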
To retain the algorithmic error of O(δt^2) in Eq. (22), we apply the second-order product formula to implement the evolution under the unitary e^{−i√(δtγ) K_a} [30]. To do so, we first discretize the OFT (8) over a restricted domain [−T, T]. Taking discretized time steps Δt := T/S, we get

L̄_a := Σ_{s=−S}^{S} Δt_s g(sΔt) A_a(sΔt),  (25)

with Δt_s := Δt for −S+1 ≤ s ≤ S−1 and Δt_s := Δt/2 for s = ±S. We call Δt the OFT discretization step. Accordingly, the dilation K_a (23) is discretized as K̄_a := |1⟩⟨0|_anc ⊗ L̄_a + |0⟩⟨1|_anc ⊗ L̄_a^†.

Now, one can implement e^{−i√(δtγ) K̄_a} (24) by applying the second-order product formula, which we denote by V_a(δt), such that

V_a(δt) = e^{−i√(δtγ) K̄_a} + O(δt^2 γ^2).  (26)

The same ancilla qubit can be reused after resetting it to |0⟩. The resulting quantum circuit implementing the Lindblad dynamics is sketched in Fig. 1.
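The discretized OFT (25) amounts to a trapezoidal sum of Heisenberg-evolved jump operators. The sketch below (ours) evaluates it with exact matrix exponentials; in the actual protocol each e^{iHsΔt} would instead be approximated by a second-order product formula.

```python
import numpy as np
from scipy.linalg import expm

def discretized_oft(H, A, g, T, S):
    """Eq. (25): L_bar = sum_s w_s * g(s*dt) * e^{iHs*dt} A e^{-iHs*dt},
    with dt = T/S and trapezoidal weights w_s (dt/2 at the endpoints)."""
    dt = T / S
    L_bar = np.zeros_like(A, dtype=complex)
    for s in range(-S, S + 1):
        w = dt / 2 if abs(s) == S else dt
        U = expm(1j * H * s * dt)
        L_bar += w * g(s * dt) * (U @ A @ U.conj().T)
    return L_bar
```

Here g is the filter function of Eq. (9) supplied as a Python callable.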
As apparent from the previous discussion, several ap-
proximations are required for the circuit implementation.
In App. B, we provide a comprehensive study of the resulting errors. When accounting for all these, the preparation of the steady state of L incurs the following algorithmic error:

t_mix × O( δt + (Δt/β) e^{−2(Δ_E T)^2} + √(β|B_H|) e^{−(1/8)(2πβ/(Δt·2β∥H∥) − 1)^2} + T Δt^2/δt ).  (27)

The contribution of the individual error sources is discussed further in App. B 4, and analyzed in detail in Sec. VII, by means of simulations of the corresponding quantum circuits. For the algorithmic error (27) to be less than ϵ, the algorithm uses Hamiltonian simulation (i.e. evolution under the system Hamiltonian H) for a time scaling as

Θ( t_mix + (β t_mix^2/ϵ) √(log(β t_mix/ϵ)) ).  (28)
Combined with Eq. (17) and also accounting for the
convergence accuracy of Eq. (18), this shows that un-
der the ETH, preparation of the Gibbs state, with ϵ-
error, can be implemented efficiently as a circuit with
a single additional ancilla. We note that the currently
known best Lindblad simulation algorithm achieves the
runtime [24, 29]

Õ(β t_mix polylog(1/ϵ)),  (29)

where Õ hides the polylogarithmic dependence on β and t_mix. However, this requires an additional overhead circuit for a complex block encoding of the Lindbladian. The protocol presented in this work is based on Trotterization and dilation (weak measurement), which saves the overhead cost by sacrificing the near-optimal scaling (29).
V. NOISELESS NUMERICAL STUDY
This section studies the dynamical properties of our
randomized Lindbladian protocol. To this end, we nu-
merically estimate mixing time, spectral gap and con-
vergence accuracy for the Lindbladian (3), taking into
account the random application of a single jump oper-
ator at each time step, as in the quantum algorithm in
Sec. IV. Here we ignore the Trotterization (19), OFT dis-
cretization (25), as well as the influence of noise as they
will be the focus of Secs. VI–VII.
We consider the 1D mixed-field Ising model with open boundary conditions

H = J Σ_{i=0}^{n−2} Z_i Z_{i+1} − h Σ_{i=0}^{n−1} X_i − m Σ_{i=0}^{n−1} Z_i.  (30)

The parameters h and m control the strength of the transverse and longitudinal fields, respectively. We set the inverse temperature to β = (2J)^{−1}.
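For concreteness, a minimal NumPy construction of the Hamiltonian (30) and the target Gibbs state is sketched below (ours). The sign convention follows our reconstruction of Eq. (30), and the (h, m) values in the example are placeholders—Tab. I of the paper is authoritative for the parameter points.

```python
import numpy as np
from functools import reduce

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def op_on(site_ops, n):
    """Tensor product with identities on all sites not listed in site_ops."""
    return reduce(np.kron, [site_ops.get(i, I2) for i in range(n)])

def mixed_field_ising(n, J, h, m):
    """Eq. (30) with open boundary conditions (sign convention as reconstructed)."""
    H = sum(J * op_on({i: Z, i + 1: Z}, n) for i in range(n - 1))
    H -= sum(h * op_on({i: X}, n) for i in range(n))
    H -= sum(m * op_on({i: Z}, n) for i in range(n))
    return H

def gibbs(H, beta):
    w, V = np.linalg.eigh(H)
    p = np.exp(-beta * (w - w.min()))
    p /= p.sum()
    return (V * p) @ V.conj().T

# Example: n = 5 qubits at beta = 1/(2J); the (h, m) values are illustrative only.
H = mixed_field_ising(5, J=1.0, h=1.0, m=0.5)
sigma_beta = gibbs(H, beta=0.5)
```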
[Figure 2: two panels showing the mean E[D_1] (left) and variance Var[D_1] (right) of the fractal dimension over the (h/J, m/J) parameter plane, with the points CH, REG and TFIM marked as white crosses.]
FIG. 2. Identification of chaotic (point CH) and non-chaotic regimes (TFIM, REG) of the mixed-field Ising model (30) as a function of transverse and longitudinal fields via eigenstate delocalization. ETH is expected to hold in the chaotic regime. Mean E[D_1] (left) and variance Var[D_1] (right) of the fractal dimension D_1 are evaluated in the n-qubit Z-basis for n = 8. Large mean E[D_1] combined with small Var[D_1] signal quantum chaos. Other parameter regimes and eigenstate delocalization in other bases are discussed in App. D 1 (cf. Tab. I).
We analyze various settings defined via: (i) the number of jump operators, (ii) the locality of the jump operators, and (iii) the degree to which the system satisfies the ETH, parametrized by h and m (keeping J fixed). We observe a polynomial scaling of the mixing time with n when the ETH holds, confirming our analytical bound (17). Moreover, we observe a polynomially decreasing distance between the steady state of the Lindblad dynamics and the target Gibbs state with increasing number of jump operators |A|, as indicated by the bound Eq. (18). In settings where the ETH is not expected to hold, we observe vastly different convergence properties.
A. Quantum chaos and ETH in the mixed-field
Ising model
Eigenstate thermalization is expected to hold with high
accuracy in quantum chaotic systems [44]. Typical signa-
tures of quantum chaos comprise, e.g., spectral statistics
[60,61] or the delocalization behavior of energy eigen-
states [62–68]. We identify sets of parameters (h, m) of
the mixed-field Ising model (30) with distinct quantum
chaotic properties by following the approach advocated
in [67,68] based on delocalization of energy eigenstates.
This approach directly assesses crucial properties under-
lying the ETH, while simultaneously being independent
of a specific choice of observables (jump operators).
To identify quantum chaotic parameter regimes we consider an expansion of the eigenstates |E_i⟩ of (30) in an arbitrarily chosen basis. The fractal dimension D_1^{(i)} of an energy eigenstate quantifies its delocalization in the chosen basis (for details, see App. D 1). Quantum chaos is signaled by a large average fractal dimension E[D_1], which indicates strong eigenstate delocalization. The average is taken over the inner part of the energy eigenstates as specified below, all expanded in the same basis, and the superscript (i) is omitted. A simultaneously small variance Var[D_1] over the bulk of the spectrum shows that eigenstates delocalize uniformly. Figure 2 shows E[D_1] (left) and Var[D_1] (right) in the parameter range m/J ∈ {0} ∪ [10^{−1}, 10^{1}], h/J ∈ [10^{−1}, 10^{1}], evaluated over the central 80% of the spectrum to exclude edge cases. The parameter region 0.7 ≲ h/J ≲ 2, 0.2 ≲ m/J ≲ 0.9 shows the clearest signature of uniform eigenstate delocalization in Fig. 2 (see also Fig. 12 in App. D 1). Within this chaotic lake, the ETH ansatz should be a valid approximation. In turn, the limits m → ∞ or h → ∞ are dominated by either the transverse or longitudinal field, whereas in the limit h, m → 0 the interaction term dominates the dynamics. In all three cases, the eigenstates become more localized and we would not expect the ETH to hold. We call these limits the regular limits of our model. Another important case is the limit m → 0, corresponding to the transverse field Ising model, which is an integrable (i.e., non-chaotic) model. Based on the phase diagrams in Fig. 2, we select three relevant parameter points, shown as white crosses, for our numerical studies in this section—TFIM (the transverse field Ising model), REG (dominated by the longitudinal field), and the point CH deep within the chaotic parameter regime. Additional parameter points, exploring the remaining regular limits and the chaos transition, are considered in App. D. In Tab. I we summarize all parameter points and give their coordinates.
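The precise definition of D_1 is deferred to App. D 1; a common choice, assumed in the sketch below (ours), is the first fractal dimension obtained from the Shannon entropy of the eigenstate amplitudes in the chosen basis, D_1 = −Σ_b p_b ln p_b / ln(dim).

```python
import numpy as np

def fractal_dimension_d1(eigvec, eps=1e-16):
    """D1 of one eigenstate in the basis in which `eigvec` is expressed
    (Shannon-entropy definition, assumed; App. D 1 is authoritative)."""
    p = np.abs(eigvec) ** 2
    p = p / p.sum()
    S1 = -np.sum(p * np.log(p + eps))
    return S1 / np.log(p.size)

def chaos_diagnostics(H, bulk_fraction=0.8):
    """Mean and variance of D1 over the central bulk of the spectrum,
    evaluated in the computational (Z) basis, as in Fig. 2."""
    w, V = np.linalg.eigh(H)
    d = len(w)
    lo = int(d * (1 - bulk_fraction) / 2)
    D1 = np.array([fractal_dimension_d1(V[:, i]) for i in range(lo, d - lo)])
    return D1.mean(), D1.var()
```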
B. Mixing time and convergence accuracy
In this subsection we study numerically the mixing time and the convergence accuracy ∥ρ_∞ − σ_β∥_1 of the Lindblad dynamics (2). The Lindblad operators (8) are constructed from a set of random k-local Pauli jump operators A_a with a ∈ A. For our studies, we vary both k and the number |A| of considered jump operators. Similar to the quantum protocol in Sec. IV, at each time step we sample a single jump operator A_a, a ∈ A, according to the probability distribution p_a = 1/|A|, and evolve the current state for one time step with the Lindbladian generated from this single jump operator. We use the fourth-order Runge–Kutta method (instead of the Trotterization (19) used in the quantum algorithm) and evaluate the integral in Eq. (8) exactly (instead of the discretization (25)). For each parameter point, we simulate the Lindblad evolution up to a maximal number of time steps N_steps^max = 3·10^5. The step size is determined by an adaptive scheme for each setting and system size individually. Details on the numerical scheme are provided in App. D 3, with step sizes reported in Tab. I.
Figure 3 exemplifies the evolution of the trace distance ∥ρ(t) − σ_β∥_1 for the two parameter points REG and CH (cf. Tab. I) and increasing number of qubits n = 3, …, 8. As initial state we fix the maximally mixed state, ρ(0) = I/2^n with I the 2^n × 2^n identity, and we fix for each n a set of random (k = 2)-local jump operators of size |A| = 20. A single trajectory of the dynamics for a given n is generated by sampling a random jump operator from this set in each time step. The trace distances for several of these trajectories are plotted as thin lines (varying n is indicated by color). The time-evolved state is computed as an average of the trajectory states at each time point (see App. D 3 for details). The trace distances of this state to the Gibbs state are shown as bold lines. In all cases, the averaged time-evolved state is significantly closer to the Gibbs state than the individual trajectories. Because of the randomization, the Lindblad dynamics does not exactly converge to ρ_∞ but plateaus at a value between 10^{−4} and 10^{−2}. Moreover, the exact steady state of Eq. (2) only approximates the Gibbs state σ_β. However, as we will see in Fig. 5, the convergence accuracy ∥ρ_∞ − σ_β∥_1 is well below the observed plateau in Fig. 3 in all cases. Hence, with our limited number of trajectories in Fig. 3, we cannot dynamically resolve the difference between the two states. Further note that for the point REG and n = 8, the time evolution already stops at Jt ≈ 10^4. This is because this setting requires a much smaller step size for the simulation, cf. Tab. I, such that we only reach a time of Jt ≈ 10^4 within the maximal number of steps N_steps^max.
a. Mixing time estimate and spectral gap  We set ϵ = 10^{−2} and estimate the mixing time (7) by

t̂_mix := inf{ t > 0 : ∥ρ(t) − σ_β∥_1 < 10^{−2} },  (31)

starting from the maximally mixed state, ρ(0) = I/2^n, instead of maximizing over all initial states. In a slight abuse of notation, ρ(t) denotes the evolved state under the numerical scheme (not the state under the exact dynamics (2)). The mixing time estimate t̂_mix is highlighted in Fig. 3 by vertical dashed lines. We generally observe that the early-time dynamics (t ≲ t̂_mix) coincides for all trajectories (thin lines). Hence, a very small number of trajectories (even only a single one) suffices to accurately compute the estimate t̂_mix. As expected, t̂_mix increases with n. The quantum chaotic parameter point CH exhibits a much smaller mixing time than the regular limit REG. Interestingly, the trace distance ∥ρ(t) − σ_β∥_1 converges to a smaller value for REG than for CH, which we will discuss in more detail in Fig. 5.

From the bound in Eq. (17) we expect the mixing time to scale as O(poly(n)). Here we study numerically the scaling of t̂_mix and the spectral gap Δ_L with the number of qubits n for different numbers of (k = 2)-local jump operators, |A| = 5, 20, 50, at the parameter points TFIM, CH and REG (cf. Fig. 2). As before, the initial state is maximally mixed. Other parameter points, other values of k, as well as simulations with initial state |0⟩^⊗n are discussed in App. D 2.

Figure 4 shows the mixing time t̂_mix (top row) and the spectral gap Δ_L (bottom row) as a function of n for different |A|. We compute Δ_L from the full Lindbladian (3),
[Figure 3: ∥ρ(t) − σ_β∥_1 versus evolution time Jt for n = 3, …, 8 qubits; left panel REG, right panel CH.]
FIG. 3. Trace distance between the target Gibbs state σ_β and the state ρ(t) evolved under Lindblad dynamics of the mixed-field Ising model (30) with single random jump operators at each time step for n = 3, …, 8 qubits. We consider a set of size |A| = 20 of random 2-local Pauli jump operators and the Hamiltonian parameter points REG (left, dominated by the longitudinal field) and CH (right, quantum chaotic regime), cf. Fig. 2. Thin lines are individual trajectories of the randomized Lindblad simulation protocol (details in App. D 3), thick lines are trace distances between the Gibbs state and the time-evolved state averaged over trajectories. Vertical lines indicate the mixing time estimate (31). The results confirm much faster mixing in the quantum chaotic regime (CH) than in the non-chaotic regime (REG). On the other hand, the system in parameter regime REG converges to a state closer to the Gibbs state, as explained in App. D 2.
containing all Lindblad operators L_a, a ∈ A (details in App. D 3 b). For each |A| and each parameter point the legend shows the leading exponent κ of the polynomial dependence of t̂_mix, Δ_L on n, obtained by fitting the data to n^κ. In almost all cases, this assumption is justified based on our numerical findings. We note that the number of jump operators |A| does not have a significant effect on t̂_mix and Δ_L.

We observe small mixing times for the chaotic point CH with a polynomial scaling exponent of approximately κ ≈ 1.4. Other chaotic parameter points show a similarly fast convergence (see App. D 2). In contrast, the regular limit REG shows over an order of magnitude larger mixing times with slightly smaller scaling exponent κ ≈ 1.3. Note, however, that in this case and for n = 8 or |A| = 5, the time-evolved state does not converge to a distance ∥ρ(t) − σ_β∥_1 below ϵ = 10^{−2} within the maximum time horizon considered. Even worse mixing time estimates are obtained in the limit of large transverse field, h ≫ m, J, as discussed in App. D 3. Interestingly, the integrable point TFIM, corresponding to the transverse-field Ising model, shows the fastest convergence with exponent κ ≈ 1.2.

The situation is mirrored for the spectral gap Δ_L. As expected, we obtain large gaps with a decay exponent between −2 ≲ κ ≲ −1 for the chaotic parameter point CH, and comparably smaller gaps for REG. Interestingly, in this case and for |A| = 5, the spectral gap drops roughly two orders of magnitude in comparison to |A| = 20, 50. This is in line with the above observed extremely long convergence time in this case. The integrable limit TFIM exhibits the largest spectral gap, accompanied by the smallest (in modulus) decay exponents between −1 ≲ κ ≲ −0.5.

In Fig. 13 in App. D 2, we further explore the influence of initial state and locality k of the jump operators. We generally observe an increase of mixing time for larger k. Moreover, the zero state |0⟩^⊗n typically converges faster to the steady state than the maximally mixed state.
b. Distance between steady state and Gibbs state  According to the bound (18), the trace distance between the steady state and Gibbs state, ∥ρ_∞ − σ_β∥_1 ≤ O(poly(n)/√|A|), depends inversely on the number of jump operators |A| when |A| is not sufficiently large. Here we study numerically the scaling of ∥ρ_∞ − σ_β∥_1 with |A| ∈ [5, 200] for n = 5, 6, 7 at the parameter points TFIM, CH and REG. The results are shown in Fig. 5. As expected, the distance between both states decreases in all cases for increasing |A|. For the points TFIM and CH we observe a clear linear decrease on the double-logarithmic scale. The leading order exponent κ is obtained from a fit of the data to |A|^κ (solid lines) and given in the legend. The data shows a smaller (in modulus) exponent, around κ ≈ −0.2, compared to the scaling κ = −1/2 suggested by the analytical upper bound (18).

The data for the parameter point REG shows strong variation. However, we still discern a clear decrease with |A|. In this case, the leading order exponent is approximately κ ≈ −1/2. Overall, we observe a much smaller trace distance between steady state and Gibbs state in the regular limit REG (m/J ≫ 1, h/J ≪ 1), compared to the other two points. This is in line with Fig. 3, where we also observed a convergence to a lower value of ∥ρ(t) − σ_β∥_1 for REG. The same effect is present for the complementary regular limit h/J ≫ 1, m/J ≪ 1 (cf. Fig. 14, App. D 2). The likely cause is that, effectively, only one energy transition contributes to the Lindblad operator, which leads to an effective Lindbladian with the
[Figure 4: mixing time J t̂_mix (top row) and spectral gap Δ_L/J (bottom row) versus number of qubits n for |A| = 5, 20, 50, with fits J t̂_mix = C n^κ and Δ_L/J = C n^κ. Legend exponents (for |A| = 5, 20, 50 where shown): t̂_mix — TFIM κ = 1.09, 1.26, 1.23; CH κ = 1.54, 1.41, 1.38; REG κ = 1.26, 1.32; Δ_L — TFIM κ = −0.95, −0.67, −1.04; CH κ = −1.35, −1.27, −1.34; REG κ = −1.09, −0.88, −1.35.]
FIG. 4. Scaling of mixing time (top) and spectral gap of Lindbladian Δ_L (bottom) with n. We consider sets of random 2-local Pauli jump operators of size |A| = 5, 20, 50 and the mixed-field Ising model (30) at parameter points TFIM (left, transverse field Ising model), CH (middle, quantum chaotic regime) and REG (right, dominated by the longitudinal field), cf. Fig. 2. The mixing time and spectral gap data fit well a polynomial n^κ (lines). The transverse-field (TFIM) and chaotic (CH) regimes show much faster mixing and correspondingly larger spectral gap than the longitudinal-field-dominated regime (REG). The number of jump operators |A| has a negligible effect except for REG, |A| = 5. In this case the dynamics does not converge within the maximal simulation time and the spectral gap is significantly smaller than in other cases.
Gibbs state as its steady state with a high accuracy. For
details, see App. D 2.
Interestingly, we do not observe an increase of ∥ρ_∞ − σ_β∥_1 for an increasing number of qubits n, suggesting that this empirical distance has a milder dependence on n than our analytical upper bound (18). This is a promising observation for the practical implementability of our approach since it potentially allows us to choose a number of jump operators |A| independently of n and still retain ∥ρ_∞ − σ_β∥_1 = O(ϵ), as discussed below Eq. (18).
VI. RESILIENCE AGAINST HARDWARE
NOISE
In addition to errors incurred by the approximations
necessary for circuit implementation, any Gibbs state
preparation will inevitably encounter imperfections due
to hardware noise. As much as for the algorithmic errors
in Sec. IV, it is crucial to quantify and understand these
errors, which is the purpose of this section.
To facilitate the exposition we focus only on the effect
of noise, rather than other algorithmic errors, and con-
sider as our main example a global depolarization noise
model. As we shall see, considering such type of noise al-
lows us to obtain tighter bounds, in trace distance, than
would be possible through the treatment of generic noise
channels. In fact, this holds also for any stochastic noise
(i.e. a mixture of different unitary channels including the
identity) as detailed in App. E 6. Furthermore, to make
this study concrete, we will evaluate deviations incurred
by noise resorting to protocol characteristics extracted
from the numerical studies presented in Sec. V, that are
extrapolated to larger system sizes.
A. Setup
A noisy realization of the protocol consists of noiseless Lindblad evolution interleaved with global depolarization channels. In the noiseless case, the dynamics corresponding to M steps of evolution, each of duration δt, is given by Γ_M := (e^{δtL})^M. In contrast, the noisy dynamics is obtained as Γ̃_M := (Λ_λ ∘ e^{δtL})^M, where the depolarization channel is defined as

Λ_λ[X] := (1 − λ) X + λ Tr[X] I/2^n,  (32)

with an error probability λ ∈ [0, 1]. Assuming a similar noise model at the gate level, this probability can be related to the number N_g of noisy operations, typically 2-qubit gates, required for the circuit implementation of e^{δtL} and the error λ_g per gate: 1 − λ = (1 − λ_g)^{N_g}.
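The noise model of Eq. (32) and the gate-level relation 1 − λ = (1 − λ_g)^{N_g} translate directly into code; the sketch below (ours) is a minimal version used purely for illustration.

```python
import numpy as np

def global_depolarizing(rho, lam):
    """Eq. (32): Lambda_lambda[X] = (1 - lambda) X + lambda Tr[X] I / 2^n."""
    d = rho.shape[0]
    return (1.0 - lam) * rho + lam * np.trace(rho) * np.eye(d) / d

def step_error_probability(lambda_gate, n_gates):
    """Error probability per evolution step via 1 - lambda = (1 - lambda_g)^{N_g}."""
    return 1.0 - (1.0 - lambda_gate) ** n_gates

# e.g. lambda_g = 1e-4 with N_g = 50 n two-qubit gates per step and n = 20 qubits
lam = step_error_probability(1e-4, 50 * 20)   # ~0.095
```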
[Figure 5: ∥ρ_∞ − σ_β∥_1 versus number of jump operators |A| for n = 5, 6, 7, with fits ∥ρ_∞ − σ_β∥_1 = C|A|^κ; fitted exponents per panel: TFIM κ = −0.22, −0.20, −0.17; CH κ = −0.24, −0.23, −0.20; REG κ = −0.55, −0.44, −0.51.]
FIG. 5. Scaling of trace distance between the exact steady state of the Lindblad dynamics ρ_∞ and target Gibbs state σ_β with the number of jump operators |A|. We consider random 2-local Pauli jump operators, n = 5, 6, 7 qubits, and the mixed-field Ising model (30) at parameter points TFIM (left, transverse field Ising model), CH (middle, quantum chaotic regime) and REG (right, dominated by the longitudinal field), cf. Fig. 2. Polynomial fits |A|^κ (lines) confirm a decrease with exponent κ ≈ −0.2 for TFIM and CH, slightly smaller in modulus than expected from the bound (18) (κ = −1/2). The integrable point REG shows a faster decrease (κ ≈ −1/2) and a significantly smaller distance between the steady state and Gibbs state. Appendix D 2 explains this observation and presents further results.
Let us define ρ_M := Γ_M[ρ(0)] and ρ̃_M := Γ̃_M[ρ(0)], the states obtained after M steps of noiseless and noisy evolution respectively, both starting from ρ(0) = I/2^n. Under the noiseless dynamics, ρ_M converges to the target Gibbs state σ_β, up to a convergence accuracy (18) which can be neglected for the purpose of this section. In trace distance this convergence is captured by

∥ρ_M − σ_β∥_1 = B e^{−αM},  (33)

which closely fits the numerical results obtained in the chaotic regime as seen in Fig. 6 (left panel). The data reported is the same as for the chaotic setting CH in Fig. 3, and for each system size n = 3, …, 8, both the convergence rate α > 0 and the initial distance B = ∥ρ(0) − σ_β∥_1 ∈ [0, 2] are extracted. From these, a dependency on n is fitted. Results are displayed for α in the inset of Fig. 6 (left panel), showing that a geometric fit closely matches the data, and for B in Fig. 15 (right panel) of App. E.
B. Bounds on the convergence accuracy
As detailed in App. E 2, through Eq. (33) and the use of triangle inequalities, we can upper bound the distance ∥ρ̃_M − σ_β∥_1 between the noisy prepared state, at any step M, and the Gibbs state by

B̃_M := B ( u_0^M (1 − λ/(1 − u_0)) + λ/(1 − u_0) ),  with  u_0 := (1 − λ) e^{−α} ∈ [0, 1).  (34)

Given that 0 ≤ u_0 < 1, this bound decreases monotonically with M towards

B̃_∞ := B λ/(1 − u_0),  (35)

which bounds the distance ∥ρ̃_∞ − σ_β∥_1 between the steady state ρ̃_∞ of the noisy evolution and the target Gibbs state. This bound depends both on the probability λ and the convergence rate α. As expected, the smaller the error per step of evolution, the closer ρ̃_∞ is to the Gibbs state σ_β. Similarly, the bound decreases as the convergence rate increases, and simply becomes Bλ for α → ∞. Notably, except for the extreme case λ = 1, this shows that the steady state of the noisy Lindblad dynamics always differs from the steady state of the noise channel Λ_λ, which is the maximally mixed state, in stark contrast to unitary evolution.

Finally, we highlight that, as detailed in App. E 6, Eqs. (34) and (35) hold for generic stochastic noise, including local depolarization and arbitrary Pauli noise channels that are often the dominant noise contributions.
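Eqs. (34) and (35) are cheap to evaluate once B, α and λ are fixed. The sketch below (ours) implements them; the numbers in the example are illustrative placeholders, not values from the paper.

```python
import numpy as np

def noisy_distance_bound(B, alpha, lam, M=None):
    """Eqs. (34)-(35): bound on || rho~_M - sigma_beta ||_1 for the noisy evolution.
    M=None returns the steady-state bound B~_inf of Eq. (35)."""
    u0 = (1.0 - lam) * np.exp(-alpha)        # contraction factor per noisy step
    B_inf = B * lam / (1.0 - u0)
    if M is None:
        return B_inf
    return B * u0**M * (1.0 - lam / (1.0 - u0)) + B_inf

# Illustrative numbers only: a per-step convergence rate alpha, and lambda from a
# per-gate error lambda_g with N_g = 50 n gates per step.
n, lambda_g, alpha = 40, 1e-6, 0.01
lam = 1.0 - (1.0 - lambda_g) ** (50 * n)
print(noisy_distance_bound(B=2.0, alpha=alpha, lam=lam) / 2.0)   # B~_inf / B
```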
C. Results
In order to quantify errors induced by the noise, and ul-
timately to assess the viability of Lindblad-based Gibbs
state preparation protocols, one wishes to evaluate the
bounds of Eq. (35) at varied system sizes, noise lev-
els, and algorithmic parameters (for instance, the dis-
cretization of the Lindblad operators (25) that affects
the gate count of the circuit and thus the strength of the
noise per step of evolution). To illustrate applications of
these bounds, we resort to values of B and α extrapolated to large n, from the numerical fittings discussed in
[Figure 6: left panel, ∥ρ(t) − σ_β∥_1 versus evolution time Jt for n = 3–8 with exponential fits; inset, log(α) versus log(n) with fit y = −1.90 − 1.39x; right panel, B̃_∞/B versus number of qubits n (up to 100) for gate errors λ_g = 10^{−4}, 10^{−6}, 10^{−8}.]
FIG. 6. Noise study for the mixed-field Ising model (30). (Left panel) Fitting of the convergence dynamics in the chaotic regime (CH data from Fig. 3). We report the trace distance between the state prepared by noiseless evolution and the target Gibbs state, for different system sizes (n = 3 to 8 qubits, colors in legend). A fit of the form Eq. (33) (dashed lines) matches closely the convergence dynamics. (Inset) The convergence rates α are found to scale as O(n^{−1.39}). (Right panel) Extrapolating the convergence rates α to larger system sizes (up to n = 100), and assuming a number of 2-qubit gates N_g(n) = 50n per evolution step (Jδt = 1), we can evaluate the bounds (35) on the distance between the steady state of the noisy dynamics and the Gibbs state. These are normalized by B and plotted (solid lines) for different values of the error probability λ_g per noisy gate. For comparison, we report bounds obtained for a more generic noise model with the same strength (dotted lines) and for corresponding unitary circuits (dashed lines). These are further detailed in the main text and appendices (App. E 4 and E 5).
Sec. VI A, while for the error probabilities λ we fix the error per noisy 2-qubit gate λ_g and assume a number of gates N_g = 50n per unit of time (Jδt = 1) that scales linearly with the system size. Other choices, grounded in algorithmic and system details or scalings of the convergence rate, can be made.

The bounds (35) are reported in units of B in Fig. 6 (right panel, solid lines), and evaluated for different values of the 2-qubit gate error, λ_g = 10^{−4}, 10^{−6} and 10^{−8}. Such values span error rates representative of the transition from quantum platforms with high-fidelity 2-qubit operations on physical qubits expected in the near term, to platforms with a limited number of quantum-error-corrected qubits in the medium term. As can be seen, for a noise strength λ_g = 10^{−4} (blue line), except for the smaller system sizes, the prepared steady state quickly becomes indistinguishable from the maximally mixed state, which corresponds to a value B̃_∞/B = 1. As the 2-qubit gate errors decrease to λ_g = 10^{−6}, the range of sizes for which the prepared state remains sufficiently close to the target Gibbs states increases to a few tens of qubits. Finally, for λ_g = 10^{−8} it becomes possible to prepare Gibbs states of up to n = 100 qubits with relatively low error ∥ρ̃_∞ − σ_β∥_1 ≲ 0.2.
We stress that the previous evaluation of the bounds relies on extrapolation of the convergence rates from relatively small system sizes, up to n = 8, to much larger values of n, making these only approximate. Still, through bounds on the spectral gap derived in App. C 3 and C 5, and corresponding assumptions, we expect the convergence rate to scale geometrically as O(n^{−c}), as used here, albeit with potentially different values of the exponent c.

To put the previous results in perspective, we include bounds that are based on the treatment of generic noise with the same strength. For that, we adapt the bounds of Ref. [28] (Lemma II.1) to our discrete evolution and evaluate them for the noise channel (32) considered here. These bounds are obtained from a general bound on the distance between steady states of two Lindbladians (here the ideal and the noisy Lindbladian) in terms of their operator distance. Derivations are provided in App. E 4, yielding the bound in Eq. (E15). As can be seen in the figure, such bounds (dotted lines) are substantially looser than the ones in Eq. (35) obtained for stochastic noise: 3 to 70 times larger for the regime of small errors B̃_∞ ≲ 0.2. Notably, the latter accounts for the fact that errors (especially the ones occurring at early times in the dynamics) are tempered by the subsequent steps of evolution that always tend towards the steady state. This feature is not captured by the more generic bounds.
Finally, we also incorporate deviations that would be entailed for a unitary Gibbs state preparation protocol with comparable circuit complexity (dashed lines, detailed further in App. E 5). As can be seen, these are also significantly larger than the bounds of Eq. (35) obtained for Lindblad evolution and stochastic noise: 2 to 50 times larger for the regime of small errors B̃_∞ ≲ 0.2. While such a study has its limitations—some amount of non-unitarity would be required when preparing Gibbs states—it exemplifies the enhanced resilience of Lindblad-based protocols compared to unitary evolution, and adds to the body of work identifying this effect [69–73]. Conceptually, the existence of a non-trivial steady state, distinct from the steady state of the noise channel, ensures that the long-time evolution does not accumulate errors and still retains information about the Gibbs state σ_β.
Overall, the bounds in Eq. (35) for stochastic noise, or
Eq. (E15) for generic noise, allow us to assess the viability
of Gibbs state preparation protocols on near-term quan-
tum hardware. Furthermore, we saw that errors induced
by noise can be significantly less detrimental than ex-
pected, especially when the dominant contribution from
the noise is stochastic. Going forward, combining such
noise estimations with the analysis of algorithmic errors
in Sec. IV, will be key in determining the optimal al-
gorithmic parameters for our protocol. Finally, we note
that the noise considered here was adversarial, in that our
Lindbladian was not designed to account for it. Engineer-
ing Lindbladians for Gibbs state preparation taking into
account pre-characterized noise (even if approximately)
may open up the path to even more resilient protocols.
VII. QUANTUM CIRCUIT SIMULATION
The focus of Sec. V was on the dynamical properties
of our Lindbladian (3), neglecting most of the errors in-
curred by a circuit implementation. In turn, this sec-
tion focuses on a numerical investigation of the concrete
circuit implementation of our protocol (Sec. IV) and its
resilience against noise. We investigate the influence of
the main algorithmic errors in Sec. VII A and compare
our analytical bounds derived in Sec. IV to the error
obtained from noiseless circuit simulation. In addition,
circuit-level simulations allow us to analyze the influence
of specific noise models on the algorithm performance. To
this end, in Sec. VII B we support our theoretical treat-
ment in Sec. VI by noisy circuit simulations assuming a
local depolarizing two-qubit-gate noise model. The cir-
cuit simulations are performed with qujax [74], a Python
package leveraging JAX [75]. As in the previous section,
we focus on the parameter set CH of the Ising model (30).
A. Algorithmic errors
We analyze algorithmic errors of our protocol presented in Sec. IV. We recall the upper bound on the Lindblad simulation error (27),

t_mix × O( δt + T Δt^2/δt + (Δt/β) e^{−2(Δ_E T)^2} + √(β|B_H|) e^{−(1/8)(2πβ/(Δt·2β∥H∥) − 1)^2} ),  (36)

with evolution step δt and OFT discretization step Δt appearing in Eqs. (19) and (25). The first term is due
to Trotterization, random sampling of Lindblad operators (19), and dilation (24). The approximation of e^{−i√(δtγ)K_a} in Eq. (24) induces the second, third and fourth terms, each of which arises as follows (see the discussion around Eq. (B46) for more details). Approximating the Lindblad operators (8) via Eq. (25) by restricting the integration domain to [−T, T], and discretizing it into 2S time steps of size Δt = T/S, induces a truncation (third term) and discretization error (fourth term). The implementation of the operator A_a(sΔt) appearing in Eq. (25) requires coherent evolution under e^{−iHsΔt}, which is approximated by a second-order product formula and thus induces the second term in Eq. (36).
The Gibbs state preparation error of our circuit is quantified by the distance ∥ρ_∞^circ − σ_β∥_1 between the approximate steady state ρ_∞^circ of the noiseless quantum circuit (19) after Trotterization and OFT discretization and the Gibbs state σ_β. For the following noiseless circuit simulations we use the mixed-field Ising Hamiltonian (30) with n = 5 qubits and set the cutoff time of the discretized OFT (25) to JT = 1.6. This value ensures that the bulk of the Gaussian filter g(t) [Eq. (9)] is captured (i.e. (∫_{−∞}^{∞} dt g(t) − ∫_{−T}^{T} dt g(t)) / ∫_{−∞}^{∞} dt g(t) ≲ 10^{−7}) and, consequently, a small truncation error [cf. Eq. (B24) for the definition]. Furthermore, we fix the maximum simulation time to Jt = 500, which is sufficiently large for the dynamics to converge to its steady state ρ_∞^circ for the chosen Hamiltonian parameter point (cf. Fig. 3). Thus, the number M of evolution steps, Eq. (19), is given by M = 500/(Jδt) for fixed step size δt. Moreover, we choose |A| = 10 random jump operators and average all simulations over 10 repetitions.
To investigate the applicability of our theoretical bound (36), we compute the distances ∥ρ_∞^circ − σ_β∥_1 on a two-dimensional discretized grid of (δt, Δt)-values in the range 10^{−2} ≤ Jδt ≤ 10^{1} and 0.06 ≤ JΔt ≤ 1. We fit this data with the function

f_{α_1 α_2 α_3 α_4}(δt, Δt) = α_1 + α_2 δt + α_3 T Δt^2/δt + α_4 √(β|B_H|) e^{−(1/8)(2πβ/(Δt·2β∥H∥) − 1)^2},  (37)

which captures the error scaling of Eq. (36) for fixed evolution time. Details are given in App. F 1. Since we use a large integration window with JT = 1.6, the truncation error term e^{−2(Δ_E T)^2} in Eq. (36) is negligible relative to the other error sources, and we drop its contribution in Eq. (37). The parameter α_1 has been introduced to account for the convergence accuracy of the ideal Lindblad evolution (18).
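A fit of the ansatz (37) to the measured distances can be done with standard least squares. The sketch below (ours) uses scipy.optimize.curve_fit; the fixed inputs T, β, ∥H∥ and |B_H| are placeholders for the n = 5 setup, and the aliasing term follows our reconstruction of Eq. (37).

```python
import numpy as np
from scipy.optimize import curve_fit

def error_model(x, a1, a2, a3, a4, T=1.6, beta=0.5, normH=5.0, nBohr=100):
    """Fit ansatz of Eq. (37); T and beta follow the text, normH and nBohr
    are placeholder values for ||H|| and |B_H|."""
    dt, Dt = x                                   # evolution step, OFT step
    alias = np.exp(-(1.0 / 8.0) * (2 * np.pi * beta / (Dt * 2 * beta * normH) - 1.0) ** 2)
    return a1 + a2 * dt + a3 * T * Dt**2 / dt + a4 * np.sqrt(beta * nBohr) * alias

# dts, Dts: flattened (delta t, Delta t) grids; dists: measured ||rho_circ - sigma_beta||_1
# popt, _ = curve_fit(error_model, (dts, Dts), dists, p0=[1e-3, 0.1, 0.1, 1.0])
```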
In Fig. 7 (left panel) we report the distances ∥ρ_∞^circ − σ_β∥_1 (indicated as dots) as a function of the evolution step size δt for several values of the OFT discretization step size Δt (colors in the legend). We observe that the fitted function f (Eq. (37), solid lines) captures the approximately polynomial error increase for large δt (Jδt ≳ 1), although with a larger slope than the data. For small δt (Jδt ≲ 0.1) there are two distinct behaviors. If Δt is sufficiently large, the third term of Eq. (37) (proportional to α_3) is large and its 1/δt dependence counteracts the δt dependence of the second term. In this regime, the error can increase with decreasing evolution step size δt, which is visible in the fit
FIG. 7. Algorithmic errors of the randomized single-ancilla Lindblad simulation protocol for the mixed-field Ising model (30) with $n = 5$ qubits. Trace distance between the steady state $\rho_\infty^{\rm circ}$ simulated by the noiseless circuit (simulation time $Jt = 500$) and the target Gibbs state $\sigma_\beta$ as a function of the Trotter evolution step $\delta t$ (left panel, for $J\Delta t = 0.06, 0.10, 0.16, 0.26$) and of the OFT discretization step $\Delta t$ (right panel, for $J\delta t = 0.01, 0.10, 1.00, 10.00$). The fit (solid lines) obtained with the ansatz (37) captures the algorithmic error data (dots) reasonably well. The black vertical line in the right panel indicates the maximum $\Delta t$ for the fit ansatz to be valid. Fitted parameters and further details are in App. F.
corresponding to $J\Delta t = 0.26$ (left end of the light blue line). The circuit simulation data (dots) partially reflect this behavior: for $J\Delta t\gtrsim 0.16$ the error decreases only mildly with decreasing $\delta t$. The contribution from the third term of Eq. (37) can be suppressed for sufficiently small $J\Delta t\lesssim 0.1$, which results in an overall error decay with $\delta t$ according to the second term. Note that the observed error (dots) decreases faster than predicted by the fit in this regime, which is expected since the function we fit is an upper bound.
In Fig. 7 (right panel) we show the error dependence on the OFT discretization step $\Delta t$ for four values of the evolution step $\delta t$ (given in the legend). Several characteristic regimes can be identified. For small $J\delta t = 0.01$ (dark blue dots) and $J\Delta t\lesssim 0.25$, the third term of Eq. (37) controls the weak decrease of $\|\rho_\infty^{\rm circ}-\sigma_\beta\|_1$ as $\Delta t$ decreases. For larger OFT discretization steps, the fourth term of Eq. (37) dominates, leading to a rapid increase of the error for $0.25\lesssim J\Delta t\lesssim 0.4$. A qualitatively different behavior is observed for larger $J\delta t\gtrsim 1$. In this case, the second term of Eq. (37) exceeds the third term (dependent on $\Delta t$), and the error plateaus and becomes independent of $\Delta t$ for $J\Delta t\lesssim 0.25$. A more detailed discussion of the individual error contributions is given in App. F 2.
Overall, our circuit simulations show that our analytical bounds capture the different algorithmic error sources reasonably well and allow us to understand the individual contributions. In the noiseless case, we can control the algorithmic errors by reducing the evolution step $\delta t$ and the OFT discretization step $\Delta t$, at the expense of an increased circuit depth that scales polynomially in the inverse error, cf. Eq. (28). However, when executing on noisy quantum computers, the increased circuit depth will generally lead to larger errors induced by noise. These two trends,
namely the reduction of algorithmic errors and the in-
crease of noise-induced errors, counteract each other and
need to be balanced for an optimal overall accuracy of
the prepared state. We study this interplay in the next
subsection.
B. Local depolarizing noise
To complete our error analysis of the randomized single-ancilla Gibbs state preparation protocol, we perform circuit simulations with local depolarizing noise. To reflect a realistic implementation on hardware, the circuits are first compiled to the native gate set of Quantinuum's H1 architecture using the t$|$ket$\rangle$ compiler [76] with optimization level 2. Furthermore, after each 2-qubit gate in the compiled circuit, we apply a noise channel
\[
\Lambda[X] := (1-\lambda_g)\,X + \lambda_g\,\mathrm{Tr}_{2Q}[X]\otimes\frac{I_{2Q}}{4}, \tag{38}
\]
controlled by the error probability $\lambda_g\in[0,1]$, where $\mathrm{Tr}_{2Q}$ indicates tracing out the 2-qubit Hilbert space the gate acts on, while $I_{2Q}$ is the identity operator on that space. We choose an OFT discretization step size $J\Delta t = 0.2$. As in Section VII A, all simulations are performed for the mixed-field Ising model (30) with $n = 5$ qubits, cutoff time $JT = 1.6$, $|A| = 10$, maximum simulation time $Jt = 500$, and we average all simulations over 10 repetitions.
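As an illustration of the noise model (38), the following is a small self-contained sketch (independent of the qujax-based simulations used in this work) that applies the channel to the two qubits a gate has just acted on; the qubit ordering and parameters are illustrative assumptions.

```python
# Sketch: the local depolarizing channel of Eq. (38) applied to the first two
# qubits of an n-qubit density matrix rho of shape (2**n, 2**n).
import numpy as np

def depolarize_2q(rho, n, lam_g):
    """Apply Lambda[X] = (1-lam_g) X + lam_g Tr_2Q[X] (x) I_2Q/4."""
    rest = 2 ** (n - 2)
    r = rho.reshape(4, rest, 4, rest)
    tr_2q = np.trace(r, axis1=0, axis2=2)      # Tr_2Q[rho], acts on the other qubits
    mixed = np.kron(np.eye(4) / 4, tr_2q)      # I_2Q/4 tensor Tr_2Q[rho]
    return (1 - lam_g) * rho + lam_g * mixed

# quick sanity check: the channel is trace preserving
n = 3
psi = np.random.randn(2**n) + 1j * np.random.randn(2**n)
rho = np.outer(psi, psi.conj()); rho /= np.trace(rho)
print(np.isclose(np.trace(depolarize_2q(rho, n, lam_g=1e-4)), 1.0))
```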
Results of the circuit simulations with noise are displayed in Fig. 8. Denoting as $\tilde\rho_\infty^{\rm circ}$ the output of the noisy quantum circuit simulation, we evaluate the trace distance $\|\tilde\rho_\infty^{\rm circ}-\sigma_\beta\|_1$ to the Gibbs state as a function of the noise parameter $\lambda_g\in[10^{-6}, 10^{-4}]$. This distance quantifies the overall error, including both algorithmic and noise contributions, in the preparation of the target Gibbs state. Results (indicated as circles) for different values of the evolution steps $J\delta t = 1, 3$ and $5$ are reported (colors in legend). As can be seen in the figure, for noise strengths $\lambda_g < 10^{-5}$, lower values of $\delta t$ systematically yield smaller trace distances. As the noise increases, however, larger evolution steps result in smaller distances. This highlights trade-offs between algorithmic and noise errors: while the algorithmic error scales with the size of the evolution steps, the effect of the noise decreases with it, such that in certain noise regimes adopting larger evolution steps becomes beneficial.
To validate the noise analysis of Sec. VI, we evaluate the bounds $\tilde{B}(N_g)$ derived in Eq. (35) for stochastic noise. These bounds are computed using an error rate $\lambda = 1-(1-\lambda_g)^{N_g}$, with $N_g$ the number of 2-qubit gates per evolution step. For $J\delta t = \{1, 3, 5\}$, the gate counts per step of the compiled circuits are $N_g = \{308, 484, 644\}$, respectively. To be comparable to the trace distance, the bounds $\tilde{B}(N_g)$ are shifted by the algorithmic error $d_0$ that is obtained in the noiseless scenario (i.e. by setting the noise strength $\lambda_g = 0$) for each of the values of $\delta t$ probed. As seen in Fig. 8, $\tilde{B}(N_g)+d_0$ (dotted lines) always upper bounds the overall errors. Furthermore, the trade-off between algorithmic and noise errors, whereby larger evolution steps can incur smaller overall errors, is also captured by these bounds, albeit at slightly shifted values of $\lambda_g$.
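For concreteness, the per-step error rate entering $\tilde{B}(N_g)$ can be obtained from $\lambda_g$ and the quoted gate counts as in the following minimal sketch (the value of $\lambda_g$ is an arbitrary example).

```python
# Sketch: per-step stochastic error rate lambda = 1 - (1 - lambda_g)^N_g used
# to evaluate the bound of Eq. (35), for the gate counts quoted in the text.
lambda_g = 1e-5
for jdt, n_gates in zip([1, 3, 5], [308, 484, 644]):   # J*delta_t and N_g per step
    lam = 1 - (1 - lambda_g) ** n_gates
    print(f"J*dt = {jdt}: N_g = {n_gates}, lambda = {lam:.3e}")
```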
Finally, we perform a fit of the errors obtained with the bounds $\tilde{B}(N_{\rm eff})$ from Eq. (35) for an effective 2-qubit gate count $N_{\rm eff}$, rather than the true number of 2-qubit gates $N_g$. For $J\delta t = \{1, 3, 5\}$, we obtain fitted $N_{\rm eff} = \{135, 193, 247\}$, respectively. The resulting bounds, again shifted by the algorithmic errors $d_0$, are reported as solid lines. This shows that, for the noise model simulated, the functional dependence of the bounds $\tilde{B}$ captures the actual errors remarkably well.
Overall, these simulations allow us to validate the noise analysis of Sec. VI. We show that, while overestimating the actual noise, the bound $\tilde{B}$ already captures important trade-offs between algorithmic and noise errors. These will be important considerations when aiming at running specific circuits on quantum hardware. Furthermore, we see that the bound captures the actual errors remarkably well using an effective number of gates smaller than the real number of gates. Understanding this reduction in relation to the prior works on noise resilience of quantum simulation [77–82] is left to future work.
VIII. CONCLUSION
Our contribution bridges the gap between the predom-
inantly theory-driven literature on Lindblad simulation
algorithms for quantum Gibbs state preparation, and the
engineering of concrete Lindbladians and protocols that
exhibit good convergence in practice, with circuits im-
plementable on foreseeable quantum hardware. To this
FIG. 8. Circuit simulations with depolarizing noise applied to the 2-qubit gates for $n = 5$ qubits. The distance $\|\tilde\rho_\infty^{\rm circ}-\sigma_\beta\|_1$ between the target Gibbs state and the output of the noisy circuits (circles) is reported as a function of the noise strength $\lambda_g$ in Eq. (38) for different values of the evolution step $\delta t$ (colors in legend, $J\delta t = 1.0, 3.0, 5.0$). Further details regarding the parameters used for the circuit construction are provided in the main text. These distances account for both algorithmic and noise errors in the preparation of the Gibbs state. In addition, we report the bounds $\tilde{B}$ on the noise errors from Eq. (35), shifted by the algorithmic error $d_0$, which is evaluated for $\lambda_g = 0$. We report these shifted bounds both for the actual number of gates $N_g$ used in the compiled circuits (dashed lines) and for an effective number of gates $N_{\rm eff}$ (solid lines), which is obtained by fitting the bound $\tilde{B}+d_0$ to the data. Both $N_g$ and the fitted values $N_{\rm eff}$ are provided in the main text for all the $\delta t$ studied.
end, we combine several recent developments in the liter-
ature [2831,34,46] and propose a variant of this class of
algorithms with reduced implementation cost (Secs. III
and IV). Through a numerical analysis (Sec. V), we es-
tablish the crucial influence of the dynamical properties
of the underlying system Hamiltonian and the Lindblad
operators on mixing time and convergence accuracy. In
line with our theoretical analysis, we observe a weak,
almost linear, polynomial system-size scaling of the mix-
ing time for systems obeying the ETH. In contrast, the
Lindblad dynamics exhibits vastly different convergence
characteristics in the non-chaotic limits of our model.
In realistic scenarios, the overall Gibbs state prepara-
tion error is controlled by an interplay of algorithmic and
hardware-induced errors—for example smaller evolution
steps in Eq. (19) reduce the Trotterization error but, at
the same time, increase the gate count of the full circuit.
To obtain a good understanding of those error sources, we
investigate their impact both on a theoretical level and
based on circuit simulations (Secs. VI and VII). We nu-
merically demonstrate that our theoretical error bounds
provide a good description of the actual errors in noiseless
circuit simulations. Further, we show that the Lindblad
dynamics exhibits an inherent resilience against incoher-
ent stochastic noise, due to the presence of a nontrivial
steady state of the noisy dissipative dynamics. Unlike in
the unitary case, noise-induced errors at earlier times are
damped through the dissipative character of the dynam-
ics, which is a promising observation for the successful
demonstration of this class of algorithms on near-term
hardware. We support our theoretical noise analysis with
circuit simulations considering a depolarizing two-qubit
gate noise model.
The successful demonstration of Lindblad-based quan-
tum Gibbs state preparation algorithms on foresee-
able hardware requires a deep understanding of many
contributing factors—dynamical and algorithmic de-
sign properties (Hamiltonian, Lindblad operators, initial
state, algorithmic parameters), and of the expected hard-
ware errors. This work lays a foundation for this goal.
Future work includes an investigation of suitable initial
states implementable with a shallow circuit, and a de-
tailed account of the trade-off between algorithmic and
noise-induced errors for realistic hardware-specific noise
models. To further mitigate the influence of noise, one
may design the Lindbladian to take into account and
counteract pre-characterized stochastic noise channels.
Furthermore, the influence of coherent errors, and poten-
tial routes to convert those into stochastic noise similar to
randomized compiling [83,84], need to be explored. Fi-
nally, to assess the performance of the algorithm, efficient
ways to certify the prepared Gibbs state, e.g. based on
recent Hamiltonian learning results [41,85], are required.
ACKNOWLEDGMENTS
We thank Daniel Stilck França, Tomoya Hayata, and
Maria Tudorovskaya for insightful discussions. We thank
Marcello Benedetti and Henrik Dreyer for their feedback
on this manuscript.
Appendix A: Protocol details and relation to the CKG algorithm
After briefly reviewing the quantum Gibbs sampling algorithm proposed in [28,29], we clarify its relation to our
protocol.
1. CKG Quantum Gibbs sampling algorithm
The CKG algorithm [29] uses the Lindbladian
\[
\mathcal{L}_{\rm CKG}[\rho] = -i[G,\rho] + \sum_{a\in A}\int_{-\infty}^{\infty} d\omega\,\gamma(\omega)\left( L_a(\omega)\,\rho\, L_a(\omega)^\dagger - \frac{1}{2}\{L_a(\omega)^\dagger L_a(\omega), \rho\}\right)
\]
\[
= -i[G,\rho] + \sum_{a\in A}\sum_{\nu_1,\nu_2} \alpha_{\nu_1\nu_2}\left( A^a_{\nu_1}\,\rho\,(A^a_{\nu_2})^\dagger - \frac{1}{2}\{(A^a_{\nu_1})^\dagger A^a_{\nu_2}, \rho\}\right),
\qquad
L_a(\omega) = \sum_{\nu\in B_H}\hat g_{\rm CKG}(\nu-\omega)\, A_\nu, \tag{A1}
\]
where $\alpha_{\nu_1\nu_2} := \int d\omega\,\gamma(\omega)\,\hat g_{\rm CKG}(\nu_1-\omega)\,\hat g_{\rm CKG}(\nu_2-\omega)$ and $B_H$ is the set of Bohr frequencies of the Hamiltonian $H$ of the target Gibbs state $\sigma_\beta$. With the choice of transition weight $\gamma(\omega) = e^{-\frac{(\omega+\omega_\gamma)^2}{2\Delta_\gamma^2}}$ and frequency-domain filter function $\hat g_{\rm CKG}(\omega) = \frac{1}{(2\pi\Delta_E^2)^{1/4}}\, e^{-\frac{\omega^2}{4\Delta_E^2}}$ for a parametrization satisfying $\beta = \frac{2\omega_\gamma}{\Delta_E^2+\Delta_\gamma^2}$, one can readily confirm the identity,
\[
\alpha_{\nu_1\nu_2} = e^{-\beta(\nu_1+\nu_2)/2}\,\alpha_{-\nu_1,-\nu_2}. \tag{A2}
\]
This identity ensures that the transition term $\sum_{a\in A}\sum_{\nu_1,\nu_2}\alpha_{\nu_1\nu_2} A^a_{\nu_1}[\,\cdot\,](A^a_{\nu_2})^\dagger$ obeys the $\sigma_\beta$-DB condition (5), while the decay term $-\frac{1}{2}\sum_{a\in A}\sum_{\nu_1,\nu_2}\alpha_{\nu_1\nu_2}\{(A^a_{\nu_1})^\dagger A^a_{\nu_2},\,\cdot\,\}$ does not. To enforce the $\sigma_\beta$-DB condition for the full Lindbladian $\mathcal{L}_{\rm CKG}[\,\cdot\,]$, the authors in [29] designed the coherent term as
\[
G = G_{\rm CKG} := \frac{i}{2}\sum_{a\in A}\sum_{\nu_1,\nu_2}\alpha_{\nu_1\nu_2}\,\tanh\!\left(\frac{\beta(\nu_1-\nu_2)}{4}\right)(A^a_{\nu_1})^\dagger A^a_{\nu_2}, \tag{A3}
\]
which, combined with the decay term, obeys the $\sigma_\beta$-DB condition, and thus, so does the Lindbladian $\mathcal{L}_{\rm CKG}$,
\[
\langle X, \mathcal{L}^\dagger_{\rm CKG}[Y]\rangle_{\sigma_\beta} = \langle \mathcal{L}^\dagger_{\rm CKG}[X], Y\rangle_{\sigma_\beta} \tag{A4}
\]
for any bounded operators $X, Y$.
2. Relation to the present protocol
In the present work, we choose a $\delta$-function for the transition weight
\[
\gamma(\omega) = \delta(\omega + \omega_\gamma) \quad\text{with}\quad \omega_\gamma = \beta\Delta_E^2/2. \tag{A5}
\]
With such a choice, the coefficients $\alpha_{\nu_1\nu_2}$ factorize as
\[
\alpha_{\nu_1\nu_2} = \eta_{\nu_1}\eta_{\nu_2} \quad\text{with}\quad \eta_\nu := \hat g_{\rm CKG}(\nu+\omega_\gamma) = \frac{1}{(2\pi\Delta_E^2)^{1/4}}\, e^{-\frac{(\nu+\beta\Delta_E^2/2)^2}{4\Delta_E^2}}, \tag{A6}
\]
and satisfy the identity (A2). Furthermore, the Lindbladian (A1) is reduced to
\[
\mathcal{L}_{\rm CKG}[\rho] = -i[G, \rho] + \gamma\sum_{a\in A}\left( L_a\rho L_a^\dagger - \frac{1}{2}\{L_a^\dagger L_a, \rho\}\right), \tag{A7}
\]
\[
L_a = \int_{-\infty}^{\infty} dt\, g(t)\, A_a(t) = \sum_{\nu\in B_H}\eta_\nu\, A^a_\nu, \tag{A8}
\]
\[
G = G_{\rm CKG} = \frac{i}{2}\sum_{a\in A}\sum_{\nu_1,\nu_2\in B_H}\eta_{\nu_1}\eta_{\nu_2}\,\tanh\!\left(\frac{\beta(\nu_1-\nu_2)}{4}\right)(A^a_{\nu_1})^\dagger A^a_{\nu_2}, \tag{A9}
\]
with $A_a(t) := e^{iHt} A_a e^{-iHt}$, $\eta_\nu := \int_{-\infty}^{\infty} dt\, e^{i\nu t} g(t)$, $A^a_\nu := \sum_{E_i-E_j=\nu}\Pi_i A_a \Pi_j$ and $g(t)$ as defined in Eq. (9). Equation (A7) is identical to our Lindbladian (3) except for the coherent term involving $G_{\rm CKG}$ and the fact that the transition rates $\gamma_a$ in Eq. (3) formally depend on $a$ because subsequently we write $\gamma_a = \gamma p_a$ and sample individual Lindblad operators according to the discrete distribution $p_a$.
Next, we show that the coherent part (A9) vanishes under the ETH average, i.e. $\mathbb{E}_R[G_{\rm CKG}] = 0$. To this end, we assume that the jump operators $A_a$ satisfy the ETH (11),
\[
\langle E_i| A_a |E_j\rangle = A_a(E_i)\,\delta_{ij} + \frac{f_a(\bar E_{ij}, \nu_{ij})}{\sqrt{D(\bar E_{ij})}}\, R^a_{ij}, \tag{A10}
\]
where the random variables $R^a_{ij}$ are independent and satisfy $\mathbb{E}_R[R^a_{ij}] = 0$ together with $\mathbb{E}_R[|R^a_{ij}|^2] = 1$, and $D(E)$ is the density of states. Hence, the ETH average of a product of jump operators is given by
\[
\mathbb{E}_R[(A^a_{\nu_2})^\dagger A^a_{\nu_1}] = \sum_{\substack{E_k-E_i=\nu_2 \\ E_k-E_j=\nu_1}}\left( \delta_{ki}\delta_{kj}\, A_a(E_i) A_a(E_k)\,|E_i\rangle\langle E_j| + \frac{f^a_{ki} f^{a*}_{kj}}{\sqrt{D(\bar E_{ki}) D(\bar E_{kj})}}\,\mathbb{E}_R[R^{a*}_{ki} R^a_{kj}]\,|E_i\rangle\langle E_j|\right)
\]
\[
= \sum_{\substack{E_k-E_i=\nu_2 \\ E_k-E_j=\nu_1}} |E_i\rangle\langle E_i|\left( \delta_{\nu_1,0}\,\delta_{\nu_2,0}\,(A_a(E_i))^2 + \delta_{\nu_1\nu_2}\,\frac{|f^a_{ki}|^2}{D(\bar E_{ki})}\right). \tag{A11}
\]
In the first line, we use the fact that the cross terms involving $A_a$ and $R^a$ are first order in the random matrix elements $R^a_{ij}$ and, hence, vanish in the ETH average. For the second line, note that the $\delta_{ki}\delta_{kj}$ in the first term enforces $E_k = E_i = E_j$, such that $\nu_1 = \nu_2 = 0$. Moreover, $\mathbb{E}_R[R^{a*}_{ki} R^a_{kj}] = \delta_{ij}$, which enforces $E_i = E_j$ and, thus, $\nu_1 = \nu_2$. Importantly, both of these contributions are zero whenever $\nu_1\neq\nu_2$. From this, and the fact that $\tanh(0) = 0$, it directly follows that the coherent term (A9) vanishes under the ETH average:
\[
\mathbb{E}_R[G] \propto \frac{i}{2}\sum_{\nu\in B_H}\tanh\!\left(\frac{\beta\nu}{4}\right)\sum_{\nu_1-\nu_2=\nu}\eta_{\nu_1}\eta_{\nu_2}\,\delta_{\nu_1\nu_2} = 0. \tag{A12}
\]
The above argument confirms that the average dissipative Lindbladian $\mathbb{E}_R\mathcal{D}$, with
\[
\mathcal{D}[\rho] = \sum_{a\in A}\gamma_a\left( L_a\rho L_a^\dagger - \frac{1}{2}\{L_a^\dagger L_a, \rho\}\right) \tag{A13}
\]
[cf. Eq. (3)], is $\sigma_\beta$-DB without introducing the coherent term $-i[G_{\rm CKG},\,\cdot\,]$, as we saw in Sec. III B. Since the Gibbs state $\sigma_\beta$ commutes with $H$, a coherent term $-i[G,\,\cdot\,]$ generated by the system Hamiltonian $G = H$ does not change the steady state, and $\mathbb{E}_R\mathcal{L} = -i[H,\,\cdot\,] + \mathbb{E}_R\mathcal{D}$ remains $\sigma_\beta$-DB under the ETH average under a slightly generalized definition of detailed balance. To see this, note that, in general, $(\mathcal{L}^\dagger)^{\rm KMS} = \sigma_\beta^{-1/2}\,\mathcal{L}[\sigma_\beta^{1/2}\,\cdot\,\sigma_\beta^{1/2}]\,\sigma_\beta^{-1/2}$ is the adjoint of $\mathcal{L}^\dagger$ (the Hilbert-Schmidt adjoint of $\mathcal{L}$) with respect to the KMS inner product $\langle\cdot,\cdot\rangle_{\sigma_\beta}$. In our case the two generators are not equal, as would be required by our KMS detailed balance condition (5). Their difference (under the ETH average) is given by $\mathbb{E}_R\mathcal{L}^\dagger - (\mathbb{E}_R\mathcal{L}^\dagger)^{\rm KMS} = 2i[H,\,\cdot\,]$, as can be readily confirmed. This is a special case of the more general version of quantum detailed balance introduced in [51] (Def. 27). In fact, it is easy to see [similarly as in Eq. (6)] that under this condition
\[
\langle \mathbb{E}_R\mathcal{L}[\sigma_\beta], X\rangle_{\rm HS} = \langle I, \mathbb{E}_R\mathcal{L}^\dagger[X]\rangle_{\sigma_\beta} = \langle (\mathbb{E}_R\mathcal{L}^\dagger)^{\rm KMS}[I], X\rangle_{\sigma_\beta} = \langle \mathbb{E}_R\mathcal{L}^\dagger[I] - 2i[H, I], X\rangle_{\sigma_\beta} = 0 \tag{A14}
\]
for all $X$, which confirms that $\sigma_\beta$ is the steady state of $\mathbb{E}_R\mathcal{L}$.
Thus, by replacing $G_{\rm CKG}$ in Eq. (A7) with the system Hamiltonian $H$, we end up with our Lindbladian (3). For our protocol, we set $\Delta_E = \sqrt{2}/\beta$ (see App. C 1, where we elaborate on this choice of parameter), which results in the filter function [cf. Eq. (9)]
\[
g(t) = \frac{1}{\pi^{3/4}\beta^{1/2}}\, e^{it/\beta}\, e^{-2t^2/\beta^2}. \tag{A15}
\]
With this, our Lindblad operators take the form [cf. Eq. (8)]
\[
L_a = \int_{-\infty}^{\infty} dt\, g(t)\, A_a(t) = \left(\frac{\beta^2}{4\pi}\right)^{1/4}\sum_{\nu\in B_H} e^{-\frac{(\beta\nu+1)^2}{8}}\, A^a_\nu = \sum_{\nu\in B_H}\eta_\nu\, A^a_\nu, \tag{A16}
\]
with the frequency-domain filter function
\[
\eta_\nu = \left(\frac{\beta^2}{4\pi}\right)^{1/4} e^{-\frac{(\beta\nu+1)^2}{8}}. \tag{A17}
\]
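As a quick numerical illustration (not part of the protocol itself), the following sketch evaluates the filter (A17) on a grid of frequencies and checks that it coincides with the Fourier transform $\int dt\, e^{i\nu t} g(t)$ of the time-domain filter (A15); the value of $\beta$ and the frequency grid are arbitrary choices.

```python
# Sketch: frequency-domain filter eta_nu of Eq. (A17) versus the numerical
# Fourier transform of g(t) in Eq. (A15).
import numpy as np

beta = 2.0
nu = np.linspace(-10, 10, 9)                      # stand-in Bohr frequencies

eta = (beta**2 / (4 * np.pi))**0.25 * np.exp(-(beta * nu + 1)**2 / 8)

t = np.linspace(-20 * beta, 20 * beta, 400001)
g = np.exp(1j * t / beta) * np.exp(-2 * t**2 / beta**2) / (np.pi**0.75 * beta**0.5)
eta_num = np.array([np.trapz(np.exp(1j * v * t) * g, t) for v in nu])

print(np.max(np.abs(eta_num - eta)))              # ~1e-10: the two agree
```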
Appendix B: Details of single-ancilla Gibbs state preparation protocol
We provide details on the implementation of the steps of Lindblad evolution that are core to our Gibbs-state
preparation protocol by closely following [30], where the authors employed a single-ancilla protocol to prepare ground
states. A thorough error analysis of each approximation necessary for a quantum circuit implementation is provided
(Sec. B 1 to Sec. B 3). A summary of these contributions is reported in Sec. B 4 together with the overall circuit
complexity entailed [Eq. (B44)].
In the following, we aim at implementing the evolution $e^{t\mathcal{L}}$ for a Lindbladian $\mathcal{L}$ over a simulation time $t = M\delta t$. The Lindbladian (3) with $G = H$ can be recast as
\[
\mathcal{L}[\rho] = \sum_{a\in A} p_a\left( -i[H, \rho] + \gamma\left( L_a\rho L_a^\dagger - \frac{1}{2}\{L_a^\dagger L_a, \rho\}\right)\right), \tag{B1}
\]
in terms of a discrete probability distribution over the jump operator indices $\{p_a\}_{a\in A}$ and where the strength $\gamma$ of the dissipative terms has been renormalized accordingly. For the purpose of implementation, we approximate the evolution under such a Lindbladian through
\[
e^{t\mathcal{L}} \approx \prod_{i=1}^{M}\left( e^{\delta t\gamma\,\mathcal{D}_{a_i}}\circ\,\mathcal{U}_{\delta t}\right) \quad\text{where}\quad \mathcal{D}_a[\rho] := L_a\rho L_a^\dagger - \frac{1}{2}\{L_a^\dagger L_a, \rho\}, \quad\text{and}\quad \mathcal{U}_{\delta t}[\rho] := e^{-i\delta t H}\rho\, e^{i\delta t H}. \tag{B2}
\]
At each evolution step we randomly sample a Lindblad operator, yielding a set $\{L_{a_i}\}_{i=1,\ldots,M}$ where each of the indices $a_i$ has been drawn with probability $p_{a_i}$. Using the Taylor expansion, one can see that, on average, such sampling incurs errors that scale with $\delta t^2$, as we have
\[
\sum_a p_a\, e^{\delta t\gamma\,\mathcal{D}_a}\circ\,\mathcal{U}_{\delta t}[\rho] = \rho + \delta t\,\mathcal{L}[\rho] + \mathcal{O}(\delta t^2) = e^{\delta t\mathcal{L}}[\rho] + \mathcal{O}(\delta t^2). \tag{B3}
\]
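The randomized scheme of Eqs. (B2)–(B3) can be illustrated with a small dense-matrix simulation in which each dissipative step $e^{\delta t\gamma\mathcal{D}_a}$ is applied exactly via vectorization. This is only a toy sketch of the sampling structure, not the single-ancilla dilation circuit constructed below; the Hamiltonian and jump operators are random placeholders.

```python
# Sketch: randomized evolution of Eq. (B2) for a toy system, with each
# dissipator exp(dt*gamma*D_a) applied exactly through column-stacking
# vectorization. Illustrates Eq. (B3); not the dilation-based circuit.
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
dim, n_jumps, dt, gamma, n_steps = 4, 3, 0.05, 1.0, 400

H = rng.standard_normal((dim, dim)); H = (H + H.T) / 2
L_ops = [rng.standard_normal((dim, dim)) + 1j * rng.standard_normal((dim, dim))
         for _ in range(n_jumps)]
p = np.ones(n_jumps) / n_jumps
I = np.eye(dim)

def dissipator_superop(L):
    # vectorized D_a[rho] = L rho L^dag - 1/2 {L^dag L, rho}
    LdL = L.conj().T @ L
    return (np.kron(L.conj(), L)
            - 0.5 * np.kron(I, LdL) - 0.5 * np.kron(LdL.T, I))

U = expm(-1j * dt * H)
exp_D = [expm(dt * gamma * dissipator_superop(L)) for L in L_ops]

rho = np.zeros((dim, dim), dtype=complex); rho[0, 0] = 1.0
for _ in range(n_steps):
    rho = U @ rho @ U.conj().T                     # unitary part U_{dt}
    a = rng.choice(n_jumps, p=p)                   # sample one jump operator
    vec = exp_D[a] @ rho.reshape(-1, order='F')    # apply exp(dt*gamma*D_a)
    rho = vec.reshape(dim, dim, order='F')

print(np.trace(rho).real)                          # stays 1 (trace preserving)
```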
1. Regularization of the integral in the operator Fourier transforms
Recall from Eq. (8) that the individual Lindblad operator $L_a$ can be expressed as an operator Fourier transform via the integral
\[
L_a = \int_{-\infty}^{\infty} dt\, g(t)\, A_a(t), \quad\text{where}\quad A_a(t) := e^{iHt} A_a e^{-iHt}. \tag{B4}
\]
To realize such an integral as a quantum circuit, we need to regularize it. This is achieved by restricting its domain to $[-T, T]$ and discretizing it using the trapezoidal rule. For a step size $\Delta t := T/S$, we obtain
\[
\bar L_a := \frac{\Delta t}{2}\, g_{-S}\, e^{-iHS\Delta t} A_a e^{iHS\Delta t} + \sum_{s=-S+1}^{S-1}\Delta t\, g_s\, e^{iHs\Delta t} A_a e^{-iHs\Delta t} + \frac{\Delta t}{2}\, g_S\, e^{iHS\Delta t} A_a e^{-iHS\Delta t}
= \sum_{s=-S}^{S}\Delta t_s\, g_s\, e^{iHs\Delta t} A_a e^{-iHs\Delta t}, \tag{B5}
\]
where we have defined $g_s := g(s\Delta t)$ and $\Delta t_s := \Delta t$ for $-S+1\le s\le S-1$ or $\Delta t_s := \Delta t/2$ for $s = \pm S$.
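The discretization (B5) can be checked on a small example. The sketch below, with a random Hermitian toy Hamiltonian and jump operator and illustrative values of $\beta$, $T$ and $S$, compares the trapezoidal sum with a direct evaluation of $L_a = \sum_\nu \eta_\nu A^a_\nu$ in the eigenbasis of $H$.

```python
# Sketch: trapezoidal discretization (B5) of the operator Fourier transform
# versus the frequency-space form L_a = sum_nu eta_nu A_nu. Toy parameters.
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(1)
dim, beta = 6, 2.0
T, S = 4.8, 60
Dt = T / S                                         # OFT step Delta t = T/S

H = rng.standard_normal((dim, dim)); H = (H + H.T) / 2
A = rng.standard_normal((dim, dim)); A = (A + A.T) / 2   # Hermitian jump operator

def g(t):                                          # filter of Eq. (B19)/(A15)
    return np.exp(1j * t / beta) * np.exp(-2 * t**2 / beta**2) / (np.pi**0.75 * beta**0.5)

# trapezoidal rule, Eq. (B5): weights Dt/2 at s = +-S, Dt otherwise
L_bar = np.zeros((dim, dim), dtype=complex)
for s in range(-S, S + 1):
    w = Dt / 2 if abs(s) == S else Dt
    Us = expm(1j * H * s * Dt)
    L_bar += w * g(s * Dt) * (Us @ A @ Us.conj().T)

# reference: matrix elements eta_{E_i - E_j} A_ij in the eigenbasis of H
E, V = np.linalg.eigh(H)
A_eig = V.conj().T @ A @ V
nu = E[:, None] - E[None, :]
eta = (beta**2 / (4 * np.pi))**0.25 * np.exp(-(beta * nu + 1)**2 / 8)
L_ref = V @ (eta * A_eig) @ V.conj().T

print(np.max(np.abs(L_bar - L_ref)))               # small; limited by the cutoff T
```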
Accordingly, the Hermitian dilation operator $K_a$ (23) corresponding to $L_a$ becomes
\[
\bar K_a := |1\rangle\langle 0|_{\rm anc}\otimes\bar L_a + |0\rangle\langle 1|_{\rm anc}\otimes\bar L_a^\dagger = X_{\rm anc}\otimes\frac{\bar L_a + \bar L_a^\dagger}{2} - i\, Y_{\rm anc}\otimes\frac{\bar L_a - \bar L_a^\dagger}{2}
\]
\[
= \sum_{s=-S}^{S}\Delta t_s\left(\mathrm{Re}[g_s]\, X_{\rm anc} + \mathrm{Im}[g_s]\, Y_{\rm anc}\right)\otimes e^{iHs\Delta t} A_a e^{-iHs\Delta t}
=: \sum_{s=-S}^{S}\bar K^a_s, \tag{B6}
\]
where $A_a$ is assumed to be Hermitian. Errors entailed by the discretization of the jump operators will be quantified in Sec. B 3, but for now we proceed with the implementation of the Lindblad evolution.
2. Second-order product formula for Lindblad evolution
Having defined the discretized Lindblad operators $\bar L_a$ and dilation operators $\bar K_a$, we aim at implementing a step of dissipative evolution $e^{\delta t\gamma\,\mathcal{D}_a}$ appearing in Eq. (B2). Recall from Eq. (24) that this can be achieved, up to an error scaling as $(\delta t\gamma)^2$, through the unitary evolution $e^{-i\sqrt{\delta t\gamma}\,K_a}$ acting on the system together with a single additional qubit:
\[
\mathrm{Tr}_{\rm anc}\!\left[ e^{-i\sqrt{\delta t\gamma}\,K_a}(|0\rangle\langle 0|_{\rm anc}\otimes\rho)\, e^{i\sqrt{\delta t\gamma}\,K_a}\right] = e^{\delta t\gamma\,\mathcal{D}_a}[\rho] + \mathcal{O}((\delta t\gamma)^2). \tag{B7}
\]
This evolution is implemented for the discretized $\bar K_a$ as a second-order product formula, yielding
\[
e^{-i\sqrt{\delta t\gamma}\,\bar K_a} = e^{-i\sqrt{\delta t\gamma}\sum_{s=-S}^{S}\bar K^a_s} \approx \overrightarrow{\prod_s}\, e^{-i\frac{\sqrt{\delta t\gamma}}{2}\bar K^a_s}\; \overleftarrow{\prod_s}\, e^{-i\frac{\sqrt{\delta t\gamma}}{2}\bar K^a_s}, \tag{B8}
\]
where we defined the ordered products $\overrightarrow{\prod_s} O(s) = O(S)\cdots O(-S+1)O(-S)$ and $\overleftarrow{\prod_s} O(s) = O(-S)\cdots O(S-1)O(S)$.
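The second-order structure of Eq. (B8) can be verified numerically on toy Hermitian matrices. The sketch below forms the forward- and backward-ordered products and checks that the deviation from $e^{-i\theta\bar K_a}$ shrinks by roughly a factor of eight when $\theta = \sqrt{\delta t\gamma}$ is halved, consistent with the $(\delta t\gamma)^{3/2}$ leading error of Eq. (B9); all matrices are random placeholders.

```python
# Sketch: symmetric (second-order) product formula of Eq. (B8) for a sum of
# Hermitian terms K_s; theta plays the role of sqrt(dt*gamma).
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(2)
dim, n_terms = 8, 5
Ks = []
for _ in range(n_terms):
    M = rng.standard_normal((dim, dim)) + 1j * rng.standard_normal((dim, dim))
    Ks.append((M + M.conj().T) / 2)
K = sum(Ks)

def trotter2(theta):
    forward = np.eye(dim, dtype=complex)
    for Ks_ in Ks[::-1]:                 # ordered product O(S)...O(-S)
        forward = forward @ expm(-1j * theta / 2 * Ks_)
    backward = np.eye(dim, dtype=complex)
    for Ks_ in Ks:                       # reversed order O(-S)...O(S)
        backward = backward @ expm(-1j * theta / 2 * Ks_)
    return forward @ backward

for theta in [0.2, 0.1, 0.05]:
    err = np.linalg.norm(trotter2(theta) - expm(-1j * theta * K), 2)
    print(theta, err)                    # err drops by ~8x when theta is halved
```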
The resulting Trotter error is given by
\[
\overrightarrow{\prod_s}\, e^{-i\frac{\sqrt{\delta t\gamma}}{2}\bar K^a_s}\; \overleftarrow{\prod_s}\, e^{-i\frac{\sqrt{\delta t\gamma}}{2}\bar K^a_s} - e^{-i\sqrt{\delta t\gamma}\,\bar K_a} = (\delta t\gamma)^{3/2}\sum_{s_1,s_2,s_3} c_{s_1 s_2 s_3}\,\bar K^a_{s_1}\bar K^a_{s_2}\bar K^a_{s_3} + \mathcal{O}((\delta t\gamma)^2), \tag{B9}
\]
with some coefficients $c_{s_1 s_2 s_3}$. Furthermore, noting that $(\langle 0|_{\rm anc}\otimes I)\,\bar K^a_{s_1}\bar K^a_{s_2}\bar K^a_{s_3}\,(|0\rangle_{\rm anc}\otimes I) = 0$, we find
\[
\mathrm{Tr}_{\rm anc}\!\left[ \overrightarrow{\prod_s} e^{-i\frac{\sqrt{\delta t\gamma}}{2}\bar K^a_s}\, \overleftarrow{\prod_s} e^{-i\frac{\sqrt{\delta t\gamma}}{2}\bar K^a_s}\,(|0\rangle\langle 0|_{\rm anc}\otimes\rho)\,\left(\overleftarrow{\prod_s} e^{-i\frac{\sqrt{\delta t\gamma}}{2}\bar K^a_s}\right)^{\!\dagger}\left(\overrightarrow{\prod_s} e^{-i\frac{\sqrt{\delta t\gamma}}{2}\bar K^a_s}\right)^{\!\dagger}\right]
\]
\[
= \mathrm{Tr}_{\rm anc}\!\left[ e^{-i\sqrt{\delta t\gamma}\,\bar K_a}(|0\rangle\langle 0|_{\rm anc}\otimes\rho)\, e^{i\sqrt{\delta t\gamma}\,\bar K_a}\right] + \mathcal{O}((\delta t\gamma)^2). \tag{B10}
\]
We simplify the evolution operator and estimate the evolution time required for each step of Lindblad evolution. Recalling that the evolution under a single $\bar K^a_s$ (out of a total of $2S+1$) takes the form
\[
e^{-i\frac{\sqrt{\delta t\gamma}}{2}\bar K^a_s} = \exp\!\left( -i\frac{\sqrt{\delta t\gamma}}{2}\Delta t_s\left(\mathrm{Re}[g_s]\, X_{\rm anc} + \mathrm{Im}[g_s]\, Y_{\rm anc}\right)\otimes e^{iHs\Delta t} A_a e^{-iHs\Delta t}\right)
\]
\[
= (I_{\rm anc}\otimes e^{iHs\Delta t})\,\underbrace{e^{-i\frac{\sqrt{\delta t\gamma}}{2}\Delta t_s\left(\mathrm{Re}[g_s]\, X_{\rm anc} + \mathrm{Im}[g_s]\, Y_{\rm anc}\right)\otimes A_a}}_{=:\,B^a_s}\,(I_{\rm anc}\otimes e^{-iHs\Delta t}), \tag{B11}
\]
we can rewrite Eq. (B8) as
\[
\overrightarrow{\prod_s}\, e^{-i\frac{\sqrt{\delta t\gamma}}{2}\bar K^a_s}\; \overleftarrow{\prod_s}\, e^{-i\frac{\sqrt{\delta t\gamma}}{2}\bar K^a_s}
= (I_{\rm anc}\otimes e^{iHS\Delta t})\,\underbrace{\left( \overrightarrow{\prod_s}\, B^a_s\,(I_{\rm anc}\otimes e^{-iH\Delta t})\right)\left( \overleftarrow{\prod_s}\,(I_{\rm anc}\otimes e^{iH\Delta t})\, B^a_s\right)}_{=:\,V_a(\delta t)}\,(I_{\rm anc}\otimes e^{-iHS\Delta t}). \tag{B12}
\]
Therefore, a single step of Lindblad evolution (B2), which consists in a unitary part $\mathcal{U}_{\delta t}[\,\cdot\,] = e^{-iH\delta t}[\,\cdot\,]\,e^{iH\delta t}$ and a dissipative part (B10), is given by the superoperator
\[
\mathcal{W}_a(\delta t)[\rho] := \mathrm{Tr}_{\rm anc}\!\left[ (I_{\rm anc}\otimes e^{iHS\Delta t})\, V_a(\delta t)\,(I_{\rm anc}\otimes e^{-iHS\Delta t})\,(|0\rangle\langle 0|\otimes\mathcal{U}_{\delta t}[\rho])\,(I_{\rm anc}\otimes e^{iHS\Delta t})\, V_a(\delta t)^\dagger\,(I_{\rm anc}\otimes e^{-iHS\Delta t})\right]
\]
\[
= e^{iHS\Delta t}\,\mathrm{Tr}_{\rm anc}\!\left[ V_a(\delta t)\,\left( |0\rangle\langle 0|\otimes e^{-iHS\Delta t} e^{-iH\delta t}\rho\, e^{iH\delta t} e^{iHS\Delta t}\right) V_a(\delta t)^\dagger\right] e^{-iHS\Delta t}. \tag{B13}
\]
Since we sequentially apply $M$ steps of evolution $\mathcal{W}_a(\delta t)$, the operators $e^{iHS\Delta t}$ and $e^{-iHS\Delta t}$ cancel out, except at the first and last steps. Still, we can show that these two steps can also be discarded. For that, suppose we evolve an initial state $\rho(0)$ for an evolution time $M\delta t$ that is larger than the mixing time $t_{\rm mix}$. By definition of the mixing time, any initial state $\rho$ evolved for a time larger than $t_{\rm mix}$ approximates the Gibbs state $\sigma_\beta = e^{-\beta H}/\mathrm{Tr}[e^{-\beta H}]$. That is,
\[
\left(\prod_{i=1}^{M} e^{\delta t\gamma\,\mathcal{D}_{a_i}}\circ\,\mathcal{U}_{\delta t}\right)[\rho] \;\overset{\text{Trotterization}}{\approx}\; \left(\prod_{i=1}^{M}\mathcal{W}_{a_i}(\delta t)\right)[\rho] \approx \sigma_\beta. \tag{B14}
\]
Taking $\rho = \mathcal{U}_{-S\Delta t}[\rho(0)]$, and using the fact that $\sigma_\beta = \mathcal{U}_{S\Delta t}[\sigma_\beta]$ as $H$ and $\sigma_\beta$ commute, this implies
\[
\left(\mathcal{U}_{S\Delta t}\circ\prod_{i=1}^{M}\mathcal{W}_{a_i}(\delta t)\circ\,\mathcal{U}_{-S\Delta t}\right)[\rho(0)] = \left(\prod_{i=1}^{M}\widetilde{\mathcal{W}}_{a_i}(\delta t)\right)[\rho(0)] \approx \sigma_\beta, \tag{B15}
\]
where we have defined
\[
\widetilde{\mathcal{W}}_a(\delta t)[\rho] := \mathcal{U}_{S\Delta t}\circ\,\mathcal{W}_a(\delta t)\circ\,\mathcal{U}_{-S\Delta t}[\rho] = \mathrm{Tr}_{\rm anc}\!\left[ V_a(\delta t)\,(|0\rangle\langle 0|\otimes e^{-iH\delta t}\rho\, e^{iH\delta t})\, V_a(\delta t)^\dagger\right]. \tag{B16}
\]
Therefore, the evolution by $\widetilde{\mathcal{W}}_a[\rho]$, Eq. (B16), induces the same Lindblad evolution as the one by $\mathcal{W}_a[\rho]$, Eq. (B13), but does not require any of the unitary $e^{\pm iHS\Delta t}$ evolutions. The resulting quantum circuit is sketched as follows:
[Circuit sketch, Eq. (B17): an ancilla qubit prepared in $|0\rangle_{\rm anc}$ and the system register initialized in $\rho(0)$ are acted on by $e^{-iH\delta t}$ on the system followed by the dilation unitary $V_{a_i}(\delta t)$ on ancilla and system; the block is repeated for $i = 1, \ldots, M$, with the ancilla re-prepared in $|0\rangle_{\rm anc}$ at each step, producing $\rho(M\delta t)$.]
where $V_{a_i}(\delta t)$ is given by
[Circuit sketch, Eq. (B18): $V_{a_i}(\delta t)$ decomposes into the sequence of two-register gates $B^{a_i}_S, B^{a_i}_{S-1}, \ldots, B^{a_i}_{-S}, \ldots, B^{a_i}_S$ interleaved with the system evolutions $e^{-iH\Delta t}$ (forward sweep) and $e^{iH\Delta t}$ (backward sweep), as in Eq. (B12).]
In the circuit depiction of Eq. (B18), the jump operator $A_{a_i}$, used in the definition of any of the $B^{a_i}_s$ as per Eq. (B11), has been taken to be a one-local operator acting on the second system qubit from the top. More generally, these jump operators could act on more than one qubit, but in the present work we restrict them to be local, i.e. acting on $\mathcal{O}(1)$ qubits, to be aligned with the ETH. Hence, the cost of implementing any of the $V_{a_i}(\delta t)$ is dominated by the $2S$ steps of Hamiltonian simulation $\mathcal{U}_{\Delta t} = e^{-iH\Delta t}$. The Hamiltonian simulation of $\mathcal{U}_{\Delta t}$ and $\mathcal{U}_{\delta t}$ in Eq. (B15), e.g. again via Trotterization, induces an error which can be controlled by choosing an appropriately fine evolution step. We discuss this in more detail below Eq. (B42), where we summarize all algorithmic errors. In the following, we attempt to identify the choice of $S$.
3. Truncation and discretization of integral
So far, the circuit implementation presented was general to the evolution under the Lindbladian of Eq. (B1) together with the Lindblad operators obtained through the generic operator Fourier transform of Eq. (B4). We now specialize to our protocol and specify the Lindblad operator (B4). Let us recall our definition of the filter function [Eq. (A15)],
\[
g(t) = \frac{1}{\pi^{3/4}\beta^{1/2}}\, e^{it/\beta}\, e^{-2t^2/\beta^2}, \tag{B19}
\]
resulting in Lindblad operators of the form
\[
L_a = \int_{-\infty}^{\infty} dt\, g(t)\, A_a(t) = \left(\frac{\beta^2}{4\pi}\right)^{1/4}\sum_{\nu\in B_H} e^{-\frac{(\beta\nu+1)^2}{8}}\, A^a_\nu. \tag{B20}
\]
We prove the following lemma, which plays a key role in determining the complexity of the protocol.

Lemma 1. Let $g(t)$ be the filter function defined in Eq. (B19). Then, provided that
\[
S = \Omega\!\left(\beta\|H\|\,\sqrt{\log\frac{\beta}{\epsilon}}\right), \qquad \Delta t = \Theta\!\left(\frac{\beta}{\beta\|H\| + \sqrt{n + \log\frac{\beta}{\epsilon}}}\right) \tag{B21}
\]
in (B5), we have the regularization error
\[
\|L - \bar L\| = \mathcal{O}(\epsilon). \tag{B22}
\]
Proof. Introducing ¯
L:=P
s=−∞ tsgseiHstAeiH st, we separate the discretization and truncation errors,
L¯
L∥≤∥L¯
L+¯
L¯
L.(B23)
We start with bounding the truncation error ¯
L¯
Las follows:
¯
L¯
L A
π3/4β1/2tX
|s|>S
e2(st/β)2<2∆t
π3/4β1/2
X
s=S
e2S(∆t/β)2s=2∆t
π3/4β1/2
e2(St/β)2
1e2S(∆t/β)2=ϵ,(B24)
for
S= β
tslogt
βϵ!,(B25)
where we used A=O(1) assuming Ais O(1)-local.
Next, we bound the discretization error,
L¯
L
Z
−∞
dt g(t)A(t)t
X
s=−∞
g(st)A(st)
.(B26)
Using the Poisson summation formula,2we have,
t
X
s=−∞
g(st)A(st) = Z
−∞
dt g(t)A(t) + X
k=Z\{0}Z
−∞
dtei2πkt/tg(t)A(t)
=Z
−∞
dt g(t)A(t) + X
νBHX
k=Z\{0}Z
−∞
dtei2πkt/tg(t)eiν tAν,
(B28)
2For a function h(τ), the Poisson summation formula is given by,
X
s=−∞
h(s) =
X
k=−∞ Z
−∞
dτei2πkτ h(τ).(B27)
Setting h(s)=∆t g(st)A(st) in Eq. (B27), we arrive at Eq. (B28).
leading to
L¯
L 1
π3/4β1/2
X
νBH
AνX
|k|>0Z
−∞
dtei2πkt/teit/β e2t22eiνt
=β1/2
21/2π1/4
X
νBH
AνX
|k|>0
e1
82πβ
tkβν12
ν2H
β1/2|BH|∥A
21/2π1/4X
|k|>0
e1
82πβ
tk2βH∥−12
(2β)1/2|BH|∥A
π1/4
X
k=1
e1
82πβ
t2βH∥−12πβ
tk2βH∥−1
(2β)1/2|BH|∥A
π1/4
e1
82πβ
t2βH∥−12
1e1
82πβ
t2βH∥−12πβ
t.
(B29)
As the number of Bohr frequencies satisfies |BH| 4n, it suffices to choose
t=2πβ
q8log(2β)1/2|BH|∥A
π1/4ϵ+ 2βH+ 1
= Θ
β
βH+qn+ log β
ϵ
,(B30)
to ensure L¯
L ϵ. Combining Eqs. (B25) and (B30) we obtain Eq. (B21).
Denoting as ρi:=ρ(iδt) the state after isteps of ideal evolution, with ρ0=ρ(0), and by ¯ρi:=|00|anc ρiand
using Lemma 1, we bound the effect of the regularization error on the dissipative evolution (B7),
Tranc[eiδtγK a¯ρieiδ tγK a]Tranc[eiδtγ ¯
Ka¯ρieiδ ¯
Ka]
1
=
TranceiδtγK a¯ρieiδ tγK aeiδtγ ¯
Ka¯ρieiδ ¯
KaeiδtγK aeiδtγ Ka
1
(B32)
=
Zδtγ
0
dτTrancei(τδ)Ka[¯
KaKa,eiτ¯
Ka¯ρieiτ¯
Ka]ei(τδtγ)Ka
1
Zδtγ
0
dτTrancei(τδ)Ka[¯
KaKa,eiτ¯
Ka¯ρieiτ¯
Ka¯ρi]ei(τδ)Ka
1
+
Zδtγ
0
dτTrancei(τδ)Ka[¯
KaKa,¯ρi]ei(τδ)Ka
1
(B37)
=O(¯
KaKaδtγ).
(B31)
In the third line, we used the identity,
eitBeitC AeitC eitB A= i Zt
0
dτeiτB[BC, eiτ C Aeiτ C ]eiτB ,(B32)
for non-commuting operators A,B, and C. In the last inequality, we used
Trancei(τδ)Ka[¯
KaKa,eiτ¯
Ka¯ρieiτ¯
Ka¯ρi]ei(τδ)Ka
1=O(¯
KaKa · τ¯
Ka),
Trancei(τδ)Ka[¯
KaKa,¯ρi]ei(τδ)Ka
1=O(¯
KaKa · (τpδtγ)Ka),
(B33)
and ¯
Ka=Ka=Aa=O(1). Finally, combining Eq. (B6) and Lemma 1together with Eq. (B31), we find
Tranc[eiδtγK a¯ρieiδ tγK a]Tranc[eiδtγ ¯
Ka¯ρieiδ ¯
Ka]
1
=O(¯
KaKaδtγ) = O(¯
LaLaδtγ) = O(ϵδtγ).(B34)
4. Algorithmic errors
Combining all the previous results, we assess the resources needed for the Gibbs-state preparation protocol. We start by relating the overall error $\epsilon$ in the approximate Lindblad evolution algorithm to the total evolution time $t$, the number of performed steps $M$ (or, equivalently, the size $\delta t$ of these steps) and the error $\epsilon'$ resulting from the regularization of the integral appearing in App. B 3. We then provide bounds on the runtime of the Gibbs state preparation algorithm in terms of total Hamiltonian simulation time, and also the required number of steps of Hamiltonian simulation $e^{-iH\Delta t}$, as seen in Eq. (B18).
a. Lindblad simulation
The approximate Lindblad evolution incurs an error in the evolved state that is now quantified. The trace distance between the ideal $\rho(M\delta t) := e^{t\mathcal{L}}[\rho(0)]$ and the approximately evolved state $\mathbb{E}_{\{a_i\}}\bigl[\prod_{i=1}^{M}\mathcal{W}_{a_i}(\delta t)[\rho(0)]\bigr]$ (B13), with the average $\mathbb{E}_{\{a_i\}}[\,\cdot\,]$ taken over trajectories associated with the randomly sampled jump operators $\{A_{a_1}, \ldots, A_{a_M}\}$, is
\[
\left\|\rho(M\delta t) - \mathbb{E}_{\{a_i\}}\Bigl[\prod_{i=1}^{M}\mathcal{W}_{a_i}(\delta t)[\rho_0]\Bigr]\right\|_1. \tag{B35}
\]
Let us now start by recalling all the sources of errors contributing to Eq. (B35), using $\rho_i := \rho(i\delta t)$, $\rho_0 = \rho(0)$, and $\bar\rho_i := |0\rangle\langle 0|_{\rm anc}\otimes\rho_i$ as before. We need to account for the following:

Randomized applications of the jump operators,
\[
\left\| e^{\delta t\mathcal{L}}[\rho_i] - \mathbb{E}_a\bigl[ e^{\delta t\gamma\,\mathcal{D}_a}\circ\,\mathcal{U}_{\delta t}[\rho_i]\bigr]\right\|_1 = \mathcal{O}(\delta t^2), \tag{B36}
\]
according to Eq. (B3).

Dilation of the dissipative evolution,
\[
\left\| e^{\delta t\gamma\,\mathcal{D}_a}[\rho_i] - \mathrm{Tr}_{\rm anc}\bigl[ e^{-i\sqrt{\delta t\gamma}\,K_a}\bar\rho_i\, e^{i\sqrt{\delta t\gamma}\,K_a}\bigr]\right\|_1 = \mathcal{O}((\delta t\gamma)^2), \tag{B37}
\]
according to Eq. (B7), and noting that the unitary evolution $\mathcal{U}_{\delta t}$ does not affect the errors.

Truncation and discretization of the operator Fourier transform,
\[
\left\| \mathrm{Tr}_{\rm anc}\bigl[ e^{-i\sqrt{\delta t\gamma}\,K_a}\bar\rho_i\, e^{i\sqrt{\delta t\gamma}\,K_a}\bigr] - \mathrm{Tr}_{\rm anc}\bigl[ e^{-i\sqrt{\delta t\gamma}\,\bar K_a}\bar\rho_i\, e^{i\sqrt{\delta t\gamma}\,\bar K_a}\bigr]\right\|_1 = \mathcal{O}(\epsilon'\,\delta t\gamma), \tag{B38}
\]
according to Eq. (B34).

Trotterization,
\[
\left\| \mathrm{Tr}_{\rm anc}\!\left[ \overrightarrow{\prod_s} e^{-i\frac{\sqrt{\delta t\gamma}}{2}\bar K^a_s}\, \overleftarrow{\prod_s} e^{-i\frac{\sqrt{\delta t\gamma}}{2}\bar K^a_s}\,\bar\rho_i\,\left(\overleftarrow{\prod_s} e^{-i\frac{\sqrt{\delta t\gamma}}{2}\bar K^a_s}\right)^{\!\dagger}\left(\overrightarrow{\prod_s} e^{-i\frac{\sqrt{\delta t\gamma}}{2}\bar K^a_s}\right)^{\!\dagger}\right] - \mathrm{Tr}_{\rm anc}\bigl[ e^{-i\sqrt{\delta t\gamma}\,\bar K_a}\bar\rho_i\, e^{i\sqrt{\delta t\gamma}\,\bar K_a}\bigr]\right\|_1 = \mathcal{O}((\delta t\gamma)^2), \tag{B39}
\]
according to Eq. (B10).
We are now ready to combine all these errors. First, notice that at any step $i = 1, \ldots, M$ we have
\[
\left\|\rho_i - \mathbb{E}_{\{a_j\}}\Bigl[\prod_{j=1}^{i}\mathcal{W}_{a_j}(\delta t)[\rho_0]\Bigr]\right\|_1
= \left\|\rho_i - \mathbb{E}_{a_i}\bigl[\mathcal{W}_{a_i}(\delta t)[\rho_{i-1}]\bigr] + \mathbb{E}_{a_i}\Bigl[\mathcal{W}_{a_i}(\delta t)\Bigl[\rho_{i-1} - \mathbb{E}_{\{a_j\}}\Bigl[\prod_{j=1}^{i-1}\mathcal{W}_{a_j}(\delta t)[\rho_0]\Bigr]\Bigr]\Bigr]\right\|_1
\]
\[
\le \left\|\rho_i - \mathbb{E}_{a_i}\bigl[\mathcal{W}_{a_i}(\delta t)[\rho_{i-1}]\bigr]\right\|_1 + \left\|\rho_{i-1} - \mathbb{E}_{\{a_j\}}\Bigl[\prod_{j=1}^{i-1}\mathcal{W}_{a_j}(\delta t)[\rho_0]\Bigr]\right\|_1. \tag{B40}
\]
The last line is obtained through the triangle inequality and the fact that $\mathbb{E}_{a_i}[\mathcal{W}_{a_i}(\delta t)]$ is a quantum channel, together with the contractivity of the trace distance under quantum channels. After recursive use of this inequality, we can bound the total error through
\[
\left\|\rho(M\delta t) - \mathbb{E}_{\{a_i\}}\Bigl[\prod_{i=1}^{M}\mathcal{W}_{a_i}(\delta t)[\rho_0]\Bigr]\right\|_1
\le \sum_{i=1}^{M}\left\|\rho_i - \mathbb{E}_{a_i}\bigl[\mathcal{W}_{a_i}(\delta t)[\rho_{i-1}]\bigr]\right\|_1
\]
\[
\le \sum_{i=1}^{M}\Bigl( \left\|\rho_i - \mathbb{E}_{a_i}\bigl[ e^{\delta t\gamma\,\mathcal{D}_a}\circ\,\mathcal{U}_{\delta t}[\rho_{i-1}]\bigr]\right\|_1
+ \mathbb{E}_{a_i}\left\| e^{\delta t\gamma\,\mathcal{D}_{a_i}}\circ\,\mathcal{U}_{\delta t}[\rho_{i-1}] - \mathrm{Tr}_{\rm anc}\bigl[ e^{-i\sqrt{\delta t\gamma}\,K_a}\bar\rho_{i-1}\, e^{i\sqrt{\delta t\gamma}\,K_a}\bigr]\right\|_1
\]
\[
+ \mathbb{E}_{a_i}\left\| \mathrm{Tr}_{\rm anc}\bigl[ e^{-i\sqrt{\delta t\gamma}\,K_a}\bar\rho_i\, e^{i\sqrt{\delta t\gamma}\,K_a}\bigr] - \mathrm{Tr}_{\rm anc}\bigl[ e^{-i\sqrt{\delta t\gamma}\,\bar K_a}\bar\rho_i\, e^{i\sqrt{\delta t\gamma}\,\bar K_a}\bigr]\right\|_1
\]
\[
+ \mathbb{E}_{a_i}\left\| \mathrm{Tr}_{\rm anc}\bigl[ e^{-i\sqrt{\delta t\gamma}\,\bar K_a}\bar\rho_i\, e^{i\sqrt{\delta t\gamma}\,\bar K_a}\bigr] - \mathrm{Tr}_{\rm anc}\!\left[ \overrightarrow{\prod_s} e^{-i\frac{\sqrt{\delta t\gamma}}{2}\bar K^a_s}\, \overleftarrow{\prod_s} e^{-i\frac{\sqrt{\delta t\gamma}}{2}\bar K^a_s}\,\bar\rho_i\,\left(\overleftarrow{\prod_s} e^{-i\frac{\sqrt{\delta t\gamma}}{2}\bar K^a_s}\right)^{\!\dagger}\left(\overrightarrow{\prod_s} e^{-i\frac{\sqrt{\delta t\gamma}}{2}\bar K^a_s}\right)^{\!\dagger}\right]\right\|_1\Bigr)
\]
\[
\le \mathcal{O}(M\,\delta t^2 + \epsilon'\, M\,\delta t). \tag{B41}
\]
Hence, we can guarantee an error $\mathcal{O}(\epsilon)$ in the prepared state, after a total Lindblad evolution time $t = M\delta t$, provided that
\[
\epsilon' = \mathcal{O}\!\left(\frac{\epsilon}{t}\right), \qquad M = \mathcal{O}\!\left(\frac{t^2}{\epsilon}\right). \tag{B42}
\]
So far, we have not specified how to implement the coherent evolutions $\mathcal{U}_{\delta t}$ and $\mathcal{U}_{\Delta t}$ in Eqs. (21) and (B15). If we employ the second-order product formula with $r_\delta$ steps to approximately implement $\mathcal{U}_{\delta t}$, the error is $\mathcal{O}(\delta t^3/r_\delta^2)$. Similarly, applying the second-order product formula to $\mathcal{U}_{\Delta t}$ with $r_\Delta$ steps induces the error $\mathcal{O}(\Delta t^3/r_\Delta^2)$. Note that for each of the $M$ steps of size $\delta t$, $\mathcal{U}_{\Delta t}$ has to be applied $4S$ times in Eq. (B15). Thus, the error accumulated during the $M$ steps of the Lindblad evolution is
\[
\mathcal{O}\!\left(\frac{M\,\delta t^3}{r_\delta^2} + \frac{M\, S\,\Delta t^3}{r_\Delta^2}\right). \tag{B43}
\]
Choosing $r_\delta = \mathcal{O}(\sqrt{M\,\delta t^3/\epsilon})$ and $r_\Delta = \mathcal{O}(\sqrt{M\, S\,\Delta t^3/\epsilon})$ suffices to bound the error by $\epsilon$.
b. Gibbs state preparation
Provided a mixing time $t_{\rm mix} = M_{\rm mix}\delta t$, the total Hamiltonian simulation time to prepare the Gibbs state with an error $\epsilon$ is
\[
M_{\rm mix}\times\bigl(\delta t + \Theta(S\Delta t)\bigr) = \Theta\!\left( t_{\rm mix} + \frac{\beta\, t_{\rm mix}^2}{\epsilon}\sqrt{\log\frac{\beta\, t_{\rm mix}}{\epsilon}}\right), \tag{B44}
\]
to prepare an $\epsilon$-precise Gibbs state in trace distance. For the simulation time corresponding to the dissipative part, we used $S$ and $\Delta t$ satisfying Lemma 1 with $\epsilon'$ given by Eq. (B42). The resulting circuit, from Eqs. (B17) and (B18), uses a total of
\[
M_{\rm mix}\times\Theta(S) = \Theta\!\left(\frac{\beta\, t_{\rm mix}^2\,\|H\|}{\epsilon}\log\frac{\beta\, t_{\rm mix}}{\epsilon}\right) \tag{B45}
\]
applications of $B_s(I_{\rm anc}\otimes e^{-iH\Delta t})$ and $(I_{\rm anc}\otimes e^{iH\Delta t})B_s$.

For the purpose of the error analysis conducted in Sec. VII, we collect all the sources of algorithmic error along the Lindblad simulation for time $t_{\rm mix} = M_{\rm mix}\delta t$. Adding Eqs. (B36), (B37), (B38), (B39), and (B43), we have the total algorithmic error,
\[
t_{\rm mix}\times\mathcal{O}\!\left(\delta t + \frac{\Delta t}{\beta}\, e^{-2(T/\beta)^2} + \sqrt{\beta}\,|B_H|\, e^{-\frac{1}{8}\left(\frac{2\pi\beta}{\Delta t}-2\beta\|H\|-1\right)^2} + \frac{\delta t^2}{r_\delta^2} + \frac{T\,\Delta t^2}{\delta t\, r_\Delta^2}\right). \tag{B46}
\]
In Eq. (27) of the main text, we drop the second-to-last term $\propto\delta t^2$, as it is subleading relative to the first term $\propto\delta t$, and set $r_\Delta = 1$ to align with the setup adopted in the main text.
Appendix C: Convergence of Lindblad dynamics under the ETH
This appendix details the convergence analysis of the Lindblad dynamics towards the Gibbs state, assuming that the eigenstate thermalization hypothesis (ETH) holds. For this, we follow the strategy of [46] and compute the spectral gap of the ETH-averaged Lindbladian $\mathbb{E}_R\mathcal{L}$ and relate it, via a bound on the channel distance between $\mathbb{E}_R\mathcal{L}$ and the actual Lindbladian $\mathcal{L}$, to the mixing time of $\mathcal{L}$. Throughout this appendix we set $\gamma_a = 1/|A|$ in Eqs. (3) and (A13).

In Sec. C 1 we restate the ETH, and discuss the energy scales at play together with the assumptions made. In Sec. C 2 we introduce the density of states and summarize the properties relevant for our subsequent derivations. Following this, we compute the spectral gap of the averaged Lindbladian $\mathbb{E}_R\mathcal{L}$ and the distance between $\mathbb{E}_R\mathcal{L}$ and $\mathcal{L}$ in Secs. C 3 and C 4. Based on these derivations, we then prove in Sec. C 5 our final mixing time bound for the actual Lindbladian $\mathcal{L}$, and the closeness of the steady state of $\mathcal{L}$ to the target Gibbs state.
1. Eigenstate thermalization hypothesis
The ETH, originally proposed by Srednicki [44], hypothesizes that, for a Hamiltonian $H$, the matrix elements of a local operator $A$ in the eigenbasis $\{|E_i\rangle\}$ of $H$ are expressed as [Eq. (11)]
\[
A_{ij} := \langle E_i| A |E_j\rangle = A(\bar E_i)\,\delta_{ij} + \frac{f(\bar E_{ij}, \nu_{ij})}{\sqrt{D(\bar E_{ij})}}\, R_{ij}, \tag{C1}
\]
with $\bar E_{ij} := (E_i+E_j)/2$, such that $\bar E_i = E_i$ for all $i$, and $\nu_{ij} := E_i - E_j$. We will also sometimes use $D_{ij} := D(\bar E_{ij})$ and $f_{ij} := f(\bar E_{ij}, \nu_{ij})$. Both $A(E)$ and $f(E, \nu)$ are smooth functions of $E$ and $\nu$. We defer the discussion of the properties of the density of states $D(E)$ to Sec. C 2. $R$ is a Hermitian matrix with entries $R_{ij}$ whose real and imaginary parts are independent random variables that satisfy $\mathbb{E}_R[R_{ij}] = 0$ and $\mathbb{E}_R[|R_{ij}|^2] = 1$. Furthermore, its diagonal entries are set to $R_{ii} = 0$ for all $i$.

Given the ETH ansatz (C1) and properties of the random variables $R_{ij}$, one can verify several identities that will be useful later on when performing averages over entries of $R$. First, we have
\[
\mathbb{E}_R[A_{ij} A^*_{kl}] = \delta_{ij}\delta_{kl}\, A(\bar E_i) A(\bar E_k) + \delta_{ik}\delta_{jl}\,\frac{|f(\bar E_{ij}, \nu_{ij})|^2}{D(\bar E_{ij})}. \tag{C2}
\]
Furthermore, recalling that $A_\nu = \sum_{E_i-E_j=\nu}\Pi_j A\Pi_i$, with $\Pi_i$ the projector onto the Hamiltonian's eigenspace with energy $E_i$, one can verify that
\[
\mathbb{E}_R[A_{\nu_1}\rho A^\dagger_{\nu_2}] = \delta_{\nu_1\nu_2}\,\mathbb{E}_R[A_{\nu_1}\rho A^\dagger_{\nu_1}], \quad\text{and}\quad \mathbb{E}_R\bigl[\{A^\dagger_{\nu_1} A_{\nu_2}, \rho\}\bigr] = \delta_{\nu_1\nu_2}\,\mathbb{E}_R\bigl[\{A^\dagger_{\nu_1} A_{\nu_1}, \rho\}\bigr]. \tag{C3}
\]
To proceed with the concrete calculation of the mixing time, we make a couple of simplifying assumptions on the non-universal function $f(E, \nu)$.

Assumption 1. The function $f(E, \nu)$ in Eq. (C1) satisfies the following properties.
(a) $f(E, \nu) = f(\nu)$ is independent of $E$, and is supported on $\nu\in[-\Delta_{\rm RMT}, \Delta_{\rm RMT}]$, where it is flat.
(b) The width of the support scales with $\beta$ as $\Delta_{\rm RMT} = \Theta(1/\beta)$, and the function takes an $(n, \beta)$-independent constant value $f_0$ on the support.

To motivate the assumptions we note that an operator $A$, such that $\|A^2\| = \mathcal{O}(1)$, obeys $\mathrm{Tr}[\sigma_\beta A^2] \le \|A^2\| = \mathcal{O}(1)$. With the ETH, one can express the trace in the integral form,
\[
\mathrm{Tr}[\sigma_\beta A^2] = \int_{-\infty}^{\infty} d\nu\, e^{-\beta\nu/2}\,|f(\nu)|^2 = \mathcal{O}(1). \tag{C4}
\]
To ensure that the integral converges to a finite value, we need $|f(\nu)|\xrightarrow{\nu\to\infty} o(e^{-\beta\nu/4})$ [44, 86]. Assumption 1(b) provides a simple restriction on $f(\nu)$ so that this constraint is met. While we made specific choices of $\Delta_{\rm RMT}$ and
FIG. 9. Squared Gaussian filter function $\eta^2_\nu$ (A17) from the operator Fourier transform and the function $|f(\nu)|^2$ in the off-diagonal ETH (11). The relationship between their supports, $\Delta_E = \sqrt{2}/\beta = \Theta(\Delta_{\rm RMT})$ with $\Delta_{\rm RMT} = \Theta(1/\beta)$, guarantees significant overlap between the two functions.
|f(ν)|, any polynomial dependence of them on nalso leads to a polynomial upper bound on the mixing time. In the
following, we shall provide computational resources with and without Assumption 1(b).
We illustrate the relevant scales under Assumption 1(b) in Fig. 9. Given the above condition on |f(ν)|, we choose
the width of our filter function ην(A17) to be E= Θ(∆RMT) = Θ(1). This, in turn, leads to the filter being
peaked at ν=β2
E/2 = E/2. Figure 9illustrates that there is a significant overlap between the supports of η2
ν
and |f(ν)|2, which guarantees efficient transition of states upon applications of the jump operators. Quantitatively,
we can derive upper and lower bounds on the overlap integrals,
Γ:=ZRMT
0
dν η2
ν|f(ν)|2and Γ:=Z
−∞
dν η2
ν|f(ν)|2,(C5)
which will show up frequently in this appendix. First, we can relate these two integrals through
ΓΓ,(C6)
where we used the fact that fis only supported on ν[RMT,RMT ], from Assumption 1(a), and the profile of
ην(A17) that has most of its mass on the range [RMT,0]. Hence it is sufficient to derive bounds for Γ. For its
upper bound, we insert Eq. (A17) and obtain
Γβ
4πZRMT
RMT
dνe((βν)2+1)/4eβν/2|f(ν)|2β
4πZRMT
RMT
dνeβν/2|f(ν)|2(C4)
=O(β),(C7)
On the other hand, Assumption 1(b) allows one to lower bound the overlap,
Γβe(βRMT)2/4e1/4
4πZRMT
0
dν|f(ν)|2=e(βRMT)2/4βe1/4
4πRMT|f0|2= Θ(1),(C8)
using RMT = Θ(1) in the last equality.
2. Density of states
We introduce the density of states
\[
D(E) = \sum_{E_i\in\mathrm{spec}[H]}\tilde\delta(E - E_i). \tag{C9}
\]
Here $\tilde\delta(E - E_i)$ is a smeared delta function ensuring that $D(E)$ is a smooth function of $E$. The following assumption on the density of states guarantees that the ratios appearing in the conductance calculations in Sec. C 3 are of order 1.

Assumption 2 (Bounded ratio of density of states). The ratio of densities of states is uniformly bounded, such that for all $|E - E'|\le\Delta_{\rm RMT}$
\[
\frac{D(E)}{D(E')}\le R_D = \Theta(1). \tag{C10}
\]
We note that in [46], the spectrum of the Hamiltonian is truncated to exclude pathological cases that would invalidate this bound on the ratio close to the extremal values of the spectrum.

The normalized density of Gibbs states is defined by
\[
D_\beta(E)\propto e^{-\beta E} D(E), \tag{C11}
\]
with the normalization taken to satisfy $\int_{-\infty}^{\infty} dE\, D_\beta(E) = 1$. We require that the bulk of the density of Gibbs states is lower-bounded by a characteristic scale $1/\Delta_{\rm spec}$ and that the tails decay sufficiently fast. This is captured by the following assumption.

Assumption 3 (Density of Gibbs states). The (normalized) density of Gibbs states $D_\beta(E)$, defined in Eq. (C11), satisfies the following properties.
(a) There exists an interval $[E_L, E_R]$ that contains more than half the weight, such that $\int_{E_L}^{E_R} dE\, D_\beta(E)\ge 1/2$, and for all $E\in[E_L, E_R]$ we have
\[
D_\beta(E)\ge D_\beta^{\min} = \Omega(\Delta_{\rm spec}^{-1}). \tag{C12}
\]
(b) The right tail $[E_R, \infty)$ and left tail $(-\infty, E_L]$ decay such that for all $E\in[E_R, \infty)$ we have
\[
\int_{E}^{\infty} dE'\, D_\beta(E') = \mathcal{O}\bigl(\Delta_{\rm spec} D_\beta(E)\bigr), \tag{C13}
\]
and for all $E\in(-\infty, E_L]$ we have
\[
\int_{-\infty}^{E} dE'\, D_\beta(E') = \mathcal{O}\bigl(\Delta_{\rm spec} D_\beta(E)\bigr). \tag{C14}
\]
To gain intuition about these assumptions, it is instructive to think of the density of states as a Gaussian,
\[
D(E)\propto e^{-\frac{(E-\bar E)^2}{2\Delta_{\rm spec}^2}}, \tag{C15}
\]
where $\bar E := \mathrm{Tr}[H]/2^n$ is the energy at infinite temperature. Then the density of Gibbs states is a Gaussian shifted by $-\beta\Delta_{\rm spec}^2$,
\[
D_\beta(E) = \frac{1}{\sqrt{2\pi\Delta_{\rm spec}^2}}\, e^{-\frac{(E-\bar E+\beta\Delta_{\rm spec}^2)^2}{2\Delta_{\rm spec}^2}}. \tag{C16}
\]
Figure 10 illustrates those densities. It can be shown that those Gaussian densities fulfill Assumptions 2 and 3 [46]. For instance, the density (C16) indeed obeys Assumption 3(b),
Z
E
dEDβ(E) = 1
q2π2
spec Z
E
dEe(EE+β2
spec)2
2∆spec =1
πZ
|EE+β2
spec|/2∆2
spec
dueu2
1
2πw Z
w
du2ueu2w=|EE+β2
spec|/q2∆2
spec
=1
2πw Z
w2
dueu
=2
spec
|EE+β2
spec|Dβ(E)
specDβ(E).
(C17)
The last line follows from spec |EE+β2
spec|in the tail EER.
FIG. 10. Example of the density of states $D(E)$, centered at $\bar E$ with width $\Delta_{\rm spec}$, and the density of the Gibbs state $D_\beta(E)$, centered at $\bar E - \beta\Delta_{\rm spec}^2$, that we consider.
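As a sanity check of Assumption 3(b) for the Gaussian example (C15)–(C16), the following sketch evaluates the right-tail integral numerically and compares it with $\Delta_{\rm spec} D_\beta(E)$; all parameters and the chosen tail point are illustrative.

```python
# Sketch: Gaussian density of Gibbs states (C16) and a numerical check of the
# right-tail bound of Assumption 3(b), Eq. (C13).
import numpy as np

E_bar, D_spec, beta = 0.0, 3.0, 0.5
E = np.linspace(-25, 25, 20001)
D_beta = np.exp(-(E - E_bar + beta * D_spec**2)**2 / (2 * D_spec**2))
D_beta /= np.trapz(D_beta, E)                 # normalized density of Gibbs states

E_R = E_bar - beta * D_spec**2 + 2 * D_spec   # a point in the right tail
idx = E >= E_R
tail = np.trapz(D_beta[idx], E[idx])          # integral of the tail beyond E_R
# the tail is O(D_spec * D_beta(E_R)), cf. Eq. (C13)
print(tail, D_spec * D_beta[np.searchsorted(E, E_R)])
```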
3. Spectral gap of ERL
In this section, we derive a lower bound on the spectral gap of the Lindbladian $\mathbb{E}_R\mathcal{L}$ under the ETH average. After working out the action of the average Lindbladian on eigenstate projectors in Sec. C 3 a, we proceed in two steps: First, in Sec. C 3 b, we show that the average Lindbladian $\mathbb{E}_R\mathcal{L}$ can be mapped to a classical Markov chain admitting the Gibbs state as its exact steady state. In the second step, in Sec. C 3 c, we work out bounds on the conductance of the Markov chain, which allows us to bound its spectral gap through Cheeger's inequality. Subsequently, in Sec. C 4, we derive an upper bound on the distance between the Lindbladian $\mathcal{L}$ and the averaged one $\mathbb{E}_R\mathcal{L}$, which implies that the steady state $\rho_\infty$ of $\mathcal{L}$ is close to the Gibbs state $\sigma_\beta$.
a. Action of the averaged Lindbladian
We wish to evaluate the action of the averaged Lindbladian ERL, with Lgiven in Eq. (B1), onto any of the operators
|EiEj|. Setting γa= 1/|A|in Eq. (A13), or equivalently pa= 1/|A|and γ= 1 in Eq. (B1), together with our
choice of jump operators in Eq. (A16), we can write the dissipative part of our Lindbladian as
D[ρ] = 1
|A|X
aLaρLa1
2{LaLa, ρ}=1
|A|X
aX
ν12
ην1ην2Aa
ν1ρAa
ν21
2{Aa
ν1Aa
ν2, ρ}.(C18)
Taking an average over the random matrices elements appearing in Eq. (C1), and using Eq. (C3), leads to
ERD[ρ] = 1
|A|X
aX
ν
η2
νERhAa
νρAa
ν1
2{Aa
νAa
ν, ρ}i,(C19)
that has the form of the Davies generator [87,88]. Let us define Dij :=D(Eij) together with ηij :=ηνij and
fa
ij :=fa(νij). Recall that νij =EiEj. Using Eq. (C1), we can see that
X
ν
η2
νER[Aa
ν|EiEj|Aa
ν] = η2
0Aa(Ei)Aa(Ej)|EiEj|+δij X
k=i
η2
ki|fa
ki|2
Dki |EkEk|.(C20)
Similarly, for the second term appearing in Eq. (C19), we obtain
X
ν
η2
νER[{Aa
νAa
ν,|EiEj|}] =
η2
0Aa(Ei)2+Aa(Ej)2+X
ν=0
η2
ν |fa
i(i+ν)|2
Di(i+ν)
+|fa
j(j+ν)|2
Dj(j+ν)!
|EiEj|.(C21)
Finally, we note that [H, |EiEj|]=(EiEj)|Ei Ej|.
Putting everything together we can evaluate the action of ERLonto any of the |EiEj|operators. For the case
where i=jwe get:
ERL[|EiEi|] = 1
|A|X
aX
j=i
η2
ji |fa
ji |2
Dji
(|EjEj|−|EiEi|).(C22)
On the other hand for i=j, after some reordering, we obtain:
ERL[|EiEj|] =
i(EiEj)1
2|A|X
a
η2
0(Aa(Ei) Aa(Ej))2+X
m=i
η2
mi |fa
im|2
Dim
+X
l=j
η2
lj |fa
jl |2
Djl
|EiEj|.
(C23)
In the following, we will often need to bound the terms Pk=i
η2
ki|fki |2
Dki , for a fixed i, appearing in Eqs. (C22) and
(C23). To do so, we first approximate the sums as follows:
X
k=i
η2
ki|fki|2
Dki
=Z
−∞
dEkD(Ek)η2
ki |fki|2
D(Eki)=ZEi+∆RMT
EiRMT
dEkD(Ek)η2
ki |fki|2
D(Eki).(C24)
We first replaced the sum by an integral that holds for a smooth function g(E). Then, given that f(ν) is supported
on the interval [RMT,RMT ], we restricted the integral’s interval. Finally using the fact that the ratio of densities
is bounded through Assumption (2) and the definition of Γ in Eq. (C5), we get
Γ
RDX
k=i
η2
ki|fki|2
Dki RDΓ.(C25)
b. Mapping to a classical Markov chain
Here, we show that the average Lindbladian ERLdefines the generator of a classical Markov chain on the spectrum
of H. The transition rates qijbetween eigenstate projectors |EiEi|are readily obtained from Eq. (C22) as
qij:= Tr [|EjEj|ERL[|Ei Ei|]] = η2
ji |fji |2
Dji δij X
k=i
η2
ki|fki|2
Dki
,(C26)
where for the sake of simplicity we assumed that fa=ffor any aA. Moving to the off-diagonal elements |EiEj|,
we saw from Eq. (C23) that these were eigenvectors of ERL. From Eq. (C25), the minimum (in magnitude) of the
real part of these eigenvalues is lower bounded through
off := min
ij
1
2|A|X
a
η2
0(Aa(Ei) Aa(Ej))2+X
m=i
η2
mi |fa
im|2
Dim
+X
l=j
η2
lj |fa
jl |2
Djl
Γ
RD
.(C27)
Under the time evolution with etERL, the imaginary part of the eigenvalues results in a phase, while the real part
leads to a decay of the amplitude on the off-diagonal operators |EiEj|. Later on, we will discuss the timescales of
such decay, but for now focus on the classical Markov chain defined by Eq. (C26), which is just the restriction of ERL
to eigenstate projectors. This converges to the Gibbs state σβ, since ERLis σβ-DB, as discussed in Sec. III B, and
the mixing time of the chain is controlled by the transition rates (C26).
We shift and rescale the transition rates qijto obtain a stochastic transition matrix of the classical Markov chain:
Pij:=qij+ij
r,with r:= max
iX
k=i
η2
ki|fki|2
Dki
.(C28)
To see that with such choice Pijis indeed stochastic3we first note that the transition rates (C26) satisfy Pjqij= 0.
Hence, PjPij= 1 for any choice of r > 0. It remains to show that the chosen rensures that any Pij0. This
can be seen by noting that ris equal to maxiqiiand that the qiiare the only negative transition rates. Finally,
according to Eq. (C25), we readily see that ris bounded through
Γ
RDrRDΓ,(C29)
3We say Pijis stochastic if it satisfies PjPij= 1 for all iand any Pij0.
FIG. 11. Sketch of the energy intervals, transitions, and density of states that contribute to the conductance from $S$ to $\bar S$ (intervals $S_\ell, \ldots, S_r$ of width $\Delta_{\rm RMT}$ covering $[E_\ell, E_r]\subset[E_L, E_R]$, boundary transitions $P_{\ell-1\to\ell}$ and $P_{r\to r+1}$, and the density of Gibbs states $D_\beta(E)$ centered at $\bar E - \beta\Delta_{\rm spec}^2$).
Given the spectral gap $\Omega_P$ of the Markov chain with transition matrix $P$, the gap of the diagonal part of $\mathbb{E}_R\mathcal{L}$ is $\Omega_{\rm diag} = r\,\Omega_P$, and we now wish to bound $\Omega_P$. To this end, we relate the gap of the Markov chain with its conductance $\Phi$, defined as
\[
\Phi(S) := \frac{Q(S, \bar S)}{\pi_S}, \qquad Q(S_1, S_2) := \sum_{i\in S_1,\, j\in S_2}\pi_i\, P_{i\to j}. \tag{C30}
\]
Here, $S\subset\mathrm{spec}(H)$ refers to a subset of the spectrum of $H$ and $\bar S$ to its complement. We further defined the probability $\pi_S := \sum_{i\in S}\pi_i$ for the stationary distribution $\pi_i$ of the Markov chain. This stationary distribution $\pi_i$ satisfies the detailed balance condition given by $\pi_i P_{i\to j} = \pi_j P_{j\to i}$ for any $i, j\in\mathrm{spec}(H)$. Defining the minimum conductance (also called bottleneck ratio) as
\[
\phi := \min_{\pi_S\le 1/2}\frac{Q(S, \bar S)}{\pi_S}, \tag{C31}
\]
the spectral gap $\Omega_P$ is bounded via Cheeger's inequality as
\[
\frac{\phi^2}{2}\le\Omega_P\le 2\phi. \tag{C32}
\]
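The mapping to a classical Markov chain can be made concrete on a toy spectrum. The sketch below builds transition rates from the filter (A17) with a flat $f$ on $[-\Delta_{\rm RMT}, \Delta_{\rm RMT}]$ (for simplicity the density-of-states factor and $R_D$ bounds are omitted, which leaves detailed balance with the Gibbs distribution intact), rescales them into the stochastic matrix of Eq. (C28), and compares the resulting spectral gap with Cheeger-type estimates (C32) evaluated over contiguous energy cuts; all parameters are illustrative.

```python
# Sketch: classical Markov chain of Eq. (C28) on a toy spectrum, its spectral
# gap, and Cheeger estimates from contiguous cuts.
import numpy as np

rng = np.random.default_rng(3)
n_lev, beta, D_rmt = 60, 1.0, 1.0
E = np.sort(rng.uniform(-3, 3, n_lev))              # toy spectrum

def eta2(nu):                                        # squared filter, Eq. (A17)
    return np.sqrt(beta**2 / (4 * np.pi)) * np.exp(-(beta * nu + 1)**2 / 4)

nu = E[None, :] - E[:, None]                         # nu[i, j] = E_j - E_i
f2 = (np.abs(nu) <= D_rmt).astype(float)             # flat |f|^2 on the support
q = eta2(nu) * f2
np.fill_diagonal(q, 0.0)
q[np.diag_indices(n_lev)] = -q.sum(axis=1)           # generator rows sum to zero

r = np.max(-np.diag(q))
P = q / r + np.eye(n_lev)                            # stochastic matrix, Eq. (C28)

pi = np.exp(-beta * E); pi /= pi.sum()               # Gibbs stationary distribution
A = np.diag(np.sqrt(pi)) @ P @ np.diag(1 / np.sqrt(pi))   # similar, symmetric by DB
gap = 1 - np.sort(np.linalg.eigvalsh((A + A.T) / 2))[-2]

phi = np.inf                                         # (C31) over contiguous cuts only
for k in range(n_lev - 1):
    S = np.arange(k + 1)
    piS = pi[S].sum()
    Q = (pi[S, None] * P[S][:, k + 1:]).sum()
    phi = min(phi, Q / min(piS, 1 - piS))

print(f"gap = {gap:.4f}, contiguous-cut Cheeger estimates [{phi**2/2:.4f}, {2*phi:.4f}]")
```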
c. Conductance calculation
In the following we aim at bounding the conductance (C31). For that, we introduce a set of non-overlapping energy
intervals {Si}i, each of which has size RMT as sketched in Fig. 11. The intervals Siare labeled in increasing order
of energies, i.e., Ei< Ei+1 for any EiSiand Ei+1 Si+1. Furthermore, we assume for now that the subset S
appearing in the definition of the conductance has the form SS+1 ·· · Srsupported on the energy interval
[E, Er], such that πS1/2. That is, we assume that Sis contiguous over the energy spectrum. When Sis not
contiguous, we can decompose it into contiguous subsets and combine their conductance bounds, as discussed later.
We divide the analysis of the conductance into four cases [46] that cover all the possible locations of Eland Erwith
respect to the interval [EL, ER] that was defined in Assumption 3.
a. Case Er[EL+ RMT, ERRMT].The conductance is lower-bounded with Assumptions 2and 3as
Φ(S)2Q(S, ¯
S)2(πSrPrr+1 +πSP1)2πSrPrr+1
=2
rX
ErSr,Er+1Sr+1
πErη2
r,r+1 |fr,r+1|2
Dr,r+1
=2
rZEr
Er1
dE Dβ(E)ZEr+1
Er
dED(E)η2
EE|f(EE)|2
DE+E
2
Ass. 2
2
rRDZEr
Er1
dE Dβ(E)ZEr+1
Er
dEη2
EE|f(EE)|2
Ass. 3(a)
2Dmin
β
rRDZEr
Er1
dEZEr+1
Er
dEη2
EE|f(EE)|2
ν=EE
=2Dmin
β
rRDZEr
Er1
dEZEr+1E
ErE
dν η2
ν|fν|2
µ=ErE
=2Dmin
β
rRDZRMT
0
dµZRMT+µ
µ
dν η2
ν|fν|2
2Dmin
β
rRDZRMT
0
dµZRMT
0
dν η2
ν|fν|2
2Dmin
βeβRMT RMT
rRDZRMT
0
dν η2
ν|fν|2
(C29)
= eβRMT RMT
spec .(C33)
The derivations of such bounds only account for energy-increasing transitions, as seen in the first line of the derivations.
b. Case E[EL+ RMT, ERRMT].Following similar steps, the conductance is lower-bounded as
Φ(S)2πSP1=2
rX
ES,E1S1
πEη2
1,ℓ |fℓ,ℓ1|2
DE+E1
2RMT
spec .(C34)
Note that the energy-increasing conductance bound (C33) is smaller than the energy-decreasing one (C34).
c. Case Er(−∞, EL].Under Assumption 3,Q(S, ¯
S) is lower-bounded as,
Q(S, ¯
S)πSrPrr+1 1
rmin
ErSr
RMTDβ(Er)eβRMT ZRMT
0
dνη2
ν|f(ν)|2.(C35)
and
πSZEr
−∞
dEDβ(E) = OspecDβ(Er).(C36)
Thus, the conductance is bounded as
Φ(S) = Q(S, ¯
S)
πSeβRMT RMT
spec .(C37)
d. Case E[ER,).Following similar steps as for the previous case leads to the same lower bound (C34).
To deal with unions of contiguous energy sets, let us first consider the case with two such sets, S1and S2, that are
not contiguous to each others (and thus that do not overlap). We wish to relate the conductance Φ(S12 :=S1S2)
to the individual conductances Φ(S1) and Φ(S2). To do so, we note that given Assumption 1(a), transitions Pij
between any iS1to any jS2are suppressed such that Q(S12,¯
S12) = Q(S1,¯
S1) + Q(S2,¯
S2). Hence
Φ(S12) = Q(S1,¯
S1) + Q(S2,¯
S2)
πS1+πS2min{Φ(S1),Φ(S2)}.(C38)
This can extended to unions of more than two non-contiguous sets, and shows that taking such union cannot decrease
the conductance. Overall, accounting for all the cases studied, we get that the conductance is lower-bounded as
Φ(S)eβRMT RMT
spec ,(C39)
which thus gives the minimum conductance, i.e.,
ϕeβRMT RMT
spec .(C40)
Therefore, using Cheeger’s inequality (C32) together with the relationship between the classical Markov chain and
the diagonal part of the averaged Lindbladian (C28), we obtain the lower-bound for the spectral gap
diag 2
2= e2βRMT 2
RMTΓ
2
spec .(C41)
Using RMT = Θ(1) (Assumption 1(b)) and the lower-bound on the overlap integrals (C8), we get
diag = 1
2,(C42)
Here, we further used a standard scaling of the spectral width of a local Hamiltonian, spec =O(n) [8991]. Finally,
comparing the scaling of Eq. (C42) to one of the eigenvalues of the off-diagonal part of the averaged Lindbladian (C27),
we conclude that diag provides the lower bound of the spectral gap of the average Lindbladian,
ERL= e2βRMT 2
RMTΓ
2
spec Ass.1(b)
= 1
2.(C43)
4. Concentration: upper bound of ∥L ERL∥σ1
β,22
We derive an upper bound on the operator distance between the Lindbladian Land its ETH average ERL. A
convenient distance measure for our purposes is the weighted induced quantum channel norm · σ1
β,22[53], which
is defined by
∥C∥σ1
β,22:= max
X
∥C[X]σ1
β,2
Xσ1
β,2
,with Xσ1
β,2:=qX, Xσ1
β
(4)
=qTr[Xσ1/2
βXσ1/2
β],(C44)
for a quantum channel Cand the maximization is performed over Hermitian operators X. Later, this norm will allow
us to bound the trace distance between the steady states ρand σβ.
Note that the ETH average does not affect the system Hamiltonian, such that we can simplify
∥L ERL∥σ1
β,22=∥D ERD∥σ1
β,22.(C45)
where Dis the dissipative part of L, which form we recall here:
D[ρ] = 1
|A|X
aLaρLa1
2{LaLa, ρ}=1
|A|X
aX
ν12
ην1ην2Aa
ν1ρAa
ν21
2{Aa
ν1Aa
ν2, ρ}.(C46)
Defining K[·]:=σ1/4
βD[σ1/4
β·σ1/4
β]σ1/4
β, and Kits vectorization, we can verify that4
∥D ERD∥σ1
β,22=∥K ERK∥22=KERK.(C48)
4Let δD:=D ERDand δK:=K ERK. The first equality in (C48) follows from
δK∥22= max
XδK[X]2
X2
= max
X
σ1/4
βδD[σ1/4
β1/4
β]σ1/4
β2
X2
= max
˜
X
σ1/4
βδD[˜
X]σ1/4
β2
σ1/4
β˜
1/4
β2
= max
˜
X
δD[˜
X]σ1
β,2
˜
Xσ1
β,2
(C47)
with ˜
X:=σ1/4
β1/4
β. In the present work, the maximization of X(˜
X) to define the induced norm is restricted to Hermitian operators.
We note that, for any σβ-DB dissipative channel D, the channel Kis self-adjoint with respect to the Hilbert-Schmidt
inner product for all bounded operators. Furthermore K, the vectorization of K, is given by5
K=1
|A|X
aX
µ,ν
ηµηνeβ
4(µ+ν)(Aa
ν)Aa
µeβ
4(µν)
2I(Aa
µ)Aa
νeβ
4(µν)
2(Aa
ν)(Aa
µ)I,(C50)
where Ais the transpose of A.
Our strategy to bound Eq. (C48) is to partition the energy spectrum into a set of non-overlapping energy intervals
{Si}i(Fig. 11) using the projectors
ΠSi:=X
Ei[iRMT,(i+1)∆RMT ]|EiEi|.(C51)
We start with the terms involving AA. For KAA:=PaPµ,ν ηµηνeβ
4(µ+ν)((Aa
ν)Aa
µER[(Aa
ν)Aa
µ]) and
an even integer p,
ER
SkΠSl)∆KAASiΠSj)
p
ER
SkΠSl)∆KAASiΠSj)
p
ER
SkΠSl)∆KAA ¯
E1ΠSj)
p
p
(C53)
ER
SkΠSl)1
|A|X
aX
µ,ν
eβ
4(µ+ν)ηµην((Aa
ν)Aa
µ(Aa
ν)Aa
µ)(ΠSiΠSj)
p
p
(C54)
=ER
SkΠSl)2
|A|X
aX
µ,ν
eβ
4(µ+ν)ηµην(Ba
ν)Ba
µSiΠSj)
p
p
(C55)
ER
2
|A|X
a
Ga
Sk,SiGa
Sl,Sj
p
p
(C56)
2
p|A|!p
ERGSk,Sip
pERGSl,Sjp
p
(C58)
2
p|A|!pmax{Tr[ΠSk],Tr[ΠSi]}ER[|GSk,Si|2] max{Tr[ΠSl],Tr[ΠSj]}ER[|GSl,Sj|2]p/2.(C52)
In the third line, we used that, for the zero-mean random matrix X:= (Aa
ν)Aa
µE[(Aa
ν)Aa
µ]
ERXp
pERXXp
p,(C53)
where Xand Xare i.i.d. random matrices (all the diagonal entries are zero). In the fourth line, we wrote A=
(B+B)/2 and A= (BB)/2 with zero-mean random matrices Band B. Then, it follows
AAA′∗ A=BB+B′∗ B. (C54)
In the fifth line, we defined the zero-mean random matrices G¯
F , ¯
Ewhose entries all have the same variance,
ER[|(Ga
S,S )ij |2] = max
FS,ES(e β
4(FE)ηFE)2ER[|(Ba
FE)ij |2] = max
FS,ES(e β
4(FE)ηFE)2|f(FE)|2
DF+E
2.(C55)
5The vectorization of Dis
D=1
|A|X
aX
µ,ν
ηµην(Aa
ν)Aa
µ1
2I(Aa
µ)Aa
ν1
2(Aa
ν)(Aa
µ)I.(C49)
Then, the vectorization of Kis K= ((σ1/4
β)σ1/4
β)D((σ1/4
β)σ1/4
β).
In the sixth line, the Gaussian matrices are decoupled as
ER
X
a
caGaGa
p
p
=X
a1,...,ap
ca1c
a2. . . cap1c
apERTr[Ga1Ga2. . . Gap1Gap]ERTr[Ga1Ga2. . . Gap1Gap]
X
a|ca|2p/2pairs
X
a1,...,ap
ERTr[Ga1Ga2. . . Gap1Gap]ERTr[Ga1Ga2. . . Gap1Gap]
X
a|ca|2p/2pairs
X
a1,...,ap
ERTr[Ga1Ga2. . . Gap1Gap]
pairs
X
b1,...,bp
ERTr[Gb1Gb2. . . Gbp1Gbp]
X
a|ca|2p/2ERGp
pERGp
p.
(C56)
The quantity ERTr[Gb1. . . Gbp] is non-vanishing upon averaging over independent random matrices Ror equivalently
Gaonly when all the indices make pairs. Accordingly, the sum Ppairs
a1,...,aponly runs over such indices. The last inequality
holds because ERTr[Ga1Ga2. . . Gap1Gap]0 holds when the indices are contracted. The seventh line follows from
Fact D.1 in [46]. For a d1×d2matrix Gof the maximum variance VarG,
ERGp
pmin{d1, d2}Varp/2
Gc1pmax{d1, d2}p+ (c2p)p,(C57)
for constants c1, c2. Setting psuch that min{d1, d2}1/p 1/c1, we have
(ERGp
p)1/p pVarG·max{d1, d2}.(C58)
Therefore, we can upper-bound the fluctuation in each eigensector,
ER
SkΠSl)∆KAASiΠSj)
RMTRD
p|A|rmax
ESkSi
D(E) max
FSlSj
D(F)
×max
EkSk,EiSi
eβ
4(EkEi)ηEkEi|f(EkEi)|
qDEk+Ei
2max
ElSl,EjSj
eβ
4(ElEj)ηElEj|f(ElEj)|
qDEl+Ej
2
RMTR2
D
p|A|eβ
4(ˆ
Ekˆ
Ei)ηˆ
Ekˆ
Ei|f(ˆ
Ekˆ
Ei)| · eβ
4(ˆ
Elˆ
Ej)ηˆ
Elˆ
Ej|f(ˆ
Elˆ
Ej)|,(C59)
where ˆ
Ek,ˆ
Eiis the pair of energies solving the first maximization of the third line, and ˆ
El,ˆ
Ejsolves the second
maximization of the third line. Recalling Assumption 2, for the second line we used that the number of states in each
energy interval can be expressed as
Tr[ΠSi]D(Ei)RDRMT,for all EiSi.(C60)
The last line in Eq. (C59) holds only if ˆ
EiSiand ˆ
EkSkare close, which is true because f(ˆ
Ekˆ
Ei) vanishes for
|ˆ
Ekˆ
Ei|>RMT (Assumption 1(a)). Same for ˆ
Ejand ˆ
El.
Now, we add all the energy eigensectors.
ERKAA
=ER
X
i,j,k,l
SkΠSl)∆KAASiΠSj)
X
m,n
ER
X
i,j
Si+mΠSj+n)∆KAASiΠSj)
X
m,n
max
i,j
ER
Si+mΠSj+n)∆KAASiΠSj)
(C59)
RMTR2
D
p|A|X
m
max
ieβ
4(ˆ
Ei+mˆ
Ei)ηˆ
Ei+mˆ
Ei|f(ˆ
Ei+mˆ
Ei)|2
RMTR2
D
p|A|X
m
eβ
4(ˆ
Eˆı+mˆ
Eˆı)ηˆ
Eˆı+mˆ
Eˆı|f(ˆ
Eˆı+mˆ
Eˆı)|2(C61)
RMTR2
D
p|A|X
m
eβˆνm/4ηˆνm|fνm)|2(C62)
Ass.2
=O 1
RMTp|A|Z
−∞
dνeβν/4ην|f(ν)|2!where we used RMT X
mZdν
(C64)
=O 1
RMTp|A|!.(C63)
In the inequality (C61), we defined ˆı(m):= argmax
i
eβ
4(ˆ
Ei+mˆ
Ei)ηˆ
Ei+mˆ
Ei|f(ˆ
Ei+mˆ
Ei)|, and suppressed its m-
dependence for the notational simplicity. In the inequality (C62), ˆνmis given by νmSi{Ei+mEi|EiSi, Ei+m
Si+m}that maximizes eβνm/4ηνm|f(νm)|. The last equality follows from the Cauchy-Schwarz inequality,
Z
−∞
dνeβν/4ην|f(ν)|2
Z
−∞
dν η2
ν
|{z }
=1
Z
−∞
dνeβν/2|f(ν)|2(C4)
=O(1).(C64)
Next, we bound the norm of KIAA :=PaPµ,ν ηµηνI(Aa
µ)Aa
νER[(Aa
µ)Aa
ν].
ER
SkΠSl)∆KIAASiΠSj)
p
=ER
SkΠSl)∆KIAASiΠSj)
p
p
ER
ΠSl
1
|A|X
aX
µ,ν
eβ
4(µν)ηµην(Aa
µ)Aa
ν(Aa
µ)Aa
νΠSj
p
p
=ER
2
|A|X
aX
µ,ν
eβ
4(µν)ηµηνΠSl(Ba
µ)Ba
νΠSj
p
p
ER
2
|A|X
aX
h
GSl,ShGSh,Sj
p
p
(C66)
2
p|A|!p
ER
X
h
GSl,ShGSh,Sj
p
p
(C67)
2
p|A|!pX
h
max{Tr[ΠSl],Tr[ΠSh]}ER[|GSl,Sh|2] max{Tr[ΠSh],Tr[ΠSj]}ER[|GSh,Sj|2]p/2.(C65)
In the second last inequality, the Gaussian matrices are decoupled as
ER
X
a
caGaGa
p
p
=X
a1,...,ap
ca1c
a2. . . cap1c
apERTr[Ga1Ga1Ga2Ga2. . . Gap1Gap1GapGap]
X
a|ca|2p/2pairs
X
a1,...,ap
ERTr[Ga1Ga1Ga2Ga2. . . Gap1Gap1GapGap]
=X
a|ca|2p/2ERG1G1p
p2.
(C66)
The sum Ppairs
a1,...,aponly runs over the indices that consist only of pairs. The last inequality holds because
ERTr[Ga1Ga2. . . Gap1Gap]0 holds when all the indices are paired. The last inequality follows from,
ER
GGp
pd2ER
Gp
·ER
Gp
(C58)
pVarGVarG·max{d1, d2}max{d2, d3},(C67)
for a d1×d2matrix Gand a d2×d3matrix G. The even integer pis chosen so that d1/p
21.
Therefore, we can upper-bound the fluctuation in each eigensector,
ER
¯
F1ΠSl)∆KIAA ¯
E1ΠSj)
RMTRD
p|A|X
Shrmax
ESlSh
D(E) max
FShSj
D(F)
×max
ElSl,EhSh
eβ
4(ElEh)ηElEhfEl+Eh
2, ElEh
qDEl+Eh
2max
EhSh,EjSj
eβ
4(EhEj)ηEhEjfEh+Ej
2, EhEj
qDEh+Ej
2
=RMTR2
D
p|A|X
h
eβ
4(ˆ
Elˆ
Eh)ηˆ
Elˆ
Eh|f(ˆ
Elˆ
Eh)|eβ
4(ˆ
Ehˆ
Ej)ηˆ
Ehˆ
Ej|f(ˆ
Ehˆ
Ej)|,(C68)
where ˆ
Eh,ˆ
Ej,ˆ
Elare the energies solving the maximization of the third line.
Summing the contributions from all the energy eigenspaces, we find
ERKIAA
=ER
X
j,l
(IΠSl)∆KIAA(IΠSj)
X
m
ER
X
j
(IΠSj+m)∆KIAA(IΠSj)
X
m
max
j
ER
(IΠSj+m)∆KIAA(IΠSj)
(C68)
RMTR2
D
p|A|X
m,h
eβ
4(ˆ
Eˆȷ+mˆ
Eh)ηˆ
Eˆȷ+mˆ
Eh|f(ˆ
Eˆȷ+mˆ
Eh)|eβ
4(ˆ
Ehˆ
Eˆȷ)ηˆ
Ehˆ
Eˆȷ|f(ˆ
Ehˆ
Eˆȷ)|(C69)
=RMTR2
D
p|A|X
m,h
eβ
4(ˆ
Eˆȷ+h+mˆ
Eˆȷ+h)ηˆ
Eˆȷ+h+mˆ
Eˆȷ+h|f(ˆ
Eˆȷ+h+mˆ
Eˆȷ+h)|eβ
4(ˆ
Eˆȷ+hˆ
Eˆȷ)ηˆ
Eˆȷ+hˆ
Eˆȷ|f(ˆ
Eˆȷ+hˆ
Eˆȷ)|
RMTR2
D
p|A|X
h
eβˆνm/4ηˆνm|fνm)|X
m
eβˆνh/4ηˆνh|fνh)|(C70)
Ass.2
=O 1
RMTp|A|Z
−∞
dνeβν/4ην|f(ν)|2!where we used RMT X
hZdν
(C64)
=O 1
RMTp|A|!.(C71)
In the inequality (C69), ˆȷ(m) solves the maximization, ˆȷ(m):= argmax
jPheβ
4(ˆ
Ej+mˆ
Eh)ηˆ
Ej+mˆ
Eh|f(ˆ
Ej+m
ˆ
Eh)|eβ
4(ˆ
Ehˆ
Ej)ηˆ
Ehˆ
Ej|f(ˆ
Ehˆ
Ej)|, and its m-dependence is suppressed for the notational simplicity. In the inequal-
ity (C70), ˆνxfor an integer xis given by νxSi{Ei+xEi|EiSi, Ei+xSi+x}that maximizes eβνx/4ηνx|f(νx)|.
Similarly, one can obtain the bound
ERKAAI O 1
RMTp|A|!.(C72)
Overall, we have
ER∥D ERD∥σ1
β,22=ERKAA+ KIAA + KAAI O 1
RMTp|A|!(C73)
Furthermore, a careful concentration analysis in [46] together with Eq. (C45) shows that
∥L ERL∥σ1
β,22=∥D ERD∥σ1
β,22 O 1
RMTp|A|!(C74)
holds with the probability exponentially close to 1 with respect to n. Using RMT = Θ(1), we can simplify
Eq. (C74) to
∥L ERL∥σ1
β,22=O β
p|A|!.(C75)
5. Bounding the convergence to the Gibbs state
In the previous sections C 3 and C 4 we showed that the gap of the average Lindbladian $\mathbb{E}_R\mathcal{L}$ is lower-bounded by an inverse polynomial in $\beta$ and the system size $n$, whereas the channel distance between the actual Lindbladian $\mathcal{L}$ and its average $\mathbb{E}_R\mathcal{L}$ is upper-bounded by the inverse square root of the number of jump operators $|A|$. In this section we will use both results to polynomially bound the spectral gap of the true Lindbladian $\mathcal{L}$ and to show a high convergence accuracy of the Lindblad evolution, in the sense that its exact steady state $\rho_\infty$ is close to the target Gibbs state $\sigma_\beta$, which is the steady state of $\mathbb{E}_R\mathcal{L}$.
Let us start with the spectral gap. From the gap ERLof the ETH-averaged Lindbladian (C43) and the channel
distance (C75), we obtain a lower bound on the gap Lof the Lindbladian L=ERL+ (L ERL) via
L= e2βRMT 2
RMTΓ
2
spec O 1
RMTp|A|!.(C76)
Hence, there exists a number of jump operators,
|A|= e4βRMT 4
spec
6
RMTΓ2!,(C77)
which allows the spectral gap (C76) to be lower bounded as
L= e2βRMT 2
RMTΓ
2
spec .(C78)
Finally, the lower bound of the spectral gap is turned into an upper bound of the mixing time,
tmix βH+ log 1
ϵ× O e2βRMT2
spec
2
RMTΓ!(C79)
38
Under Assumption 1(b) and spec 1/n, we find the spectral gap (C78), the sufficient number of jump opera-
tors (C77), and the mixing time (C79),
L= 1
2,|A|= Ω(n2β6), tmix =O2βH+ log(1).(C80)
Let us now come to the distance between Gibbs state and steady state of L. The inequality (C74) allows us to
bound the χ2divergence, σβρσ1
β,2, between Gibbs state and the steady state ρof L, which in turn, bounds
the trace distance, σβρ1 σβρσ1
β,2[53]. Let us simplify notation, L=ERL, and let σand σbe states
such that L[σ] = L[σ] = 0. Then
σσσ1,2=etL[σ]etL[σ]σ1,2 etL[σσ]σ1,2+(etLetL)[σ]σ1,2.(C81)
To bound the second term, we note
etLetLσ1,22tZ1
0
dsestL(L−L)e(1s)tLσ1,22
tZ1
0
dsestLσ1,22· ∥L Lσ1,22· e(1s)tLσ1,22
t∥L Lσ1,22,
(C82)
where in the last inequality we used
exL[ρ]σ1,2=exL[ρσ]σ1,2+ 1 ρσσ1,2+ 1 = ρσ1,2(C83)
for x0, leading to
exL[ρ]σ1,22= max
ρexL[ρ]σ1,2
ρσ1,21.(C84)
The same inequality holds for L. Thus, the second term in Eq. (C81) is bounded by
(etLetL)[σ]σ1,2t∥L Lσ1,22· σσ1,2.(C85)
Note that σσ1,2can be evaluated as follows:
σσ1,2=σσ1,2+σσσ1,2= 1 + σσσ1,2.(C86)
To bound the first term in (C81), we set t= 1
Llog(σ1) for the spectral gap Lof L(C78) so that the
following inequality holds,
etL[σσ]σ1,2etLσσσ1,2ϵ(C87)
Putting these together, we have the bound on (C81),
σσσ1,2ϵ+t∥L Lσ1,22(1 + σσσ1,2)
ϵ+ 1
Llog(σ1)∥L Lσ1,22
11
Llog(σ1)∥L Lσ1,22
,(C88)
which also upper bounds the trace distance due to σσ1 σσσ1,2.
Setting σ=ρ,σ=σβ, and using Eqs. (C75) and (C78), we can bound
1
Llog(σ1)∥L Lσ1,22=O 2
p|A|(βH+ log(1))!,(C89)
where we used log(σ1
β)βH. With this, we obtain the following upper bound on the trace distance,
ρσβ1 ρσβσ1
β,2
w.h.p
ϵ+1
Llog(σ1
β)∥L Lσ1,22
=ϵ+O 2
p|A|(βH+ log(1))!= 2ϵ.
(C90)
39
101
100
101
Longitudinal field m/J
CH2
CH
KIH
REG
INTER
CH3
REG2
Pauli string ZYZYZYXX Pauli string ZXZZXZXZ Pauli string ZXXZYYYY Pauli string ZZXYZZYY
101
100
101
Longitudinal field m/J
CH2
CH
KIH
REG
INTER
CH3
REG2
0TFIM
101100101
Transverse field h/J
0TFIM
101100101
Transverse field h/J
101100101
Transverse field h/J
0.75
0.80
0.85
E[D1]
0.005
0.010
Var[D1]
101100101
Transverse field h/J
FIG. 12. Fractal dimension analysis of the mixed-field Ising model (30) for n= 8 qubits. The upper and lower rows show,
respectively, mean E[D1] and variance Var[D1] of the first fractal dimension, Eq. (D1), evaluated over the inner 80% of the
spectrum, as a function of transverse and longitudinal field strengths. The four columns correspond to four different Pauli
strings (given in each column’s title) which define the basis in which the fractal dimension is evaluated. Large mean values
E[D1] in conjunction with small Var[D1] are a signature of quantum chaos. We identify a clear region of uniform eigenstate
delocalization below the center (m=h= 1) of the phase diagrams, which is present for all considered Pauli bases. Within the
phase diagram, we identify eight parameter choices (white crosses), with broadly varying eigenstate characteristics, of interest,
which we will consider in more depth in the following convergence analysis. They are summarized in Tab. I.
The second inequality holds with a high probability. In the last equality, we set the number of jump operators as
|A|= n2β6H2
ϵ,(C91)
so that ρis 2ϵ-close to σβin trace distance. Choosing the number of jump operators as Eq. (C91) automatically
guarantees the second equation in Eq. (C80).
Appendix D: Details on numerical experiments and further results
We discuss further numerical results for the mixing time and convergence accuracy of the Lindblad dynamics (2).
We expand the analysis of Sec. Vby investigating the impact of the locality kof the jump operators, and of the
initial state in App. D 2. Furthermore, we extend our analysis to more parameter points, exploring further regular
limits and the edge of chaos of the model (30). We start this section by giving a more detailed overview of fractal
dimensions and eigenstate delocalization, from which we identify the various dynamical regimes of our mixed-field
Ising model (30). In App. D 3, we give further details on our Lindbladian simulation scheme.
1. Eigenstate delocalization
Quantum chaos is signaled by a strong and uniform delocalization of a substantial part of the energy eigenstates
in almost any basis of the Hilbert space (excluding pathological cases such as the eigenbasis itself) [6268]. A way to
quantify the delocalization of eigenstates in a given Hilbert space basis |ζ, is given by the fractal dimensions. For an
40
Key Hamiltonian parameters Step sizes, n= 3 4 5 6 7 8
TFIM h/J = 1.0, m/J = 0.0 0.25 0.25 0.125 0.125 0.125 0.125
CH2 h/J = 1.0, m/J = 0.2 0.25 0.25 0.125 0.125 0.125 0.125
CH h/J = 1.0, m/J = 0.4 0.25 0.25 0.125 0.125 0.125 0.125
KIH h/J = 0.9045, m/J = 0.8090 Jt= 0.25 0.125 0.125 0.125 0.125 0.0625
REG h/J = 0.1585, m/J = 3.062 0.125 0.0625 0.0625 0.0625 0.0625 0.03125
INTER h/J = 0.5623, m/J = 1.230 0.25 0.125 0.125 0.125 0.0625 0.0625
CH3 h/J = 1.698, m/J = 0.5551 0.125 0.125 0.125 0.0625 0.0625 0.0625
REG2 h/J = 6.310, m/J = 0.2158 0.0625 0.03125 0.03125 0.03125 0.03125 0.015625
TABLE I. Summary of the Hamiltonian parameter configurations considered in our numerical experiments. For each configu-
ration, we give the label, the corresponding coordinates h/J and m/J , and the time step sizes for the considered values of n.
The step sizes are determined by our numerical scheme as described in App. D 3.
eigenstate |Ei=Pdim H
ζ=1 ψi
ζ|ζ, the finite-size generalized fractal dimensions are defined by
D(i)
q=1
1qlogdim HR(i)
q, R(i)
q=
dim H
X
ζ=1 |ψi
ζ|2q, q R,(D1)
where dim His the dimension of the Hilbert space. In the following we will omit the superscript (i) of Dqif not needed.
For q= 1 the definition is given by taking the limit, D1= limq1Dq, which coincides with the normalized Shannon
entropy of the probability distribution {|ψζ|2}ζ. In the large-system limit one can identify three different regimes:
localized (the eigenstate is supported only on a vanishingly small portion of basis states, with limdim H→∞ Dq= 0 for all
q1), multifractal or extended non-ergodic (0 limdim H→∞ Dq1 dependent on q) and ergodic (limdim H→∞ Dq=
1 for all q, which holds for the limit of the uniform distribution |ψi
ζ|2= 1/dim Hfor all ζ).
For uniform eigenstate delocalization in a given expansion basis, two statistical signatures are crucial: Firstly, a
large average fractal dimension E[Dq], where the average Eis taken over the eigenstates {|Ei⟩}i, indicates that almost
all eigenstates are strongly delocalized. Secondly, a small variance Var[Dq] over the spectrum signals approximately
uniform behavior of all eigenstates. In the following we will focus on the first fractal dimension, q= 1, which is
sufficient for our purposes.
To gain a comprehensive picture of the dynamical landscape of the mixed-field Ising model (30), we analyze D1in
full parameter domain of our Hamiltonian. We focus on the bulk of the spectrum, to filter out potentially untypical
eigenstates at the spectral edges, and compute mean E[D1] and variance Var[D1] over the inner 80% (in terms of
energy) of the eigenstates. To consolidate our analysis in Fig. 2, where we consider the fractal dimension in the n-qubit
Z-basis, we evaluate D1in four additional random n-qubit Pauli bases. The results are shown in Fig. 12 for n= 8.
The four columns correspond to the four chosen Pauli bases, given in the column titles. For each column, we show
the mean value E[D1] (top row) and variance Var[D1] (bottom row), for the Ising Hamiltonian (30) in the parameter
range m {0} [101,101], h[101,101]. As before, we also include the value m= 0, which corresponding to the
(integrable) transverse-field Ising model.
As in Fig. 2, the parameter domain 0.7h/J 2, 0.2m/J 0.9 shows strong uniform eigenstate delocalization
for all considered Pauli bases. In this regime we observe a consistent maximization of E[D1], which is accompanied
by a drop of variance Var[D1], signaling uniform properties of the eigenstates over the bulk of the spectrum. The
fact that this happens independently of the chosen basis, confirms our claim in Sec. V A. We expect our model to
behave quantum chaotically, and consequently to align with the ETH predictions, in this parameter regime. Note
that the rest of the parameter domain looks slightly different for each chosen basis. This is expected, since the fractal
dimensions are by construction basis dependent, and we expect traces of this basis-dependence to be visible within
the non-chaotic parameter regions. Physically, we can identify three regular limits of the model (30); m/J ,
h/J or m, h 0. In these limits the Hamiltonian is effectively given by only a single term. In general, we
would expect that the ETH is not applicable in the regular limits. In the limit m/J 0, the mixed-field Ising
model approaches the transverse field Ising model, which can be mapped to free fermions via the Jordan-Wigner
transformation. This is an integrable (i.e. non-chaotic) model however, it can exhibit critical behavior.
To investigate the dynamical regimes of the mixed-field Ising model, we consider several parameter configurations,
covering different dynamical regimes of the Hamiltonian (30), for our simulations. These configurations are shown in
Fig. 12 as white crosses with corresponding labels. We consider points along an anti-diagonal through the center of
the phase diagram (m=h= 1) (REG,INTER,CH3,REG2. The points REG and REG2 are exemplary for the two regular
model limits, where either the longitudinal or the transverse field dominates the dynamics, cf. Eq. (30). The point
CH3 is located within the chaotic domain of the model. Additionally, we investigate the point KIH, considered in [92]
as a “robustly non-integrable” parameter point based on the distribution of level spacing ratios. From the standpoint
41
k= 2, initial state: zero state
k= 2, initial state: maximally mixed
k= 3, initial state: zero state
k= 3, initial state: maximally mixed
k=n, initial state: zero state
k=n, initial state: maximally mixed
3 4 5 6 7 8
102
103
Mixing time Jˆ
tmix
3 4 5 6 7
Number of qubits n
102
Spectral gap L/J
3 4 5 6 7 8
103
104
3 4 5 6 7 8
103
104
3 4 5 6 7 8
103
104
3 4 5 6 7
Number of qubits n
103
102
3 4 5 6 7
Number of qubits n
104
103
102
3 4 5 6 7
Number of qubits n
103
102
CH2
CH2
KIH
KIH
INTER
INTER
CH3
CH3
FIG. 13. Scaling of mixing time (top row) and spectral gap (bottom row) for varying k= 2,3, n, indicated by color, and two
different initial states, the maximally mixed state I/2nand the zero state |0n. We observe an general increase of the mixing
time, and a corresponding decrease of L, with the locality kof the jump operators. Especially non-local jump operators,
k=n, show a significantly faster increase (decrease) of the mixing time (spectral gap), as compared to local operators. In most
cases, the zero state converges faster to the steady state. This effect is most prominent for the chaotic Hamiltonian parameter
point CH3, see Tab. I.
of eigenstate delocalization, this parameter point seems to be located close to the edge of chaos. We further explore
the transition towards the transverse-field Ising model (m0) with the three points TFIM,CH2 and CH, located on
the vertical line h= 1 for three different values of m. All eight parameter points are summarized in Tab. I. According
to our fractal dimension analysis, we expect that the configurations CH,CH2 and CH3, show the clearest signatures of
quantum chaos and, therefore, that ETH is applicable at these points.
2. Mixing time and convergence accuracy
We extend our analysis of mixing time and convergence accuracy in Sec. V B. Especially, we take a closer look at
the influence of the locality kof our jump operators and of the initial state on the Lindblad evolution.
The upper row of Fig. 13 shows the scaling of the mixing time estimate ˆ
tmix (7) with n, for two initial states and
three values of k= 2,3, n. As Hamiltonian parameter points we consider CH2,KIH,INTER and CH3, cf. Tab. I, i.e. the
parameter sets that are not considered in Sec. V B. Note that the point REG2, which is dominated by the transverse
field, fails to converge below convergence accuracy σβρ(t)1= 102within the maximal time horizon considered
in our simulations for all cases, see App. D 3 for details. Therefore, we do not have mixing time estimates for this
point. We fix |A|= 50 random k-local Pauli jump operators, cf. Sec. V, in Fig. 13. As initial state we compare
the maximally mixed state I/2n(solid lines), with Ithe 2n×2nidentity, to the zero state |0n(dotted lines).
The different values of kare indicated by color. Firstly, we observe that the chaotic parameter points CH2 and CH3
show faster convergence than the other two points, i.e. smaller mixing time ˆ
tmix. The point KIH, according to our
analysis in Fig. 12 located at the edge of chaos, still shows a faster mixing than the non-chaotic point INTER. These
observations are in line with the results discussed in Fig. 4. The locality of the jump operators as a huge influence on
the mixing time. Note that the ETH ansatz (11) assume local operators with kn. Indeed, for local jump operators,
k= 2,3, ˆ
tmix scales polynomially in nwith approximately the same small exponent for both initial states and both
values of k. This confirms our observations Fig. 4, showing that our analytical bounds are applicable in this case.
In turn, non-local jump operators k=n, show a a much faster growth of the mixing time with n(for both initial
states), potentially even with a super-polynomial dependency on n. To validate this observation, simulations of larger
system sizes would be necessary. Regarding the influence of the initial state, for the chaotic point CH3, we observe
that choosing the zero state as initial state gives consistently shorter mixing times as compared to starting from the
maximally mixed state. For the remaining parameter points, this effect is less pronounced, but we still observe a weak
42
increase of mixing time for the maximally mixed state in most cases.
In the lower row of Fig. 13 we show the spectral gap Las a function of nfor the same Hamiltonian parameter
points. For k= 2,3 we observe, similar to Fig. 4, a polynomial decrease of the gap with n, in line with our theoretically
predicted bound (C78). For non-local jump operators, k=n, we observe a much faster decay of Lwith n. Whether
this decay is polynomially or super-polynomially, we cannot reliably assess based on the limited number of data points.
As expected, the gap mirrors the behavior of the mixing time; larger mixing times corresponds to smaller gaps, and
vice verse. This supports the theoretical bound (16), stating that the mixing time is mainly controlled by the spectral
gap of the Lindbladian generator.
Let us turn to the convergence accuracy of the Lindblad evolution, i.e. the trace distance σβρ1between
Gibbs state σβ, Eq. (1), and the steady state ρof L, Eq. (3). In Fig. 14 we plot this distance as a function of |A|
for varying n= 5,6,7, indicated by color. We show results for the five Hamiltonian parameter points not discussed
in the main text in Sec. V B. We set the locality to k= 2. Other values of kexhibit a similar behavior. In all cases
we observe a polynomial decrease of σβρ1with the number of jump operators |A|, which is qualitatively in
line with our analytical prediction in Eq. (18). The approximate leading order exponent κof the decay is obtained
from a linear fit in the double-logarithmic scale and is given for each curve in the legend. We generally observe, as in
Fig. 5, slightly smaller exponents (in absolute value) between κ 0.2 and κ 0.4 for all parameter points except
the point REG2. In comparison, our upper bound (18) predicts an exponent of κ=1/2. The parameter point REG2
shows the predicted scaling with good accuracy. However, this point resides in a regular limit of our model (30),
where the Hamiltonian is mainly controlled by the transverse field only, and we cannot expect ETH to be a faithful
description for the system at this point.
Similar to Fig. 5, we observe a trace distance σβρ1of order 104. Only the regular limit REG2, similar to the
other regular limit REG shown in Fig. 5, shows a significantly lower trace distance of order 106. First, let us consider
REG by setting (h/J, m/J) = (0,3) to understand its qualitative convergence behavior. The corresponding Hamiltonian
has the Bohr frequencies, BH=2JJ±2mm}J,ℓmZ. The Lindblad operators are given by Eq. (A15),
La=X
νBH
ηνAa
νη2JAa
2J.(D2)
We used that the jump operator with ν=2Jdominantly contributes because the others are suppressed by ην
e(βν+1)2/8[Eq. (A17)] for βJ = 0.5. With this Lindblad operator, the dissipative part of the Lindbladian (A13)
approximately takes the Davies form,
D[ρ]X
aA
γaη2
2JAa
2JρAa
2J1
2{Aa
2JAa
2J, ρ}.(D3)
One can readily show that the approximated generator (right-hand side) is exactly σβ-DB (i.e. σβbeing the exact
steady state). Thus, the steady state of Lis comparably close to the Gibbs state. Similarly, we can qualitatively
understand the convergence at REG2 by considering (h/J, m/J) = (6,0). At the leading order in h/J, where the
energy eigenstates are those of hPiXi, the Bohr frequencies are BH=2hh}hZ. The Lindblad operators are
given by Eq. (A15), Laη2hAa
2hwith η2he(6βJ +1)2/8. This again explains why the convergence accuracy is
high at REG2. Moreover, since even the most dominant transition rate is strongly suppressed by η2h, the applications
of the jump process rarely happen, and hence, the mixing time is large.
3. Details of the numerical scheme
a. Time evolution
To simulate the dynamics under Eq. (2) we employ the fourth-order Runge-Kutta method and develop an adaptive
scheme to obtain a suitable time step δtRK for the simulation. Note that the optimal step size for simulation can vary
greatly between the Hamiltonian parameter points and for different numbers of qubits n. Hence, an efficient way to
determine a suitable step size is crucial for our analysis. The obtained step sizes for every parameter configuration
and nare given in Tab. I.
As jump operators, we consider a set of random k-local Pauli jump operators Aa=P1 · ·· Pnwith Pi
{I1Q, X, Y , Z}and |{i:Pi=I1Q}| =k, indexed by aA. Here, I1Q is the 2 ×2 identity matrix. For our numerical
studies, we vary the size of Aand the locality kof the jump operators, cf. Secs. V B and D 2.
To advance the system state for a single time step, we use a randomized approach similar to our quantum algorithm
described in Sec. IV. Let ρ(i)(jδtRK ) describe the time-evolved state of a single trajectory i= 1, . . . , Ntra at time
43
n=5n=6n=7
101102
104
kρσβk1
kρσβk1=C|A|κ
κ=-0.22
κ=-0.21
κ=-0.18
101102
Number of jump operators |A|
104
kρσβk1
κ=-0.36
κ=-0.33
κ=-0.30
101102
104
κ=-0.30
κ=-0.30
κ=-0.27
101102
104
κ=-0.38
κ=-0.36
κ=-0.35
101102
Number of jump operators |A|
106
κ=-0.57
κ=-0.53
κ=-0.42
CH2 KIH INTER
CH3 REG2
FIG. 14. Scaling of the convergence accuracy ρσβ1with the number of jump operators |A|, for the parameter configu-
rations not considered in Fig. 5, and n= 5,6,7 (denoted by color). We consider (k= 2)-local jump operators. As in Fig. 5, we
observe a polynomial decay with |A|. The leading order polynomial exponents κ(obtained from a fit) are given in the legend.
We observe 0.3< κ for the close-to-chaotic configurations CH2 and KIH, and slightly larger (in modulus) κfor the less chaotic
points INTER and REG2. In turn, these points, generally, show lower distances ρσβ1, i.e. a higher accuracy. For REG2,
the distance is around two orders of magnitude smaller than in the chaotic regime. The same effect was observed for the other
regular limit REG of our model (30), cf. Fig. 5. We give an explanation of this effect in App. D 2.
step j. To advance each trajectory in time, we do not apply the full Lindbladian (3), but, instead, choose Ntra
random jump operators Aaiwith aiA,i= 1, . . . , Ntra with probability pai= 1/|A|, and perform a fourth-order
Runge-Kutta step with the Lindbladian generated by this single jump, Lai=i[H, ·] + Dai[cf. Eq. (20)] for each i.
This procedure approximates the dynamics generated by (3) for γa=γpawith overall dissipation strength parameter
γ= 1. In contrast to our quantum protocol, we do not factorize the coherent and dissipative parts of the one-step
time evolution as in Eq. (19), but instead apply the Lindbladian Laifor each iaccording to the scheme
ρ(i)((j+ 1)δtRK ) = ρ(i)(tRK) + δtRK
6(kai
1+ 2kai
2+ 2kai
3+kai
4)
kai
1=Lai[ρ(i)(tRK)]
kai
2=Lai[ρ(i)(tRK) + δtRKkai
1/2]
kai
3=Lai[ρ(i)(tRK) + δtRKkai
2/2]
kai
4=Lai[ρ(i)(tRK) + δtRKkai
3].
(D4)
This advances each trajectory ione step forward in time. The average state at time point (j+ 1) is given by
ρ((j+ 1)δtRK ) = PNtra
i=1 ρ(i)((j+ 1)δtRK )/Ntra. To obtain an approximate solution to Eq. (2), we have to average over
a suitable large number of trajectories. In practice, we observe that only a very small number of trajectories, of the
order of ten, suffices for our purposes, as can be seen for example from the mixing time estimates (vertical dashed
lines) in Fig. 3. For all our simulations, we each use Ntra = 10 trajectories for system sizes n= 3,...,7, and Ntra = 3
trajectories for n= 8.
We design an adaptive scheme to determine a suitable step size δtRK. We start with a fixed starting step size
δt0
RK and evolve the system as described above under Eq. (D4) until the dynamically evolved average state ρ(tRK)
44
becomes non-Hermitian. More precisely, at step jwe check if ρ(jδtRK) is Hermitian and, after this, normalize ρ(jδtRK )
to tr[ρ(tRK)] = 1 and project it to the subspace of Hermitian matrices via ρ(jδtRK) = (ρ(jδtRK ) + ρ(jδtRK ))/2.
We observed that for too large δtRK , even a single step according to Eq. (D4) suffices to map a Hermitian ρ(tRK)
to a state ρ((j+ 1)δtRK ) which is far away from the Hermitian subspace. If this is the case, the simulation is
stopped, and we start again from the initial state ρ0with reduced step size δt1
RK =δt0
RK/2. This procedure is
repeated until ρ(tRK) stays Hermitian for all time steps j. Note that we still apply the Hermitian projection
ρ(tRK) = (ρ(j δtRK) + ρ(jδtRK))/2 in each step (after checking if ρ(jδtRK ) is Hermitian), to improve numerical
stability. We verified numerically (for small systems) that, for a suitable small step size, this scheme is consistent.
This means it converges to the steady state of Lto numerical precision if the full Lindbladian Lis applied in each
time step, that is, not only a single randomly selected Lai.
Based on our observations, there is a sharp transition between too large step sizes, where the evolved state ρ(j δtRK)
becomes non-Hermitian already after very few steps, and sufficiently small step sizes, where the evolved state stays
Hermitian for all times. Therefore, this criterion efficiently determines a reasonable time step for our Runge-Kutta
scheme. The obtained step sizes are given in Tab. Ifor each parameter point and number of qubits n. Note furthermore,
that we fix a maximal number of time steps to Nmax
steps = 3 ·105in all our simulations.
b. Spectral gap of the Lindbladian
To determine the spectral gap of the Lindbladian L, Eq. (3), we construct the vectorization Lof L, cf. Eqs. (C49)
and (C50), as
L=i (IHHI) + X
aX
µ,ν
ηµην(Aa
ν)Aa
µ1
2I(Aa
µ)Aa
ν1
2(Aa
ν)(Aa
µ)I.(D5)
In general, Lis not Hermitian and, thus, has complex eigenvalues iC,i= 1,...,22n. If Lhas a unique steady
state (as we always assume here), there is a single eigenvalue which is zero, and all other eigenvalues have negative
real parts. This is, indeed, the case for all of our considered settings. Without loss of generality, let 1= 0. Then we
define the spectral gap of Las the distance between zero and the second largest real part of the spectrum,
L= min{|Re[i]| | i= 2,...,22n}.(D6)
Appendix E: Noise study
In this appendix, we provide additional details and data supporting the noise study presented in Sec. VI. After
recalling the setup of interest (App. E 1), we derive and discuss bounds for the convergence accuracy of a state
prepared through Lindbladian evolution subject to global depolarization noise (App. E 2). We then provide details
of the scaling study (App. E 3). We present bounds obtained through a more generic noise analysis and based on
what would be obtained for unitary circuits (App. E 4 and E 5, respectively). These are used for the Fig. 6of the
main text. Finally, we show that the bounds obtained in App. E 2, and more generally the noise study, can readily
be generalized to any stochastic noise (App. E 6).
1. Setup
The noiseless dynamics corresponding to Msteps of Lindblad evolution, each of duration δt, is given by ΓM=
(eδtL)M, with Lthe Lindbladian admitting the target Gibbs state σβas a steady state. We also denote Γ := Γ1= eδtL.
The noisy dynamics is defined as e
ΓM= λΓ)M, where each step of ideal evolution is now followed by a depolarization
channel
Λλ[X]:= (1 λ)X+λTr[X]I
2n.(E1)
Under this simplified noise model, the probability λ[0,1] appearing in Eq. (E1) can be related to the probability
of a random error happening per step of evolution. With Ngthe number of noisy quantum operations (typically
2-qubit gates) involved in the circuit simulating such evolution, and denoting as λgthe probability of error per gate,
the probability λsatisfies
(1 λ) = (1 λg)Ng.(E2)
45
Let ρM:= ΓM[ρ(0)] be the state obtained under noiseless evolution starting with the maximally mixed state
ρ(0) = I/d. As Mincreases, ρMconverges towards σβ, and we assume such convergence to be captured in trace
distance by:
ρMσβ1=BeαM ,(E3)
with convergence rate α > 0 and initial distance B=ρ(0) σβ1[0,2]. We highlight that the convergence rate
αdepends implicitly on the choice of evolution step δt, through the definition of ρM. Denoting this dependence
explicitly, we have α(aδt) = (δt) for any a > 0. For now, we assume that δt is fixed and drop it in our notations.
Let us denote M(ε) the smallest number of steps such that ρMσβ1ϵ. From Eq. (E3) it is obtained as:
M(ϵ) = ln(B/ϵ)
α.(E4)
Finally, let ˜ρM=e
ΓM[ρ(0)] be the state obtained after Msteps of noisy evolution starting again from ρ(0) and let
us define ˜ρto be the steady state of the noisy evolution such that e
ΓM[˜ρ] = ˜ρ.
2. Convergence of the noisy states towards the target Gibbs state
Our aim is to understand how the noise affects the convergence of the noisy dynamics. In particular, we want to
assess the distance ˜ρMσβ1between the noisy state and the target Gibbs state. To do so, we can relate the states
prepared through noisy and noiseless evolutions. For instance, after one step of noisy evolution, we get:
ρ0
Γ:=eδtL
ρ1
Λλ
˜ρ1=λρ0+ (1 λ)ρ1.
Iterating such step, we get that after Msteps of noisy evolution, the resulting state has the form
˜ρM= (1 λ)MρM+λ(1 λ)M1ρM1+·· · +λ(1 λ)ρ1+λρ0
= (1 λ)MρM+λ
M1
X
m=0
(1 λ)mρm.(E5)
Making use of Eq. (E3) and the triangle inequality, we obtain an upper bound for ˜ρMσβ1at arbitrary M:
˜ρMσβ1e
BM:= (1 λ)MρMσβ1+λ
M1
X
m=0
(1 λ)mρmσβ1
=B"(1 λ)MeαM +λ
M1
X
m=0
(1 λ)meαm#
=BuM
0+λ1uM
0
1u0with u0:= (1 λ)eα[0,1)
=BuM
01λ
1u0+λ
1u0where λ
1u0[0,1).
(E6)
Let us now comment on the convergence displayed in Eq. (E6). Given that 0 u0<1, the bound on the distance
decreases monotonically towards the value
e
B:=Bλ
1u0
,(E7)
that corresponds to a bound on the distance ˜ρσβ1between noiseless and noisy steady states. Such distance
depends both on the error rate λand the convergence rate α. As would be expected the smaller the errors are, the
closer the fixed state of the noisy dynamics is compared to the Gibbs state. For λ= 0, they coincide. Also, we see
that the noisy steady state differs from the mixed state that lies at a distance σβρ01=Bfrom σβ. In particular
˜ρρ(0)1 σβρ(0)1 σβ˜ρ1Be
B=B1λ
1u00,(E8)
46
that is always non-null, except when λ= 1. That is, the noisy dynamics always converges to a state distinct from the
fixed point of the noise channel Λλ. This is in stark contrast with unitary evolution.
Additionally, we see in Eq. (E6) that at fixed noise level p, the term e
Bdecreases as the decay rate αincreases.
Further inspection reveals features of the mixing time of the noisy dynamics. In particular, we see that the distance
between ˜ρMand ˜ρdecays as uM
0, that adopts a scaling e˜αanalogous to Eq. (E3) but now with a convergence rate
˜α=α+NgCwhere C=ln(1 λg)>0.(E9)
Notably, this convergence rate is always larger than the noiseless one. That is, while the noise is detrimental in that
it perturbs the steady state of the dynamical evolution, reaching the noisy steady state is never slowed.
3. Scaling study
Our aim is to study the impact of the errors due to noise as the system sizes increase. For that, we wish to evaluate
the bounds of Eq. (E7) for different n. This requires specifying values for the convergence rates α, the distances
between mixed and Gibbs state B, and the probabilities of error λ. For the two first quantities, we can rely on data
obtained through the numerical simulations for small system sizes that are then extrapolated to larger ones.
In Fig. 15 (left panel), we show the fits of αand Bfor the numerical simulations performed for n= 3 8 qubits
for the Hamiltonian’s configuration CH given in Tab. I. We recall that this configuration correspond to the chaotic
regime. As can be seen the dynamics of Eq. (E3) closely matches the numerical data. Then in Fig. 15 (right panel)
we report geometric fits for these two quantities as a function of n. In both cases, albeit performed on a small number
of data points, we see reasonably good fits, in particular for α. For the probability of error λ, we resort to Eq. (E2)
and take a number of gates per unit of time (δt = 1) scaling linearly in n, namely Ng= 100n. With these, and fixing
the value of the probability of error pgper noisy gate, we can evaluate Eq. (E7) for arbitrary n. These are the data
points used for Fig. 6in the main text (solid lines).
4. Comparison to generic noise bounds
We wish to compare Eq. (34) to bounds obtained for generic noise models. In the following, we first port the bounds
of Ref. [28] (Lemma II.1) to the discrete settings considered here and then evaluate those for the depolarization noise.
Denote as Aand as e
Aa step of noiseless and noisy dynamics respectively. Let ρand ˜ρbe the noiseless and noisy
steady states that satisfy A[ρ] = ρand e
A[˜ρ] = ˜ρ. The mixing time of the noiseless dynamics in the discrete
case is characterized by the number of steps Mmix(ϵ) ensuring that for any state ρwe have ∥AMmix(ϵ)[ρ]ρ1ϵ.
For now let us drop the dependency in ϵ. To bound ˜ρρwe proceed as follow
˜ρρ1=
e
AMmix [˜ρ]ρ
1
e
AMmix [˜ρ] AMmix [ ˜ρ]
1+
AMmix [˜ρ]ρ
1
(e
AMmix AMmix )[˜ρ]
1+ϵ,
(E10)
where we made use of the triangle inequality and of the definition of Mmix. We further bound
e
AMmix AMmix
11
A (e
AMmix1 AMmix 1)
11+
(e
A−A)e
AMmix1
11
e
AMmix1 AMmix 1
11+
e
A−A
11
Mmix
e
A−A
11.
(E11)
To get the second line, we used the triangle inequality first, and then sub-multiplicativity of the induced norm ∥·∥11
together with ∥A∥111 and e
A∥111 as both Aand e
Aare quantum channels. The third line follows through
recursion. Overall we get:
eρρ1Mmix(ϵ)
e
A−A
11+ϵ. (E12)
Due to the repeated use of the triangle inequality in Eq. (E11), each evolution step incurs a contribution e
A−A∥11
to the resulting bound. This may significantly over-estimate the effect of the noise, as it corresponds to a worst-case
scenario where each step incurs the maximum deviation possible and noise inter-steps never average out, but it allows
47
4.5
4.0
3.5
log(α)
y=1.90 1.39x
1.2 1.4 1.6 1.8 2.0
log(n)
0.8
0.6
log(B)
y=-1.142 + 0.31 x
0 500 1000 1500 2000 2500
Evolution time Jt
103
102
101
100
kρ(t)σβk1
n= 3
n= 4
n= 5
n= 6
n= 7
n= 8
Fit (y=Bexpαt)
FIG. 15. (Left panel) From the numerical data of App. (D) for the setup CH (see details in Tab. I), we plot the trace distance
of the prepared state along the Lindblad evolution, compared to the target Gibbs state (solid lines). For each of the system
sizes probed (colors in legend), we fit the parameters αand Bfrom Eq. (E3) and display the resulting fit (dashed lines). (Right
panel) Values of the parameters obtained for α(upper panel) and B(lower panel) are plotted and fitted as a function of n.
us to deal with arbitrary discrepancies between Aand e
A. This is in contrast to the bounds obtained in Eq. (E6),
where part of the deviations induced by the noise, especially the ones occurring early on, are mitigated during the
evolution.
With these generic error bounds obtained, we can now specialize to our noise model. To do so, we need to evaluate
e
A−A
11. For our setup, presented in App. E 1, we have A= eδtLwith ρ=σβ, while the noisy evolution is
given by e
A= Λλ A. Using the definition of the depolarization channel from Eq. (E1) we see that
(e
A−A)[X]
1=
λY λTr[Y]I
2n
1λ(Y1+ Tr[Y]) 2λX1,(E13)
where we have introduced Y=A[X] and used Tr[Y] Y1 X1. Overall, we get that e
AA∥11= 2λ. Hence,
application of Eq. (E12) to our setup or to any noise model inducing e
AA∥11= 2λ gives us the bound on the
distance between noisy and noiseless state:
˜ρσβ12λMmix (ϵ) + ϵ. (E14)
Substituting the expression M(ϵ) for Mmix(ϵ) from Eq. (E4), and minimizing the bound over the choice of ϵwe
finally obtain
e
B
:= max B, 2λ
αln
2λ+ 1(E15)
as a bound for ˜ρσβ1. Comparison of Eqs. (E7) and (E15) shows differences between the bounds obtained for
the global depolarization channel to bounds obtained for generic noise models with corresponding strengths. These
are reported as plain and dotted lines, respectively, in Fig. 6(right panel) in the main text. As can be seen, the
bounds for generic noise models significantly overestimate the impact of the global depolarization noise.
5. Comparison to unitary circuits
We wish to compare deviations induced by noise for the Lindbladian protocols to deviations that would occur
in a unitary circuit. We note the limit of such comparison as preparation of Gibbs states would rely in non-unitary
dynamics in the first place. Nonetheless, for the sake of comparison, we will consider unitary circuits that are assumed
to have the same complexity (i.e., the same number of noisy gates) and to produce the same outputs as our protocols.
Recall that in the noiseless case, the number of steps Mmix(ϵ) required to prepare a state σϵthat differs by ϵin trace
distance from σβwas provided in Eq. (E4). Given that each evolution step requires Ngnoisy gates, this incurs a total
of Ntot =Mmix(ϵ)Ngnoisy gates. Upon the global depolarization model, and for a unitary circuit having Ntot gates,
the state σϵprepared by the noiseless circuit would become ˜σϵ=λtot I/2n+ (1 λtot)σϵwith
(1 λtot) = (1 λg)Mmix(ϵ)Ng.(E16)
48
Compared to the Gibbs state σβ, this state would deviate in trace distance by
˜σϵσβ1=
λtotI
2nσβ+ (1 λtot)(σϵσβ)
1λtotB+ (1 λtot )ϵ. (E17)
In contrast to the noisy Lindblad dynamics. E6, as we increase the number of steps Mmix(ϵ) we see competitive effects
between a decrease of the convergence accuracy ϵand an increase of the errors due to the noise. To be conservative
in our study, we minimize the right-hand side of Eq. (E17) over ϵ, or equivalently Mmix(ϵ), to make this bound as
tight as possible. Results of the bounds obtained through this minimization for different system sizes are reported as
dashed lines in Fig. 6(right panel) in the main text. When compared to Eq. (E7), we see a noticeable increase in the
distances of the prepared state towards the targeted one for the unitary case. That is, we probed the added resilience
of the Lindblad dynamics compared to unitary circuits.
6. Generalization of the convergence analysis to probabilistic noise models
While presented for a global depolarization noise model, the convergence analysis of App. E 2 can readily be ported
to more general models. Let us consider the setup of App. E 1, but with now a stochastic noise of the form
Λλ[X]:= (1 λ)X+X
l
λlUlXU
l(E18)
such that each of the Uiis unitary and with λl[0,1] together with λ=Plλl1. Such a model can be understood
as the probabilistic application of a unitary Ul(or the identity) occurring with a probability λl(or 1 λ). This
encompasses global or local depolarization, any Pauli noise model and many more.
As before, let us denote as ρ(0) the initial state, as ρM= ΓM[ρ(0)] the state obtained after Msteps of ideal evolution,
and as ˜ρM=˜
ΓM[ρ(0)] the state obtained after Msteps of noisy evolution. We define χm,0:= (1)PlλlUlΓ[˜ρm1]U
l,
which is a valid state ensuring that one step one noisy evolution yields
λΓ)[˜ρm1]:= (1 λ)Γ[ ˜ρm1] + λχm,0,(E19)
and further define χm,K := ΓK[σm,0] that is also a state. With these notations, the state obtained after ksteps of
noisy evolution, akin to Eq. (E5) for the global depolarization case, can now be written as
˜ρM= (1 λ)MρM+λ(1 λ)M1χ1,M1+· ·· +λ(1 λ)χM1,1+λχM,0
= (1 λ)MρM+λ
M1
X
K=0
(1 λ)KχMK,K .(E20)
Assuming that the convergence dynamics of Eq. (E3) holds for any initial state with, in particular
χm,K σβ1=BeαK ,(E21)
we recover exactly the same results as before in terms of bounds for the trace distance between the target Gibbs state
compared to the state prepared along noisy evolution, as per Eq. (E6), or compared to the steady state of the noisy
dynamics, as per Eq. (E7).
Appendix F: Details on algorithmic errors in the quantum circuit simulation
We analyze the circuit error dependence on the main algorithmic parameters, evolution step δt, and OFT discretiza-
tion step tfor our randomized single-ancilla Lindblad simulation protocol discussed in the main text. As discussed
in Sec. VII A, we use the mixed-field Ising model (30) and fix the integration domain of the OFT to JT = 1.6, and
simulate the circuit evolution up to a time point Jt = 500. The circuit error is quantified by the distance ρcirc
σβ1
between the target Gibbs state σβand the steady state of the circuit ρcirc
. We compute ϵij := ρcirc
(δti,tj)σβ1
for various evolution steps δtiand OFT discretization steps tjin the range 102J δt 101and 0.06 Jt2.
More precisely, we consider Jδti= 10(δ)
iand Jtj= 10(∆)
jwith {(δ)
i}i=1,...10 and {(∆)
j}j=1,...,30 evenly discretizing
the intervals [2,1] and [log100.06,log102], respectively.
49
0.10 0.15 0.20 0.25 0.30 0.35
OFT discretization step Jt
103
102
101
100
kρcirc
σβk1
t = 0.01
t = 0.10
t = 1.00
t = 10.00
0.10 0.15 0.20 0.25 0.30 0.35
OFT discretization step Jt
103
102
101
100
A2+A1
A3+A1
A4+A1
f
FIG. 16. Operator Fourier transform discretization error dependence of the randomized single-ancilla protocol for n= 5 qubits.
The left plot shows the same data as Fig. 7, on a limited domain Jt0.37. The right plot decomposes the total error (37)
(solid curve), for two values J δt = 0.01 and Jδt = 1, into individual terms as described below Eq. (F2) (shown as dotted,
dashed and dashed-dotted curves, respectively).
1. Error fit
To test the applicability of our theoretical bound (36), we fit the function (37) to the data. As we are mainly
interested in the logarithmic dependency of the error on δt and t, we minimize the following loss function
(αopt
1, αopt
2, αopt
3, αopt
4) = argmin
α1234sX
i,j log10fα1234(δti,tj)log10ϵij 2.(F1)
Note that, since the logarithm is strictly monotonically increasing, closeness of log10fα1234(δti,tj) and log10 ϵij
also implies that fα1234(δti,tj) and ϵij are close. Introducing the logarithm gives a stronger relative weight to
the regime where δt and tare small, in which the values of fα1234(δti,tj) and ϵij are orders of magnitude
smaller than for large δt and t. An alternative would be to consider a relative error loss function, which results in a
very similar fit. Note that the bound on the discretization error (the fourth term in Eq. (37)) is only valid for small
t, satisfying 2πβ
t2βH1>0, which corresponds to Jt0.37 for our parameter choice. When approaching
this limit, the error bound diverges. Hence, we limit the domain of the fit and take into account only δtiand tj
with Jδti0.3 and Jtj0.37 for this. With these restrictions, we obtain optimal parameters αopt
1= 2.6×103,
αopt
2= 1.8×102,αopt
3= 4.1×104,αopt
4= 1.5×104from Eq. (F1).
2. OFT discretization error decomposition
In Fig. 16 we take a closer look at the individual error terms in Eq. (37). We focus on the error dependence on
the OFT discretization step t. The left plot shows the same data as in the right plot of Fig. 7, where we limit the
domain to Jt0.37. The right plot decomposes the fit (solid lines in the left plot) into its individual terms
fα1234(δt, t) = A1+A2+A3+A4,
A1=α1, A2=α2δt, A3=α3Tt2
δt , A4=α4pβ|BH|e1
82πβ
t2βH∥−12
.(F2)
We consider the cases Jδt = 0.01 and 1 (blue and red lines) as examples and plot A1+A2(dotted), A1+A3(dashed)
and A1+A4(dashed-dotted line) in the right panel. For each curve we include the shift due to the constant term A1
to make the individual curves comparable to the total error shown as solid curves. For small Jδt = 0.01, the error
due to the Trotterization, randomization and dilation, quantified by A2(blue dotted), is negligibly small. Therefore,
the shape of the blue curve is controlled by the discretization error A4(blue dotted-dashed) and the polynomial t
dependence of A3(blue dashed), which is the dominant contribution for Jt0.25. The term A3comes from the
coherent evolution under the system Hamiltonian for time step tto compute the discretized OFT (25) [cf. discussion
50
below Eq. (36)]. This weak polynomial decrease of the error for Jt0.25 is clearly visible in the dark blue data
points in the left plot (although with a slightly larger slope as predicted by the fit). The other exemplary case is for
larger Jδt = 1, shown in red. In this case, the Trotter error A2(red dotted) is larger than A3(red dashed), such that
for Jt0.25 the error becomes independent from t. This behavior is clearly shown by the red data points in the
left plot. For larger t, the discretization error A4becomes the dominant contribution.
[1] A. M. Dalzell et al., Quantum algorithms: A survey of applications and end-to-end complexities (2023), arXiv:2310.03011
[quant-ph].
[2] A. Abbas, A. Ambainis, B. Augustino, A. artschi, H. Buhrman, C. Coffrin, G. Cortiana, V. Dunjko, D. J. Egger,
B. G. Elmegreen, N. Franco, F. Fratini, B. Fuller, J. Gacon, C. Gonciulea, S. Gribling, S. Gupta, S. Hadfield, R. Heese,
G. Kircher, T. Kleinert, T. Koch, G. Korpas, S. Lenk, J. Marecek, V. Markov, G. Mazzola, S. Mensa, N. Mohseni,
G. Nannicini, C. O’Meara, E. P. Tapia, S. Pokutta, M. Proissl, P. Rebentrost, E. Sahin, B. C. B. Symons, S. Tornow,
V. Valls, S. Woerner, M. L. Wolf-Bauwens, J. Yard, S. Yarkoni, D. Zechiel, S. Zhuk, and C. Zoufal, Challenges and
opportunities in quantum optimization, Nature Reviews Physics 6, 718–735 (2024),arXiv:2312.02279 [quant-ph].
[3] B. M. Terhal and D. P. DiVincenzo, Problem of equilibration and the computation of correlation functions on a quantum
computer, Phys. Rev. A 61, 022301 (2000),arXiv:quant-ph/9810063.
[4] D. Poulin and P. Wocjan, Sampling from the thermal quantum gibbs state and evaluating partition functions with a
quantum computer, Phys. Rev. Lett. 103, 220502 (2009).
[5] K. Temme, T. J. Osborne, K. G. Vollbrecht, D. Poulin, and F. Verstraete, Quantum Metropolis Sampling, Nature 471, 87
(2011),arXiv:0911.3635 [quant-ph].
[6] E. Bilgin and S. Boixo, Preparing thermal states of quantum systems by dimension reduction, Phys. Rev. Lett. 105, 170405
(2010).
[7] M.-H. Yung and A. Aspuru-Guzik, A quantum–quantum metropolis algorithm, Proceedings of the National Academy of
Sciences 109, 754 (2012).
[8] M. J. Kastoryano and F. G. S. L. Brand˜ao, Quantum gibbs samplers: The commuting case, Communications in Mathe-
matical Physics 344, 915 (2016),arXiv:1409.3435.
[9] F. G. S. L. Brand˜ao and M. J. Kastoryano, Finite Correlation Length Implies Efficient Preparation of Quantum Thermal
States, Commun. Math. Phys. 365, 1 (2019),arXiv:1609.07877 [quant-ph].
[10] A. N. Chowdhury and R. D. Somma, Quantum algorithms for Gibbs sampling and hitting-time estimation, Quant. Inf.
Comput. 17, 0041 (2017).
[11] M. Motta, C. Sun, A. T. K. Tan, M. J. O. Rourke, E. Ye, A. J. Minnich, F. G. S. L. Brand˜ao, and G. K.-L. Chan,
Determining eigenstates and thermal states on a quantum computer using quantum imaginary time evolution, Nature
Phys. 16, 205 (2019),arXiv:1901.07653 [quant-ph].
[12] A. Gily´en, Y. Su, G. H. Low, and N. Wiebe, Quantum singular value transformation and beyond: exponential improvements
for quantum matrix arithmetics, in Proceedings of the 51st Annual ACM SIGACT Symposium on Theory of Computing
(ACM, 2019) arXiv:1806.01838.
[13] L. Coopmans, Y. Kikuchi, and M. Benedetti, Predicting Gibbs-State Expectation Values with Pure Thermal Shadows,
PRX Quantum 4, 010305 (2023).
[14] Z. Holmes, G. Muraleedharan, R. D. Somma, Y. Subasi, and B. S¸ahino˘glu, Quantum algorithms from fluctuation theorems:
Thermal-state preparation, Quantum 6, 825 (2022),arXiv:2203.08882 [quant-ph].
[15] D. Zhang, J. L. Bosse, and T. Cubitt, Dissipative Quantum Gibbs Sampling (2023), arXiv:2304.04526 [quant-ph].
[16] N. Metropolis, A. W. Rosenbluth, M. N. Rosenbluth, A. H. Teller, and E. Teller, Equation of State Calculations by Fast
Computing Machines, The Journal of Chemical Physics 21, 1087 (1953).
[17] W. K. Hastings, Monte carlo sampling methods using markov chains and their applications, Biometrika 57, 97 (1970).
[18] J. E. Moussa, Low-Depth Quantum Metropolis Algorithm (2019), arXiv:1903.01451 [quant-ph].
[19] J. Jiang and S. Irani, Quantum Metropolis Sampling via Weak Measurement (2024), arXiv:2406.16023 [quant-ph].
[20] S. Lloyd, Universal quantum simulators, Science 273, 1073 (1996).
[21] G. Lindblad, On the Generators of Quantum Dynamical Semigroups, Commun. Math. Phys. 48, 119 (1976).
[22] M. Kliesch, T. Barthel, C. Gogolin, M. Kastoryano, and J. Eisert, Dissipative quantum church-turing theorem, Phys. Rev.
Lett. 107, 120501 (2011).
[23] A. M. Childs and T. Li, Efficient simulation of sparse Markovian quantum dynamics, Quant. Inf. Comput. 17, 0901 (2017),
arXiv:1611.05543 [quant-ph].
[24] R. Cleve and C. Wang, Efficient Quantum Algorithms for Simulating Lindblad Evolution (2019), arXiv:1612.09512.
[25] P. Wocjan and K. Temme, Szegedy Walk Unitaries for Quantum Maps, Commun. Math. Phys. 402, 3201 (2023),
arXiv:2107.07365 [quant-ph].
[26] O. Shtanko and R. Movassagh, Preparing thermal states on noiseless and noisy programmable quantum processors (2021),
arXiv:2112.14688 [quant-ph].
[27] P. Rall, C. Wang, and P. Wocjan, Thermal State Preparation via Rounding Promises, Quantum 7, 1132 (2023),
arXiv:2210.01670.
51
[28] C.-F. Chen, M. J. Kastoryano, F. G. S. L. Brand˜ao, and A. Gily´en, Quantum Thermal State Preparation (2023),
arXiv:2303.18224 [quant-ph].
[29] C.-F. Chen, M. J. Kastoryano, and A. Gily´en, An efficient and exact noncommutative quantum Gibbs sampler (2023),
arXiv:2311.09207 [quant-ph].
[30] Z. Ding, C.-F. Chen, and L. Lin, Single-ancilla ground state preparation via Lindbladians (2023), arXiv:2308.15676 [quant-
ph].
[31] Z. Ding, B. Li, and L. Lin, Efficient quantum Gibbs samplers with Kubo–Martin–Schwinger detailed balance condition
(2024), arXiv:2404.05998 [quant-ph].
[32] Z. Ding, X. Li, and L. Lin, Simulating open quantum systems using hamiltonian simulations, PRX Quantum 5, 020332
(2024).
[33] A. Gily´en, C.-F. Chen, J. F. Doriguello, and M. J. Kastoryano, Quantum generalizations of Glauber and Metropolis
dynamics (2024), arXiv:2405.20322 [quant-ph].
[34] H. Chen, B. Li, J. Lu, and L. Ying, A Randomized Method for Simulating Lindblad Equations and Thermal State
Preparation (2024), arXiv:2407.06594v2 [quant-ph].
[35] F. Verstraete, M. M. Wolf, and J. Ignacio Cirac, Quantum computation and quantum-state engineering driven by dissipa-
tion, Nature Physics 5, 633 (2009).
[36] B. Kraus, H. P. uchler, S. Diehl, A. Kantian, A. Micheli, and P. Zoller, Preparation of entangled states by quantum
markov processes, Phys. Rev. A 78, 042307 (2008).
[37] P. M. Harrington, E. J. Mueller, and K. W. Murch, Engineered dissipation for quantum information science, Nature
Reviews Physics 4, 660 (2022).
[38] S. Bravyi, A. Chowdhury, D. Gosset, and P. Wocjan, Quantum Hamiltonian complexity in thermal equilibrium, Nature
Phys. 18, 1367 (2022),arXiv:2110.15466 [quant-ph].
[39] C. Rouz´e, D. Stilck Fran¸ca, and A. M. Alhambra, Efficient thermalization and universal quantum computing with quantum
Gibbs samplers (2024), arXiv:2403.12691 [quant-ph].
[40] T. Bergamaschi, C.-F. Chen, and Y. Liu, Quantum computational advantage with constant-temperature Gibbs sampling
(2024), arXiv:2404.14639 [quant-ph].
[41] J. Ra jakumar and J. D. Watson, Gibbs Sampling gives Quantum Advantage at Constant Temperatures with O(1)-Local
Hamiltonians (2024), arXiv:2408.01516 [quant-ph].
[42] M. Srednicki, Chaos and Quantum Thermalization, Phys. Rev. E 50, 888 (1994),arXiv:cond-mat/9403051.
[43] M. Srednicki, Thermal fluctuations in quantized chaotic systems, J. Phys. A 29, L75 (1996),arXiv:chao-dyn/9511001.
[44] M. Srednicki, The approach to thermal equilibrium in quantized chaotic systems, Journal of Physics A: Mathematical and
General 32, 1163 (1999).
[45] L. D’Alessio, Y. Kafri, A. Polkovnikov, and M. Rigol, From quantum chaos and eigenstate thermalization to statistical
mechanics and thermodynamics, Adv. Phys. 65, 239 (2016),arXiv:1509.06411 [cond-mat.stat-mech].
[46] C.-F. Chen and F. G. S. L. Brand˜ao, Fast Thermalization from the Eigenstate Thermalization Hypothesis (2021),
arXiv:2112.07646 [quant-ph].
[47] H. Spohn and J. L. Lebowitz, Irreversible Thermodynamics for Quantum Systems Weakly Coupled to Thermal Reservoirs,
in Advances in Chemical Physics, Vol. 38 (John Wiley & Sons, Ltd, 1978) pp. 109–142.
[48] R. Alicki, On the detailed balance condition for non-hamiltonian systems, Reports on Mathematical Physics 10, 249 (1976).
[49] A. Kossakowski, A. Frigerio, V. Gorini, and M. Verri, Quantum detailed balance and KMS condition, Communications in
Mathematical Physics 57, 97 (1977).
[50] E. A. Carlen and J. Maas, Gradient flow and entropy inequalities for quantum markov semigroups with detailed balance,
Journal of Functional Analysis 273, 1810 (2017).
[51] F. Fagnola and V. Umanit`a, Generators of Detailed Balance Quantum Markov Semigroups, Infinite Dimensional Analysis,
Quantum Probability and Related Topics 10, 335 (2007),arXiv:0707.2147.
[52] F. Fagnola and V. Umanit`a, Generators of KMS Symmetric Markov Semigroups on B(h) Symmetry and Quantum Detailed
Balance, Communications in Mathematical Physics 298, 523 (2010).
[53] K. Temme, M. J. Kastoryano, M. B. Ruskai, M. M. Wolf, and F. Verstraete, The χ2-divergence and Mixing times of
quantum Markov processes, J. Math. Phys. 51, 122201 (2010),arXiv:1005.2358 [quant-ph].
[54] M. M. Wolf, Quantum Channels & Operations Guided Tour , Lecture notes (Niels-Bohr Institute, Copenhagen, 2012).
[55] A. Ramkumar and M. Soleimanifar, Mixing time of quantum Gibbs sampling for random sparse Hamiltonians (2024),
arXiv:2411.04454.
[56] C. Rouz´e, D. S. Fran¸ca, and ´
A. M. Alhambra, Optimal quantum algorithm for Gibbs state preparation (2024),
arXiv:2411.04885.
[57] H.-E. Li, Y. Zhan, and L. Lin, Dissipative ground state preparation in ab initio electronic structure theory (2024),
arXiv:2411.01470 [quant-ph].
[58] M. J. Kastoryano and K. Temme, Quantum logarithmic Sobolev inequalities and rapid mixing, Journal of Mathematical
Physics 54, 052202 (2013).
[59] E. Campbell, Random compiler for fast hamiltonian simulation, Phys. Rev. Lett. 123, 070503 (2019).
[60] Y. Y. Atas, E. Bogomolny, O. Giraud, and G. Roux, Distribution of the Ratio of Consecutive Level Spacings in Random
Matrix Ensembles, Physical Review Letters 110, 084101 (2013).
[61] V. Oganesyan and D. A. Huse, Localization of interacting fermions at high temperature, Physical Review B 75, 155111
(2007).
[62] A. R. Kolovsky and A. Buchleitner, Quantum chaos in the Bose-Hubbard model, Europhysics Letters 68, 632 (2004).
52
[63] Y. Y. Atas and E. Bogomolny, Multifractality of eigenfunctions in spin chains, Physical Review E 86, 021104 (2012).
[64] W. Beugeling, A. Andreanov, and M. Haque, Global characteristics of all eigenstates of local many-body Hamiltonians:
participation ratio and entanglement entropy, Journal of Statistical Mechanics: Theory and Experiment 2015, P02002
(2015).
[65] Y. Y. Atas and E. Bogomolny, Quantum Ising model in transverse and longitudinal fields: chaotic wave functions, Journal
of Physics A: Mathematical and Theoretical 50, 385102 (2017).
[66] W. Beugeling, A. acker, R. Moessner, and M. Haque, Statistical properties of eigenstate amplitudes in complex quantum
systems, Physical Review E 98, 022204 (2018).
[67] L. Pausch, E. G. Carnio, A. Rodr´ıguez, and A. Buchleitner, Chaos and Ergodicity across the Energy Spectrum of Interacting
Bosons, Physical Review Letters 126, 150601 (2021).
[68] E. Brunner, L. Pausch, E. G. Carnio, G. Dufour, A. Rodr´ıguez, and A. Buchleitner, Many-Body Interference at the Onset
of Chaos, Physical Review Letters 130, 080401 (2023).
[69] M. Raghunandan, F. Wolf, C. Ospelkaus, P. O. Schmidt, and H. Weimer, Initialization of quantum simulators by sympa-
thetic cooling, Science Advances 6, eaaw9268 (2020).
[70] S. Polla, Y. Herasymenko, and T. E. O’Brien, Quantum digital cooling, Phys. Rev. A 104, 012414 (2021).
[71] X. Mi et al., Stable quantum-correlated many-body states through engineered dissipation, Science 383, adh9932 (2024),
arXiv:2304.13878 [quant-ph].
[72] T. S. Cubitt, Dissipative ground state preparation and the Dissipative Quantum Eigensolver (2023), arXiv:2303.11962
[quant-ph].
[73] E. Granet and H. Dreyer, A noise-limiting quantum algorithm using mid-circuit measurements for dynamical correlations
at infinite temperature (2024), arXiv:2401.02207 [quant-ph].
[74] S. Duffield, G. Matos, and M. Johannsen, qujax: Simulating quantum circuits with JAX, Journal of Open Source Software
8, 5504 (2023).
[75] J. Bradbury, R. Frostig, P. Hawkins, M. J. Johnson, C. Leary, D. Maclaurin, G. Necula, A. Paszke, J. VanderPlas,
S. Wanderman-Milne, and Q. Zhang, JAX: composable transformations of Python+NumPy programs (2018), version
0.3.13. http://github.com/jax-ml/jax (accessed 2024-12-21).
[76] S. Sivarajah, S. Dilkes, A. Cowtan, W. Simmons, A. Edgington, and R. Duncan, t|ket: a retargetable compiler for NISQ
devices, Quantum Science and Technology 6, 014003 (2020).
[77] A. M. Childs, E. Farhi, and J. Preskill, Robustness of adiabatic quantum computation, Phys. Rev. A 65, 012322 (2001).
[78] J. Roland and N. J. Cerf, Noise resistance of adiabatic quantum computation using random matrix theory, Phys. Rev. A
71, 032330 (2005).
[79] K. Kechedzhi, S. V. Isakov, S. Mandr`a, B. Villalonga, X. Mi, S. Boixo, and V. Smelyanskiy, Effective quantum volume,
fidelity and computational cost of noisy quantum processing experiments, Future Gener. Comput. Syst. 153, 431 (2024),
arXiv:2306.15970 [quant-ph].
[80] B. F. Schiffer, A. F. Rubio, R. Trivedi, and J. I. Cirac, The quantum adiabatic algorithm suppresses the proliferation of
errors (2024), arXiv:2404.15397 [quant-ph].
[81] E. Granet and H. Dreyer, Dilution of error in digital Hamiltonian simulation (2024), arXiv:2409.04254 [quant-ph].
[82] E. Chertkov, Y.-H. Chen, M. Lubasch, D. Hayes, and M. Foss-Feig, Robustness of near-thermal dynamics on digital
quantum computers (2024), arXiv:2410.10794 [quant-ph].
[83] J. J. Wallman, Noise tailoring for scalable quantum computation via randomized compiling, Physical Review A 94, 052325
(2016).
[84] A. Hashim, Randomized Compiling for Scalable Quantum Computing on a Noisy Superconducting Quantum Processor,
Physical Review X 11, 041039 (2021).
[85] A. Bakshi, A. Liu, A. Moitra, and E. Tang, Learning quantum Hamiltonians at any temperature in polynomial time (2023),
arXiv:2310.02243 [quant-ph].
[86] C. Murthy and M. Srednicki, Bounds on chaos from the eigenstate thermalization hypothesis, Phys. Rev. Lett. 123, 230606
(2019).
[87] E. B. Davies, Markovian master equations, Commun. Math. Phys. 39, 91 (1974).
[88] E. B. Davies, Markovian master equations. II, Mathematische Annalen 219, 147 (1976).
[89] A. Dymarsky and H. Liu, New characteristic of quantum many-body chaotic systems, Phys. Rev. E 99, 010102 (2019).
[90] M. C. Ba˜nuls, D. A. Huse, and J. I. Cirac, Entanglement and its relation to energy variance for local one-dimensional
hamiltonians, Phys. Rev. B 101, 144305 (2020).
[91] S. Lu, M. C. Ba˜nuls, and J. I. Cirac, Algorithms for quantum simulation at finite energies, PRX Quantum 2, 020321
(2021).
[92] H. Kim and D. A. Huse, Ballistic spreading of entanglement in a diffusive nonintegrable system, Phys. Rev. Lett. 111,
127205 (2013).