FIG 12 - available via license: Creative Commons Attribution 4.0 International
Diagonal elements of the Pauli transfer matrix of the noise map of a single logical qubit during c syndrome-measurement cycles, plotted versus the number of code cycles. Each color corresponds to a different code distance of the logical qubit. Circles are numerical results; dashed lines are fits to an exponentially decaying function.
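The dashed lines in the caption above are fits to an exponential decay. A minimal sketch of such a fit is shown below; the decay model, synthetic data, and fitted lifetime are illustrative assumptions, not values from the paper.

```python
# Illustrative sketch: fitting a Pauli-transfer-matrix diagonal element
# to f(c) = A * exp(-c / tau) over c code cycles. Synthetic data only.
import numpy as np
from scipy.optimize import curve_fit

def decay(c, A, tau):
    """Exponential decay of a PTM diagonal element over c code cycles."""
    return A * np.exp(-c / tau)

# Synthetic "numerical results": assume a characteristic lifetime of
# 40 code cycles plus small Gaussian noise.
cycles = np.arange(1, 31)
rng = np.random.default_rng(0)
data = decay(cycles, 1.0, 40.0) + rng.normal(0.0, 1e-3, cycles.size)

popt, _ = curve_fit(decay, cycles, data, p0=(1.0, 10.0))
A_fit, tau_fit = popt
print(f"A = {A_fit:.3f}, tau = {tau_fit:.1f} code cycles")
```

The fitted tau plays the role of an effective logical lifetime; in the figure, larger code distances correspond to slower decay.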
Source publication
In the early years of fault-tolerant quantum computing (FTQC), it is expected that the available code distance and the number of magic states will be restricted due to the limited scalability of quantum devices and the insufficient computational power of classical decoding units. Here, we integrate quantum error correction and quantum error mitigat...
Similar publications
Practical quantum computing requires robust encoding of logical qubits in physical systems to protect fragile quantum information. Currently, the lack of scalability limits the logical encoding in most physical systems, and thus the high scalability of propagating light can be a game changer for realizing a practical quantum computer. However, prop...
Quantum computers can be protected from noise by encoding the logical quantum information redundantly into multiple qubits using error-correcting codes1,2. When manipulating the logical quantum states, it is imperative that errors caused by imperfect operations do not spread uncontrollably through the quantum register. This requires that all operat...
The idea of semi-quantum has been widely used in recent years in the design of quantum cryptographic schemes. It allows certain participants in quantum protocols to remain classical and execute quantum information processing tasks by using as few quantum resources as possible. In this paper, we propose a fault-tolerant semi-quantum secure direct co...
By encoding logical qubits into specific types of photonic graph states, one can realize quantum repeaters that enable fast entanglement distribution rates approaching classical communication. However, the generation of these photonic graph states requires a formidable resource overhead using traditional approaches based on linear optics. Overcomin...
A powerful feature of stabilizer error correcting codes is the fact that stabilizer measurement projects arbitrary errors to Pauli errors, greatly simplifying the physical error correction process as well as classical simulations of code performance. However, logical non-Clifford operations can map Pauli errors to non-Pauli (Clifford) errors, and w...
Citations
... In Sec. III, we discuss the conventional probabilistic error cancellation technique [26][27][28][29][30][31][32][33][34][35], developed to fully mitigate noise in a quantum circuit, and demonstrate that it can also be employed to partially mitigate the error probabilities of stochastic Pauli noise channels in a controlled manner. We show that our approach can be used to implement stochastic Pauli noise with desired decoherence rates on NISQ devices. ...
... Here we provide a brief overview of PEC technique [26][27][28][29][30][31][32][33][34][35]. For more details, we refer the readers to recent reviews such as Refs. ...
Quantum systems are inherently open and susceptible to environmental noise, which can have both detrimental and beneficial effects on their dynamics. This phenomenon has been observed in biomolecular systems, where noise enables novel functionalities, making the simulation of their dynamics a crucial target for digital and analog quantum simulation. Nevertheless, the computational capabilities of current quantum devices are often limited due to their inherent noise. In this work, we present a novel approach that capitalizes on the intrinsic noise of quantum devices to reduce the computational resources required for simulating open quantum systems. Our approach combines quantum noise characterization methods with quantum error mitigation techniques, enabling us to manipulate and control the intrinsic noise in a quantum circuit. Specifically, we selectively enhance or reduce decoherence rates in the quantum circuit to achieve the desired simulation of open-system dynamics. We provide a detailed description of our methods and report on the results of noise characterization and quantum error mitigation experiments conducted on both real and emulated IBM Quantum computers. Additionally, we estimate the experimental resource requirements for our techniques. Our approach holds the potential to unlock new simulation techniques in noisy intermediate-scale quantum devices, harnessing their intrinsic noise to enhance quantum computations.
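The PEC technique referenced in the excerpts above can be made concrete on the simplest case. The following sketch (my own illustration in the Pauli-transfer-matrix picture, not code from the cited papers) inverts a single-qubit depolarizing channel as a quasi-probability mixture of Pauli conjugations; the error probability p is an arbitrary choice.

```python
# PEC sketch: invert a single-qubit depolarizing channel in the
# Pauli-transfer-matrix (PTM) representation.
import numpy as np

p = 0.1                      # assumed depolarizing error probability
f = 1.0 - 4.0 * p / 3.0      # PTM damping factor: channel = diag(1, f, f, f)

noise = np.diag([1.0, f, f, f])

# PTMs of conjugation by I, X, Y, Z (exact, from Pauli algebra).
conj = {
    "I": np.diag([1.0,  1.0,  1.0,  1.0]),
    "X": np.diag([1.0,  1.0, -1.0, -1.0]),
    "Y": np.diag([1.0, -1.0,  1.0, -1.0]),
    "Z": np.diag([1.0, -1.0, -1.0,  1.0]),
}

# Quasi-probability weights solving
#   a * conj_I + b * (conj_X + conj_Y + conj_Z) = noise^{-1}.
a = (1.0 + 3.0 / f) / 4.0
b = (1.0 - 1.0 / f) / 4.0    # negative: the hallmark of a quasi-probability

inverse = a * conj["I"] + b * (conj["X"] + conj["Y"] + conj["Z"])
gamma = abs(a) + 3 * abs(b)  # sampling-overhead factor
print(f"a = {a:.4f}, b = {b:.4f}, gamma = {gamma:.4f}")
```

Scaling the negative weight b down instead of using it fully corresponds to the partial mitigation idea in the excerpt: the residual channel is again depolarizing, with a reduced effective error rate.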
... In the context of fault-tolerant quantum computers [6], magic state distillation (MSD) requires multiple copies of a noisy state to create one state with reduced noise. MSD is the most expensive process required to run fault-tolerant quantum computers [7] and it is imperative to substantially reduce the cost of MSD to make early fault-tolerant quantum computers practically viable [8][9][10][11][12][13][14]. ...
Reducing noise in quantum systems is a major challenge towards the application of quantum technologies. Here, we propose and demonstrate a scheme to reduce noise using a quantum autoencoder with rigorous performance guarantees. The quantum autoencoder learns to compress noisy quantum states into a latent subspace and removes noise via projective measurements. We find various noise models where we can perfectly reconstruct the original state even for high noise levels. We apply the autoencoder to cool thermal states to the ground state and reduce the cost of magic state distillation by several orders of magnitude. Our autoencoder can be implemented using only unitary transformations without ancillas, making it immediately compatible with the state of the art. We experimentally demonstrate our methods to reduce noise in a photonic integrated circuit. Our results can be directly applied to make quantum technologies more robust to noise.
... One critical challenge for VQA parameter optimization is quantum noise [21][22][23], which limits their capabilities and introduces additional complexities to parameter optimization. Modeling and mitigating hardware noise is a core part of Noisy Intermediate-Scale Quantum (NISQ) algorithms [24][25][26]. Quantifying and improving the reliability and robustness of a VQA has been an important task and has gained increasing attention recently. To name a few, machine learning methods have been used to estimate the reliability of a quantum circuit [27]; noise-aware ansatz design methodologies [28] and robust circuit realization from a lower-level abstraction [29,30] have also been investigated. ...
Given their potential to demonstrate near-term quantum advantage, variational quantum algorithms (VQAs) have been extensively studied. Although numerous techniques have been developed for VQA parameter optimization, it remains a significant challenge. A practical issue is the high sensitivity of quantum noise to environmental changes, and its propensity to shift in real time. This presents a critical problem as an optimized VQA ansatz may not perform effectively under a different noise environment. For the first time, we explore how to optimize VQA parameters to be robust against unknown shifted noise. We model the noise level as a random variable with an unknown probability density function (PDF), and we assume that the PDF may shift within an uncertainty set. This assumption guides us to formulate a distributionally robust optimization problem, with the goal of finding parameters that maintain effectiveness under shifted noise. We utilize a distributionally robust Bayesian optimization solver for our proposed formulation. This provides numerical evidence in both the Quantum Approximate Optimization Algorithm (QAOA) and the Variational Quantum Eigensolver (VQE) with hardware-efficient ansatz, indicating that we can identify parameters that perform more robustly under shifted noise. We regard this work as the first step towards improving the reliability of VQAs influenced by real-time noise.
... While the performance of these algorithms is limited by barren plateaus [66], i.e., the problem of exponentially vanishing gradients with the system size, in recent years tools have been developed to study and mitigate this phenomenon [67,68]. In addition, error mitigation has been shown to offer significant improvements when noise is an issue [69][70][71], and approaches inspired by FTQC have resulted in partial error correction schemes developed for NISQ devices [72,73]. ...
Quantum batteries are predicted to have the potential to outperform their classical counterparts and are therefore an important element in the development of quantum technologies. In this work we simulate the charging process and work extraction of many-body quantum batteries on noisy-intermediate scale quantum (NISQ) devices, and devise the Variational Quantum Ergotropy (VQErgo) algorithm which finds the optimal unitary operation that maximises work extraction from the battery. We test VQErgo by calculating the ergotropy of a quantum battery undergoing transverse field Ising dynamics. We investigate the battery for different system sizes and charging times and analyze the minimum required circuit depth of the variational optimization using both ideal and noisy simulators. Finally, we optimize part of the VQErgo algorithm and calculate the ergotropy on one of IBM's quantum devices.
... The benefit of our LVQC-based method is that it can reduce the circuit depth needed to accurately calculate the Green's function on a broad range of quantum computers, from NISQ devices to FTQCs. Reducing the circuit depth is crucial for NISQ devices (and even for early FTQCs [32][33][34][35][36][37]) to complete the computation within the coherence time and to alleviate the accumulation of gate errors. The reduction of the circuit depth is also essential for ideal FTQCs to reduce the total simulation time. ...
Computation of the Green's function is crucial to study the properties of quantum many-body systems such as strongly correlated systems. Although the high-precision calculation of the Green's function is a notoriously challenging task on classical computers, the development of quantum computers may enable us to compute the Green's function with high accuracy even for classically-intractable large-scale systems. Here, we propose an efficient method to compute the real-time Green's function based on the local variational quantum compilation (LVQC) algorithm, which simulates the time evolution of a large-scale quantum system using a low-depth quantum circuit constructed through optimization on a smaller-size subsystem. Our method requires shallow quantum circuits to calculate the Green's function and can be utilized on both near-term noisy intermediate-scale and long-term fault-tolerant quantum computers depending on the computational resources we have. We perform a numerical simulation of the Green's function for the one- and two-dimensional Fermi-Hubbard model up to 4×4 sites lattice (32 qubits) and demonstrate the validity of our protocol compared to a standard method based on the Trotter decomposition. We finally present a detailed estimation of the gate count for the large-scale Fermi-Hubbard model, which also illustrates the advantage of our method over the Trotter decomposition.
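The abstract above benchmarks LVQC against the Trotter decomposition, whose depth-accuracy trade-off can be seen on a toy example. The following sketch (my own illustration, not the paper's model or method) shows first-order Trotter error for a single-qubit Hamiltonian H = X + Z, where more Trotter steps mean a deeper circuit but a smaller error.

```python
# Toy illustration of first-order Trotter error for H = X + Z on one qubit.
import numpy as np
from scipy.linalg import expm

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
H = X + Z
t = 1.0

exact = expm(-1j * H * t)  # exact time evolution

def trotter(n):
    """n first-order Trotter steps: (e^{-iXt/n} e^{-iZt/n})^n."""
    step = expm(-1j * X * t / n) @ expm(-1j * Z * t / n)
    return np.linalg.matrix_power(step, n)

# Spectral-norm error vs. number of steps; shrinks roughly as O(t^2 / n).
errors = [np.linalg.norm(trotter(n) - exact, 2) for n in (1, 10, 100)]
print(errors)
```

The O(t^2 / n) scaling is why long-time evolution demands deep Trotter circuits, motivating low-depth compilations such as LVQC.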
... The most promising implementations of qubits keep them detuned from the environment and from other states, operating each qubit at a unique and isolated transition frequency, believed to cause only small incoherent disturbances. The assumption of two-level states is necessary for any successful fault-tolerant quantum computation performed on noisy devices, as the error mitigation relies on the controlled space of noise [1][2][3]. On the other hand, the potential contribution of external states can lead to systematic errors, which are hard to correct [4][5][6][7]. Such a leakage has been directly observed in the delay test [8] at the level of 3.5 variances, but its origin has not yet been identified. ...
We demonstrate an implementation of a precise test of dimension on a qubit, using the public IBM quantum computer and the determinant dimension witness. The accuracy is below 10⁻³ compared to the maximal possible value of the witness in higher dimension. The test, involving minimal independent sets of preparation and measurement operations (gates), is applied both for specific configurations and for parametric ones. The test is robust against nonidealities such as incoherent leakage and erroneous gate execution. Two of the IBM devices failed the test by more than 5 standard deviations, which has no simple explanation.
... All roadmaps toward practical quantum computing focus on finding ways to suppress errors and increase the number of logical qubits available. Whereas the long-term goal is to achieve fault-tolerant quantum computing by implementing qubit-demanding error-correcting codes and diminishing the noise below a certain threshold [1][2][3][4], near-term computing uses all physical qubits as logical ones and significantly relies on error mitigation techniques compensating the detrimental noise effects in medium-depth quantum circuits [5]. The latter approach attracts increasing attention in view of prospects for advantageous quantum simulations of molecules and binding affinities between chemical compounds [6][7][8] as well as complex quantum dynamics [9][10][11][12]. ...
... The multiplication and compression of MPOs are well known [30][31][32][33][34][35][36] and routinely implemented in popular computation packages, e.g., in Quimb [47]; here we review them for the sake of completeness. [Panel labels of FIG. S6 omitted.] ...
Before fault-tolerance becomes implementable at scale, quantum computing will heavily rely on noise mitigation techniques. While methods such as zero noise extrapolation with probabilistic error amplification (ZNE-PEA) and probabilistic error cancellation (PEC) have been successfully tested on hardware recently, their scalability to larger circuits may be limited. Here, we introduce the tensor-network error mitigation (TEM) algorithm, which acts in post-processing to correct the noise-induced errors in estimations of physical observables. The method consists of the construction of a tensor network representing the inverse of the global noise channel affecting the state of the quantum processor, and the subsequent application of this map to informationally complete measurement outcomes obtained from the noisy state. TEM therefore does not require additional quantum operations other than the implementation of informationally complete POVMs, which can be achieved through randomised local measurements. The key advantage of TEM is that the measurement overhead is quadratically smaller than in PEC. We test TEM extensively in numerical simulations in different regimes. We find that TEM can be used in circuits twice as deep as PEC in realistic conditions with the sparse Pauli-Lindblad noise, such as those in E. van den Berg et al., Nat. Phys. (2023). By using Clifford circuits, we explore the capabilities of the method in wider and deeper circuits with lower noise levels. We find that in the case of 100 qubits and depth 100, both PEC and ZNE fail to produce accurate results by using $\sim 10^5$ shots, while TEM does.
... The extent to which error mitigation and error-aware circuits can approach quantum advantage is of significant interest in demonstrating useful, if limited, NISQ quantum applications. Additionally, mitigating and understanding the sources and types of runtime or algorithmic errors may allow for reducing the threshold for which fault-tolerant quantum algorithms can be applied [61,62] by lowering the target total fidelity at the cost of additional sampling overhead or runtime overhead. ...
Dynamical mean-field theory (DMFT) maps the local Green's function of the Hubbard model to that of the Anderson impurity model and thus gives an approximate solution of the Hubbard model from the solution of a simpler quantum impurity model. Accurate solutions to the Anderson impurity model nonetheless become intractable for large systems. Quantum and hybrid quantum-classical algorithms have been proposed to efficiently solve impurity models by preparing and evolving the ground state under the impurity Hamiltonian on a quantum computer that is assumed to have the scalability and accuracy far beyond the current state-of-the-art quantum hardware. As a proof of principle demonstration targeting the Anderson impurity model we, for the first time, close the DMFT loop with current noisy hardware. With a highly optimized fast-forwarding quantum circuit and a noise-resilient spectral analysis we observe both the metallic and Mott-insulating phases. Based on a Cartan decomposition, our algorithm gives a fixed depth, fast-forwarding, quantum circuit that can evolve the initial state over arbitrarily long times without time-discretization errors typical of other product decomposition formulas such as Trotter decomposition. By exploiting the structure of the fast-forwarding circuits we reduce the gate count (to 77 cnots after optimization), simulate the dynamics, and extract frequencies from the Anderson impurity model on noisy quantum hardware. We then demonstrate the Mott transition by mapping both phases of the metal-insulator phase diagram. Near the Mott phase transition, our method maintains accuracy where the Trotter error would otherwise dominate due to the long-time evolution required to resolve quasiparticle resonance frequency extremely close to zero. 
This work presents the first computation on both sides of the Mott phase transition using noisy digital quantum hardware, made viable by a highly optimized computation in terms of gate depth, simulation error, and runtime on quantum hardware. To inform future computations we analyze the accuracy of our method versus a noisy Trotter evolution in the time domain. Both algebraic circuit decompositions and error mitigation techniques adopted could be applied in an attempt to solve other correlated electronic phenomena beyond DMFT on noisy quantum computers.
... Thus, it is not possible to generate PRSs on noisy intermediate-scale quantum computers [60]. Further, early fault-tolerant quantum computers [61,62], i.e. quantum computers with limited error correction where not all errors are exponentially suppressed, are expected to be unable to generate PRSs. ...
Pseudorandom quantum states (PRSs) and pseudorandom unitaries (PRUs) possess the dual nature of being efficiently constructible while appearing completely random to any efficient quantum algorithm. In this study, we establish fundamental bounds on pseudorandomness. We show that PRSs and PRUs exist only when the probability that an error occurs is negligible, ruling out their generation on noisy intermediate-scale and early fault-tolerant quantum computers. Additionally, we derive lower bounds on the imaginarity and coherence of PRSs and PRUs, rule out the existence of sparse or real PRUs, and show that PRUs are more difficult to generate than PRSs. Our work also establishes rigorous bounds on the efficiency of property testing, demonstrating the exponential complexity in distinguishing real quantum states from imaginary ones, in contrast to the efficient measurability of unitary imaginarity. Furthermore, we prove lower bounds on the testing of coherence. Lastly, we show that the transformation from a complex to a real model of quantum computation is inefficient, in contrast to the reverse process, which is efficient. Overall, our results establish fundamental limits on property testing and provide valuable insights into quantum pseudorandomness.
... Quantum error mitigation (QEM) [1][2][3][4][5] broadly refers to the class of techniques designed to deal with errors on near-term quantum computers where the number of accessible qubits is too low to enable quantum error correction. Such techniques typically attempt to mitigate errors by running the same circuit a large number of times on a noisy device and performing some classical post-analysis, for example calculating an empirical estimate of an expectation value using zero-noise Richardson extrapolation [6][7][8][9] or 'averaging out' errors using probabilistic error cancellation [3,7,[10][11][12] or virtual distillation [13][14][15][16][17]. ...
We present a scalable and modular error mitigation protocol for running $\mathsf{BQP}$ computations on a quantum computer with time-dependent noise. Utilising existing tools from quantum verification, our framework interleaves standard computation rounds alongside test rounds for error-detection and inherits a local-correctness guarantee which exponentially bounds (in the number of circuit runs) the probability that a returned classical output is correct. On top of the verification work, we introduce a post-selection technique we call basketing to address time-dependent noise behaviours and reduce overhead. The result is a first-of-its-kind error mitigation protocol which is exponentially effective and requires minimal noise assumptions, making it straightforwardly implementable on existing, NISQ devices and scalable to future, larger ones.
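Among the QEM techniques surveyed in the excerpt above, zero-noise Richardson extrapolation is simple enough to sketch in a few lines. The noise model and the noisy expectation values below are synthetic assumptions; on hardware, the values at noise scales greater than 1 come from deliberately amplifying the device noise.

```python
# Minimal sketch of zero-noise extrapolation (ZNE) with synthetic data.
import numpy as np

def noisy_expectation(scale, ideal=1.0):
    # Toy noise model (an assumption): the signal decays exponentially
    # with the noise scale factor.
    return ideal * np.exp(-0.2 * scale)

scales = np.array([1.0, 2.0, 3.0])
values = noisy_expectation(scales)

# Richardson extrapolation: evaluate at scale 0 the polynomial that
# interpolates the (scale, value) pairs.
coeffs = np.polyfit(scales, values, deg=len(scales) - 1)
zne_estimate = np.polyval(coeffs, 0.0)
print(f"raw (scale 1): {values[0]:.4f}, ZNE estimate: {zne_estimate:.4f}")
```

The extrapolated estimate sits much closer to the ideal value than the raw scale-1 measurement, at the cost of extra circuit executions at amplified noise, which is the sampling-overhead trade-off common to the QEM methods discussed above.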