Preprint

Q3DE: A fault-tolerant quantum computer architecture for multi-bit burst errors by cosmic rays


Abstract

Demonstrating small error rates by integrating quantum error correction (QEC) into a quantum computing architecture is the next milestone towards scalable fault-tolerant quantum computing (FTQC). Encoding logical qubits with superconducting qubits and surface codes is considered a promising candidate for FTQC architectures. In this paper, we propose an FTQC architecture, which we call Q3DE, that enhances tolerance to multi-bit burst errors (MBBEs) caused by cosmic rays with moderate changes and overhead. Q3DE has three core components: in-situ anomaly DEtection, dynamic code DEformation, and optimized error DEcoding. In this architecture, MBBEs are detected solely from the syndrome values used for error correction. The effect of MBBEs is immediately mitigated by dynamically increasing the encoding level of logical qubits and re-estimating the probable recovery operation by rolling back the decoding process. We investigate the performance and overhead of the Q3DE architecture with quantum-error simulators and demonstrate that Q3DE effectively reduces the duration of MBBE effects by a factor of 1000 and halves the size of the affected region. Q3DE therefore significantly relaxes the requirements on qubit density and qubit-chip size for realizing FTQC. Our scheme is versatile for mitigating MBBEs, i.e., temporal variations of error properties, on a wide range of physical devices and FTQC architectures, since it relies only on standard features of topological stabilizer codes.
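The in-situ anomaly detection component works only from syndrome values, exploiting the fact that a cosmic-ray impact fires many syndrome bits at once in a spatially clustered region, while normal rounds fire them sparsely. A minimal illustrative sketch of that idea (a hypothetical threshold detector, not the paper's actual algorithm):

```python
import random

def detect_anomaly(syndrome_grid, threshold=0.2):
    """Flag a likely multi-bit burst error (MBBE) when the fraction
    of fired syndrome bits exceeds `threshold`. Normal rounds fire
    sparsely and independently; a cosmic-ray impact flips many
    detectors at once in a clustered region."""
    fired = sum(bit for row in syndrome_grid for bit in row)
    total = sum(len(row) for row in syndrome_grid)
    return fired / total > threshold

random.seed(0)  # deterministic toy data

# Quiet round: sparse detection events at a ~1% physical error rate.
quiet = [[1 if random.random() < 0.01 else 0 for _ in range(10)]
         for _ in range(10)]

# Burst round: the same lattice with a 5x5 region lit up by an impact.
burst = [row[:] for row in quiet]
for r in range(2, 7):
    for c in range(2, 7):
        burst[r][c] = 1
```

A production detector would also use the temporal persistence and spatial clustering of the fired bits rather than a bare count.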


References

Article
To successfully execute large-scale algorithms, a quantum computer will need to perform its elementary operations near perfectly. This is a fundamental challenge since all physical qubits suffer a considerable level of noise. Moreover, real systems are likely to have a finite yield, i.e., some nonzero proportion of the components in a complex device may be irredeemably broken at the fabrication stage. We present a threshold theorem showing that an arbitrarily large quantum computation can be completed with a vanishing probability of failure using a two-dimensional array of noisy qubits with a finite density of fabrication defects. To complete our proof we introduce a robust protocol to measure high-weight stabilizers to compensate for large regions of inactive qubits. We obtain our result using a surface-code architecture. Our approach is therefore readily compatible with ongoing experimental efforts to build a large-scale quantum computer.
Article
Identifying, quantifying, and suppressing decoherence mechanisms in qubits are important steps towards the goal of engineering a quantum computer or simulator. Superconducting circuits offer flexibility in qubit design; however, their performance is adversely affected by quasiparticles (broken Cooper pairs). Developing a quasiparticle mitigation strategy compatible with scalable, high-coherence devices is therefore highly desirable. Here we experimentally demonstrate how to control quasiparticle generation by downsizing the qubit, capping it with a metallic cover, and equipping it with suitable quasiparticle traps. Using a flip-chip design, we shape the electromagnetic environment of the qubit above the superconducting gap, inhibiting quasiparticle poisoning. Our findings support the hypothesis that quasiparticle generation is dominated by the breaking of Cooper pairs at the junction, as a result of photon absorption by the antenna-like qubit structure. We achieve a record-low charge-parity switching rate (<1 Hz). Our aluminium devices also display improved stability with respect to discrete charging events.
Article
Quantum error correction can preserve quantum information in the presence of local errors, but correlated errors are fatal. For superconducting qubits, high-energy particle impacts from background radioactivity produce energetic phonons that travel throughout the substrate and create excitations above the superconducting ground state, known as quasiparticles, which can poison all qubits on the chip. We use normal metal reservoirs on the chip back side to downconvert phonons to low energies where they can no longer poison qubits. We introduce a pump-probe scheme involving controlled injection of pair-breaking phonons into the qubit chips. We examine quasiparticle poisoning on chips with and without back-side metallization and demonstrate a reduction in the flux of pair-breaking phonons by over a factor of 20. We use a Ramsey interferometer scheme to simultaneously monitor quasiparticle parity on three qubits for each chip and observe a two-order-of-magnitude reduction in correlated poisoning due to background radiation. High-energy particle impacts due to background or cosmic radiation have been identified as sources of correlated errors in superconducting qubit arrays. Iaia et al. achieve a suppression of the correlated error rate by channeling the energy away from the qubits via a thick metal layer at the bottom of the chip.
Article
The inevitable accumulation of errors in near-future quantum devices represents a key obstacle in delivering practical quantum advantages, motivating the development of various quantum error-mitigation methods. Here, we derive fundamental bounds concerning how error-mitigation algorithms can reduce the computation error as a function of their sampling overhead. Our bounds place universal performance limits on a general error-mitigation protocol class. We use them to show (1) that the sampling overhead that ensures a certain computational accuracy for mitigating local depolarizing noise in layered circuits scales exponentially with the circuit depth for general error-mitigation protocols and (2) the optimality of probabilistic error cancellation among a wide class of strategies in mitigating the local dephasing noise on an arbitrary number of qubits. Our results provide a means to identify when a given quantum error-mitigation strategy is optimal and when there is potential room for improvement.
Article
Quantum computers hold the promise of solving computational problems that are intractable using conventional methods1. For fault-tolerant operation, quantum computers must correct errors occurring owing to unavoidable decoherence and limited control accuracy2. Here we demonstrate quantum error correction using the surface code, which is known for its exceptionally high tolerance to errors3–6. Using 17 physical qubits in a superconducting circuit, we encode quantum information in a distance-three logical qubit, building on recent distance-two error-detection experiments7–9. In an error-correction cycle taking only 1.1 μs, we demonstrate the preservation of four cardinal states of the logical qubit. Repeatedly executing the cycle, we measure and decode both bit-flip and phase-flip error syndromes using a minimum-weight perfect-matching algorithm in an error-model-free approach and apply corrections in post-processing. We find a low logical error probability of 3% per cycle when rejecting experimental runs in which leakage is detected. The measured characteristics of our device agree well with a numerical model. Our demonstration of repeated, fast and high-performance quantum error-correction cycles, together with recent advances in ion traps10, support our understanding that fault-tolerant quantum computation will be practically realizable. By using 17 physical qubits in a superconducting circuit to encode quantum information in a surface-code logical qubit, fast (1.1 μs) and high-performance (logical error probability of 3%) quantum error-correction cycles are demonstrated.
Article
Surface ion traps are among the most promising technologies for scaling up quantum computing machines, but their complicated multi-electrode geometry can make some tasks, including compensation for stray electric fields, challenging both at the level of modeling and of practical implementation. Here we demonstrate the compensation of stray electric fields using a gradient descent algorithm and a machine learning technique based on a trained deep neural network. We show automated dynamical compensation tested against induced electric charging from UV laser light hitting the chip-trap surface. The results show improvement in compensation using gradient descent and the machine learner over manual compensation. This improvement is inferred from increases in the fluorescence rate of 78% and 96%, respectively, for a trapped 171Yb+ ion driven by a laser detuned by -7.8 MHz from the 2S1/2↔2P1/2 Doppler cooling transition at 369.5 nm.
Article
Lattice-surgery protocols allow for the efficient implementation of universal gate sets with two-dimensional topological codes where qubits are constrained to interact with one another locally. In this work, we first introduce a decoder capable of correcting spacelike and timelike errors during lattice-surgery protocols. Subsequently, we compute the logical failure rates of a lattice-surgery protocol for a biased circuit-level noise model. We then provide a protocol for performing twist-free lattice surgery, where we avoid twist defects in the bulk of the lattice. Our twist-free protocol eliminates the extra circuit components and gate-scheduling complexities associated with the measurement of higher weight stabilizers when using twist defects. We also provide a protocol for temporally encoded lattice surgery that can be used to reduce both the run times and the total space-time costs of quantum algorithms. Lastly, we propose a layout for a quantum processor that is more efficient for rectangular surface codes exploiting noise bias and that is compatible with the other techniques mentioned above.
Article
We describe quantum circuits with only Õ(N) Toffoli complexity that block encode the spectra of quantum chemistry Hamiltonians in a basis of N arbitrary (e.g., molecular) orbitals. With O(λ/ϵ) repetitions of these circuits one can use phase estimation to sample in the molecular eigenbasis, where λ is the 1-norm of Hamiltonian coefficients and ϵ is the target precision. This is the lowest complexity shown for quantum computations of chemistry within an arbitrary basis. Furthermore, up to logarithmic factors, this matches the scaling of the most efficient prior block encodings that can work only with orthogonal-basis functions diagonalizing the Coulomb operator (e.g., the plane-wave dual basis). Our key insight is to factorize the Hamiltonian using a method known as tensor hypercontraction (THC) and then to transform the Coulomb operator into an isospectral diagonal form with a nonorthogonal basis defined by the THC factors. We then use qubitization to simulate the nonorthogonal THC Hamiltonian, in a fashion that avoids most complications of the nonorthogonal basis. We also reanalyze and reduce the cost of several of the best prior algorithms for these simulations in order to facilitate a clear comparison to the present work. In addition to having lower asymptotic space-time volume scaling, compilation of our algorithm for challenging finite-sized molecules such as FeMoCo reveals that our method requires the least fault-tolerant resources of any known approach. By laying out and optimizing the surface-code resources required of our approach we show that FeMoCo can be simulated using about four million physical qubits and under 4 days of runtime, assuming 1-μs cycle times and physical gate-error rates no worse than 0.1%.
Article
The central challenge in building a quantum computer is error correction. Unlike classical bits, which are susceptible to only one type of error, quantum bits (qubits) are susceptible to two types of error, corresponding to flips of the qubit state about the X and Z directions. Although the Heisenberg uncertainty principle precludes simultaneous monitoring of X- and Z-flips on a single qubit, it is possible to encode quantum information in large arrays of entangled qubits that enable accurate monitoring of all errors in the system, provided that the error rate is low1. Another crucial requirement is that errors cannot be correlated. Here we characterize a superconducting multiqubit circuit and find that charge noise in the chip is highly correlated on a length scale over 600 micrometres; moreover, discrete charge jumps are accompanied by a strong transient reduction of qubit energy relaxation time across the millimetre-scale chip. The resulting correlated errors are explained in terms of the charging event and phonon-mediated quasiparticle generation associated with absorption of γ-rays and cosmic-ray muons in the qubit substrate. Robust quantum error correction will require the development of mitigation strategies to protect multiqubit arrays from correlated errors due to particle impacts. Cosmic-ray particles and γ-rays striking superconducting circuits can generate qubit errors that are spatially correlated across several millimetres, hampering current error-correction approaches.
Article
Error-corrected quantum computers can only work if errors are small and uncorrelated. Here, I show how cosmic rays or stray background radiation affects superconducting qubits by modeling the phonon to electron/quasiparticle down-conversion physics. For present designs, the model predicts that about 57% of the radiation energy breaks Cooper pairs into quasiparticles, which then vigorously suppress the qubit energy relaxation time (T1 ≈ 600 ns) over a large area (cm) and for a long time (ms). Such large and correlated decay kills error correction. Using this quantitative model, I show how this energy can be channeled away from the qubit so that this error mechanism can be reduced by many orders of magnitude. I also comment on how this affects other solid-state qubits.
Article
We significantly reduce the cost of factoring integers and computing discrete logarithms in finite fields on a quantum computer by combining techniques from Shor 1994, Griffiths-Niu 1996, Zalka 2006, Fowler 2012, Ekerå-Håstad 2017, Ekerå 2017, Ekerå 2018, Gidney-Fowler 2019, Gidney 2019. We estimate the approximate cost of our construction using plausible physical assumptions for large-scale superconducting qubit platforms: a planar grid of qubits with nearest-neighbor connectivity, a characteristic physical gate error rate of 10^-3, a surface code cycle time of 1 microsecond, and a reaction time of 10 microseconds. We account for factors that are normally ignored such as noise, the need to make repeated attempts, and the spacetime layout of the computation. When factoring 2048-bit RSA integers, our construction's spacetime volume is a hundredfold less than comparable estimates from earlier works (Van Meter et al. 2009, Jones et al. 2010, Fowler et al. 2012, Gheorghiu et al. 2019). In the abstract circuit model (which ignores overheads from distillation, routing, and error correction) our construction uses 3n + 0.002n lg n logical qubits, 0.3n^3 + 0.0005n^3 lg n Toffolis, and 500n^2 + n^2 lg n measurement depth to factor n-bit RSA integers. We quantify the cryptographic implications of our work, both for RSA and for schemes based on the DLP in finite fields.
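As a quick sanity check, the abstract-circuit cost formulas quoted above can be evaluated directly for n = 2048. The formulas are taken from the text; the function name is ours:

```python
from math import log2

def rsa_factoring_costs(n):
    """Evaluate the abstract-circuit cost formulas quoted above
    for factoring an n-bit RSA integer."""
    lg = log2(n)
    logical_qubits = 3 * n + 0.002 * n * lg
    toffolis = 0.3 * n**3 + 0.0005 * n**3 * lg
    meas_depth = 500 * n**2 + n**2 * lg
    return logical_qubits, toffolis, meas_depth

q, t, d = rsa_factoring_costs(2048)
# n = 2048 (lg n = 11): roughly 6.2e3 logical qubits,
# 2.6e9 Toffolis, and 2.1e9 measurement depth.
```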
Article
The superconducting transmon qubit is a leading platform for quantum computing and quantum science. Building large, useful quantum systems based on transmon qubits will require significant improvements in qubit relaxation and coherence times, which are orders of magnitude shorter than limits imposed by bulk properties of the constituent materials. This indicates that relaxation likely originates from uncontrolled surfaces, interfaces, and contaminants. Previous efforts to improve qubit lifetimes have focused primarily on designs that minimize contributions from surfaces. However, significant improvements in the lifetime of two-dimensional transmon qubits have remained elusive for several years. Here, we fabricate two-dimensional transmon qubits that have both lifetimes and coherence times with dynamical decoupling exceeding 0.3 milliseconds by replacing niobium with tantalum in the device. We have observed increased lifetimes for seventeen devices, indicating that these material improvements are robust, paving the way for higher gate fidelities in multi-qubit processors.
Article
Technologies that rely on quantum bits (qubits) require long coherence times and high-fidelity operations¹. Superconducting qubits are one of the leading platforms for achieving these objectives2,3. However, the coherence of superconducting qubits is affected by the breaking of Cooper pairs of electrons4–6. The experimentally observed density of the broken Cooper pairs, referred to as quasiparticles, is orders of magnitude higher than the value predicted at equilibrium by the Bardeen–Cooper–Schrieffer theory of superconductivity7–9. Previous work10–12 has shown that infrared photons considerably increase the quasiparticle density, yet even in the best-isolated systems, it remains much higher¹⁰ than expected, suggesting that another generation mechanism exists¹³. Here we provide evidence that ionizing radiation from environmental radioactive materials and cosmic rays contributes to this observed difference. The effect of ionizing radiation leads to an elevated quasiparticle density, which we predict would ultimately limit the coherence times of superconducting qubits of the type measured here to milliseconds. We further demonstrate that radiation shielding reduces the flux of ionizing radiation and thereby increases the energy-relaxation time. Albeit a small effect for today’s qubits, reducing or mitigating the impact of ionizing radiation will be critical for realizing fault-tolerant superconducting quantum computers.
Article
Recent work has deployed linear combinations of unitaries techniques to reduce the cost of fault-tolerant quantum simulations of correlated electron models. Here, we show that one can sometimes improve upon those results with optimized implementations of Trotter-Suzuki-based product formulas. We show that low-order Trotter methods perform surprisingly well when used with phase estimation to compute relative precision quantities (e.g. energies per unit cell), as is often the goal for condensed-phase systems. In this context, simulations of the Hubbard and plane-wave electronic structure models with N < 10^5 fermionic modes can be performed with roughly O(1) and O(N^2) T complexities. We perform numerics revealing tradeoffs between the error and gate complexity of a Trotter step; e.g., we show that split-operator techniques have less Trotter error than popular alternatives. By compiling to surface code fault-tolerant gates and assuming error rates of one part per thousand, we show that one can error-correct quantum simulations of interesting, classically intractable instances with a few hundred thousand physical qubits.
Article
Recent work has dramatically reduced the gate complexity required to quantum simulate chemistry by using linear combinations of unitaries based methods to exploit structure in the plane wave basis Coulomb operator. Here, we show that one can achieve similar scaling even for arbitrary basis sets (which can be hundreds of times more compact than plane waves) by using qubitized quantum walks in a fashion that takes advantage of structure in the Coulomb operator, either by directly exploiting sparseness, or via a low rank tensor factorization. We provide circuits for several variants of our algorithm (which all improve over the scaling of prior methods), including one with Õ(N^(3/2) λ) T complexity, where N is the number of orbitals and λ is the 1-norm of the chemistry Hamiltonian. We deploy our algorithms to simulate the FeMoco molecule (relevant to nitrogen fixation) and obtain circuits requiring about seven hundred times less surface code spacetime volume than prior quantum algorithms for this system, despite using a larger and more accurate active space.
Article
The promise of quantum computers is that certain computational tasks might be executed exponentially faster on a quantum processor than on a classical processor¹. A fundamental challenge is to build a high-fidelity processor capable of running quantum algorithms in an exponentially large computational space. Here we report the use of a processor with programmable superconducting qubits2–7 to create quantum states on 53 qubits, corresponding to a computational state-space of dimension 2⁵³ (about 10¹⁶). Measurements from repeated experiments sample the resulting probability distribution, which we verify using classical simulations. Our Sycamore processor takes about 200 seconds to sample one instance of a quantum circuit a million times—our benchmarks currently indicate that the equivalent task for a state-of-the-art classical supercomputer would take approximately 10,000 years. This dramatic increase in speed compared to all known classical algorithms is an experimental realization of quantum supremacy8–14 for this specific computational task, heralding a much-anticipated computing paradigm.
Article
Given a quantum gate circuit, how does one execute it in a fault-tolerant architecture with as little overhead as possible? In this paper, we discuss strategies for surface-code quantum computing on small, intermediate and large scales. They are strategies for space-time trade-offs, going from slow computations using few qubits to fast computations using many qubits. Our schemes are based on surface-code patches, which not only feature a low space cost compared to other surface-code schemes, but are also conceptually simple~--~simple enough that they can be described as a tile-based game with a small set of rules. Therefore, no knowledge of quantum error correction is necessary to understand the schemes in this paper, but only the concepts of qubits and measurements.
Article
We construct quantum circuits that exactly encode the spectra of correlated electron models up to errors from rotation synthesis. By invoking these circuits as oracles within the recently introduced “qubitization” framework, one can use quantum phase estimation to sample states in the Hamiltonian eigenbasis with optimal query complexity O(λ/ε), where λ is an absolute sum of Hamiltonian coefficients and ε is the target precision. For both the Hubbard model and electronic structure Hamiltonian in a second quantized basis diagonalizing the Coulomb operator, our circuits have T-gate complexity O(N + log(1/ε)), where N is the number of orbitals in the basis. This scenario enables sampling in the eigenbasis of electronic structure Hamiltonians with T complexity O(N^3/ε + N^2 log(1/ε)/ε). Compared to prior approaches, our algorithms are asymptotically more efficient in gate complexity and require fewer T gates near the classically intractable regime. Compiling to surface code fault-tolerant gates and assuming per-gate error rates of one part in a thousand reveals that one can error correct phase estimation on interesting instances of these problems beyond the current capabilities of classical methods using only about a million superconducting qubits in a matter of hours.
Article
It is vital to minimise the impact of errors for near-future quantum devices that will lack the resources for full fault tolerance. Two quantum error mitigation (QEM) techniques have been introduced recently, namely error extrapolation [Li 2017,Temme 2017] and quasi-probability decomposition [Temme 2017]. To enable practical implementation of these ideas, here we account for the inevitable imperfections in the experimentalist's knowledge of the error model itself. We describe a protocol for systematically measuring the effect of errors so as to design efficient QEM circuits. We find that the effect of localised Markovian errors can be fully eliminated by inserting or replacing some gates with certain single-qubit Clifford gates and measurements. Finally, having introduced an exponential variant of the extrapolation method we contrast the QEM techniques using exact numerical simulation of up to 19 qubits in the context of a 'SWAP test' circuit. Our optimised methods dramatically reduce the circuit's output error without increasing the qubit count or time requirements.
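The error-extrapolation idea mentioned above can be illustrated in a few lines: run the circuit at several artificially amplified noise levels, fit the measured expectation value as a function of the noise scale, and extrapolate to zero noise. A toy sketch with a made-up linear noise model (the paper's exponential variant would instead fit A·exp(-b·s)):

```python
def zero_noise_extrapolation(scales, values):
    """Linear (Richardson-style) extrapolation to the zero-noise
    limit: least-squares fit of expectation value vs. noise-scaling
    factor, evaluated at scale = 0."""
    n = len(scales)
    mean_s = sum(scales) / n
    mean_v = sum(values) / n
    slope = (sum((s - mean_s) * (v - mean_v) for s, v in zip(scales, values))
             / sum((s - mean_s) ** 2 for s in scales))
    return mean_v - slope * mean_s  # intercept at s = 0

# Toy noise model: true value 1.0, decaying linearly with scale s.
scales = [1.0, 1.5, 2.0]
measured = [1.0 - 0.1 * s for s in scales]   # 0.9, 0.85, 0.8
estimate = zero_noise_extrapolation(scales, measured)  # ~1.0
```

In practice the measured values carry shot noise, so the extrapolation amplifies variance; that trade-off is exactly the sampling overhead discussed in the error-mitigation bounds above.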
Article
We study how well topological quantum codes can tolerate coherent noise caused by systematic unitary errors such as unwanted Z-rotations. Our main result is an efficient algorithm for simulating quantum error correction protocols based on the 2D surface code in the presence of coherent errors. The algorithm has runtime O(n^2), where n is the number of physical qubits. It allows us to simulate systems with more than one thousand qubits and obtain the first error threshold estimates for several toy models of coherent noise. Numerical results are reported for storage of logical states subject to Z-rotation errors and for logical state preparation with general SU(2) errors. We observe that for large code distances the effective logical-level noise is well-approximated by random Pauli errors even though the physical-level noise is coherent. Our algorithm works by mapping the surface code to a system of Majorana fermions.
Article
In order to build a large scale quantum computer, one must be able to correct errors extremely fast. We design a fast decoding algorithm for topological codes to correct for Pauli errors and erasure and combination of both errors and erasure. Our algorithm has a worst-case complexity of O(n α(n)), where n is the number of physical qubits and α is the inverse of Ackermann's function, which is very slowly growing. For all practical purposes, α(n) ≤ 3. We prove that our algorithm performs optimally for errors of weight up to (d-1)/2 and for loss of up to d-1 qubits, where d is the minimum distance of the code. Numerically, we obtain a threshold of 9.9% for the 2D toric code with perfect syndrome measurements and 2.6% with faulty measurements.
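The near-linear O(n α(n)) runtime quoted above comes from the classic disjoint-set (union-find) data structure with union by rank and path compression, whose amortized cost per operation is O(α(n)). A self-contained sketch of just that data structure; the decoder's cluster-growth and peeling logic around it is omitted:

```python
class UnionFind:
    """Disjoint-set forest with union by rank and path compression."""

    def __init__(self, n):
        self.parent = list(range(n))
        self.rank = [0] * n

    def find(self, x):
        # Find the root, then compress: point every visited node at it.
        root = x
        while self.parent[root] != root:
            root = self.parent[root]
        while self.parent[x] != root:
            self.parent[x], x = root, self.parent[x]
        return root

    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            return
        # Union by rank: attach the shallower tree under the deeper one.
        if self.rank[ra] < self.rank[rb]:
            ra, rb = rb, ra
        self.parent[rb] = ra
        if self.rank[ra] == self.rank[rb]:
            self.rank[ra] += 1

# Merging syndrome clusters as growing error regions touch:
uf = UnionFind(6)
uf.union(0, 1)
uf.union(1, 2)
same = uf.find(0) == uf.find(2)  # True: one merged cluster
```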
Conference Paper
The Pauli frame mechanism allows Pauli gates to be tracked in classical electronics and can relax the timing constraints for error syndrome measurement and error decoding. When building a quantum computer, such a mechanism may be beneficial, and the goal of this paper is not only to study the working principles of a Pauli frame but also to quantify its potential effect on the logical error rate. To this purpose, we implemented and simulated the Pauli frame module, which, in principle, can be directly mapped into a hardware implementation. Simulation of a surface code 17 logical qubit has shown that a Pauli frame can reduce the error rate of a logical qubit by up to 70% compared to the same logical qubit without a Pauli frame, when the decoding time equals the error correction time and maximum parallelism can be obtained.
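The mechanism itself is simple to express: corrections are recorded as bits in classical memory, composed mod 2 (since X·X = I and Z·Z = I), and only consulted when interpreting later measurement outcomes, so no physical correction gate is ever applied. A minimal illustrative sketch (the function names are ours, not from the paper):

```python
def new_frame(n):
    # Per-qubit record of pending (x, z) corrections, held in
    # classical electronics instead of applied as quantum gates.
    return [[0, 0] for _ in range(n)]

def record_correction(frame, qubit, pauli):
    # Pauli corrections compose mod 2: two X's (or two Z's) cancel.
    if pauli in ("X", "Y"):
        frame[qubit][0] ^= 1
    if pauli in ("Z", "Y"):
        frame[qubit][1] ^= 1

def adjust_measurement(frame, qubit, raw_outcome):
    # A pending X flips a Z-basis outcome; a pending Z leaves it
    # unchanged, so it never needs any action at all.
    return raw_outcome ^ frame[qubit][0]

frame = new_frame(3)
record_correction(frame, 0, "X")
record_correction(frame, 0, "X")      # the two X's cancel
record_correction(frame, 1, "Y")      # records both an X and a Z bit
m0 = adjust_measurement(frame, 0, 1)  # 1: frame on qubit 0 is identity
m1 = adjust_measurement(frame, 1, 0)  # 1: pending X flips the raw 0
```

A real Pauli frame must also commute the recorded Paulis through subsequent Clifford gates, which is again pure classical bookkeeping.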
Article
The control and handling of errors arising from cross talk and unwanted interactions in multiqubit systems is an important issue in quantum information processing architectures. We introduce a benchmarking protocol that provides information about the amount of addressability present in the system and implement it on coupled superconducting qubits. The protocol consists of randomized benchmarking experiments run both individually and simultaneously on pairs of qubits. A relevant figure of merit for the addressability is then related to the differences in the measured average gate fidelities in the two experiments. We present results from two similar samples with differing cross talk and unwanted qubit-qubit interactions. The results agree with predictions based on simple models of the classical cross talk and Stark shifts.
Article
This article provides an introduction to surface code quantum computing. We first estimate the size and speed of a surface code quantum computer. We then introduce the concept of the stabilizer, using two qubits, and extend this concept to stabilizers acting on a two-dimensional array of physical qubits, on which we implement the surface code. We next describe how logical qubits are formed in the surface code array and give numerical estimates of their fault-tolerance. We outline how logical qubits are physically moved on the array, how qubit braid transformations are constructed, and how a braid between two logical qubits is equivalent to a controlled-NOT. We then describe the single-qubit Hadamard, S and T operators, completing the set of required gates for a universal quantum computer. We conclude by briefly discussing physical implementations of the surface code. We include a number of appendices in which we provide supplementary information to the main text.
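The stabilizer concept introduced above reduces, classically, to parity checks: each Z-type stabilizer reports the joint parity of its data qubits, and a single bit-flip error fires exactly the checks adjacent to it. A toy illustration on a 1D bit-flip chain (weight-2 checks; the surface code bulk uses weight-4 checks on a 2D array):

```python
def z_syndrome(data_bits, checks):
    # Each Z-type check reports the parity of its data qubits.
    return [sum(data_bits[q] for q in check) % 2 for check in checks]

# 3 data qubits in a line, two overlapping parity checks.
checks = [(0, 1), (1, 2)]
clean = z_syndrome([0, 0, 0], checks)  # [0, 0]: no error
flag  = z_syndrome([0, 1, 0], checks)  # [1, 1]: X error on qubit 1
edge  = z_syndrome([1, 0, 0], checks)  # [1, 0]: error on a boundary qubit
```

The decoder's job is the inverse problem: given which checks fired, infer the most likely set of errors, e.g. via minimum-weight perfect matching.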
Article
The surface code is unarguably the leading quantum error correction code for 2D nearest neighbor architectures, featuring a high threshold error rate of approximately 1%, low overhead implementations of the entire Clifford group, and flexible, arbitrarily long-range logical gates. These highly desirable features come at the cost of significant classical processing complexity. We show how to perform the processing associated with an n×n lattice of qubits, each being manipulated in a realistic, fault-tolerant manner, in O(n2) average time per round of error correction. We also describe how to parallelize the algorithm to achieve O(1) average processing per round, using only constant computing resources per unit area and local communication. Both of these complexities are optimal.
Article
Trends in terrestrial neutron-induced soft errors in SRAMs from a 250 nm to a 22 nm process are reviewed and predicted using the Monte-Carlo simulator CORIMS, which is validated to have less than 20% variation from experimental soft-error data on 180-130 nm SRAMs in a wide variety of neutron fields, including field tests at low and high altitudes and accelerator tests at LANSCE, TSL, and CYRIC. The following results are obtained: 1) soft-error rates per device in SRAMs will increase by a factor of 6-7 from the 130 nm to the 22 nm process; 2) as SRAM is scaled down to a smaller size, the soft-error rate is dominated more significantly by low-energy neutrons (< 10 MeV); and 3) the area affected by one nuclear reaction spreads over 1 Mbit, and the bit multiplicity of multi-cell upsets becomes as high as 100 bits or more.
Article
The ideal superconductor provides a pristine environment for the delicate states of a quantum computer: because there is an energy gap to excitations, there are no spurious modes with which the qubits can interact, causing irreversible decay of the quantum state. As a practical matter, however, there exists a high density of excitations out of the superconducting ground state even at ultralow temperature; these are known as quasiparticles. Observed quasiparticle densities are of order 1 μm−3, tens of orders of magnitude greater than the equilibrium density expected from theory. Nonequilibrium quasiparticles extract energy from the qubit mode and can induce dephasing. Here we show that a dominant mechanism for quasiparticle poisoning is direct absorption of high-energy photons at the qubit junction. We use a Josephson junction-based photon source to controllably dose qubit circuits with millimeter-wave radiation, and we use an interferometric quantum gate sequence to reconstruct the charge parity of the qubit. We find that the structure of the qubit itself acts as a resonant antenna for millimeter-wave radiation, providing an efficient path for photons to generate quasiparticles. A deep understanding of this physics will pave the way to realization of next-generation superconducting qubits that are robust against quasiparticle poisoning.
Article
Quantum error correction holds the key to scaling up quantum computers. Cosmic ray events severely impact the operation of a quantum computer by causing chip-level catastrophic errors, essentially erasing the information encoded in a chip. Here, we present a distributed error correction scheme to combat the devastating effect of such events by introducing an additional layer of quantum erasure error correcting code across separate chips. We show that our scheme is fault tolerant against chip-level catastrophic errors and discuss its experimental implementation using superconducting qubits with microwave links. Our analysis shows that in state-of-the-art experiments, it is possible to suppress the rate of these errors from 1 per 10 s to less than 1 per month.
Article
Minimizing the micromotion of a trapped ion in a linear Paul trap is of great importance in maintaining long coherence time as well as implementing quantum logic gates with high fidelity, which is crucial for large-scale quantum computation with trapped ions. Here, by applying the RF (radio frequency)-photon correlation technique, we demonstrate that a machine learning method based on artificial neural networks can quickly search for optimal voltage settings of the electrodes to minimize the trapped ion's micromotion. This machine learning assisted RF-photon correlation technique can be straightforwardly applied to more complicated surface ion traps with many electrodes, where the manual minimization of the excess micromotion generated by stray electric fields would become extremely challenging for the larger number of electrodes with various voltage settings. Instead, the presented machine learning assisted method provides an effective and automatic way to address this need.
Article
We describe the design, commissioning, and operation of an ultra-low-vibration closed-cycle cryogenic ion trap apparatus. One hundred lines for low-frequency signals and eight microwave/radio frequency coaxial feed-lines offer the possibility of implementing a small-scale ion-trap quantum processor or simulator. With all supply cables attached, more than 1.3 W of cooling power at 5 K is still available for absorbing energy from electrical pulses introduced to control ions. The trap itself is isolated from vibrations induced by the cold head using a helium exchange gas interface. The performance of the vibration isolation system has been characterized using a Michelson interferometer, finding residual vibration amplitudes on the order of 10 nm rms. Trapping of ⁹Be⁺ ions has been demonstrated using a combination of laser ablation and photoionization.
Article
Quantum error correction requires decoders that are both accurate and efficient. To this end, union-find decoding has emerged as a promising candidate for error correction on the surface code. In this work, we benchmark a weighted variant of the union-find decoder on the toric code under circuit-level depolarizing noise. This variant preserves the almost-linear time complexity of the original while significantly increasing the performance in the fault-tolerance setting. In this noise model, weighting the union-find decoder increases the threshold from 0.38% to 0.62%, compared to an increase from 0.65% to 0.72% when weighting a matching decoder. Further assuming quantum nondemolition measurements, weighted union-find decoding achieves a threshold of 0.76% compared to the 0.90% threshold when matching. We additionally provide comparisons of timing as well as low error rate behavior.
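Union-find decoding takes its name from the classic disjoint-set (union-find) data structure, which lets clusters of syndrome defects be merged in near-constant amortized time. A minimal sketch of that core structure, with path halving and union by rank (illustrative only — the benchmarked decoder additionally grows clusters over the lattice and extracts corrections):

```python
class DisjointSet:
    """Union-find with path halving and union by rank."""

    def __init__(self, n):
        self.parent = list(range(n))
        self.rank = [0] * n

    def find(self, x):
        # Path halving: point every other node directly at its grandparent.
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]
            x = self.parent[x]
        return x

    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            return
        # Attach the shallower tree under the deeper one.
        if self.rank[ra] < self.rank[rb]:
            ra, rb = rb, ra
        self.parent[rb] = ra
        if self.rank[ra] == self.rank[rb]:
            self.rank[ra] += 1


# Merging defect clusters: nodes 0-1-2 form one cluster, 3-4 another.
ds = DisjointSet(5)
ds.union(0, 1)
ds.union(1, 2)
ds.union(3, 4)
```

The weighted variant discussed in the article changes how clusters grow across edges of different error probability, not this underlying merge structure.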
Article
Quantum computers can solve problems that are inefficiently solved by classical computers, such as integer factorization. A fully programmable quantum computer requires a quantum control microarchitecture that connects the quantum software and hardware. Previous research has proposed a Quantum Instruction Set Architecture (QISA) and a quantum control microarchitecture targeting Noisy Intermediate-Scale Quantum (NISQ) devices without fault tolerance. However, fault-tolerant (FT) quantum computing requires FT implementation of logical operations and repeated quantum error correction, possibly at runtime. Though highly patterned, the number of physical operations required to perform logical operations is large and cannot be executed well by existing quantum control microarchitectures. In this paper, we propose a control microarchitecture that can efficiently support fault-tolerant quantum computing based on the rotated planar surface code with logical operations implemented by lattice surgery. It highlights a two-level address mechanism, which enables a clean compilation model for a large number of qubits, and microarchitectural support for quantum error correction at runtime, which can significantly reduce the quantum program code size and provide better scalability.
Article
This article proposes a quantum microarchitecture, QuMA. Flexible programmability of a quantum processor is achieved by multilevel instruction decoding, abstracting analog control into digital control, and translating instruction execution with non-deterministic timing into event triggers with precise timing. QuMA is validated by several single-qubit experiments on a superconducting qubit.
Article
In order to realize fault-tolerant quantum computation, tight evaluation of the error threshold under practical noise models is essential. While non-Clifford noise is ubiquitous in experiments, the error threshold under non-Clifford noise cannot be efficiently treated with known approaches. We construct an efficient scheme for estimating the error threshold of the one-dimensional quantum repetition code under non-Clifford noise. To this end, we employ a non-unitary free-fermionic formalism for efficient simulation of the one-dimensional repetition code under coherent noise. This allows us to evaluate the effect of noise coherence on the error threshold without any approximation. The result shows that the error threshold becomes one third when the noise is fully coherent. The dependence of the error threshold on noise coherence can be explained with a leading-order analysis with respect to the coherence terms in the noise map. We expect that this analysis is also valid for the surface code, since it is a two-dimensional extension of the one-dimensional repetition code. Moreover, since the obtained threshold is accurate, our results can be used as a benchmark for approximation or heuristic schemes for non-Clifford noise.
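As a point of reference for the incoherent case the article compares against: a distance-d repetition code under stochastic i.i.d. bit-flip noise fails exactly when more than d//2 bits flip, which is straightforward to estimate by Monte Carlo. This classical sketch cannot capture the coherent-noise effects the article analyzes; it only illustrates the baseline threshold behavior (function name and parameters are illustrative):

```python
import random


def logical_error_rate(d, p, trials=20000, seed=1):
    """Monte Carlo estimate of the logical error rate for a distance-d
    repetition code under i.i.d. bit-flip noise with majority-vote
    decoding: a logical error occurs when more than d // 2 bits flip."""
    rng = random.Random(seed)
    fails = 0
    for _ in range(trials):
        flips = sum(rng.random() < p for _ in range(d))
        if flips > d // 2:
            fails += 1
    return fails / trials


# Below the (incoherent) threshold p = 0.5, increasing the distance
# suppresses the logical error rate.
rate_d3 = logical_error_rate(3, 0.1)
rate_d7 = logical_error_rate(7, 0.1)
```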
Article
Two schemes are presented that mitigate the effect of errors and decoherence in short-depth quantum circuits. The size of the circuits to which these techniques can be applied is limited by the rate at which errors are introduced into the computation. Near-term applications of early quantum devices, such as quantum simulations, rely on accurate estimates of expectation values to become relevant. Decoherence and gate errors lead to wrong estimates of the expectation values of observables used to evaluate the noisy circuit. The two schemes we discuss are deliberately simple and do not require additional qubit resources, so as to be as practically relevant as possible in current experiments. The first method, extrapolation to the zero-noise limit, cancels successive powers of the noise perturbation by an application of Richardson's deferred approach to the limit. The second method cancels errors by resampling randomized circuits according to a quasi-probability distribution.
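The first method can be illustrated numerically: expectation values are measured at several amplified noise scales, a polynomial is fit through them, and the fit is evaluated at zero noise. A minimal sketch with a hypothetical noise model (the function name and toy coefficients are illustrative, not from the article):

```python
import numpy as np


def richardson_zero_noise(scales, values):
    """Richardson extrapolation to the zero-noise limit: fit a polynomial
    of degree len(scales) - 1 through the measured (scale, value) pairs
    and evaluate it at scale 0."""
    coeffs = np.polyfit(scales, values, deg=len(scales) - 1)
    return np.polyval(coeffs, 0.0)


# Toy noisy expectation value: ideal value 1.0, perturbed by terms
# linear and quadratic in the noise scale c (assumed model, for
# illustration only).
ideal = 1.0
noisy = lambda c: ideal - 0.12 * c + 0.01 * c**2

scales = [1.0, 2.0, 3.0]
estimate = richardson_zero_noise(scales, [noisy(c) for c in scales])
```

With three noise scales the fit is exact for this quadratic toy model, so the extrapolation recovers the ideal value; on real hardware the measured values carry shot noise and the extrapolation is only approximate.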
Article
We report high-fidelity laser-beam-induced quantum logic gates on magnetic-field-insensitive qubits comprised of hyperfine states in ⁹Be⁺ ions with a memory coherence time of more than 1 s. We demonstrate single-qubit gates with an error per gate of 3.8(1) × 10⁻⁵. By creating a Bell state with a deterministic two-qubit gate, we deduce a gate error of 8(4) × 10⁻⁴. We characterize the errors in our implementation and discuss methods to further reduce imperfections towards values that are compatible with fault-tolerant processing at realistic overhead.
Article
We report on radiation-induced soft error rate (SER) improvements in the 14-nm second-generation high-k + metal gate bulk tri-gate technology. Upset rates of memory cells, sequential elements, and combinational logic were investigated for terrestrial radiation environments, including thermal and high-energy neutrons, high-energy protons, and alpha particles. SER improvements of up to ~23x with respect to devices manufactured in a 32-nm planar technology are observed. The improvements are particularly pronounced in logic devices, where aggressive fin depopulation combined with scaling of relevant fin parameters results in a ~8x reduction of upset rates relative to the first-generation tri-gate technology.
Article
We have proposed a multi-scale Monte Carlo simulation method for neutron-induced soft errors by linking the particle transport code PHITS and the 3-D TCAD simulator HyENEXSS. An interface tool between PHITS and HyENEXSS is developed to generate a mesh structure optimized for events in which multiple secondary ions extending in arbitrary directions are generated simultaneously by neutron incidence on the device. Using the interface tool, we have made it possible to perform Monte Carlo calculations of soft error rates (SERs) based on event-by-event device simulation. The PHITS-HyENEXSS code system has been successfully applied to SER analyses for 65 nm, 45 nm, and 32 nm technology MOSFETs.
Article
Contents: §0. Introduction; §1. Abelian problem on the stabilizer; §2. Classical models of computations: 2.1. Boolean schemes and sequences of operations; 2.2. Reversible computations; §3. Quantum formalism: 3.1. Basic notions and notation; 3.2. Transformations of mixed states; 3.3. Accuracy; §4. Quantum models of computations: 4.1. Definitions and basic properties; 4.2. Construction of various operators from the elements of a basis; 4.3. Generalized quantum control and universal schemes; §5. Measurement operators; §6. Polynomial quantum algorithm for the stabilizer problem; §7. Computations with perturbations: the choice of a model; §8. Quantum codes (definitions and general properties): 8.1. Basic notions and ideas; 8.2. One-to-one codes; 8.3. Many-to-one codes; §9. Symplectic (additive) codes: 9.1. Algebraic preparation; 9.2. The basic construction; 9.3. Error correction procedure; 9.4. Torus codes; §10. Error correction in the computation process: general principles: 10.1. Definitions and results; 10.2. Proofs; §11. Error correction: concrete procedures: 11.1. The symplecto-classical case; 11.2. The case of a complete basis; Bibliography.
Article
We study Cooper-pair tunneling in a voltage-biased superconducting single-electron transistor under microwave irradiation. By tracing the peak positions of a photon-assisted Josephson-quasiparticle current as a function of the microwave frequency, we observe an energy-dispersion curve in the quasicharge space. This shows that energy-level splitting occurs between two macroscopic quantum states of charge coherently superposed by Josephson coupling. © 1997 The American Physical Society.
Article
We describe a new implementation of the Edmonds’s algorithm for computing a perfect matching of minimum cost, to which we refer as Blossom V. A key feature of our implementation is a combination of two ideas that were shown to be effective for this problem: the “variable dual updates” approach of Cook and Rohe (INFORMS J Comput 11(2):138–148, 1999) and the use of priority queues. We achieve this by maintaining an auxiliary graph whose nodes correspond to alternating trees in the Edmonds’s algorithm. While our use of priority queues does not improve the worst-case complexity, it appears to lead to an efficient technique. In the majority of our tests Blossom V outperformed previous implementations of Cook and Rohe (INFORMS J Comput 11(2):138–148, 1999) and Mehlhorn and Schäfer (J Algorithmics Exp (JEA) 7:4, 2002), sometimes by an order of magnitude. We also show that for large VLSI instances it is beneficial to update duals by solving a linear program, contrary to a conjecture by Cook and Rohe.
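The problem Blossom V solves can be stated compactly: given an even number of nodes and pairwise edge costs, find the pairing of minimum total cost. A brute-force reference implementation (exponential time, for illustration only — Edmonds's blossom algorithm and Blossom V solve the same problem in polynomial time) pins down the specification; the node and cost values below are made up for the example:

```python
def min_cost_perfect_matching(n, cost):
    """Brute-force minimum-cost perfect matching on an n-node complete
    graph (n even). cost[(i, j)] with i < j is the edge cost."""

    def pairings(nodes):
        # Enumerate all ways to split `nodes` into unordered pairs.
        if not nodes:
            yield []
            return
        a = nodes[0]
        for k in range(1, len(nodes)):
            b = nodes[k]
            rest = nodes[1:k] + nodes[k + 1:]
            for p in pairings(rest):
                yield [(a, b)] + p

    best, best_pairs = float("inf"), None
    for matching in pairings(list(range(n))):
        c = sum(cost[(i, j)] for i, j in matching)
        if c < best:
            best, best_pairs = c, matching
    return best, best_pairs


# Four nodes with pairwise costs (keys use i < j).
cost = {(0, 1): 2, (0, 2): 5, (0, 3): 4, (1, 2): 3, (1, 3): 6, (2, 3): 1}
total, pairs = min_cost_perfect_matching(4, cost)
```

In surface-code decoding this is exactly the subroutine that pairs up syndrome defects, with edge costs derived from error probabilities, which is why fast matching implementations such as Blossom V matter for decoders.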
Article
We study the ±J random-plaquette Z2 gauge model (RPGM) in three spatial dimensions, a three-dimensional analog of the two-dimensional ±J random-bond Ising model (RBIM). The model is a pure Z2 gauge theory in which randomly chosen plaquettes (occurring with concentration p) have couplings with the "wrong sign" so that magnetic flux is energetically favored on these plaquettes. Excitations of the model are one-dimensional "flux tubes" that terminate at "magnetic monopoles" located inside lattice cubes that contain an odd number of wrong-sign plaquettes. Electric confinement can be driven by thermal fluctuations of the flux tubes, by the quenched background of magnetic monopoles, or by a combination of the two. Like the RBIM, the RPGM has enhanced symmetry along a "Nishimori line" in the p-T plane (where T is the temperature). The critical concentration p_c of wrong-sign plaquettes at the confinement-Higgs phase transition along the Nishimori line can be identified with the accuracy threshold for robust storage of quantum information using topological error-correcting codes: if qubit phase errors, qubit bit-flip errors, and errors in the measurement of local check operators all occur at rates below p_c, then encoded quantum information can be protected perfectly from damage in the limit of a large code block. Through Monte Carlo simulations, we measure p_c0, the critical concentration along the T=0 axis (a lower bound on p_c), finding p_c0 = 0.0293 ± 0.0002. We also measure the critical concentration of antiferromagnetic bonds in the two-dimensional RBIM on the T=0 axis, finding p_c0 = 0.1031 ± 0.0001. Our value of p_c0 is incompatible with the value p_c = 0.1093 ± 0.0002 found in earlier numerical studies of the RBIM, in disagreement with the conjecture that the phase boundary of the RBIM is vertical (parallel to the T axis) below the Nishimori line. The model can be generalized to a rank-r antisymmetric tensor field in d dimensions, in the presence of quenched disorder.
Book
Part I. Fundamental Concepts: 1. Introduction and overview; 2. Introduction to quantum mechanics; 3. Introduction to computer science; Part II. Quantum Computation: 4. Quantum circuits; 5. The quantum Fourier transform and its application; 6. Quantum search algorithms; 7. Quantum computers: physical realization; Part III. Quantum Information: 8. Quantum noise and quantum operations; 9. Distance measures for quantum information; 10. Quantum error-correction; 11. Entropy and information; 12. Quantum information theory; Appendices; References; Index.
Article
The escape rate of an underdamped (Q≈30), current-biased Josephson junction from the zero-voltage state has been measured. The relevant parameters of the junction were determined in situ in the thermal regime from the dependence of the escape rate on bias current and from resonant activation in the presence of microwaves. At low temperatures, the escape rate became independent of temperature with a value that, with no adjustable parameters, was in excellent agreement with the zero-temperature prediction for macroscopic quantum tunneling.