Peter W. Shor’s research while affiliated with Massachusetts Institute of Technology and other places

What is this page?


This page lists works of an author who doesn't have a ResearchGate profile or hasn't added the works to their profile yet. It is automatically generated from public (personal) data to further our legitimate goal of comprehensive and accurate scientific recordkeeping. If you are this author and want this page removed, please let us know.

Publications (214)


Figure 2. Availability gain G(p) as a function of the engine's initial distribution (p(0), 1 − p(0)). Panels (a-d) correspond to four different environment initial distributions p_env. Black lines show G(p) computed using Equation (17); green dots indicate predictions made using our information-theoretic expression (18) (shorthand G(q) + ΔD in the legend). The optimal initial distribution q and the equilibrium initial distribution π are marked with vertical lines. The dashed curve shows the reduction in the engine's Shannon entropy as a function of the initial distribution, −ΔH from Equation (22). Vertical axes have the same scale. Other parameters: T_0 = T_1 = T = 1, ε = 1.
Figure 3. Same as in Figure 2, but where the temperature of Work Extraction is higher than that of Preparation, T_1 = 3 > T_0 = 1. Black lines show G(p) computed using Equation (17); green dots indicate predictions made using the information-theoretic expression (18). Panels (a-d) correspond to different initial states of the environment. Observe that in some cases the function G is non-concave and may have multiple local maxima. In (d), the optimal distribution q does not have full support, so the equivalence between Equations (17) and (18) does not hold.
Maximizing Free Energy Gain
  • Article
  • Full-text available

January 2025 · 18 Reads · Entropy

Iman Marvian · Can Gokler · [...]
Maximizing the amount of work harvested from an environment is important for a wide variety of biological and technological processes, from energy-harvesting processes such as photosynthesis to energy storage systems such as fuels and batteries. Here, we consider the maximization of free energy—and by extension, the maximum extractable work—that can be gained by a classical or quantum system that undergoes driving by its environment. We consider how the free energy gain depends on the initial state of the system while also accounting for the cost of preparing the system. We provide simple necessary and sufficient conditions for increasing the gain of free energy by varying the initial state. We also derive simple formulae that relate the free energy gained using the optimal initial state to that gained using a suboptimal initial state. Finally, we demonstrate that the problem of finding the optimal initial state may have two distinct regimes, one easy and one difficult, depending on the temperatures used for preparation and work extraction. We illustrate our results on a simple model of an information engine.
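The dependence of the free energy gain on the initial distribution can be illustrated numerically. The sketch below is a minimal toy model, not the paper's actual engine: the two-state energies, temperature, driving map `M`, and target distribution `p_drive` are all illustrative assumptions, and the gain is simply scanned over initial distributions to locate an optimum.

```python
import numpy as np

T, eps = 1.0, 1.0                  # temperature and energy gap (illustrative values)
E = np.array([0.0, eps])

def free_energy(p):
    # F = <E> - T*H, with H the Shannon entropy in nats
    p = np.clip(p, 1e-12, 1.0)
    return p @ E + T * np.sum(p * np.log(p))

# Hypothetical driving: partially relax the engine toward a fixed
# environment-induced distribution p_drive (column-stochastic map)
p_drive = np.array([0.2, 0.8])
M = 0.7 * np.outer(p_drive, np.ones(2)) + 0.3 * np.eye(2)

def gain(p0):
    # Free energy gained when starting from (p0, 1 - p0)
    p = np.array([p0, 1.0 - p0])
    return free_energy(M @ p) - free_energy(p)

grid = np.linspace(0.0, 1.0, 1001)
q = grid[np.argmax([gain(p) for p in grid])]   # optimal initial distribution
```

In this toy setting the scan over `grid` plays the role of the paper's optimization over initial states; the abstract's point is that the structure of this optimization problem changes with the preparation and extraction temperatures.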


Universal graph representation of stabilizer codes

November 2024 · 23 Reads

We introduce a representation of [[n, k]] stabilizer codes as semi-bipartite graphs wherein k "input" nodes map to n "output" nodes, such that output nodes may connect to each other but input nodes may not. We prove that this graph representation is in bijection with tableaus and give an efficient compilation algorithm that transforms tableaus into graphs. We then show that this map is efficiently invertible, which gives a new universal recipe for code construction by way of finding graphs with sufficiently nice properties. The graph representation gives insight into both code construction and algorithms. To the former, we argue that graphs provide a flexible platform for building codes particularly at smaller (non-asymptotic) scales. We construct as examples constant-size codes, e.g. a [[64, 6, 5]] code and a family of roughly [[n, n/log n, log n]] codes. We also leverage graphs in a probabilistic analysis to extend the quantum Gilbert-Varshamov bound into a three-way distance-rate-weight tradeoff. To the latter, we show that key coding algorithms -- distance approximation, weight reduction, and decoding -- are unified as instances of a single optimization game on a graph. Moreover, key code properties such as distance, weight, and encoding circuit depth, are all controlled by the graph degree. We give efficient algorithms for producing simple encoding circuits whose depths scale as twice the degree and for implementing logical diagonal and certain Clifford gates with non-constant but reduced depth. Finally, we construct a simple efficient decoding algorithm and prove a performance guarantee for a certain class of graphs, including the roughly [[n, n/log n, log n]] code. These results give evidence that graphs are generically useful for the study of stabilizer codes and their practical implementations.
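The defining structural constraint of the semi-bipartite representation — outputs may connect to outputs, but inputs may never connect to inputs — is easy to state as a predicate. The sketch below is a toy illustration under assumed conventions (inputs labeled with negative ids, outputs with non-negative ids); it is not the paper's compilation algorithm.

```python
# Toy check of the semi-bipartite constraint: k "input" nodes and n "output"
# nodes, where input-input edges are forbidden but output-output edges are fine.
def is_semi_bipartite(edges, k, n):
    inputs = set(range(-k, 0))            # hypothetical labeling: inputs are -1..-k
    for u, v in edges:
        if u in inputs and v in inputs:   # input-input edge violates the constraint
            return False
    return True

# Example: a tiny hypothetical graph with 1 input node and 3 output nodes
edges = [(-1, 0), (-1, 1), (0, 1), (1, 2)]
ok = is_semi_bipartite(edges, k=1, n=3)   # satisfies the constraint
```

A real implementation would additionally track edge types and verify the graph's degree, which per the abstract controls distance, weight, and encoding circuit depth.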


The Learning Stabilizers with Noise problem

October 2024 · 20 Reads

Random classical codes have good error correcting properties, and yet they are notoriously hard to decode in practice. Despite many decades of extensive study, the fastest known algorithms still run in exponential time. The Learning Parity with Noise (LPN) problem, which can be seen as the task of decoding a random linear code in the presence of noise, has thus emerged as a prominent hardness assumption with numerous applications in both cryptography and learning theory. Is there a natural quantum analog of the LPN problem? In this work, we introduce the Learning Stabilizers with Noise (LSN) problem, the task of decoding a random stabilizer code in the presence of local depolarizing noise. We give both polynomial-time and exponential-time quantum algorithms for solving LSN in various depolarizing noise regimes, ranging from extremely low noise, to low constant noise rates, and even higher noise rates up to a threshold. Next, we provide concrete evidence that LSN is hard. First, we show that LSN includes LPN as a special case, which suggests that it is at least as hard as its classical counterpart. Second, we prove a worst-case to average-case reduction for variants of LSN. We then ask: what is the computational complexity of solving LSN? Because the task features quantum inputs, its complexity cannot be characterized by traditional complexity classes. Instead, we show that the LSN problem lies in a recently introduced (distributional and oracle) unitary synthesis class. Finally, we identify several applications of our LSN assumption, ranging from the construction of quantum bit commitment schemes to the computational limitations of learning from quantum data.
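The classical LPN problem that LSN generalizes can be stated in a few lines of code: the adversary sees random linear equations over GF(2) whose right-hand sides are flipped with probability tau. This is a standard textbook formulation, sketched here for illustration (the dimensions and noise rate are arbitrary choices).

```python
import numpy as np

rng = np.random.default_rng(0)

def lpn_samples(s, m, tau):
    """Draw m LPN samples (A, b) with b = A s + e mod 2 and Bernoulli(tau) noise e."""
    n = len(s)
    A = rng.integers(0, 2, size=(m, n))         # uniformly random coefficient rows
    e = (rng.random(m) < tau).astype(int)       # independent noise bits
    b = (A @ s + e) % 2
    return A, b

s = rng.integers(0, 2, size=8)                  # hidden secret
A, b = lpn_samples(s, m=20, tau=0.1)
# With tau = 0 the problem collapses to linear algebra over GF(2): b = A s.
```

The quantum LSN analog replaces the random linear code with a random stabilizer code and the bit flips with local depolarizing noise, which is why its complexity must be characterized via unitary synthesis classes rather than classical decision classes.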


Bounding the Forward Classical Capacity of Bipartite Quantum Channels

May 2023 · 45 Reads · 15 Citations

IEEE Transactions on Information Theory

We introduce various measures of forward classical communication for bipartite quantum channels. Since a point-to-point channel is a special case of a bipartite channel, the measures reduce to measures of classical communication for point-to-point channels. As it turns out, these reduced measures have been reported in prior work of Wang et al. on bounding the classical capacity of a quantum channel. As applications, we show that the measures are upper bounds on the forward classical capacity of a bipartite channel. The reduced measures are upper bounds on the classical capacity of a point-to-point quantum channel assisted by a classical feedback channel. Some of the various measures can be computed by semi-definite programming.


FIG. 1. Example of an encoder-respecting form and some ways it might violate the 4 rules.
FIG. 3. ZXCF of the seven-qubit code. The 8 nodes in the diagram form the vertices of a boolean cube.
FIG. 4. ZXCF of the five-qubit code.
Graphical quantum Clifford-encoder compilers from the ZX calculus

January 2023 · 671 Reads · 1 Citation

We present a quantum compilation algorithm that maps Clifford encoders, an equivalence class of quantum circuits that arise universally in quantum error correction, into a representation in the ZX calculus. In particular, we develop a canonical form in the ZX calculus and prove canonicity as well as efficient reducibility of any Clifford encoder into the canonical form. The diagrams produced by our compiler explicitly visualize information propagation and entanglement structure of the encoder, revealing properties that may be obscured in the circuit or stabilizer-tableau representation.



FIG. 1. Line of lattice points of slope −2/Δ near a Gaussian ball. The two coordinates in this figure are the first two coordinates of the lattice points, x_1 and x_2. The separation between each pair of points is s^(0). The adversary could reach these points after measuring a lattice point and adding s^(2). After adding s^(1) enough times to be near the Gaussian balls in the other cluster, the line of points shown will have moved down so that it no longer intersects the Gaussian ball.
FIG. 2. Graph of the function (1/√21) Σ_{j=−10}^{10} e^{2πijx}. The x-axis must be scaled by P and the y-axis by 1/√P.
Publicly verifiable quantum money from random lattices

July 2022 · 272 Reads

Publicly verifiable quantum money is a protocol for the preparation of quantum states that can be efficiently verified by any party for authenticity but is computationally infeasible to counterfeit. We develop a cryptographic scheme for publicly verifiable quantum money based on Gaussian superpositions over random lattices. We introduce a verification-of-authenticity procedure based on the lattice discrete Fourier transform, and subsequently prove the unforgeability of our quantum money under the hardness of the short vector problem from lattice-based cryptography.
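The central object in such schemes is a Gaussian-weighted superposition over lattice points. A minimal numerical sketch of the amplitude profile, for a 1-D lattice c·Z truncated to a finite window, is below; the spacing, width, and cutoff are illustrative assumptions, and a real scheme uses high-dimensional random lattices.

```python
import numpy as np

# Unnormalized Gaussian amplitudes over the 1-D lattice c*Z, truncated at |j| <= R
c, sigma, R = 3, 4.0, 30              # lattice spacing, Gaussian width, cutoff (toy values)
points = np.arange(-R, R + 1) * c     # lattice points in the window
amps = np.exp(-np.pi * points**2 / sigma**2)
amps /= np.linalg.norm(amps)          # normalize the state vector
# A Fourier-space verification, as in the lattice discrete Fourier transform,
# would see this state's support concentrated near multiples of 1/c.
```

The verification-of-authenticity step in the abstract exploits exactly this duality: a Gaussian superposition over a lattice transforms to a Gaussian superposition over the dual lattice, which an honest verifier can test without learning the forgery-enabling short vectors.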


Simultaneous Measurement and Entanglement

January 2022 · 130 Reads

We study scenarios which arise when two spatially separated observers, Alice and Bob, try to identify a quantum state sampled from several possibilities. In particular, we examine their strategies for maximizing both the probability of guessing their state correctly and their information gain about it. It is known that there are scenarios where allowing Alice and Bob to use LOCC offers an improvement over the case where they must make their measurements simultaneously. Similarly, Alice and Bob can sometimes improve their outcomes if they have access to a Bell pair. We show how LOCC allows Alice and Bob to distinguish between two product states optimally and find that LOCC is almost always more helpful than a Bell pair for distinguishing product states.
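The benchmark for "optimal" here is the Helstrom bound, the best guessing probability achievable with any global measurement on two equiprobable pure states. The sketch below computes it for a pair of single-qubit states; it is a textbook formula used for illustration, not the paper's LOCC protocol.

```python
import numpy as np

def helstrom_success(psi, phi, p=0.5):
    """Optimal guessing probability for two pure states with priors (p, 1-p),
    using the best global measurement (Helstrom bound)."""
    overlap = abs(np.vdot(psi, phi)) ** 2
    return 0.5 * (1.0 + np.sqrt(1.0 - 4.0 * p * (1.0 - p) * overlap))

psi = np.array([1, 0], dtype=complex)                 # |0>
phi = np.array([1, 1], dtype=complex) / np.sqrt(2)    # |+>
p_global = helstrom_success(psi, phi)
```

For two pure product states this global optimum is what an LOCC strategy must match; the abstract's claim is that a suitable sequential local strategy achieves it.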



FIG. 1. Illustration of the binary-tree model of depth 4. We consider the OTOC between local operators originally acting on the two farthest vertices (a leaf of the left and of the right subtree, respectively; for example, O_1 and O_2 in the diagram), and entanglement between the left subtree (dashed circle) and the rest of the graph (the cut shown by the red double line).
Separation of Out-Of-Time-Ordered Correlation and Entanglement

June 2021 · 104 Reads · 44 Citations

PRX Quantum

The out-of-time-ordered correlation (OTOC) and entanglement are two physically motivated and widely used probes of the “scrambling” of quantum information, a phenomenon that has drawn great interest recently in quantum gravity and many-body physics. We argue that the corresponding notions of scrambling can be fundamentally different, by proving an asymptotic separation between the time scales of the saturation of OTOC and that of entanglement entropy in a random quantum-circuit model defined on graphs with a tight bottleneck, such as tree graphs. Our result counters the intuition that a random quantum circuit mixes in time proportional to the diameter of the underlying graph of interactions. It also provides a more rigorous justification for an argument in our previous work [Shor P.W., Scrambling time and causal structure of the photon sphere of a Schwarzschild black hole, arXiv:1807.04363 (2018)], that black holes may be slow information scramblers, which in turn relates to the black-hole information problem. The bounds we obtain for OTOC are interesting in their own right in that they generalize previous studies of OTOC on lattices to the geometries on graphs in a rigorous and general fashion.


Citations (76)


... As shown in Ref. [18,Lemma 35], the limit of the geometric Rényi channel divergence as ↘ 1 is given by the Belavkin-Staszewski channel divergence: ...

Reference:

Converse bounds for quantum hypothesis exclusion: A divergence-radius approach
Bounding the Forward Classical Capacity of Bipartite Quantum Channels
  • Citing Article
  • May 2023

IEEE Transactions on Information Theory

... The delocalization of information, also known as scrambling, is measured by the tripartite mutual information I_3 [18,20]. It is an important feature of quantum chaos [21,22], black hole physics and the information paradox [23][24][25]. Quantum chaos and its relationship with OTOCs and scrambling in quantum channels and circuits has recently been the subject of interest in the quantum information community [20,[26][27][28]. ...

Separation of Out-Of-Time-Ordered Correlation and Entanglement

PRX Quantum

... This direction started with [BN05] and was ultimately inspired by Shannon's work on the feedback-assisted capacity of a classical channel [Sha56], in which it was shown that feedback does not increase capacity. For the quantum case, it is known that a classical feedback channel does not enhance the classical capacity of 1) an entanglement-breaking channel [BN05], 2) a pure-loss bosonic channel [DQSW19], and 3) a quantum erasure channel [DQSW19]. The first aforementioned result has been strengthened to a strong-converse statement [DW18]. ...

Entropy Bound for the Classical Capacity of a Quantum Channel Assisted by Classical Feedback
  • Citing Conference Paper
  • July 2019

... Di Vincenzo et al. [1] were the first to observe that regularization is necessary by showing the underlying entropic quantity, the coherent information to be superadditive. Since then, there has been an effort to understand superadditivity and its relation to the computation of capacity in the context of sending quantum information, but also for other communication tasks where capacity is given by a regularized formula [2][3][4][5][6][7][8][9][10][11][12][13]. Notably, since generally regularized formulas are the only proxy for capacity, even elementary questions such as whether or not a channel has positive capacity have no known algorithmic answer [14] (see [15,16] for recent progress). ...

Superadditivity in Trade-Off Capacities of Quantum Channels
  • Citing Article
  • December 2018

IEEE Transactions on Information Theory

... The channel N is constructed from M in a way which was introduced in [21] in the context of equivalences of various additivity questions. The same construction has also been used in [22,23]. As only entangled states can provide an advantage in the Holevo capacity, the condition (2) with a suitable choice of M can also serve as an entropic entanglement witness for the state ρÃB. ...

Superadditivity in Trade-Off Capacities of Quantum Channels
  • Citing Conference Paper
  • June 2018

... Along with the large-scale Gaussian operations enabled by the cluster states [4,5], quantum non-Gaussian gates are essential components [6,7] for a variety of practical tasks in optical quantum technology and advanced quantum information processing including quantum communication, quantum computation, quantum algorithms, and quantum control. It has been found that non-Gaussianity [8][9][10][11] in the form of non-Gaussian quantum states [12][13][14][15][16][17][18][19][20][21] and non-Gaussian operations [2,[22][23][24][25][26][27] is crucial, due to the limited capability of Gaussian states and operations, for various CV quantum information protocols required for quantum teleportation [28][29][30], entanglement distillation [31,32], error correction [33,34], fault-tolerant universal quantum computing [2,[35][36][37][38], loophole-free test of quantum non-locality [39,40], and quantum simulations [41,42]. ...

Resource theory of non-Gaussian operations

Physical Review A

... In fact, the only known channels that admit an additive CE capacity region are the quantum erasure channels [13] and Hadamard channels [23], many fewer than the class of channels with an additive classical capacity. Coincidentally, these two classes of channels also admit an additive CQE trade-off capacity, suggesting a nontrivial connection [13,23,33]. ...

Superadditivity in trade-off capacities of quantum channels

... In particular, the syntactic information that a system has about its environment will often require some work to acquire. However, the same information may carry an arbitrarily large benefit [124], for instance by indicating the location of a large source of free energy, or a danger to avoid. To compare the benefit and the cost of the syntactic information to the system, below we define the thermodynamic multiplier as the ratio between the viability value of the information and the amount of syntactic information. ...

When is a bit worth much more than kT ln2?
  • Citing Article
  • May 2017

... In [12], Quek and Shor demonstrated the advantages of using entangled transmitters by analyzing the separation of rates. They presented a specific instance of an IC based on the CHSH game [13], but primarily focused on the analysis of superquantum non-local correlations derived from the PR-box model proposed by Popescu and Rohrlich [14]. ...

Quantum and superquantum enhancements to two-sender, two-receiver channels
  • Citing Article
  • May 2017

Physical Review A

... This work intersects with several developing directions in the literature, including studies on the role of extractable work for out-of-equilibrium systems, including those that share properties of life, [1][2][3][4] and on systems that are slow to thermalize [5][6][7][8]. Here, we develop a building block consisting of a thermodynamically closed system constructed from initially uncorrelated, mixed quantum states and evolving subject to a conservation law. ...

Maximizing free energy gain