Sandy Irani’s research while affiliated with University of California System and other places


Publications (55)


Quantum Metropolis Sampling via Weak Measurement
  • Preprint

June 2024 · 2 Reads · Sandy Irani

Gibbs sampling is a crucial computational technique used in physics, statistics, and many other scientific fields. For classical Hamiltonians, the most commonly used Gibbs sampler is the Metropolis algorithm, known for having the Gibbs state as its unique fixed point. For quantum Hamiltonians, designing provably correct Gibbs samplers has been more challenging. [TOV+11] introduced a novel method that uses quantum phase estimation (QPE) and the Marriott-Watrous rewinding technique to mimic the classical Metropolis algorithm for quantum Hamiltonians. The analysis of their algorithm relies on a boosted and shift-invariant version of QPE which may not exist [CKBG23]. Recent efforts to design quantum Gibbs samplers take a very different approach and are based on simulating Davies generators [CKBG23,CKG23,RWW23,DLL24]. Currently, these are the only provably correct Gibbs samplers for quantum Hamiltonians. We revisit the inspiration for the Metropolis-style algorithm of [TOV+11] and incorporate weak measurement to design a conceptually simple and provably correct quantum Gibbs sampler, with the Gibbs state as its approximate unique fixed point. Our method uses a Boosted QPE which takes the median of multiple runs of QPE, but we do not require the shift-invariant property. In addition, we avoid the Marriott-Watrous rewinding technique, which simplifies the algorithm significantly.
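For context on the classical algorithm that the paper's quantum sampler mirrors, here is a minimal sketch of a classical Metropolis-style Gibbs sampler for a toy Ising Hamiltonian; the function names and the Ising example are illustrative, not from the paper:

```python
import math
import random

def metropolis_gibbs(energy, propose, x0, beta, n_steps, rng=random):
    """Classical Metropolis chain whose unique fixed point is the Gibbs
    distribution p(x) proportional to exp(-beta * energy(x))."""
    x = x0
    for _ in range(n_steps):
        y = propose(x, rng)                  # symmetric proposal
        dE = energy(y) - energy(x)
        # Metropolis rule: accept with probability min(1, exp(-beta * dE))
        if dE <= 0 or rng.random() < math.exp(-beta * dE):
            x = y
    return x

# Toy classical Hamiltonian: 1D Ising ring, E(s) = -sum_i s_i * s_{i+1}.
def ising_energy(spins):
    n = len(spins)
    return -sum(spins[i] * spins[(i + 1) % n] for i in range(n))

# Symmetric proposal: flip one uniformly random spin.
def flip_one(spins, rng):
    i = rng.randrange(len(spins))
    return spins[:i] + [-spins[i]] + spins[i + 1:]
```

At inverse temperature beta, the chain's stationary distribution is the Gibbs distribution; the difficulty the abstract describes is that no equally simple accept/reject step is directly available for non-commuting quantum Hamiltonians.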


Commuting Local Hamiltonian Problem on 2D beyond qubits

September 2023 · 12 Reads

We study the complexity of local Hamiltonians in which the terms pairwise commute. Commuting local Hamiltonians (CLHs) provide a way to study the role of non-commutativity in the complexity of quantum systems and touch on many fundamental aspects of quantum computing and many-body systems, such as the quantum PCP conjecture and the area law. Despite intense research activity since Bravyi and Vyalyi introduced the CLH problem two decades ago [BV03], its complexity remains largely unresolved; it is only known to lie in NP for a few special cases. Much of the recent research has focused on the physically motivated 2D case, where particles are located on vertices of a 2D grid and each term acts non-trivially only on the particles on a single square (or plaquette) in the lattice. In particular, Schuch [Sch11] showed that the CLH problem on 2D with qubits is in NP. Aharonov, Kenneth and Vigdorovich [AKV18] then gave a constructive version of this result, showing an explicit algorithm to construct a ground state. Resolving the complexity of the 2D CLH problem with higher dimensional particles has been elusive. We prove two results for the CLH problem in 2D: (1) We give a non-constructive proof that the CLH problem in 2D with qutrits is in NP. As far as we know, this is the first result for the commuting local Hamiltonian problem on 2D beyond qubits. Our key lemma works for general qudits and might give new insights for tackling the general case. (2) We consider the factorized case, also studied in [BV03], where each term is a tensor product of single-particle Hermitian operators. We show that a factorized CLH in 2D, even on particles of arbitrary finite dimension, is equivalent to a direct sum of qubit stabilizer Hamiltonians. This implies that the factorized 2D CLH problem is in NP. This class of CLHs contains the Toric code as an example.
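The defining property of a CLH instance is that its terms pairwise commute. That property is easy to check on explicit matrices; the Pauli-term example below is illustrative, not from the paper:

```python
import numpy as np

def pairwise_commute(terms, atol=1e-10):
    """Return True iff every pair of terms commutes: H_i H_j == H_j H_i."""
    return all(
        np.allclose(a @ b, b @ a, atol=atol)
        for i, a in enumerate(terms)
        for b in terms[i + 1:]
    )

# Pauli matrices.
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

# Stabilizer-like two-qubit terms Z⊗Z and X⊗X commute,
# while the single-qubit terms Z⊗I and X⊗I do not.
ZZ, XX = np.kron(Z, Z), np.kron(X, X)
ZI, XI = np.kron(Z, I2), np.kron(X, I2)
```

Stabilizer terms like Z⊗Z and X⊗X commute even though their single-qubit factors anticommute, which is exactly the kind of structure (as in the Toric code) that the factorized case of the paper addresses.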


Quantum Tutte Embeddings

July 2023 · 19 Reads · 1 Citation

Using the framework of Tutte embeddings, we begin an exploration of quantum graph drawing, which uses quantum computers to visualize graphs. The main contributions of this paper include formulating a model for quantum graph drawing, describing how to create a graph-drawing quantum circuit from a given graph, and showing how a Tutte embedding can be calculated as a quantum state in this circuit that can then be sampled to extract the embedding. To evaluate the complexity of our quantum Tutte embedding circuits, we compare them to theoretical bounds established in the classical computing setting, derived from a well-known classical algorithm for solving the types of linear systems that arise from Tutte embeddings. We also present empirical results obtained from experimental quantum simulations.
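For reference, the classical construction the paper builds on: a Tutte embedding pins a boundary face to a convex polygon and places each interior vertex at the average of its neighbors, which reduces to a linear system. A sketch under that standard formulation (the wheel-graph example below is ours, not the paper's):

```python
import numpy as np

def tutte_embedding(n, edges, boundary):
    """Pin `boundary` to a convex polygon (unit circle) and solve the
    linear system placing each interior vertex at its neighbors' mean."""
    adj = [[] for _ in range(n)]
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    pos = np.zeros((n, 2))
    k = len(boundary)
    for i, v in enumerate(boundary):
        ang = 2 * np.pi * i / k
        pos[v] = (np.cos(ang), np.sin(ang))
    interior = [v for v in range(n) if v not in set(boundary)]
    idx = {v: i for i, v in enumerate(interior)}
    A = np.zeros((len(interior), len(interior)))
    b = np.zeros((len(interior), 2))
    for v in interior:
        i = idx[v]
        A[i, i] = len(adj[v])        # deg(v) * pos[v] = sum of neighbor positions
        for w in adj[v]:
            if w in idx:
                A[i, idx[w]] -= 1.0
            else:
                b[i] += pos[w]       # pinned boundary neighbor
    if interior:
        pos[interior] = np.linalg.solve(A, b)
    return pos
```

For a wheel graph with a square boundary, the hub lands at the centroid of the boundary; solving such systems is the classical baseline the paper compares its circuits against.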



Translationally Invariant Constraint Optimization Problems

September 2022 · 10 Reads

We study the complexity of classical constraint satisfaction problems on a 2D grid. Specifically, we consider the complexity of function versions of such problems, with the additional restriction that the constraints are translationally invariant, namely, the variables are located at the vertices of a 2D grid and the constraint between every pair of adjacent variables is the same in each dimension. The only input to the problem is thus the size of the grid. This problem is equivalent to one of the most interesting problems in classical physics, namely, computing the lowest energy of a classical system of particles on the grid. We provide a tight characterization of the complexity of this problem, and show that it is complete for the class FP^{NEXP}. Gottesman and Irani (FOCS 2009) also studied classical translationally-invariant constraint satisfaction problems; they show that the problem of deciding whether the cost of the optimal solution is below a given threshold is NEXP-complete. Our result is thus a strengthening of their result from the decision version to the function version of the problem. Our result can also be viewed as a generalization, to the translationally invariant setting, of Krentel's famous result from 1988, showing that the function version of SAT is complete for the class FP^{NP}. An essential ingredient in the proof is a study of the complexity of a gapped variant of the problem. We show that it is NEXP-hard to approximate the cost of the optimal assignment to within an additive error of Ω(N^{1/4}), for an N × N grid. To the best of our knowledge, no gapped result is known for CSPs on the grid, even in the non-translationally invariant case. As a byproduct of our results, we also show that a decision version of the optimization problem, which asks whether the cost of the optimal assignment is odd or even, is also complete for P^{NEXP}.
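As a concrete (if exponential-time) illustration of the problem statement, here is a brute-force solver for the function version on a small grid; the toy cost functions are ours, not the paper's:

```python
import itertools

def min_grid_cost(n, domain, h_cost, v_cost):
    """Minimum total cost over all assignments to an n x n grid, where the
    same cost function applies to every horizontally adjacent pair and the
    same (possibly different) one to every vertically adjacent pair; in the
    translationally invariant setting only the grid size n varies."""
    best = float('inf')
    for flat in itertools.product(domain, repeat=n * n):
        grid = [flat[i * n:(i + 1) * n] for i in range(n)]
        cost = 0
        for r in range(n):
            for c in range(n):
                if c + 1 < n:
                    cost += h_cost(grid[r][c], grid[r][c + 1])
                if r + 1 < n:
                    cost += v_cost(grid[r][c], grid[r + 1][c])
        best = min(best, cost)
    return best

# Antiferromagnetic toy constraint: unit penalty when neighbors agree.
def penalty_equal(a, b):
    return 1 if a == b else 0
```

With two symbols a checkerboard assignment achieves cost 0; with a single symbol every one of the 2n(n−1) edges pays the penalty. The paper's point is that when n is the only input, computing this minimum exactly is complete for FP^{NEXP}.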


Figure 1: Edge cases for intervals [θ_l, θ_u]. For each of the cases pictured above, the quadrant count R_i will be 2. Note that the quadrant count begins at 0. Our rounding scheme ensures that this is indeed the case, and the code we used further handles numerical floating-point errors that were occurring in these regions.
Figure 2: If the interval [γ_l, γ_u] lies in an even quadrant, then as γ increases, the value sin²(γ_i) increases for i ∈ {l, u}. For odd quadrants, as γ increases, the value sin²(γ_i) decreases. This drove the calculations in the conversion described in Table 1.
Figure 5: Per-round results from running the algorithms with ε = 10⁻⁵, averaged over 10,000 runs. The red dotted line in the rightmost graph represents k_max.
Figure 6: Query count vs. amplitude for each combination of algorithm and confidence-interval method, recorded at different values of ε described in the legends. The solid and dotted lines correspond to the Chernoff-Hoeffding and Clopper-Pearson confidence intervals respectively. On average, the number of queries does not depend on the input amplitude in either algorithm.
Modified Iterative Quantum Amplitude Estimation is Asymptotically Optimal
  • Preprint
  • File available

August 2022 · 36 Reads

In this work, we provide the first QFT-free algorithm for Quantum Amplitude Estimation (QAE) that is asymptotically optimal while maintaining leading numerical performance. QAE algorithms appear as a subroutine in many applications for quantum computers. The optimal query complexity achievable by a quantum algorithm for QAE is O((1/ε) log(1/α)) queries, a speedup by a factor of 1/ε over any classical algorithm for the same problem. The original algorithm for QAE uses the quantum Fourier transform (QFT), which is expected to be a challenge for near-term quantum hardware. To address this, there has been interest in designing QAE algorithms that avoid the QFT. Recently, the iterative QAE algorithm (IQAE) was introduced by Grinko et al. with a near-optimal O((1/ε) log((1/α) log(1/ε))) query complexity and small constant factors. In this work, we combine ideas from this line of work to introduce a QFT-free QAE algorithm that attains the asymptotically optimal O((1/ε) log(1/α)) query complexity while retaining small constant factors. We supplement our analysis with numerical experiments comparing our performance with IQAE, finding that our modifications retain the high performance and in some cases even improve the numerical results.
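To make the quoted speedup concrete: a classical Monte Carlo estimator needs on the order of (1/ε²) log(1/α) samples (by a Hoeffding bound), versus (1/ε) log(1/α) quantum queries. A sketch of the two query-count shapes; the constant C in the quantum bound is illustrative, not a constant from the paper:

```python
import math

def classical_queries(eps, alpha):
    """Hoeffding bound: samples needed to estimate a Bernoulli mean to
    additive error eps with failure probability at most alpha."""
    return math.ceil(math.log(2 / alpha) / (2 * eps ** 2))

def quantum_queries(eps, alpha, C=1.0):
    """Shape of the optimal QAE query complexity O((1/eps) log(1/alpha));
    C is an illustrative constant, not one derived in the paper."""
    return math.ceil(C * math.log(1 / alpha) / eps)
```

At ε = 10⁻² and α = 0.05 the classical count is already tens of thousands of samples while the quantum shape is in the hundreds of queries, and the gap widens quadratically as ε shrinks.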



Quantum search-to-decision reductions and the state synthesis problem

November 2021 · 58 Reads

Sandy Irani · Anand Natarajan · Chinmay Nirkhe · [...] · Henry Yuen

It is a useful fact in classical computer science that many search problems are reducible to decision problems; this has led to decision problems being regarded as the de facto computational task to study in complexity theory. In this work, we explore search-to-decision reductions for quantum search problems, wherein a quantum algorithm makes queries to a classical decision oracle to output a desired quantum state. In particular, we focus on search-to-decision reductions for QMA, and show that there exists a quantum polynomial-time algorithm that can generate a witness for a QMA problem up to inverse polynomial precision by making one query to a PP decision oracle. We complement this result by showing that QMA-search does not reduce to QMA-decision in polynomial time, relative to a quantum oracle. We also explore the more general state synthesis problem, in which the goal is to efficiently synthesize a target state by making queries to a classical oracle encoding the state. We prove that there exists a classical oracle with which any quantum state can be synthesized to inverse polynomial precision using only one oracle query, and to inverse exponential precision using two oracle queries. This answers an open question of Aaronson from 2016, who presented a state synthesis algorithm that makes O(n) queries to a classical oracle to prepare an n-qubit state, and asked whether the query complexity could be made sublinear.


Hamiltonian Complexity in the Thermodynamic Limit

July 2021 · 11 Reads

Despite immense progress in quantum Hamiltonian complexity in the past decade, little is known about the computational complexity of quantum physics at the thermodynamic limit. In fact, even defining the problem properly is not straightforward. We study the complexity of estimating the ground energy of a fixed, translationally invariant Hamiltonian in the thermodynamic limit, to within a given precision; this precision (given by n, the number of bits of the approximation) is the sole input to the problem. Understanding the complexity of this problem captures how difficult it is for the physicist to measure or compute another digit in the approximation of a physical quantity in the thermodynamic limit. We show that this problem is contained in FEXP^{QMA-EXP} and is hard for FEXP^{NEXP}. This means that the problem is doubly exponentially hard in the size of the input. As an ingredient in our construction, we study the problem of computing the ground energy of translationally invariant finite 1D chains. A single Hamiltonian term, which is a fixed parameter of the problem, is applied to every pair of particles in a finite chain. In the finite case, the length of the chain is the sole input to the problem. We show that this problem is contained in FP^{QMA-EXP} and is hard for FP^{NEXP}. Our techniques employ a circular clock structure in which the ground energy is calibrated by the length of the cycle. This requires more precise expressions for the ground states of the resulting matrices, and even exact analytical bounds for the infinite case, which we derive using techniques from spectral graph theory. After announcing these results, we found that similar results were achieved independently by Watson and Cubitt; the two papers were posted simultaneously on the arXiv.


A competitive analysis for the Start-Gap algorithm for online memory wear leveling

February 2021 · 30 Reads · 2 Citations

Information Processing Letters

Erase-limited memory, such as flash memory and phase-change memory (PCM), limits the number of times any memory cell can be erased. The Start-Gap algorithm has proven highly effective in practice at distributing updates across the cells of an erase-limited memory, but it had heretofore not been characterized in terms of its competitive ratio against an optimal offline algorithm that is given all the update addresses in advance. In this paper, we present a competitive analysis of the Start-Gap wear-leveling algorithm, showing that under reasonable assumptions about the sequence of update operations, the Start-Gap algorithm has a competitive ratio of 1/(1−o(1)).
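For background, the Start-Gap scheme analyzed here keeps N logical lines on N + 1 physical lines and slowly rotates a spare "gap" line through memory using only two registers. Below is a simplified, self-consistent sketch in the spirit of Qureshi et al.'s algorithm; the class layout, the `psi` write interval, and the wrap handling are our simplifications, not the paper's exact specification:

```python
class StartGap:
    """Sketch of Start-Gap wear leveling: N logical lines on N + 1
    physical lines, with registers (start, gap) defining the mapping."""

    def __init__(self, n, psi=4):
        self.n = n
        self.psi = psi               # user writes between gap movements
        self.start = 0
        self.gap = n                 # physical index of the empty line
        self.mem = [None] * (n + 1)
        self.writes = 0

    def _pa(self, la):
        """Logical-to-physical mapping; lines at or past the gap shift by 1."""
        p = (la + self.start) % self.n
        return p + 1 if p >= self.gap else p

    def read(self, la):
        return self.mem[self._pa(la)]

    def write(self, la, val):
        self.mem[self._pa(la)] = val
        self.writes += 1
        if self.writes % self.psi == 0:
            self._gap_move()

    def _gap_move(self):
        if self.gap == 0:            # wrap: advance start, reopen gap at n
            self.mem[0] = self.mem[self.n]
            self.gap = self.n
            self.start = (self.start + 1) % self.n
        else:                        # slide the gap down one line
            self.mem[self.gap] = self.mem[self.gap - 1]
            self.gap -= 1
```

Each gap movement costs one extra write, so moving the gap once every `psi` user writes bounds the overhead at a 1/psi fraction while rotating every logical line across all physical lines over time; this overhead-versus-spreading trade-off is what the competitive analysis quantifies.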


Citations (44)


... To wrap up this section, in Fig. 6 we show the results of the QAMC for different payoffs depending on the underlying price at maturity when the modified iterative amplitude estimation (mIQAE) algorithm (see [20]) is used. The mIQAE is considered the current state of the art with regard to amplitude estimation algorithms. ...

Reference:

Alternative pipeline for option pricing using quantum computers
Modified Iterative Quantum Amplitude Estimation is Asymptotically Optimal
  • Citing Chapter
  • January 2023

... Which complexity class will depend on the problem. For instance, the finite-size version of the problem of deciding whether a Matrix Product Operator is positive semidefinite is proven to be NP-hard in Kliesch et al. (2014), while the problem of giving an estimate for the ground state energy density of a fixed quantum system in the thermodynamic limit, as a function of the number of bits of precision, is proven to be hard for the complexity class FEXP^{NEXP} in Aharonov and Irani (2022). It would be desirable to have general results that allow one to infer finite size/precision complexity results based on the undecidability of its infinite counterpart. ...

Hamiltonian complexity in the thermodynamic limit
  • Citing Conference Paper
  • June 2022

... Discrete mathematics is the backbone of several computer science concepts, giving future professionals problem-solving capabilities and better knowledge of how and why their code executes as it does. Although important, it is considered a difficult topic by students, not only because of their different backgrounds in mathematical thinking but also because its application in the field is not as obviously clear [12]. This can be extra challenging for first-year students, who are equipped with algebraic math experience but may have a difficult time abstracting and visualizing the application of such topics. ...

Incorporating Active Learning Strategies and Instructor Presence into an Online Discrete Mathematics Class

... One of the first improvements was made in [KR03a] which improved locality to 3-local, and [KKR06], who reduced the locality to 2-local Hamiltonians. In [Aha+07] it was shown that the Local Hamiltonian problem was QMA-hard for 1D Hamiltonians of local Hilbert space dimension 13. Remarkably, this was then improved upon in form: ...

The Power of Quantum Systems on a Line
  • Citing Conference Paper
  • October 2007

... This performance measure was introduced in [8] as an extension of (exact) bijective analysis of online algorithms [7], which in turn is based on the pairwise comparison of the costs induced by two online algorithms over all request sequences of a certain size. Bijective analysis has been applied to fundamental online problems (with a discrete, finite set of requests) such as paging and list update [9], k-server [8,17], and online search [18]. ...

A Comparison of Performance Measures for Online Algorithms
  • Citing Article
  • June 2008

... There is a multitude of relevant online problems in the literature. Several online variants of the Traveling Repairman Problem (otherwise known as the Minimum Latency Problem) have been investigated in Shiri (2022, 2021), Zhang et al. (2019), Irani et al. (2004), Krumke et al. (2003). The primary goal in these studies is to minimize the summation of completion times of all demands. ...

On-line algorithms for the dynamic traveling repair problem
  • Citing Article
  • January 2002

... The Metric Travelling Salesman has been considered under this setup in [1,2], for instance, and the Quota TSP in [1]. When the delay is zero, OPTiWinD is equivalent to the Whack-a-Mole Problem, which is defined in Gutiérrez et al. [7], or the Dynamic Traveling Repair Problem [8]. For a space which is a truncated line [−L, L], Gutiérrez et al. [7] note that the only non trivial cases are those for θ/4 < L ≤ θ, where θ denotes the size of the time windows. ...

On-Line Algorithms for the Dynamic Traveling Repair Problem
  • Citing Article
  • May 2004

Journal of Scheduling

... There have been many research studies in the contexts of demand paging and general cache management for multi-level storage systems. Previous research in different environments has noticed the weakness of LRU-like algorithms for lower level buffer cache in a hierarchy [14, 30] and pointed out feasible solutions for different systems [4, 40, 43]. This group of work focuses on improving the lower-level caching performance in reaction to the upper-level caching effect. ...

Cost-Aware Web Proxy Caching
  • Citing Article
  • January 1997