Book
PDF available

Quantum Machine Learning: What Quantum Computing Means to Data Mining

Authors: Peter Wittek

Abstract

Quantum Machine Learning bridges the gap between abstract developments in quantum computing and the applied research on machine learning. Paring down the complexity of the disciplines involved, it focuses on providing a synthesis that explains the most important machine learning algorithms in a quantum framework. Theoretical advances in quantum computing are hard to follow for computer scientists, and sometimes even for researchers involved in the field. The lack of a step-by-step guide hampers the broader understanding of this emergent interdisciplinary body of research. Quantum Machine Learning sets the scene for a deeper understanding of the subject for readers of different backgrounds. The author has carefully constructed a clear comparison of classical learning algorithms and their quantum counterparts, thus making differences in computational complexity and learning performance apparent. This book synthesizes a broad array of research into a manageable and concise presentation, with practical examples and applications.
... Variational quantum circuits [3][4][5][6][7][8] can be used to optimize cost functions measured on quantum computers. Specifically, these cost functions can be used for machine learning tasks [9][10][11][12][13][14][15][16]. In this case, variational quantum circuits are referred to as quantum neural networks. ...
... Note that in a general supervised learning setup, where one has a labeled dataset instead of just one expected value $O_0$, K is a positive-semidefinite and symmetric matrix instead of a non-negative number. Here we focus on the optimization problem in Equation 9: this example demonstrates the validity of our theory, which can be readily generalized to a full supervised quantum machine learning setup. ...
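As a toy illustration of the setup in these excerpts, the sketch below (ours, not the cited papers' code) classically simulates a one-parameter variational circuit and minimizes a cost of the form $(\langle O\rangle(\theta) - O_0)^2$ by gradient descent with the parameter-shift rule. The single-qubit circuit, observable, and target value are illustrative assumptions.

```python
# A minimal, classically simulated sketch of a variational quantum cost
# function C(theta) = (<O(theta)> - O_0)^2 minimized by gradient descent.
# The one-qubit circuit, observable Z, and target 0.3 are illustrative.
import numpy as np

Z = np.array([[1, 0], [0, -1]], dtype=complex)  # observable O

def ry(theta):
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]], dtype=complex)

def expectation(theta):
    psi = ry(theta) @ np.array([1, 0], dtype=complex)  # RY(theta)|0>
    return (psi.conj() @ Z @ psi).real

def cost(theta, target=0.3):
    return (expectation(theta) - target) ** 2

# Parameter-shift rule: d<O>/dtheta = (<O>(theta+pi/2) - <O>(theta-pi/2)) / 2
theta = 0.1
for _ in range(200):
    grad_exp = 0.5 * (expectation(theta + np.pi / 2) - expectation(theta - np.pi / 2))
    grad = 2 * (expectation(theta) - 0.3) * grad_exp
    theta -= 0.5 * grad

print(cost(theta))  # ~0: the variational angle now reproduces the target O_0
```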
Preprint
We define "laziness" to describe a large suppression of variational parameter updates for neural networks, classical or quantum. In the quantum case, the suppression is exponential in the number of qubits for randomized variational quantum circuits. We discuss the difference between laziness and the "barren plateau" of quantum machine learning, introduced by quantum physicists in McClean et al. (2018) to describe the flatness of the loss-function landscape during gradient descent. We present a novel theoretical understanding of these two phenomena in light of the theory of neural tangent kernels. For noiseless quantum circuits, without measurement noise, the loss-function landscape is complicated in the overparametrized regime with a large number of trainable variational angles. Instead, around a random starting point in optimization, there are large numbers of local minima that are good enough and can minimize the mean-square loss function; there we still have quantum laziness, but we do not have barren plateaus. However, the complicated landscape is not visible within a limited number of iterations and at the limited precision available in quantum control and quantum sensing. Moreover, we look at the effect of noise during optimization by assuming intuitive noise models, and show that variational quantum algorithms are noise-resilient in the overparametrization regime. Our work precisely reformulates the quantum barren plateau statement as a precision statement, justifies the statement in certain noise models, injects new hope into near-term variational quantum algorithms, and provides theoretical connections to classical machine learning. Our paper offers conceptual perspectives on quantum barren plateaus, together with discussions of the gradient descent dynamics in a companion paper.
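The gradient suppression claimed above can be probed numerically. The sketch below is our construction, not the paper's: it approximates "random enough" circuits by Haar-random unitaries sampled with SciPy and estimates the variance of the gradient of $\langle Z_0\rangle$ with respect to a single rotation angle. The circuit model and sample sizes are illustrative assumptions.

```python
# A rough numerical sketch of the barren-plateau effect: for sufficiently
# random circuits (approximated here by Haar-random unitaries U, V), the
# variance of the gradient of <Z_0> with respect to one rotation angle
# shrinks quickly with the number of qubits.
import numpy as np
from scipy.stats import unitary_group

def grad_sample(n_qubits):
    d = 2 ** n_qubits
    U, V = unitary_group.rvs(d), unitary_group.rvs(d)
    Z0 = np.kron(np.diag([1.0, -1.0]), np.eye(d // 2))  # Z on the first qubit
    G = Z0            # generator of an RZ rotation placed between U and V
    psi = U[:, 0]     # U|0...0>
    M = V.conj().T @ Z0 @ V
    # d/dt <psi| e^{i t G/2} M e^{-i t G/2} |psi> at t=0 = (i/2) <psi|[G,M]|psi>
    return (0.5j * (psi.conj() @ (G @ M - M @ G) @ psi)).real

for n in (2, 4, 6):
    grads = [grad_sample(n) for _ in range(200)]
    print(n, np.var(grads))  # variance drops quickly as qubits are added
```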
... The quantum state of the particle is manipulated by microwave for computation [43]. A series of operations is performed on the particle for information processing. ...
Preprint
Full-text available
In this noisy intermediate-scale quantum (NISQ) era, there are two types of near-term quantum devices available on the cloud: superconducting quantum processing units (QPUs) based on the discrete-variable model, and linear-optics (photonic) QPUs based on the continuous-variable (CV) model. Quantum computation in the discrete-variable model is performed in a finite-dimensional quantum state space, and in the CV model in an infinite-dimensional space. In implementing quantum algorithms, the CV model offers quantum gates that are not available in the discrete-variable model. CV-based photonic quantum computers also provide additional flexibility in controlling the length of the output vectors of quantum circuits, using different methods of measurement and the notion of a cutoff dimension.
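A small sketch of the "cutoff dimension" idea mentioned above: CV states live in an infinite-dimensional Fock space, so classical simulation (and some photonic interfaces) truncate it at a cutoff. The code below is our illustration in plain NumPy/SciPy rather than any photonic SDK, and the cutoff and amplitude are arbitrary choices.

```python
# The CV "cutoff dimension": build a displacement operator in a truncated
# Fock space and check that a coherent state is well represented below the
# cutoff. Cutoff and displacement amplitude are illustrative.
import numpy as np
from scipy.linalg import expm

cutoff = 12                                   # Fock-space truncation
a = np.diag(np.sqrt(np.arange(1, cutoff)), k=1)  # annihilation operator

alpha = 1.2                                   # displacement amplitude
D = expm(alpha * a.conj().T - np.conjugate(alpha) * a)  # D(alpha)

vac = np.zeros(cutoff); vac[0] = 1.0
coherent = D @ vac                            # approximates |alpha> up to the cutoff
print(np.abs(coherent[:5]) ** 2)              # Poisson photon-number statistics
print(np.sum(np.abs(coherent) ** 2))          # ~1 if the cutoff is large enough
```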
... We conjecture, and aim to show in future works, that one can come up with novel advantageous systems and mechanisms of Quantum Neural Computation, and more broadly with Interacting Systems Developing Quantum Intelligent Behaviour, using QMM-UEs. In the former case, we envision developing advantageous, qualitatively novel kinds of Quantum Artificial Neurons and Quantum Neural Networks, including in the data-driven context of Quantum Machine Learning [118][119][120][121][122][123][124][125][126][127][128][129]. In the much broader latter category, we envision their natural applications in developing or modeling diverse (quantum or classical) interacting many-body systems which, without any underlying or explicit kind of neural network structure, develop and feature various forms and hierarchical levels of intelligent quantum behaviour. ...
Preprint
Full-text available
The 2nd edition of our paper on novel Structural Interactions between Quantum Behavior and Intelligent Behavior. Chapter 6 has been renewed with a broad overview of the nature and the applications of these novel behavioral phases in the proposed closed and open quantum systems. My special thanks to the research team for a long collaboration on this amazing project!
... In recent years, many efforts have been directed towards developing new algorithms combining machine learning and quantum information tools, in a new research field known as quantum machine learning (QML) (Schuld et al. 2015; Wittek 2014; Adcock et al. 2015; Arunachalam and de Wolf 2017; Biamonte et al. 2017), mostly in the supervised ...
Article
Full-text available
Quantum machine learning (QML) is a young but rapidly growing field where quantum information meets machine learning. Here, we introduce a new QML model generalising the classical concept of reinforcement learning to the quantum domain, i.e. quantum reinforcement learning (QRL). In particular, we apply this idea to the maze problem, where an agent has to learn the optimal set of actions in order to escape from a maze with the highest success probability. To perform the strategy optimisation, we consider a hybrid protocol where QRL is combined with classical deep neural networks. In particular, we find that the agent learns the optimal strategy in both the classical and quantum regimes, and we also investigate its behaviour in a noisy environment. It turns out that the quantum speedup robustly allows the agent to exploit useful actions also at very short time scales, with key roles played by quantum coherence and external noise. This new framework has high potential to be applied to different tasks (e.g. high transmission/processing rates and quantum error correction) in the new generation of noisy intermediate-scale quantum (NISQ) devices, whose topology engineering is starting to become a new and crucial control knob for practical applications to real-world problems. This work is dedicated to the memory of Peter Wittek.
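The quantum protocol itself is not spelled out in this summary, but the underlying task is standard reinforcement learning. As a classical point of reference (our sketch; the paper's QRL agent replaces the policy and update with quantum resources), here is tabular Q-learning escaping a tiny maze. The layout, rewards, and hyperparameters are made up.

```python
# A classical tabular Q-learning sketch of the maze-escape task; the paper's
# quantum (QRL) version replaces the learning machinery with quantum resources.
# Maze layout, rewards, and hyperparameters are illustrative.
import numpy as np

maze = np.array([[0, 0, 0],
                 [1, 1, 0],     # 1 = wall
                 [0, 0, 0]])
goal = (2, 2)                                    # exit cell
actions = [(-1, 0), (1, 0), (0, -1), (0, 1)]     # N, S, W, E

rng = np.random.default_rng(1)
Q = np.zeros(maze.shape + (4,))
alpha, gamma, eps = 0.5, 0.9, 0.2

for _ in range(500):                    # training episodes
    s = (0, 0)
    for _ in range(50):                 # step cap per episode
        a = rng.integers(4) if rng.random() < eps else int(np.argmax(Q[s]))
        r, c = s[0] + actions[a][0], s[1] + actions[a][1]
        if not (0 <= r < 3 and 0 <= c < 3) or maze[r, c] == 1:
            r, c = s                    # blocked move: stay put
        reward = 1.0 if (r, c) == goal else -0.01
        Q[s][a] += alpha * (reward + gamma * Q[r, c].max() - Q[s][a])
        s = (r, c)
        if s == goal:
            break

print(int(np.argmax(Q[0, 0])))  # learned first action from the start cell
```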
... "Barren plateau" is a term referring to the suppression of variational-angle updates during the gradient descent dynamics of quantum machine learning. When the variational ansätze for variational quantum simulation, variational quantum optimization, and quantum machine learning [51][52][53][54][55][56][57][58][59][60][61][62][63] are random enough, the gradient descent updates of the variational angles will be suppressed by the dimension of the Hilbert space, requiring exponential precision to implement quantum control of the variational angles [23]. The quadratic fluctuations considered in [21] will be suppressed under an assumption of 2-design, which is claimed to be satisfied by their hardware-efficient variational ansätze. ...
Preprint
We develop numerical protocols for estimating the frame potential, the 2-norm distance between a given ensemble and exact Haar randomness, using the QTensor platform. Our tensor-network-based algorithm has polynomial complexity for shallow circuits and performs well using CPU and GPU parallelism. We apply the above methods to two problems: the Brown-Susskind conjecture, with local and parallel random circuits in terms of the Haar distance, and the approximate $k$-design properties of the hardware-efficient ansätze in quantum machine learning, which induce the barren plateau problem. We estimate frame potentials with these ensembles up to 50 qubits and $k=5$, examine the Haar distance of the hardware-efficient ansätze, and verify the Brown-Susskind conjecture numerically. Our work shows that large-scale tensor network simulations could provide important hints toward open problems in quantum information science.
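For orientation, the frame potential of an ensemble of unitaries is $F^{(k)} = \mathbb{E}_{U,V}|\mathrm{Tr}(U^\dagger V)|^{2k}$, which equals $k!$ for the exact Haar ensemble (for $k \le d$), so the excess over $k!$ quantifies the distance to Haar randomness. The dense-matrix Monte Carlo sketch below is ours and is only feasible for a few qubits; the paper's tensor-network protocol is what scales to 50 qubits.

```python
# Monte Carlo estimate of the frame potential F^(k) = E_{U,V} |Tr(U^dag V)|^(2k).
# For the Haar ensemble F^(k) = k! (for k <= d); sample counts are illustrative.
import numpy as np
from scipy.stats import unitary_group
from math import factorial

def frame_potential(sample_unitary, k, n_pairs=2000):
    vals = []
    for _ in range(n_pairs):
        U, V = sample_unitary(), sample_unitary()
        vals.append(np.abs(np.trace(U.conj().T @ V)) ** (2 * k))
    return np.mean(vals)

d = 2 ** 3                              # 3 qubits, dense matrices only
haar = lambda: unitary_group.rvs(d)
for k in (1, 2):
    print(k, frame_potential(haar, k), factorial(k))  # estimate ~ k!
```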
... Quantum machine learning aims to find ways to exploit the features of quantum mechanics for machine learning purposes [9][10][11][12]. In the context of quantum associative memory (AM), generalizations of classical models are mainly based on the quantized version of the Hopfield neural network (HNN) [13][14][15][16], where binary systems are replaced by quantum spins, and where the necessary dissipative dynamics are provided by the interaction with some external bath (which can also encode the learning rule [17]). ...
Preprint
Full-text available
Algorithms for associative memory typically rely on a network of many connected units. The prototypical example is the Hopfield model, whose generalizations to the quantum realm are mainly based on open quantum Ising models. We propose a realization of associative memory with a single driven-dissipative quantum oscillator exploiting its infinite degrees of freedom in phase space. The model can improve the storage capacity of discrete neuron-based systems in a large regime and we prove successful state discrimination between $n$ coherent states, which represent the stored patterns of the system. These can be tuned continuously by modifying the driving strength, constituting a modified learning rule. We show that the associative-memory capacity is inherently related to the existence of a spectral gap in the Liouvillian superoperator, which results in a large timescale separation in the dynamics corresponding to a metastable phase.
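The intuition for storing patterns in coherent states can be seen from their overlap, $|\langle\alpha|\beta\rangle|^2 = e^{-|\alpha-\beta|^2}$: patterns placed far apart in phase space are nearly orthogonal and hence discriminable. A quick check (our illustration; the circular placement and radius are arbitrary choices, not the paper's learning rule):

```python
# Why coherent states can act as stored patterns: their pairwise overlap
# |<alpha|beta>|^2 = exp(-|alpha - beta|^2) decays fast with phase-space
# distance, so well-separated states are nearly orthogonal.
import numpy as np

def overlap2(alpha, beta):
    return np.exp(-np.abs(alpha - beta) ** 2)  # |<alpha|beta>|^2

n, radius = 4, 2.5                             # 4 patterns on a circle
patterns = radius * np.exp(2j * np.pi * np.arange(n) / n)

gram = np.array([[overlap2(a, b) for b in patterns] for a in patterns])
print(np.round(gram, 4))   # ~identity: the patterns are close to distinguishable
```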
... where x is the input data related to the computation and θ is a set of free variables for adaptive optimization. Variational methods have shown huge potential in applications such as quantum ML [6], [59]–[61], numerical analysis [62], [63], quantum simulation [3], [4], [64]–[66], and optimization [8], [67]. ...
Article
The classification of big data usually requires a mapping onto new data clusters which can then be processed by machine learning algorithms by means of more efficient and feasible linear separators. Recently, Lloyd et al. have advanced the proposal to embed classical data into quantum states: these live in a larger Hilbert space where they can be split into linearly separable clusters. Here, these ideas are implemented by engineering two different experimental platforms, based on quantum optics and ultra-cold atoms, respectively, where we adapt and numerically optimize the quantum embedding protocol by deep learning methods and test it on some trial classical data. A similar analysis is also performed on the Rigetti superconducting quantum computer. It is found that the quantum embedding approach successfully works also at the experimental level and, in particular, we show how different platforms could work in a complementary fashion to achieve this task. These studies might pave the way for future investigations of quantum machine learning techniques, especially those based on hybrid quantum technologies. In summary, the concept of quantum embedding, that is, mapping classical data to be classified into a large quantum Hilbert space, is experimentally implemented by engineering two different platforms, based on quantum optics and ultra-cold atoms, and also tested on the Rigetti superconducting quantum computer, hence paving the way for future investigations of quantum machine learning via hybrid quantum technologies.
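A classically simulated cartoon of the embedding idea (ours; the paper's embedding is optimized by deep learning and run on real hardware): map each point $x$ to a state $|\psi(x)\rangle$ and classify with the induced kernel $k(x,x') = |\langle\psi(x)|\psi(x')\rangle|^2$. The one-qubit angle encoding below is an illustrative stand-in for the optimized embedding.

```python
# Quantum embedding, classically simulated: data point x -> state |psi(x)>,
# classification via the induced kernel k(x, x') = |<psi(x)|psi(x')>|^2.
# The one-qubit angle encoding and toy data are illustrative choices.
import numpy as np

def embed(x):
    # |psi(x)> = RY(x)|0> : angle encoding of a scalar feature
    return np.array([np.cos(x / 2), np.sin(x / 2)])

def kernel(x1, x2):
    return np.abs(embed(x1) @ embed(x2)) ** 2

X = np.array([0.1, 0.4, 2.8, 3.0])       # two clusters of scalar data
y = np.array([0, 0, 1, 1])
K = np.array([[kernel(a, b) for b in X] for a in X])
print(np.round(K, 3))                    # block structure separates the classes
```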
Chapter
In this article we approach an extended Job Shop Scheduling Problem (JSSP). The goal is to create an optimized duty roster for a set of workpieces to be processed in a flexibly organized workshop, where the workpieces are transported by one or more Autonomous Ground Vehicles (AGVs) that are included in the planning. We approach this extended, more complex variant of the JSSP (still NP-complete) using Constraint Programming (CP) and Quantum Annealing (QA) as competing methods. We present and discuss: a) the results of our classical solution based on CP modeling, and b) the results of modeling the problem as a quadratic unconstrained binary optimisation (QUBO) solved with hybrid quantum annealers from D-Wave, as well as with tabu search on current CPUs. The insight we gain from these experiments is that solving QUBO models may lead to solutions where some immediate improvement is achievable through straightforward, polynomial-time postprocessing. Furthermore, QUBO proves to be an approachable modelling alternative to expert CP modelling, as it was possible to obtain similar results for medium-sized problems, albeit requiring more computing power. While we show that our CP approach currently scales better with increasing problem size than hybrid Quantum Annealing, the number of qubits available for direct QA is increasing as well and might eventually change the winning method.
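For readers unfamiliar with the QUBO form used above: the task is to minimize $x^{\mathsf{T}} Q x$ over binary vectors $x$. The sketch below pairs a made-up four-variable instance (not an actual JSSP encoding) with a bare-bones tabu search of the kind the authors ran on CPUs; matrix values, tenure, and iteration counts are illustrative.

```python
# QUBO: minimize x^T Q x over binary x, here with a made-up 4-variable
# instance and a minimal single-bit-flip tabu search.
import numpy as np

Q = np.array([[-3.0, 2.0, 0.0, 1.0],
              [ 0.0,-2.0, 2.0, 0.0],
              [ 0.0, 0.0,-4.0, 3.0],
              [ 0.0, 0.0, 0.0,-1.0]])

def energy(x):
    return x @ Q @ x

def tabu_search(n_vars=4, iters=100, tenure=3, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.integers(0, 2, n_vars)
    best, best_e = x.copy(), energy(x)
    tabu = {}                           # variable -> iteration at which it frees up
    for t in range(iters):
        cands = []
        for i in range(n_vars):        # best non-tabu single-bit flip;
            y = x.copy(); y[i] ^= 1    # aspiration: allow it if it beats the best
            e = energy(y)
            if tabu.get(i, -1) <= t or e < best_e:
                cands.append((e, i, y))
        e, i, x = min(cands)
        tabu[i] = t + tenure
        if e < best_e:
            best, best_e = x.copy(), e
    return best, best_e

print(tabu_search())   # e.g. (array([1, 0, 1, 0]), -7.0)
```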
Preprint
Full-text available
Quantum machine learning has proven to be a fruitful area in which to search for potential applications of quantum computers. This is particularly true for those available in the near term, so-called noisy intermediate-scale quantum (NISQ) devices. In this thesis, we develop and study three quantum machine learning applications suitable for NISQ computers, ordered in terms of the increasing complexity of the data presented to them. These algorithms are variational in nature and use parameterised quantum circuits (PQCs) as the underlying quantum machine learning model. The first application area is quantum classification using PQCs, where the data is classical feature vectors and their corresponding labels. Here, we study the robustness of certain data encoding strategies in such models against the noise present in a quantum computer. The second area is generative modelling using quantum computers, where we use quantum circuit Born machines to learn and sample from complex probability distributions. We discuss and present a framework for quantum advantage for such models, propose gradient-based training methods, and demonstrate these both numerically and on the Rigetti quantum computer with up to 28 qubits. For our final application, we propose a variational algorithm in the area of approximate quantum cloning, where the data becomes quantum in nature. For this algorithm, we derive differentiable cost functions, prove theoretical guarantees such as faithfulness, and incorporate state-of-the-art methods such as quantum architecture search. Furthermore, we demonstrate how this algorithm is useful in discovering novel implementable attacks on quantum cryptographic protocols, focusing on quantum coin flipping and key distribution as examples.
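Of the three applications, the quantum circuit Born machine is the easiest to sketch: a parameterized circuit defines a model distribution $p_\theta(x) = |\langle x|\psi(\theta)\rangle|^2$ that is trained toward a target. The two-qubit simulation below is our toy (finite-difference training and a squared-error loss), not the thesis's gradient methods or its 28-qubit Rigetti experiments.

```python
# A simulated quantum circuit Born machine: a parameterized two-qubit circuit
# defines p_theta(x) = |<x|psi(theta)>|^2, trained by finite differences toward
# a target distribution. Circuit shape, loss, and target are illustrative.
import numpy as np

CNOT = np.array([[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0]], dtype=float)

def ry(t):
    return np.array([[np.cos(t / 2), -np.sin(t / 2)], [np.sin(t / 2), np.cos(t / 2)]])

def born_probs(theta):
    psi = np.zeros(4); psi[0] = 1.0
    psi = np.kron(ry(theta[0]), ry(theta[1])) @ psi   # local rotations
    psi = CNOT @ psi                                  # entangling layer
    return psi ** 2                                   # Born rule (real amplitudes)

target = np.array([0.5, 0.0, 0.0, 0.5])               # a Bell-like distribution

def loss(theta):
    return np.sum((born_probs(theta) - target) ** 2)

theta, lr, eps = np.array([0.3, -0.2]), 0.5, 1e-4
for _ in range(300):
    g = np.array([(loss(theta + eps * np.eye(2)[i]) - loss(theta - eps * np.eye(2)[i]))
                  / (2 * eps) for i in range(2)])
    theta -= lr * g

print(np.round(born_probs(theta), 3))    # ~[0.5, 0, 0, 0.5]
samples = np.random.default_rng(0).choice(4, size=10, p=born_probs(theta))
print(samples)                           # bitstrings 0..3 sampled from the model
```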
Article
It is clear that the learning speed of feedforward neural networks is in general far slower than required, and this has been a major bottleneck in their applications for past decades. Two key reasons behind this may be: (1) slow gradient-based learning algorithms are extensively used to train neural networks, and (2) all the parameters of the networks are tuned iteratively by such learning algorithms. Unlike these conventional implementations, this paper proposes a new learning algorithm called the extreme learning machine (ELM) for single-hidden-layer feedforward neural networks (SLFNs), which randomly chooses hidden nodes and analytically determines the output weights of SLFNs. In theory, this algorithm tends to provide good generalization performance at extremely fast learning speed. The experimental results, based on a few artificial and real benchmark function approximation and classification problems including very large complex applications, show that the new algorithm can produce good generalization performance in most cases and can learn thousands of times faster than conventional popular learning algorithms for feedforward neural networks.
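The algorithm is short enough to state in code. A minimal sketch (the toy data and layer sizes are ours): draw the hidden weights at random, never train them, and fit only the output weights in closed form by least squares.

```python
# A minimal extreme learning machine (ELM): random, untrained hidden weights;
# output weights fit analytically by least squares. Data/sizes are illustrative.
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D regression data: y = sin(x) plus noise.
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X[:, 0]) + 0.05 * rng.normal(size=200)

n_hidden = 50
W = rng.normal(size=(1, n_hidden))      # random input weights (never updated)
b = rng.normal(size=n_hidden)           # random biases

H = np.tanh(X @ W + b)                  # hidden-layer activations
beta, *_ = np.linalg.lstsq(H, y, rcond=None)  # analytic output weights

y_hat = H @ beta
print(np.mean((y_hat - y) ** 2))        # small training MSE, no iterative tuning
```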
Chapter
This chapter reproduces the English translation by B. Seckler of the paper by Vapnik and Chervonenkis in which they gave proofs for the innovative results they had obtained in a draft form in July 1966 and announced in 1968 in their note in Soviet Mathematics Doklady. The paper was first published in Russian in Teoriya Veroyatnostei i ee Primeneniya 16(2), 264-279 (1971).
Article
We propose a novel approach for categorizing text documents based on the use of a special kernel. The kernel is an inner product in the feature space generated by all subsequences of length k. A subsequence is any ordered sequence of k characters occurring in the text though not necessarily contiguously. The subsequences are weighted by an exponentially decaying factor of their full length in the text, hence emphasising those occurrences that are close to contiguous. A direct computation of this feature vector would involve a prohibitive amount of computation even for modest values of k, since the dimension of the feature space grows exponentially with k. The paper describes how despite this fact the inner product can be efficiently evaluated by a dynamic programming technique. Experimental comparisons of the performance of the kernel compared with a standard word feature space kernel (Joachims, 1998) show positive results on modestly sized datasets. The case of contiguous subsequences is also considered for comparison with the subsequences kernel with different decay factors. For larger documents and datasets the paper introduces an approximation technique that is shown to deliver good approximations efficiently for large datasets.
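A direct, memoized implementation of the recursion from the paper; we use plain recursion rather than the paper's dynamic-programming tables, which is adequate for short strings (the strings and decay factor below are illustrative):

```python
# String subsequence kernel K_n(s, t) (Lodhi et al. 2002): subsequences of
# length n, weighted by lam to the power of their spread in each string.
from functools import lru_cache

def ssk(s, t, n, lam=0.5):
    @lru_cache(maxsize=None)
    def kprime(i, ls, lt):
        # K'_i on prefixes s[:ls], t[:lt] (weights run to the prefix ends)
        if i == 0:
            return 1.0
        if min(ls, lt) < i:
            return 0.0
        x = s[ls - 1]
        total = lam * kprime(i, ls - 1, lt)
        for j in range(lt):                  # matches of x inside t[:lt]
            if t[j] == x:
                total += kprime(i - 1, ls - 1, j) * lam ** (lt - j + 1)
        return total

    @lru_cache(maxsize=None)
    def k(ls, lt):
        if min(ls, lt) < n:
            return 0.0
        x = s[ls - 1]
        total = k(ls - 1, lt)
        for j in range(lt):
            if t[j] == x:
                total += kprime(n - 1, ls - 1, j) * lam ** 2
        return total

    return k(len(s), len(t))

lam = 0.5
k_st = ssk("cat", "car", 2, lam)
print(k_st)                                        # lam**4 = 0.0625: only "ca" is shared
norm = ssk("cat", "cat", 2, lam) * ssk("car", "car", 2, lam)
print(k_st / norm ** 0.5)                          # normalized kernel value in [0, 1]
```

The normalization in the last line, $K(s,t)/\sqrt{K(s,s)K(t,t)}$, is the standard way to remove the dependence on document length.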
Article
The tutorial starts with an overview of the concepts of VC dimension and structural risk minimization. We then describe linear Support Vector Machines (SVMs) for separable and non-separable data, working through a non-trivial example in detail. We describe a mechanical analogy, and discuss when SVM solutions are unique and when they are global. We describe how support vector training can be practically implemented, and discuss in detail the kernel mapping technique which is used to construct SVM solutions which are nonlinear in the data. We show how Support Vector machines can have very large (even infinite) VC dimension by computing the VC dimension for homogeneous polynomial and Gaussian radial basis function kernels. While very high VC dimension would normally bode ill for generalization performance, and while at present there exists no theory which shows that good generalization performance is guaranteed for SVMs, there are several arguments which support the observed high accuracy of SVMs, which we review. Results of some experiments which were inspired by these arguments are also presented. We give numerous examples and proofs of most of the key theorems. There is new material, and I hope that the reader will find that even old material is cast in a fresh light.
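A minimal working example of the tutorial's subject, using scikit-learn's SVC (our dependency choice; the tutorial itself is library-agnostic): a soft-margin SVM with a Gaussian RBF kernel separating two concentric rings that no linear separator can handle.

```python
# A soft-margin SVM with a Gaussian (RBF) kernel on a non-linearly-separable
# toy problem: two noisy concentric rings. Data and hyperparameters are toy.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def ring(radius, n):
    angles = rng.uniform(0, 2 * np.pi, n)
    pts = radius * np.c_[np.cos(angles), np.sin(angles)]
    return pts + 0.1 * rng.normal(size=pts.shape)

X = np.vstack([ring(1.0, 100), ring(3.0, 100)])
y = np.array([0] * 100 + [1] * 100)

clf = SVC(kernel="rbf", C=1.0, gamma="scale").fit(X, y)
print(clf.score(X, y))                 # ~1.0: the kernel makes the rings separable
print(len(clf.support_))               # number of support vectors
```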
Article
This paper combines quantum computation with classical neural network theory to produce a quantum computational learning algorithm. Quantum computation uses microscopic quantum-level effects to perform computational tasks and has produced results that in some cases are exponentially faster than their classical counterparts. The unique characteristics of quantum theory may also be used to create a quantum associative memory with a capacity exponential in the number of neurons. This paper combines two quantum computational algorithms to produce such a quantum associative memory. The result is an exponential increase in the capacity of the memory when compared to traditional associative memories such as the Hopfield network. The paper covers the necessary high-level quantum mechanical and quantum computational ideas and introduces a quantum associative memory. Theoretical analysis proves the utility of the memory, and it is noted that a small version should be physically realizable in the near future.
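For contrast with the claimed exponential capacity, here is the classical baseline: a Hopfield network with Hebbian storage, whose capacity scales only linearly (roughly 0.14N patterns for N neurons). The sketch is ours; the pattern count and corruption level are illustrative.

```python
# The classical baseline: a Hopfield network with Hebbian weights and
# sign-threshold recall. Sizes and corruption level are illustrative.
import numpy as np

rng = np.random.default_rng(0)
N, P = 64, 4
patterns = rng.choice([-1, 1], size=(P, N))     # stored memories

W = (patterns.T @ patterns) / N                 # Hebbian weight matrix
np.fill_diagonal(W, 0)                          # no self-connections

def recall(x, steps=10):
    for _ in range(steps):
        x = np.sign(W @ x)                      # synchronous update
        x[x == 0] = 1
    return x

probe = patterns[0].copy()
flip = rng.choice(N, size=8, replace=False)
probe[flip] *= -1                               # corrupt 8 of 64 bits

out = recall(probe)
print(np.mean(out == patterns[0]))              # ~1.0: the stored pattern is retrieved
```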