Book · PDF available

Quantum Machine Learning: What Quantum Computing Means to Data Mining

Authors: Peter Wittek

Abstract

Quantum Machine Learning bridges the gap between abstract developments in quantum computing and the applied research on machine learning. Paring down the complexity of the disciplines involved, it focuses on providing a synthesis that explains the most important machine learning algorithms in a quantum framework. Theoretical advances in quantum computing are hard to follow for computer scientists, and sometimes even for researchers involved in the field. The lack of a step-by-step guide hampers the broader understanding of this emergent interdisciplinary body of research. Quantum Machine Learning sets the scene for a deeper understanding of the subject for readers of different backgrounds. The author has carefully constructed a clear comparison of classical learning algorithms and their quantum counterparts, thus making differences in computational complexity and learning performance apparent. This book synthesizes a broad array of research into a manageable and concise presentation, with practical examples and applications.
... Goodfellow et al. [31] position machine learning as a sub-field of artificial intelligence, concerned with building algorithms that rely on a collection of examples of some phenomenon in order to be useful. A few authors instead define machine learning as a broad collection of algorithms that learn patterns over feature spaces [32], [33]. Clear commonalities have been agreed upon, such as the importance of learning from patterns inherent within data [30], [31] via automatic processes [32], [34] without explicit programming [35], and the ability of such algorithms to improve performance based on the experience or data they are exposed to. ...
... The terms "quantum-inspired" and "quantum-like" machine learning were, in early reviews, used to describe optimization techniques inspired by quantum phenomena and run on classical computers [33], likely in the absence of parameterized, iterative pattern-recognition models more akin to classical machine learning methods. Other authors have corroborated this idea, with more explicit mention of machine learning rather than optimization [6], [7], [9], [10]. ...
Preprint
Full-text available
Quantum-inspired Machine Learning (QiML) is a burgeoning field, receiving global attention from researchers for its potential to leverage principles of quantum mechanics within classical computational frameworks. However, current review literature often presents a superficial exploration of QiML, focusing instead on the broader Quantum Machine Learning (QML) field. In response to this gap, this survey provides an integrated and comprehensive examination of QiML, exploring its diverse research domains, including tensor network simulations, dequantized algorithms, and others, showcasing recent advancements and practical applications, and illuminating potential future research avenues. Further, a concrete definition of QiML is established by analyzing various prior interpretations of the term and their inherent ambiguities. As QiML continues to evolve, we anticipate a wealth of future developments drawing from quantum mechanics, quantum computing, and classical machine learning, enriching the field further. This survey serves as a guide for researchers and practitioners alike, providing a holistic understanding of QiML's current landscape and future directions.
... In recent years, there has been growing interest in the applications of machine learning to the physical sciences [1]. One particular area that has received considerable attention is quantum machine learning [1][2][3][4][5][6][7][8][9][10][11][12][13][14][15][16][17][18][19][20]. In quantum machine learning, evaluating a quantum kernel is analogous to computing a classical kernel in classical machine learning. ...
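To make that analogy concrete, here is a minimal statevector sketch (our illustration, not taken from the cited works), assuming a simple angle-encoding feature map: the quantum kernel between two data points is the squared overlap |<phi(x)|phi(y)>|^2 of their encoded states, and the resulting Gram matrix can be passed to any classical kernel method.

```python
import numpy as np

def feature_state(x):
    """Angle-encode a real feature vector, one qubit per feature:
    |phi(x)> is the tensor product of RY(x_i)|0> = (cos(x_i/2), sin(x_i/2))."""
    state = np.array([1.0])
    for xi in x:
        state = np.kron(state, np.array([np.cos(xi / 2), np.sin(xi / 2)]))
    return state

def quantum_kernel(x, y):
    """Fidelity-style quantum kernel: k(x, y) = |<phi(x)|phi(y)>|^2."""
    return abs(feature_state(x) @ feature_state(y)) ** 2

X = np.array([[0.1, 0.5], [1.2, 0.3], [0.9, 2.0]])
K = np.array([[quantum_kernel(a, b) for b in X] for a in X])
print(K)  # a symmetric Gram matrix with ones on the diagonal
```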
Article
One of the fastest growing areas of interest in quantum computing is its use within machine learning methods, in particular through the application of quantum kernels. Despite this large interest, there exist very few proposals for relevant physical platforms to evaluate quantum kernels. In this article, we propose and simulate a protocol capable of evaluating quantum kernels using Hong-Ou-Mandel (HOM) interference, an experimental technique that is widely accessible to optics researchers. Our proposal utilises the orthogonal temporal modes of a single photon, allowing one to encode multi-dimensional feature vectors. As a result, by interfering two photons and using the detected coincidence counts, we can perform a direct measurement and binary classification. This physical platform confers an exponential quantum advantage that has also been described theoretically in other works. We present a complete description of this method and perform a numerical experiment to demonstrate a sample application for binary classification of classical data.
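A schematic numerical sketch of the measurement principle (our simplification, not the authors' full protocol): for two single photons meeting at a balanced beam splitter, the standard HOM relation gives a coincidence probability P_c = (1 - |<f|g>|^2)/2, so detected coincidence counts directly estimate the kernel value |<f|g>|^2. The mode amplitudes and shot count below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(x):
    """Encode a classical feature vector as amplitudes over the orthogonal
    temporal modes of a single photon (normalized to a valid state)."""
    v = np.asarray(x, dtype=float)
    return v / np.linalg.norm(v)

def hom_coincidence_prob(f, g):
    """Hong-Ou-Mandel: coincidence probability at a balanced beam splitter."""
    return 0.5 * (1.0 - abs(f @ g) ** 2)

def estimated_kernel(x, y, shots=100_000):
    """Estimate the kernel |<f|g>|^2 from simulated coincidence counts."""
    counts = rng.binomial(shots, hom_coincidence_prob(encode(x), encode(y)))
    return 1.0 - 2.0 * counts / shots

x, y = [0.2, 0.9, 0.4], [0.3, 0.8, 0.5]
print(estimated_kernel(x, y))  # close to the squared normalized inner product
```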
... Both fields, QC and ML, have lately converged towards a new discipline, Quantum Machine Learning (QML) [7], which brings together concepts from both fields to provide enhanced solutions, either improving ML algorithms, quantum experiments, or both. The basic hypothesis of this special session is that QML can be generalized by Quantum AI (QAI), akin to AI generalizing ML. ...
... One of the most promising applications of NISQ devices is Quantum Machine Learning (QML), a recent interdisciplinary field merging ML and quantum computing, i.e. one in which the data to be processed and/or the learning algorithms are quantum [44], [46][47][48]. Indeed, it involves the integration of ML techniques and quantum computers in order to process and subsequently analyze and learn the underlying data structure. ...
Preprint
Full-text available
Generative models realized with machine learning techniques are powerful tools for inferring complex and unknown data distributions from a finite number of training samples in order to produce new synthetic data. Diffusion models are an emerging framework that has recently overtaken generative adversarial networks in creating synthetic text and high-quality images. Here, we propose and discuss the quantum generalization of diffusion models, i.e., three quantum-noise-driven generative diffusion models that could be experimentally tested on real quantum systems. The idea is to harness unique quantum features, in particular the non-trivial interplay among coherence, entanglement, and the noise from which currently available noisy quantum processors unavoidably suffer, in order to overcome the main computational burdens of classical diffusion models during inference. Hence, we suggest exploiting quantum noise not as an issue to be detected and corrected but as a remarkably beneficial ingredient for generating much more complex probability distributions that would be difficult or even impossible to express classically, and from which a quantum processor might sample more efficiently than a classical one. Therefore, our results are expected to pave the way for new quantum-inspired or quantum-based generative diffusion algorithms that address classical tasks such as data generation/prediction more powerfully, with widespread real-world applications ranging from climate forecasting to neuroscience and from traffic-flow analysis to financial forecasting.
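The quantum models proposed in this preprint are not reproduced here; as background on the process being generalized, the sketch below shows the classical DDPM-style forward-noising step that a diffusion model learns to invert (the schedule values are illustrative).

```python
import numpy as np

rng = np.random.default_rng(0)
betas = np.linspace(1e-4, 0.02, 1000)   # a standard linear noise schedule
alpha_bars = np.cumprod(1.0 - betas)    # cumulative products abar_t

def forward_diffuse(x0, t):
    """Closed-form forward noising:
    x_t = sqrt(abar_t) * x0 + sqrt(1 - abar_t) * eps, with eps ~ N(0, I).
    Training teaches the model to reverse this corruption step by step."""
    eps = rng.standard_normal(x0.shape)
    return np.sqrt(alpha_bars[t]) * x0 + np.sqrt(1.0 - alpha_bars[t]) * eps

x0 = rng.standard_normal(8)            # a toy data sample
print(forward_diffuse(x0, t=999))      # essentially pure Gaussian noise
```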
Chapter
In this work we explore the use of Quantum Computing for Time Series forecasting. More specifically, we design Variational Quantum Circuits as the quantum analogue of feedforward Artificial Neural Networks, and use a quantum neural network pipeline to perform time series forecasting tasks. According to our experiments, Quantum Neural Networks are able to reduce prediction error while maintaining a lower number of parameters than their classical machine learning counterparts.
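As a minimal illustration of the approach (a one-qubit toy model of our own, not the authors' architecture), the sketch below simulates a variational circuit whose single trainable rotation is fitted with the parameter-shift rule, a standard way to obtain exact gradients of such circuits.

```python
import numpy as np

def ry(theta):
    """Single-qubit Y-rotation gate."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

def model(x, theta):
    """Variational circuit: encode the input with RY(x), apply a trainable
    RY(theta), and return the Pauli-Z expectation, here cos(x + theta)."""
    state = ry(theta) @ ry(x) @ np.array([1.0, 0.0])
    return state[0] ** 2 - state[1] ** 2

def grad(x, theta):
    """Parameter-shift rule: exact derivative of <Z> with respect to theta."""
    return 0.5 * (model(x, theta + np.pi / 2) - model(x, theta - np.pi / 2))

# Toy forecasting task: fit y = cos(x + 0.7) by gradient descent on theta.
xs = np.linspace(0, 2 * np.pi, 50)
ys = np.cos(xs + 0.7)
theta, lr = 0.0, 0.1
for _ in range(200):
    for x, y in zip(xs, ys):
        theta -= lr * 2 * (model(x, theta) - y) * grad(x, theta)
print(theta)  # converges near 0.7
```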
Chapter
In this chapter, the authors discuss some of the many models and training methods that have been developed in the field of machine learning to address this learning challenge. Models like neural networks have their own "go-to" training algorithms, such as stochastic gradient descent, each with its own set of supporting terminology and communities of experts. Since the specifics of gate decomposition, compilation, and error correction all depend heavily on the physical implementation of qubits and quantum gates, it has been difficult to design quantum hardware capable of running such algorithms. Therefore, the authors can only provide asymptotic estimates of total execution times. Since developing quantum hardware is so prohibitively expensive, researchers are incentivized to use terms like "superior quantum algorithms" to justify their work. This has given rise to the contentious term "quantum supremacy" to describe experiments that definitively show a difference between classical and quantum levels of computational complexity.
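For reference, the "go-to" training loop mentioned above is, at its core, stochastic gradient descent; a minimal sketch on a toy linear model (entirely illustrative) follows.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: y = X @ w_true + noise
X = rng.standard_normal((200, 3))
w_true = np.array([1.5, -2.0, 0.5])
y = X @ w_true + 0.1 * rng.standard_normal(200)

w, lr = np.zeros(3), 0.05
for epoch in range(20):
    for i in rng.permutation(len(X)):   # visit one random example per step
        err = X[i] @ w - y[i]
        w -= lr * err * X[i]            # gradient of the squared error 0.5*err^2
print(w)  # close to w_true
```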
Presentation
Full-text available
Using Quantum Machine Learning to Detect Lesions in the Brain
Article
It is clear that the learning speed of feedforward neural networks is in general far slower than required, and it has been a major bottleneck in their applications for the past decades. Two key reasons may be: (1) slow gradient-based learning algorithms are extensively used to train neural networks, and (2) all the parameters of the networks are tuned iteratively by such learning algorithms. Unlike these conventional implementations, this paper proposes a new learning algorithm called the extreme learning machine (ELM) for single-hidden-layer feedforward neural networks (SLFNs), which randomly chooses hidden nodes and analytically determines the output weights of SLFNs. In theory, this algorithm tends to provide good generalization performance at extremely fast learning speed. The experimental results, based on a few artificial and real benchmark function approximation and classification problems including very large complex applications, show that the new algorithm can produce good generalization performance in most cases and can learn thousands of times faster than conventional popular learning algorithms for feedforward neural networks.
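A minimal sketch of the ELM recipe summarized above, assuming a tanh hidden layer (the activation, layer size, and toy task are our illustrative choices): the hidden layer is random and never trained, and the output weights are computed analytically in a single step via the Moore-Penrose pseudoinverse.

```python
import numpy as np

rng = np.random.default_rng(0)

def elm_train(X, y, n_hidden=100):
    """Extreme Learning Machine: random, untrained hidden weights; the output
    weights are the least-squares solution, with no iterative tuning."""
    W = rng.standard_normal((X.shape[1], n_hidden))
    b = rng.standard_normal(n_hidden)
    H = np.tanh(X @ W + b)           # hidden-layer activations
    beta = np.linalg.pinv(H) @ y     # analytic output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Toy regression: learn y = sin(3x) on [0, pi]
X = np.linspace(0, np.pi, 300).reshape(-1, 1)
y = np.sin(3 * X).ravel()
W, b, beta = elm_train(X, y)
print(np.max(np.abs(elm_predict(X, W, b, beta) - y)))  # small training error
```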
Chapter
This chapter reproduces the English translation by B. Seckler of the paper by Vapnik and Chervonenkis in which they gave proofs for the innovative results they had obtained in draft form in July 1966 and announced in 1968 in their note in Soviet Mathematics Doklady. The paper was first published in Russian as Vapnik, V. N. and Chervonenkis, A. Ya., Teoriya Veroyatnostei i ee Primeneniya 16(2), 264-279 (1971).
Article
We propose a novel approach for categorizing text documents based on the use of a special kernel. The kernel is an inner product in the feature space generated by all subsequences of length k. A subsequence is any ordered sequence of k characters occurring in the text, though not necessarily contiguously. The subsequences are weighted by an exponentially decaying factor of their full length in the text, hence emphasising those occurrences that are close to contiguous. A direct computation of this feature vector would involve a prohibitive amount of computation even for modest values of k, since the dimension of the feature space grows exponentially with k. The paper describes how, despite this fact, the inner product can be efficiently evaluated by a dynamic programming technique. Experimental comparisons of the performance of the kernel with a standard word feature space kernel (Joachims, 1998) show positive results on modestly sized datasets. The case of contiguous subsequences is also considered for comparison with the subsequences kernel with different decay factors. For larger documents and datasets the paper introduces an approximation technique that is shown to deliver good approximations efficiently for large datasets.
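A compact sketch of that dynamic program (following the published recursions; the function names and example strings are our own), including the usual length normalization:

```python
from functools import lru_cache

def ssk(s, t, k, lam):
    """Gap-weighted subsequence kernel K_k(s, t) with decay factor lam,
    evaluated by recursion over the prefixes s[:m] and t[:n]."""
    @lru_cache(maxsize=None)
    def kp(i, m, n):                      # auxiliary K'_i of the paper
        if i == 0:
            return 1.0
        if min(m, n) < i:
            return 0.0
        x, total = s[m - 1], lam * kp(i, m - 1, n)
        for j in range(1, n + 1):         # match s's last character inside t
            if t[j - 1] == x:
                total += kp(i - 1, m - 1, j - 1) * lam ** (n - j + 2)
        return total

    @lru_cache(maxsize=None)
    def kk(m, n):                         # the kernel K_k itself
        if min(m, n) < k:
            return 0.0
        x, total = s[m - 1], kk(m - 1, n)
        for j in range(1, n + 1):
            if t[j - 1] == x:
                total += kp(k - 1, m - 1, j - 1) * lam ** 2
        return total

    return kk(len(s), len(t))

def ssk_normalized(s, t, k, lam=0.5):
    """Length-normalized kernel, as used for comparing documents."""
    return ssk(s, t, k, lam) / (ssk(s, s, k, lam) * ssk(t, t, k, lam)) ** 0.5

print(ssk_normalized("science is organized knowledge",
                     "wisdom is organized life", k=3))
```

This memoized form costs roughly O(k|s||t|^2); the paper's refined recursion reduces the evaluation to O(k|s||t|).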
Article
The tutorial starts with an overview of the concepts of VC dimension and structural risk minimization. We then describe linear Support Vector Machines (SVMs) for separable and non-separable data, working through a non-trivial example in detail. We describe a mechanical analogy, and discuss when SVM solutions are unique and when they are global. We describe how support vector training can be practically implemented, and discuss in detail the kernel mapping technique which is used to construct SVM solutions which are nonlinear in the data. We show how Support Vector machines can have very large (even infinite) VC dimension by computing the VC dimension for homogeneous polynomial and Gaussian radial basis function kernels. While very high VC dimension would normally bode ill for generalization performance, and while at present there exists no theory which shows that good generalization performance is guaranteed for SVMs, there are several arguments which support the observed high accuracy of SVMs, which we review. Results of some experiments which were inspired by these arguments are also presented. We give numerous examples and proofs of most of the key theorems. There is new material, and I hope that the reader will find that even old material is cast in a fresh light.
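The kernel mapping technique reviewed in the tutorial can be made concrete in a few lines (a standard textbook identity, not code from the tutorial itself): for the homogeneous polynomial kernel of degree 2 on R^2, the implicit feature space is just R^3, while the Gaussian RBF kernel's feature space is infinite-dimensional and only ever accessed implicitly.

```python
import numpy as np

def phi(x):
    """Explicit feature map for the degree-2 homogeneous polynomial kernel
    on R^2: k(x, y) = (x . y)^2 = <phi(x), phi(y)>, with phi mapping into R^3."""
    return np.array([x[0] ** 2, np.sqrt(2) * x[0] * x[1], x[1] ** 2])

def k_poly(x, y):
    return (x @ y) ** 2

def k_rbf(x, y, gamma=1.0):
    """Gaussian RBF kernel: its feature space is infinite-dimensional,
    so it is used only through this closed form."""
    return np.exp(-gamma * np.sum((x - y) ** 2))

x, y = np.array([1.0, 2.0]), np.array([3.0, -1.0])
print(k_poly(x, y), phi(x) @ phi(y))  # identical values: 1.0 and 1.0
```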
Article
This paper combines quantum computation with classical neural network theory to produce a quantum computational learning algorithm. Quantum computation uses microscopic quantum level effects to perform computational tasks and has produced results that in some cases are exponentially faster than their classical counterparts. The unique characteristics of quantum theory may also be used to create a quantum associative memory with a capacity exponential in the number of neurons. This paper combines two quantum computational algorithms to produce such a quantum associative memory. The result is an exponential increase in the capacity of the memory when compared to traditional associative memories such as the Hopfield network. The paper covers necessary high-level quantum mechanical and quantum computational ideas and introduces a quantum associative memory. Theoretical analysis proves the utility of the memory, and it is noted that a small version should be physically realizable in the near future.
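For contrast with the exponential capacity claimed above, here is a minimal sketch of the classical baseline the paper compares against, the Hopfield network, whose n binary neurons reliably store only on the order of 0.14n patterns (the patterns and sizes below are illustrative).

```python
import numpy as np

def hopfield_train(patterns):
    """Hebbian weight matrix of a Hopfield associative memory."""
    P = np.array(patterns, dtype=float)   # rows of +/-1 values
    W = P.T @ P / P.shape[1]
    np.fill_diagonal(W, 0.0)
    return W

def hopfield_recall(W, probe, steps=10):
    """Synchronous recall: iterate the sign rule from a noisy probe."""
    s = np.array(probe, dtype=float)
    for _ in range(steps):
        s = np.sign(W @ s)
        s[s == 0] = 1.0
    return s

stored = [[1, -1, 1, -1, 1, -1, 1, -1],
          [1, 1, 1, 1, -1, -1, -1, -1]]
W = hopfield_train(stored)
noisy = [1, -1, 1, -1, 1, -1, 1, 1]       # first pattern with one flipped bit
print(hopfield_recall(W, noisy))          # recovers the first stored pattern
```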