
Quantum mechanical Hamiltonian models of Turing machines

Authors: Paul Benioff

Abstract

Quantum mechanical Hamiltonian models, which represent an arbitrary but finite number of steps of any Turing machine computation, are constructed here on a finite lattice of spin-1/2 systems. Different regions of the lattice correspond to different components of the Turing machine (plus a recording system). Successive states of any machine computation are represented in the model by spin configuration states. Both time-independent and time-dependent Hamiltonian models are constructed here. The time-independent models do not dissipate energy or degrade the system state as they evolve. They operate close to the quantum limit in that the ratio of total system energy uncertainty to computation speed is close to the limit given by the time-energy uncertainty relation. However, their evolution is time-global and the Hamiltonian is more complex. The time-dependent models do not degrade the system state; they are also time-local and their Hamiltonian is less complex.
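As a hedged aside (the constant below is the standard textbook form; the abstract does not state it explicitly), the quantum limit referred to here follows from the time-energy uncertainty relation:

\[
\Delta E \,\Delta t \;\ge\; \frac{\hbar}{2}
\qquad\Longrightarrow\qquad
\frac{1}{\Delta t} \;\le\; \frac{2\,\Delta E}{\hbar},
\]

so the number of computation steps per unit time is bounded by the total energy uncertainty $\Delta E$, and a model whose step rate approaches $2\Delta E/\hbar$ operates close to the quantum limit.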
... Another aspect of computation is the analysis of the Turing machine (TM). Turing machines have had a deep impact on quantum information [26,27] and quantum computation [28][29][30]. Efforts to model Turing machines [19] and to establish limits on computation [18,54,55] have been analyzed. Real-world computing machines belong to the regime bounded by infinite dissipation on one side and the zero-energy limit on the other. ...
... Wave-packet spreading causes instability in the system in the quantum realm. Benioff [29] discussed a quantum version of the ballistic computer and proposed a way to mitigate the noise due to wave-packet spreading by utilizing a time-independent Hamiltonian. ...
... During the execution of the process, heat is extracted and converted into thermodynamic work. So over the whole cycle in which erasure and its reversal take place, we observe no net entropy change, as shown in Eq. (29). The procedure provokes an entropy decrease in the adiabatic box, which implies a time reversal for running the backward cycle. ...
Preprint
Full-text available
One of the primary motivations of research in the field of computation is to optimize the cost of computation. The major ingredient that a computer needs is the energy to run a process, i.e., the thermodynamic cost. The analysis of the thermodynamic cost of computation is one of the prime focuses of research. It dates back to the seminal work of Landauer, who observed that a computer spends k_B T ln 2 of energy to erase one bit of information (here T is the temperature of the system and k_B is Boltzmann's constant). The advancement of statistical mechanics has provided the tools needed to understand and analyze the thermodynamic cost of the complicated processes that exist in nature, including the computation performed by modern computers. The advancement of physics has helped us understand the connection between statistical mechanics (the thermodynamic cost) and computation. Another important concern in the field of computer science is the correction of errors that occur while transmitting information through a communication channel. In this article, we review the progress of the thermodynamics of computation, starting from Landauer's principle up to the latest models, which simulate modern complex computation mechanisms. After exploring the salient parts of computation in computer science theory and information theory, we review the thermodynamic cost of computation and error correction. We also discuss the alternative computation models that have been proposed as thermodynamically cost-efficient.
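As a quick numerical illustration (not from the reviewed article; a minimal sketch using only the Landauer formula quoted above, with room temperature as an assumed example value):

```python
import math

# Landauer bound quoted in the review: erasing one bit costs at least
# k_B * T * ln(2). T = 300 K is an assumed example value.
k_B = 1.380649e-23                       # Boltzmann constant, J/K (exact SI)
T = 300.0                                # temperature, K

E_min = k_B * T * math.log(2)
print(f"Landauer bound at {T:.0f} K: {E_min:.3e} J "
      f"= {E_min / 1.602176634e-19:.4f} eV per erased bit")
# -> about 2.871e-21 J, i.e. roughly 0.018 eV
```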
... A quantum-computing speedup process has the dual character that it obeys both the unitary quantum dynamics and the mathematical-logical principle of the computational problem to be solved. This is essentially different from a conventional quantum computation (algorithm) [32,33], which is a purely quantum-physical process [1]. It takes into account the quantum symmetry that is considered the fundamental quantum-computing-speedup resource. ...
... The largest difficulty facing conventional quantum computation [32,33] is perhaps that it is subject to the quadratic speedup limit on solving an unstructured search problem; moreover, most hard computational problems cannot be solved with an exponential quantum-computational speedup in conventional quantum computation. ...
... Research over the past decades on the non-local effects of entangled quantum states has further strengthened orthodox quantum mechanics. Conventional quantum computation [32,33] is based on orthodox quantum mechanics. ...
Preprint
Full-text available
The HSSS quantum search process has the dual character that it obeys both the unitary quantum dynamics and the mathematical-logical principle of the unstructured search problem. It is essentially different from a conventional quantum search algorithm. It is constructed with the duality-character oracle operations of the unstructured search problem. It consists of two consecutive steps: (1) the search-space dynamical reduction and (2) the dynamical quantum-state-difference amplification (QUANSDAM). The QUANSDAM process is directly constructed with the SIC unitary propagators, each of which is prepared from the basic SIC unitary operators. Here the preparation of the SIC unitary propagators of a single-atom system is carried out concretely, starting from the basic SIC unitary operators. The SIC unitary propagator of a quantum system may reflect the quantum symmetry of the system, while the basic SIC unitary operators may not. Quantum symmetry is considered the fundamental quantum-computing-speedup resource in the quantum-computing speedup theory. The purpose of the preparation is ultimately to employ the quantum symmetry to speed up the QUANSDAM process. The preparation is a solution-information transfer process: it is unitary and deterministic, and it obeys the information conservation law. In methodology it is based on the energy eigenfunction expansion and the multiple-quantum operator algebra space. Furthermore, a general theory, based mainly on the Feynman path-integration technique and the energy eigenfunction expansion method, is established to treat theoretically and to calculate the SIC unitary propagator of any quantum system in the coordinate representation, which may be further used to construct theoretically an exponential QUANSDAM process in the future.
... Turing machines), which he made operate with some of the fundamental principles of quantum mechanics. Between 1981 and 1982 Richard Feynman proposed using quantum phenomena to perform computational calculations, arguing that, given their nature, some calculations of great complexity would be carried out more quickly on a quantum computer. (Benioff, 1982) In 1985, David Deutsch described the first universal quantum computer, capable of simulating any other quantum computer (extended Church-Turing principle). From this arose the idea that a quantum computer could execute different quantum algorithms. (Benioff, 1982) First Quantum Algorithms (1990-1996) ...
... complexity would be carried out more quickly on a quantum computer. (Benioff, 1982) In 1985, David Deutsch described the first universal quantum computer, capable of simulating any other quantum computer (extended Church-Turing principle). From this arose the idea that a quantum computer could execute different quantum algorithms. (Benioff, 1982) First Quantum Algorithms (1990-1996): Over the course of the 1990s the theory began to be put into practice, and the first quantum algorithms, the first quantum applications, and the first machines capable of performing quantum computations appeared. (Barenco, 1995) In 1993, Dan Simon demonstrated the advantage that ... ...
... Over the years, technological developments have made it possible to optimize the response time of all kinds of operations; however, there is still a large number of mathematical calculations that would take decades to solve (e.g., determining whether a large number is prime or not). These types of complex calculations encouraged Paul Benioff [2,3,4] and Richard Feynman [5] to reconsider the idea of using computers under the principles of quantum physics, so they started working with classical computers that they adapted to operate with some of the main principles of quantum mechanics. Moreover, classical computing becomes harder to sustain every day because of the constant demand for miniaturization of electronic components; it is known that soon it will no longer be possible to shrink them further, and at that scale the laws of classical physics will no longer apply. ...
... .set(j, element); System.out.println("array" + array); ... The simulation calculates the probability as the amplitude raised to the second power using the findProbability() method: public double findProbability() { double probability = vector.get(value) / (Math.sqrt((Math.pow(2, numQubits)) ... In the createArray() method, a matrix of 2^n slots is created, obtaining the new array that will be sent to Grover's ...
Article
Full-text available
In classical computing there are multiple algorithms to efficiently locate a given element within an unorganized database; however, quantum computing can be applied more assertively to problems in which it is hard to verify a solution while also testing multiple possible solutions. This article therefore presents an introduction to quantum computing, developing some concepts of the quantum formalism, and then approaches Grover's algorithm, which exploits the principle of superposition to the maximum. Finally, a classical simulation of this algorithm is performed, and the results obtained are compared with classical algorithms such as the sequential search and binary search methods. A 95% greater effectiveness in time is obtained when solving the same search, revealing the potential advantages of quantum computing.
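As a hedged illustration (a minimal statevector sketch, not the article's own Java simulation), Grover's algorithm amplifies the marked amplitude with roughly (π/4)√N oracle calls:

```python
import math
import numpy as np

# Minimal statevector sketch of Grover's algorithm (illustrative only).
# Assumed parameters: N = 2**n database slots, a single marked index.
n, marked = 4, 11
N = 2 ** n
psi = np.full(N, 1.0 / math.sqrt(N))             # uniform superposition

iterations = round(math.pi / 4 * math.sqrt(N))   # ~ (pi/4) * sqrt(N) calls
for _ in range(iterations):
    psi[marked] *= -1.0                          # oracle: phase-flip the marked item
    psi = 2.0 * psi.mean() - psi                 # diffusion: inversion about the mean

print(f"{iterations} Grover iterations on N={N}:",
      f"P(marked) = {abs(psi[marked])**2:.3f}")  # ~0.96, vs 1/16 initially
```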
... To execute 500 iterations in the first case, this would take 2000 qubits. This is unavoidable in order to maintain reversibility [11], and various tricks in reversible compilation [12] use techniques like repeat-until-success, dirty ancillae, etc. for space-time trade-offs. Since the computation history is not useful except to maintain reversibility, this cost can be circumvented by tracing the history out and restarting the computation after a few steps. ...
Preprint
Full-text available
In this research we present a quantum circuit for estimating algorithmic complexity using the coding theorem method. This accelerates inferring algorithmic structure in data for discovering causal generative models. The computation model is restricted in time and space resources to make it computable in approximating the target metrics. The quantum circuit design, based on our earlier work, that allows executing a superposition of automata is presented. As a use case, an application framework for a protein-protein interaction ontology based on algorithmic complexity is proposed. Using small-scale quantum computers, this has the potential to enhance the results of the classical block decomposition method towards bridging the causal gap in entropy-based methods.
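For context (standard definitions underlying the coding theorem method, not results of this preprint), algorithmic probability and Kolmogorov complexity are related by Levin's coding theorem:

\[
m(x) \;=\; \sum_{p \,:\, U(p)=x} 2^{-|p|},
\qquad
K(x) \;=\; -\log_2 m(x) + O(1),
\]

where $U$ is a prefix-free universal Turing machine; $K(x)$ can thus be estimated by counting how often small machines halt with output $x$, which is what the restriction to bounded time and space resources makes computable in practice.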
... TMs also play important roles in many facets of modern physics. For instance, TMs are used to formalize the difference between easy and hard computational problems in quantum computing [51][52][53][54][55]. There has also been some speculative, broader-ranging work on whether the foundations of physics may be restricted by some of the properties of TMs [56,57]. ...
Article
Full-text available
Turing machines (TMs) are the canonical model of computation in computer science and physics. We combine techniques from algorithmic information theory and stochastic thermodynamics to analyze the thermodynamic costs of TMs. We consider two different ways of realizing a given TM with a physical process. The first realization is designed to be thermodynamically reversible when fed with random input bits. The second realization is designed to generate less heat, up to an additive constant, than any realization that is computable (i.e., consistent with the physical Church-Turing thesis). We consider three different thermodynamic costs: The heat generated when the TM is run on each input (which we refer to as the “heat function”), the minimum heat generated when a TM is run with an input that results in some desired output (which we refer to as the “thermodynamic complexity” of the output, in analogy to the Kolmogorov complexity), and the expected heat on the input distribution that minimizes entropy production. For universal TMs, we show for both realizations that the thermodynamic complexity of any desired output is bounded by a constant (unlike the conventional Kolmogorov complexity), while the expected amount of generated heat is infinite. We also show that any computable realization faces a fundamental trade-off among heat generation, the Kolmogorov complexity of its heat function, and the Kolmogorov complexity of its input-output map. We demonstrate this trade-off by analyzing the thermodynamics of erasing a long string.
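For reference (a standard definition presupposed by the abstract's analogy, not a result of the paper), the Kolmogorov complexity of an output $y$ relative to a universal TM $U$ is

\[
K_U(y) \;=\; \min\{\, |p| \;:\; U(p) = y \,\},
\]

the length of the shortest program producing $y$; the paper's "thermodynamic complexity" instead minimizes the heat generated over all inputs that yield $y$.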
Chapter
In this article we present milestone developments in the theory and application of quantum information from a historical perspective. The domain of quantum information is very promising for developing quantum computers, quantum communication, and a variety of other applications of quantum technologies. We also shed light on experimental manifestations of major theoretical developments. In addition, we present important no-go theorems frequently used in quantum information, along with the ideas of their respective mathematical proofs.
Article
This review summarizes the requirement of low-temperature conditions in existing experimental approaches to quantum computation and quantum simulation.
Article
Full-text available
In this paper a microscopic quantum mechanical model of computers as represented by Turing machines is constructed. It is shown that for each number $N$ and Turing machine $Q$ there exists a Hamiltonian $H_Q^N$ and a class of appropriate initial states such that if $\Psi_Q^N(0)$ is such an initial state, then $\Psi_Q^N(t) = \exp(-iH_Q^N t)\,\Psi_Q^N(0)$ correctly describes at times $t_3, t_6, \ldots, t_{3N}$ model states that correspond to the completion of the first, second, ..., $N$th computation step of $Q$. The model parameters can be adjusted so that for an arbitrary time interval around $t_3, t_6, \ldots, t_{3N}$, the machine part of $\Psi_Q^N(t)$ is stationary.
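As a toy illustration (a minimal sketch in the spirit of this construction, not the paper's actual model), a tridiagonal "clock" Hamiltonian drives a state through a chain of configuration states under $\exp(-iHt)$; the paper's parameter tuning, which keeps the machine part stationary near the step times $t_3, t_6, \ldots$, is omitted here:

```python
import numpy as np
from scipy.linalg import expm

# Toy "clock" Hamiltonian: basis states |k> label successive machine
# configurations, and H couples |k> to |k+1> so that exp(-iHt) moves the
# state through the computation. Unlike the paper's tuned model, this bare
# chain lets the wave packet spread; it only shows the driving mechanism.
N = 4                                    # number of computation steps (toy value)
H = np.zeros((N + 1, N + 1))
for k in range(N):
    H[k, k + 1] = H[k + 1, k] = 1.0      # hop between adjacent configurations

psi0 = np.zeros(N + 1, dtype=complex)
psi0[0] = 1.0                            # start in configuration |0>
for t in np.linspace(0.0, np.pi, 5):
    psi_t = expm(-1j * H * t) @ psi0
    print(f"t = {t:4.2f}:", np.round(np.abs(psi_t) ** 2, 3))
```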
Article
Here the results of other work on quantum mechanical Hamiltonian models of Turing machines are extended to include any discrete process $T$ on a countably infinite set $A$. The models are constructed here by use of scattering phase shifts from successive scatterers to turn on successive step interactions. Also a locality requirement is imposed. The construction is done by first associating with each process $T$ a model quantum system $M$ with associated Hilbert space $H_M$ and step operator $U_T$. Since $U_T$ is not unitary in general, $M$, $H_M$, and $U_T$ are extended into a (continuous-time) Hamiltonian model on a larger space which satisfies the locality requirement. The construction is compared with the minimal unitary dilation of $U_T$. It is seen that the model constructed here is larger than the minimal one. However, the minimal one does not satisfy the locality requirement.
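For reference (a standard operator-theory construction, not the paper's specific model), the simplest one-step unitary dilation of a contraction $U_T$ acts on a doubled space:

\[
W \;=\;
\begin{pmatrix}
U_T & \left(I - U_T U_T^{\dagger}\right)^{1/2} \\[2pt]
\left(I - U_T^{\dagger} U_T\right)^{1/2} & -U_T^{\dagger}
\end{pmatrix},
\]

which is unitary whenever $\|U_T\| \le 1$ and reproduces $U_T$ on the original component. The minimal unitary dilation iterates this idea over all time steps; as the abstract notes, the minimal dilation need not satisfy the locality requirement.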
Article
It is shown that the laws of physics impose no fundamental bound on the rate at which information can be processed. Recent claims that quantum effects impose such bounds are discussed and shown to be erroneous.
Article
From thermodynamic and causality considerations a general upper bound on the rate at which information can be transferred in terms of the message energy is inferred. This bound is consistent with Shannon's bounds for a band-limited channel. It prescribes the minimum energy cost for information transferred over a given time interval. As an application, a fundamental upper bound of $10^{15}$ operations/sec on the speed of an ideal digital computer is established.
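As a hedged note (this is the commonly cited form of such bounds; the abstract itself does not display the formula), rate bounds of this type tie the peak information flow to the message energy $E$:

\[
\dot{I}_{\max} \;\lesssim\; \frac{\pi E}{\hbar \ln 2} \ \text{bits/s},
\]

equivalently, each bit delivered within a time interval $\Delta t$ carries a minimum energy cost of order $\hbar \ln 2 / (\pi\,\Delta t)$.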
Article
The effect of the quantum nature of matter on the maximum information-processing potentialities is considered. It is shown that the degeneracy of the energy levels of a physical information-processing system results in the fact that a universal limit on information-processing rates does not exist, though for any specific system this rate is indeed bounded. A physical interpretation is then proposed for an elementary act of information processing, and the concept of information-processing depth is introduced. The example of a system of quantum oscillators is used to show that the maximal information-processing depth is bounded, only a very small fraction of the possible system states being used. The effect of thermal noise on information processing is briefly discussed.
Article
Reversible computation is briefly reviewed, utilizing a refined version of the Bennett-Fredkin-Turing machine invoked in an earlier paper. A dissipationless classical version of this machine, which has no internal friction, and where the computational velocity is determined by the initial kinetic energy, is also described. Such a machine requires perfect parts and also requires the unrealistic assumption that the many extraneous degrees of freedom, which contribute to the physical structure, do not couple to the information-bearing degrees of freedom and thus cause no friction. Quantum mechanical computation is discussed at two levels. First of all we deplore the assertion, repeatedly found in the literature, that the uncertainty principle, $Et \ge h$, with $t$ equated to a switching time, yields any information about energy dissipation. Similarly we point out that computation is not an iterated transmission and receiving process, and that considerations which avoid the uncertainty principle and instead use quantum mechanical channel capacity considerations are equally unfounded. At a more constructive level we ask whether there is a quantum mechanical version of the dissipationless computer. Benioff has proposed one possible answer. Quantum mechanical versions of dissipationless computers may suffer from the problems found in electron transport in disordered one-dimensional periodic potentials. The buildup of internal reflections may give a transmission coefficient, through the whole computation, which decreases exponentially with the length of the computation.
Article
Any computer M is subject to such laws as irreversibility, time-energy uncertainty, and the maximality of the speed of light. This imposes fundamental limitations on the performance of M and, more generally, on the power of algorithmic methods for several important logic operations; this also has an impact on the problem of what is knowable in mathematics.
Article
Fundamental limitations on the energy dissipated during one elementary logical operation are discussed. A model of a real physical device (the parametric quantron), based on the Josephson effect in superconductors, is used throughout the discussion. This device is shown to be physically reversible, and moreover it can serve as the elementary cell of a logically reversible computer, both properties being necessary to achieve the fundamental limits of energy dissipation. These limits, due to classical and quantum statistics, are shown to lie well below the earlier estimates, $k_B T$ and $\hbar/\tau$, respectively.