VOLUME 78, NUMBER 6 PHYSICAL REVIEW LETTERS 10 FEBRUARY 1997
Chemical Kinetics is Turing Universal
Marcelo O. Magnasco
Center for Studies in Physics and Biology, The Rockefeller University, 1230 York Avenue, New York, New York 10021
We show that digital logic can be implemented in the chemical kinetics of homogeneous solutions:
We explicitly construct logic gates and show that arbitrarily large circuits can be made from them. This
proves that a subset of the constructions available to life has universal (Turing) computational power.
[S0031-9007(97)02332-6]
PACS numbers: 87.10.+e, 89.80.+h, 82.20.Mj
Interest in chemical computation has followed four different paths. It is one of the natural extensions of discussions about information and thermodynamics, which go back to Maxwell demon arguments and Szilard’s work [1–5]. It is also a rather natural extension to the application of dynamical systems theory to chemical reactions [6–8], in particular logic networks stemming from bistable reaction systems [9]. A lot of effort has been devoted to trying to devise nonstandard computational architectures, and chemical implementations provide a distinct enough backdrop to silicon [10–12]. Finally, in recent years biology has presented us with what looks to be actual chemical computers: the enzymatic cascades of cell signaling [13–15].
One of the first questions that can be asked in this subject is whether universal (Turing) computation can be achieved within some theoretical model of chemistry; the most immediate one is standard chemical kinetics. This question has been recently studied in some detail [16–22], and even subject to experimental tests [23]. In [18–20], Hjelmfelt et al. argued quite convincingly that building blocks for universal computation indeed can be constructed within ideal chemical kinetics, and that they could be interconnected to achieve computation. However, many difficulties still lie in the way. An issue not addressed by Hjelmfelt et al. is structural stability: the tolerance of a system to changes in parameters and functional structure. In particular, “gluing” together two groups of chemical reactions will have appreciable effects on the kinetics of both groups; the basic unit and the couplings used in [18–20] require case-by-case adjustment of individual parameters for proper functioning.
The purpose of this Letter is to provide a slightly
more formal proof that chemical kinetics can be used
to construct universal computers. I will concentrate on
the “next” level of difﬁculty, which is that of the global
behavior of a fully coupled system and its structural
stability. I will do it through the simplest approach: I will
show that classical digital electronics can be implemented
through chemical reactions. Since my key problem in
this scheme is showing global consistency, and the proof
requires arbitrarily large circuits, I will have to show that
the output of one gate can be plugged into the input of
others for arbitrarily many layers, without degrading the
logic, keeping at all times full coupling.
We will need a power supply. I will deﬁne mine to
consist of two chemical species called high and low;
their concentrations will be kept clamped strongly out
of equilibrium, so an external reservoir is assumed.
This approximates the power supply in cells, the two
compounds ATP and ADP; the cellular “power plants”
keep their concentration as constant as feasible, nearly
6 decades away from equilibrium. Thermodynamics
requires the logarithm of the equilibrium constants to
lie in the (left) span of the stoichiometry matrix; it is
important that all reactions we use satisfy this constraint,
so that there are no “hidden” power supplies.
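The “no hidden power supplies” constraint can be checked mechanically. As a hypothetical sketch (the equilibrium constants below are illustrative values, not taken from this Letter), note that for any closed reaction cycle the constraint reduces to Wegscheider’s condition: the equilibrium constants around the cycle must multiply to one, since traversing the cycle returns every species to its starting state.

```python
import math

# Hypothetical equilibrium constants (illustrative, assumed values) for a
# closed cycle built from the repeater reactions with the supply removed:
#   b -> b*,  a + b* -> ab*,  ab* -> ab,  ab -> a + b
# Thermodynamic consistency (no hidden power supply) requires the product
# of equilibrium constants around any closed cycle to equal 1.
K_cycle = {
    "b -> b*":       0.25,
    "a + b* -> ab*": 2.0,
    "ab* -> ab":     5.0,
    "ab -> a + b":   0.4,
}

product = math.prod(K_cycle.values())
consistent = math.isclose(product, 1.0)
```

In matrix language, a cycle is a combination of reactions with zero net change in every species, i.e., a left null vector of the stoichiometry matrix; the condition says ln K has zero overlap with every such vector, so clamping high and low is then the only free-energy source.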
The very first thing we need to consider is the trivial gate, the signal repeater, which copies input onto output. Any problems we encounter with it will recur for any other gate. Let’s say a chemical species a is the input and b the output. We will need b to exist in two chemically distinct forms, b and b* [24]. If b* is a compound of higher energy than b, we can couple its production to the power supply, as in b + high ⇌ b* + low; in the absence of other reactions, [b] goes to a small value determined by the rate of spontaneous decay in b* ⇌ b. This is then a sort of “capacitor,” which we charge with the power supply. If then the reaction b* ⇌ b is catalyzed by a,

a + b* ⇌ ab* ⇌ ab ⇌ a + b,  (1)

then a “shorts” the capacitor and discharges it, increasing the concentration of b. Hence when [a] is low, [b] is low, and when [a] is high, [b] becomes high, and the transitions have certain rise and decay times determined by the precise rates we use.
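The behavior of such a repeater chain can be sketched with a minimal mass-action simulation. The rate constants below are illustrative assumptions (not the values used for the figures), and the catalyzed discharge of Eq. (1) is lumped into a single step:

```python
# Minimal mass-action sketch of a chain of Eq. (1) repeaters.
# All rate constants are assumed, illustrative values.
K_PUMP = 1.0    # b -> b*, driven by the clamped high/low supply
K_LEAK = 0.01   # spontaneous decay b* -> b
K_CAT  = 10.0   # a-catalyzed discharge, lumped as a + b* -> a + b
B_TOT  = 1.0    # conserved total [b] + [b*] per stage

def step(a, b, dt):
    """One Euler step for the output [b] of a stage driven by input [a]."""
    bstar = B_TOT - b
    dbdt = (K_LEAK + K_CAT * a) * bstar - K_PUMP * b
    return b + dt * dbdt

# Drive a four-stage chain a -> b -> c -> d -> e with a square wave on [a].
signals = [0.0] * 4
dt = 0.001
for i in range(40000):
    t = i * dt
    drive = 1.0 if (t % 20.0) < 10.0 else 0.0   # square-wave input [a]
    for j in range(len(signals)):
        signals[j] = step(drive, signals[j], dt)
        drive = signals[j]
# With these rates the "low" level creeps upward stage by stage, so the
# high/low gap shrinks down the chain, as in Fig. 1.
```

Running this, the final (input-low) levels grow from about 0.01 at the first stage to above 0.5 by the fourth: the logic degrades exactly as described below for Fig. 1.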
In Fig. 1 we see the output of simulating a chain of several such gates with a → b → c → d.... The gates are all identical; the only change between them is the name of the compound. The wave forms are dying as we go down this chain: The difference between the “high” and the “low” levels is becoming smaller and smaller. So this network is not a suitable signal repeater. Figure 2 shows the output of a similar simulation using the reactions

2a + b* ⇌ a₂b* ⇌ a₂b ⇌ 2a + b  (2)
0031-9007/97/78(6)/1190(4)$10.00 © 1997 The American Physical Society
FIG. 1. A cascade of identical signal repeaters a → b → c → ..., using Eq. (1). The input [a] is a square wave. Top (small) panels show each signal individually with varying scales; the bottom (large) panel shows all signals simultaneously on the same scale. The amplitude of the signal gets reduced very rapidly.
(i.e., double stoichiometry on the input). We can see that the amplitude of the pulses gets stabilized; both high and low now approach amply separated levels [25]. I will now prove that higher stoichiometry is essential.
All concentrations become stationary after some transients. If we plot these steady levels as a function of the inputs, we get the classical plots shown in Fig. 3. These diagrams represent the concentration of b as a function of a, but also of c as a function of b, and so on. If we call x_n the nth compound in the chain, then the diagram shows x_{n+1} as a function of x_n; n here labels position on the chain. This is a recurrence relation, also called a map.

This type of map is usually studied in the theory of dynamical systems, where it represents some dynamical law, and n labels time. A large part of dynamical systems theory is devoted to the asymptotic states, i.e., what happens at arbitrarily long times. In our case this translates to “arbitrarily deep into the circuit,” which is what we want to study. Dynamical systems theory tells us that the only asymptotic states of maps which are monotonically increasing and bounded (our case) are fixed points. The fixed points of a map occur when x_{n+1} = x_n, i.e., when the curve intersects the diagonal line. They can be stable or unstable; stable (unstable) means that if some x_n is near the fixed point, then, for m > n, the x_m are nearer to (farther away from) the fixed point; this happens when
FIG. 2. A cascade of signal repeaters with double stoichiometry [Eq. (2)]. Same conventions as Fig. 1. The amplitude of the signal converges to a steady value.
the curve is shallower (steeper) than the diagonal at the
intersection.
In the case of stoichiometry one (S = 1) there are at most two fixed points, and only one can be stable [26]. For S > 1 there can be three fixed points, the two outer ones being stable, the middle one unstable. We can propagate logic arbitrarily deep into the chain
FIG. 3. The steady-state concentration of the outputs of two signal repeaters, S = 1 [Eq. (1)] and S = 2 [Eq. (2)], as a function of the steady-state level of the input a. The diagonal line is [a] as a function of itself; the intersections of the two curves with this diagonal are the fixed points.
if and only if we have at least two distinct stable fixed points, with each one corresponding to a distinct logical state. But two stable fixed points are possible only for S > 1.
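The fixed-point argument can be checked numerically by iterating the steady-state transfer curve of a repeater stage down a chain. In the sketch below (all rate constants are assumed, illustrative values), with S = 1 both logic levels collapse onto the single stable fixed point, while with S = 2 they converge to two well-separated stable fixed points:

```python
# Steady-state transfer map of one repeater stage: output level set by the
# balance between the supply-driven pump (b -> b*) and the input-catalyzed
# discharge, whose rate scales as [a]^S.  Rates are assumed values.
K_PUMP, K_LEAK, K_CAT = 1.0, 0.01, 10.0

def transfer(x, S):
    discharge = K_LEAK + K_CAT * x**S
    return discharge / (K_PUMP + discharge)   # normalized output in [0, 1]

def propagate(x0, S, stages=50):
    """Logic level after `stages` identical repeaters."""
    x = x0
    for _ in range(stages):
        x = transfer(x, S)
    return x

hi1, lo1 = propagate(0.95, 1), propagate(0.05, 1)   # S = 1: levels merge
hi2, lo2 = propagate(0.95, 2), propagate(0.05, 2)   # S = 2: levels persist
```

With these rates the S = 1 map has a single stable intersection with the diagonal, so hi1 and lo1 become indistinguishable deep in the chain, whereas the S = 2 map is bistable and keeps the two logic levels separated by nearly the full range.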
Now the main conceptual problems have been solved.
The only remaining point is to construct explicitly a few
different gates (Fig. 4); if all of the gates are “built”
(i.e., the rates so chosen) so that their response is one
of our ﬁxed points when the inputs are at the ﬁxed points
then they will be globally compatible. Strictly speaking,
one needs only NAND, since all logical functions can be
constructed from it, but since each internal wire in the
circuit is a chemically distinct compound, it is desirable
to implement gates directly [27]. A precise deﬁnition of
the gates can be found elsewhere [28].
Adding is a problem that exemplifies rather nicely the spirit of this work, because when we add, we have to shift the “carry” digits to the next column. These can accumulate to generate a cascade, so we need to be able to propagate logic across an entire network. In order to add two three-bit numbers (giving a four-bit number as output), full adders are chained; the three-bit adder is shown in Fig. 5; it can add up to 7 + 7.
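The logical structure being realized chemically is an ordinary ripple-carry adder. A plain Boolean sketch (gate internals abstracted away) reproduces the carry propagation that the chemical network has to survive:

```python
# Boolean sketch of the three-bit ripple-carry adder; each wire in the
# chemical network corresponds to one Boolean value here.
def full_adder(a, b, cin):
    """One column: sum bit and carry-out from two input bits and carry-in."""
    s = a ^ b ^ cin
    cout = (a & b) | (cin & (a ^ b))
    return s, cout

def add3(a, b):
    """Add two three-bit numbers, producing a four-bit result."""
    bits_a = [(a >> i) & 1 for i in range(3)]
    bits_b = [(b >> i) & 1 for i in range(3)]
    carry, out = 0, 0
    for i in range(3):           # carries ripple column to column
        s, carry = full_adder(bits_a[i], bits_b[i], carry)
        out |= s << i
    return out | (carry << 3)
```

The five input columns shown in Fig. 5 are reproduced by this sketch, including the worst case 7 + 7 = 14 where the carry ripples through every column.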
Ephemeral memory can be implemented rather directly, but if the memory is supposed to be long-term, care must be exercised. A flip-flop can be made by having a compound in two states (c, c*), and then two inputs (a, b) which catalyze conversion to the other state by coupling to the power supply:
FIG. 4. The output of one implementation of the four classical gates. a and b are the inputs. While there are artifacts, the logic levels are still well separated. AND, OR, and NAND are implemented directly; XOR is implemented as AND(NAND, OR).
a + c* + high ⇌ ··· ⇌ a + c + low  (3)

and similarly for b sending c → c*. The lifetime of this memory would appear to be the lifetime of the uncatalyzed reaction c* ⇌ c. However, such a mechanism is not resistant to fluctuations in the inputs; even a minute amount of catalyst can reduce the lifetime dramatically. In order to make memory stable, we need to make the system prefer to be either all c or all c*. There are many ways to do this; for instance,

2c + c* + high ⇌ ··· ⇌ 3c + low  (4)
and vice versa [29]. The addition of these two self-
catalytic reactions makes the memory strongly robust (see
Fig. 6) and, in principle, inﬁnitely long lived even in the
presence of input ﬂuctuations; however, energy is drawn
from the power supply to “refresh” the ﬂip-ﬂop. There is
some resemblance to dynamic vs static RAM, and to the
self-phosphorylating enzyme CaMKII [30], which might
be implicated in long-term memory in neurons.
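The effect of the autocatalytic stabilization of Eq. (4) can be illustrated with a one-variable sketch. All rates below are assumed, illustrative values: let x = [c] with [c] + [c*] = 1, expose both flip-flops to a small spurious input b, and integrate.

```python
# One-variable sketch of the flip-flop with and without the autocatalytic
# refresh of Eq. (4).  Rate constants are assumed, illustrative values.
K_IN = 1.0      # input-catalyzed switching of Eq. (3), lumped to one step

def drift(x, a, b, k_auto):
    """dx/dt for x = [c], with [c*] = 1 - x (mass-action sketch)."""
    xs = 1.0 - x
    return (k_auto * x * x * xs      # 2c + c* + high -> 3c + low
            - k_auto * xs * xs * x   # 2c* + c + high -> 3c* + low
            + K_IN * a * xs          # input a: c* -> c
            - K_IN * b * x)          # input b: c -> c*

def relax(x0, a, b, k_auto, t_end=1000.0, dt=0.01):
    x = x0
    for _ in range(int(t_end / dt)):   # forward Euler integration
        x += dt * drift(x, a, b, k_auto)
    return x

# Both memories start in state c and see a tiny spurious input b = 0.01.
x_static  = relax(1.0, a=0.0, b=0.01, k_auto=0.0)   # no refresh: forgets
x_dynamic = relax(1.0, a=0.0, b=0.01, k_auto=5.0)   # refreshed: remembers
```

Without the autocatalytic terms the state decays exponentially under the spurious input; with them the system sits at a stable fixed point near x = 1, mirroring the behavior of the two flip-flops in Fig. 6.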
I have shown one particular explicit implementation of
digital logic in chemical kinetics, and thus shown universal
computation capabilities. However, many questions still
remain open (which I will comment upon in some greater
detail elsewhere [31]): What is the interplay between in-
formation transfer and thermodynamics? Since no catalyst
FIG. 5. Numerical simulation of the three-bit adder: c = a + b. The lower traces are the three bits of input a and the three bits of input b; the four upper traces are the four bits of output c. The transients as the inputs are changed show the delays in propagating carries. The five columns of different inputs show: 0 + 0 = 0, 7 + 7 = 14, 2 + 2 = 4, 6 + 3 = 9, and 7 + 1 = 8. The network has about 140 compounds in 290 reactions.
FIG. 6. Two flip-flops; sta is a “static” flip-flop [Eq. (3)], and dyn a “dynamic” one with autocatalytic stabilization [Eq. (4)]. Both can switch between states fast as the inputs a and b are pulsed. At time 100 both inputs are set to 0.1; sta forgets its state, while dyn does not.
is perfectly selective for its substrate, how robust are
computations under the massive cross talk of random
“unintended” reactions? Are there equivalents of the gain-bandwidth and other classic theorems of electronics?
And, presumably, many more.
I would like to thank A. Ajdari, G. Cecchi, D. Chate-
nay, J.-P. Eckmann, A. Libchaber, G. Stolovitzky, and
D. Thaler for many stimulating discussions. Completed
in part at U. of Buenos Aires and Asoc. de Fisica Ar-
gentina; I want to thank G. Mindlin, S. Ponce, and J.P.
Paz for their hospitality. Supported in part by the Mathers
Foundation.
[1] L. Szilard, Z. Phys. 53, 840–856 (1929).
[2] H. S. Leff and A.F. Rex, Maxwell’s Demon: Information,
Entropy, Computing (Princeton University Press, Prince-
ton, New Jersey, 1990).
[3] R. Landauer, IBM J. Res. Dev. 5, 183–191 (1961).
[4] C. H. Bennett, Int. J. Theor. Phys. 21, 905–940 (1982).
[5] C. H. Bennett and R. Landauer, Sci. Am. 253, No. 7, 38
(1985).
[6] G. Gavalas, Nonlinear Differential Equations of Chemi-
cally Reacting Systems (Springer-Verlag, Berlin, 1968).
[7] G. Oster, A. S. Perelson, and A. Katchalsky, Q. Rev.
Biophys. 6, 1 (1973).
[8] Nonlinear Phenomena in Chemical Kinetics, edited by
C. Vidal and A. Pacault (Springer-Verlag, Berlin, 1981).
[9] O. E. Rössler, in Physics and Mathematics of the Nervous System, edited by M. Conrad, W. Güttinger, and M. Dal Cin, Springer-Verlag Lecture Notes in Biomathematics Vol. 4 (Springer-Verlag, Berlin, 1974), pp. 399–418, 546–582.
[10] T. Head, Bull. Math. Biol. 49, 737 (1987).
[11] R. E. Siatkowsky and F.L. Carter, in Molecular Electronic
Devices, edited by F.L. Carter, R. E. Siatkowsky, and
H. Wohltjen (North-Holland, Amsterdam, 1988).
[12] L. Adleman, Science 266, 1021 (1994).
[13] B. Alberts, D. Bray, J. Lewis, M. Raff, K. Roberts, and
J.D. Watson, The Molecular Biology of the Cell (Garland,
New York, 1994), 3rd ed.
[14] T. Pawson, Nature (London) 373, 573 (1995).
[15] D. Bray, Nature (London) 376, 307 (1995).
[16] M. Okamoto, T. Sakay, and K. Hayashi, Biosystems 21, 1 (1987).
[17] M. Okamoto, T. Sakay, and K. Hayashi, Biol. Cybern. 58,
295 (1988).
[18] A. Hjelmfelt, E. D. Weinberger, and J. Ross, Proc. Natl.
Acad. Sci. USA 88, 10 983 (1991).
[19] A. Hjelmfelt, E. D. Weinberger, and J. Ross, Proc. Natl.
Acad. Sci. USA 89, 383 (1992).
[20] A. Hjelmfelt and J. Ross, Proc. Natl. Acad. Sci. USA 89,
388 (1992).
[21] A. Hjelmfelt, F. W. Schneider, and J. Ross, Science 260,
335 (1993).
[22] A. Arkin and J. Ross, Biophys. J. 67, 560 (1994).
[23] J.-P. Laplante, M. Payer, A. Hjelmfelt, and J. Ross,
J. Phys. Chem. 99, 10063 (1995).
[24] Compounds which exist in two chemically distinct states can be implemented as proteins which can be phosphorylated, or as compounds which can be localized in two different places; for instance, Ca²⁺ in the cytosol is “chemically distinct” from Ca²⁺ in the ER, with channels and pumps playing the role of kinases and phosphatases.
[25] These repeaters show the equivalent of input and output impedance/capacitances. If [a] is changed abruptly, it bounces, due to capture and release from the bound complexes ab and ab*. Convergence fails on the last steps of the cascade when the last element is not “terminated.”
[26] For S = 1 the curve is convex. A convex curve can be intersected by a straight line at most twice; at most one intersection can be stable.
[27] Actual kinases from enzymatic pathways can have more
than one phosphorylation site and do logic directly on the
protein; so biological cascades can be more compact than
the networks shown here.
[28] Online at http://tlon.rockefeller.edu/
[29] This can be done with only one autocatalytic reaction, plus a nonspecific decay like c* → c (a self-phosphorylating kinase and a phosphatase); in the absence of other interactions, either S > 1 or cooperativity is required.
[30] H. Schulman, Curr. Opin. Cell Biol. 5, 247–253 (1993).
[31] A full version of this paper, including a careful description
of the open problems, will appear elsewhere.
... For relatively small molecules in a well-mixed solution, the well-studied Chemical Reaction Network (CRN) model is a natural way to describe them. Known examples of computation with CRNs include useful small devices such as the approximate majority CRN [3] [17] and the rock-paper-scissors oscillator [32] [20] [49], boolean circuits [38] and neural networks [25], as well as more general results, including deterministic computation of arbitrary semilinear functions [2] [15] [21] and simulation of Turing machines with arbitrarily small error probability [47]. (For those not interested in computation per se, Turing universality may be taken as an assurance that these systems are capable of a wide class of complex behaviors.) ...
... The reachability problem is in an informal sense the CRN equivalent of the Turing machine halting problem; but since the Turing machine halting problem is undecidable, any CRN trying to simulate a Turing machine must have some reachable state that involves an error. Thus those CRNs that try to simulate Turing machines can either do so deterministically in a non-uniform sense, where a single CRN can simulate a Turing machine with a given bound on its tape size, and a larger CRN must be created to simulate a larger Turing machine tape [38] [28]; or do so uniformly but with some probability of error, and due to the counting argument above, using species counts exponential in the space used by the Turing machine [47]. Building on Bennett's insights relating polymer biochemistry and Turing machines [7], formal polymer systems such as Computational Nucleic Acids [31], the Biochemical Ground Form [13], DNA stack machines [42,33], DNA Turing machines [55], DNA register machines [51], and Surface CRNs [43], can all simulate classical Turing machines with no chance of error and using the same amount of space as the Turing machine. ...
Article
The Chemical Reaction Network model has been proposed as a programming language for molecular programming. Methods to implement arbitrary CRNs using DNA strand displacement circuits have been investigated, as have methods to prove the correctness of those or other implementations. However, the stochastic Chemical Reaction Network model is provably not deterministically Turing-universal, that is, it is impossible to create a stochastic CRN where a given output molecule is produced if and only if an arbitrary Turing machine accepts. A DNA stack machine that can simulate arbitrary Turing machines with minimal slowdown deterministically has been proposed, but it uses unbounded polymers that cannot be modeled as a Chemical Reaction Network. We propose an extended version of a Chemical Reaction Network that models unbounded linear polymers made from a finite number of monomers. This Polymer Reaction Network model covers the DNA stack machine, as well as copy-tolerant Turing machines and some examples from biochemistry. We adapt the bisimulation method of verifying DNA implementations of Chemical Reaction Networks to our model, and use it to prove the correctness of the DNA stack machine implementation. We define a subclass of single-locus Polymer Reaction Networks and show that any member of that class can be bisimulated by a network using only four primitives, suggesting a method of DNA implementation. Finally, we prove that deciding whether an implementation is a bisimulation is Π20-complete, and thus undecidable in the general case, although it is tractable in many special cases of interest. We hope that the ability to model and verify implementations of Polymer Reaction Networks will aid in the rational design of molecular systems.
... Computation of Boolean predicates has been extensively studied both in CRNs and population protocols. Early work on Boolean computation in CRNs is by Magnasco [34]. Signal values are encoded with low and high concentrations of corresponding species. ...
Article
Full-text available
Computing via synthetically engineered bacteria is a vibrant and active field with numerous applications in bio-production, bio-sensing, and medicine. Motivated by the lack of robustness and by resource limitation inside single cells, distributed approaches with communication among bacteria have recently gained in interest. In this paper, we focus on the problem of population growth happening concurrently, and possibly interfering, with the desired bio-computation. Specifically, we present a fast protocol in systems with continuous population growth for the majority consensus problem and prove that it correctly identifies the initial majority among two inputs with high probability if the initial difference is Ω(nlogn)\documentclass[12pt]{minimal} \usepackage{amsmath} \usepackage{wasysym} \usepackage{amsfonts} \usepackage{amssymb} \usepackage{amsbsy} \usepackage{mathrsfs} \usepackage{upgreek} \setlength{\oddsidemargin}{-69pt} \begin{document}$$\varOmega (\sqrt{n\log n})$$\end{document} where n is the total initial population. We also present a fast protocol that correctly computes the Nand of two inputs with high probability. By combining Nand gates with the majority consensus protocol as an amplifier, it is possible to compute arbitrary Boolean functions. Finally, we extend the protocols to several biologically relevant settings. We simulate a plausible implementation of a noisy Nand gate with engineered bacteria. In the context of continuous cultures with a constant outflow and a constant inflow of fresh media, we demonstrate that majority consensus is achieved only if the flow is slower than the maximum growth rate. Simulations suggest that flow increases consensus time over a wide parameter range. The proposed protocols help set the stage for bio-engineered distributed computation that directly addresses continuous stochastic population growth.
... To the present, there have been many attempts to link the concepts of the theory of computation to the description of natural phenomena [6][7][8][9][10][11][12][13][14][15][16][17][18]. Barahona showed that the ground state search in three-dimensional spin glasses and two-dimensional spin glasses in magnetic fields is NP-hard [6]. ...
Preprint
We present a simple model describing the assembly and disassembly of a one-dimensional molecular chain consisting of two types of molecular subunits. We show that it takes a longer time than an exponential of the molecular chain length to synthesize a certain amount of molecular chains for any external operation, while linear time is sufficient to decompose a certain amount of molecular chains. Our findings may facilitate research on more general asymmetry of operational hardness.
... It was the first empirical realization of chemical logical gates. In 1997, Magnasco [46] showed that logic gates can be constructed and executed in the chemical kinetics of homogeneous solutions. It has been proved that such constructions have computational power equivalent to Turing machine. ...
Article
Full-text available
In recent years, the modeling interest has increased significantly from molecular level to atomic and quantum levels. Computational chemistry plays a significant role in designing computational models for the operation and simulation of systems ranging from atoms and molecules to industrial processes. It is influenced by a tremendous increase in computing power and the efficiency of algorithms. The representation of chemical reactions using classical automata theory in thermodynamic terms had a great influence on computer science. The study of chemical information processing with quantum computational models is a natural goal. In this study, we have modeled chemical reactions using two-way quantum finite automata, which are halted in linear time. Additionally, classical pushdown automata can be designed for such chemical reactions with multiple stacks. It has been proven that computational versatility can be increased by combining chemical accept/reject signatures and quantum automata models.
Article
Full-text available
Regulatory processes in biology can be re-conceptualized in terms of logic gates, analogous to those in computer science. Frequently, biological systems need to respond to multiple, sometimes conflicting, inputs to provide the correct output. The language of logic gates can then be used to model complex signal transduction and metabolic processes. Advances in synthetic biology in turn can be used to construct new logic gates, which find a variety of biotechnology applications including in the production of high value chemicals, biosensing, and drug delivery. In this review, we focus on advances in the construction of logic gates that take advantage of biological catalysts, including both protein-based and nucleic acid-based enzymes. These catalyst-based biomolecular logic gates can read a variety of molecular inputs and provide chemical, optical, and electrical outputs, allowing them to interface with other types of biomolecular logic gates or even extend to inorganic systems. Continued advances in molecular modeling and engineering will facilitate the construction of new logic gates, further expanding the utility of biomolecular computing.
Article
We present a simple model describing the assembly and disassembly of heteropolymers consisting of two types of monomers A and B. We prove that no matter how we manipulate the concentrations of A and B, it takes longer than the exponential function of d to synthesize a fixed amount of the desired heteropolymer, where d is the number of A-B connections. We also prove the decomposition time is linear for chain length n. When d is proportional to n, synthesis and destruction have an exponential asymmetry. Our findings may facilitate research on the more general asymmetry of operational hardness.
Preprint
Full-text available
We demonstrate a novel computational architecture based on fluid convection logic gates and heat flux-mediated information flows. Our previous work demonstrated that Boolean logic operations can be performed by thermally-driven convection flows. In this work, we use numerical simulations to demonstrate a different, but universal Boolean logic operation (NOR), performed by simpler convective gates. The gates in the present work do not rely on obstacle flows or periodic boundary conditions, a significant improvement in terms of experimental realizability. Conductive heat transfer links can be used to connect the convective gates, and we demonstrate this with the example of binary half addition. These simulated circuits could be constructed in an experimental setting with modern, 2-dimensional fluidics equipment, such as a thin layer of fluid between acrylic plates. The presented approach thus introduces a new realm of unconventional, thermal fluid-based computation.
Article
Phosphate transfer reactions (Principles of biochemistry, Prentice Hall, Upper Saddle River, 1996) involve the transfer of a phosphate group from a donor molecule to an accepter, which is ubiquitous in biochemistry. Besides natural systems, some synthetic molecular systems such as seesaw gates are also equivalent to (subsets of) phosphate transfer reaction networks. In this paper, we study the computational power of phosphate transfer reaction networks (PTRNs). PTRNs are chemical reaction networks (CRNs) with only phosphate transfer reactions. Previously, it is known (Nat Comput 13:517–534, 2014) that a function can be deterministically computed by a CRN if and only if it is semilinear. However, the computational power of programmable phosphate transfer networks is unknown. In this paper, we present a formal model to describe PTRNs and study the computational power of these networks. We prove that when each molecule can only carry one phosphate group, the output must be the total initial count in a subset S1 minus the total initial count of another subset S2. On the other hand, when every molecule can carry up to three phosphate groups, or two phosphate groups with different functions, PTRNs can “simulate” arbitrary CRNs. Finally, when each molecule can carry up to two functionally identical phosphate groups (or, equivalently, two phosphate groups which must be added/removed in a sequential manner), we prove that the computational power is strictly stronger than PTRNs with at most one phosphate group per molecule.
Article
Life is confronted with computation problems in a variety of domains including animal behavior, single-cell behavior, and embryonic development. Yet we currently do not know of a naturally existing biological system that is capable of universal computation, i.e., Turing-equivalent in scope. Generic finite-dimensional dynamical systems (which encompass most models of neural networks, intracellular signaling cascades, and gene regulatory networks) fall short of universal computation, but are assumed to be capable of explaining cognition and development. I present a class of models that bridge two concepts from distant fields: combinatory logic (or, equivalently, lambda calculus) and RNA molecular biology. A set of basic RNA editing rules can make it possible to compute any computable function with identical algorithmic complexity to that of Turing machines. The models do not assume extraordinarily complex molecular machinery or any processes that radically differ from what we already know to occur in cells. Distinct independent enzymes can mediate each of the rules and RNA molecules solve the problem of parenthesis matching through their secondary structure. In the most plausible of these models all of the editing rules can be implemented with merely cleavage and ligation operations at fixed positions relative to predefined motifs. This demonstrates that universal computation is well within the reach of molecular biology. It is therefore reasonable to assume that life has evolved – or possibly began with – a universal computer that yet remains to be discovered. The variety of seemingly unrelated computational problems across many scales can potentially be solved using the same RNA-based computation system. Experimental validation of this theory may immensely impact our understanding of memory, cognition, development, disease, evolution, and the early stages of life.
Article
How can we rethink ‘rationality’ in the wake of animal and artificial intelligence studies? Can nonhuman systems be rational in any nontrivial sense? In this paper, we propose that all organisms, under certain circumstances, exhibit rationality to a diverse degree and aspect in the sense of the standard picture (SP): Their inferential processes conform to logic and probability rules. We first show that according to Calvo and Friston (J R Soc Interface 14(131):20170096, 2017) and Orlandi (2018), all biological systems must embody a top-down process (active inference) to minimize free energy. Next, based on Maddy’s (Second philosophy, Oxford University Press, Oxford, 2007; The logical must: Wittgenstein on logic, Oxford University Press, Oxford, 2014) analysis, we argue that this inferential process conforms to logic and probability rules; thus, it satisfies the SP, which explains the rudimentary logic and arithmetic (e.g., categorizing and numbering) found among pigeons and mice. We also hold that the mammalian brain is only one among many ways of implementing rationality. Finally, we discuss data from microorganisms to support this view.
Chapter
A finite automaton or, synonymously, a finite state machine is in the simplest case a triple (X,I,λ), whereby X is a finite set of states, I is a finite set of inputs, and λ is the next-state mapping, such that λ : X x I → X (cf. Arbib, 1969). For example, if x1 and x2 are two state variables each possessing two possible states, the whole automaton has four possible states, and λ specifies the transitions between these states in dependence on a given input. If the input is constant, one speaks of an autonomous automaton, otherwise of a nonautonomous automaton .
Article
Current (May 1995) revision of 1992 report. We consider limitations on the performance of computers arising from thermodynamics and the laws of physics. We provide upper bounds on three quantities: sustained information flux, information storage density, and sustained computational speed. All of these upper bounds are "tight" in the sense that they could be approached by plausible-sounding physical systems, and they all arise from a single unified point of view. We also make a conjecture about the rate of inevitable decay of stored information. This conjecture may be thought of as a quantitative extension of the second law of thermodynamics. It leads to a bound on the density of stable information. We carefully elucidate the assumptions behind these bounds. We give a list of 4 open problems at the end. KEYWORDS: thermodynamics, computation, reversible Turing machines, blackbody radiation, decay of information, physics, entropy, information transmission and storage, cooling requirement...
Article
The success of equilibrium thermodynamics in describing static phenomena has inspired many attempts to develop a rigorous thermodynamics of rate processes.
Article
Experiments on pattern recognition are performed with a network of eight open, bistable, mass-coupled chemical reactors. A programming rule is used to determine the network connectivity in order to store sets of stationary patterns of reactors with low or high concentrations. Experiments show that these stored patterns can be recalled from similar initial patterns. To our knowledge, this is the first chemical implementation of a type of neural-network computing device. The experiments on this small network agree with simulations and support the predictions of the performance of large networks.
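The store-and-recall scheme described above can be sketched in code. This is a minimal Hopfield-style stand-in for illustration only: the Hebbian outer-product rule and threshold dynamics below are my assumptions, not necessarily the programming rule used for the chemical reactors in the article.

```python
import numpy as np

def store(patterns):
    """Connectivity matrix from +/-1 patterns via Hebbian outer products."""
    W = sum(np.outer(p, p) for p in patterns).astype(float)
    np.fill_diagonal(W, 0)  # no self-coupling
    return W

def recall(W, state, steps=10):
    """Synchronous threshold updates until a fixed point (or step limit)."""
    for _ in range(steps):
        new = np.sign(W @ state)
        new[new == 0] = 1  # break ties toward the high state
        if np.array_equal(new, state):
            break
        state = new
    return state

# Eight "reactors", each low (-1) or high (+1); store one pattern,
# then recall it from an initial state with one reactor flipped.
pattern = np.array([1, -1, 1, -1, 1, -1, 1, -1])
W = store([pattern])
noisy = pattern.copy()
noisy[0] = -noisy[0]
print(np.array_equal(recall(W, noisy), pattern))  # True: pattern recovered
```

The point of the sketch is the division of labor in the abstract: a one-shot programming rule fixes the connectivity, after which recall is purely relaxation dynamics from the initial pattern.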
Article
Computers may be thought of as engines for transforming free energy into waste heat and mathematical work. Existing electronic computers dissipate energy vastly in excess of the mean thermal energy kT, for purposes such as maintaining volatile storage devices in a bistable condition, synchronizing and standardizing signals, and maximizing switching speed. On the other hand, recent models due to Fredkin and Toffoli show that in principle a computer could compute at finite speed with zero energy dissipation and zero error. In these models, a simple assemblage of simple but idealized mechanical parts (e.g., hard spheres and flat plates) determines a ballistic trajectory isomorphic with the desired computation, a trajectory therefore not foreseen in detail by the builder of the computer. In a classical or semiclassical setting, ballistic models are unrealistic because they require the parts to be assembled with perfect precision and isolated from thermal noise, which would eventually randomize the trajectory and lead to errors. Possibly quantum effects could be exploited to prevent this undesired equipartition of the kinetic energy. Another family of models may be called Brownian computers, because they allow thermal noise to influence the trajectory so strongly that it becomes a random walk through the entire accessible (low-potential-energy) portion of the computer's configuration space. In these computers, a simple assemblage of simple parts determines a low-energy labyrinth isomorphic to the desired computation, through which the system executes its random walk, with a slight drift velocity due to a weak driving force in the direction of forward computation. In return for their greater realism, Brownian models are more dissipative than ballistic ones: the drift velocity is proportional to the driving force, and hence the energy dissipated approaches zero only in the limit of zero speed.
In this regard Brownian models resemble the traditional apparatus of thermodynamic thought experiments, where reversibility is also typically only attainable in the limit of zero speed. The enzymatic apparatus of DNA replication, transcription, and translation appears to be nature's closest approach to a Brownian computer, dissipating 20–100 kT per step. Both the ballistic and Brownian computers require a change in programming style: computations must be rendered logically reversible, so that no machine state has more than one logical predecessor. In a ballistic computer, the merging of two trajectories clearly cannot be brought about by purely conservative forces; in a Brownian computer, any extensive amount of merging of computation paths would cause the Brownian computer to spend most of its time bogged down in extraneous predecessors of states on the intended path, unless an extra driving force of kT ln 2 were applied (and dissipated) at each merge point. The mathematical means of rendering a computation logically reversible (e.g., creation and annihilation of a history file) will be discussed. The old Maxwell's demon problem is discussed in the light of the relation between logical and thermodynamic reversibility: the essential irreversible step, which prevents the demon from breaking the second law, is not the making of a measurement (which in principle can be done reversibly) but rather the logically irreversible act of erasing the record of one measurement to make room for the next. Converse to the rule that logically irreversible operations on data require an entropy increase elsewhere in the computer is the fact that a tape full of zeros, or one containing some computable pseudorandom sequence such as pi, has fuel value and can be made to do useful thermodynamic work as it randomizes itself. A tape containing an algorithmically random sequence lacks this ability.
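The kT ln 2 cost per erased (merged) bit quoted above is easy to put in numbers. The short sketch below evaluates it at room temperature; the choice of T = 300 K is an assumption for illustration, not a figure from the abstract.

```python
import math

# Landauer/Bennett cost of a logically irreversible merge (erasing one bit):
# E = k_B * T * ln 2.
k_B = 1.380649e-23  # Boltzmann constant in J/K (exact, 2019 SI definition)
T = 300.0           # room temperature in kelvin (assumed for illustration)

E_bit = k_B * T * math.log(2)
print(f"{E_bit:.3e} J per erased bit")  # ~2.87e-21 J
```

Against this floor, the 20–100 kT per step quoted for the DNA-processing machinery is within about two orders of magnitude of the reversible limit, which is the sense in which it is "nature's closest approach" to a Brownian computer.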
Article
We investigate the circumstances that make it seemingly possible to construct a perpetual motion machine of the second kind by letting an intelligent being intervene in a thermodynamic system. By making measurements, such beings produce behavior of the system that clearly distinguishes it from a mechanical system left to itself. We show that even a kind of memory, which characterizes a system in which measurements occur, can give rise to a permanent decrease of entropy and would thus lead to a violation of the second law, were it not that the measurements themselves necessarily proceed with a generation of entropy. First, this entropy generation is computed quite generally from the requirement that, in the sense of the second law, it constitutes full compensation [Equation (1)]. Then, for an inanimate device that is nevertheless able (under continual entropy generation) to make measurements, the amount of entropy produced is calculated and found to be exactly as large as is necessary for full compensation: the actual entropy generation in the measurement thus need not be greater than Equation (1) demands.
Article
A new manner of relating formal language theory to the study of informational macromolecules is initiated. A language is associated with each pair of sets where the first set consists of double-stranded DNA molecules and the second set consists of the recombinational behaviors allowed by specified classes of enzymatic activities. The associated language consists of strings of symbols that represent the primary structures of the DNA molecules that may potentially arise from the original set of DNA molecules under the given enzymatic activities. Attention is focused on the potential effect of sets of restriction enzymes and a ligase that allow DNA molecules to be cleaved and reassociated to produce further molecules. The associated languages are analysed by means of a new generative formalism called a splicing system. A significant subclass of these languages, which we call the persistent splicing languages, is shown to coincide with a class of regular languages which have been previously studied in other contexts: the strictly locally testable languages. This study initiates the formal analysis of the generative power of recombinational behaviors in general. The splicing system formalism allows observations to be made concerning the generative power of general recombination and also of sets of enzymatic activities that include general recombination.
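As a rough illustration of the splicing operation underlying such systems, one rule can be modeled as cutting two strings at a shared recognition site and crossing over the pieces. The toy example below is my own (the EcoRI-like site "GAATTC", the cut position, and the sequences are illustrative, not taken from the article):

```python
# Toy splicing operation: cut each string at its first occurrence of a
# recognition site and recombine the left part of one with the right
# part of the other, yielding two recombinant strings.

def splice(x, y, site):
    """Return the two recombinants of x and y, or None if either
    string lacks the recognition site."""
    i, j = x.find(site), y.find(site)
    if i < 0 or j < 0:
        return None
    cut_x = i + len(site) // 2  # cut in the middle of the site
    cut_y = j + len(site) // 2
    return x[:cut_x] + y[cut_y:], y[:cut_y] + x[cut_x:]

a, b = splice("AAGAATTCAA", "TTGAATTCTT", "GAATTC")
print(a, b)  # AAGAATTCTT TTGAATTCAA
```

Iterating such rules over a starting set of strings generates the splicing language the abstract refers to; the persistence result concerns which of these languages turn out to be regular (strictly locally testable).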
Article
It is argued that computing machines inevitably involve devices which perform logical functions that do not have a single-valued inverse. This logical irreversibility is associated with physical irreversibility and requires a minimal heat generation, per machine cycle, typically of the order of kT for each irreversible function. This dissipation serves the purpose of standardizing signals and making them independent of their exact logical history. Two simple, but representative, models of bistable devices are subjected to a more detailed analysis of switching kinetics to yield the relationship between speed and energy dissipation, and to estimate the effects of errors induced by thermal fluctuations.
Article
Many proteins in living cells appear to have as their primary function the transfer and processing of information, rather than the chemical transformation of metabolic intermediates or the building of cellular structures. Such proteins are functionally linked through allosteric or other mechanisms into biochemical 'circuits' that perform a variety of simple computational tasks including amplification, integration and information storage.