Chemical Kinetics is Turing Universal
Marcelo O. Magnasco
Center for Studies in Physics and Biology, The Rockefeller University, 1230 York Avenue, New York, New York 10021
(Received 20 February 1996; revised manuscript received 15 August 1996)
We show that digital logic can be implemented in the chemical kinetics of homogeneous solutions:
We explicitly construct logic gates and show that arbitrarily large circuits can be made from them. This
proves that a subset of the constructions available to life has universal (Turing) computational power.
[S0031-9007(97)02332-6]
PACS numbers: 87.10.+e, 89.80.+h, 82.20.Mj
Interest in chemical computation has followed four different paths. It is one of the natural extensions of discussions about information and thermodynamics, which go back to Maxwell demon arguments and Szilard's work [1–5]. It is also a rather natural extension to the application of dynamical systems theory to chemical reactions [6–8], in particular logic networks stemming from bistable reaction systems [9]. A lot of effort has been devoted to trying to devise nonstandard computational architectures, and chemical implementations provide a distinct enough backdrop to silicon [10–12]. Finally, in recent years biology has presented us with what looks to be actual chemical computers: the enzymatic cascades of cell signaling [13–15].
One of the first questions that can be asked in this subject is whether universal (Turing) computation can be achieved within some theoretical model of chemistry; the most immediate one is standard chemical kinetics. This question has been recently studied in some detail [16–22], and even subject to experimental tests [23]. In [18–20], Hjelmfelt et al. argued quite convincingly that building blocks for universal computation indeed can be constructed within ideal chemical kinetics, and that they could be interconnected to achieve computation. However, many difficulties still lie in the way. An issue not addressed by Hjelmfelt et al. is structural stability: the tolerance of a system to changes in parameters and functional structure. In particular, "gluing" together two groups of chemical reactions will have appreciable effects on the kinetics of both groups; the basic unit and the couplings used in [18–20] require case-by-case adjustment of individual parameters for proper functioning.
The purpose of this Letter is to provide a slightly
more formal proof that chemical kinetics can be used
to construct universal computers. I will concentrate on
the “next” level of difficulty, which is that of the global
behavior of a fully coupled system and its structural
stability. I will do it through the simplest approach: I will
show that classical digital electronics can be implemented
through chemical reactions. Since my key problem in
this scheme is showing global consistency, and the proof
requires arbitrarily large circuits, I will have to show that
the output of one gate can be plugged into the input of
others for arbitrarily many layers, without degrading the
logic, keeping at all times full coupling.
We will need a power supply. I will define mine to
consist of two chemical species called high and low;
their concentrations will be kept clamped strongly out
of equilibrium, so an external reservoir is assumed.
This approximates the power supply in cells, the two
compounds ATP and ADP; the cellular “power plants”
keep their concentration as constant as feasible, nearly
6 decades away from equilibrium. Thermodynamics
requires the logarithm of the equilibrium constants to
lie in the (left) span of the stoichiometry matrix; it is
important that all reactions we use satisfy this constraint,
so that there are no “hidden” power supplies.
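Concretely, the constraint says that ln K must vanish around every closed cycle of reactions; equivalently, ln K = −Nμ° for some assignment of standard chemical potentials μ° (in units of RT), where N is the net stoichiometry matrix. Below is a minimal numerical sketch of this check in Python, applied to the repeater reactions introduced below; the species ordering, the helper name violated_cycles, and all numerical values are illustrative assumptions, not taken from this Letter.

import numpy as np
from scipy.linalg import null_space

# Species columns: a, b, b*, ab*, ab, high, low (b* is the high-energy form).
# One row per reaction; entries are products minus reactants.
N = np.array([
    [ 0, -1,  1,  0,  0, -1,  1],  # b + high <=> b* + low  (charging)
    [ 0,  1, -1,  0,  0,  0,  0],  # b* <=> b               (spontaneous decay)
    [-1,  0, -1,  1,  0,  0,  0],  # a + b* <=> ab*
    [ 0,  0,  0, -1,  1,  0,  0],  # ab* <=> ab
    [ 1,  1,  0,  0, -1,  0,  0],  # ab <=> a + b
])

def violated_cycles(N, lnK, tol=1e-9):
    """Each left null vector c of N (c @ N == 0) is a closed reaction cycle;
    thermodynamic consistency demands c @ lnK == 0 around every one of them.
    Any violation amounts to a hidden power supply smuggled in via the rates."""
    cycles = null_space(N.T).T
    return [c for c in cycles if abs(c @ lnK) > tol]

# Consistent choice: the catalyzed route b* -> b (rows 3+4+5) has the same
# overall equilibrium constant as the spontaneous decay (row 2), since a
# catalyst cannot shift an equilibrium.
lnK = np.array([13.8, 5.0, 1.0, 2.0, 2.0])  # 13.8 ~ ln(10^6): six decades
print(len(violated_cycles(N, lnK)))         # 0 -> no hidden power supply

# Making the catalyzed route more favorable than the spontaneous one
# would quietly inject free energy:
print(len(violated_cycles(N, np.array([13.8, 5.0, 1.0, 2.0, 4.0]))))  # 1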
The very first thing we need to consider is the trivial gate, the signal repeater, which copies input onto output. Any problems we encounter with it will recur for any other gate. Let's say a chemical species a is the input and b the output. We will need b to exist in two chemically distinct forms, b and b* [24]. If b* is a compound of higher energy than b, we can couple its production to the power supply, as in b + high ⇌ b* + low; in the absence of other reactions, [b] goes to a small value determined by the rate of spontaneous decay in b* ⇌ b. This is then a sort of "capacitor," which we charge with the power supply. If then the reaction b* ⇌ b is catalyzed by a,

a + b* ⇌ ab* ⇌ ab ⇌ a + b, (1)

then a "shorts" the capacitor and discharges it, increasing the concentration of b. Hence when [a] is low, [b] is low, and when [a] is high, [b] becomes high, and the transitions have certain rise and decay times determined by the precise rates we use.
In Fig. 1 we see the output of simulating a chain of several such gates with a → b → c → d → ⋯. The
gates are all identical; the only change between them is
the name of the compound. The wave forms are dying
as we go down this chain: The difference between the
“high” and the “low” levels is becoming smaller and
smaller. So this network is not a suitable signal repeater.
FIG. 1. A cascade of identical signal repeaters a → b → c → ⋯, using Eq. (1). The input [a] is a square wave. Top (small) panels show each signal individually with varying scales; bottom (large) panel shows all signals simultaneously on the same scale. The amplitude of the signal gets reduced very rapidly.

Figure 2 shows the output of a similar simulation using the reactions

2a + b* ⇌ a₂b* ⇌ a₂b ⇌ 2a + b (2)
(i.e., double stoichiometry on the input). We can see that
the amplitude of the pulses gets stabilized; both high and
low now approach amply separated levels [25]. I will now
prove that higher stoichiometry is essential.
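First, a numerical illustration in reduced form: each stage can be collapsed to a single rate equation by normalizing its pool to b + b* = 1 and treating the clamped power supply as a one-way pump b → b*. The sketch below integrates a chain of six such stages in Python; all rate values are illustrative assumptions, not those used for the figures.

import numpy as np

# Illustrative rates (assumptions): pump, spontaneous leak, catalyzed route.
K_PUMP, K_LEAK, K_CAT = 1.0, 0.01, 10.0

def simulate_chain(n_stages, S, t_end=400.0, dt=0.01):
    """Chain a -> b -> c -> ... driven by a square wave. Each stage obeys
    db/dt = (K_LEAK + K_CAT * a**S) * (bstar - b) - K_PUMP * b,
    with a the previous stage's output and bstar = 1 - b."""
    steps = int(t_end / dt)
    b = np.zeros(n_stages)
    trace = np.zeros((steps, n_stages))
    for i in range(steps):
        a_in = 0.4 if (i * dt) % 200.0 < 100.0 else 0.0  # square-wave input
        a = np.concatenate(([a_in], b[:-1]))             # each stage drives the next
        conv = K_LEAK + K_CAT * a**S                     # leak + catalyzed discharge
        b += dt * (conv * (1.0 - 2.0 * b) - K_PUMP * b)
        trace[i] = b
    return trace

for S in (1, 2):
    last = simulate_chain(n_stages=6, S=S)[20000:, -1]   # last stage, after settling
    print(f"S={S}: logic swing at stage 6 ~ {last.max() - last.min():.3f}")
# S=1: the swing collapses down the chain (Fig. 1);
# S=2: the two levels stay amply separated (Fig. 2).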
All concentrations become stationary after some transients. If we plot these steady levels as a function of the inputs, we get the classical plots shown in Fig. 3. These diagrams represent the concentration of b as a function of a, but also of c as a function of b, and so on. If we call xₙ the nth compound in the chain, then the diagram shows xₙ₊₁ as a function of xₙ; n here labels position on the chain. This is a recurrence relation, also called a map.
This type of map is usually studied in the theory of dynamical systems, where it represents some dynamical law, and n labels time. A large part of dynamical systems theory is devoted to the asymptotic states, i.e., what happens at arbitrarily long times. In our case this translates to "arbitrarily deep into the circuit," which is what we want to study. Dynamical systems theory tells us that the only asymptotic states of maps which are monotonically increasing and bounded (our case) are steady states. The steady states (also called fixed points) of a map occur when xₙ₊₁ = xₙ, i.e., when the curve intersects the diagonal line. They can be stable or unstable; stable (unstable) means that if some xₙ is near the fixed point, then, for m > n, the xₘ are nearer to (farther away from) the fixed point; this happens when the curve is shallower (steeper) than the diagonal at the intersection.

FIG. 2. A cascade of signal repeaters with double stoichiometry [Eq. (2)]. Same conventions as Fig. 1. The amplitude of the signal converges to a steady value.
In the case of stoichiometry one (S = 1) there are at most two fixed points, and only one can be stable [26]. For S > 1 there can be three fixed points, the two outer ones being stable, the middle one unstable. We can propagate logic arbitrarily deep into the chain if and only if we have at least two distinct stable fixed points, with each one corresponding to a distinct logical state. But two stable fixed points are possible only for S > 1.

FIG. 3. The steady-state concentration of the outputs of two signal repeaters, S = 1 [Eq. (1)] and S = 2 [Eq. (2)], as a function of the steady-state level of the input a. The diagonal line is [a] as a function of itself; the intersections of the two curves with this diagonal are the fixed points.
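The reduced model above makes this explicit: setting db/dt = 0 gives the transfer curve f(x) = K(x)/(K_PUMP + 2K(x)) with K(x) = K_LEAK + K_CAT·x^S, and the chain is the iteration xₙ₊₁ = f(xₙ). The sketch below (same illustrative rates as before) locates the crossings of f with the diagonal and classifies them by the slope criterion.

import numpy as np

K_PUMP, K_LEAK, K_CAT = 1.0, 0.01, 10.0  # same illustrative rates as above

def f(x, S):
    """Steady-state output level of one repeater stage for a constant input x."""
    K = K_LEAK + K_CAT * x**S
    return K / (K_PUMP + 2.0 * K)

def fixed_points(S):
    """Find crossings of f with the diagonal; stable iff |f'(x*)| < 1."""
    grid = np.linspace(0.0, 1.0, 200001)
    g = f(grid, S) - grid
    pts = []
    for i in np.nonzero(np.sign(g[:-1]) != np.sign(g[1:]))[0]:
        x = 0.5 * (grid[i] + grid[i + 1])
        slope = (f(x + 1e-6, S) - f(x - 1e-6, S)) / 2e-6
        pts.append((x, "stable" if abs(slope) < 1.0 else "unstable"))
    return pts

for S in (1, 2):
    print(f"S={S}:", ", ".join(f"x*={x:.3f} ({s})" for x, s in fixed_points(S)))
# S=1: a single stable fixed point -- every signal relaxes to it and logic dies.
# S=2: two stable fixed points flanking an unstable one -- two usable logic levels.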
Now the main conceptual problems have been solved.
The only remaining point is to construct explicitly a few
different gates (Fig. 4); if all of the gates are “built”
(i.e., the rates so chosen) so that their response is one
of our fixed points when the inputs are at the fixed points, then they will be globally compatible. Strictly speaking,
one needs only NAND, since all logical functions can be
constructed from it, but since each internal wire in the
circuit is a chemically distinct compound, it is desirable
to implement gates directly [27]. A precise definition of
the gates can be found elsewhere [28].
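At the purely boolean level, the gate set is easy to sanity check; for instance, the composition used for XOR in Fig. 4 can be verified exhaustively (plain Python, logic only, no chemistry):

# Truth-table check of XOR(a, b) == AND(NAND(a, b), OR(a, b)), as in Fig. 4.
NAND = lambda a, b: not (a and b)
AND = lambda a, b: a and b
OR = lambda a, b: a or b

for a in (False, True):
    for b in (False, True):
        assert AND(NAND(a, b), OR(a, b)) == (a != b)
print("AND(NAND, OR) reproduces the XOR truth table on all four input pairs")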
Adding is a problem that exemplifies rather nicely the
spirit of this work, because when we add, we have to
shift the “carry” digits to the next column. These can
accumulate to generate a cascade, so we need to be able
to propagate logic across an entire network. In order to
add two three-bit numbers (giving a four-bit number as
the output), we need to cascade three full adders. The
three-bit adder is shown in Fig. 5; it can add up to 7 + 7.
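The carry chain itself is ordinary ripple-carry logic; as a reference for what the network of Fig. 5 computes, here is the same composition of three full adders in boolean form (function names are mine; logic only, no kinetics).

# Three cascaded full adders: two 3-bit inputs, one 4-bit output.
def full_adder(a, b, c_in):
    s = (a != b) != c_in                      # XOR of the three inputs
    c_out = (a and b) or (c_in and (a != b))  # majority carry
    return s, c_out

def add3(a, b):
    """Add two 3-bit numbers (0..7), returning a 4-bit result (0..14)."""
    carry, out = False, 0
    for i in range(3):
        s, carry = full_adder(bool(a >> i & 1), bool(b >> i & 1), carry)
        out |= int(s) << i
    return out | (int(carry) << 3)

# The five input columns shown in Fig. 5:
for a, b in [(0, 0), (7, 7), (2, 2), (6, 3), (7, 1)]:
    print(f"{a} + {b} = {add3(a, b)}")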
Ephemeral memory can be implemented rather directly,
but if the memory is supposed to be long-term, care
must be exercised. A flip-flop can be made by having a compound in two states (c, c*), and then two inputs (a, b) which catalyze conversion to the other state by coupling to the power supply:
a + c* + high ⇌ ⋯ ⇌ a + c + low, (3)

and similarly for b sending c → c*. The lifetime of this memory would appear to be the lifetime of the uncatalyzed reaction c ⇌ c*. However, such a mechanism is not resistant to fluctuations in the inputs; even a minute amount of catalyst can reduce the lifetime dramatically.

FIG. 4. The output of one implementation of the four classical gates. a and b are the inputs. While there are artifacts, the logic levels are still well separated. AND, OR, and NAND are implemented directly; XOR is implemented as AND(NAND, OR).
In order to make memory stable, we need to make the system prefer to be either all c or all c*. There are many ways to do this; for instance,

2c + c* + high ⇌ ⋯ ⇌ 3c + low (4)

and vice versa [29]. The addition of these two self-catalytic reactions makes the memory strongly robust (see Fig. 6) and, in principle, infinitely long lived even in the presence of input fluctuations; however, energy is drawn from the power supply to "refresh" the flip-flop. There is some resemblance to dynamic vs static RAM, and to the self-phosphorylating enzyme CaMKII [30], which might be implicated in long-term memory in neurons.
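A one-variable reduction reproduces the comparison of Fig. 6. Writing p for the fraction of the memory compound in state c (so 1 − p is in c*), the inputs of Eq. (3) contribute K_IN·(a(1 − p) − bp), and the self-catalytic pair of Eq. (4) adds a term proportional to p(1 − p)(2p − 1). All rates and the input schedule below are illustrative assumptions.

import numpy as np

K_IN, K_AUTO = 1.0, 4.0  # illustrative rates: input drive, self-catalysis

def run(k_auto, t_end=200.0, dt=0.01):
    """p = fraction of the memory compound in state c (1 - p is in c*).
    a drives c* -> c and b drives c -> c* [Eq. (3)]; the self-catalytic
    pair of Eq. (4) contributes k_auto * p * (1 - p) * (2p - 1)."""
    p, out = 0.0, []
    for i in range(int(t_end / dt)):
        t = i * dt
        a = 1.0 if 10.0 < t < 20.0 else (0.1 if t > 100.0 else 0.0)  # set pulse, then noise
        b = 0.1 if t > 100.0 else 0.0                                # noise on both inputs
        p += dt * (K_IN * (a * (1 - p) - b * p)
                   + k_auto * p * (1 - p) * (2 * p - 1))
        out.append(p)
    return np.array(out)

sta, dyn = run(k_auto=0.0), run(k_auto=K_AUTO)
print(f"static  flip-flop after sustained 0.1 inputs: p = {sta[-1]:.2f}")  # drifts to 0.5
print(f"dynamic flip-flop after sustained 0.1 inputs: p = {dyn[-1]:.2f}")  # holds near 1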
I have shown one particular explicit implementation of
digital logic in chemical kinetics, and thus shown universal
computation capabilities. However, many questions still
remain open (which I will comment upon in some greater
detail elsewhere [31]): What is the interplay between information transfer and thermodynamics? Since no catalyst
FIG. 5. Numerical simulation of the three-bit adder: c = a + b. The lower traces are the three bits of input a and the three bits of input b; the four upper traces are the four bits of output c. The transients as the inputs are changed show the delays in propagating carries. The five columns of different inputs show: 0 + 0 = 0, 7 + 7 = 14, 2 + 2 = 4, 6 + 3 = 9, and 7 + 1 = 8. The network has about 140 compounds in 290 reactions.
FIG. 6. Two flip-flops; sta is a “static” flip-flop [Eq. (3)], and
dyn a “dynamic” one with autocatalytic stabilization [Eq. (4)].
Both can switch between states fast as the inputs a and b are pulsed. At time 100 both inputs are set to 0.1; sta forgets its
state, while dyn does not.
is perfectly selective for its substrate, how robust are
computations under the massive cross talk of random
“unintended” reactions? Are there equivalents of the
gain bandwidth and other classic theorems of electronics?
And, presumably, many more.
I would like to thank A. Ajdari, G. Cecchi, D. Chatenay, J.-P. Eckmann, A. Libchaber, G. Stolovitzky, and
D. Thaler for many stimulating discussions. Completed
in part at U. of Buenos Aires and Asoc. de Fisica Argentina; I want to thank G. Mindlin, S. Ponce, and J.P.
Paz for their hospitality. Supported in part by the Mathers
Foundation.
[1] L. Szilard, Z. Phys. 53, 840–856 (1929).
[2] H. S. Leff and A. F. Rex, Maxwell's Demon: Information, Entropy, Computing (Princeton University Press, Princeton, New Jersey, 1990).
[3] R. Landauer, IBM J. Res. Dev. 5, 183–191 (1961).
[4] C. H. Bennett, Int. J. Theor. Phys. 21, 905–940 (1982).
[5] C. H. Bennett and R. Landauer, Sci. Am. 253, No. 7, 38
(1985).
[6] G. Gavalas, Nonlinear Differential Equations of Chemically Reacting Systems (Springer-Verlag, Berlin, 1968).
[7] G. Oster, A. S. Perelson, and A. Katchalsky, Q. Rev.
Biophys. 6, 1 (1973).
[8] Nonlinear Phenomena in Chemical Kinetics, edited by
C. Vidal and A. Pacault (Springer-Verlag, Berlin, 1981).
[9] O. E. Rössler, in Physics and Mathematics of the Nervous System, edited by M. Conrad, W. Güttinger, and M. Dal Cin, Springer-Verlag Lecture Notes in Biomathematics Vol. 4 (Springer-Verlag, Berlin, 1974), pp. 399–418, 546–582.
[10] T. Head, Bull. Math. Biol. 49, 737 (1987).
[11] R. E. Siatkowsky and F.L. Carter, in Molecular Electronic
Devices, edited by F.L. Carter, R. E. Siatkowsky, and
H. Wohltjen (North-Holland, Amsterdam, 1988).
[12] L. Adleman, Science 266, 1021 (1994).
[13] B. Alberts, D. Bray, J. Lewis, M. Raff, K. Roberts, and
J.D. Watson, The Molecular Biology of the Cell (Garland,
New York, 1994), 3rd ed.
[14] T. Pawson, Nature (London) 373, 573 (1995).
[15] D. Bray, Nature (London) 376, 307 (1995).
[16] M. Okamoto, T. Sakay, and K. Hayashi, Biosystems 21, 1 (1987).
[17] M. Okamoto, T. Sakay, and K. Hayashi, Biol. Cybern. 58,
295 (1988).
[18] A. Hjelmfelt, E. D. Weinberger, and J. Ross, Proc. Natl.
Acad. Sci. USA 88, 10 983 (1991).
[19] A. Hjelmfelt, E. D. Weinberger, and J. Ross, Proc. Natl.
Acad. Sci. USA 89, 383 (1992).
[20] A. Hjelmfelt and J. Ross, Proc. Natl. Acad. Sci. USA 89,
388 (1992).
[21] A. Hjelmfelt, F. W. Schneider, and J. Ross, Science 260,
335 (1993).
[22] A. Arkin and J. Ross, Biophys. J. 67, 560 (1994).
[23] J.-P. Laplante, M. Payer, A. Hjelmfelt, and J. Ross,
J. Phys. Chem. 99, 10063 (1995).
[24] Compounds which exist in two chemically distinct states can be implemented as proteins which can be phosphorylated, or as compounds which can be localized in two different places; for instance, Ca²⁺ in the cytosol is "chemically distinct" from Ca²⁺ in the ER, with channels and pumps playing the role of kinases and phosphatases.
[25] These repeaters show the equivalent of input and output impedance/capacitances. If [a] is changed abruptly, it bounces, due to capture and release from the bound complexes ab* and ab. Convergence fails on the last steps of the cascade when the last element is not "terminated."
[26] For S = 1 the curve is convex. A convex curve can be intersected by a straight line at most twice; at most one intersection can be stable.
[27] Actual kinases from enzymatic pathways can have more
than one phosphorylation site and do logic directly on the
protein; so biological cascades can be more compact than
the networks shown here.
[28] Online at http://tlon.rockefeller.edu/
[29] This can be done with only one autocatalytic reaction, plus a nonspecific decay like c* → c (a self-phosphorylating kinase and a phosphatase); in the absence of other interactions, either S > 1 or cooperativity is required. See also [9].
[30] H. Schulman, Curr. Opin. Cell Biol. 5, 247–253 (1993).
[31] A full version of this paper, including a careful description
of the open problems, will appear elsewhere.