VOLUME 78, NUMBER 6 PHYSICAL REVIEW LETTERS 10 FEBRUARY 1997
Chemical Kinetics is Turing Universal
Marcelo O. Magnasco
Center for Studies in Physics and Biology, The Rockefeller University, 1230 York Avenue, New York, New York 10021
(Received 20 February 1996; revised manuscript received 15 August 1996)
We show that digital logic can be implemented in the chemical kinetics of homogeneous solutions:
We explicitly construct logic gates and show that arbitrarily large circuits can be made from them. This
proves that a subset of the constructions available to life has universal (Turing) computational power.
[S0031-9007(97)02332-6]
PACS numbers: 87.10.+e, 89.80.+h, 82.20.Mj
Interest in chemical computation has followed four different paths. It is one of the natural extensions of discussions about information and thermodynamics, which go back to Maxwell demon arguments and Szilard’s work [1–5]. It is also a rather natural extension of the application of dynamical systems theory to chemical reactions [6–8], in particular logic networks stemming from bistable reaction systems [9]. A lot of effort has been devoted to trying to devise nonstandard computational architectures, and chemical implementations provide a distinct enough backdrop to silicon [10–12]. Finally, in recent years biology has presented us with what look to be actual chemical computers: the enzymatic cascades of cell signaling [13–15].
One of the first questions that can be asked in this subject is whether universal (Turing) computation can be achieved within some theoretical model of chemistry; the most immediate one is standard chemical kinetics. This question has recently been studied in some detail [16–22], and even subjected to experimental tests [23]. In [18–20], Hjelmfelt et al. argued quite convincingly that building blocks for universal computation can indeed be constructed within ideal chemical kinetics, and that they could be interconnected to achieve computation. However, many difficulties still lie in the way. An issue not addressed by Hjelmfelt et al. is structural stability: the tolerance of a system to changes in parameters and functional structure. In particular, “gluing” together two groups of chemical reactions will have appreciable effects on the kinetics of both groups; the basic unit and the couplings used in [18–20] require case-by-case adjustment of individual parameters for proper functioning.
The purpose of this Letter is to provide a slightly
more formal proof that chemical kinetics can be used
to construct universal computers. I will concentrate on
the “next” level of difficulty, which is that of the global
behavior of a fully coupled system and its structural
stability. I will do this through the simplest approach: I will show that classical digital electronics can be implemented through chemical reactions. Since my key problem in this scheme is showing global consistency, and the proof requires arbitrarily large circuits, I will have to show that the output of one gate can be plugged into the input of others for arbitrarily many layers, without degrading the logic, while keeping full coupling at all times.
We will need a power supply. I will define mine to
consist of two chemical species called high and low;
their concentrations will be kept clamped strongly out
of equilibrium, so an external reservoir is assumed.
This approximates the power supply in cells, the two
compounds ATP and ADP; the cellular “power plants”
keep their concentration as constant as feasible, nearly
6 decades away from equilibrium. Thermodynamics
requires the logarithm of the equilibrium constants to
lie in the (left) span of the stoichiometry matrix; it is
important that all reactions we use satisfy this constraint,
so that there are no “hidden” power supplies.
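As a minimal numerical sketch of this constraint (the toy network, rates, and helper function below are my own illustration, not anything defined in the Letter): log K fails to lie in the span of the stoichiometry matrix exactly when some closed reaction cycle has a nonzero net log K, i.e., when the cycle could turn over forever and act as a hidden power supply.

```python
import numpy as np

# Rows of N are reactions, columns are species; each entry is the net
# stoichiometric coefficient (products minus reactants).
# Toy network (hypothetical): the cycle A <-> B <-> C <-> A.
N = np.array([[-1.0,  1.0,  0.0],   # A <-> B
              [ 0.0, -1.0,  1.0],   # B <-> C
              [ 1.0,  0.0, -1.0]])  # C <-> A

def no_hidden_power_supply(N, logK, tol=1e-9):
    """True iff logK lies in the column span of N, i.e. logK = N @ x is
    solvable for some standard chemical potentials x (in RT units)."""
    x, *_ = np.linalg.lstsq(N, logK, rcond=None)
    return bool(np.allclose(N @ x, logK, atol=tol))

print(no_hidden_power_supply(N, np.array([1.0, 1.0, -2.0])))  # True: consistent
print(no_hidden_power_supply(N, np.array([1.0, 1.0, -1.0])))  # False: this cycle
# would turn over forever, extracting work from nowhere.
```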
The very first thing we need to consider is the trivial
gate, the signal repeater, which copies input onto output.
Any problems we encounter with it will recur for any
other gate. Let’s say a chemical species a is the input and b the output. We will need b to exist in two chemically distinct forms, b and b̄ [24]. If b is a compound of higher energy than b̄, we can couple its production to the power supply, as in b̄ + high ⇌ b + low; in the absence of other reactions, [b̄] goes to a small value determined by the rate of spontaneous decay in b ⇌ b̄. This is then a sort of “capacitor,” which we charge with the power supply. If then the reaction b ⇌ b̄ is catalyzed by a,

a + b ⇌ ab ⇌ ab̄ ⇌ a + b̄,  (1)

then a “shorts” the capacitor and discharges it, increasing the concentration of b̄. Hence when [a] is low, [b̄] is low, and when [a] is high, [b̄] becomes high, and the transitions have certain rise and decay times determined by the precise rates we use.
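As an illustration, here is a forward-Euler integration of one such repeater under mass-action kinetics; all rate constants are guesses for the sketch (the Letter does not list its rates), and the clamped high/low concentrations are folded into the effective constants k_ps and k_leak.

```python
import numpy as np

# One signal repeater driven by a square-wave input. State y = [b-bar];
# [b] = B - y by conservation. k_ps: power-supply recharge (b-bar -> b),
# k_leak: slow spontaneous decay (b -> b-bar), k_cat: discharge catalyzed
# by the input a. All values are illustrative, not the paper's.
B, k_leak, k_cat, k_ps = 1.0, 0.01, 20.0, 1.0
dt, T = 0.01, 200.0
t = np.arange(0.0, T, dt)
a = np.where((t % 100.0) < 50.0, 0.9, 0.01)   # input square wave

y = np.zeros_like(t)
for i in range(1, len(t)):
    b = B - y[i - 1]
    dy = (k_leak + k_cat * a[i - 1]) * b - k_ps * y[i - 1]   # S = 1
    y[i] = y[i - 1] + dt * dy

print(f"output high ~ {y[t < 50].max():.2f}, output low ~ {y[-1]:.3f}")
```

Note that the output “low” level (about 0.17 here) already sits well above the input “low” of 0.01: a first hint of the degradation seen in Fig. 1.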
In Fig. 1 we see the output of simulating a chain of several such gates with a → b → c → d → .... The gates are all identical; the only change between them is the name of the compound. The wave forms are dying as we go down this chain: The difference between the “high” and the “low” levels is becoming smaller and smaller. So this network is not a suitable signal repeater. Figure 2 shows the output of a similar simulation using the reactions

2a + b ⇌ a₂b ⇌ a₂b̄ ⇌ 2a + b̄  (2)
FIG. 1. A cascade of identical signal repeaters a → b → c → ..., using Eq. (1). The input [a] is a square wave. Top (small) panels show each signal individually with varying scales; the bottom (large) panel shows all signals simultaneously on the same scale. The amplitude of the signal gets reduced very rapidly.
(i.e., double stoichiometry on the input). We can see that the amplitude of the pulses is stabilized; both high and low now approach amply separated levels [25]. I will now prove that higher stoichiometry is essential.
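Before the proof, a time-domain sketch of the two cascades (again with made-up rate constants, folding the clamped power supply into effective rates): each stage obeys d[b̄]/dt = (k_leak + k_cat·xˢ)(B − [b̄]) − k_ps[b̄], with x the previous stage’s output.

```python
import numpy as np

# A chain of N repeaters, each stage's b-bar catalyzing the next stage's
# discharge, for stoichiometry S = 1 and S = 2. Rates are illustrative.
B, k_leak, k_cat, k_ps = 1.0, 0.01, 20.0, 1.0
N, dt, T = 6, 0.01, 400.0
t = np.arange(0.0, T, dt)
drive = np.where((t % 200.0) < 100.0, 0.9, 0.01)     # square-wave input [a]

for S in (1, 2):
    y = np.zeros(N)                                  # y[k] = [b-bar] of stage k
    swing_hi = np.zeros(N); swing_lo = np.full(N, np.inf)
    for i in range(len(t)):
        x = drive[i]
        for k in range(N):
            b = B - y[k]
            y[k] += dt * ((k_leak + k_cat * x**S) * b - k_ps * y[k])
            x = y[k]                                 # output feeds next stage
        if i * dt > 200.0:                           # skip the initial transient
            swing_hi = np.maximum(swing_hi, y)
            swing_lo = np.minimum(swing_lo, y)
    print(f"S={S}: high-low swing per stage:", np.round(swing_hi - swing_lo, 3))
# For S=1 the swing shrinks stage by stage (Fig. 1); for S=2 it settles
# to a steady separation (Fig. 2).
```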
All concentrations become stationary after some transients. If we plot these steady levels as a function of the inputs, we get the classical plots shown in Fig. 3. These diagrams represent the concentration of b as a function of a, but also of c as a function of b, and so on. If we call xₙ the nth compound in the chain, then the diagram shows xₙ₊₁ as a function of xₙ; n here labels position on the chain. This is a recurrence relation, also called a map.
This type of map is usually studied in the theory of dynamical systems, where it represents some dynamical law, and n labels time. A large part of dynamical systems theory is devoted to the asymptotic states, i.e., what happens at arbitrarily long times. In our case this translates to “arbitrarily deep into the circuit,” which is what we want to study. Dynamical systems theory tells us that the only asymptotic states of maps which are monotonically increasing and bounded (our case) are steady states. The steady states (also called fixed points) of a map occur when xₙ₊₁ = xₙ, i.e., when the curve intersects the diagonal line. They can be stable or unstable; stable (unstable) means that if some xₙ is near the fixed point, then, for m > n, the xₘ are nearer to (farther away from) the fixed point; this happens when the curve is shallower (steeper) than the diagonal at the intersection.

FIG. 2. A cascade of signal repeaters with double stoichiometry [Eq. (2)]. Same conventions as Fig. 1. The amplitude of the signal converges to a steady value.
In the case of stoichiometry one (S = 1) there are at most two fixed points, and only one can be stable [26]. For S > 1 there can be three fixed points, the two outer ones being stable, the middle one unstable. We can propagate logic arbitrarily deep into the chain if and only if we have at least two distinct stable fixed points, with each one corresponding to a distinct logical state. But two stable fixed points are possible only for S > 1.

FIG. 3. The steady-state concentration of the outputs of two signal repeaters, S = 1 [Eq. (1)] and S = 2 [Eq. (2)], as a function of the steady-state level of the input a. The diagonal line is [a] as a function of itself; the intersections of the two curves with this diagonal are the fixed points.
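The fixed-point argument can be checked numerically with the steady-state map of the sketches above (my parametrization, not the paper’s): f(x) = B(k_leak + k_cat·xˢ)/(k_leak + k_cat·xˢ + k_ps). Iterating f is walking one gate deeper into the chain.

```python
# Iterate the steady-state input-output map of the repeater: one
# iteration = one gate deeper into the cascade. Rates as in the
# sketches above (illustrative values).
B, k_leak, k_cat, k_ps = 1.0, 0.01, 20.0, 1.0

def f(x, S):
    u = k_leak + k_cat * x**S
    return B * u / (u + k_ps)

for S in (1, 2):
    lo, hi = 0.01, 0.9           # a logical 0 and a logical 1 at the input
    for _ in range(50):          # fifty gates deep into the chain
        lo, hi = f(lo, S), f(hi, S)
    print(f"S={S}: deep-chain levels lo={lo:.3f}, hi={hi:.3f}")
# S=1 collapses both logic levels onto its single stable fixed point;
# S=2 keeps them pinned at two distinct stable fixed points.
```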
Now the main conceptual problems have been solved.
The only remaining point is to construct explicitly a few
different gates (Fig. 4); if all of the gates are “built”
(i.e., the rates so chosen) so that their response is one of our fixed points when the inputs are at the fixed points, then they will be globally compatible. Strictly speaking, one needs only NAND, since all logical functions can be constructed from it, but since each internal wire in the circuit is a chemically distinct compound, it is desirable to implement gates directly [27]. A precise definition of the gates can be found elsewhere [28].

FIG. 4. The output of one implementation of the four classical gates. a and b are the inputs. While there are artifacts, the logic levels are still well separated. AND, OR, and NAND are implemented directly; XOR is implemented as AND(NAND, OR).
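Since the Letter defers its gate definitions to [28], the following NAND is purely my own illustration in the same idiom: the output capacitor is charged by default, and the two inputs jointly catalyze its discharge, giving an effective steady-state level B·k_ps/(k_ps + k_leak + k_gate·[a][b]).

```python
# Hypothetical NAND gate (not the paper's construction): default-charged
# output, discharged only when both inputs are present. Constants reused
# from the repeater sketches; k_gate is the joint catalytic rate.
B, k_leak, k_ps, k_gate = 1.0, 0.01, 1.0, 100.0
LO, HI = 0.013, 0.947            # the two stable fixed points found above

def nand(xa, xb):
    """Steady-state output of the sketched NAND under mass action."""
    return B * k_ps / (k_ps + k_leak + k_gate * xa * xb)

for xa in (LO, HI):
    for xb in (LO, HI):
        print(f"nand({xa:.3f}, {xb:.3f}) = {nand(xa, xb):.3f}")
# nand(LO, LO) = 0.974, nand(LO, HI) = 0.446, nand(HI, HI) = 0.011:
# high unless both inputs are high, as required.
```

The middling 0.45 level for mixed inputs is the kind of artifact visible in Fig. 4; a downstream S = 2 repeater restores it to a clean fixed point.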
Adding is a problem that exemplifies rather nicely the
spirit of this work, because when we add, we have to
shift the “carry” digits to the next column. These can
accumulate to generate a cascade, so we need to be able
to propagate logic across an entire network. In order to
add two three-bit numbers (giving a four-bit number as
the output), we need to cascade three full adders. The
three-bit adder is shown in Fig. 5; it can add up to 7 + 7.
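To see carries propagate through many fully coupled layers, here is my toy version of the ripple-carry construction: the hypothetical NAND above, with an S = 2 repeater restoring every internal wire to a clean logic level (each wire being a chemically distinct compound). The gate wiring is the standard nine-NAND full adder, not a circuit taken from the paper.

```python
# Illustrative three-bit ripple-carry adder built from the chemical NAND
# sketched above; every internal wire is re-standardized by the S=2 map.
B, k_leak, k_cat, k_ps, k_gate = 1.0, 0.01, 20.0, 1.0, 100.0
LO, HI = 0.013, 0.947

def restore(x, n=5):
    """Pass a level through n stages of the S=2 repeater map."""
    for _ in range(n):
        u = k_leak + k_cat * x**2
        x = B * u / (u + k_ps)
    return x

def NAND(a, b):
    """Steady-state NAND output, restored to a clean logic level."""
    return restore(B * k_ps / (k_ps + k_leak + k_gate * a * b))

def full_adder(a, b, cin):
    t1 = NAND(a, b)
    s1 = NAND(NAND(a, t1), NAND(b, t1))        # a XOR b
    t4 = NAND(s1, cin)
    s = NAND(NAND(s1, t4), NAND(cin, t4))      # (a XOR b) XOR cin
    return s, NAND(t4, t1)                     # sum bit, carry out

def add3(a, b):
    abits = [HI if (a >> i) & 1 else LO for i in range(3)]
    bbits = [HI if (b >> i) & 1 else LO for i in range(3)]
    c, out = LO, 0
    for i, (x, y) in enumerate(zip(abits, bbits)):
        s, c = full_adder(x, y, c)
        out |= (s > 0.5) << i
    return out | (c > 0.5) << 3

print(add3(7, 7), add3(6, 3), add3(7, 1))      # 14 9 8
```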
Ephemeral memory can be implemented rather directly,
but if the memory is supposed to be long-term, care
must be exercised. A flip-flop can be made by having a compound in two states (c, c̄), and then two inputs (a, b) which catalyze conversion to the other state by coupling to the power supply:
a + c + high ⇌ ··· ⇌ a + c̄ + low,  (3)
and similarly for b sending c̄ → c. The lifetime of this memory would appear to be the lifetime of the uncatalyzed reaction c ⇌ c̄. However, such a mechanism is not resistant to fluctuations in the inputs; even a minute amount of catalyst can reduce the lifetime dramatically. In order to make memory stable, we need to make the system prefer to be either all c or all c̄. There are many ways to do this; for instance,

2c̄ + c + high ⇌ ··· ⇌ 3c̄ + low,  (4)
and vice versa [29]. The addition of these two self-
catalytic reactions makes the memory strongly robust (see
Fig. 6) and, in principle, infinitely long lived even in the
presence of input fluctuations; however, energy is drawn
from the power supply to “refresh” the flip-flop. There is
some resemblance to dynamic vs static RAM, and to the
self-phosphorylating enzyme CaMKII [30], which might
be implicated in long-term memory in neurons.
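A sketch of the two flip-flops of Fig. 6, with made-up rates: y = [c̄], the inputs a and b drive Eq. (3) in the two directions, and k_auto sets the strength of the autocatalytic pair of Eq. (4) (k_auto = 0 gives the static flip-flop).

```python
import numpy as np

# Static vs dynamic flip-flop. A set pulse on input a writes the c-bar
# state; from t=100 on, both inputs fluctuate at 0.1, as in Fig. 6.
C, k_in, dt = 1.0, 1.0, 0.01
t = np.arange(0.0, 200.0, dt)
a = np.where(t < 10, 1.0, np.where(t < 100, 0.001, 0.1))
b = np.where(t < 10, 0.0, np.where(t < 100, 0.001, 0.1))

def run(k_auto):
    y = 0.01                                             # start in the c state
    for i in range(len(t)):
        flip = k_in * (a[i] * (C - y) - b[i] * y)        # Eq. (3): set/reset
        auto = k_auto * y * (C - y) * (2 * y - C)        # Eq. (4): refresh pair
        y += dt * (flip + auto)
    return y

sta, dyn = run(0.0), run(5.0)
print(f"after the t=100 fluctuation: sta={sta:.2f}, dyn={dyn:.2f}")
# sta drifts to the mixed level 0.5 and forgets; dyn stays pinned near 1,
# paying for the refresh with power drawn from high/low.
```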
I have shown one particular explicit implementation of
digital logic in chemical kinetics, and thus shown universal
computation capabilities. However, many questions still
remain open (which I will comment upon in some greater
detail elsewhere [31]): What is the interplay between in-
formation transfer and thermodynamics? Since no catalyst
is perfectly selective for its substrate, how robust are computations under the massive cross talk of random “unintended” reactions? Are there equivalents of the gain-bandwidth and other classic theorems of electronics? And, presumably, many more.

FIG. 5. Numerical simulation of the three-bit adder: c = a + b. The lower traces are the three bits of input a and the three bits of input b; the four upper traces are the four bits of output c. The transients as the inputs are changed show the delays in propagating carries. The five columns of different inputs show 0 + 0 = 0, 7 + 7 = 14, 2 + 2 = 4, 6 + 3 = 9, and 7 + 1 = 8. The network has about 140 compounds in 290 reactions.

FIG. 6. Two flip-flops; sta is a “static” flip-flop [Eq. (3)], and dyn a “dynamic” one with autocatalytic stabilization [Eq. (4)]. Both can switch between states fast as the inputs a and b are pulsed. At time 100 both inputs are set to 0.1; sta forgets its state, while dyn does not.
I would like to thank A. Ajdari, G. Cecchi, D. Chatenay, J.-P. Eckmann, A. Libchaber, G. Stolovitzky, and D. Thaler for many stimulating discussions. Completed in part at U. of Buenos Aires and Asoc. de Fisica Argentina; I want to thank G. Mindlin, S. Ponce, and J. P. Paz for their hospitality. Supported in part by the Mathers Foundation.
[1] L. Szilard, Z. Phys. 53, 840–856 (1929).
[2] H. S. Leff and A.F. Rex, Maxwell’s Demon: Information,
Entropy, Computing (Princeton University Press, Prince-
ton, New Jersey, 1990).
[3] R. Landauer, IBM J. Res. Dev. 5, 183–191 (1961).
[4] C. H. Bennett, Int. J. Theor. Phys. 21, 905–940 (1982).
[5] C. H. Bennett and R. Landauer, Sci. Am. 253, No. 7, 38
(1985).
[6] G. Gavalas, Nonlinear Differential Equations of Chemi-
cally Reacting Systems (Springer-Verlag, Berlin, 1968).
[7] G. Oster, A. S. Perelson, and A. Katchalsky, Q. Rev.
Biophys. 6, 1 (1973).
[8] Nonlinear Phenomena in Chemical Kinetics, edited by
C. Vidal and A. Pacault (Springer-Verlag, Berlin, 1981).
[9] O. E. Rössler, in Physics and Mathematics of the Nervous
System, edited by M. Conrad, W. Güttinger, and M. Dal
Cin, Springer-Verlag Lecture Notes in Biomathematics
Vol. 4 (Springer-Verlag, Berlin, 1974), pp. 399–418,
546–582.
[10] T. Head, Bull. Math. Biol. 49, 737 (1987).
[11] R. E. Siatkowsky and F.L. Carter, in Molecular Electronic
Devices, edited by F.L. Carter, R. E. Siatkowsky, and
H. Wohltjen (North-Holland, Amsterdam, 1988).
[12] L. Adleman, Science 266, 1021 (1994).
[13] B. Alberts, D. Bray, J. Lewis, M. Raff, K. Roberts, and
J.D. Watson, The Molecular Biology of the Cell (Garland,
New York, 1994), 3rd ed.
[14] T. Pawson, Nature (London) 373, 573 (1995).
[15] D. Bray, Nature (London) 376, 307 (1995).
[16] M. Okamoto, T. Sakay, and K. Hayashi, Biosystems 21, 1 (1987).
[17] M. Okamoto, T. Sakay, and K. Hayashi, Biol. Cybern. 58,
295 (1988).
[18] A. Hjelmfelt, E. D. Weinberger, and J. Ross, Proc. Natl. Acad. Sci. USA 88, 10983 (1991).
[19] A. Hjelmfelt, E. D. Weinberger, and J. Ross, Proc. Natl.
Acad. Sci. USA 89, 383 (1992).
[20] A. Hjelmfelt and J. Ross, Proc. Natl. Acad. Sci. USA 89,
388 (1992).
[21] A. Hjelmfelt, F. W. Schneider, and J. Ross, Science 260,
335 (1993).
[22] A. Arkin and J. Ross, Biophys. J. 67, 560 (1994).
[23] J.-P. Laplante, M. Payer, A. Hjelmfelt, and J. Ross,
J. Phys. Chem. 99, 10063 (1995).
[24] Compounds which exist in two chemically distinct states
can be implemented as proteins which can be phosphory-
lated, or as compounds which can be localized in two
different places; for instance, Ca²⁺ in the cytosol is “chemically distinct” from Ca²⁺ in the ER, with channels and pumps playing the role of kinases and phosphatases.
[25] These repeaters show the equivalent of input and output impedances/capacitances. If [a] is changed abruptly, it bounces, due to capture and release from the bound complexes ab and ab̄. Convergence fails on the last steps of the cascade when the last element is not “terminated.”
[26] For S = 1 the curve is convex. A convex curve can be
intersected by a straight line at most twice; at most one
intersection can be stable.
[27] Actual kinases from enzymatic pathways can have more
than one phosphorylation site and do logic directly on the
protein; so biological cascades can be more compact than
the networks shown here.
[28] Online at http://tlon.rockefeller.edu/
[29] This can be done with only one autocatalytic reaction, plus a nonspecific decay like c̄ → c (a self-phosphorylating kinase and a phosphatase); in the absence of other interactions, either S > 1 or cooperativity is required. See also [9].
[30] H. Schulman, Curr. Opin. Cell Biol. 5, 247–253 (1993).
[31] A full version of this paper, including a careful description
of the open problems, will appear elsewhere.