Proc. Natl. Acad. Sci. USA
Vol. 89, pp. 383-387, January 1992
Chemistry

Chemical implementation of finite-state machines

ALLEN HJELMFELT†, EDWARD D. WEINBERGER†, AND JOHN ROSS‡

†Max-Planck-Institut für Biophysikalische Chemie, D-3400 Göttingen, Federal Republic of Germany; and ‡Department of Chemistry, Stanford University, Stanford, CA 94305

Contributed by John Ross, September 23, 1991
ABSTRACT With methods developed in a prior article on the chemical kinetic implementation of a McCulloch-Pitts neuron, connections among neurons, logic gates, and a clocking mechanism, we construct examples of clocked finite-state machines. These machines include a binary decoder, a binary adder, and a stack memory. An example of the operation of the binary adder is given, and the chemical concentrations corresponding to the state of each chemical neuron are followed in time. Using these methods, we can, in principle, construct a universal Turing machine, and these chemical networks inherit the halting problem.
In a prior article (1) we discussed the implementation of a chemical neural network: we wrote a reaction mechanism with stationary-state properties of a McCulloch-Pitts neuron (2, 3) and developed chemical interneuronal connections, basic logic gates, a clocking mechanism, and input and output of the entire neural network. In this article we combine these chemical components to construct three devices: a binary decoder, a binary adder, and a stack memory. The method of construction can be used to make the finite-state component of a universal Turing machine (4-6), as any finite-state machine can be simulated by clocked neural networks (5). In principle, by coupling this particular finite-state machine with a readable-writable tape, such as a polymer like DNA or a pair of stack memory devices, the chemical implementation of a universal Turing machine based on kinetic reaction mechanisms is realizable. We leave for later study a related issue: given a biological (chemical) reaction mechanism, what logic operations, what computations, can this mechanism perform for given inputs?

We begin with a brief review of the components of a chemical neural network, and then we discuss the construction of a binary adder and a stack memory.
Construction of Chemical Neural Networks
A Single Chemical Neuron. As a basis for a "chemical neuron" we choose a cyclic enzyme mechanism studied by Okamoto et al. (7, 8):

$$I_{1i}^{*} + C_i \rightleftharpoons X_{1i} + C_i \qquad J_{1i} = k_1 C_i - k_{-1} C_i X_{1i}$$
$$X_{1i} + B_i \rightleftharpoons X_{2i} + A_i \qquad J_{2i} = k_2 X_{1i} B_i - k_{-2} X_{2i} A_i$$
$$X_{2i} + A_i \rightleftharpoons X_{3i} + B_i \qquad J_{3i} = k_3 X_{2i} A_i - k_{-3} X_{3i} B_i$$
$$X_{3i} \rightleftharpoons I_{2i}^{*} \qquad J_{4i} = k_4 X_{3i} - k_{-4} \qquad [1]$$

where the concentrations of the species marked by the superscript (*) are held at a constant value, either by buffer or by flows, and have been absorbed into the rate constants. $A_i$ and $B_i$ are the state species and are related by a conservation constraint, $A_i + B_i = A_0$. The stationary-state concentrations are functions of the concentration of the catalyst $C_i$. By using the rate constants given in ref. 1, the stationary-state concentration of $A_i$ is $<2 \times 10^{-4}$ mmol/liter and of $B_i$ is $>0.999$ mmol/liter for $C_i < 0.90$ mmol/liter, and the concentration of $A_i$ is $>0.999$ mmol/liter and of $B_i$ is $<2 \times 10^{-4}$ mmol/liter for $C_i > 1.10$ mmol/liter. Thus, the chemical neuron has two states, and the concentration of $C_i$ determines the state of neuron i.

Clocking. In the neural networks we describe here the state of a chemical neuron is allowed to change only at discrete times. This discreteness of time and synchronization of state changes can be implemented chemically by the use of an autonomously oscillating catalyst E. We assume that E oscillates in a nonsinusoidal manner, as is common in many chemical oscillators (9). The concentration of E is assumed to be very small, except during an interval short compared with the oscillator period and with the relaxation time of a chemical neuron (Eq. 1). The catalyst E interacts with the species $A_j$ (or $B_j$) of each neuron j,

$$A_j \overset{E}{\rightleftharpoons} A_j' \qquad B_j \overset{E}{\rightleftharpoons} B_j' \qquad [2]$$

and rapid equilibration occurs only during the short time interval when the concentration of E is large. In Fig. 1 we show schematically the time variation of the concentrations of $A_i$ and $A_i'$, as determined by the concentration of $C_i$. $A_i$ is the state of neuron i at a given time, say t = 0 for the interval 0 to 1 in Fig. 1, and determines $A_i'$ in the next time interval, 1 to 2. The state of neuron j at time t - 1 determines the state of neuron i at time t. Thus, the $A_j'$ at time t determines the state of neuron i at time t.

Interneuronal Connections. The effect of the state of the other neurons j, k, . . . on neuron i is expressed in $C_i$. The species $A_j'$, . . . or $B_j'$, . . . affect the concentration of the catalyst $C_i$ by activation reactions,

$$E_{ij} + A_j' \rightleftharpoons C_{ij} \qquad C_{ij} = \frac{E_{ij}^{0}}{1 + \frac{1}{K A_j'}} \qquad [3]$$

$$E_{ij} + B_j' \rightleftharpoons C_{ij} \qquad C_{ij} = \frac{E_{ij}^{0}}{1 + \frac{1}{K (A_0 - A_j')}} \qquad [4]$$

which are assumed to equilibrate on the time scale of the pulse of the catalyst E and to be fast compared with the time scale of mechanism 1. The sum of the active forms of the enzyme determines $C_i$:

$$C_i = \sum_j C_{ij}. \qquad [5]$$

In Fig. 2 we show schematically the influence of neurons j, k, and l on neuron i. The state of neuron j determines the concentration of $C_{ij}$, and the firing of neuron i is inhibited by the firing of neuron j (Eq. 4).
The states of neurons k and l (not
The publication costs of this article were defrayed in part by page charge payment. This article must therefore be hereby marked "advertisement" in accordance with 18 U.S.C. §1734 solely to indicate this fact.
FIG. 1. Representation of the variation of concentrations of $A_i$ and $A_i'$ as a function of time. Change in concentration of $C_i$ at t = 1 causes change in concentration of $A_i$, which, in turn, determines concentration of $A_i'$ at t = 2.
shown) likewise determine the concentrations $C_{ik}$ and $C_{il}$, and the sum of $C_{ij}$, $C_{ik}$, and $C_{il}$ is $C_i$, the parameter that determines the state of neuron i. The state of neuron i determines the concentration of $C_{ki}$, and the firing of neuron k is excited by the firing of neuron i. The combination of reactions 3 and 4 determines the logical operation of neuron i on the states of neurons j, . . . . That is, the state of neuron i at time t is determined by a logical operation on the states of neurons j, . . . at time t - 1.
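The two-threshold reading of $C_i$ and the connection algebra of Eqs. 3-5 can be sketched numerically. This is an illustrative reduction, not the paper's kinetics: mechanism 1 is replaced by a hard threshold at the 0.90/1.10 mmol/liter bounds quoted above, and $E_{ij}^0 = 1$ and $K = 2$ are assumed values chosen so that one firing input contributes 2/3 to $C_i$.

```python
# Illustrative reduction (not the paper's full kinetics): the stationary
# state of mechanism 1 is replaced by a hard threshold on C_i, and the
# connection terms of Eqs. 3-5 are evaluated with assumed E0 = 1, K = 2.

def neuron_state(C, on=1.10, off=0.90):
    """Two-state reading of the stationary state of mechanism 1."""
    if C > on:
        return 1.0      # A_i high: the neuron fires
    if C < off:
        return 0.0      # B_i high: the neuron is quiescent
    raise ValueError("C_i in the bistable gap; state not determined")

def activation_term(A, K=2.0, E0=1.0):
    """Eq. 3: C_ij = E0 / (1 + 1/(K*A')), active when neuron j fires."""
    return 0.0 if A <= 0 else E0 / (1.0 + 1.0 / (K * A))

def inhibition_term(A, A0=1.0, K=2.0, E0=1.0):
    """Eq. 4: the same enzyme activated by B' = A0 - A' (neuron j quiet)."""
    return activation_term(A0 - A, K, E0)

# Eq. 5: C_i sums the connection terms. Two excitatory inputs make an AND
# gate: each firing input contributes 2/3, so C_i exceeds 1.10 only when
# both inputs fire.
C_i = activation_term(1.0) + activation_term(1.0)
print(C_i, neuron_state(C_i))   # 1.333..., 1.0
```

With a single firing input, $C_i = 2/3 < 0.90$ and the neuron stays quiescent, which is exactly the AND behavior used repeatedly below.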
In ref. 1 we describe how various logical operations can be represented, such as AND, OR, NOR, etc. We also use a connection where the connection enzyme (C) in Eq. 5 is inhibited or activated by more than one species.
$A_j'$ and $A_l'$ interact with the same enzyme $E_i$,

$$E_i + A_j' \rightleftharpoons C_i \qquad [6]$$

$$E_i + A_l' \rightleftharpoons (E_i A_l') \qquad [7]$$

and

$$C_i = \frac{E_i^{0}}{1 + \frac{1}{K_A A_j'} + \frac{K_I A_l'}{K_A A_j'}} \qquad [8]$$

where $E_i^{0} = E_i + C_i + (E_i A_l')$, $K_A$ is the equilibrium constant of the activation reaction (Eq. 6), and $K_I$ is the equilibrium constant of the inhibition reaction (Eq. 7). These reactions allow specific inhibition of one connection, instead of the nonspecific inhibition given by Eq. 4.
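Eq. 8 admits the same kind of numerical sketch. The constants are illustrative assumptions, not values fixed by the paper: $K_A = 2$ matches the 2/3-per-connection scale used throughout, and $K_I = 50$ simply makes the specific inhibition strong.

```python
# Sketch of Eq. 8: one connection enzyme E_i activated by A'_j and
# specifically inhibited by A'_l. K_A, K_I, E0 are assumed values.

def connection_eq8(A_act, A_inh, K_A=2.0, K_I=50.0, E0=1.0):
    """C_i = E0 / (1 + 1/(K_A*A_act) + K_I*A_inh/(K_A*A_act))."""
    if A_act == 0:
        return 0.0          # no activator bound: no active complex
    return E0 / (1.0 + 1.0 / (K_A * A_act) + K_I * A_inh / (K_A * A_act))

print(connection_eq8(1.0, 0.0))   # activator firing, inhibitor quiet: 2/3
print(connection_eq8(1.0, 1.0))   # inhibitor firing: connection shut off
```

The inhibitor silences only this one connection; the other terms entering the sum in Eq. 5 are untouched, which is the point of Eqs. 6-8.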
Examples of Finite-State Machines
One copy of the basic reaction mechanism of a neuron (Eq. 1) exists for each chemical neuron in the network. Each neuron is chemically distinct, but for convenience we assume that the reactions that constitute each neuron are mechanistically similar. A machine is specified by the number of neurons, the form of the connections between the neurons (Eqs. 3, 4, or 8), which neurons represent the output of the machine, and which concentrations represent the input to the machine.
Binary Decoder. The first device we construct is a binary decoder composed of four neurons (i = 3-6) and two input concentrations $A_1(t)$ and $A_2(t)$, which we assume to be controlled by the external world and which are represented here as the state species of neurons 1 and 2. A binary number is represented as a string of digits presented to the machine sequentially in time with the least-significant digit first. $A_1(t)$ and $A_2(t)$ are each digits of such numbers. These numbers are presented, digit by digit in parallel, to the binary decoder,
FIG. 2. Schematic of two reaction mechanisms constituting neurons i and j and the influence of neurons j, k, and l on neuron i. All reactions are reversible. The firing of neuron j inhibits the firing of neuron i, and neurons k and l (not shown) also influence the state of neuron i. The firing of neuron i inhibits the firing of neuron k.
which causes one and only one of the neurons with i = 3-6 to fire at time t + 1. The catalyst concentrations of the four neurons are given by
$$C_3 = \frac{1}{1+\frac{1}{2A_1'}} + \frac{1}{1+\frac{1}{2A_2'}}; \quad \text{neuron 3 fires only if } A_1' = 1,\ A_2' = 1, \qquad [9]$$

$$C_4 = \frac{1}{1+\frac{1}{2A_1'}} + \frac{1}{1+\frac{1}{2(1-A_2')}}; \quad \text{neuron 4 fires only if } A_1' = 1,\ A_2' = 0, \qquad [10]$$

$$C_5 = \frac{1}{1+\frac{1}{2(1-A_1')}} + \frac{1}{1+\frac{1}{2A_2'}}; \quad \text{neuron 5 fires only if } A_1' = 0,\ A_2' = 1, \qquad [11]$$

and

$$C_6 = \frac{1}{1+\frac{1}{2(1-A_1')}} + \frac{1}{1+\frac{1}{2(1-A_2')}}; \quad \text{neuron 6 fires only if } A_1' = 0,\ A_2' = 0. \qquad [12]$$
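Eqs. 9-12 can be checked directly in the threshold picture sketched earlier (each satisfied condition contributes 2/3 to $C_i$; a neuron fires when $C_i > 1.10$). This is an abstraction of the kinetics, not a simulation of mechanism 1.

```python
# Decoder sketch: Eqs. 9-12 in the threshold picture. Exactly one of
# neurons 3-6 fires for each input pair (A1, A2).

def term(x):
    """One connection term 1/(1 + 1/(2x)), with x = A' or 1 - A'."""
    return 0.0 if x <= 0 else 1.0 / (1.0 + 1.0 / (2.0 * x))

def decoder(A1, A2):
    C = {
        3: term(A1) + term(A2),           # Eq. 9:  fires iff A1 = 1, A2 = 1
        4: term(A1) + term(1 - A2),       # Eq. 10: fires iff A1 = 1, A2 = 0
        5: term(1 - A1) + term(A2),       # Eq. 11: fires iff A1 = 0, A2 = 1
        6: term(1 - A1) + term(1 - A2),   # Eq. 12: fires iff A1 = 0, A2 = 0
    }
    return {i: 1 if c > 1.10 else 0 for i, c in C.items()}

for A1 in (0, 1):
    for A2 in (0, 1):
        fired = [i for i, s in decoder(A1, A2).items() if s]
        print((A1, A2), fired)   # exactly one decoder neuron per input pair
```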
Neurons i = 3-6 excite neurons in the binary adder described in the next section. The entire device (decoder and adder) is pictured in Fig. 3. The purpose of the decoder is to convert the pairs of input digits into the firing of a unique neuron. If the input is decoded in this form, then the operation of the adder on this decoded input can have a canonical form.
Binary Adder. A two-state machine can add arbitrarily large binary numbers when pairs of digits of the numbers are supplied to the machine serially (3). The two states of the machine represent "carry 0" or "carry 1" from the sum of the previous two digits. For a binary adder the two digits and the machine state at time t uniquely determine the output and the machine state at time t + 1 through the rules in Table 1.
Any clocked finite-state machine, such as a binary adder, can be simulated by a neural network of a certain canonical form, provided the inputs are suitably decoded (3), as in the section on Binary Decoder. The canonical form represents arranging AND-neurons in a matrix where each column
FIG. 3. Schematic of the neurons and connections in the binary decoder and adder. The half-shaded circles denote neurons. The connection emerging from the shaded side is the output (state) of the neuron. Connections entering the unshaded side provide input to the neuron: →, excitatory connections; ⊸, inhibitory connections. The number of firing excitatory inputs is summed, and when that number is greater than the number in the neuron and no inhibitory inputs are firing, the neuron fires. In this notation neuron 6 is a NOR gate, neuron 5 is an A2 AND NOT A1 gate, and neurons 3 and 7-12 are AND gates. Some of the connections are denoted by broken lines for clarity. Each column of the adder represents one state of the adder: carry 0 or carry 1; and each row represents one of the input combinations: [1 1], [1 0], [0 0]. Thus each neuron in the adder portion represents one row in Table 1.
represents one of the machine states and each row represents one of the possible inputs to the adder. In Fig. 3, neurons 7-9 represent the carry-0 state, and neurons 7 and 10 represent the input [0 0]. Each neuron i in the adder represents line i of Table 1. At any given time exactly one neuron in the adder is firing. Each neuron in the adder excites the neurons in the column representing its state at t + 1 (determined from Table 1), and each input to the adder from the decoder excites the neurons in one row. Thus, only one neuron in the adder has two firing inputs at any given time. Because all the neurons of the adder are AND-neurons, only one neuron will fire at time t + 1. This one firing neuron also gives the output of the adder, which is determined by Table 1.
To follow the rules of Table 1 we choose the catalyst concentrations for the six AND-neurons that compose the binary adder as follows:
$$C_7 = \frac{1}{1+\frac{1}{2A_6'}} + \frac{1}{1+\frac{1}{2A_7'}} + \frac{1}{1+\frac{1}{2A_8'}} + \frac{1}{1+\frac{1}{2A_{10}'}} \qquad [13]$$

$$C_8 = \frac{1}{1+\frac{1}{2A_4'}} + \frac{1}{1+\frac{1}{2A_5'}} + \frac{1}{1+\frac{1}{2A_7'}} + \frac{1}{1+\frac{1}{2A_8'}} + \frac{1}{1+\frac{1}{2A_{10}'}} \qquad [14]$$

$$C_9 = \frac{1}{1+\frac{1}{2A_3'}} + \frac{1}{1+\frac{1}{2A_7'}} + \frac{1}{1+\frac{1}{2A_8'}} + \frac{1}{1+\frac{1}{2A_{10}'}} \qquad [15]$$

$$C_{10} = \frac{1}{1+\frac{1}{2A_6'}} + \frac{1}{1+\frac{1}{2A_9'}} + \frac{1}{1+\frac{1}{2A_{11}'}} + \frac{1}{1+\frac{1}{2A_{12}'}} \qquad [16]$$

$$C_{11} = \frac{1}{1+\frac{1}{2A_4'}} + \frac{1}{1+\frac{1}{2A_5'}} + \frac{1}{1+\frac{1}{2A_9'}} + \frac{1}{1+\frac{1}{2A_{11}'}} + \frac{1}{1+\frac{1}{2A_{12}'}} \qquad [17]$$

and

$$C_{12} = \frac{1}{1+\frac{1}{2A_3'}} + \frac{1}{1+\frac{1}{2A_9'}} + \frac{1}{1+\frac{1}{2A_{11}'}} + \frac{1}{1+\frac{1}{2A_{12}'}} \qquad [18]$$
where $A_i'$ is the state species of the neuron corresponding to $C_i$, and for i < 5 these are the neurons of the previous subsection (the binary decoder). As indicated in Table 1 the input of [0 1] and [1 0] is degenerate. Thus, the decoder neurons 4 and 5 both excite the same row (neurons 8 and 11 in Fig. 3) of the AND-neuron matrix.
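One clock step of the AND-neuron matrix of Eqs. 13-18 can be sketched in the same threshold picture. `ROWS` and `COLS` simply restate which primed species enter each equation; the kinetics is again reduced to the 1.10 firing threshold.

```python
# Sketch of one clock step of the adder (Eqs. 13-18): each connection
# contributes 1/(1 + 1/(2A')) to C_i, and a neuron fires when C_i > 1.10.

def term(x):
    return 0.0 if x <= 0 else 1.0 / (1.0 + 1.0 / (2.0 * x))

# row inputs: decoder neurons exciting each adder neuron
ROWS = {7: (6,), 8: (4, 5), 9: (3,), 10: (6,), 11: (4, 5), 12: (3,)}
# column inputs: adder neurons whose next state is carry 0 excite 7-9,
# those whose next state is carry 1 excite 10-12 (Table 1)
COLS = {i: (7, 8, 10) if i <= 9 else (9, 11, 12) for i in range(7, 13)}

def adder_step(A):
    """A maps neuron index (3-12) to its state A'; returns states at t+1."""
    nxt = {}
    for i in range(7, 13):
        C = sum(term(A[j]) for j in ROWS[i] + COLS[i])
        nxt[i] = 1 if C > 1.10 else 0
    return nxt

A = {i: 0 for i in range(3, 13)}
A[4] = 1   # decoder neuron 4: input digits [1 0]
A[7] = 1   # adder neuron 7: machine in the carry-0 state
print(adder_step(A))   # only neuron 8 fires: output 1, carry 0 (Table 1)
```

Only neuron 8 receives two firing inputs (row from neuron 4, column from neuron 7), so it alone exceeds the threshold, as the text describes.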
The operation of the decoder and adder is illustrated in Fig. 4, where we plot the time evolution of the state species $A_i$ concentrations. At t = 0 the two binary digits $A_1(0) = 1$ and $A_2(0) = 0$ are presented to the decoder. At t = 1 neuron 4 of the decoder, which fires if and only if $A_1 = 1$ and $A_2 = 0$ (Eq. 10), fires and excites one row of neurons, neurons 8 and 11, in the binary adder. Also at time t = 1, neuron 7 of the adder is firing, and it is an input to the carry-0 column (neurons 7-9) of the adder. At t = 2, only neuron 8 of the adder has two firing inputs, so in the adder only neuron 8 fires. From Table 1 neuron 8 signifies that the adder outputs a 1 and carries a 0. The output of the adder lags behind the input to the decoder by two time steps. The first two digits of output (t = 1, 2) are discounted bits because of the time lag. The relevant output starts at t = 3. Likewise, the last two input digits are not part of the output due to the time lag.
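Abstracting away the chemistry entirely, the serial protocol of Table 1 can be sketched as a plain finite-state program: digit pairs enter least-significant first, and the carry state persists between clock steps. The helper `serial_add` is an illustration written for this sketch, not a construct from the paper.

```python
# Table 1 recast as a lookup: (A1, A2, carry_t) -> (carry_{t+1}, output).

TABLE = {
    (0, 0, 0): (0, 0),   # row 7
    (0, 1, 0): (0, 1),   # row 8 (and [1 0], by the degeneracy noted above)
    (1, 0, 0): (0, 1),
    (1, 1, 0): (1, 0),   # row 9
    (0, 0, 1): (0, 1),   # row 10
    (0, 1, 1): (1, 0),   # row 11
    (1, 0, 1): (1, 0),
    (1, 1, 1): (1, 1),   # row 12
}

def serial_add(x, y):
    """Add two nonnegative ints by streaming digit pairs through Table 1."""
    carry, out, t = 0, 0, 0
    while x or y or carry:
        carry, bit = TABLE[(x & 1, y & 1, carry)]
        out |= bit << t
        x, y, t = x >> 1, y >> 1, t + 1
    return out

print(serial_add(11, 6))   # 17
```

Because the carry is the only state, the same two-state table adds numbers of any length, which is the point of the adder being a finite-state machine.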
Stack. The last clocked device to be described is a first-in last-out stack memory. A finite-state machine augmented with two infinite stacks is equivalent in power to a Turing machine with one infinite tape (4); it is computationally universal. The typical example of a stack is a stack of plates on a spring, and the spring pushes the plates up so that only one plate is visible. If the plates represent data (binary digits, for example), then a particular data item can only be reached by removing all the plates above it. Following the analogy, a stack can be imagined as a linear array of neurons extending downward from a top neuron. Each neuron, Eq. 1, is coupled
Table 1. Transition table for the binary adder

  i     A1   A2   St   St+1   O
  7     0    0    0    0      0
  8     0    1*   0    0      1
  9     1    1    0    1      0
  10    0    0    1    0      1
  11    0    1*   1    1      0
  12    1    1    1    1      1

A1 and A2 are the two input digits, St and St+1 are the machine states at time t and t + 1, and O is the output digit. Each neuron i (7-12) in the adder corresponds to the same indexed row i in the table.
*The case for A1 = 1 and A2 = 0 is equivalent.
FIG. 4. Time dependence of the concentration of $A_i$ in neurons 1-12 of the binary decoder and adder. The concentration of $A_i$ changes between 0 and 1 for each neuron. Neurons 1 and 2 are the input digits that are controlled by the external world.
to its two neighbors, and only the top neuron can be read or modified by an external finite-state machine, as for example the binary adder in the previous section. At each time step the stack can perform one of three operations based on the command received from the external finite-state machine: "remember," "pop," and "push." For the remember operation none of the neurons in the stack change state. For the pop operation, each neuron transfers its state to the neuron above it in the stack, and the state of the top neuron is transferred to the external finite-state machine. For the push operation, each neuron transfers its state to the neuron below it in the stack, and the external finite-state machine transfers information to the top neuron.

Pictured in Fig. 5 is a stack consisting of four neurons (i = 1-4). Neurons 5 and 6 (not shown) are part of an external finite-state machine and determine the operation implemented by the stack. Neuron 7 is also part of the external finite-state machine, and on the push operation $A_1$ accepts data from $A_7$. To allow neurons 5 and 6 to control the stack we use the type of connections given by Eqs. 6-8.
In Fig. 5, ○ denotes a connection that is excited by neuron 5 (i.e., $B_5$ participates in reaction 7), ▷ denotes a connection excited by neuron 6, and ⊓ denotes a connection that is inhibited by both neuron 5 and neuron 6. The catalyst concentrations of the four stack neurons are chosen as follows:
$$C_1 = \frac{1}{(1+50A_5')\left(1+\frac{1}{2A_1'}\right)} + \frac{1}{(1+50A_6')\left(1+\frac{1}{2A_1'}\right)} + \frac{2}{(1+50(1-A_5'))\left(1+\frac{1}{2A_7'}\right)} + \frac{2}{(1+50(1-A_6'))\left(1+\frac{1}{2A_2'}\right)} \qquad [19]$$

$$C_2 = \frac{1}{(1+50A_5')\left(1+\frac{1}{2A_2'}\right)} + \frac{1}{(1+50A_6')\left(1+\frac{1}{2A_2'}\right)} + \frac{2}{(1+50(1-A_5'))\left(1+\frac{1}{2A_1'}\right)} + \frac{2}{(1+50(1-A_6'))\left(1+\frac{1}{2A_3'}\right)} \qquad [20]$$

$$C_3 = \frac{1}{(1+50A_5')\left(1+\frac{1}{2A_3'}\right)} + \frac{1}{(1+50A_6')\left(1+\frac{1}{2A_3'}\right)} + \frac{2}{(1+50(1-A_5'))\left(1+\frac{1}{2A_2'}\right)} + \frac{2}{(1+50(1-A_6'))\left(1+\frac{1}{2A_4'}\right)} \qquad [21]$$

and

$$C_4 = \frac{1}{(1+50A_5')\left(1+\frac{1}{2A_4'}\right)} + \frac{1}{(1+50A_6')\left(1+\frac{1}{2A_4'}\right)} + \frac{2}{(1+50(1-A_5'))\left(1+\frac{1}{2A_3'}\right)} + \frac{2}{(1+50(1-A_6'))\left(1+\frac{1}{2A_4'}\right)} \qquad [22]$$
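Eqs. 19-22 can be sketched in the threshold picture as well. `gate()` models the $(1 + 50A')$ inhibition factors, roughly 1 when the control neuron is off and roughly 1/51 when it fires; everything else is the same reduction used for the decoder and adder.

```python
# Sketch of one clock step of the four-neuron stack (Eqs. 19-22).

def term(x):
    return 0.0 if x <= 0 else 1.0 / (1.0 + 1.0 / (2.0 * x))

def gate(ctrl):
    """Inhibition factor 1/(1 + 50*ctrl): ~1 if ctrl = 0, ~1/51 if ctrl = 1."""
    return 1.0 / (1.0 + 50.0 * ctrl)

def stack_step(A, A5, A6, A7):
    """A = [A1', A2', A3', A4']; A5, A6 select the operation; A7 is the
    datum offered by the external machine on a push."""
    above = [A7, A[0], A[1], A[2]]     # push sources (next neuron higher)
    below = [A[1], A[2], A[3], A[3]]   # pop sources (bottom reuses itself)
    nxt = []
    for i in range(4):
        C = (gate(A5) * term(A[i])                   # remember terms
             + gate(A6) * term(A[i])
             + 2.0 * gate(1 - A5) * term(above[i])   # push term
             + 2.0 * gate(1 - A6) * term(below[i]))  # pop term
        nxt.append(1 if C > 1.10 else 0)
    return nxt

s = [1, 0, 1, 0]
print(stack_step(s, 0, 0, 0))   # remember: [1, 0, 1, 0]
print(stack_step(s, 1, 0, 1))   # push 1:   [1, 1, 0, 1]
print(stack_step(s, 0, 1, 0))   # pop:      [0, 1, 0, 0]
```

The three control settings reproduce the term-by-term analysis that follows: with both controls off only the self-terms survive, with $A_5 = 1$ the weight-2 push term dominates, and with $A_6 = 1$ the weight-2 pop term dominates.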
The concentrations $A_5$-$A_7$ are controlled by the external processor, and we take them as given. Consider the three stack operations as determined by the external control neurons $A_5$ and $A_6$. When $A_5 = A_6 = 0$, the last two terms are always small, and the magnitude of the first two terms depends on $A_i'$ for all $C_i$: $C_i \approx 4/3$ when $A_i' \approx 1$ and $C_i \approx 0$ when $A_i' \approx 0$. Thus, the state of neuron i at t + 1 is the same as its state at time t, and $A_5 = A_6 = 0$ causes the remember operation.
When $A_5 = 1$ and $A_6 = 0$, the first and last terms are always small for all $C_i$. The second term is either 0 or 2/3, depending on $A_i'$, and the third term is either 0 or 4/3, depending on the state of the next neuron higher in the stack. Thus, the state of neuron i at time t + 1 is wholly determined by the state of the next neuron higher in the stack, and $A_5 = 1$ and $A_6 = 0$ cause the push operation.
If $A_5' = 0$ and $A_6' = 1$, the second and third terms are always small for all $C_i$. The first term is either 0 or 2/3, depending on $A_i'$, and the fourth term is either 0 or 4/3, depending on the state of the next neuron lower in the stack. Thus, the state of neuron i at time t + 1 is wholly determined by the state of the next neuron lower in the stack, and $A_5' = 0$ and $A_6' = 1$ cause the pop operation.
FIG. 5. Schematic of the neurons and connections in the stack memory. The control neurons 5 and 6 are not shown because they are assumed to be controlled by the external world. Neurons 5 and 6 affect the connections between the neurons in the stack through the mechanism of Eqs. 6-8. We show these effects by symbols: neuron 6 excites connections marked by ▷, and when neuron 6 is firing, this connection affects the state of neuron i; neuron 5 excites connections marked by ○, and when neuron 5 is firing, this connection affects the state of neuron i. Both neurons 5 and 6 inhibit connections marked by ⊓, and when either neuron 5 or neuron 6 is firing, this connection does not affect the state of neuron i.
Conclusion
We have constructed a chemical kinetic system that can perform a programmed computation. We have demonstrated only simple computations, where both the computation and the underlying chemical dynamics are easily understood. In principle, a universal Turing machine can be constructed from two infinite chemical stacks and a neural network of the general form discussed in the sections on Binary Decoder and Binary Adder.
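The two-stack construction cited from ref. 4 can be made concrete with a short sketch. `TwoStackTape` and its method names are invented here for illustration and are not from the paper; the point is that every tape operation reduces to the push, pop, and remember commands of the chemical stack above.

```python
# Hypothetical illustration: two stacks emulate one Turing tape, with the
# head sitting between them (left of the head on one stack, right on the
# other). Blank cells off either end read as 0.

class TwoStackTape:
    def __init__(self, cells):
        self.left = []                        # cells to the left of the head
        self.right = list(reversed(cells))    # head reads the top of `right`

    def read(self):
        return self.right[-1] if self.right else 0

    def write(self, bit):
        if self.right:
            self.right[-1] = bit
        else:
            self.right.append(bit)

    def move_right(self):   # pop from the right stack, push onto the left
        self.left.append(self.right.pop() if self.right else 0)

    def move_left(self):    # pop from the left stack, push onto the right
        self.right.append(self.left.pop() if self.left else 0)

tape = TwoStackTape([1, 0, 1])
tape.move_right()
tape.write(1)               # tape now holds 1, 1, 1
tape.move_left()
print(tape.read())          # 1: back at the first cell
```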
Computational systems may, however, show much more complex behavior. Computation theory encompasses the possibility of computations with dynamical behavior that shows unpredictability stronger than "deterministic chaos." This unpredictability is due to Turing's halting problem (3), which states that it is unpredictable, without direct simulation, whether any arbitrary program will halt or attain a solution in finite time. The dynamical manifestation of unpredictability is a question about the existence and domain of basins of attraction (10). Computations may be viewed as the transient relaxation to a steady state, where the steady state represents the solution. Computationally powerful systems must be able to support arbitrarily (and unpredictably) long transients. The halting problem implies that direct simulation is the only general procedure to determine whether the transients will ever decay to a stationary state; in finite time an answer is not guaranteed. This unpredictability is stronger than that of