Published in Communication in Development Proceedings of the 28th Symposium,
The Society for Developmental Biology, Boulder, CO, June 16-18, 1969
Editor-in-Chief: M. V. Edds, Jr.
Academic Press, New York and London, 1969
Organized and Edited by Anton Lang
MSU/AEC Plant Research Laboratory
Michigan State University
Alternate ref: Developmental Biology Supplement 3, 1-16 (1969)
_____________________________________________________________________________
How Does a Molecule Become a Message?
H. H. PATTEE
W. W. Hansen Laboratories of Physics, Stanford University, Stanford,
California 94305
INTRODUCTION
The theme of this symposium is "Communication in Development," and, as an outsider to the
field of developmental biology, I am going to begin by asking a question: How do we tell when
there is communication in living systems? Most workers in the field probably do not worry too
much about defining the idea of communication since so many concrete, experimental questions
about developmental control do not depend on what communication means. But I am interested
in the origin of life, and I am convinced that the problem of the origin of life cannot even be
formulated without a better understanding of how molecules can function symbolically, that is,
as records, codes, and signals. Or as I imply in my title, to understand origins, we need to know
how a molecule becomes a message.
More specifically, as a physicist, I want to know how to distinguish communication between
molecules from the normal physical interactions or forces between molecules which we believe
account for all their motions. Furthermore, I need to make this distinction at the simplest
possible level, since it does not answer the origin question to look at highly evolved organisms in
which communication processes are reasonably clear and distinct. Therefore I need to know how
messages originated.
Most biologists will say that, while this is an interesting question, there are many problems to
be solved about "how life works," before we worry about how it all began. I am not going to
suggest that most of the "how it works" problems have been solved, but at the same time I do not
see that knowing much more about "how it works" in the current style of molecular biology and
genetics is likely to lead to an answer to origin problems. Nothing I have learned from molecular
biology tells me in terms of basic physical principles why matter should ever come alive or why
it should evolve along an entirely different pathway than inanimate matter. Furthermore, at every
hierarchical level of biological organization we are presented
with very much the same kind of problem. Every evolutionary innovation amounts to a new
level of integrated control. To see how this integrated control works, that is, to see how the
physical implementation of this control is accomplished, is not the same as understanding how it
came to exist.
The incredible successes of biochemistry in unraveling the genetic code and the basic
mechanism of protein synthesis may suggest that we can proceed to the next hierarchical level
with assurance that if we pay enough attention to molecular details, then all the data will
somehow fall into place. I, for one, am not at all satisfied that this kind of answer even at the
level of replication should be promulgated as the "secret of life" or the "reduction of life to
ordinary physics and chemistry," although I have no doubt that some of these molecular
descriptions are a necessary step toward the answer. I am even less satisfied that developmental
programs will be comprehended only by taking more and more molecular data.
Let me make it quite clear at this point that I believe that all the molecules in the living cell
obey precisely the laws of normal physics and chemistry (Pattee, 1969). We are not trying to
understand molecular structure, but language structure in the most elementary sense, and this
means understanding not only "how it works," but how it originated. Nor do I agree with
Polanyi's (1968) conclusion that the constraints of language and machines are "irreducible";
although I do believe Polanyi has presented this problem, a problem too often evaded
by molecular biologists, with the maximum clarity. Whatever the case may be, it is not likely
that an acceptable resolution of either origin or reduction problems will come about only by
taking more data. I believe we need both a theory of the origin of hierarchical organization as
well as experiments or demonstrations showing that the hierarchical constraints of a "language"
can actually originate from the normal physical constraints that hold molecules together and the
laws which govern their motions.
It is essential in discussions of origins to distinguish the sequence of causal events from the
sequence of control events. For example, the replicative controls of cells harness the molecules
of the environment to produce more cells, and the developmental controls harness the cells to
produce the organism; so we can say that development is one level higher than replication in
the biological hierarchy. One might argue then that insofar as developmental messages turn off
or on selected genes in single cells according to specific interactions
with neighboring cells, they can only be a later evolutionary elaboration of the basic rules of
self-replication.
However, I believe we must be very cautious in accepting the conclusion of the evolutionary
sequence too generally, and especially in extending it to the origin of life. Single, isolated cells
clearly exhibit developmental controls in the growth of their structure, so that messages must be
generated by interactions of the growing cell with its own structure, so to speak. But since this
characteristic structure is certainly a part of the "self" which is being replicated, it becomes
unclear how to separate the developmental from the replicative controls. Furthermore, it is one
of the most general characteristics of biological evolution that life has increasingly buffered
itself from the changes and ambient conditions of its environment. This buffering is
accomplished by establishing hierarchical levels of control that grow more and more distinct in
their structure and function as evolution progresses. But we must remember that these
hierarchical levels always become blurred at their origin. Therefore, when viewing a highly evolved
hierarchical organization we must not confuse the existing control chains in the final hierarchical
system with the causal chains or evolutionary sequence of their origin.
Our own symbolic languages have many examples of hierarchical structure which do not
correspond to a causal order or the sequence in which the structures appeared (e.g., Lenneberg,
1967). The evolution of all hierarchical rules is a bootstrap process. The rules do not create a
function; they improve an existing function. The functions do not create the rules; they give the
rules meaning. For example, stoplights do not account for how people drive; they help people
drive more effectively. Nor does traffic create stoplights. Traffic is the reason why stoplights
make sense.
Therefore it is reasonable to consider the hypothesis that the first "messages" were expressed
not in the highly integrated and precise genetic code that we find today, but in a more global set
of geophysical and geochemical constraints, which we could call the primeval "ecosystem
language," from which the genetic code condensed in much the same way that our formal rules
of syntax and dictionaries condensed from the functional usage of primitive symbols in a
complex environment. If this were indeed the case, then it would be more likely that "developmental
replication" in the form of external cycles not only preceded autonomous "self-replication," but
may have accounted for the form of the genetic code itself.
SOME PROPERTIES OF LANGUAGES AND SYMBOLS
The origin of languages and messages is inseparable from the origin of arbitrary rules. It is a
general property of languages and symbol systems that their constraints are arbitrary in the sense
that the same function can be accomplished by many different physical and logical structures.
For example in the case of human language we find many symbol vehicles and alphabets, many
dictionaries and syntactical rules, and many styles of writing, all of which function adequately
for human communication. The same is true for the machine languages which man has invented
to communicate with computers; and as for the physical embodiment of these language
structures it is clear, at least in the case of the machine, that the particular physical structures
which perform the logic, memory, reading and writing functions are almost incidental and have
very little to do with the essential logical constraints of the language system itself.
The arbitrariness in primitive biological languages is less clear. We know that there are many
examples of differing organ design with essentially the same function. On the other hand, the
universality of the genetic code could be used as an argument against arbitrariness in biological
languages. This would be a weak argument at present, however, since the origin of the code is
completely unknown. Furthermore, the only experimental evidence, which is meager, indirectly
supports the "frozen accident" theory (Crick, 1968) which implies that almost any other code
would also work.
The "frozen accident" theory also illustrates what I have found to be a principle of hierarchical
structures in general, a principle that may be stated as a principle of impotence: Hierarchical
organizations obscure their own origins as they evolve. There are several ways to interpret this.
We may think of a hierarchical control as a collective constraint or rule imposed on the motion
of individual elements of the collection. For such a constraint to appear as a "rule" it must be
much simpler than the detailed motions of the elements. The better the hierarchical rule, the
more selective it is in measuring particular details of the elements it is constraining. For example,
a good stoplight system does not measure all the dynamical details of the traffic, but only the
minimum amount of information about the time and direction of cars which, in principle at least,
makes the traffic flow as safely and rapidly as practical. This essential simplification, or loss of
detail is also what obscures the origin of the rule.
This ill-defined property of simplification is common to all language and machine constraints,
and to hierarchical systems in general: the essential function of the system is "obscured" by
too many details of how it works. One well-known example is our spoken language. If while
speaking about these problems I were to begin thinking about the details of what I am saying
(the syntax of my sentences, my pronunciation, how the symbols will appear on the printed page)
I would rapidly lose the function of communication, which was the purpose of all these
complex constraints of the language in the first place. In the same way the function of a
computer, or for that matter an automobile or a watch, would be lost if to use them we always
had to analyze the mechanical details of their components. I would say that the secret of good
communication in general lies in knowing what to ignore rather than in finding out in great detail
what is going on.
Therefore as a preliminary answer to our first question of how we distinguish communication
between molecules from the normal physical interactions, I suggest that one necessary condition
for the appearance of a message is that very complex interactions lead to a very simple result.
The nonliving world, at least as viewed by the physicist, often ends up the other way, with the
simplest possible problem producing a very complicated result. The more details or degrees of
freedom that the physicist considers in his problem the more complex and intricate becomes the
solution. This complexity grows so rapidly with the number of particles that the physicist very
quickly resorts to a drastic program of relinquishing all detailed knowledge, and then talks only
about the statistics of very large aggregations of particles. It is only through some "postulate of
ignorance" of the dynamical details that these statistical descriptions can be used consistently.
Even so, the passage from the dynamical description to the statistical description in physics
poses very deep problems which are unavoidably related to the communication of information or
messages from the physical system to the observer (Brillouin, 1962). If we accept this general
idea that communication is in some way a simplification of a complex dynamical process, then
we are led by the origin problem to consider what the simplest communication system can be.
Only by conceiving of a language in the most elementary terms can we hope to distinguish what
is really essential from the "frozen accidents."
WHAT IS THE SIMPLEST MESSAGE?
The biological literature today is full of words like activator, inhibitor, repressor, derepressor,
inducer, initiator, regulator. These general words describe messengers, specific examples of
which are being discovered every day. I would simplify the messages in all these cases by saying
they mean "turn on" or "turn off." It is difficult to think of a simpler message. But taken by itself,
outside the cell or the context of some language, "turn on" is not really a message since it means
nothing unless we know from where the signal came and what is turned on as a result of its
transmission. It is also clear that the idea of sending and receiving messages involves a definite
time sequence and a collection of alternative messages. "Turn on" makes no sense unless it is
related by a temporal as well as by a spatial network. On the other hand, one must not be misled
by the apparent simplicity of this message. For when such simple messages are concatenated in
networks, logicians have shown us that the descriptive potential of such "sequential switching
machines" or "automata" is incredibly rich, and that in a formal sense they can duplicate many
of the most complex biological activities including many aspects of thought itself. Almost all
molecular biological systems operate in this discrete, on-off mode rather than by a continuous
modulation type of control. Since many essential input and output variables are continuous, such
as concentration gradients and muscle movements, this poses the serious problem, familiar to
logicians as well as computer designers, of transcribing discrete variables into continuous
variables and vice versa. The transcription process also determines to a large degree the
simplicity as well as the reliability of the function.
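To make the idea of concatenated on-off messages concrete, here is a toy sequential switching machine (my own illustration, not an example from the text): three hypothetical switches wired in a ring, each switch turning on only when its predecessor is off. Even this minimal temporal network gives the bare message "turn on" a context of where it came from and when it arrives.

```python
def step(state):
    """One tick of a three-switch ring: switch i turns on next tick
    only if its predecessor (i - 1) is currently off."""
    return tuple(not state[i - 1] for i in range(len(state)))

# Start with a single switch on and watch the network's time sequence.
state = (True, False, False)
history = [state]
for _ in range(6):
    state = step(state)
    history.append(state)
# The ring cycles through six distinct states before repeating, so the
# meaning of each "turn on" event depends on its place in the cycle.
```

The same switch and the same signal occur in every state; only their position in the temporal sequence distinguishes them, which is precisely the sense in which "turn on" by itself is not yet a message.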
If the simplest message is to turn something on, then we also need to know the physical origin
and limits of the simplest device that will accomplish this operation. Such a device is commonly
called a switch, and we shall use this term, bearing in mind that it is defined by its function, not
by our design of artificial switches that we use to turn on lights or direct trains. The switch is a
good example of an element with an exceedingly simple function (it is hard to imagine a
simpler one) but with a detailed behavior, expressed in terms of physical equations of
motion, which is exceedingly complex. Switches in certain forms, such as ratchets and Maxwell
demons, have caused physicists a great deal of difficulty. In a way, this is contrary to our
intuition since even a small child can look at a switch
or a ratchet and tell us "how it works." With considerably more effort, using more sophisticated
physical and chemical techniques, it may soon be possible to look at allosteric enzyme switches
and explain "how they work."
We must bear in mind, however, that in both cases there are always deeper levels of answers.
For example, the physical description "how it works" is possible only if we ignore certain details
of the dynamical motion. This is because the switching event which produces a single choice
from at least two alternatives is not symmetrical in time and must therefore involve dissipation
of energy, that is, loss of detailed information about the motions of the particles in the switch. As
a consequence of this dissipation or loss of detail it is physically impossible for a switch to
operate with absolute precision. In other words, no matter how well it is designed or how well it
is built, all devices operating as switches have a finite probability of being "off" when they
should be "on," and vice versa. This is not to say that some switches are not better than others. In
fact the enzyme switches of the cell have such high speed and reliability compared with the
artificial switches made by man that it is doubtful if their behavior can be explained
quantitatively in terms of classical models. Since no one has yet explained a switch in terms of
quantum mechanics, the speed and reliability of enzymes remains a serious problem for the
physicist (Pattee, 1968). But even though we cannot yet explain molecular switches in terms of
fundamental physics, we can proceed here by simply assuming their existence and consider
under what conditions a network of switches might be expected to function in the context of a
language.
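The impossibility of an absolutely precise switch can be put in rough quantitative terms with a standard Boltzmann-factor estimate (a textbook argument, not one made in the paper): the probability that a thermal fluctuation carries a bistable device over the energy barrier separating "off" from "on" decays exponentially with the barrier height, so finite dissipation always leaves a finite error rate.

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def flip_probability(barrier_joules, temperature_kelvin):
    """Boltzmann estimate of the chance that thermal noise carries a
    bistable switch over the barrier between its two states."""
    return math.exp(-barrier_joules / (K_B * temperature_kelvin))

T = 310.0  # roughly physiological temperature, K
shallow = flip_probability(5 * K_B * T, T)   # ~7e-3: an unreliable switch
deep = flip_probability(20 * K_B * T, T)     # ~2e-9: greater reliability,
                                             # bought with more dissipation
```

The barrier heights (5 kT and 20 kT) are illustrative choices; the point is only the trade-off the text describes between dissipated energy and switching reliability.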
WHAT IS THE SIMPLEST NATURAL LANGUAGE?
We come now to the crucial question. An isolated switch in nature, even if we could explain
its origin, would have no function in the sense that we commonly use the word. We see here
merely the simplest possible instance of what is perhaps the most fundamental problem in
biology: the question of how large a system one must consider before biological function has
meaning. Classical biology generally considers the cell to be the minimum unit of life. But if we
consider life as distinguished from nonliving matter by its evolutionary behavior in the course of
time, then it is clear that the isolated cell is too small a system, since it is only through the
communication of cells with the outside environment that natural selection can take place.
The same may be said of developmental systems in which collections of
cells create messages that control the replication and expression of individual cells.
The problem of the origin of life raises this same question. How large a system must we
consider in order to give meaning to the idea of life? Most people who study the origin of life
have made the assumption that the hierarchical structure of highly evolved life tells us by its
sequence of control which molecules came first on the primeval earth. Thus, it is generally
assumed that some form of nonenzymatic, self-replicating nucleic acid first appeared in the
sterile ocean, and that by random search some kind of meaningful message was eventually
spelled out in the sequence of bases, though it is never clear from these descriptions how this
lonely "message" would be read. Alternatively, there are some who believe the first important
molecules were the enzymes or the switches which controlled metabolic processes in primitive
cell-like units. I find it more reasonable to begin, not with switching mechanisms or meaningless
messages, but rather with a primitive communication network which could be called the
primeval ecosystem. Such a system might consist of primitive geochemical matter cycles in
which matter is catalytically shunted through cell-like structures which occur spontaneously
without initial genetic instructions or metabolic control. In my picture, it is the constraints of the
primeval ecosystem which, in effect, generate the language in which the first specific messages
can make evolutionary sense. The course of evolution by natural selection will now produce
better, more precise, messages as measured in this ecological language; and in this case signals
from the outside world would have preceded the autonomous genetic controls which now
originate inside the cell.
But these speculations are not my main point. What I want to say is that a molecule does not
become a message because of any particular shape or structure or behavior of the molecule. A
molecule becomes a message only in the context of a larger system of physical constraints which
I have called a "language" in analogy to our normal usage of the concept of message. The
trouble with this analogy is that our human languages are far too complex and depend too
strongly on the structure and evolution of the brain and the whole human organism to clarify the
problem. We are explaining the most simple language in terms of the most complex. Anyway,
since the origin of language is so mysterious that linguists have practically
given up on the problem, we cannot expect any help even from this questionable analogy. What
approaches, then, can we find to clarify what we mean by the simplest message or the simplest
language?
THE SIMPLEST ARTIFICIAL LANGUAGES
The most valuable and stimulating ideas I have found for studying the origin of language
constraints have come from the logicians and mathematicians, who also try to find the simplest
possible formal languages which nevertheless can generate an infinitely rich body of theorems.
A practical aspect of this problem is to build a computer with the smallest number of switches
which can give you answers to the maximum number of problems. This subject is often called
"automata theory" or "computability theory," but it has its roots in symbolic logic, which is itself
a mathematical language to study all mathematical languages. This is why it is of such interest to
mathematicians: all types of mathematics can be developed using this very general language.
The basic processes of replication, development, cognitive activity, and even evolution, offer an
intriguing challenge to the automata theorist as fundamental conceptual and logical problems,
and also to the computer scientist who now has the capability of "experimental" study of these
simulated biological events. There is often a considerable communication gap between the
experimental biologist and the mathematician interested in biological functions, and this is most
unfortunate, for it is unlikely that any other type of problem requires such a comprehensive
approach to achieve solutions.
But let us return to our particular problem of the origin of language structure and messages.
What can we learn from studying artificial languages? As I see it, the basic difficulty with
computer simulation is that whenever we try to invent a model of an elementary or essential
biological function, the program of our model turns out to be unexpectedly complex if it actually
accomplishes the defined function in a realistic way. The most instructive examples of this that I
know are the models of self-replication. I shall not discuss any of these in detail, but only give
the "results." It is possible to imagine many primitive types of mechanical, chemical, and logical
processes which perform some kind of replication (e.g., Penrose, 1958; Pattee, 1961; Moore,
1962). It is also quite obvious that most of these systems have no conceivable evolutionary
potential, nor can one easily add on any developmental elaborations without redesigning the
whole system or causing its failure.
The first profound model of a self-replicating system that I know was that of the
mathematician John von Neumann (1956), who explicitly required of his model that it be
capable of evolving a more elaborate model without altering its basic rules. Von Neumann was
influenced strongly by the work of Turing (1937), who carried the concept of computation to the
simplest extreme in terms of basic operations with symbols, and showed that with these basic
rules one can construct a "universal" machine which could compute any function that any other
machine could compute. Von Neumann also made use of the McCulloch and Pitts (1943) models
of neuronal switching networks in his thinking about replication, but he extended both these
models to include a "construction" process, which was not physically realistic, but which
allowed him to describe a "universal self-replicating automaton" which had the potential for
evolution and to which developmental programs could be added without changing the basic
organization of the automaton.
But what was the significance of such a model? What impressed von Neumann was the final
complexity of what started out as the "simplest" self-replicating machine that could evolve. He
concluded that there must be a "threshold of complexity" necessary to evolve even greater
complexity, but below which order deteriorates. Furthermore, this threshold appeared to be so
complex that its spontaneous origin was inconceivable.
Since von Neumann's work on self-replication, there have been further serious logical attempts
to simplify or restate the problem (e.g., Arbib, 1967a; Thatcher, 1963). Automata theory has also
been used to describe developmental processes (e.g., Apter and Wolpert, 1965; Arbib, 1967b).
But the basic results are the same. If the program does anything which could be called interesting
from a biological point of view, or if it can even be expected to actually work as a program on
any real computer, then such programs turn out to be unexpectedly complex with no hint as to
how they could have originated spontaneously. For example, one of the simplest models of
morphogenesis is the French Flag problem, in which it is required that a sheet of self-replicating
cells develop into the pattern of the French Flag. This can be done in several ways (e.g., Wolpert,
1968), but the program is not nearly as simple as one might expect from the simplicity of the
final pattern it produces.
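For readers who have not met the French Flag problem, here is a deliberately naive sketch of the target behavior (my illustration, not Wolpert's 1968 construction): each cell reads a positional value from a morphogen gradient and compares it with two thresholds. Everything Pattee calls unexpectedly complex, namely generating that positional information through local signaling in a sheet of self-replicating cells, is exactly what this toy omits.

```python
def french_flag(n_cells, low=1/3, high=2/3):
    """Assign each cell a color from its normalized position: two
    thresholds carve the tissue into blue, white, and red bands."""
    flag = []
    for i in range(n_cells):
        position = i / n_cells  # positional value a real cell would
                                # have to measure from a gradient
        if position < low:
            flag.append("blue")
        elif position < high:
            flag.append("white")
        else:
            flag.append("red")
    return flag
```

Calling `french_flag(9)` yields three cells of each color; the simplicity of the final pattern says nothing about the complexity of a program that must build it without a global coordinate system.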
It is the common feeling among automata theorists, as well as computer programmers, that if
one has never produced a working,
developmental, replicative, or evolutionary program, then one is in for a discouraging surprise.
To help popularize this fact, Michie and Longuet-Higgins (1966) published a short paper called
"A Party Game Model of Biological Replication" which will give some idea of the logic to the
reader who has had no computer experience. But as computer scientists emphasize, there is no
substitute for writing a program and making it work.
Why are all biological functions so difficult to model? Why is it so difficult to imitate
something which looks so simple? Indeed, functional simplicity is not easy to achieve, and very
often the more stringent the requirements for simplicity of function, the more difficult will be the
integration of the dynamical details necessary to carry out the function. While it is relatively
easy to imagine ad hoc "thought machines" that will perform well-defined functions, the
structure of real machines is always evolved through the challenges of the environment to what
are initially very poorly defined functions. These challenges usually have more to do with how
the machine fails than how it works. In other words, it is the reliability, stability, or persistence
of the function, rather than the abstract concept of the pure function itself, which is the source of
structure. We can see this by studying the evolution of any of our manmade machines. Of course
in this case man himself defines the general function, but how the structure of the machine
finally turns out is not determined by man alone. The history of timepieces is a good example. It
is relatively easy to see superficially with each escapement or gear train "how it works," but only
by understanding the requirements of precision and stability for "survival," as well as the
environmental challenges to these requirements in the form of temperature variations, external
accelerations, corrosion, and wear, can we begin to understand the particular designs of
escapements, gear teeth, and power trains which have survived.
Our understanding of the genetic code and of developmental programs is still at the "how does
it work" level, and although we may be able to trace the evolutionary changes, even with
molecular detail, we have almost no feeling for which details are crucial and which are
incidental to the integrated structure of the organism. The analytical style of molecular biology,
which has brought us to this level, first recognizes a highly evolved function and then proceeds
to look at the structures in more and more detail until all the parts can be isolated in the test tube,
and perhaps reassembled to function
again. But if we wish to explain origins or evolutionary innovations, this style may be backward.
If we believe that selective catalysts or "switching molecules" do not make messages by
themselves, then we should study not them by themselves, but in switching networks as they
might have occurred in a primitive "sterile" ecosystem. Nor should we try, if we are looking for
origins, to design switching networks to perform well-defined functions such as universal
self-replication or the development of a French Flag morphology, since there is no reason to expect
such functions to exist in the beginning. A more realistic approach would be to ask what
behavior of more or less random networks of switching catalysts would appear because of its
persistence or stability in the face of surrounding disorder. In other words, we should look not
for the elements that accomplish well-defined functions, but for the functions that appear
spontaneously from collections of well-defined elements. How can this be done?
THE SIMULATION OF ORIGINS
The experimental study of the origin of function or any evolutionary innovation is
exceptionally difficult because, to observe such innovation naturally, we must let nature take its
course. For the crucial innovations we are discussing, like the origin of molecular messages,
language constraints, and codes, nature has already taken its course or is going about it too
slowly for us to observe. So again we are left with computer simulation of nature, hoping that
the underlying dynamics of the origin of hierarchical organization is so fundamental that it can
be observed even in a properly designed artificial environment.
The essential condition for the study of "natural" origins in artificial machines is that we
cannot overdefine the function that we hope will originate spontaneously. In other words, we
must let the computer take its own course to some degree. A good example of this strategy has
been reported by Kauffman (1969). In this example he constructed a "random network" of
"random switches" and then observed the behavior. The switches were random in the sense that
one of the 2^(2^k) Boolean functions of the k inputs to each switch was chosen at random. Once
chosen, however, both the switch function and the network structure connecting inputs and
outputs of the switches were fixed.
The significant results were that for low connectivity, that is, two or three inputs per switch,
the network produced cycles of activity that were both short and stable: short compared to
the enormous number of states, and stable
in the sense that the network returns to the same cycle even if a switch in that cycle is
momentarily off when it should be on, or vice versa. Kauffman pictured his network as a very
simple model of the genetically controlled enzymatic processes in the single cell; I believe,
however, this type of model would more appropriately represent a primeval ecosystem in which
initially random sequences in copolymer chains begin to act as selective catalysts for further
monomer condensations. If we allow for the creation of new switching catalysts, we would
expect the condensation of catalytic sequences produced by the switching cycles to act very
much like a primitive set of language constraints. The copolymer sequences would then
represent a "record" of the cycle structure.
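For readers who want the flavor of such an experiment, the following is a minimal sketch of a Kauffman-style random switching net (an illustration only, not Kauffman's actual program; the network size, connectivity, and random seeds are arbitrary choices). Each switch reads k fixed inputs through a randomly chosen Boolean function, and iterating the net until a state recurs exposes the attractor cycle.

```python
import random

def random_boolean_net(n, k, seed=0):
    """Kauffman-style net: n switches, each reading k fixed inputs through
    one of the 2**(2**k) Boolean functions, chosen at random and then frozen."""
    rng = random.Random(seed)
    inputs = [rng.sample(range(n), k) for _ in range(n)]
    tables = [[rng.randint(0, 1) for _ in range(2 ** k)] for _ in range(n)]

    def step(state):
        # Each switch looks up its next value in its fixed truth table.
        return tuple(
            tables[i][sum(state[j] << b for b, j in enumerate(inputs[i]))]
            for i in range(n)
        )
    return step

def cycle_length(step, state):
    """Iterate until a state repeats; the gap between visits is the cycle length."""
    seen = {}
    while state not in seen:
        seen[state] = len(seen)
        state = step(state)
    return len(seen) - seen[state]

step = random_boolean_net(n=20, k=2, seed=1)
start = tuple(random.Random(2).randint(0, 1) for _ in range(20))
print(cycle_length(step, start))  # typically tiny compared to the 2**20 possible states
```

For low connectivity (k = 2 or 3) such runs reproduce the short, stable cycles described above; raising k quickly destroys that order.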
In our own group, Conrad (1969) has taken a more realistic view of the physical constraints
that are likely to exist on the primitive sterile earth, as well as the competitive interactions and
requirements for growth that must exist between replicating organisms in a finite, closed matter
system. These competitive growth constraints have been programmed into an evolutionary
model of a multi-niche ecosystem with organisms represented by genetic strings subject to
random mutation and corresponding phenotypic strings which interact with the other organisms.
Although this program includes much more structure than the Kauffman program, neither the
species nor the environmental niches are initially constrained by the program, but they are left to
find their own type of stability and persistence. The population dynamics is determined, not by
solving differential equations that can only represent hypothetical laws, but by actually counting
the individuals in the course of evolution of the program. Such a program to a large extent finds
its own structure in its most stable dynamical configuration, which we can observe in the course
of its evolution.
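To illustrate what "actually counting the individuals" can mean in practice, here is a toy sketch in the same spirit (an invented example, not Conrad's program; the niche targets, string length, mutation rate, and population size are all arbitrary): organisms are genetic strings, two hypothetical niches reward different patterns, and the dynamics are obtained by census rather than by rate equations.

```python
import random
from collections import Counter

rng = random.Random(0)

# Two hypothetical niche "targets" (complementary patterns, chosen arbitrarily).
NICHES = ["1111100000", "0000011111"]

def fitness(genome):
    # An organism is scored by its resemblance to whichever niche fits it best.
    return max(sum(g == t for g, t in zip(genome, niche)) for niche in NICHES)

def mutate(genome, rate=0.05):
    return "".join(rng.choice("01") if rng.random() < rate else g for g in genome)

def best_niche(genome):
    return max(NICHES, key=lambda niche: sum(g == t for g, t in zip(genome, niche)))

# Random founding population of genetic strings.
pop = ["".join(rng.choice("01") for _ in range(10)) for _ in range(50)]

for generation in range(30):
    weights = [fitness(g) for g in pop]
    # Reproduction weighted by fitness, with random mutation of offspring.
    pop = [mutate(rng.choices(pop, weights=weights)[0]) for _ in range(50)]

# The census: count how many individuals now fit each niche best,
# rather than integrating a differential equation for the populations.
census = Counter(best_niche(g) for g in pop)
print(census)
```

The point of the sketch is only methodological: nothing in the program prescribes which niche "wins" or what the species boundaries are; those appear, if at all, in the counts.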
These computer programs illustrate one approach to the study of the origin of the language
constraints we have been talking about. They are empirical studies of the natural behavior of
switching networks which do not have specific functions designed into them. This is the way
biological constraints must have evolved. But even so, you will ask whether these computer
simulations are not too far removed from the biological structures, the cells, enzymes, and
hormones that are the real objects of our studies.
This is true: the computer is quite different from a cell. But this disadvantage for most studies
of "how it works" is also the strength of such simulation for origin
studies. The crucial point I want to make is that the collective behavior we are studying in these
models is not dependent on exactly how the individual switches work or what they are made of.
We are not studying how the switches work, but how the network behaves. Only by this method
can we hope to find developmental and evolutionary principles that are common to all types of
hierarchical organizations. Only by studies of this type can we hope to separate the essential
rules from the frozen accidents in living organisms.
THE ROLE OF THEORY IN BIOLOGY
There has always been a great difference in style between the physical and biological sciences,
a difference which is reflected most clearly in their different attitudes toward theory. Stated
bluntly, physics is a collection of basic theories, whereas biology is a collection of basic facts.
Of course this is not only a difference in style but also a difference in subject matter. The
significant facts of life are indeed more numerous than the facts of inanimate matter. But
physicists still hope that they can understand the nature of life without having to learn all the
facts.
Many of us who are not directly engaged in studying developmental biology or in
experimenting with particular systems of communication in cells look at the proliferation of
experimental data in developmental biology, neurobiology, and ecology and wonder how all this
will end. Perhaps some of you who try to keep up with the literature wonder the same thing.
Living systems are of course much more complicated than formal languages, or present
computer programs, since living systems actually construct new molecules on the basis of
genetic instruction. But even with a few simple rules and small memories, we know it is possible
to write "developmental" programs that lead to incredibly rich and formally unpredictable
behavior (e.g., Post, 1943). Therefore in the biological sciences it is not altogether reassuring to
find that all our data handling facilities, our journals, our symposia, our mail, and even our
largest, quickest computers are overburdened with information. The physicist Edward Condon
once suggested that the whole scientific endeavor will come to an end because this "data
collection" does not converge. Certainly if our knowledge is to be effective in our civilization,
we must see to it that our theoretical conceptions are based on the elements of simplicity that we
find in all our other integrated biological functions; otherwise our knowledge will not survive.
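The reference to Post (1943) can be made concrete with a tag system, one of the simplest rule schemes known to generate formally unpredictable behavior (a generic sketch; the particular production rules below are a standard small example, not taken from Post's paper):

```python
def run_tag_system(word, rules, deletion=2, max_steps=50):
    """Post-style tag system: read the first symbol, append its production
    to the end of the word, then delete the first `deletion` symbols."""
    history = [word]
    for _ in range(max_steps):
        if len(word) < deletion:
            break  # the word has become too short to continue
        word = word[deletion:] + rules[word[0]]
        history.append(word)
    return history

# Three short rules suffice for surprisingly complex (Collatz-like) behavior.
trace = run_tag_system("aaa", {"a": "bc", "b": "a", "c": "aaa"})
print(trace[:5])  # ['aaa', 'abc', 'cbc', 'caaa', 'aaaaa']
```

A few symbols of "memory" and three rewriting rules already yield trajectories whose eventual fate cannot in general be predicted without running them, which is exactly the sobering point about developmental programs.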
What we may all hope is that the language constraints at all levels of biological organization
are similar to the rules of our formal languages, which are finite and relatively simple even
though they are sufficient to generate an infinite number of sentences and meanings. We must
remember, at the same time, that the potential variety of programs is indeed infinite, and that we
must not consume our experimental talents on this endless variety without careful selection
based on hypotheses which must be tested. Of course we shall need more experimental data on
specific messenger molecules and how they exercise their developmental controls. But to
understand how the molecules became messages, and how they are designed and integrated to perform
with such incredible effectiveness, we must also account for the reliability of the controlling
molecules as well as the challenges and constraints of the ecosystem which controlled their
evolution. This in turn will require a much deeper appreciation of the physics of switches and the
logic of networks.
ACKNOWLEDGMENT
This work was supported by the National Science Foundation, Grant GB 6932, of the
Biological Oceanography Program in the Division of Biological and Medical Sciences.
REFERENCES
APTER, M. J., and WOLPERT, L. (1965). Cybernetics and development. J. Theoret. Biol. 8,
244.
ARBIB, M. A. (1967a). Some comments on self-reproducing automata. In Systems and
Computer Science, J. F. Hart and S. Takasu, eds., p. 42. Univ. of Toronto Press, Toronto,
Canada.
ARBIB, M. A. (1967b). Automata theory and development: Part 1. J. Theoret. Biol. 14, 131.
BRILLOUIN, L. (1962). Science and Information Theory, 2nd ed., Chapters 20 and 21.
Academic Press, New York.
CONRAD, M. E. (1969). Computer experiments on the evolution of co-adaptation in a primitive
ecosystem. Dissertation, Stanford University.
CRICK, F. H. C. (1968). The origin of the genetic code. J. Mol. Biol. 38, 367.
KAUFFMAN, S. A. (1969). Metabolic stability and epigenesis in randomly constructed genetic
nets. J. Theoret. Biol. 22, 437.
LENNEBERG, E. H. (1967). Biological Foundations of Language. Wiley, New York.
MCCULLOCH, W. S., and PITTS, W. (1943). A logical calculus of the ideas immanent in
nervous activity. Bull. Math. Biophys. 5, 115.
MICHIE, D., and LONGUET-HIGGINS, C. (1966). A party game model of biological
replication. Nature 212, 10.
MOORE, E. F. (1962). Machine models of self-reproduction. Proc. Symp. Appl. Math., Vol. 14,
Mathematical Problems in the Biological Sciences, American Math. Soc., Providence, R. I.,
p. 17.
PATTEE, H. H. (1961). On the origin of macromolecular sequences. Biophys. J. 1, 683.
PATTEE, H. H. (1968). The physical basis of coding and reliability in biological evolution. In
Towards a Theoretical Biology, Vol. 1, C. H. Waddington, ed., p. 67. Edinburgh Univ. Press,
Edinburgh, Scotland.
PATTEE, H. H. (1969). Physical problems of heredity and evolution. In Towards a Theoretical
Biology, Vol. 2, C. H. Waddington, ed., p. 268. Edinburgh Univ. Press, Edinburgh, Scotland.
PENROSE, L. S. (1958). The mechanics of self-reproduction. Ann. Human Genet. 23, part I, 59.
POLANYI, M. (1968). Life's irreducible structure. Science 160, 1308.
POST, E. L. (1943). Formal reductions of the general combinatory decision problem. Am. J.
Math. 65, 197.
THATCHER, J. W. (1963). The construction of a self-describing Turing machine. Proc. Symp.
Math. Theory of Automata; Vol. 12 of the Microwave Research Institute Symposia Series,
Brooklyn Polytechnic Press, p. 165.
TURING, A. M. (1937). On computable numbers, with an application to the
Entscheidungsproblem. Proc. London Math. Soc., Ser. 2, 42, 230.
VON NEUMANN, J. (1956). The general and logical theory of automata. Reprinted in The
World of Mathematics (J. R. Newman, ed.), Vol. 4, p. 2070. Simon & Schuster, New York.
WOLPERT, L. (1968). The French Flag problem: A contribution to the discussion on pattern
development and regulation. In Towards a Theoretical Biology, Vol. 1, C. H. Waddington, ed.,
p. 125. Edinburgh Univ. Press.