Self-organization in computation and chemistry: Return to AlChemy
Cite as: Chaos 34, 093142 (2024); doi: 10.1063/5.0207358
Submitted: 7 March 2024 · Accepted: 19 August 2024 · Published Online: 30 September 2024
Cole Mathis,1,2,a) Devansh Patel,1,3 Westley Weimer,4 and Stephanie Forrest1,2,3,5
AFFILIATIONS
1Biodesign Institute, Arizona State University, Tempe, Arizona 85281, USA
2School of Complex Adaptive Systems, Arizona State University, Tempe, Arizona 85281, USA
3School of Computing and Augmented Intelligence, Arizona State University, Tempe, Arizona 85281, USA
4Department of Electrical Engineering and Computer Science, University of Michigan, Ann Arbor, Michigan 48109, USA
5Santa Fe Institute, Santa Fe, New Mexico 87501, USA
Note: This paper is part of the Focus Issue on Topics in Nonlinear Science: Dedicated to David K. Campbell’s 80th Birthday.
a) Author to whom correspondence should be addressed: cole.mathis@asu.edu
ABSTRACT
How do complex adaptive systems, such as life, emerge from simple constituent parts? In the 1990s, Walter Fontana and Leo Buss proposed a novel modeling approach to this question, based on a formal model of computation known as the λ calculus. The model demonstrated how simple rules, embedded in a combinatorially large space of possibilities, could yield complex, dynamically stable organizations, reminiscent of biochemical reaction networks. Here, we revisit this classic model, called AlChemy, which has been understudied over the past 30 years. We reproduce the original results and study the robustness of those results using the greater computing resources available today. Our analysis reveals several unanticipated features of the system, demonstrating a surprising mix of dynamical robustness and fragility. Specifically, we find that complex, stable organizations emerge more frequently than previously expected, that these organizations are robust against collapse into trivial fixed points, but that these stable organizations cannot be easily combined into higher order entities. We also study the role played by the random generators used in the model, characterizing the initial distribution of objects produced by two random expression generators, and their consequences on the results. Finally, we provide a constructive proof that shows how an extension of the model, based on the typed λ calculus, could simulate transitions between arbitrary states in any possible chemical reaction network, thus indicating a concrete connection between AlChemy and chemical reaction networks. We conclude with a discussion of possible applications of AlChemy to self-organization in modern programming languages and quantitative approaches to the origin of life.
Published under an exclusive license by AIP Publishing. https://doi.org/10.1063/5.0207358
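At the heart of AlChemy is a simple collision rule: two λ-expressions drawn from a population are combined by function application, and the result is reduced to normal form; the normal form re-enters the population as the reaction product. The following minimal Python sketch illustrates this apply-and-normalize rule on a toy term encoding. It is an illustration of the mechanism rather than the authors' implementation, and the step cap standing in for AlChemy's handling of non-terminating reductions is an assumed parameter.

import itertools

fresh = (f"v{i}" for i in itertools.count())      # supply of fresh variable names

def subst(term, name, value):
    """Capture-avoiding substitution term[name := value]."""
    kind = term[0]
    if kind == 'var':
        return value if term[1] == name else term
    if kind == 'app':
        return ('app', subst(term[1], name, value), subst(term[2], name, value))
    _, bound, body = term                          # kind == 'lam'
    if bound == name:
        return term                                # name is shadowed; stop
    new = next(fresh)                              # rename bound var to avoid capture
    body = subst(body, bound, ('var', new))
    return ('lam', new, subst(body, name, value))

def reduce_step(term):
    """One normal-order β-reduction step; returns (term, changed)."""
    kind = term[0]
    if kind == 'app':
        f, a = term[1], term[2]
        if f[0] == 'lam':                          # β-redex: (λx. body) a
            return subst(f[2], f[1], a), True
        f2, changed = reduce_step(f)
        if changed:
            return ('app', f2, a), True
        a2, changed = reduce_step(a)
        return ('app', f2, a2), changed
    if kind == 'lam':
        body2, changed = reduce_step(term[2])
        return ('lam', term[1], body2), changed
    return term, False                             # a bare variable is normal

def collide(a, b, max_steps=1000):
    """AlChemy-style reaction: apply a to b and normalize, or give up."""
    term = ('app', a, b)
    for _ in range(max_steps):
        term, changed = reduce_step(term)
        if not changed:
            return term                            # normal form = reaction product
    return None                                    # non-terminating: no product

I = ('lam', 'x', ('var', 'x'))                     # identity, λx.x
K = ('lam', 'x', ('lam', 'y', ('var', 'x')))       # λx.λy.x
print(collide(I, K))                               # I applied to K yields K itself

Iterating collide over randomly chosen pairs in a fixed-size population, with each product replacing a randomly removed expression, is, in outline, the dynamic whose long-run organizations the paper studies.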
How life emerged from simple molecular constituents is a long-standing question that has not been fully resolved. This paper resurrects a computational model called AlChemy that was first proposed over 30 years ago to study how a set of simple computational rules could produce complex, dynamically stable organizations reminiscent of biochemical reaction networks. Using the vastly increased computing resources available today, this paper reports on new computational experiments and analysis, which show that in AlChemy, stable and complex organizations emerge more frequently than previously expected and that these organizations are often robust against collapse. The paper also probes a key component of AlChemy, its method for generating random initial conditions, and finds that it is crucial to the model’s success. Finally, the paper gives a constructive proof showing that a slightly modified version of AlChemy can simulate state transitions in any chemical reaction network (CRN), thereby establishing a closer connection between the model and the chemical reactions believed to have produced life.
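Because the random initial conditions turn out to be crucial, it helps to see concretely what an expression generator involves. The sketch below, reusing the term encoding of the previous sketch, grows a random closed λ-term by depth-bounded recursion; the branching probabilities and depth bound are illustrative assumptions, not the parameters of the two generators analyzed in the paper.

import random

def random_term(depth=0, bound=(), max_depth=6):
    """Generate a random λ-term that is closed by construction."""
    if depth >= max_depth:                         # force a leaf at the depth bound
        return ('var', random.choice(bound)) if bound \
               else ('lam', 'x0', ('var', 'x0'))
    r = random.random()
    if r < 0.4 or not bound:                       # abstraction binds a fresh name
        name = f"x{len(bound)}"
        return ('lam', name, random_term(depth + 1, bound + (name,), max_depth))
    if r < 0.7:                                    # application of two random subterms
        return ('app', random_term(depth + 1, bound, max_depth),
                       random_term(depth + 1, bound, max_depth))
    return ('var', random.choice(bound))           # variable already in scope

random.seed(1)
flask = [random_term() for _ in range(1000)]       # a random initial population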
I. INTRODUCTION
The origin(s) of life remains an unresolved mystery in science, that is, how unconstrained reactive compounds were selected to form the organized biochemical networks that form the basis of Darwinian lineages. In the early 1990s, Fontana and Buss proposed a novel approach to this problem based on a formal model of computation known as the λ calculus.1–3 In a departure from the prevailing