
Where Do New Ideas Come From? How Do They Emerge? Epistemology as Computation (Information Processing)



This essay presents arguments for the claim that in the best of all possible worlds (Leibniz) there are sources of unpredictability and creativity for us humans, even given a pancomputational stance. A suggested answer to Chaitin’s questions, “Where do new mathematical and biological ideas come from? How do they emerge?”, is that they come from the world and emerge from basic physical (computational) laws. For humans, as a tiny subset of the universe, a part of the new ideas comes as the result of the re-configuration and reshaping of already existing elements, and another part comes from the outside as a consequence of the openness and interactivity of the system. For the universe at large, it is randomness that is the source of unpredictability on the fundamental level. In order to completely predict the Universe-computer we would need the Universe-computer itself to compute its next state; as Chaitin has demonstrated, there are incompressible truths, which means truths that cannot be computed by any other computer but the universe itself.
Gordana Dodig-Crnkovic
Mälardalen University, Västerås, Sweden
Randomness & Complexity, from Leibniz to Chaitin
The previous century had logical positivism and all that emphasis on the
philosophy of language, and completely shunned speculative
metaphysics, but a number of us think that it is time to start again. There
is an emerging digital philosophy and digital physics, a new metaphysics
associated with names like Edward Fredkin and Stephen Wolfram and a
handful of like-minded individuals, among whom I include myself.
(Chaitin, Epistemology as Information Theory: From Leibniz to Ω, 2006)
It was in June 2005 that I first met Greg Chaitin at the E-CAP 2005
conference in Sweden, where he delivered the Alan Turing Lecture, and
presented his book Meta Math! It was a remarkable lecture and a
remarkable book that has left me wondering, reading and thinking since
then1. The overwhelming effect was a feeling of liberation: we were
again allowed to think big, think système du monde, and the one Chaitin
suggested was constructed as digital philosophy – something I as a
computer scientist and physicist found extremely appealing. God is a
computer programmer, Chaitin claims, and to understand the world
amounts to being able to program it!
Under these premises the theory of information, specifically Chaitin’s
algorithmic theory of information becomes a very elegant and natural
way to reconstruct epistemology, as demonstrated in Chaitin (2006). The
epistemological model that, according to Chaitin, is central to algorithmic
information theory is that a scientific or mathematical theory is a
computer program for calculating the facts, and the smaller the program,
the better the theory. In other words, understanding is compression of
information.2
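Chaitin’s slogan can be made tangible with a toy experiment. The sketch below uses zlib compression as a crude, computable stand-in for algorithmic information content (true program-size complexity is uncomputable): data generated by a short “theory” compresses well, while random data does not. The data sets themselves are invented for illustration.

```python
import os
import zlib

# "Lawful" data: facts generated by a short theory (a small program).
lawful = bytes(n * n % 256 for n in range(10_000))

# "Random" data: no theory shorter than the data itself.
random_like = os.urandom(10_000)

compressed_lawful = zlib.compress(lawful, 9)
compressed_random = zlib.compress(random_like, 9)

# The lawful facts shrink to a fraction of their size (a good theory),
# while the random bytes barely shrink at all (no theory exists).
print(f"lawful:  {len(compressed_lawful) / len(lawful):.0%} of original size")
print(f"random:  {len(compressed_random) / len(random_like):.0%} of original size")
```

The smaller the compressed form relative to the data, the better the “theory” in Chaitin’s sense; for the random bytes, the shortest description is essentially the data itself.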
In exploring epistemology as information theory, Chaitin addresses
the question of the nature of mathematics as our most reliable
knowledge, illustrated by Hilbert’s program for its formalization and
1 I had the privilege to discuss the Turing Lecture article with Chaitin, while editing the
forthcoming book Dodig-Crnkovic G. and Stuart S., eds. (2007), Computation,
Information, Cognition – The Nexus and The Liminal, Cambridge Scholars Publishing.
The present paper is meant as a continuation of that dialog.
2 For a detailed implementation of the idea of information compression, see Wolff
automatization. Based on algorithmic information theory Chaitin comes
to this enlightening conclusion:
In other words, the normal, Hilbertian view of math is that all of
mathematical truth, an infinite number of truths, can be compressed into
a finite number of axioms. But there are an infinity of mathematical
truths that cannot be compressed at all, not one bit!
This is a very important result, which sheds a new light on
epistemology. It sheds a new light on the meaning of Gödel’s and
Turing’s negative responses to Hilbert’s program. What is scientific truth
today after all3, if not even mathematics is able to prove every true
statement within its own domain? Chaitin offers a new and encouraging
suggestion – mathematics may not be as monolithic and a priori as
Hilbert believed:
But we have seen that the world of mathematical ideas has infinite
complexity; it cannot be explained with any theory having a finite
number of bits, which from a sufficiently abstract point of view seems
much more like biology, the domain of the complex, than like physics,
where simple equations reign supreme.
The consequence is that the ambition of having one grand unified
theory of mathematics must be abandoned. The domain of mathematics
is more like an archipelago consisting of islands of truths in an ocean of
incomprehensible and uncompressible information. Chaitin, in an
interview in September 2003 says:
You see, you have all of mathematical truth, this ocean of
mathematical truth. And this ocean has islands. An island here, algebraic
truths. An island there, arithmetic truths. An island here, the calculus.
And these are different fields of mathematics where all the ideas are
interconnected in ways that mathematicians love; they fall into nice,
3 Tasic, in his Mathematics and the Roots of Postmodern Thought, gives an eloquent
answer to this question in the context of human knowledge in general.
interconnected patterns. But what I've discovered is all this sea around
the islands.
So, it seems that apart from Leibniz’s bewildering question quoted by
Chaitin (2006):
“Why is there something rather than nothing? For nothing is simpler and
easier than something.” (Leibniz, Section 7 of Principles of Nature and Grace)
there is the following, equally puzzling one:
Why is that something which exists made of parts rather than in one
single piece?
For there are two significant aspects of the world which we observe:
the world exists, and it appears to us as divisible, made of parts. The
parts, however, are not totally unrelated universes in a perfectly empty
vacuum4. On the contrary, physical objects constitute myriads of intricate
complex structures on many different scales, and as we view them
through various optics we find distinct characteristic complex structures.
Starting from the observation that our understanding of the world is
fragmented, it is easy to adopt a biological paradigm and see human
knowledge as an eco-system with many sub-systems with different
interacting parts that behave like organisms. Even though an organism is
an autonomous individual it is not an isolated system but a part of a
whole interconnected living network.
Contrary to the common model of a computing mechanism, in which
the computer, given a suitable procedure and an input, sequentially
processes the data until the procedure ends (i.e., the program halts), or to a
model of a physical system, which is assumed to be hermetically isolated
with all possible conservation laws in effect, a model of a biological
4 Here the interesting question of the nature of a vacuum is worth mentioning. A vacuum
in modern physics is anything but empty – it is simmering with continuous activity,
with virtual particles popping up from it and disappearing into it. Chaitin’s ocean of the
unknown can be imagined as a vacuum full of the activity of virtual particles.
system must necessarily be open. A biological system is critically reliant
on its environment for survival. Separate parts of an ecological system
communicate and are vitally dependent on each other.
To sum up, extremely briefly, Chaitin’s informational take on
epistemology, the world is for a human effectively an infinite resource of
truths, many of them incompressible and incomprehensible. Mathematics
is not a monolithic, perfect, eternal crystal of the definite true essence of
the world. It is rather, like other sciences, a fragmented and open
structure, living and growing as a complex biological adaptive eco-system.
In the conclusion of Epistemology as Information Theory: From
Leibniz to Ω, Chaitin leaves us with the following assignment:
In fact, I believe that this is actually the central question in biology as
well as in mathematics; it's the mystery of creation, of creativity:
Where do new mathematical and biological ideas come from? How do
they emerge?
Normally one equates a new biological idea with a new species, but in
fact every time a child is born, that's actually a new idea incarnating; it's
reinventing the notion of “human being,” which changes constantly.
I have no idea how to answer this extremely important question; I wish I
could. Maybe you will be able to do it. Just try! You might have to keep
it cooking on a back burner while concentrating on other things, but don't
give up! All it takes is a new idea! Somebody has to come up with it.
Why not you? (Chaitin 2006)
That is where I want to start. After reading Meta Math! and a number
of Chaitin’s philosophical articles5, and after having written a thesis
based on the philosophy of computationalism/informationalism (Dodig-
Crnkovic, 2006) I dare to present my modest attempt to answer the big
question above, as a part of a Socratic dialogue. My thinking is deeply
5 A goldmine of articles may be found on Chaitin’s web page. See especially Thinking About Gödel & Turing
rooted in pancomputationalism, characterized by Chaitin in the following way:
And how about the entire universe, can it be considered to be a
computer? Yes, it certainly can, it is constantly computing its future state
from its current state, it's constantly computing its own time-evolution!
And as I believe Tom Toffoli pointed out, actual computers like your PC
just hitch a ride on this universal computation! (Chaitin 2006)
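This picture of a world “constantly computing its future state from its current state” can be caricatured with an elementary cellular automaton: the entire toy universe is a row of bits, and the physics is one local update rule applied everywhere at once. The choice of Rule 110 below is purely illustrative.

```python
RULE = 110  # any of the 256 elementary rules would serve as toy "physics"

def step(state: list[int]) -> list[int]:
    """Compute the next global state from the current one (periodic boundary)."""
    n = len(state)
    nxt = []
    for i in range(n):
        left, centre, right = state[i - 1], state[i], state[(i + 1) % n]
        neighbourhood = (left << 2) | (centre << 1) | right  # 3-bit index
        nxt.append((RULE >> neighbourhood) & 1)              # rule lookup
    return nxt

# The "universe" evolves by repeatedly computing its own next state.
world = [0] * 30 + [1] + [0] * 30
for _ in range(5):
    world = step(world)
print("".join("#" if c else "." for c in world))
```

A PC simulating this loop is, in Toffoli’s phrase, hitching a ride on a computation the rule itself already defines.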
If computation is seen as information processing,
pancomputationalism turns into paninformationalism. Historically, within
the field of computing and philosophy, two distinct branches have been
established: informationalism, in which the focus is on information as the
stuff of the universe (Floridi 2002, 2003, and 2004), and
computationalism, where the universe is seen as a computer. Chaitin
(2006) mentions the cellular automata researchers and computer
scientists Fredkin, Wolfram, Toffoli, and Margolus, and the physicists
Wheeler, Zeilinger, 't Hooft, Smolin, Lloyd, Zizzi, Mäkelä, and
Jacobson, as the most prominent computationalists. In Dodig-Crnkovic
(2006) I put forward a dual-aspect info-computationalism, in which the
universe is viewed as a structure (information) in a permanent process of
change (computation). According to this view, information and
computation constitute two aspects of reality, and like the particle and
wave, or matter and energy, capture different facets of the same physical
world. Computation may be either discrete or continuous6 (digital or
analogue). The present approach offers a generalization of traditional
computationalism in the sense that “computation” is understood as the
process governing the dynamics of the physical universe.
Digital philosophy is fundamentally neo-Pythagorean, especially in its
focus on the software aspects of the physical universe (either code or a
process). Starting from the pancomputationalist version of digital
philosophy, epistemology can be naturalized so that knowledge
6 The universe is a network of computing processes and its phenomena are info-
computational. Both continuous and discrete, analogue and digital computing are parts of
the computing universe. (Dodig-Crnkovic, 2006). For the discussion about the necessity
of both computational modes on the quantum mechanical level see Lloyd (2006).
generation can be explained in pure computationalist terms (Dodig-
Crnkovic, 2006). This will enable us to suggest a mechanism that
produces meaningful behavior and knowledge in biological matter and
that will also help us understand what we might need in order to be able
to construct intelligent artifacts.
Epistemology Naturalized by Info-Computation
Naturalized epistemology is the idea that the subject matter of
epistemology is not our concept of knowledge, but knowledge as a
natural phenomenon (Feldman, Kornblith, Stich, Dennett). In what
follows I will try to present knowledge generation as natural
computation, i.e. information processing. One of the reasons for taking
this approach is that info-computationalism provides a unifying
framework which makes it possible for different research fields such as
philosophy, computer science, neuroscience, cognitive science, biology,
and a number of others to communicate within a common framework.
In this account naturalized epistemology is based on the computational
understanding of cognition and agency. This entails evolutionary
understanding of cognition (Lorenz 1977, Popper 1978, Toulmin 1972
and Campbell et al. 1989, Harms 2004, Dawkins 1976, Dennett 1991).
Knowledge is a result of the structuring of input data (data →
information → knowledge) (Stonier, 1997) by an interactive
computational process going on in the nervous system during the
adaptive interplay of an agent with the environment, which increases
agents’ ability to cope with the world and its dynamics. The mind is seen
as a computational process on an informational structure that, both in its
digital and analogue forms, occurs through changes in the structures of
our brains and bodies as a consequence of interaction with the physical
universe. This approach leads to a naturalized, evolutionary
epistemology that understands cognition as a phenomenon of interactive
information processing which can be ascribed even to the simplest living
organisms (Maturana and Varela) and likewise to artificial life.
In order to be able to comprehend cognitive systems we can learn
from the historical development of biological cognitive functions and
structures from the simple ones upward. A very interesting account of
developmental ascendancy, from bottom-up to top-down control, is given
by Coffman (2006). Among other things, this article addresses the question of the
origin of complexity in biological organisms, including the analysis of
the relationship between the parts and the whole.
Natural Computation beyond the Turing Limit
As a direct consequence of the computationalist view that every natural
process is computation in a computing universe, “computation” must be
generalized to mean natural computation. MacLennan 2004 defines
“natural computation” as “computation occurring in nature or inspired by
that in nature”, which besides classical computation also includes
quantum computing and molecular computation, and may be represented
by either discrete or continuous models. Examples of computation
occurring in nature encompass information processing in evolution by
natural selection, in the brain, in the immune system, in the self-
organized collective behavior of groups of animals such as ant colonies,
and in particle swarms. Computation inspired by nature includes genetic
algorithms, artificial neural nets, simulated immune systems, and so
forth. There is a considerable synergy gain in relating human-designed
computing with the computing in nature. Here we can illustrate Chaitin’s
claim that “we only understand something if we can program it”: In the
iterative course of modeling and computationally simulating
(programming) natural processes, we learn to reproduce and predict more
and more of the characteristic features of the natural systems.
Classical ideal theoretical computers are mathematical objects and are
equivalent to algorithms, abstract automata (Turing machines or “logical
machines” as Turing called them), effective procedures, recursive
functions, or formal languages. Contrary to traditional Turing
computation, in which the computer is an isolated box provided with a
suitable algorithm and an input, left alone to compute until the algorithm
terminates, interactive computation (Wegner 1988, Goldin et al. 2006)
presupposes interaction i.e. communication of the computing process
with the environment during computation. Interaction consequently
provides a new conceptualization of computational phenomena which
involves communication and information processing. Compared with
new emerging computing paradigms, in particular with interactive
computing and natural computing, Turing machines form a proper
subset of the set of information processing devices. (Dodig-Crnkovic,
2006, paper B)
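The contrast can be sketched in a few lines: a closed, Turing-style computation receives its whole input up front, runs in isolation, and halts with an answer, while an interactive computation carries on an open-ended dialogue with its environment, each output depending on the history of the exchange. The function names below are purely illustrative.

```python
# Closed, Turing-style computation: all input given up front, runs, halts.
def closed_sum(numbers):
    return sum(numbers)

# Interactive computation: a coroutine that never sees "the whole input";
# it waits for messages from its environment and replies as it goes.
def interactive_sum():
    total = 0
    received = yield          # wait for the first message
    while True:
        total += received
        received = yield total  # report the running state, then wait again

agent = interactive_sum()
next(agent)                              # open the dialogue
replies = [agent.send(x) for x in [1, 2, 3]]
print(replies)
```

The generator’s behaviour is not a fixed input-to-output mapping but a history-dependent stream of responses, which is the conceptual shift interactive computing makes.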
The Wegner-Goldin interactive computer is conceived as an open
system in communication with the environment, the boundary of which
is dynamic, as in living biological systems and thus particularly suitable
to model natural computation. In a computationalist view, organisms
may be seen as constituted by computational processes; they are “living
computers”. In the living cell an info-computational process takes place
using DNA, in an open system exchanging information, matter and
energy with the environment.
Burgin (2005) in his book explores computing beyond the Turing
limit and identifies three distinct components of information processing
systems: hardware (physical devices), software (programs that regulate
its functioning and sometimes can be identical with hardware, as in
biological computing), and infoware (information processed by the
system). Infoware is a shell built around the software-hardware core,
which is the traditional domain of automata and algorithm theory.
The Semantic Web is an example of infoware that adds a semantic
component to the information present on the web (Berners-Lee, Hendler
and Lassila, 2001).
For the implementations of computationalism, interactive computing
is the most appropriate general model of natural computing, as it suits the
purpose of modeling a network of mutually communicating processes
(Dodig-Crnkovic 2006). It will be of particular interest to computational
accounts of epistemology, as a cognizing agent interacts with the
environment in order to gain experience and knowledge. It also provides
a unifying framework for the reconciliation of classical and connectionist
views of cognition.
Cognitive Agents Processing Data → Information → Knowledge
Our specific interest is in how the structuring from data to information
and knowledge develops on a phenomenological level in a cognitive
agent (biological or artificial) in its interaction with the environment. The
central role of interaction is expressed by Goertzel (1994) in the
following way:
Today, more and more biologists are waking up to the sensitive
environment-dependence of fitness, to the fact that the properties which
make an organism fit may not even be present in the organism, but may
be emergent between the organism and its environment.
One can say that living organisms are “about” the environment, that
they have developed adaptive strategies to survive by internalizing
environmental constraints. The interaction between an organism and its
environment is realized through the exchange of physical signals that
might be seen as data, or when structured, as information. Organizing
and mutually relating different pieces of information results in
knowledge. In that context, computationalism appears as the most
suitable framework for naturalizing epistemology.
Maturana and Varela (1980) presented a very interesting idea that
even the simplest organisms possess cognition and that their meaning-
production apparatus is contained in their metabolism. Of course, there
are also non-metabolic interactions with the environment, such as
locomotion, that also generate meaning for an organism by changing its
environment and providing new input data. We will take Maturana and
Varela’s theory as the basis for a computationalist account of
evolutionary epistemology.
At the physical level, living beings are open complex computational
systems in a regime on the edge of chaos7, characterized by maximal
7 Bertschinger N. and Natschläger T. (2004) claim “Employing a recently developed
framework for analyzing real-time computations we show that only near the critical
boundary such networks can perform complex computations on time series. Hence, this
result strongly supports conjectures that dynamical systems which are capable of doing
complex computational tasks should operate near the edge of chaos, i.e. the transition
from ordered to chaotic dynamics.”
informational content. Complexity is found between orderly systems
with high information compressibility and low information content and
random systems with low compressibility and high information content.
Living systems are “open, coherent, space-time structures maintained far
from thermodynamic equilibrium by a flow of energy”. (Chaisson, 2002)
Langton has compared these different regions to the different states of
matter. Fixed points are like crystals in that they are for the most part
static and orderly. Chaotic dynamics are similar to gases, which can be
described only statistically. Periodic behavior is similar to a non-crystal
solid, and complexity is like a liquid that is close to both the solid and the
gaseous states. In this way, we can once again view complexity and
computation as existing on the edge of chaos and simplicity. (Flake 1998)
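The contrast between the orderly, the chaotic, and what lies between them can be illustrated with the logistic map, again using zlib-compressed symbol sequences as a rough proxy for information content. The parameter values are standard for this map, but the whole construction is only a sketch.

```python
import zlib

def symbolic_orbit(r: float, steps: int = 5_000) -> bytes:
    """Iterate the logistic map x -> r*x*(1-x) and record a '0'/'1'
    symbol per step depending on which half of [0, 1] the state is in."""
    x = 0.3
    symbols = bytearray()
    for _ in range(steps):
        x = r * x * (1.0 - x)
        symbols.append(ord('1') if x >= 0.5 else ord('0'))
    return bytes(symbols)

# r = 3.2: the orbit settles into a short cycle -- crystal-like order,
# high compressibility, low information content.
orderly = zlib.compress(symbolic_orbit(3.2), 9)

# r = 4.0: fully chaotic regime -- gas-like, the symbol stream is
# essentially random and resists compression.
chaotic = zlib.compress(symbolic_orbit(4.0), 9)

print(len(orderly), len(chaotic))
```

Complexity, in the sense used above, lives between these two extremes: structured enough to compress partially, rich enough not to collapse to a few bytes.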
Artificial agents may be treated analogously with animals in terms of
different degrees of complexity; they may range from software agents
with no sensory inputs at all to cognitive robots with varying degrees of
sophistication of sensors and varying bodily architecture.
The question is: how does information acquire meaning naturally in the
process of an organism’s interaction with its environment? A
straightforward approach to naturalized epistemology attempts to answer
this question via study of evolution and its impact on the cognitive,
linguistic, and social structures of living beings, from the simplest ones
to those at highest levels of organizational complexity (Bates 2005).
Various animals are equipped with varying physical hardware, sets of
sensory apparatuses, goals, and behaviors. For different animals, the
“aboutness” concerning the same physical reality is different in terms of
causes and their effects.
Indeed, cognitive ethologists find the only way to make sense of the
cognitive equipment in animals is to treat it as an information processing
system, including equipment for perception, as well as the storage and
integration of information; that is, after all, the point of calling it
cognitive equipment. That equipment which can play such a role confers
selective advantage over animals lacking such equipment no longer
requires any argument. (Kornblith 1999)
An agent receives inputs from the physical environment (data) and
interprets these in terms of its own earlier experiences, comparing them
with stored data in a feedback loop. Through that interaction between the
environmental data and the inner structure of an agent, a dynamical state
is obtained in which the agent has established a representation of the
situation. The next step in the loop is to match the present state with
goals and preferences (saved in an associative memory). This process
results in the anticipation of what consequences various actions from
the given state might have (Goertzel 1994). Compare with Dennett’s
(1991) Multiple Drafts Model. Here is an alternative formulation:
This approach is not a hybrid dynamic/symbolic one, but interplay
between analogue and digital information spaces, in an attempt to model
the representational behavior of a system. The focus on the explicitly
referential covariation of information between system and environment is
shifted towards the interactive modulation of implicit internal content
and therefore, the resulting pragmatic adaptation of the system via its
interaction with the environment. The basic components of the
framework, its nodal points and their dynamic relations are analyzed,
aiming at providing a functional framework for the complex realm of
autonomous information systems (Arnellos et al. 2005)
Very close to the above ideas is the interactivist approach of Bickhard
(2004), and Kulakov & Stojanov (2002). On the ontological level, it
involves naturalism, which means that the physical world (matter) and
mind are integrated, mind being an emergent property of a physical
process, closely related to the process metaphysics of Whitehead (1978).
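The perception-memory-anticipation loop described above can be cartooned in a few lines. Every class name, attribute, and numeric choice here is invented for illustration; this is a caricature of the loop, not a model of cognition.

```python
class Agent:
    """Toy cognizing agent: interprets data against memory, then picks
    the action whose anticipated consequence lies closest to its goal."""

    def __init__(self, goal: float):
        self.memory: list[float] = []   # stored earlier experiences
        self.goal = goal                # preference held in memory

    def perceive(self, datum: float) -> float:
        # Feedback loop: the representation blends the new input
        # with the most recent stored experience.
        if self.memory:
            representation = 0.5 * datum + 0.5 * self.memory[-1]
        else:
            representation = datum
        self.memory.append(representation)
        return representation

    def choose_action(self, actions: dict[str, float]) -> str:
        # Anticipate each action's consequence from the current state
        # and choose the one expected to land nearest the goal.
        state = self.memory[-1]
        return min(actions, key=lambda a: abs(state + actions[a] - self.goal))

agent = Agent(goal=10.0)
agent.perceive(4.0)
agent.perceive(8.0)   # representation blends the input with the past
act = agent.choose_action({"wait": 0.0, "approach": 4.0, "retreat": -4.0})
print(act)
```

The two ingredients the text emphasizes are both visible: the internal state is shaped by the history of interaction, and action selection runs through anticipated consequences rather than raw stimuli.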
Evolutionary Development of Cognition
Evolutionary development is the best known explanatory model for life
on earth. If we want to understand the functional characteristics of life, it
is helpful to reveal its paths of development.
One cannot account for the functional architecture, reliability, and goals
of a nervous system without understanding its adaptive history.
Consequently, a successful science of knowledge must include standard
techniques for modeling the interaction between evolution and learning.
(Harms, 2005)
A central question is thus what the mechanism is of the evolutionary
development of cognitive abilities in organisms. Critics of the
evolutionary approach mention the impossibility of “blind chance” to
produce such highly complex structures as intelligent living organisms.
Proverbial monkeys typing Shakespeare are often used as an illustration.
However, Lloyd 2006 mentions a following, first-rate counter argument,
originally due to Chaitin and Bennet. The “typing monkeys” argument
does not take into account the physical laws of the universe, which
dramatically limit what can be typed. The universe is not a typewriter,
but a computer, so a monkey types random input into a computer.
Quantum mechanics supplies the universe with “monkeys” in the form of
random fluctuations, such as those that seeded the locations of galaxies.
The computer into which they type is the universe itself. From a simple
initial state, obeying simple physical laws, the universe has
systematically processed and amplified the bits of information embodied
in those quantum fluctuations. The result of this information processing
is the diverse, information-packed universe we see around us:
programmed by quanta, physics gave rise first to chemistry and then to
life; programmed by mutation and recombination, life gave rise to
Shakespeare; programmed by experience and imagination, Shakespeare
gave rise to Hamlet. You might say that the difference between a
monkey at a typewriter and a monkey at a computer is all the difference
in the world. (Lloyd 2006)
Allow me to add one comment on Lloyd’s computationalist claim.
The universe/computer on which a monkey types is at the same time the
hardware and the program, in a way similar to the Turing machine. An
example from biological computing is the DNA where the hardware (the
molecule) is at the same time the software (the program, the code). In
general, each new input restructures the computational universe and
changes the preconditions for future inputs. Those processes are
interactive and self-organizing. That provides the essential speed-up for
the process of generating more and more complex structures.
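Lloyd’s contrast can be caricatured computationally: the same few random bits are structureless as typewriter output, but as input to simple deterministic “laws” they merely select a rule, which then amplifies them into a large, highly structured (and therefore highly compressible) object. Every detail below is illustrative.

```python
import os
import random
import zlib

seed = os.urandom(4)  # the "quantum fluctuation": a few random bits

# Monkey at a typewriter: randomness is the OUTPUT.
# 10,000 random characters remain structureless.
rng = random.Random(seed)
typewriter = bytes(rng.randrange(32, 127) for _ in range(10_000))

# Monkey at a computer: the same randomness is the INPUT.
# The random bits only choose a parameter of a simple deterministic law,
# which then generates a large, strongly patterned object.
rule = seed[0] % 7 + 2
computer = bytes((i * rule) % 97 for i in range(10_000))

# Structure shows up as compressibility: the "computed" world has a short
# description (the law plus the seed); the typed one does not.
print(len(zlib.compress(typewriter, 9)), len(zlib.compress(computer, 9)))
```

The asymmetry is the point: a handful of random bits plus simple laws yields far more structure than the same bits emitted directly.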
Informational Complexity of Cognitive Structures
Dynamics lead to statics, statics leads to dynamics, and the simultaneous
analysis of the two provides the beginning of an understanding of that
mysterious process called mind. (Goertzel 1994)
In the info-computationalist vocabulary, “statics” (structure)
corresponds to “information” and “dynamics” corresponds to “computation”.
One question which may be asked is: why doesn’t an organism
exclusively react to data as it is received from the world/environment?
Why is information used as building blocks, and why is knowledge
constructed? In principle, one could imagine a reactive agent that
responds directly to input data without building an informational
structure out of raw input.
The reason may be found in computational efficiency. The storage of
data that are constant or often
reused saves huge amounts of time. So, for instance, if instead of dealing
with each individual pixel in a picture, we can make use of symbols or
patterns that can be identified with similar memorized symbols or
patterns, the picture can be handled much more quickly.
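This chunking argument is essentially memoization: memorize the result for a pattern already seen instead of re-deriving it from raw data every time. A minimal sketch, with the pixel patterns and the classification rule invented for illustration:

```python
from functools import lru_cache

calls = {"expensive": 0}

@lru_cache(maxsize=None)
def classify(pattern: tuple) -> str:
    """Stand-in for slow, low-level processing of a raw patch of pixels.
    Thanks to the cache, it only runs once per distinct pattern."""
    calls["expensive"] += 1
    return "bright" if sum(pattern) > len(pattern) // 2 else "dark"

# A scene in which the same few patterns recur thousands of times.
scene = [(1, 1, 0, 1), (0, 0, 0, 1), (1, 1, 0, 1)] * 1000
labels = [classify(p) for p in scene]

print(len(labels), calls["expensive"])  # thousands of lookups, 2 slow runs
```

The speed-up is exactly the one the text describes: once a pattern has become a memorized chunk, recognizing it is a lookup, not a recomputation.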
Studies of vision show that cognition focuses on that part of the scene
which is variable and dynamic, and uses memorized data for the rest that
is static (this is the notorious frame problem of AI). Based on the same
mechanism, we use ideas already existing to recognize, classify, and
characterize phenomena. Our cognition is thus an emergent phenomenon,
resulting from both memorized (static) and observed (dynamic) streams.
Forming chunks of structured data into building blocks, instead of
performing time-consuming computations on those data sets in real time,
is an enormously powerful acceleration mechanism. With each higher
level of organization, the computing capacity of an organism’s cognitive
apparatus is further increased. The efficiency of meta-levels is becoming
evident in computational implementations. Goertzel illustrates this
multilevel control structure by means of the three-level “pyramidal”
vision processing parallel computer developed by Levitan and his
colleagues at the University of Massachusetts. The bottom level deals
with sensory data and with low-level processing such as segmentation
into components. The intermediate level handles grouping, shape
detection and such; and the top level processes this information
“symbolically”, constructing an overall interpretation of the scene. This
three-level perceptual hierarchy appears to be an exceptionally effective
approach to computer vision.
We look for those objects that we expect to see and we look for those
shapes that we are used to seeing. If a level 5 process corresponds to an
expected object, then it will tell its children [i. e. sub-processes] to look
for the parts corresponding to that object, and its children will tell their
children to look for the complex geometrical forms making up the parts
to which they refer, et cetera. (Goertzel 1994)
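The three-level pyramid can be echoed in a toy pipeline in which each level consumes the previous level’s output and emits something more abstract. The input “scene”, the function names, and the thresholds are all invented for illustration.

```python
ROW = "..##..####....##.."   # level 0: raw sensory data ('.'=off, '#'=on)

def segment(row: str) -> list[int]:
    """Bottom level: split the signal into runs of active pixels,
    reporting each segment's length."""
    lengths, run = [], 0
    for cell in row + ".":
        if cell == "#":
            run += 1
        elif run:
            lengths.append(run)
            run = 0
    return lengths

def group(lengths: list[int]) -> list[str]:
    """Intermediate level: classify segments into coarse shape classes."""
    return ["long" if n >= 3 else "short" for n in lengths]

def interpret(shapes: list[str]) -> str:
    """Top level: a symbolic reading of the whole scene."""
    return f"{len(shapes)} objects: " + ", ".join(shapes)

print(interpret(group(segment(ROW))))
```

Information flows upward through increasingly abstract descriptions, while in the full architecture expectations would also flow back down, as the quoted passage describes.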
Human intelligence is indivisible from its presence in a body
(Dreyfus 1972, Gärdenfors 2000, 2005, Stuart 2003). When we observe,
act and reason, we relate different ideas in a way that resembles the
relation of our body with various external objects. Cognitive structures of
living organisms are complex systems with an evolutionary history (Gell-
Mann 1995), evolved in the interaction of the first proto-organisms with
the environment, and evolving towards more and more complex
structures, which is in complete agreement with the info-computational
view and its understanding of human cognition as a part of this overall
picture.
Randomness & Complexity, from Leibniz to Chaitin
This essay attempts to address the question posed by Chaitin (2006)
about the origin of creativity and novelty in a computational universe.
To that end, an info-computationalist framework was assumed, within
which information is the stuff of the universe and computation is its
dynamics. Based on this understanding of natural phenomena as info-
computational, the computer in general is conceived as an open
interactive system, and the classical Turing machine is understood as a
subset of a general interactive/adaptive/self-organizing universal natural
computer. In a computationalist view, organisms are constituted by
computational processes, implementing computation in vivo.
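The contrast drawn above between the classical Turing machine model and an open interactive system can be illustrated with a minimal sketch (the names are illustrative, not from the text): a closed computation maps a fixed input to an output and halts, while an interactive process keeps exchanging data with its environment, each response depending on state accumulated during the run.

```python
# Closed vs. interactive computation: a toy contrast.

def closed_computation(x):
    """Classical model: fixed input, halts with an output."""
    return x * x

def interactive_agent():
    """Open model: an ongoing process; each response is shaped by new input."""
    state = 0
    response = None
    while True:
        stimulus = yield response  # input arrives during the computation
        state += stimulus          # internal state adapts to the environment
        response = state

agent = interactive_agent()
next(agent)                                # start the ongoing process
print(closed_computation(4))               # 16
print([agent.send(x) for x in [1, 2, 3]])  # [1, 3, 6]
```

The generator never halts by itself; its behavior is a function of the whole interaction history, which is the sense in which interactive models generalize the algorithmic one.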
All cognizing beings are physical (informational) systems in constant
interaction with their environment. The essential feature of cognizing
living organisms is their ability to manage complexity, and to handle
complicated environmental conditions with a variety of responses that
are results of adaptation, variation, selection, learning, and/or reasoning.
Increasingly complex living organisms arise as a consequence of
evolution. They are able to register inputs (data) from the environment,
to structure those into information, and, in more developed organisms,
into knowledge. The evolutionary advantage of using structured,
component-based approaches (data → information → knowledge) is
improved response time and computational efficiency of cognitive processing.
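The acceleration argument can be made concrete with a memoization sketch (all names are illustrative): once raw data has been structured into a reusable "chunk", subsequent encounters with the same stimulus reuse the stored result instead of repeating the expensive real-time computation.

```python
# Chunking as acceleration: expensive raw processing runs once per
# distinct stimulus; structured results are reused thereafter.
from functools import lru_cache

calls = {"raw": 0}

def classify_raw(stimulus):
    """Expensive 'real-time' processing of raw data (stand-in)."""
    calls["raw"] += 1
    return stimulus.upper()

@lru_cache(maxsize=None)
def classify_chunked(stimulus):
    """Same result, but each distinct stimulus is processed only once."""
    return classify_raw(stimulus)

stream = ["snake", "berry", "snake", "snake", "berry"]
responses = [classify_chunked(s) for s in stream]
print(responses)     # ['SNAKE', 'BERRY', 'SNAKE', 'SNAKE', 'BERRY']
print(calls["raw"])  # 2 -- raw processing ran once per distinct stimulus
```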
The main reason for choosing an info-computationalist view for
naturalizing epistemology is that it presents a unifying framework which
enables research fields of philosophy, computer science, neuroscience,
cognitive science, biology, artificial intelligence and a number of others to
communicate, exchange their results and build common knowledge. It
also provides the natural solution to the old problem of the role of
representation, a debate between two seemingly incompatible views: a
symbolic, explicit and static notion of representation versus an implicit and
dynamic (interactive, neural-network-type) one. Within the info-computational
framework, those classical (Turing-machine-type) and
connectionist views are reconciled and used to describe different levels
or aspects of cognition.
Where Do New Ideas Come From? How Do They Emerge?
So where do new mathematical and biological ideas come from? How
do they emerge?
It seems to me that, in conclusion, we can confidently say that they
come from the world. Humans, just like other biological organisms, are
just a tiny subset of the universe, and the universe definitely has an
impact on us. A part of the new ideas is the consequence of the re-configuration
and reshaping of already existing elements in the
biosphere, as in component-based engineering. Life learns both from
already existing elements and from what comes from outside our horizon.
Even if the universe is a huge (quantum mechanical) computer, for us
it is an infinite reservoir of new discoveries and surprises. For even if the
universe as a whole were a totally deterministic mechanism, for
humans to know its functioning and predict its behavior would take
infinite time, since, as Chaitin has demonstrated, there are
incompressible truths. In short, in order to be able to predict the
Universe-computer we would need the Universe-computer itself to
compute its next state.
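Chaitin's point about incompressible truths rests on a simple counting argument, which can be checked directly: there are fewer binary programs shorter than n bits than there are n-bit strings, so some strings (states of the world) admit no shorter description than themselves.

```python
# Chaitin-style counting argument (a sketch): binary programs shorter
# than n bits are too few to describe all 2**n strings of length n.

def count_short_programs(n):
    """Number of binary strings (candidate programs) of length < n."""
    return sum(2 ** k for k in range(n))  # = 2**n - 1

n = 20
print(2 ** n)                   # 1048576 strings of length n
print(count_short_programs(n))  # 1048575 shorter candidate descriptions
# Since 2**n - 1 < 2**n, at least one n-bit string has no description
# shorter than itself: it is algorithmically incompressible.
```

For such states there is no shortcut computation: only the Universe-computer itself, running step by step, produces them.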
That was my attempt to argue that in the best of all possible worlds
(“le meilleur des mondes possibles” – Leibniz 1710) there are sources of
creativity and unpredictability for us humans, even given a
pancomputational stance. I have done my homework.
I would like to thank Greg Chaitin for his inspiring ideas presented in his
Turing Lecture on epistemology as information theory and the
subsequent paper, and for his kindness in answering my numerous
questions. Thanks to Chris Calude, too. It was a great privilege to be
invited to contribute to this book.
Arnellos, A., Spyrou, T. and Darzentas, J. “The Emergence of Interactive Meaning
Processes in Autonomous Systems”, In: Proceedings of FIS 2005: Third International
Conference on the Foundations of Information Science. Paris, July 4-7, 2005.
Bates, M. J. “Information and Knowledge: An Evolutionary Framework for Information
Science”. Information Research 10, no. 4 (2005) Accessible at
Bertschinger, N. and Natschläger, T. “Real-Time Computation at the Edge of Chaos in
Recurrent Neural Networks”, Neural Comp. 16 (2004) 1413-1436
Berners-Lee, T., Hendler, J. and Lassila, O. “The Semantic Web”. Scientific American,
Vol. 284, 5, pp.34-43 (2001). Accessible at
Bickhard, M. H. “The Dynamic Emergence of Representation”. In H. Clapin, P. Staines,
P. Slezak (Eds.) Representation in Mind: New Approaches to Mental Representation.
(71-90). Elsevier. 2004.
Burgin, M. (2005) Super-Recursive Algorithms, Springer Monographs in Computer Science, Springer
Campbell, D. T. and Paller, B. T. “Extending Evolutionary Epistemology to “Justifying”
Scientific Beliefs (A sociological rapprochement with a fallibilist perceptual
foundationalism?).” In Issues in evolutionary epistemology, edited by K. Hahlweg and C.
A. Hooker, (1989) 231-257. Albany: State University of New York Press.
Chaisson, E.J. (2001) Cosmic Evolution. The Rise of Complexity in Nature. pp. 16-78.
Harvard University Press, Cambridge
Chaitin, G. J. (1987) Algorithmic Information Theory, Cambridge University Press
Chaitin, G. J. “Epistemology as Information Theory”, Collapse, (2006) Volume I, pp. 27-51.
Alan Turing Lecture given at E-CAP 2005.
Chaitin, G. J. (1987) Information Randomness & Incompleteness: Papers on Algorithmic
Information Theory. Singapore: World Scientific.
Chaitin, G. J. (2003) Dijon Lecture
Chaitin, G. J. (2005). Meta Math!: The Quest for Omega. Pantheon.
Coffman, A. J. “Developmental Ascendency: From Bottom-up to Top-down Control”,
Biological Theory Spring 2006, Vol. 1, No. 2: 165-178.
Dawkins, R. 1976, 1982. The Selfish Gene. Oxford University Press.
Dennett, D. (1995), Darwin's Dangerous Idea, Simon & Schuster
Dennett, D. (1991) Consciousness Explained. Penguin Books
Dodig-Crnkovic, G. (2006) Investigations into Information Semantics and Ethics of
Computing, Mälardalen University Press, http://www.diva-
Dreyfus, H. L. (1972) What Computers Can't Do: A Critique of Artificial Reason. Harper
& Row
Flake, G. W. (1998) The Computational Beauty of Nature: Computer Explorations of
Fractals, Chaos, Complex Systems, and Adaptation, MIT Press
Floridi, L. (2002) “What is the Philosophy of Information?”, Metaphilosophy (33.1/2),
Floridi, L. (2003) Blackwell Guide to the Philosophy of Computing and Information
Floridi, L. (2004) “Open Problems in the Philosophy of Information”, Metaphilosophy,
Volume 35: Issue 4
Fredkin, E. Digital Philosophy,
Gärdenfors, P. (2000) Conceptual Spaces, Bradford Books, MIT Press
_______ , Zlatev, J. and Persson, T. “Bodily mimesis as 'the missing link' in human
cognitive evolution”, Lund University Cognitive Studies 121, Lund. 2005
Gell-Mann, M. (1995) The Quark and the Jaguar: Adventures in the Simple and the
Complex. Owl Books.
Goertzel, B. (1993) The Evolving Mind. Gordon and Breach
_______ (1994) Chaotic Logic. Plenum Press.
Goldin, D., Smolka S. and Wegner P. eds. (2006) Interactive Computation: The New
Paradigm, to be published by Springer-Verlag
Harms, W. F. “Naturalizing Epistemology: Prospectus 2006”, Biological Theory 1(1)
2006, 23–24.
Harms, W. F. Information and Meaning in Evolutionary Processes. Cambridge University
Press, 2004
Kornblith, H. (1999) Knowledge in Humans and Other Animals. Noûs 33 (s13), 327.
Kornblith, H. ed. (1994) Naturalizing Epistemology, second edition, Cambridge: The
MIT Press
Kulakov, A. and Stojanov, G. “Structures, Inner Values, Hierarchies And Stages:
Essentials For Developmental Robot Architecture”, 2nd International Workshop on
Epigenetic Robotics, Edinburgh, 2002
Leibniz, G. W. Philosophical Papers and Letters, ed. Leroy E. Loemker (Dordrecht:
Reidel, 1969)
Lloyd, S. (2006) Programming the Universe: A Quantum Computer Scientist Takes on the
Cosmos, Alfred A. Knopf
Lorenz, K. (1977) Behind the Mirror. London: Methuen
MacLennan, B. “Natural computation and non-Turing models of computation”,
Theoretical Computer Science 317 (2004) 115 – 145
Maturana, H. and Varela, F. (1992) The Tree of Knowledge. Shambala
_______ (1980) Autopoiesis and Cognition: The Realization of the Living. D. Reidel.
Popper, K. R. (1972) Objective Knowledge: An Evolutionary Approach. Oxford: The
Clarendon Press.
Stich, S. (1993) “Naturalizing Epistemology: Quine, Simon and the Prospects for
Pragmatism” in C. Hookway & D. Peterson, eds., Philosophy and Cognitive Science,
Royal Inst. of Philosophy, Supplement no. 34 (Cambridge University Press) p. 1-17.
Stonier, T. (1997) Information and Meaning. An Evolutionary Perspective, Springer,
Berlin, N.Y.
Stuart, S. (2003) “The Self as an Embedded Agent”, Minds and Machines, 13 (2): 187
Tasic, V. (2001) Mathematics and the Roots of Postmodern Thought. Oxford University Press.
Toulmin, S. (1972) Human Understanding: The Collective Use and Evolution of
Concepts. Princeton University Press.
Wegner, P. “Interactive Foundations of Computing”, Theoretical Computer Science 192
(1998) 315-51.
Whitehead, A. N. (1978) Process and Reality: An Essay in Cosmology. New York: The
Free Press.
Wolff, J. G. (2006) Unifying Computing and Cognition,
Wolfram, S. (2002) A New Kind of Science. Wolfram Science.