CHAPTER X
WHERE DO NEW IDEAS COME FROM? HOW DO THEY
EMERGE? - EPISTEMOLOGY AS COMPUTATION
(INFORMATION PROCESSING)
Gordana Dodig-Crnkovic
Mälardalen University, Västerås, Sweden
gordana.dodig-crnkovic@mdh.se
http://www.idt.mdh.se/personal/gdc/
This essay presents arguments for the claim that in the best of all possible worlds (Leibniz) there are sources of unpredictability and creativity for us humans, even given a pancomputational stance. A suggested answer to Chaitin's questions, "Where do new mathematical and biological ideas come from? How do they emerge?", is that they come from the world and emerge from basic physical (computational) laws. For humans, as a tiny subset of the universe, a part of the new ideas comes as the result of the re-configuration and reshaping of already existing elements, and another part comes from the outside, as a consequence of the openness and interactivity of the system. For the universe at large, it is randomness that is the source of unpredictability on the fundamental level. In order to be able to completely predict the Universe-computer, we would need the Universe-computer itself to compute its next state; as Chaitin has demonstrated, there are incompressible truths, which means truths that cannot be computed by any computer other than the universe itself.
Introduction
The previous century had logical positivism and all that emphasis on the
philosophy of language, and completely shunned speculative
metaphysics, but a number of us think that it is time to start again. There
is an emerging digital philosophy and digital physics, a new metaphysics
associated with names like Edward Fredkin and Stephen Wolfram and a
handful of like-minded individuals, among whom I include myself.
(Chaitin, Epistemology as Information Theory: From Leibniz to Ω, 2006)
It was in June 2005 that I first met Greg Chaitin, at the E-CAP 2005 conference in Sweden, where he delivered the Alan Turing Lecture and presented his book Meta Math! It was a remarkable lecture and a remarkable book, which has left me wondering, reading and thinking since then [1]. The overwhelming effect was a feeling of liberation: we were again allowed to think big, to think système du monde, and the one Chaitin suggested was constructed as digital philosophy – something that I, as a computer scientist and physicist, found extremely appealing. God is a computer programmer, Chaitin claims, and to understand the world amounts to being able to program it!
Under these premises, the theory of information, specifically Chaitin's algorithmic information theory, becomes a very elegant and natural way to reconstruct epistemology, as demonstrated in Chaitin (2006). The epistemological model that is, according to Chaitin, central to algorithmic information theory is that a scientific or mathematical theory is a computer program for calculating the facts, and the smaller the program, the better the theory. In other words, understanding is compression of information! [2]
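This idea can be given a hands-on flavor. The following sketch (my illustration, not Chaitin's; it uses zlib compression as a crude, computable stand-in for ideal algorithmic compression, which is uncomputable) shows that data generated by a simple law admits a much shorter description than random data:

    import random
    import zlib

    def compressed_size(data: bytes) -> int:
        # Length of the zlib-compressed form: a rough upper bound on the
        # algorithmic (Kolmogorov) complexity of the data.
        return len(zlib.compress(data, 9))

    # "Lawful" facts: fully determined by a short rule, i.e. a small program.
    lawful = bytes(i % 256 for i in range(10000))

    # "Random" facts: no rule shorter than the data itself.
    noise = bytes(random.randrange(256) for _ in range(10000))

    print(compressed_size(lawful))   # small: the "theory" compresses the facts
    print(compressed_size(noise))    # close to 10000: no theory helps

The better the theory, the shorter the program; wholly random data supports no theory at all.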
[1] I had the privilege to discuss the Turing Lecture article with Chaitin while editing the forthcoming book Dodig-Crnkovic G. and Stuart S., eds. (2007), Computation, Information, Cognition – The Nexus and The Liminal, Cambridge Scholars Publishing. The present paper is meant as a continuation of that dialog.

[2] For a detailed implementation of the idea of information compression, see Wolff (2006).

In exploring epistemology as information theory, Chaitin addresses the question of the nature of mathematics as our most reliable knowledge, illustrated by Hilbert's program for its formalization and automatization. Based on algorithmic information theory, Chaitin comes
to this enlightening conclusion:
In other words, the normal, Hilbertian view of math is that all of
mathematical truth, an infinite number of truths, can be compressed into
a finite number of axioms. But there are an infinity of mathematical
truths that cannot be compressed at all, not one bit!
This is a very important result, which sheds new light on epistemology, and on the meaning of Gödel's and Turing's negative responses to Hilbert's program. What is scientific truth today, after all [3], if even mathematics is not able to prove every true statement within its own domain? Chaitin offers a new and encouraging suggestion – mathematics may not be as monolithic and a priori as Hilbert believed:
But we have seen that the world of mathematical ideas has infinite
complexity; it cannot be explained with any theory having a finite
number of bits, which from a sufficiently abstract point of view seems
much more like biology, the domain of the complex, than like physics,
where simple equations reign supreme.
The consequence is that the ambition of having one grand unified
theory of mathematics must be abandoned. The domain of mathematics
is more like an archipelago consisting of islands of truths in an ocean of
incomprehensible and incompressible information. Chaitin, in an interview in September 2003, says:
You see, you have all of mathematical truth, this ocean of
mathematical truth. And this ocean has islands. An island here, algebraic
truths. An island there, arithmetic truths. An island here, the calculus.
And these are different fields of mathematics where all the ideas are
interconnected in ways that mathematicians love; they fall into nice,
interconnected patterns. But what I've discovered is all this sea around the islands.

[3] Tasic, in his Mathematics and the Roots of Postmodern Thought, gives an eloquent answer to this question in the context of human knowledge in general.
So, it seems that apart from Leibniz's bewildering question, quoted by Chaitin (2006):
“Why is there something rather than nothing? For nothing is simpler and
easier than something.” Leibniz, Section 7 of Principles of Nature and
Grace
there is the following, equally puzzling one:
Why is the something that exists made of parts, rather than being one single piece?
For there are two significant aspects of the world which we observe:
the world exists, and it appears to us as divisible, made of parts. The
parts, however, are not totally unrelated universes in a perfectly empty
vacuum [4]. On the contrary, physical objects constitute myriads of intricate
complex structures on many different scales, and as we view them
through various optics we find distinct characteristic complex structures.
Starting from the observation that our understanding of the world is
fragmented, it is easy to adopt a biological paradigm and see human
knowledge as an eco-system with many sub-systems with different
interacting parts that behave like organisms. Even though an organism is
an autonomous individual it is not an isolated system but a part of a
whole interconnected living network.
Contrary to the common model of a computing mechanism, in which the computer, given a suitable procedure and an input, sequentially processes the data until the procedure ends (i.e. the program halts), and contrary to the model of a physical system, which is assumed to be hermetically isolated with all possible conservation laws in effect, a model of a biological system must necessarily be open. A biological system is critically reliant on its environment for survival. Separate parts of an ecological system communicate and are vitally dependent on each other.

[4] Here the interesting question of the nature of a vacuum is worth mentioning. A vacuum in modern physics is anything but empty – it is simmering with continuous activity, with virtual particles popping up from it and disappearing into it. Chaitin's ocean of the unknown can be imagined as a vacuum full of the activity of virtual particles.
To sum up Chaitin's informational take on epistemology extremely briefly: the world is, for a human, effectively an infinite resource of truths, many of them incompressible and incomprehensible. Mathematics is not a monolithic, perfect, eternal crystal of the definite true essence of the world. It is rather, like other sciences, a fragmented and open structure, living and growing as a complex adaptive biological eco-system.
In the conclusion of Epistemology as Information Theory: From
Leibniz To Ω, Chaitin leaves us with the following assignment:
In fact, I believe that this is actually the central question in biology as
well as in mathematics; it's the mystery of creation, of creativity:
Where do new mathematical and biological ideas come from? How do
they emerge?
Normally one equates a new biological idea with a new species, but in
fact every time a child is born, that's actually a new idea incarnating; it's
reinventing the notion of “human being,” which changes constantly.
I have no idea how to answer this extremely important question; I wish I
could. Maybe you will be able to do it. Just try! You might have to keep
it cooking on a back burner while concentrating on other things, but don't
give up! All it takes is a new idea! Somebody has to come up with it.
Why not you? (Chaitin 2006)
That is where I want to start. After reading Meta Math! and a number
of Chaitin's philosophical articles [5], and after having written a thesis based on the philosophy of computationalism/informationalism (Dodig-Crnkovic, 2006), I dare to present my modest attempt to answer the big question above, as a part of a Socratic dialogue. My thinking is deeply
rooted in pancomputationalism, characterized by Chaitin in the following way:

And how about the entire universe, can it be considered to be a computer? Yes, it certainly can, it is constantly computing its future state from its current state, it's constantly computing its own time-evolution! And as I believe Tom Toffoli pointed out, actual computers like your PC just hitch a ride on this universal computation! (Chaitin 2006)

[5] A goldmine of articles may be found on Chaitin's web page. See especially Thinking About Gödel & Turing, http://www.umcs.maine.edu/~chaitin/g.pdf
If computation is seen as information processing, pancomputationalism turns into paninformationalism. Historically, within the field of computing and philosophy, two distinct branches have been established: informationalism, in which the focus is on information as the stuff of the universe (Floridi 2002, 2003 and 2004), and computationalism, in which the universe is seen as a computer. Chaitin (2006) mentions the cellular automata researchers and computer scientists Fredkin, Wolfram, Toffoli, and Margolus, and the physicists Wheeler, Zeilinger, 't Hooft, Smolin, Lloyd, Zizzi, Mäkelä, and Jacobson as the most prominent computationalists. In Dodig-Crnkovic (2006) I put forward a dual-aspect info-computationalism, in which the universe is viewed as a structure (information) in a permanent process of change (computation). According to this view, information and computation constitute two aspects of reality and, like particle and wave, or matter and energy, capture different facets of the same physical world. Computation may be either discrete or continuous [6] (digital or analogue). The present approach offers a generalization of traditional computationalism in the sense that "computation" is understood as the process governing the dynamics of the physical universe.
Digital philosophy is fundamentally neo-Pythagorean, especially in its focus on the software aspects of the physical universe (either code or process). Starting from the pancomputationalist version of digital philosophy, epistemology can be naturalized so that knowledge generation can be explained in purely computationalist terms (Dodig-Crnkovic, 2006). This will enable us to suggest a mechanism that produces meaningful behavior and knowledge in biological matter, and will also help us understand what we might need in order to be able to construct intelligent artifacts.

[6] The universe is a network of computing processes and its phenomena are info-computational. Both continuous and discrete, analogue and digital computing are parts of the computing universe (Dodig-Crnkovic, 2006). For a discussion of the necessity of both computational modes on the quantum mechanical level, see Lloyd (2006).
Epistemology Naturalized by Info-Computation
Naturalized epistemology is the idea that the subject matter of epistemology is not our concept of knowledge, but knowledge as a natural phenomenon (Feldman, Kornblith, Stich, Dennett). In what follows I will try to present knowledge generation as natural computation, i.e. information processing. One of the reasons for taking this approach is that info-computationalism provides a unifying framework which makes it possible for different research fields, such as philosophy, computer science, neuroscience, cognitive science, biology, and a number of others, to communicate.
In this account, naturalized epistemology is based on a computational understanding of cognition and agency. This entails an evolutionary understanding of cognition (Lorenz 1977, Popper 1972, Toulmin 1972, Campbell et al. 1989, Harms 2004, Dawkins 1976, Dennett 1991).
Knowledge is a result of the structuring of input data (data →
information → knowledge) (Stonier, 1997) by an interactive
computational process going on in the nervous system during the
adaptive interplay of an agent with the environment, which increases the agent's ability to cope with the world and its dynamics. The mind is seen
as a computational process on an informational structure that, both in its
digital and analogue forms, occurs through changes in the structures of
our brains and bodies as a consequence of interaction with the physical
universe. This approach leads to a naturalized, evolutionary
epistemology that understands cognition as a phenomenon of interactive
information processing which can be ascribed even to the simplest living
organisms (Maturana and Varela) and likewise to artificial life.
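A toy sketch of this structuring (my illustration; the readings, threshold and stages are invented, not taken from Stonier): raw signals are first organized into labelled information, and related pieces of information are then organized into knowledge the agent can act upon:

    from collections import Counter

    # Data: raw signals received from the environment.
    data = [21.0, 21.2, 35.9, 21.1, 36.2, 21.0, 36.0]

    # Data -> information: the readings acquire structure (labelled events).
    information = ["hot" if reading > 30 else "normal" for reading in data]

    # Information -> knowledge: related pieces of information are organized
    # into a summary that can guide the agent's behavior.
    knowledge = Counter(information)
    print(knowledge)   # Counter({'normal': 4, 'hot': 3})
    print("unstable environment" if len(knowledge) > 1 else "stable environment")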
In order to be able to comprehend cognitive systems we can learn
from the historical development of biological cognitive functions and
structures, from the simple ones upward. A very interesting account of developmental ascendancy, from bottom-up to top-down control, is given by Coffman (2006). Among other things, this article addresses the question of the origin of complexity in biological organisms, including an analysis of the relationship between the parts and the whole.
Natural Computation beyond the Turing Limit
As a direct consequence of the computationalist view that every natural
process is computation in a computing universe, “computation” must be
generalized to mean natural computation. MacLennan 2004 defines
“natural computation” as “computation occurring in nature or inspired by
that in nature”, which besides classical computation also includes
quantum computing and molecular computation, and may be represented
by either discrete or continuous models. Examples of computation
occurring in nature encompass information processing in evolution by
natural selection, in the brain, in the immune system, in the self-
organized collective behavior of groups of animals such as ant colonies,
and in particle swarms. Computation inspired by nature includes genetic
algorithms, artificial neural nets, simulated immune systems, and so
forth. There is a considerable synergy gain in relating human-designed
computing with the computing in nature. Here we can illustrate Chaitin’s
claim that “we only understand something if we can program it”: In the
iterative course of modeling and computationally simulating
(programming) natural processes, we learn to reproduce and predict more
and more of the characteristic features of the natural systems.
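One concrete instance of the computation-inspired-by-nature family mentioned above is the genetic algorithm. Here is a minimal sketch (my illustration; the population size, rates and the toy fitness function are arbitrary choices) that evolves bit strings toward a target by selection, recombination and mutation:

    import random

    TARGET = [1] * 20                         # the maximally 'fit' genome

    def fitness(genome):
        # Toy fitness: how many positions match the target.
        return sum(g == t for g, t in zip(genome, TARGET))

    population = [[random.randint(0, 1) for _ in range(20)] for _ in range(30)]
    for generation in range(200):
        population.sort(key=fitness, reverse=True)
        if fitness(population[0]) == 20:
            break
        parents = population[:10]             # selection
        children = []
        for _ in range(20):
            a, b = random.sample(parents, 2)
            cut = random.randrange(20)
            child = a[:cut] + b[cut:]         # recombination (crossover)
            if random.random() < 0.2:         # mutation
                child[random.randrange(20)] ^= 1
            children.append(child)
        population = parents + children
    print("solved in", generation, "generations")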
Classical ideal theoretical computers are mathematical objects and are
equivalent to algorithms, abstract automata (Turing machines or “logical
machines” as Turing called them), effective procedures, recursive
functions, or formal languages. Contrary to traditional Turing computation, in which the computer is an isolated box provided with a suitable algorithm and an input and left alone to compute until the algorithm terminates, interactive computation (Wegner 1998, Goldin et al. 2006) presupposes interaction, i.e. communication of the computing process with the environment during the computation. Interaction consequently
provides a new conceptualization of computational phenomena which
involves communication and information processing. Compared with newly emerging computing paradigms, in particular interactive computing and natural computing, Turing machines form a proper subset of the set of information processing devices (Dodig-Crnkovic, 2006, paper B).
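The difference can be sketched in a few lines (my illustration; the state-update rule is arbitrary). A classical computation maps a fixed input to an output and halts; an interactive computation exchanges values with its environment while it runs:

    def closed_computation(x):
        # Turing-style model: the input is fixed in advance, the machine
        # runs in isolation and halts with the result.
        return x * x

    def interactive_computation():
        # Interactive model (sketch): the process emits its state and
        # receives new input from the environment at every step; it has
        # no final halting configuration.
        state = 0
        while True:
            observation = yield state          # communicate, then wait
            state = (state + observation) % 1000

    print(closed_computation(7))               # 49, end of story

    agent = interactive_computation()
    next(agent)                                # start the process
    for signal in [3, 7, 42]:                  # environment drives the run
        print(agent.send(signal))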
The Wegner-Goldin interactive computer is conceived as an open
system in communication with the environment, the boundary of which
is dynamic, as in living biological systems, and is thus particularly suitable for modeling natural computation. In a computationalist view, organisms
may be seen as constituted by computational processes; they are “living
computers”. In the living cell an info-computational process takes place
using DNA, in an open system exchanging information, matter and
energy with the environment.
Burgin (2005) in his book explores computing beyond the Turing
limit and identifies three distinct components of information processing
systems: hardware (physical devices), software (programs that regulate the system's functioning and can sometimes be identical with the hardware, as in biological computing), and infoware (information processed by the system). Infoware is a shell built around the software-hardware core, which is the traditional domain of automata and algorithm theory. The Semantic Web is an example of infoware, adding a semantic component to the information present on the web (Berners-Lee, Hendler and Lassila, 2001).
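Burgin's three-component decomposition can be rendered as a toy data structure (my illustration, not Burgin's formalism), with the infoware shell wrapping the software-hardware core:

    from dataclasses import dataclass

    @dataclass
    class Core:
        hardware: str        # the physical device
        software: str        # the program regulating its functioning

    @dataclass
    class InformationProcessingSystem:
        core: Core           # the traditional domain of automata theory
        infoware: str        # the information processed by the system

    # In biological computing, hardware and software may coincide.
    cell = InformationProcessingSystem(
        core=Core(hardware="DNA molecule", software="genetic code"),
        infoware="signals exchanged with the cell's environment")
    print(cell)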
For the implementations of computationalism, interactive computing
is the most appropriate general model of natural computing, as it suits the
purpose of modeling a network of mutually communicating processes
(Dodig-Crnkovic 2006). It will be of particular interest to computational
accounts of epistemology, as a cognizing agent interacts with the
environment in order to gain experience and knowledge. It also provides
a unifying framework for the reconciliation of classical and connectionist
views of cognition.
Cognitive Agents Processing Data → Information → Knowledge
Our specific interest is in how the structuring from data to information
and knowledge develops on a phenomenological level in a cognitive
agent (biological or artificial) in its interaction with the environment. The
central role of interaction is expressed by Goertzel (1994) in the
following way:
Today, more and more biologists are waking up to the sensitive
environment-dependence of fitness, to the fact that the properties which
make an organism fit may not even be present in the organism, but may
be emergent between the organism and its environment.
One can say that living organisms are “about” the environment, that
they have developed adaptive strategies to survive by internalizing
environmental constraints. The interaction between an organism and its
environment is realized through the exchange of physical signals that
might be seen as data, or when structured, as information. Organizing
and mutually relating different pieces of information results in
knowledge. In that context, computationalism appears as the most
suitable framework for naturalizing epistemology.
Maturana and Varela (1980) presented a very interesting idea that
even the simplest organisms possess cognition and that their meaning-
production apparatus is contained in their metabolism. Of course, there
are also non-metabolic interactions with the environment, such as locomotion, which also generate meaning for an organism by changing its environment and providing new input data. We will take Maturana and Varela's theory as the basis for a computationalist account of
evolutionary epistemology.
At the physical level, living beings are open complex computational
systems in a regime on the edge of chaos [7], characterized by maximal informational content. Complexity is found between orderly systems, with high information compressibility and low information content, and random systems, with low compressibility and high information content. Living systems are "open, coherent, space-time structures maintained far from thermodynamic equilibrium by a flow of energy" (Chaisson, 2001).

[7] Bertschinger N. and Natschläger T. (2004) claim: "Employing a recently developed framework for analyzing real-time computations we show that only near the critical boundary such networks can perform complex computations on time series. Hence, this result strongly supports conjectures that dynamical systems which are capable of doing complex computational tasks should operate near the edge of chaos, i.e. the transition from ordered to chaotic dynamics."
Langton has compared these different regions to the different states of
matter. Fixed points are like crystals in that they are for the most part
static and orderly. Chaotic dynamics are similar to gases, which can be
described only statistically. Periodic behavior is similar to a non-crystal
solid, and complexity is like a liquid that is close to both the solid and the
gaseous states. In this way, we can once again view complexity and
computation as existing on the edge of chaos and simplicity. (Flake
1998)
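Langton's picture can be made concrete with elementary cellular automata (my illustration; the rule numbers follow Wolfram's scheme, and zlib-compressed size serves as a rough proxy for information content, so the exact ratios are only indicative): an ordered rule compresses heavily, a chaotic rule hardly at all, and a complex rule lies in between:

    import zlib

    def eca_run(rule, width=256, steps=256):
        # Run an elementary cellular automaton from a single live cell and
        # return the whole space-time diagram as bytes.
        table = [(rule >> i) & 1 for i in range(8)]
        row = [0] * width
        row[width // 2] = 1
        history = []
        for _ in range(steps):
            history.extend(row)
            row = [table[(row[i - 1] << 2) | (row[i] << 1) | row[(i + 1) % width]]
                   for i in range(width)]
        return bytes(history)

    for rule in (250, 110, 30):    # ordered, complex, chaotic
        diagram = eca_run(rule)
        ratio = len(zlib.compress(diagram, 9)) / len(diagram)
        print(f"rule {rule}: compressed to {ratio:.1%} of original size")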
Artificial agents may be treated analogously to animals in terms of
different degrees of complexity; they may range from software agents
with no sensory inputs at all to cognitive robots with varying degrees of
sophistication of sensors and varying bodily architecture.
The question is: how does information acquire meaning naturally in the
process of an organism’s interaction with its environment? A
straightforward approach to naturalized epistemology attempts to answer
this question via study of evolution and its impact on the cognitive,
linguistic, and social structures of living beings, from the simplest ones
to those at highest levels of organizational complexity (Bates 2005).
Various animals are equipped with varying physical hardware, sets of sensory apparatuses, goals, and behaviors. For different animals, the
“aboutness” concerning the same physical reality is different in terms of
causes and their effects.
Indeed, cognitive ethologists find the only way to make sense of the cognitive equipment in animals is to treat it as an information processing system, including equipment for perception, as well as the storage and integration of information; that is, after all, the point of calling it cognitive equipment. That equipment which can play such a role confers selective advantage over animals lacking such equipment no longer requires any argument. (Kornblith 1999)
An agent receives inputs from the physical environment (data) and
interprets these in terms of its own earlier experiences, comparing them
with stored data in a feedback loop. Through that interaction between the
environmental data and the inner structure of an agent, a dynamical state
is obtained in which the agent has established a representation of the
situation. The next step in the loop is to match the present state with
goals and preferences (saved in an associative memory). This process
results in an anticipation of what consequences various actions taken from the given state might have (Goertzel 1994). Compare this with Dennett's (1991) Multiple Drafts Model. Here is an alternative formulation:
This approach is not a hybrid dynamic/symbolic one, but interplay
between analogue and digital information spaces, in an attempt to model
the representational behavior of a system. The focus on the explicitly
referential covariation of information between system and environment is
shifted towards the interactive modulation of implicit internal content
and therefore, the resulting pragmatic adaptation of the system via its
interaction with the environment. The basic components of the
framework, its nodal points and their dynamic relations are analyzed,
aiming at providing a functional framework for the complex realm of
autonomous information systems (Arnellos et al. 2005)
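The feedback loop described above can be caricatured in code (entirely my own toy model; the memory structure, the scoring rule and the hidden environment dynamics are invented for illustration). The agent interprets each observation against stored experience and picks the action whose anticipated consequence lies closest to its goal:

    import random

    class Agent:
        def __init__(self, goal):
            self.goal = goal
            self.memory = {}                  # action -> remembered effect

        def act(self, observation):
            # Anticipation: predict each action's outcome from memory and
            # choose the one expected to land closest to the goal.
            def expected_error(action):
                predicted = observation + self.memory.get(action, 0.0)
                return abs(self.goal - predicted)
            return min(range(3), key=expected_error)

        def learn(self, action, observation, outcome):
            # Structure the raw input: store the effect this action had.
            self.memory[action] = outcome - observation

    effects = {0: -1.0, 1: 0.5, 2: 2.0}       # hidden environment dynamics
    agent = Agent(goal=10.0)
    state = 0.0
    for _ in range(30):
        action = agent.act(state)
        new_state = state + effects[action] + random.uniform(-0.1, 0.1)
        agent.learn(action, state, new_state)
        state = new_state
    print(f"final state: {state:.2f}")        # drifts toward the goal of 10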
Very close to the above ideas is the interactivist approach of Bickhard
(2004), and Kulakov & Stojanov (2002). On the ontological level, it
involves naturalism, which means that the physical world (matter) and
mind are integrated, mind being an emergent property of a physical
process, closely related to the process metaphysics of Whitehead (1978).
Evolutionary Development of Cognition
Evolutionary development is the best known explanatory model for life
on earth. If we want to understand the functional characteristics of life, it
is helpful to reveal its paths of development.
One cannot account for the functional architecture, reliability, and goals
of a nervous system without understanding its adaptive history.
Consequently, a successful science of knowledge must include standard
techniques for modeling the interaction between evolution and learning.
(Harms, 2006)
A central question is thus what the mechanism is of the evolutionary
development of cognitive abilities in organisms. Critics of the evolutionary approach point to the impossibility of "blind chance" producing such highly complex structures as intelligent living organisms. The proverbial monkeys typing Shakespeare are often used as an illustration. However, Lloyd (2006) mentions the following first-rate counter-argument, originally due to Chaitin and Bennett. The "typing monkeys" argument
does not take into account the physical laws of the universe, which
dramatically limit what can be typed. The universe is not a typewriter,
but a computer, so a monkey types random input into a computer.
Quantum mechanics supplies the universe with “monkeys” in the form of
random fluctuations, such as those that seeded the locations of galaxies.
The computer into which they type is the universe itself. From a simple
initial state, obeying simple physical laws, the universe has
systematically processed and amplified the bits of information embodied
in those quantum fluctuations. The result of this information processing
is the diverse, information-packed universe we see around us:
programmed by quanta, physics gave rise first to chemistry and then to
life; programmed by mutation and recombination, life gave rise to
Shakespeare; programmed by experience and imagination, Shakespeare
gave rise to Hamlet. You might say that the difference between a
monkey at a typewriter and a monkey at a computer is all the difference
in the world. (Lloyd 2006)
Allow me to add one comment on Lloyd's computationalist claim. The universe/computer on which a monkey types is at the same time the hardware and the program, in a way similar to a Turing machine. An example from biological computing is DNA, where the hardware (the molecule) is at the same time the software (the program, the code). In general, each new input restructures the computational universe and changes the preconditions for future inputs. These processes are interactive and self-organizing. This is what provides the essential speed-up in the process of building more and more complex structures.
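A toy sketch of this point (entirely illustrative, and not Lloyd's model): if the transition rule is itself part of the state, each input can rewrite the very rule that will process later inputs, so the "machine" reorganizes itself as it computes:

    # The 'program' is an ordinary data structure ...
    rule = {0: 1, 1: 2, 2: 0}
    state = 0
    for bit in [1, 0, 1, 1, 0, 1]:            # the monkey's random keystrokes
        state = rule[state]
        if bit:
            # ... so an input can restructure the rule that will process
            # all future inputs: hardware and software coincide.
            rule[state] = (rule[state] + 1) % 3
        print(state, rule)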
Informational Complexity of Cognitive Structures
Dynamics lead to statics, statics leads to dynamics, and the simultaneous
analysis of the two provides the beginning of an understanding of that
mysterious process called mind. (Goertzel 1994)
In the info-computationalist vocabulary, “statics” (structure)
corresponds to “information” and “dynamics” corresponds to
“computation”.
One question which may be asked is: why doesn’t an organism
exclusively react to data as it is received from the world/environment?
Why is information used as building blocks, and why is knowledge
constructed? In principle, one could imagine a reactive agent that
responds directly to input data without building an informational
structure out of raw input.
The reason may be found in the computational efficiency of the
computation concerned. Storage of data that are constant or are often
reused saves huge amounts of time. So, for instance, if instead of dealing with each individual pixel in a picture we can make use of symbols or patterns that can be matched against similar memorized symbols or patterns, the picture can be handled much more quickly.
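In computational terms this reuse of memorized chunks is memoization. A minimal sketch (the "perceptual" function and its cost are invented for illustration):

    import time
    from functools import lru_cache

    @lru_cache(maxsize=None)
    def classify_patch(patch):
        # Stand-in for an expensive perceptual computation: the first call
        # pays the full cost, every repetition is answered from memory.
        time.sleep(0.01)
        return "edge" if sum(patch) > 2 else "flat"

    scene = [(1, 1, 1), (0, 0, 1)] * 1500     # mostly repeated chunks

    start = time.perf_counter()
    labels = [classify_patch(p) for p in scene]
    print(f"{len(labels)} patches in {time.perf_counter() - start:.2f} s")
    # Only 2 distinct patches were ever computed; 2998 were simply recalled.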
Studies of vision show that cognition focuses on that part of the scene
which is variable and dynamic, and uses memorized data for the rest that
is static (this is the notorious frame problem of AI). Based on the same
mechanism, we use already existing ideas to recognize, classify, and
characterize phenomena. Our cognition is thus an emergent phenomenon,
resulting from both memorized (static) and observed (dynamic) streams.
Forming chunks of structured data into building blocks, instead of
performing time-consuming computations on those data sets in real time,
is an enormously powerful acceleration mechanism. With each higher
level of organization, the computing capacity of an organism’s cognitive
apparatus is further increased. The efficiency of meta-levels becomes evident in computational implementations. Goertzel illustrates this
multilevel control structure by means of the three-level “pyramidal”
vision processing parallel computer developed by Levitan and his
colleagues at the University of Massachusetts. The bottom level deals
with sensory data and with low-level processing such as segmentation
into components. The intermediate level handles grouping, shape
detection and such; and the top level processes this information
“symbolically”, constructing an overall interpretation of the scene. This
three-level perceptual hierarchy appears to be an exceptionally effective
approach to computer vision.
We look for those objects that we expect to see and we look for those
shapes that we are used to seeing. If a level 5 process corresponds to an
expected object, then it will tell its children [i. e. sub-processes] to look
for the parts corresponding to that object, and its children will tell their
children to look for the complex geometrical forms making up the parts
to which they refer, et cetera. (Goertzel 1994)
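A skeletal rendering of such a three-level hierarchy (my sketch; the feature extraction and the threshold are invented, not Levitan's actual design):

    def low_level(image):
        # Bottom level: raw sensory data -> segmented feature points.
        return [(r, c) for r, row in enumerate(image)
                       for c, value in enumerate(row) if value > 0]

    def mid_level(points):
        # Intermediate level: grouping and simple shape statistics.
        rows = [r for r, _ in points]
        return {"n_features": len(points),
                "height": max(rows) - min(rows) if rows else 0}

    def top_level(shape):
        # Top level: a 'symbolic' interpretation of the whole scene.
        return "object present" if shape["n_features"] > 3 else "empty scene"

    image = [[0, 1, 0],
             [1, 1, 1],
             [0, 1, 0]]
    print(top_level(mid_level(low_level(image))))   # -> object present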
Human intelligence is indivisible from its presence in a body
(Dreyfus 1972, Gärdenfors 2000, 2005, Stuart 2003). When we observe,
act and reason, we relate different ideas in a way that resembles the
relation of our body with various external objects. Cognitive structures of living organisms are complex systems with an evolutionary history (Gell-Mann 1995): they evolved in the interaction of the first proto-organisms with their environment and have developed towards more and more complex structures. This is in complete agreement with the info-computational view and with the understanding of human cognition as a part of this overall picture.
Conclusions
This essay attempts to address the question posed by Chaitin (2006)
about the origin of creativity and novelty in a computational universe.
To that end, an info-computationalist framework was assumed, within
which information is the stuff of the universe while computation is its
dynamics. Based on the understanding of natural phenomena as info-
computational, the computer in general is conceived as an open
interactive system, and the classical Turing machine is understood as a special case of a general interactive/adaptive/self-organizing universal natural
computer. In a computationalist view, organisms are constituted by
computational processes, implementing computation in vivo.
All cognizing beings are physical (informational) systems in constant
interaction with their environment. The essential feature of cognizing
living organisms is their ability to manage complexity, and to handle
complicated environmental conditions with a variety of responses that
are results of adaptation, variation, selection, learning, and/or reasoning.
Increasingly complex living organisms arise as a consequence of
evolution. They are able to register inputs (data) from the environment,
to structure those into information, and, in more developed organisms,
into knowledge. The evolutionary advantage of using structured, component-based approaches (data → information → knowledge) is improved response time and increased computational efficiency of cognitive processes.
The main reason for choosing an info-computationalist view for
naturalizing epistemology is that it presents a unifying framework which
enables the research fields of philosophy, computer science, neuroscience, cognitive science, biology, artificial intelligence and a number of others to communicate, exchange their results and build a common body of knowledge. It also provides a natural solution to the old problem of the role of representation, a discussion between two seemingly incompatible views: a symbolic, explicit and static notion of representation versus an implicit and dynamic (interactive, neural-network-type) one. Within the info-computational framework, the classical (Turing-machine type) and connectionist views are reconciled and used to describe different levels or aspects of cognition.
So where do new mathematical and biological ideas come from? How
do they emerge?
It seems to me that, as a conclusion, we can confidently say that they come from the world. Humans, just like other biological organisms, are a tiny subset of the universe, and the universe definitely has an impact on us. A part of the new ideas comes as a consequence of the re-configuration and reshaping of already existing elements in the biosphere, as in component-based engineering. Life learns both from already existing elements and from what comes from outside our horizon.
Even if the universe is a huge (quantum mechanical) computer, for us it is an infinite reservoir of new discoveries and surprises. For even if the universe as a whole were a totally deterministic mechanism, for humans to know its functioning and to predict its behavior would take infinite time, since, as Chaitin has demonstrated, there are incompressible truths. In short, in order to be able to predict the Universe-computer, we would need the Universe-computer itself to compute its next state.
That was my attempt to argue that in the best of all possible worlds
(“le meilleur des mondes possibles” – Leibniz 1710) there are sources of
creativity and unpredictability, for us humans, even given a
pancomputational stance. I have done my homework.
Acknowledgements
I would like to thank Greg Chaitin for his inspiring ideas presented in his
Turing Lecture on epistemology as information theory and the
subsequent paper, and for his kindness in answering my numerous
questions. Thanks to Chris Calude, too. It was a great privilege to be
invited to contribute to this book.
References
Arnellos, A., Spyrou, T. and Darzentas, J. “The Emergence of Interactive Meaning
Processes in Autonomous Systems”, In: Proceedings of FIS 2005: Third International
Conference on the Foundations of Information Science. Paris, July 4-7, 2005.
Bates, M. J. “Information and Knowledge: An Evolutionary Framework for Information
Science”. Information Research 10, no. 4 (2005) Accessible at
http://InformationR.net/ir/10-4/paper239.html
Bertschinger, N. and Natschläger, T. “Real-Time Computation at the Edge of Chaos in
Recurrent Neural Networks”, Neural Comp. 16 (2004) 1413-1436
Berners-Lee, T., Hendler, J. and Lassila, O. “The Semantic Web”. Scientific American,
Vol. 284, No. 5, pp. 34-43 (2001). Accessible at
http://www.sciam.com/article.cfm?articleID=00048144-10D2-1C70-84A9809EC588EF21&ref=sciam
Bickhard, M. H. “The Dynamic Emergence of Representation”. In H. Clapin, P. Staines,
P. Slezak (Eds.) Representation in Mind: New Approaches to Mental Representation.
(71-90). Elsevier. 2004.
Burgin, M. (2005) Super-Recursive Algorithms, Springer Monographs in Computer
Science.
Campbell, D. T. and Paller, B. T. “Extending Evolutionary Epistemology to “Justifying”
Scientific Beliefs (A sociological rapprochement with a fallibilist perceptual
foundationalism?).” In Issues in evolutionary epistemology, edited by K. Hahlweg and C.
A. Hooker, (1989) 231-257. Albany: State University of New York Press.
Chaisson, E.J. (2001) Cosmic Evolution. The Rise of Complexity in Nature. pp. 16-78.
Harvard University Press, Cambridge
Chaitin, G. J. (1987) Algorithmic Information Theory, Cambridge University Press
Chaitin, G. J. (2006) “Epistemology as Information Theory: From Leibniz to Ω”, Collapse, Volume I, pp. 27-51. Alan Turing Lecture given at E-CAP 2005,
http://www.cs.auckland.ac.nz/CDMTCS/chaitin/ecap.html
Chaitin, G. J. (1987) Information Randomness & Incompleteness: Papers on Algorithmic
Information Theory. Singapore: World Scientific.
Chaitin, G. J. (2003) Dijon Lecture http://www.cs.auckland.ac.nz/CDMTCS/chaitin
Chaitin, G. J. (2005). Meta Math!: The Quest for Omega. Pantheon.
Coffman, A. J. “Developmental Ascendency: From Bottom-up to Top-down Control”,
Biological Theory Spring 2006, Vol. 1, No. 2: 165-178.
Dawkins, R. 1976, 1982. The Selfish Gene. Oxford University Press.
Dennett, D. (1995), Darwin's Dangerous Idea, Simon & Schuster
Dennett, D. (1991) Consciousness Explained. Penguin Books
Dodig-Crnkovic, G. (2006) Investigations into Information Semantics and Ethics of
Computing, Mälardalen University Press,
http://www.diva-portal.org/mdh/abstract.xsql?dbid=153
Dreyfus, H. L. (1972) What Computers Can't Do: A Critique of Artificial Reason. Harper
& Row
Flake, G. W. (1998) The Computational Beauty of Nature: Computer Explorations of
Fractals, Chaos, Complex Systems, and Adaptation, MIT Press
Floridi, L. (2002) “What is the Philosophy of Information?”, Metaphilosophy (33.1/2),
123-145
Floridi, L. (2003) Blackwell Guide to the Philosophy of Computing and Information
Floridi, L. (2004) “Open Problems in the Philosophy of Information”, Metaphilosophy,
Volume 35: Issue 4
Fredkin, E. Digital Philosophy, http://www.digitalphilosophy.org/finite_nature.htm
Gärdenfors, P. (2000) Conceptual Spaces, Bradford Books, MIT Press
_______ , Zlatev, J. and Persson, T. “Bodily mimesis as 'the missing link' in human
cognitive evolution”, Lund University Cognitive Studies 121, Lund. 2005
Gell-Mann, M. (1995) The Quark and the Jaguar: Adventures in the Simple and the
Complex. Owl Books.
Goertzel, B. (1993) The Evolving Mind. Gordon and Breach
_______ (1994) Chaotic Logic. Plenum Press.
http://www.goertzel.org/books/logic/contents.html
Goldin, D., Smolka S. and Wegner P. eds. (2006) Interactive Computation: The New
Paradigm, to be published by Springer-Verlag
Harms, W. F. “Naturalizing Epistemology: Prospectus 2006”, Biological Theory 1(1)
2006, 23–24.
Harms, W. F. Information and Meaning in Evolutionary Processes. Cambridge University
Press, 2004
Kornblith, H. (1999) Knowledge in Humans and Other Animals. Noûs 33 (s13), 327.
Kornblith, H. ed. (1994) Naturalizing Epistemology, second edition, Cambridge: The
MIT Press
Kulakov, A. and Stojanov, G. “Structures, Inner Values, Hierarchies And Stages:
Essentials For Developmental Robot Architecture”, 2nd International Workshop on
Epigenetic Robotics, Edinburgh, 2002
Leibniz, G. W. Philosophical Papers and Letters, ed. Leroy E. Loemker (Dordrecht: Reidel, 1969)
Lloyd, S (2006) Programming the Universe: A Quantum Computer Scientist Takes on the
Cosmos, Alfred A. Knopf
Lorenz, K. (1977) Behind the Mirror. London: Methuen
MacLennan, B. “Natural computation and non-Turing models of computation”,
Theoretical Computer Science 317 (2004) 115 – 145
Maturana, H. and Varela, F. (1992) The Tree of Knowledge. Shambala
_______ (1980) Autopoiesis and Cognition: The Realization of the Living. D. Reidel.
Popper, K. R. (1972) Objective Knowledge: An Evolutionary Approach. Oxford: The
Clarendon Press.
Stich, S. (1993) “Naturalizing Epistemology: Quine, Simon and the Prospects for
Pragmatism” in C. Hookway & D. Peterson, eds., Philosophy and Cognitive Science,
Royal Inst. of Philosophy, Supplement no. 34 (Cambridge University Press) p. 1-17.
Stonier, T. (1997) Information and Meaning. An Evolutionary Perspective, Springer,
Berlin, N.Y.
Stuart, S. (2003) “The Self as an Embedded Agent”, Minds and Machines, 13 (2): 187
Tasic, V. (2001) Mathematics and the Roots of Postmodern Thought. Oxford University
Press.
Toulmin, S. (1972) Human Understanding: The Collective Use and Evolution of
Concepts. Princeton University Press.
Wegner, P. “Interactive Foundations of Computing”, Theoretical Computer Science 192
(1998) 315-51.
Whitehead, A. N. (1978) Process and Reality: An Essay in Cosmology. New York: The
Free Press.
Wolff, J. G. (2006) Unifying Computing and Cognition, CognitionResearch.org.uk
Wolfram, S. (2002) A New Kind of Science. Wolfram Science.