From the Closed Classical Algorithmic Universe
to an Open World of Algorithmic Constellations
Mark Burgin1 and Gordana Dodig-Crnkovic2
1 Dept. of Mathematics, UCLA, Los Angeles, USA. E-mail: mburgin@math.ucla.edu
2 Mälardalen University, Department of Computer Science and Networks,
School of Innovation, Design and Engineering, Västerås, Sweden;
E-mail: gordana.dodig-crnkovic@mdh.se
Abstract
In this paper we analyze the methodological and philosophical implications of
algorithmic aspects of unconventional computation. First, we describe how the
classical algorithmic universe developed and analyze why it became closed in
the conventional approach to computation. Then we explain how new models
of algorithms turned the classical closed algorithmic universe into the open
world of algorithmic constellations, allowing higher flexibility and expressive
power, and supporting constructivism and creativity in mathematical modeling. As
Gödel’s undecidability theorems demonstrate, the closed algorithmic universe
restricts essential forms of mathematical cognition. In contrast, the open algo-
rithmic universe, and even more the open world of algorithmic constellations,
removes such restrictions and enables a new, richer understanding of computation.
Keywords: Unconventional algorithms, unconventional computing, algorith-
mic constellations, computing beyond the Turing machine model.
Introduction
The development of various systems is characterized by a tension be-
tween forces of conservation (tradition) and change (innovation). Tradi-
tion sustains the system and its parts, while innovation moves it forward,
advancing some segments and weakening others. Efficient functioning
of a system depends on the equilibrium between tradition and innova-
tion. When there is no equilibrium, the system declines; too much tradition
brings stagnation and often collapse under the pressure of inner and/or
outer forces, while too much innovation leads to instability and frequent-
ly to rupture.
The same is true of the development of different areas and aspects of
social systems, such as science and technology. In this article we are in-
terested in computation, which has become increasingly important for
society as the basic aspect of information technology. Tradition in com-
putation is represented by conventional computation and classical algo-
rithms, while unconventional computation stands for far-reaching in-
novation.
It is possible to distinguish three areas in which computation can be
unconventional:
1. Novel hardware (e.g. quantum systems) provides material realiza-
tion for unconventional computation.
2. Novel algorithms (e.g. super-recursive algorithms) provide opera-
tional realization for unconventional computation.
3. Novel organization (e.g. evolutionary computation or self-
optimizing computation) provides structural realization for unconven-
tional computation.
Here we focus on algorithmic aspects of unconventional computation
and analyze methodological and philosophical problems related to it,
making a distinction between three classes of algorithms: recursive,
subrecursive, and super-recursive algorithms.
Each type of recursive algorithms forms a class in which it is possible
to compute exactly the same functions that are computable by Turing
machines. Examples of recursive algorithms are partial recursive func-
tions, RAM, von Neumann automata, Kolmogorov algorithms, and
Minsky machines.
Each type of subrecursive algorithms forms a class that has less com-
putational power than the class of all Turing machines. Examples of
subrecursive algorithms are finite automata, primitive recursive func-
tions and recursive functions.
Each type of super-recursive algorithms forms a class that has more
computational power than the class of all Turing machines. Examples of
super-recursive algorithms are inductive and limit Turing machines, lim-
it partial recursive functions and limit recursive functions.
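To make the distinction concrete, the following minimal Python sketch is our illustration (not taken from the paper): a primitive recursive function, which always halts; a function defined by unbounded search, which may never halt and yields a result only by halting; and a comment marking where super-recursive models depart from both.

```python
# Illustrative sketch (ours, not the authors'): the three classes on small examples.

def add(m: int, n: int) -> int:
    # Subrecursive example: addition by primitive recursion on m. Primitive
    # recursive functions are total, and as a class they are strictly weaker
    # than Turing machines.
    return n if m == 0 else 1 + add(m - 1, n)

def mu_search(p) -> int:
    # Recursive example: the unbounded mu-operator. The search may run forever,
    # and a value counts as a result only if the loop halts -- exactly the
    # Turing-machine regime.
    n = 0
    while not p(n):
        n += 1
    return n

# Super-recursive models (e.g. inductive or limit Turing machines) drop the
# halting requirement: their result is the output on which the computation
# eventually stabilizes, which is what gives them additional power (see below).

print(add(2, 3))                        # -> 5
print(mu_search(lambda n: n * n > 40))  # -> 7
```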
The main problem is that conventional types and models of algorithms
make the algorithmic universe, i.e., the world of all existing and possible
algorithms, closed because there is a rigid boundary in this universe
formed by recursive algorithms, such as Turing machines, and described
by the Church-Turing Thesis. This closed system has been overtly dom-
inated by discouraging incompleteness results, such as Gödel incom-
pleteness theorems.
Contrary to this, super-recursive algorithms, which control and direct
unconventional computations, break this boundary, leading to an open al-
gorithmic multiverse – a world of unrestricted creativity.
The paper is organized as follows. First, we summarize how the closed
algorithmic universe was created and what the advantages and disad-
vantages are of living inside such a closed universe. Next, we describe the
breakthrough brought about by the creation of super-recursive algo-
rithms. In Section 4, we analyze super-recursive algorithms as cognitive
tools. The main effect is the immense growth of cognitive possibilities
and computational power that enables corresponding growth of informa-
tion processing devices.
The Closed Universe of Turing Machines and Other Recursive
Algorithms
Historically, after having an extensive experience of problem solving,
mathematicians understood that problem solutions were based on vari-
ous algorithms. Construction algorithms and deduction algorithms have
been the main tools of mathematical research. When they repeatedly en-
countered problems they were not able to solve, mathematicians, and es-
pecially experts in mathematical logic, came to the conclusion that it was
necessary to develop a rigorous mathematical concept of algorithm and
to prove that some problems are indeed unsolvable. Consequently, a di-
versity of exact mathematical models of algorithm as a general concept
was proposed. The first models were λ-calculus developed by Church in
1931 – 1933, general recursive functions introduced by Gödel in 1934,
ordinary Turing machines constructed by Turing in 1936 and in a less
explicit form by Post in 1936, and partial recursive functions built by
Kleene in 1936. Creating λ-calculus, Church was developing a logical
theory of functions and suggested a formalization of the notion of com-
putability by means of λ-definability. In 1936, Kleene demonstrated that
λ-definability is computationally equivalent to general recursive func-
tions. In 1937, Turing showed that λ-definability is computationally
equivalent to Turing machines. Church was so impressed by these re-
sults that he suggested what was later called the Church-Turing thesis.
Turing formulated a similar conjecture in the Ph.D. thesis that he wrote
under Church's supervision.
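As a concrete reminder of what the basic model looks like, here is a minimal simulator of an ordinary one-tape Turing machine; the encoding of the transition table and the example machine are our own illustrative choices, not part of any of the original formalisms.

```python
# A minimal one-tape Turing machine simulator (an illustrative sketch, not a
# definitive formalization). delta maps (state, symbol) to
# (new_state, new_symbol, head_move), with head_move in {-1, 0, +1}.

def run_turing_machine(delta, tape, state="q0", accept="halt", max_steps=10_000):
    cells = dict(enumerate(tape))       # sparse tape; "_" is the blank symbol
    head = 0
    for _ in range(max_steps):
        if state == accept:
            # A recursive algorithm delivers its result only by halting.
            return "".join(cells[i] for i in sorted(cells))
        symbol = cells.get(head, "_")
        state, cells[head], move = delta[(state, symbol)]
        head += move
    raise RuntimeError("no result: the machine did not halt within the bound")

# Example machine (ours): replace a block of 1s by 0s, then halt on the blank.
delta = {
    ("q0", "1"): ("q0", "0", +1),
    ("q0", "_"): ("halt", "_", 0),
}
print(run_turing_machine(delta, "111"))   # prints "000_"
```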
It is interesting to know that the theory of Frege [1] actually contains
λ-calculus. So, there were chances to develop a theory of algorithms and
computability in the 19th century. However, at that time, the mathemati-
cal community did not feel the need for such a theory and probably would
not have accepted it if somebody had created it.
The Church-Turing thesis explicitly marks out a rigid boundary for the
algorithmic universe, making this universe closed, with Turing machines
forming the boundary. Any algorithm from this universe lies inside that boundary.
After the first breakthrough, other mathematical models of algorithms
were suggested. They include a variety of Turing machines: multihead,
multitape Turing machines, Turing machines with n-dimensional tapes,
nondeterministic, probabilistic, alternating and reflexive Turing ma-
chines, Turing machines with oracles, Las Vegas Turing machines, etc.;
neural networks of various types – fixed-weights, unsupervised, super-
vised, feedforward, and recurrent neural networks; von Neumann au-
tomata and general cellular automata; Kolmogorov algorithms; finite au-
tomata of different forms – automata without memory, autonomous
automata, automata without output or accepting automata, determinis-
tic, nondeterministic, probabilistic automata, etc.; Minsky machines;
Storage Modification Machines or, simply, Schönhage machines; Random
Access Machines (RAM) and their modifications - Random Access Ma-
chines with the Stored Program (RASP), Parallel Random Access Ma-
chines (PRAM); Petri nets of various types – ordinary and ordinary with
restrictions, regular, free, colored, and self-modifying Petri nets, etc.;
vector machines; array machines; multidimensional structured model of
computation and computing systems; systolic arrays; hardware modifi-
cation machines; Post productions; normal Markov algorithms; formal
grammars of many forms – regular, context-free, context-sensitive,
phrase-structure, etc.; and so on. As a result, the theory of algorithms,
automata and computation has become one of the foundations of com-
puter science.
In spite of all the differences between and diversity of algorithms, there is
a unity in the system of algorithms. As new models of algorithms ap-
peared, it was proved that none of them could compute more functions
than the simplest Turing machine with a one-dimensional tape. All this
gave more and more evidence for the validity of the Church-Turing Thesis.
Moreover, all attempts to find mathematical models of algorithms
that were stronger than Turing machines were fruitless. Equivalence
with Turing machines has been proved for many models of algorithms.
That is why the majority of mathematicians and computer scientists have
believed that the Church-Turing Thesis was true. Many logicians assume
that the Thesis is an axiom that does not need any proof. Few believe
that it is possible to prove this Thesis utilizing some evident axioms.
More careful researchers consider this conjecture a law of the theory
of algorithms, which is similar to the laws of nature that might be sup-
ported by more and more evidence or refuted by a counter-example but
cannot be proved.
Besides, the Church-Turing Thesis is extensively utilized in the theory
of algorithms, as well as in the methodological context of computer sci-
ence. It has become almost an axiom. Some researchers even consider
this Thesis as a unique absolute law of computer science.
Thus, we can see that the initial aim of mathematicians was to build a
closed algorithmic universe, in which a universal model of algorithm
provided a firm foundation and, as was found later, a rigid boundary
for a universe that was supposed to contain all of mathematics.
It is possible to see the following advantages and disadvantages of the
closed algorithmic universe.
Advantages:
1. Turing machines and partial recursive functions are feasible math-
ematical models.
2. These and other recursive models of algorithms provide an efficient
possibility to apply mathematical techniques.
3. The closed algorithmic universe allowed mathematicians to build
beautiful theories of Turing machines, partial recursive functions and
some other recursive and subrecursive algorithms.
4. The closed algorithmic universe provides sufficiently exact bounda-
ries for knowing what is possible to achieve with algorithms and what is
impossible.
5. The closed algorithmic universe provides a common formal lan-
guage for researchers.
6. For computer science and its applications, the closed algorithmic
universe provides a diversity of mathematical models with the same
computing power.
Disadvantages:
1. The main disadvantage of this universe is that its main principle -
the Church-Turing Thesis - is not true.
2. The closed algorithmic universe restricts applications and in par-
ticular, mathematical models of cognition.
3. The closed algorithmic universe does not correctly reflect the exist-
ing computing practice.
The Open World of Super-Recursive Algorithms and Algorithmic
Constellations
Contrary to the general opinion, some researchers expressed their con-
cerns about the Church-Turing Thesis. As Nelson writes [2], "Although
Church-Turing Thesis has been central to the theory of effective decida-
bility for fifty years, the question of its epistemological status is still an
open one.” There were also researchers who directly suggested argu-
ments against the validity of the Church-Turing Thesis. For instance, Kal-
mar [3] raised intuitionistic objections, while Lucas and Benacerraf dis-
cussed objections to mechanism based on theorems of Gödel that
indirectly threaten the Church-Turing Thesis. In 1972, Gödel’s observa-
tion entitled “A philosophical error in Turing’s work” was published
in which he declared: "Turing in his 1937, p. 250 (1965, p. 136), gives
an argument which is supposed to show that mental procedures cannot
go beyond mechanical procedures. However, this argument is inconclu-
sive. What Turing disregards completely is the fact that mind, in its use,
is not static, but constantly developing, i.e., that we understand abstract
terms more and more precisely as we go on using them, and that more
and more abstract terms enter the sphere of our understanding. There
may exist systematic methods of actualizing this development, which
could form part of the procedure. Therefore, although at each stage the
number and precision of the abstract terms at our disposal may be finite,
both (and, therefore, also Turing’s number of distinguishable states of
mind) may converge toward infinity in the course of the application of
the procedure.” [4]
Thus, pointing out that Turing completely disregarded the fact that the mind,
in its use, is not static but constantly developing, Gödel predicted the neces-
sity of super-recursive algorithms that realize inductive and topological
computations [5]. Recently, Sloman [6] explained why recursive models
of algorithms, such as Turing machines, are irrelevant for artificial intel-
ligence.
Even if we abandon theoretical considerations and ask the practical
question whether recursive algorithms provide an adequate model of
modern computers, we will find that the common view of how computers
function is not correct. An analysis demonstrates that while recur-
sive algorithms gave a correct theoretical representation for computers at
the beginning of the “computer era”, super-recursive algorithms are
more adequate for modern computers. Indeed, at the beginning, when
computers appeared and were utilized for some time, it was necessary to
print out data produced by the computer to get a result. After printing, the
computer stopped functioning or began to solve another problem. Now
people are working with displays, and computers produce their results
mostly on the screen of a monitor. These results exist on the screen
only while the computer functions. If the computer halts, the result on
its screen disappears. This is contrary to the basic condition on ordinary
(recursive) algorithms, which requires halting in order to give a result.
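The contrast can be made tangible with a small sketch of ours (not an example from the paper): a program whose "result" is whatever currently stands on the screen while it keeps running; stopping the program erases the output, and no step is designated as final.

```python
# Illustrative sketch (ours): the result lives on the "screen" only while the
# program runs; no halting step hands over a final output.

import itertools

def pi_on_screen(demo_steps=None):
    # Leibniz series: the displayed value improves at every step, and the
    # result is the value toward which the display stabilizes.
    total = 0.0
    for k in itertools.count():
        total += (-1) ** k / (2 * k + 1)
        print(f"\rpi ~ {4 * total:.10f}", end="", flush=True)
        if demo_steps is not None and k >= demo_steps:  # bound only for demo runs
            break

pi_on_screen(demo_steps=100_000)  # without the bound, only interruption stops it
```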
Such big networks as the Internet give another important example of a sit-
uation in which conventional algorithms are not adequate. Algorithms
embodied in a multiplicity of different programs organize network func-
tions. It is generally assumed that any computer program is a conven-
tional, that is, recursive algorithm. However, a recursive algorithm has to
stop to give a result, but if a network shuts down, then something is
wrong and it gives no results. Consequently, recursive algorithms turn
out to be too weak for network representation, modeling, and study.
Moreover, no computer works without an operating system. Any op-
erating system is a program, and any computer program is an algorithm
according to the general understanding. While a recursive algorithm has
to halt to give a result, we cannot say that a result of the functioning of an
operating system is obtained when the computer stops functioning. On the
contrary, when the operating system does not work, it does not give the
expected result.
Looking at the history of unconventional computations and super-
recursive algorithms, we see that Turing was the first to go beyond
the “Turing” computation that is bounded by the Church-Turing Thesis.
In his 1938 doctoral dissertation, Turing introduced the concept of a Tu-
ring machine with an oracle. This work was subsequently published in
1939. Another approach that went beyond the Turing-Church Thesis was
developed by Shannon [7], who gave a mathematical theory of the differ-
ential analyzer, a device able to perform continuous operations with real
numbers, such as the operation of differentiation. However, the mathemat-
ical community did not accept operations with real numbers as tractable
because irrational numbers do not have finite numerical representations.
In 1957, Grzegorczyk introduced a number of equivalent definitions of
computable real functions. Three of Grzegorczyk’s constructions have
been extended and elaborated independently to super-recursive method-
ologies: the domain approach [8,9], type 2 theory of effectivity or type 2
recursion theory [10,11], and the polynomial approximation approach
[12]. In 1963, Scarpellini introduced the class M1 of functions that are
built with the help of five operations. The first three are elementary: sub-
stitutions, sums and products of functions. The two remaining operations
are performed with real numbers: integration over finite intervals and
taking solutions of Fredholm integral equations of the second kind.
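For concreteness, a Fredholm integral equation of the second kind, whose solution operation Scarpellini admits, has the following standard textbook form (our rendering, not quoted from Scarpellini):

```latex
% Fredholm integral equation of the second kind (standard textbook form, not
% quoted from Scarpellini): the unknown function \varphi appears both outside
% and inside the integral.
\[
  \varphi(x) \;=\; f(x) \;+\; \lambda \int_{a}^{b} K(x,t)\,\varphi(t)\,dt ,
\]
% where f and the kernel K are given and \lambda is a parameter. Solving for
% \varphi is an operation on real-valued functions rather than on finite words,
% which is why it falls outside the recursive framework.
```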
Yet another type of super-recursive algorithms was introduced in 1965
by Gold and Putnam, who brought in concepts of limiting recursive
function and limiting partial recursive function. In 1967, Gold produced
a new version of limiting recursion, also called inductive inference, and
applied it to problems of learning. Now inductive inference is a fruitful
direction in machine learning and artificial intelligence.
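A standard formulation of limiting recursion (our rendering of the notion of Gold and Putnam, with notation chosen by us) makes the idea precise:

```latex
% Limiting recursion in the sense of Gold and Putnam (standard formulation,
% our notation): f is limiting recursive if some total recursive g
% approximates it in the limit,
\[
  f(x) \;=\; \lim_{t \to \infty} g(x,t),
  \qquad\text{i.e.}\qquad
  \exists t_0\; \forall t \ge t_0:\; g(x,t) = f(x).
\]
% The guesses g(x,0), g(x,1), \dots may change finitely many times before
% stabilizing on f(x); no single step is marked as the final answer.
```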
One more direction in the theory of super-recursive algorithms
emerged in 1967 when Zadeh introduced fuzzy algorithms. It is interest-
ing that limiting recursive functions and limiting partial recursive func-
tions were not considered valid models of algorithms even by their au-
thors. A proof that fuzzy algorithms are more powerful than Turing
machines was obtained much later (Wiedermann, 2004). Thus, in spite
of the existence of super-recursive algorithms, researchers continued to
believe in the Church-Turing Thesis as an absolute law of computer sci-
ence.
After the first types of super-recursive models had been studied, many
other super-recursive algorithmic models were created: inductive
Turing machines, limit Turing machines, infinite time Turing machines,
general Turing machines, accelerating Turing machines, type 2 Turing
machines, mathematical machines, δ-Q-machines, general dynamical
systems, hybrid systems, finite dimensional machines over real numbers,
R-recursive functions and so on.
To organize the diverse variety of algorithmic models, we introduce
the concept of an algorithmic constellation. Namely, an algorithmic con-
stellation is a system of algorithmic models of the same type.
Some algorithmic constellations are disjoint, while other algorithmic
constellations intersect. There are algorithmic constellations that are
parts of other algorithmic constellations.
Below, some of these algorithmic constellations are described.
The sequential algorithmic constellation consists of models of sequen-
tial algorithms. This constellation includes such models as deterministic
finite automata, deterministic pushdown automata with one stack, evolu-
tionary finite automata, Turing machines with one head and one tape,
Post productions, partial recursive functions, normal Markov algorithms,
formal grammars, inductive Turing machines with one head and one
tape, limit Turing machines with one head and one tape, reflexive Turing
machines with one head and one tape, infinite time Turing machines,
general Turing machines with one head and one tape, evolutionary Tu-
ring machines with one head and one tape, accelerating Turing machines
with one head and one tape, type 2 Turing machines with one head and
one tape, Turing machines with oracles.
The concurrent algorithmic constellation consists of models of con-
current algorithms. This constellation includes such models as Petri nets,
artificial neural networks, nondeterministic Turing machines, probabilis-
tic Turing machines, alternating Turing machines, Communicating Se-
quential Processes (CSP) of Hoare, Actor model, Calculus of Communi-
cating Systems (CCS) of Milner, Kahn process networks, dataflow
process networks, discrete event simulators, View-Centric Reasoning
(VCR) model of Smith, event-signal-process (ESP) model of Lee and
Sangiovanni-Vincentelli, extended view-centric reasoning (EVCR)
model of Burgin and Smith, labeled transition systems, Algebra of
Communicating Processes (ACP) of Bergstra and Klop, event-action-
process (EAP) model of Burgin and Smith, synchronization trees, and
grid automata.
The parallel algorithmic constellation consists of models of parallel
algorithms and is a part of the concurrent algorithmic constellation. This
constellation includes such models as pushdown automata with several
stacks, Turing machines with several heads and one or several tapes,
Parallel Random Access Machines, Kolmogorov algorithms, formal
grammars with prohibition, inductive Turing machines with several
heads and one or several tapes, limit Turing machines with several heads
and one or several tapes, reflexive Turing machines with several heads
and one or several tapes, general Turing machines with several heads
and one or several tapes, accelerating Turing machines with several
heads and one or several tapes, type 2 Turing machines with several
heads and one or several tapes.
The discrete algorithmic constellation consists of models of algo-
rithms that work with discrete data, such as words of a formal language.
This constellation includes such models as finite automata, Turing ma-
chines, partial recursive functions, formal grammars, inductive Turing
machines and Turing machines with oracles.
The topological algorithmic constellation consists of models of algo-
rithms that work with data that belong to a topological space, such as re-
al numbers. This constellation includes such models as the differential
analyzer of Shannon, limit Turing machines, finite dimensional and gen-
eral machines of Blum, Shub, and Smale, fixed point models, topologi-
cal algorithms, and neural networks with real number parameters.
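The set relations claimed above (some constellations are disjoint, some intersect, some contain others) can be pictured with a toy Python example of ours; the members below are a small selection from the lists in this section and are purely illustrative.

```python
# Toy illustration (ours) of relations between algorithmic constellations,
# with each constellation modeled as a set of model names from the lists above.

sequential = {"deterministic finite automata", "Turing machines (1 head, 1 tape)",
              "partial recursive functions", "inductive Turing machines (1 head, 1 tape)"}
parallel = {"Turing machines (several heads)", "PRAM", "Kolmogorov algorithms",
            "inductive Turing machines (several heads)"}
concurrent = parallel | {"Petri nets", "artificial neural networks", "Actor model", "CSP"}
discrete = {"finite automata", "Turing machines (1 head, 1 tape)",
            "partial recursive functions", "inductive Turing machines (1 head, 1 tape)"}
topological = {"differential analyzer", "limit Turing machines",
               "machines of Blum, Shub and Smale", "real-valued neural networks"}

assert parallel <= concurrent        # the parallel constellation is part of the concurrent one
assert sequential & discrete         # some constellations intersect
assert not (discrete & topological)  # these representatives are disjoint
```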
Although several models of super-recursive algorithms already existed
in the 1980s, the first publication in which it was explicitly stated and proved
that there are algorithms more powerful than Turing machines was [13].
In this work, among other things, relations between Gödel’s incompleteness
results and super-recursive algorithms were discussed.
Super-recursive algorithms have different computing and accepting
power. The closest to conventional algorithms are inductive Turing ma-
chines of the first order because they work with constructive objects, all
steps of their computation are the same as the steps of conventional Tu-
ring machines, and the result is obtained in finite time. In spite of these
similarities, inductive Turing machines of the first order can compute
much more than conventional Turing machines [14, 5].
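To indicate what the extra power amounts to, here is our sketch of the classical "trial-and-error" treatment of the halting problem, the kind of question an inductive (limit) procedure settles although no Turing machine can decide it; this illustrates the idea of computing in the limit, not Burgin's actual construction of inductive Turing machines.

```python
# Illustrative sketch (ours): answering "does this program halt?" in the
# inductive mode. Each step emits a current guess; the guess changes at most
# once, so it stabilizes on the correct answer, yet no step announces "final".

import itertools
from typing import Callable, Iterator

def halts_in_the_limit(halted_by: Callable[[int], bool]) -> Iterator[bool]:
    # halted_by(t) reports whether the simulated program has halted within
    # t steps (this much is Turing-computable for each fixed t).
    guess = False                  # initial guess: "does not halt"
    for t in itertools.count():
        if halted_by(t):
            guess = True           # revised at most once, then kept forever
        yield guess                # the stabilized guess is the computed result

demo = halts_in_the_limit(lambda t: t >= 3)   # a program that halts at step 3
print([next(demo) for _ in range(6)])         # [False, False, False, True, True, True]
```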
Inductive Turing machines of the first order form only the lowest level
of super-recursive algorithms. There are infinitely many higher levels and,
as a result, the algorithmic universe grows into the algorithmic multiverse,
becoming open and amenable. Taking into consideration algorithmic
schemas, which go beyond super-recursive algorithms, we come to an
open world of information processing, which includes the algorithmic
multiverse with its algorithmic constellations. The openness of this world
has many implications for human cognition in general and mathematical
cognition in particular. For instance, it is possible to demonstrate that not
only computers but also the brain can work not only in the recursive
mode but also in the inductive mode, which is essentially more powerful
and efficient. Some of the examples are considered in the next section.
Absolute Prohibition in the Closed Universe
and Infinite Opportunities in the Open World
To provide sound and secure foundations for mathematics, David Hilbert
proposed an ambitious and wide-ranging program in the philosophy and
foundations of mathematics. His approach, formulated in 1921, stipulated
two stages. First, it was necessary to formalize classical mathematics
as an axiomatic system. Then, using only restricted, "finitary" means, it
was necessary to give proofs of the consistency of this axiomatic system.
Having achieved definite progress in this direction, Hilbert became very
optimistic. In response to the Latin dictum "Ignoramus et
ignorabimus" ("We do not know, we shall not know"), in his speech in
Königsberg in 1930 he made his famous statement:
Wir müssen wissen. Wir werden wissen.
(We must know. We will know.)
The next year, the Gödel undecidability theorems were published [15].
They undermined Hilbert’s statement and his whole program. Indeed,
the first Gödel undecidability theorem states that it is impossible to vali-
date truth for all true statements about objects in an axiomatic theory that
includes formal arithmetic. This is a consequence of the fact that it is
impossible to build all sets from the arithmetical hierarchy by Turing
machines. In this way, the closed algorithmic universe imposed re-
strictions on mathematical exploration. Indeed, rigorous mathematical
proofs are done in formal mathematical systems. As has been demonstrated
(cf., for example, [16]), such systems are equivalent to Turing machines
as they are built by means of Post productions. Thus, as Turing machines
can model proofs in formal systems, it is possible to assume that proofs
are performed by Turing machines.
The second Gödel undecidability theorem states that for an effectively
generated consistent axiomatic theory T that includes formal arithmetic
and has means for formal deduction, it is impossible to prove the consis-
tency of T using these means.
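For reference, the two theorems can be stated compactly; this is a standard modern formulation of ours, not a quotation of Gödel's original wording.

```latex
% Compact modern statements (with Rosser's refinement), not Gödel's wording.
% First incompleteness theorem: for every consistent, effectively axiomatized
% theory T containing elementary arithmetic there is a sentence G_T such that
\[
  T \nvdash G_T
  \qquad\text{and}\qquad
  T \nvdash \neg G_T .
\]
% Second incompleteness theorem: for such a theory T, the arithmetized
% consistency statement of T is itself unprovable in T:
\[
  T \nvdash \mathrm{Con}(T).
\]
```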
From the very beginning, the Gödel undecidability theorems were
comprehended as absolute restrictions on scientific cognition. That is
why the Gödel undecidability theorems were so discouraging that many
mathematicians consciously or unconsciously disregarded them. For in-
stance, the influential group of mostly French mathematicians who wrote
under the name Bourbaki completely ignored the results of Gödel [17].
However, later researchers came to the conclusion that these theorems
have such drastic implications only for formalized cognition based on
rigorous mathematical tools. For instance, in his 1964 postscript, Gödel
wrote that the undecidability theorems “do not establish any bounds for the
powers of human reason, but rather for the potentialities of pure formal-
ism in mathematics.”
The discovery of super-recursive algorithms and the acquisition of
knowledge about their abilities drastically changed the understanding of
Gödel’s results. Being a consequence of the closed nature of the closed
algorithmic universe, these undecidability results lose their fatal character
in the open algorithmic universe. They become relative, dependent on
the tools used for cognition. For instance, the first undecidability theo-
rem is equivalent to the statement that it is impossible to compute by Tu-
ring machines or other recursive algorithms all levels of the Arithmetical
Hierarchy [18]. However, as is demonstrated in [19], there is a hierar-
chy of inductive Turing machines such that all levels of the Arithmetical
Hierarchy are computable and even decidable by these inductive Turing
machines. Complete proofs of these results were published only in 2003
due to the active opposition of the proponents of the Church-Turing
Thesis [14]. In spite of the fast development of computer technology and
computer science, the research community in these areas is rather con-
servative although more and more researchers understand that the
Church-Turing Thesis is not correct.
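The Arithmetical Hierarchy referred to here can be written down explicitly; the definition below is the standard one, while the correspondence with inductive Turing machines is the cited result [19, 14], which we only indicate, not reprove.

```latex
% The arithmetical hierarchy (standard definition): a set A is \Sigma^0_n if it
% is definable from a decidable relation R by n alternating quantifier blocks
% beginning with \exists (for \Pi^0_n the prefix begins with \forall):
\[
  x \in A \;\Longleftrightarrow\;
  \exists y_1\, \forall y_2 \cdots Q y_n\; R(x, y_1, \dots, y_n),
  \qquad Q = \exists \text{ if } n \text{ is odd},\ \forall \text{ otherwise}.
\]
% Turing machines decide exactly the \Delta^0_1 (recursive) sets and recognize
% the \Sigma^0_1 (recursively enumerable) sets; the higher levels lie beyond
% them, while the cited results give a hierarchy of inductive Turing machines
% that reaches every level.
```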
The possibility of using inductive proofs makes Gödel’s results rela-
tive to the means used for proving mathematical statements because de-
cidability of the Arithmetical Hierarchy implies decidability of the for-
mal arithmetic. For instance, the first Gödel undecidability theorem is
true when recursive algorithms are used for proofs but it becomes false
when inductive algorithms, such as inductive Turing machines, are uti-
lized. The history of mathematics also gives supporting evidence for this
conclusion. For instance, in 1936 Gentzen, in contrast to the se-
cond Gödel undecidability theorem, proved the consistency of formal
arithmetic using transfinite (ordinal) induction.
The hierarchy of inductive Turing machines also explains why the
human brain is more powerful than Turing machines, supporting the
conjecture of Roger Penrose [20]. Besides, this hierarchy allows re-
searchers to eliminate restrictions of recursive models of algorithms in
artificial intelligence described by Sloman [6].
It is important to remark that limit Turing machines and other topolog-
ical algorithms [21] open even broader perspectives for information pro-
cessing technology and artificial intelligence than inductive Turing ma-
chines.
The Open World of Knowledge and the Internet
The open world, or more exactly, the open world of knowledge, is an
important concept for the knowledge society and its knowledge econo-
my. According to Rossini [22], it emerges from a world of pre-Internet
political systems, but it has come to encompass an entire worldview
based on the transformative potential of open, shared, and connected
technological systems. The idea of an open world synthesizes much of
the social and political discourse around modern education and scientific
endeavor and is at the core of the Open Access (OA) and Open Educa-
tional Resources (OER) movements. While the term open society comes
from international relations, where it was developed to describe the tran-
sition from political oppression into a more democratic society, it is now
being appropriated into a broader concept of an open world connected
via technology [22]. The idea of openness in access to knowledge and
education is a reaction to the potential afforded by the global networks,
but is inspired by the sociopolitical concept of the open society.
Open Access (OA) is a knowledge-distribution model by which schol-
arly, peer-reviewed journal articles and other scientific publications are
made freely available to anyone, anywhere over the Internet. It is the
foundation for the open world of scientific knowledge, and thus, a prin-
cipal component of the open world of knowledge as a whole. In the era
of print, open access was economically and physically impossible. In-
deed, the lack of physical access implied the lack of knowledge access -
if one did not have physical access to a well-stocked library, knowledge
access was impossible. The Internet has changed all of that, and OA is a
movement that recognizes the full potential of an open world metaphor
for the network.
In OA, the old tradition of publishing for the sake of inquiry,
knowledge, and peer acclaim and the new technology of the Internet
have converged to make possible an unprecedented public good: "the
world-wide electronic distribution of the peer-reviewed journal litera-
ture" [23].
The open world of knowledge is based on the Internet, while the In-
ternet is based on computations that go beyond Turing machines. One of
the basic principles of the Internet is that it is always on, always availa-
ble. Without these features, the Internet cannot provide the necessary
support for the open world of knowledge because ubiquitous availability
of knowledge resources demands the non-stop operation of the Internet. At
the same time, classical models of algorithms, such as Turing machines,
stop after giving their result. This contradicts the main principles of the
Internet. In contrast to classical models of computation, as it is demon-
strated in [5], if an automatic system, e.g., a computer or computer net-
work, works without halting, gives results in this mode and can simulate
any operation of a universal Turing machine, then this automatic (com-
puter) system is more powerful than any Turing machine. This means
that this automatic (computer) system, in particular, the Internet, per-
forms unconventional computations and is controlled by super-recursive
algorithms. As it is explained in [5], attempts to reduce some of these
systems, e.g., the Internet, to the recursive mode, which allows modeling
by Turing machines, make these systems irrelevant.
Conclusions
This paper shows how the universe (the world) of algorithms became
open with the discovery of super-recursive algorithms, providing more
powerful tools for computational cognition and artificial intelligence.
Here we considered only some of the consequences of the open world
environment of unconventional algorithms and algorithmic constella-
tions for mathematical (computation-theoretical) cognition. It would be
interesting to study other consequences of the current breakthrough into an
open world of unconventional algorithms and computation.
It is known that not all quantum mechanical events are Turing-
computable. So, it would be interesting to find a class of super-recursive
algorithms that compute all such events or to prove that such a class
does not exist.
It might be methodologically and philosophically interesting to con-
template relations between the Open World of Algorithmic Constella-
tions and Open Science in the sense of Nielsen [24]. For instance,
one of the pivotal features of Open Science is the accessibility of re-
search results on the Internet. At the same time, as is demonstrated in
[5], the Internet and other big networks of computers are always working
in the inductive mode or some other super-recursive mode. Moreover,
actual accessibility depends on such modes of functioning.
One more interesting problem is to explore the relations between the Open
World of Algorithmic Constellations and the theoretical framework of
Info-computationalism, a synthesis of Pancomputationalism (Naturalist
Computationalism) with Informational Structural Realism – the model of
a universe as a network of computational processes on informational
structures. Info-computationalism connects algorithms with interactive
computing in natural (physical) systems [25,26,28]. Connecting new
unconventional models of super-recursive algorithms and Algorithmic
Constellations with unconventional computations performed by natural
systems opens new possibilities for the development of innovative mod-
els of physical computation with “Trans-Turing” algorithms and “Non-
Von” computing architectures [27].
Acknowledgements
The authors would like to thank Andrée Ehresmann, Hector Zenil and
Marcin Schroeder for useful and constructive comments on the previous
version of this work.
References
1 G. Frege, Grundgesetze der Arithmetik, Begriffsschriftlich Abgeleitet, Jena
(1893/1903)
2 R. J. Nelson, Church's thesis and cognitive science, Notre Dame J. of Formal Logic,
v. 28, no. 4, pp. 581-614 (1987)
3 L. Kalmar, An argument against the plausibility of Church's thesis, in Constructivity
in mathematics, North-Holland Publishing Co., Amsterdam, pp. 72-80 (1959)
4 K. Gödel, Some Remarks on the Undecidability Results, in Gödel, K. (1986–1995),
Collected Works, v. II, Oxford University Press, Oxford, pp. 305–306 (1972)
5 M. Burgin, Super-recursive Algorithms, Springer, New York/Heidelberg/Berlin
(2005)
6 A. Sloman, The Irrelevance of Turing Machines to AI, in M. Scheutz
(ed.), Computationalism: New Directions, MIT Press. http://www.cs.bham.ac.uk/~axs/
(2002)
7 C. Shannon, Mathematical Theory of the Differential Analyzer, J. Math. Physics,
MIT, v. 20, 337-354 (1941)
8 S. Abramsky, A. Jung, Domain Theory, in S. Abramsky, D. M. Gabbay, T. S. E.
Maibaum (eds.), Handbook of Logic in Computer Science, v. III, Oxford Uni-
versity Press (1994)
9 A. Edalat, Domains for computation in mathematics, physics and exact real arithme-
tic, Bulletin Of Symbolic Logic, Vol:3, 401-452 (1997)
10 K. Ko, Computational Complexity of Real Functions, Birkhauser Boston, Boston,
MA (1991)
11 K. Weihrauch, Computable Analysis. An Introduction. Springer-Verlag Berlin/ Hei-
delberg (2000)
12 M. B. Pour-El, and J. I. Richards, Computability in Analysis and Physics. Perspec-
tives in Mathematical Logic, Vol. 1. Berlin: Springer. (1989)
13 M. Burgin, The Notion of Algorithm and the Turing-Church Thesis, In Proceedings
of the VIII International Congress on Logic, Methodology and Philosophy of Science,
Moscow, v. 5, part 1, pp. 138-140 (1987)
14 M. Burgin, Nonlinear Phenomena in Spaces of Algorithms, International Journal of
Computer Mathematics, v. 80, No. 12, pp. 1449-1476 (2003)
15 K. Gödel, Über formal unentscheidbare Sätze der Principia Mathematica und
verwandter Systeme I, Monatshefte für Mathematik und Physik, b. 38, s. 173-198 (1931)
16 R. M. Smullyan, Theory of Formal Systems, Princeton University Press (1962)
17 A.R.D. Mathias, The Ignorance of Bourbaki, Physis Riv. Internaz. Storia Sci (N.S.)
28, pp. 887-904 (1991)
18 H. Rogers, Theory of Recursive Functions and Effective Computability, MIT Press,
Cambridge Massachusetts (1987)
19 M. Burgin, Arithmetic Hierarchy and Inductive Turing Machines, Notices of the
Russian Academy of Sciences, v. 299, No. 3, pp. 390-393 (1988)
20 Penrose, R. The Emperor’s New Mind, Oxford University Press, Oxford (1989)
21 Burgin, M. Topological Algorithms, in Proceedings of the ISCA 16th International
Conference “Computers and their Applications”, ISCA, Seattle, Washington, pp. 61-64
(2001)
22 C. Rossini, Access to Knowledge as a Foundation for an Open World, EDUCAUSE
Review, v. 45, No. 4, pp. 60–68 (2010)
23 Budapest Open Access Initiative: <http://www.soros.org/openaccess/read.shtml>.
24 M. Nielsen, Reinventing Discovery: The New Era of Networked Science, Princeton
University Press, Princeton and Oxford (2012)
25 Dodig-Crnkovic, G. and Müller, V. C., A Dialogue Concerning Two World Systems,
in Information and Computation, World Scientific, New York/Singapore, pp. 107-148
(2011)
26 G. Rozenberg, T.H.W. Bäck & J.N. Kok (Eds.): Handbook of Natural Computing.
Heidelberg, Germany, Springer (2012)
27 Dodig Crnkovic G. and Burgin M., Unconventional Algorithms: Complementarity
of Axiomatics and Construction, Entropy, Special issue "Selected Papers from the
Symposium on Natural/Unconventional Computing and its Philosophical Significance"
http://www.mdpi.com/journal/entropy/special_issues/unconvent_computing, forthcom-
ing 2012
28 Dodig-Crnkovic G., Significance of Models of Computation from Turing Model to
Natural Computation. Minds and Machines, ( R. Turner and A. Eden guest eds.) Vol-
ume 21, Issue 2 (2011), Page 301.
All links were accessed on 08.06.2012.