We need a much better understanding of information processing, and of computation as its primary form. Future progress of new computational devices capable of dealing with problems of big data, the internet of things, the semantic web, cognitive robotics and neuroinformatics depends on adequate models of computation. In this article we first present the current state of the art through a systematisation of existing models and mechanisms, and outline a basic structural framework of computation. We argue that, if computation is defined as information processing, then, given that there is no information without (physical) representation, the dynamics of information at the fundamental level is physical/intrinsic/natural computation. As a special case, intrinsic computation is used for designed computation in computing machinery. Intrinsic natural computation occurs on a variety of levels of physical processes, including the levels of computation of living organisms (among them highly intelligent animals) as well as of designed computational devices. The present article offers a typology of current models of computation and indicates future paths for the advancement of the field, both through the development of new computational models and through learning from nature how to compute better using different mechanisms of intrinsic computation.
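To make the notion of a typology concrete, the following minimal sketch (Python; not part of the original article) encodes the taxonomy dimensions used in the authors' related taxonomy of models of computation as a simple data structure. The dimension names and the subcategories of the Existential dimension follow that taxonomy; everything else in the snippet, including the class and variable names, is an illustrative assumption.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class TaxonomyDimension:
    """One dimension along which models of computation are classified."""
    name: str
    categories: List[str] = field(default_factory=list)

# Dimensions named in the authors' taxonomy of models of computation;
# subcategory lists other than the Existential one are omitted here.
TAXONOMY = [
    TaxonomyDimension("Existential", ["Physical", "Abstract", "Cognitive"]),
    TaxonomyDimension("Organizational"),
    TaxonomyDimension("Temporal"),
    TaxonomyDimension("Representational"),
    TaxonomyDimension("Domain/Data"),
    TaxonomyDimension("Operational"),
    TaxonomyDimension("Process-oriented"),
    TaxonomyDimension("Level-based"),
]

if __name__ == "__main__":
    # Print each dimension with its (partial) list of subcategories.
    for dim in TAXONOMY:
        cats = ", ".join(dim.categories) if dim.categories else "(subcategories omitted)"
        print(f"{dim.name}: {cats}")
```

The snippet is only a compact restatement of the classification, not an implementation of any model of computation.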
1 Introduction
Many researchers have asked the question ...