Project

Morphological Computing in Cognitive Systems

Updates: 3
Recommendations: 1
Followers: 53
Reads: 487 (1 new)

Project log

Gordana Dodig Crnkovic
added a research item
The emerging contemporary natural philosophy provides a common ground for an integrative view of natural, artificial, and human-social knowledge and practices. The learning process is central for acquiring, maintaining, and managing knowledge, both theoretical and practical. This paper explores the relationships between present advances in the understanding of learning in the sciences of the artificial (deep learning, robotics), the natural sciences (neuroscience, cognitive science, biology), and philosophy (philosophy of computing, philosophy of mind, natural philosophy). The question is what, at this stage of development, inspiration from nature, specifically its computational models such as info-computation through morphological computing, can contribute to machine learning and artificial intelligence, and how much, on the other hand, models and experiments in machine learning and robotics can motivate, justify, and inform research in computational cognitive science, neuroscience, and the computing nature. We propose that one contribution can be an understanding of the mechanisms of ‘learning to learn’, as a step towards deep learning with a symbolic layer of computation/information processing in a framework linking connectionism with symbolism. As all natural systems possessing intelligence are cognitive systems, we describe the evolutionary arguments for the necessity of learning to learn for a system to reach human-level intelligence through evolution and development. The paper thus presents a contribution to the epistemology of the contemporary philosophy of nature.
Gordana Dodig Crnkovic
added 5 research items
Floridi's Theory of Strongly Semantic Information defines information as consisting of data and truth, in contrast to the standard definition prevailing in the empirical sciences, in which information is defined as meaningful data. I argue that meaningful data does not necessarily need to be true to constitute information. Partially true information or even completely false information can lead to an outcome adequate and relevant for inquiry. Instead of insisting on the truth of an empirical model, the focus is on basic criteria such as the validity of the model and its appropriateness within a certain well-defined context, as the meaning of the information content of the model is strongly contextual. Even though empirical models could in general only be 'adequate' and not 'true', they may produce results and data from which relevant conclusions could be drawn. If truthlikeness admits of degrees, then the history of inquiry is one of steady progress towards the truth. In that sense models can generate information for improving our knowledge about the empirical world.
The ontology of each theory is always embedded in natural language with all of its ambiguity. Attempts to automate the communication between different ontologies face the problem of compatibility of concepts with different semantic origins. Coming from different Universes, terms with the same spelling may have a continuum of meanings. The formalization problem met in the semantic web or ontology engineering is thus closely related to the natural language semantic continuum. The emergence of a common context necessary to assure the minimum "common language" is a natural consequence of this process of intense communication that develops in parallel with computationalization of almost every conceivable field of human activity. The necessity of conceptualization of this new global space calls for understanding across the borders of previously relatively independent, locally defined Universes. In that way a need and potential for a new Renaissance, in which sciences and humanities, arts and engineering can reach a new synthesis, has emerged.

1 Computing/Informatics and a New Renaissance

Computing/Informatics are characterizing our epoch in the most profound ways, in everything from the ubiquity of computers in our everyday life to the computational tools for simulation and testing of scientific and philosophical theories (Floridi, 2003). There is a significant shift relative to the previous industrial-technological era, when the ideal was the perfect machine and "objective knowledge" reduced at best to an algorithm for constructing a complete theory according to a set of derivation rules, starting from a limited number of axioms (Hilbert's program). The problem is that every theory is inevitably coupled to its context. This implies that no scientific method can be completely disconnected from the rest of the world.
There are always subtle connections established through the use of the semantic continuum of natural language that are impossible to avoid even in the most formal theories. Contrary to the preceding mechanistic ideal, Computing/Informatics has successively developed into a very much human-centered discipline. Insight into the limitations of the formalization/mechanisation project has led to a new awareness of the eminently human character of knowledge and its connection to value systems and the totality of the cultural context. This indicates that there is a potential for a new Renaissance, in which science and humanities, arts and engineering can reach a new synthesis, enriching and inspiring each other via modern computing and communication tools (Dodig-Crnkovic, 2003). In spite of the insufficiency of formal systems for building up a complete world-view, their appeal nowadays seems to be stronger than ever; see e.g. ontology engineering (Gruber, 1995; Smith and Welty, 2001).
Do predictions obtained from models constitute information on which reliable decisions can be made? Is it necessary that, to be of interest, predictions and other information generated by models must be true? This paper investigates the relation between model and reality, information and truth. It will argue that meaningful data need not necessarily be true in order to constitute information. Partially true information or even completely false information can lead to a desirable outcome such as a technological innovation or a scientific breakthrough. Sometimes sheer serendipity gives rise to an invention. A combination of true and false information may result in an epoch-making event such as Columbus' discovery of America, on his intended voyage to India. An even more basic problem prevents scientists from thinking exclusively in terms of "true" information in the research process. In beginning from an existing theory (say Aristotelian physics) and developing a new theory (say Galilean physics), one can talk about the truth within each model, but during the transition between the two there is a mixture of old and new concepts in which truth is not well defined. Instead of the veridicity of a model, the two basic concepts commonly used in the empirical sciences are a model's correctness (validity) and its appropriateness within a context. The conclusion is that although empirical models are in general not true but only truthlike, they may nevertheless produce results from which adequate conclusions can be drawn, and therefore can serve as grounds for decision-making. In that sense they can yield information vital for improving our knowledge about the actual empirical world, which is the precondition for technological innovation and scientific discovery.
Gordana Dodig Crnkovic
added a research item
The absolute α-decay width of 212Po is calculated within a harmonic oscillator representation. Clustering features induced by the nuclear interaction appear by considering a large configuration space. The role of the neutron-proton interaction is analysed and a reasonable account of the experimental α-decay width is given.
Gordana Dodig Crnkovic
added 2 research items
This paper investigates the relationship between reality and model, information and truth. It will argue that meaningful data need not be true in order to constitute information. Information to which truth-value cannot be ascribed, partially true information or even false information can lead to an interesting outcome such as technological innovation or scientific breakthrough. In the research process, during the transition between two theoretical frameworks, there is a dynamic mixture of old and new concepts in which truth is not well defined. Instead of veridicity, correctness of a model and its appropriateness within a context are commonly required. Despite empirical models being in general only truthlike, they are nevertheless capable of producing results from which conclusions can be drawn and adequate decisions made.
This essay presents arguments for the claim that in the best of all possible worlds (Leibniz) there are sources of unpredictability and creativity for us humans, even given a pancomputational stance. A suggested answer to Chaitin’s questions: “Where do new mathematical and biological ideas come from? How do they emerge?” is that they come from the world and emerge from basic physical (computational) laws. For humans, as a tiny subset of the universe, a part of the new ideas comes as the result of the re-configuration and reshaping of already existing elements, and another part comes from the outside as a consequence of the openness and interactivity of the system. For the universe at large it is randomness that is the source of unpredictability on the fundamental level. In order to be able to completely predict the Universe-computer, we would need the Universe-computer itself to compute its next state; as Chaitin has demonstrated, there are incompressible truths, that is, truths that cannot be computed by any computer other than the universe itself.
Gordana Dodig Crnkovic
added 2 research items
Philosophy of Computing and Information -- 5 Questions. Edited by Luciano Floridi. Automatic Press / VIP, 2008, 204 pp. ISBN-10: 8792130097; ISBN-13: 978-8792130099. Contributors: Margaret A. Boden, Valentino Braitenberg, Brian Cantwell-Smith, Gregory Chaitin, Daniel C. Dennett, Keith Devlin, Fred Dretske, Hubert L. Dreyfus, Luciano Floridi, Tony Hoare, John McCarthy, John R. Searle, Aaron Sloman, Patrick Suppes, Johan van Benthem, Terry Winograd.
In this Editorial note, the Guest Editors introduce the theme of the Special Issue of the journal Philosophies, titled Contemporary Natural Philosophy and Philosophies.
Gordana Dodig Crnkovic
added 22 research items
Cognitive science is considered to be the study of mind (consciousness and thought) and intelligence in humans. Under such a definition, a variety of unsolved or unsolvable problems appears. This article argues for a broad understanding of cognition, based on empirical results from, among others, the natural sciences, self-organization, artificial intelligence and artificial life, network science, and neuroscience. Apart from the high-level mental activities in humans, this understanding includes sub-symbolic and sub-conscious processes, such as emotions, and recognizes cognition in other living beings, as well as extended and distributed/social cognition. The new idea of cognition as a complex multiscale phenomenon that evolved in living organisms, based on bodily structures that process information, links cognitivist and EEEE (embodied, embedded, enactive, extended) approaches to cognition with the idea of morphological computation (info-computational self-organisation) in cognizing agents, emerging in evolution through interactions of a (living/cognizing) agent with the environment.
Following the worldwide increase in communications through computer networking, not only economies, entertainment, and arts but also research and education are transforming into global systems. Attempts to automate knowledge discovery and enable the communication between computerized knowledge bases encounter the problem of the incompatibility of syntactically identical expressions of different semantic and pragmatic provenance. Coming from different universes, terms with the same spelling may have a continuum of meanings. The formalization problem is related to the characteristics of the natural language semantic continuum. The human brain has
In this chapter, different notions of allostasis (the process of achieving stability through change) as they apply to adaptive behavior are presented. The authors discuss how notions of allostasis can be usefully applied to Cybernetics-based homeostatic systems. Particular emphasis is placed upon affective states - motivational and emotional - and, above all, the notion of 'predictive' regulation, as distinct from forms of 'reactive' regulation, in homeostatic systems. The authors focus here on Ashby's ultrastability concept, which entails behavior change for correcting homeostatic errors (deviations from the healthy range of essential physiological variables). The authors consider how the ultrastability concept can be broadened to incorporate allostatic mechanisms and how these may enhance adaptive physiological and behavioral activity. Finally, this chapter references different Cybernetics frameworks that incorporate the notion of allostasis. The article then attempts to untangle how the given perspectives fit into the postulated 'allostatic ultrastable systems' framework.
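As an illustrative aside (not part of the chapter itself), Ashby's ultrastability mechanism can be sketched as a toy simulation: when an essential variable leaves its healthy range, the system makes a random step-change to its own parameter until homeostasis is restored. All names, constants, and dynamics below are hypothetical choices for the sketch, not the authors' model:

```python
import random

# Toy ultrastable system: essential variable v follows v <- a*v + disturbance.
# If v leaves the healthy range [-LIMIT, LIMIT], the system performs a random
# step-change of parameter a (Ashby's "step mechanism") and tries again.
random.seed(1)
a, v, LIMIT = 1.2, 0.1, 1.0          # a > 1: dynamics are initially unstable
resets = 0
for _ in range(500):
    v = a * v + random.uniform(-0.05, 0.05)
    if abs(v) > LIMIT:               # homeostatic error detected
        a = random.uniform(-1, 1)    # random reconfiguration, not gradient correction
        v = 0.9 * LIMIT if v > 0 else -0.9 * LIMIT
        resets += 1
print(abs(a) < 1, resets >= 1)
```

The run shows one or more step-changes before the system settles on a stabilising parameter (|a| < 1): adaptation arises from blind reconfiguration triggered by the essential variable, which is the core of ultrastability.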
Mark Burgin
added a research item
For millennia, the enigma of the world of Ideas or Forms, which Plato suggested and advocated, has been challenging the most prominent thinkers of humankind. This paper presents a solution to this problem, namely, that an Idea in Plato's sense can be interpreted as a scientific object called a structure. To validate this statement, the paper provides a rigorous definition of a structure and demonstrates that structures have the basic properties of Plato's Ideas. In addition, we describe the world of structures and prove its existence. This allows us to resolve the controversy between Plato and Aristotle concerning Ideas or Forms and to build a scientific interpretation of the metaphor of the Divided Line, which Plato uses in his theory of Ideas.
Mark Burgin
added 5 research items
According to contemporary computer science, when working in the functional recursive mode, computers and computer networks function as recursive algorithms. At the same time, when working in functional super-recursive modes, such as inductive or limit modes, computers and computer networks function as super-recursive algorithms (Burgin, 2005). While one group of notable researchers claims that interactive computation is more powerful than Turing machines (cf., for example, Wegner, 1997), others argue that the Church-Turing Thesis, which equates algorithms with Turing machines, still holds and that interaction does not add anything new (cf., for example, Prasse and Rittgen, 1998; Van Leeuwen and Wiedermann, 2000). The cause of this misunderstanding is that the standard computability theory does not take into account the time and space in which real computers and networks function. Under such artificial conditions, interacting abstract automata and algorithms cannot achieve super-recursive power if they are all recursive, as proved in (Burgin, 2006). In contrast, even a finite system of interacting recursive automata or algorithms functioning in real time and space can become super-recursive (Burgin, 2007). In this paper, we study modes of information processing in abstract automata and in material (e.g., physical or biological) computers and networks. Computational and networking practice shows that taking modes of information processing into account is important for the efficient design of distributed hardware and software systems. The modes of computation studied here for abstract automata are, in effect, directions and explanations of how to utilize computers and network computations.
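A minimal sketch of the inductive mode mentioned above, assuming Burgin's characterisation that the result of an inductive computation is the output that eventually stabilises even though the machine never signals halting. The sqrt(2) example and all names are illustrative, not drawn from the paper:

```python
from fractions import Fraction

def newton_sqrt2():
    # Never halts: yields ever-better rational approximations of sqrt(2).
    x = Fraction(3, 2)
    while True:
        yield x
        x = (x + 2 / x) / 2

# The "inductive result" is the part of the output that has stopped
# changing between consecutive steps, read off without any halting signal.
gen = newton_sqrt2()
prev = next(gen)
stable = ""
for _ in range(6):
    cur = next(gen)
    a, b = f"{float(prev):.10f}", f"{float(cur):.10f}"
    stable = ""
    for ca, cb in zip(a, b):
        if ca != cb:
            break
        stable += ca
    prev = cur
print(stable)  # stabilised prefix of the decimal expansion of sqrt(2)
```

A recursive (halting) algorithm would have to decide in advance when to stop; the inductive mode instead treats the stabilised output itself as the result, which is what gives it super-recursive power in Burgin's analysis.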
A Physarum machine is a programmable amorphous biological computer experimentally implemented in the vegetative state of the true slime mould Physarum polycephalum. It comprises an amorphous yellowish mass with networks of protoplasmic veins, programmed by spatial configurations of attracting and repelling gradients. The goal of this paper is to advance the formalism of Physarum machines, providing theoretical tools for exploring the possibilities of these machines and extending their applications. To achieve this goal, we introduce structural machines and study their properties.
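To give a concrete flavour of how Physarum computes, here is a sketch of the widely used flow-adaptation model of Physarum shortest-path finding (the Tero et al. style model, not the structural-machine formalism the paper itself introduces); the graph and constants are illustrative:

```python
import numpy as np

# Edges: (u, v, length). Two routes from node 0 to node 3:
# short route 0-1-3 (total length 2), long route 0-2-3 (total length 4).
edges = [(0, 1, 1.0), (1, 3, 1.0), (0, 2, 2.0), (2, 3, 2.0)]
n = 4
D = np.ones(len(edges))              # protoplasmic-vein conductivities
src, snk, dt = 0, 3, 0.5

for _ in range(200):
    # Kirchhoff's law: solve node pressures with sink pressure fixed at 0.
    A = np.zeros((n, n)); b = np.zeros(n)
    for k, (u, v, L) in enumerate(edges):
        g = D[k] / L
        A[u, u] += g; A[v, v] += g
        A[u, v] -= g; A[v, u] -= g
    b[src] = 1.0                     # unit inflow at source, outflow at sink
    A[snk, :] = 0; A[snk, snk] = 1; b[snk] = 0
    p = np.linalg.solve(A, b)
    # Adapt each vein's conductivity toward the flux it carries.
    for k, (u, v, L) in enumerate(edges):
        Q = D[k] * (p[u] - p[v]) / L
        D[k] = max(D[k] + dt * (abs(Q) - D[k]), 1e-6)  # floor keeps system solvable

print(np.round(D, 3))                # veins on the short route survive
```

Veins carrying more flux thicken while underused veins decay, so the network converges onto the shortest route: the "program" is the spatial configuration of sources and sinks, exactly the kind of gradient-driven computation the abstract describes.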
Gordana Dodig Crnkovic
added 4 research items
The dynamics of natural systems, and particularly organic systems, specialized in self-organization and complexity management, presents a vast source of ideas for new approaches to computing, such as natural computing and its special case, organic computing. Based on paninformationalism (the understanding of all physical structures as informational) and pancomputationalism, or natural computationalism (the understanding of the dynamics of physical structures as computation), a new approach, info-computational naturalism, emerges as a result of their synthesis. This includes a naturalistic view of mind and hence a naturalized epistemology based on evolution from inanimate to biological systems through the increase in complexity of informational structures by natural computation. Learning, on the info-computational level, about structures and processes in nature, and especially those in intelligent and autonomous biological agents, enables the development of advanced autonomous adaptive intelligent artifacts and makes possible a connection (both theoretical and practical) between organic and inorganic systems.
This book enriches our views on representation and deepens our understanding of its different aspects. It arises out of several years of dialog between the editors and the authors, an interdisciplinary team of highly experienced researchers, and it reflects the best contemporary view of representation and reality in humans, other living beings, and intelligent machines. Structured into parts on the cognitive, computational, natural sciences, philosophical, logical, and machine perspectives, a theme of the field and the book is building and presenting networks, and the editors hope that the contributed chapters will spur understanding and collaboration between researchers in domains such as computer science, philosophy, logic, systems theory, engineering, psychology, sociology, anthropology, neuroscience, linguistics, and synthetic biology.
Gordana Dodig Crnkovic
added a project reference
Gordana Dodig Crnkovic
added a research item
This paper presents a theoretical study of the binary oppositions underlying the mechanisms of natural computation understood as dynamical processes on natural information morphologies. Of special interest are the oppositions of discrete vs. continuous, structure vs. process, and differentiation vs. integration. The framework used is that of computing nature, where all natural processes at different levels of organisation are computations over informational structures. The interactions at different levels of granularity/organisation in nature, and the character of the phenomena that unfold through those interactions, are modeled from the perspective of an observing agent. This brings us to the movement from binary oppositions to dynamic networks built upon mutually related binary oppositions, where each node has several properties.
Gordana Dodig Crnkovic
added 2 research items
What is reality for an agent? What is minimal cognition? How does the morphology of a cognitive agent affect cognition? These are still open questions among scientists and philosophers. In this chapter we propose the idea of info-computational nature as a framework for answering those questions. Within the info-computational framework, information is defined as a structure (for an agent), and computation as the dynamics of information (information processing). To an agent, nature therefore appears as an informational structure with computational dynamics. Both information and computation in this context have broader meaning than in everyday use, and both are necessarily grounded in physical implementation. Evolution of increasingly complex living agents is understood as a process of morphological (physical, embodied) computation driven by agents’ interactions with the environment. It is a process much more complex than random variation; instead the mechanisms of change are morphological computational processes of self-organisation (and re-organisation). Reality for an agent emerges as a result of interactions with the environment together with internal information processing. Following Maturana and Varela, we take cognition to be the process of living of an organism, and thus it appears on different levels of complexity, from cellular via organismic to social. The simpler the agent, the simpler its “reality” defined by the network of networks of info-computational processes, which constitute its cognition. The debated topic of consciousness takes its natural place in this framework, as a process of information integration that we suggest naturally evolved in organisms with a nervous system. Computing nature/pancomputationalism is sometimes confused with panpsychism or claimed to necessarily imply panpsychism, which we show is not the case. 
Even though we focus on natural systems in this chapter, the info-computational approach is general and can be used to model both biological and artifactual cognitive agents.
http://www.worldscientific.com/worldscibooks/10.1142/7637
Gordana Dodig Crnkovic
added 24 research items
Processes considered to render information dynamics have been studied in, among other contexts: questions and answers, observations, communication, learning, belief revision, logical inference, game-theoretic interactions, and computation. This article puts the computational approaches into the broader context of natural computation, where information dynamics is found not only in human communication and computational machinery but in the entire nature. Information is understood as representing the world (reality as an informational web) for a cognizing agent, while information dynamics (information processing, computation) realizes physical laws through which all the changes of informational structures unfold. Computation as it appears in the natural world is more general than the human process of calculation modeled by the Turing machine. Natural computing is epitomized through the interactions of concurrent, in general asynchronous, computational processes, which are adequately represented by what Abramsky names “the second generation models of computation” [1], which we argue to be the most general representation of information dynamics.
The dialogue develops arguments for and against adopting a new world system, info-computationalist naturalism, that is poised to replace the traditional mechanistic world system. We try to figure out what the info-computational paradigm would mean, in particular its pancomputationalism. We make some steps towards developing the notion of computing that is necessary here, especially in relation to traditional notions. We investigate whether pancomputationalism can possibly provide the basic causal structure to the world, whether the overall research programme appears productive, and whether it can reinvigorate computationalism in the philosophy of mind.
Stephen Wolfram’s work, and especially his New Kind of Science, presents as much a new science as a new natural philosophy: natural computationalism. In the same way as Andrew Hodges, based on Alan Turing’s pioneering work on computability and his ideas on morphological computing and artificial intelligence, argues that Turing is best viewed as a natural philosopher, we can also assert that Wolfram’s work constitutes natural philosophy. It is evident through natural and formal computational phenomena studied in different media, from the book with related materials to programs, demonstrations, and a computational knowledge engine. Wolfram’s theoretical studies and practical computational constructs, including Mathematica and Wolfram|Alpha, reveal a research program reminiscent of Leibniz’ Mathesis universalis, the project of a universal science supported by a logical calculation framework. Wolfram’s new kind of science may be seen, in the sense of Newton’s Philosophiæ Naturalis Principia Mathematica, as being both natural philosophy and science, not only because of the new methodology of experimental computer science and simulation, or because of particular contributions addressing a variety of phenomena, but in the first place as a new unified scientific framework for all of knowledge. It is not only about explaining special patterns seen in nature and models of complex behaviors; it is about the computational nature derived from first computational principles. Wolfram’s as well as Turing’s natural philosophy differs from Galileo’s view of nature. Computation used in modeling is more than a language. It produces real-time behaviors of physical systems: computation is the way nature is. Cellular automata as explored by Wolfram are a whole fascinating computational universe. Do they exhaust all possible computational behaviors that our physical universe exhibits?
If we understand physical processes as computations in a more general sense than the computations performed by the symbol manipulation of our current computers, then universal Turing machines and universal cellular automata exhibit only a subset of all possible information processing behaviors found in nature. Even though, mathematically, there is a principle of computational equivalence, in physical nature there exists a hierarchy of emergent processes on many levels of organization that exhibit different physical behavior and thus can be said to compute with different expressive power. This article argues that, based on the notion of computing nature, where computing stands for all kinds of information processing, the development of natural computationalism has the potential to enrich computational studies in the same way as the explorations in the computational universe hold a promise to provide computational models applicable to the physical universe.
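As a self-contained illustration of the elementary cellular automata Wolfram explores, here is a generic textbook implementation (Rule 30, on a small cyclic grid; not drawn from the works discussed), showing how complex behavior emerges from a trivial local rule:

```python
# Elementary cellular automaton: each cell's next state depends only on
# itself and its two neighbours. The 8 possible neighbourhoods index the
# bits of the rule number (here Rule 30, a classic Wolfram example).
def step(cells, rule=30):
    n = len(cells)
    return [
        (rule >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

row = [0] * 31
row[15] = 1                          # start from a single black cell
for _ in range(15):
    print("".join("#" if c else "." for c in row))
    row = step(row)
```

From one black cell, Rule 30 produces the well-known chaotic triangular pattern, a small-scale instance of the "computational universe" the abstract refers to: rich behavior generated from first computational principles rather than programmed in.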
Gordana Dodig Crnkovic
added 2 research items
> Context • At present, we lack a common understanding of both the process of cognition in living organisms and the construction of knowledge in embodied, embedded cognizing agents in general, including future artifactual cognitive agents under development, such as cognitive robots and softbots. > Purpose • This paper aims to show how the info-computational approach (IC) can reinforce constructivist ideas about the nature of cognition and knowledge and, conversely, how constructivist insights (such as that the process of cognition is the process of life) can inspire new models of computing. > Method • The info-computational constructive framework is presented for the modeling of cognitive processes in cognizing agents. Parallels are drawn with other constructivist approaches to cognition and knowledge generation. We describe how cognition as a process of life itself functions based on info-computation and how the process of knowledge generation proceeds through interactions with the environment and among agents. > Results • Cognition and knowledge generation in a cognizing agent is understood as interaction with the world (potential information), which by processes of natural computation becomes actual information. That actual information, after integration, becomes knowledge for the agent. Heinz von Foerster is identified as a precursor of natural computing, in particular bio-computing. > Implications • IC provides a framework for the unified study of cognition in living organisms (from the simplest ones, such as bacteria, to the most complex ones) as well as in artifactual cognitive systems. > Constructivist content • It supports the constructivist view that knowledge is actively constructed by cognizing agents and shared in a process of social cognition. IC argues that this process can be modeled as info-computation. > Key words • Constructivism, info-computationalism, computing nature, morphological computing, cognition, self-organization, autopoiesis.
Some intriguing questions, such as: What is reality for an agent? How does the reality of a bacterium differ from the reality of a human brain? Do we need representation in order to understand reality? are still widely debated. Starting with the presentation of computing nature as an info-computational framework, where information is defined as a structure and computation as information processing, I will address questions of the evolution of increasingly complex living agents through interactions with the environment. In this context, the concept of computation will be discussed, together with the sense in which computation is observer-relative. Using the results on morphological/morphogenetic computation as information self-organization, I argue that reality for an agent is a result of networked agent-based computation. Consciousness is a (computational) process of information integration that evolved in organisms with a nervous system. I present an argument for why pancomputationalism (computing nature) is a sound scientific strategy and why panpsychism is not.
Gordana Dodig Crnkovic
added 2 research items
Historical development has led to the decay of Natural Philosophy, which until the 19th century included all of our knowledge about the physical world, into a growing multitude of specialized sciences. The focus on in-depth enquiry disentangled from its broad context led to the loss of a common world-view and to the impossibility of communication between specialist research fields, because of the different languages they developed in isolation. The need for a new unifying framework is becoming increasingly apparent, with information technology enabling and intensifying the communication between different research fields and knowledge communities. This time, not only the natural sciences but all of human knowledge is being integrated in a global network such as the Internet, with its diverse knowledge and language communities. Info-computationalism (ICON), as a synthesis of pancomputationalism and paninformationalism, presents a unifying framework for the understanding of natural phenomena, including living beings and their cognition, their ways of processing information, and their production of knowledge. Within ICON, the physical universe is understood as a network of computational processes on an informational structure.
Gordana Dodig Crnkovic
added 2 research items
In talk about models of cognition, the very mention of “computationalism” often incites reactions against the Turing machine model of the brain and the perceived determinism of the computational model. Neither of those two objections affects models based on natural computation, or computing nature, where the model of computation is broader than deterministic symbol manipulation. Computing nature consists of physical structures that form levels of organization, on which computation processes differ, from the quantum level up. It has been argued that on the lower levels of organization finite automata or Turing machines might be adequate, while on the level of the whole brain non-Turing computation is necessary, according to Andre Ehresmann (Ehresmann, 2012) and Subrata Ghosh et al. (Ghosh et al., 2014). http://www.pt-ai.org/iacap/2014/program
This article presents a naturalist approach to cognition, understood as a network of info-computational, autopoietic processes in living systems. It provides a conceptual framework for a unified view of cognition as evolved from the simplest to the most complex organisms, based on new empirical and theoretical results. It addresses three fundamental questions: what cognition is, how cognition works, and what cognition does at different levels of complexity of living organisms. By explicating the info-computational character of cognition – its evolution, agent-dependency and generative mechanisms – we can better understand its life-sustaining and life-propagating role. The info-computational approach contributes to rethinking cognition as a process of natural computation in living beings that can be applied to cognitive computation in artificial systems.
Gordana Dodig Crnkovic
added 4 research items
Fresco, Nir, Physical Computation and Cognitive Science. Berlin Heidelberg: Springer, 2014, Studies in Applied Philosophy, Epistemology and Rational Ethics, Vol. 12, XXII, 229 pp., €83.29. According to the author, the objective of this book is to establish a clearer understanding of computation in cognitive science and to argue for the fundamental role of concrete (physical) computation for cognition. He succeeds in both. At the same time he searches for the adequate scope of computation, repudiating the attempts of Putnam, Searle and others, who argued against (classical) computationalism in cognitive science, thereby trivializing computation. The book identifies ambiguities in present-day approaches to computation, and presents and compares different concepts of computation and their applicability to cognitive systems. The main claim is that for computation to be effective in a cognitive system, computation must be physical (concrete). That requirement is motivated by the development of cognitive theories in the direction of embodiment and embeddedness. The corollary is that the Turing model of computation does not suffice to cover all kinds of cognitive computational processes, as it is a model of a logical procedure describing the computation of a mathematical function, while cognitive processes in an organism cover a much broader range of information processing. Fresco presents computation as a concept that philosophers of computing, computer scientists, and cognitive scientists understand in multiple ways. He lists seven different conceptions of computation predominant in the literature. The argument shows – what should be obvious in any case – that one accepted formalization of a concept (the Turing machine) neither precludes reflection on its meaning nor prevents other, quite different formalizations. At present it is common to approach cognition through computation in a particular formalization based on the Turing model of computation.
However, computing is much broader than its logical aspects; its physical implementation (dependent also on the types of objects manipulated and on time-dependent processes of execution) is an aspect central to the understanding of cognition. In the same way as the model of the informational universe (always relative to an agent) is not trivial, because of the layered architecture of the informational universe organized in a hierarchical structure of levels of abstraction (Floridi 2009), the dynamics of that informational universe (which is also a computational universe) is not trivial either. But then, as Fresco rightly emphasizes, it is necessary to generalize the Turing model of computation to “concrete” (physical) computation. Fresco specifically explores digital physical computation (and he does not insist on a distinction between digital and discrete), so he deliberately limits his domain. Floridi has convincingly argued against digital ontology on principal grounds (Floridi 2009). Nevertheless, when it comes to the practical physical implementation of computation, current digital computers are successfully used for the calculation of continua, such as those found in fluid dynamics. But that is on the modeling side, and the question is only how fine-grained a model must be to represent a continuous system. “The distinction continuous/discrete is not only the property of the physical world; it is a property of the relation between the cognizing agent and the world.” (Dodig-Crnkovic and Müller 2009, p. 164). As the basis of an IP (information processing) account of computation, Fresco has chosen instructional information, “prescriptive in that its processing function is aimed at making something happen” (p. 140). The book presents key requirements for a physical system to perform nontrivial digital computation in terms of information processing. The system must have the capacity to: 1. Send information. 2. Receive information. 3. Store and retrieve information. 4.
Process information. 5. Actualize control information. (Implementing this last requirement is what separates trivial from nontrivial computation.) In this list of requirements, the strong influence of conventional computers is visible. As a summary, on p. 205, Fig. 8.1, there is a diagram showing the relations among the six different accounts of computation analyzed: 1. The most specific account, the PSS (physical symbol system) account: UTMs (Universal Turing Machines) and universal stored-program digital computers. 2. The FSM (formal symbol manipulation) account: program-controlled digital computing systems – special-purpose TMs, special-purpose digital computers. 3. The algorithm execution account: digital systems acting in accordance with an algorithm – FSA (Finite State Automata), hypercomputers. 4. The Mechanistic and IIP (Instructional Information Processing) accounts: logic gates, logic circuits, discrete connectionist networks. 5. The most general account, the “Triviality” account: every physical object computes – the Searle triviality thesis and the Putnam triviality theorem imply that every sufficiently complex system performs every digital computation. In the list above, between items 4 and 5, the account of computing nature is missing, that is, the claim that the whole of nature computes, in general as a network of computational networks on different levels of organization (Dodig-Crnkovic 2014). Nature continuously performs information processing that computes its next state (Chaitin 2007), where every physical system performs some computation. It is very important to make a distinction between Computing Nature (Stepney et al. 2006; Stepney 2008; Dodig-Crnkovic and Müller 2009; Rozenberg et al. 2012; Zenil 2012) and the “Triviality” account, in which every physical system performs every kind of computation.
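Fresco's five capacities listed above can be read as an abstract interface. The following Python sketch is my own illustration, not the book's formalism; all class and method names are hypothetical, and a toy register store stands in for physical memory:

```python
# Hedged sketch: Fresco's five capacities for nontrivial digital computation,
# rendered as a minimal class. The "inc" control instruction is an invented toy.
class PhysicalComputingSystem:
    def __init__(self):
        self.memory = {}    # capacity 3: store and retrieve information
        self.outbox = []    # capacity 1: send information

    def send(self, datum):
        self.outbox.append(datum)

    def receive(self, datum, cell):
        self.memory[cell] = datum          # capacity 2: receive information

    def process(self, cell):
        self.memory[cell] += 1             # capacity 4: process information

    def actualize(self, instruction):
        # capacity 5: actualize control information -- the instruction itself
        # determines which operation the system performs next.
        op, cell = instruction
        if op == "inc":
            self.process(cell)
        elif op == "emit":
            self.send(self.memory[cell])

s = PhysicalComputingSystem()
s.receive(41, "r0")
s.actualize(("inc", "r0"))
s.actualize(("emit", "r0"))
print(s.outbox)  # prints [42]
```

On this toy rendering, capacity 5 is what makes the difference between a system that merely changes state and one whose stored information controls what it does next, which is the sense in which it separates trivial from nontrivial computation.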
Fresco seems skeptical about the computing nature approach, as his focus in this book is to present the state of the art and to clear up existing muddles around computation and cognition, not so much to introduce new developments in the field. “It remains an open question though whether embodied computation is indeed the type of computation that takes place in nature. But at least traditionally, it has not been invoked as the basis of the Computational Theory of Mind (CTM).” (p. 4). See also (Fresco and Staines 2014). Even though the CTM does not assume natural computation as a basis of computational approaches to cognition, in the computing nature approach embodied computation follows naturally from the basic assumptions: if cognition is explained computationally, that computation must be embodied. The fact that traditional CTM did not recognize the importance of embodiment points to CTM's historical limitations. At the time the classical computational theory of mind was developed, the belief prevailed that it would be possible to grow and sustain a conscious “brain in a vat”. However, the understanding of cognition has increased dramatically since the days of classical CTM, and any respectable contemporary theory of cognition must address embodiment. Fresco makes in this book the important and correct argument that the explanatory frameworks of computationalism, connectionism and dynamicism, contrary to frequent claims, are not mutually exclusive but rather complementary approaches, and he suggests a way toward their integration. Some open questions that remain outside the scope of Physical Computation and Cognitive Science are still of interest and should be mentioned. One fundamental perspective that is missing when it comes to cognition is the biological one.
Cognition is a fundamentally biological phenomenon, and in order to construct cognitive computational artifacts it is important to understand how natural cognition functions, develops, and evolves (Maturana and Varela 1980). It is hard to address cognitive phenomena without the biological perspective. The computing nature approach includes those aspects and makes them an integral part of its discourse. As a consequence of the aims and the framework chosen in the book, computers are taken to be the machines we have today, which brings with it some assumptions and constraints that are not necessary. Among them is the assumption of the necessary infallibility of computation, implicitly taken for granted, for example, in the discussion of miscomputation (p. 41). Turing's own view of intelligent computing machines with learning capability was different, as he claims: “There are indications however that it is possible to make the machine display intelligence at the risk of its making occasional serious mistakes.” (Turing 1947), as quoted in (Siegelmann 2013). Allowing cognitive computation to make mistakes and even fatal errors might change the arguments and conclusions offered in the book. The next discussion that I find lacking concerns the role of an explicit account of an agent for whom or which a process is computation. In the computing nature approach, with the Hewitt model of computation (Hewitt 2012) in the background, an agency-based view of cognition becomes visible and obvious. Instead of having one single definition of computation for all levels of organization, we can define computation in the sense of the Hewitt model through interactions between computational agents (actors) that exchange information.
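The agency-based picture just mentioned can be made concrete with a minimal sketch. The following Python code is only my illustration of message-passing between agents, with all names hypothetical; Hewitt's actual Actor model is concurrent, indeterminate and far richer than this sequential toy:

```python
# Minimal actor-style sketch (illustrative only): each agent holds local state
# and a behavior; "computation" is nothing over and above the processing of
# messages exchanged between agents. A sequential queue stands in for a
# concurrent message fabric.
from collections import deque

class Agent:
    def __init__(self, name, behavior):
        self.name = name
        self.state = 0
        self.behavior = behavior   # how this agent reacts to a message

    def handle(self, msg, network):
        self.behavior(self, msg, network)

class Network:
    def __init__(self):
        self.agents = {}
        self.queue = deque()

    def add(self, agent):
        self.agents[agent.name] = agent

    def post(self, target, msg):
        self.queue.append((target, msg))

    def run(self):
        # Deliver messages until no information remains in flight.
        while self.queue:
            target, msg = self.queue.popleft()
            self.agents[target].handle(msg, self)

def doubler(agent, msg, net):
    agent.state = msg * 2
    net.post("collector", agent.state)   # pass the transformed information on

def collector(agent, msg, net):
    agent.state += msg                   # accumulate incoming information

net = Network()
net.add(Agent("doubler", doubler))
net.add(Agent("collector", collector))
net.post("doubler", 3)
net.run()
print(net.agents["collector"].state)  # prints 6
```

Note that nothing here defines computation globally: what counts as a computational step is defined locally, by each agent's reaction to the information it receives, which is the point of the agency-based view.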
The prospect of further development of computational accounts of cognition is nicely outlined in the concluding chapter of the book: “Research attention should be directed toward gaining a better understanding of the types of information processed by natural cognitive agents, how they are processed and interact and how such processing throws light on human cognitive architectures. Such research should examine how cognitive agents produce, acquire, extract, analyze and use information in learning, planning and decision making, for example. It can inform contemporary cognitive science by identifying the mechanisms in the human cognitive architecture that are necessary for these information-processing operations.” (p. 225). To sum up, the main virtues of this timely and enlightening book are its systematicity and unusual clarity in eliciting the key requirements for a physical system to perform concrete digital computation, and its comparison of different existing approaches to cognition. The book shows clearly that computing in general is broader than abstract models of computation, and that cognitive science should be based on this broader notion.

Gordana Dodig-Crnkovic
Chalmers University of Technology and University of Gothenburg

References
Chaitin, G. 2007. Epistemology as Information Theory: From Leibniz to Ω, in G. Dodig-Crnkovic, ed. Computation, Information, Cognition – The Nexus and The Liminal. Newcastle UK: Cambridge Scholars: 2–17.
Dodig-Crnkovic, G. 2007. Epistemology Naturalized: The Info-Computationalist Approach. APA Newsletter on Philosophy and Computers, 06/2: 9–13.
Dodig-Crnkovic, G. 2014. Modeling Life as Cognitive Info-Computation, in Computability in Europe 2014, eds. A. Beckmann, E. Csuhaj-Varjú, and K. Meer, LNCS. Berlin Heidelberg: Springer: 153–163.
Dodig-Crnkovic, G. and Müller, V. 2009. A Dialogue Concerning Two World Systems: Info-Computational vs. Mechanistic, in Information and Computation, eds. G. Dodig-Crnkovic and M. Burgin. Singapore: World Scientific Pub Co Inc.
Floridi, L. 2009. Against digital ontology. Synthese, 168/1: 151–178.
Fresco, N. and Staines, P. 2014. A revised attack on computational ontology. Minds and Machines, 24/1: 101–122.
Hewitt, C. 2012. What is computation? Actor Model versus Turing's Model, in A Computable Universe: Understanding Computation and Exploring Nature As Computation, ed. H. Zenil. World Scientific Publishing Company/Imperial College Press: 159–177.
Maturana, H. and Varela, F. 1980. Autopoiesis and Cognition: The Realization of the Living. Dordrecht, Holland: D. Reidel Pub. Co.
Rozenberg, G., Bäck, T. and Kok, J.N., eds. 2012. Handbook of Natural Computing. Berlin Heidelberg: Springer.
Siegelmann, H.T. 2013. Turing on Super-Turing and Adaptivity. Progress in Biophysics and Molecular Biology, 113/1: 117–126.
Stepney, S. et al. 2006. Journeys in Non-Classical Computation II: Initial Journeys and Waypoints. Int. J. Parallel Emerg. Distr. Syst., 21: 97–125.
Stepney, S. 2008. The neglected pillar of material computation. Physica D: Nonlinear Phenomena, 237/9: 1157–1164.
Zenil, H., ed. 2012. A Computable Universe: Understanding Computation and Exploring Nature As Computation. Singapore: World Scientific Publishing Company/Imperial College Press.
Abstract. Informational Structural Realism, ISR (Floridi 2008), describes reality as a complex informational structure for an epistemic agent interacting with the universe through the exchange of data as constraining affordances. In conjunction with Naturalist Computationalism – the view that the dynamics of nature can be understood as computation – Floridi's Informational Structural Realism presents a basis for constructing a unified framework of Infocomputationalism. In this framework, the fundamental mechanism of all natural computation is morphological computation, expressed as a process of information self-organization, with informational structure understood in the sense of Floridi's ISR. Recently, in robotics, morphological computing has been used for the decentralized embodied control of robots. In this paper we describe how appropriate body morphology saves information-processing (computation) resources and enables learning through the self-structuring of information in an epistemic, cognizing agent.
As a global community we are facing a number of existential challenges, such as global warming, scarcity of basic commodities, environmental degradation and other threats to life on Earth, as well as the possible unintended consequences of AI, nanotechnology, biotechnology, and similar developments. Among the worldwide responses to those challenges, the framework programme for European research and technological development, Horizon 2020, has formulated the Science with and for Society Work Programme, based on Responsible Research and Innovation, with the goal of supporting research that contributes to the progress of humanity and prevents catastrophic events and their consequences. This goal can only be reached if we educate responsible researchers and engineers with both deep technical knowledge and broad disciplinary and social competence. Drawing on experiences at two Swedish universities, this paper argues for the benefits of teaching professional ethics and sustainable development to engineering students.
Gordana Dodig Crnkovic
added an update
Gordana Dodig Crnkovic
added a research item
This paper presents a view of nature as a network of info-computational agents organized in a dynamical hierarchy of levels. It provides a framework for the unification of currently disparate understandings of natural, formal, technical, behavioral and social phenomena, based on information as a structure – differences in one system that cause differences in another system – and computation as its dynamics, i.e. the physical process of morphological change in the informational structure. We address some of the frequent misunderstandings regarding natural/morphological computational models and their relationships to physical systems, especially cognitive systems such as living beings. Natural morphological info-computation as a conceptual framework necessitates the generalization of models of computation beyond the traditional Turing machine model of symbol manipulation, and requires agent-based, concurrent, resource-sensitive models of computation in order to cover the whole range of phenomena from physics to cognition. The central role of agency, particularly material vs. cognitive agency, is highlighted.
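The characterization of information as differences in one system that cause differences in another can be sketched in a few lines. The following Python toy is my own illustration, not a model from the paper; the node names and propagation rule are invented:

```python
# Toy sketch of "differences that cause differences": each node of an
# informational structure holds a state; a difference introduced at one node
# propagates along the links and changes the states of downstream nodes.
# The unfolding of these changes is the "computation" of the structure.
links = {"a": ["b", "c"], "b": ["c"], "c": []}
state = {"a": 0, "b": 0, "c": 0}

def perturb(node, difference):
    """Introduce a difference at `node` and propagate it through the network."""
    state[node] += difference
    for neighbour in links[node]:
        # the difference in one node causes differences in its neighbours
        perturb(neighbour, difference)

perturb("a", 1)
print(state)  # prints {'a': 1, 'b': 1, 'c': 2}
```

Node "c" ends up doubly changed because two paths of influence reach it, a minimal illustration of how structure (the links) shapes the dynamics.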
Gordana Dodig Crnkovic
added a project reference
Gordana Dodig Crnkovic
added an update
We have a post-doc position open in the project Morphological Computing in Cognitive Systems.
 
Gordana Dodig Crnkovic
added an update
We are organising a conference as part of the IS4SI summit in 2017:
Complexity, Structure, Causation in the Computing Nature (chaired by Marcin J. Schroeder), which will cover the topic of morphological computing as well.
 
Gordana Dodig Crnkovic
added 4 project references
Gordana Dodig Crnkovic
added a project goal