Chapter

Philosophy of Computation

Authors:
  • U of Hertfordshire, LSE

Abstract

Unconventional computation emerged as a response to a series of technological and societal challenges. The main source of these challenges is the expected collapse of Moore's law: it is very likely that the existing trend of building faster digital information processing machines will come to an end. This chapter provides a broad philosophical discussion of what might be needed to construct the theoretical machinery for understanding the obstacles and identifying alternative designs. The key issue addressed is simple to formulate: given a physical system, what can it compute? There is an enormous conceptual depth to this question, and some specific aspects are discussed systematically. The discussion covers the digital philosophy of computation, two reasons why rocks cannot be used for computation, a new depth to the ontology of number, and ensemble computation inspired by the recent understanding of the computing abilities of living cell aggregates.


... The current understanding of AI ethics is rather vague, due to the broad definitions of AI used in the literature, and does not necessarily reflect the aspects and demarcations within the research community, the algorithms and methods, the computing substrates [11], and the target applications. ...
Chapter
Full-text available
In this paper, we present a set of key demarcations that are particularly important when discussing ethical and societal issues of current AI research and applications. Properly distinguishing between issues and concerns related to Artificial General Intelligence and weak AI, between symbolic and connectionist AI, and between AI methods, data and applications is a prerequisite for an informed debate. Such demarcations would not only facilitate much-needed discussions on the ethics of current AI technologies and research; sufficiently establishing them would also enhance knowledge-sharing and support rigor in interdisciplinary research between the technical and social sciences.
... Although Turing's model provides a framework for answering fundamental questions about computation, "…as soon as one leaves the comfort provided by the abundance of mathematical machinery used to describe digital computation, the world seems to be packed with paradoxes" [44]. ...
Article
Full-text available
Synthetic biology uses living cells as the substrate for performing human-defined computations. Many current implementations of cellular computing are based on the “genetic circuit” metaphor, an approximation of the operation of silicon-based computers. Although this conceptual mapping has been relatively successful, we argue that it fundamentally limits the types of computation that may be engineered inside the cell, and fails to exploit the rich and diverse functionality available in natural living systems. We propose the notion of “cellular supremacy” to focus attention on domains in which biocomputing might offer superior performance over traditional computers. We consider potential pathways toward cellular supremacy, and suggest application areas in which it may be found. Synthetic biology uses cells as its computing substrate, often based on the genetic circuit concept. In this Perspective, the authors argue that existing synthetic biology approaches based on classical models of computation limit the potential of biocomputing, and propose that living organisms have under-exploited capabilities.
Article
Full-text available
We give a rigorous framework for the interaction of physical computing devices with abstract computation. Device and program are mediated by the non-logical representation relation; we give the conditions under which representation and device theory give rise to commuting diagrams between logical and physical domains, and the conditions for computation to occur. We give the interface of this new framework with currently existing formal methods, showing in particular its close relationship to refinement theory, and the implications for questions of meaning and reference in theoretical computer science. The case of hybrid computing is considered in detail, addressing in particular the example of an Internet-mediated social machine, and the abstraction/representation framework used to provide a formal distinction between heterotic and hybrid computing. This forms the basis for future use of the framework in formal treatments of non-standard physical computers. © 2015 The Author(s) Published by the Royal Society. All rights reserved.
Article
Full-text available
We introduce and define 'heterotic computing' as a combination of two or more computational systems such that they provide an advantage over either substrate used separately. This first requires a definition of physical computation. We take the framework in Horsman et al. (Horsman et al. 2014 Proc. R. Soc. A 470, 20140182. (doi:10.1098/rspa.2014.0182)), now known as abstract-representation theory, then outline how to compose such computational systems. We use examples to illustrate the ubiquity of heterotic computing, and to discuss the issues raised when one or more of the substrates is not a conventional silicon-based computer. We briefly outline the requirements for a proper theoretical treatment of heterotic computational systems, and the advantages such a theory would provide. © 2015 The Author(s) Published by the Royal Society. All rights reserved.
Article
Full-text available
Interaction computing is inspired by the observation that cell metabolic/regulatory systems construct order dynamically, through constrained interactions between their components and based on a wide range of possible inputs and environmental conditions. The goals of this work are to (i) identify and understand mathematically the natural subsystems and hierarchical relations in natural systems enabling this and (ii) use the resulting insights to define a new model of computation based on interactions that is useful for both biology and computation. The dynamical characteristics of the cellular pathways studied in systems biology relate, mathematically, to the computational characteristics of automata derived from them, and their internal symmetry structures to computational power. Finite discrete automata models of biological systems such as the lac operon, the Krebs cycle and p53–mdm2 genetic regulation constructed from systems biology models have canonically associated algebraic structures (their transformation semigroups). These contain permutation groups (local substructures exhibiting symmetry) that correspond to ‘pools of reversibility’. These natural subsystems are related to one another in a hierarchical manner by the notion of ‘*weak control*’. We present *natural subsystems* arising from several biological examples and their weak control hierarchies in detail. Finite simple non-Abelian groups are found in biological examples and can be harnessed to realize *finitary universal computation*. This allows ensembles of cells to achieve any desired finitary computational transformation, depending on external inputs, via suitably constrained interactions. Based on this, *interaction machines* that grow and change their structure recursively are introduced and applied, providing a natural model of computation driven by interactions.
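The algebraic machinery mentioned above (transformation semigroups and their permutation-group substructures) can be illustrated with a few lines of code. The following is a minimal, self-contained sketch using a made-up three-state automaton rather than any of the biological models (lac operon, Krebs cycle, p53-mdm2) analysed in the paper; in practice such analyses are done with dedicated tools such as SgpDec.

```python
# Minimal sketch: compute the transformation semigroup generated by the input
# letters of a toy automaton and pick out its bijective (reversible) elements.
# The automaton below is invented for illustration; it is NOT one of the
# biological models (lac operon, Krebs cycle, p53-mdm2) analysed in the paper.

letters = {
    "a": (1, 2, 0),   # acts as a cyclic permutation of the states {0, 1, 2}
    "b": (0, 0, 2),   # collapses state 1 onto state 0: not a permutation
}

def compose(f, g):
    """Apply f first, then g (maps are given as tuples indexed by state)."""
    return tuple(g[f[s]] for s in range(len(f)))

def transformation_semigroup(generators):
    """Close the generating maps under composition (finite, so this terminates)."""
    elements = set(generators)
    frontier = set(elements)
    while frontier:
        new = {compose(f, g) for f in elements for g in frontier}
        new |= {compose(f, g) for f in frontier for g in elements}
        frontier = new - elements
        elements |= frontier
    return elements

S = transformation_semigroup(letters.values())
permutations = [f for f in S if len(set(f)) == len(f)]   # bijective maps only
print(f"{len(S)} semigroup elements, of which {len(permutations)} are permutations")
```

The bijective elements found here are full permutations of the state set; the 'pools of reversibility' in the paper are more general, covering permutation groups acting on subsets of states, which the holonomy (Krohn-Rhodes) decomposition then organises into a weak-control hierarchy.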
Article
Full-text available
Purpose: This paper aims to describe relationships between cybernetics and design, especially service design, which is a component of service-craft; to frame cybernetics as a language for design, especially behavior-focused design. Design/methodology/approach: The material in this paper was developed for a course on cybernetics and design. Work began by framing material on cybernetics in terms of models. As the course progressed, the relevance of the models to design became clearer. A first focus was on applying the models to describe human-computer interaction; later another focus emerged, viewing cybernetic processes as analogs for design processes. These observations led to a review of the history of design methods and design rationale. Findings: The paper argues that design practice has moved from hand-craft to service-craft and that service-craft exemplifies a growing focus on systems within design practice. It also proposes cybernetics as a source for practical frameworks that enable understanding of dynamic systems, including specific interactions, larger systems of service, and the activity of design itself. It also shows that the development of first- and second-generation design methods parallels the development of first- and second-generation cybernetics. Finally, it argues that design is essentially political, frames design as conversation, and proposes cybernetics as a language for design and a foundation of a broad design education. Research limitations/implications: The paper suggests opportunities for more research on the historical relationship between cybernetics and design methods, and design research on modeling user goals. Practical implications: The paper offers tools for understanding and managing the complicated communities of systems that designers increasingly face. Originality/value: The paper suggests models useful for practicing designers and proposes changes to design education.
Conference Paper
Full-text available
Abstract—We give an overview of how sensorimotor experience can be operationalized for interaction scenarios in which humanoid robots acquire skills and linguistic behaviours via enacting a “form-of-life” in interaction games (following Wittgenstein) with humans. The enactive paradigm is introduced, which provides a powerful framework for the construction of complex adaptive systems based on interaction, habit, and experience. Enactive cognitive architectures (following insights of Varela, Thompson and Rosch) that we have developed support social learning and robot ontogeny by harnessing information-theoretic methods and raw, uninterpreted sensorimotor experience to scaffold the acquisition of behaviours. The success criterion here is validation by the robot engaging in ongoing human-robot interaction with naive participants who, over the course of iterated interactions, shape the robot’s behavioural and linguistic development. Engagement in such interaction, exhibiting aspects of purposeful, habitual recurring structure, evidences the developed capability of the humanoid to enact language and interaction games as a successful participant.
Article
Full-text available
Some have suggested that there is no fact of the matter as to whether or not a particular physical system realizes a particular computational description. This suggestion has been taken to imply that computational states are not real and cannot, for example, provide a foundation for the cognitive sciences. In particular, Putnam has argued that every ordinary open physical system realizes every abstract finite automaton, implying that the fact that a particular computational characterization applies to a physical system does not tell one anything about the nature of that system. Putnam's argument is scrutinized, and found inadequate because, among other things, it employs a notion of causation that is too weak. I argue that if one's view of computation involves embeddedness (inputs and outputs) and full causality, one can avoid the universal realizability results. Therefore, the fact that a particular system realizes a particular automaton is not a vacuous one, and is often explanatory. Furthermore, I claim that computation would not necessarily be an explanatorily vacuous notion even if it were universally realizable.
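Putnam's universal-realizability move is easy to caricature in code. The toy sketch below is my illustration, not Putnam's construction or the paper's argument: it shows why a mapping fixed up after the fact is unconstraining, since any run of any finite automaton can be "found" in any sequence of distinct physical states simply by labelling them appropriately.

```python
# Toy illustration (mine, not Putnam's proof): a post-hoc labelling makes any
# sequence of distinct physical states "implement" any run of any automaton.

def fsa_run(transitions, start, inputs):
    """Run a deterministic finite automaton, returning the visited states."""
    state, run = start, [start]
    for symbol in inputs:
        state = transitions[(state, symbol)]
        run.append(state)
    return run

# An arbitrary two-state automaton and input word (purely illustrative).
transitions = {("q0", 0): "q1", ("q0", 1): "q0",
               ("q1", 0): "q0", ("q1", 1): "q1"}
run = fsa_run(transitions, "q0", [0, 1, 0])      # -> ['q0', 'q1', 'q1', 'q0']

# An arbitrary "physical" trace: any four distinct measured states will do.
physical_trace = ["p_17.3", "p_42.1", "p_08.9", "p_55.0"]

# The trivial realization map pairs the i-th physical state with the i-th
# automaton state; nothing about the physics constrains this labelling.
realization = dict(zip(physical_trace, run))
print(realization)
```

The embeddedness and full-causality requirements discussed above amount to demanding that one fixed labelling work counterfactually, for every possible input, rather than for a single post-hoc trace.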
Article
Full-text available
After briefly discussing the relevance of the notions "computation" and "implementation" for cognitive science, I summarize some of the problems that have been found in their most common interpretations. In particular, I argue that standard notions of computation together with a "state-to-state correspondence view of implementation" cannot overcome difficulties posed by Putnam's Realization Theorem and that, therefore, a different approach to implementation is required. The notion "realization of a function", developed out of physical theories, is then introduced as a replacement for the notional pair "computation-implementation". After gradual refinement, taking practical constraints into account, this notion gives rise to the notion "digital system", which singles out physical systems that could actually be used, and possibly even built.
Book
A Physarum machine is a programmable amorphous biological computer experimentally implemented in the vegetative state of true slime mould Physarum polycephalum. It comprises an amorphous yellowish mass with networks of protoplasmic veins, programmed by spatial configurations of attracting and repelling gradients. This book demonstrates how to create experimental Physarum machines for computational geometry and optimization, distributed manipulation and transportation, and general-purpose computation. Being very cheap to make and easy to maintain, the machine also functions on a wide range of substrates and in a broad scope of environmental conditions. As such a Physarum machine is a ‘green’ and environmentally friendly unconventional computer. The book is readily accessible to a nonprofessional reader, and is a priceless source of experimental tips and inventive theoretical ideas for anyone who is inspired by novel and emerging non-silicon computers and robots.
Article
As unconventional computing comes of age, we believe a revolution is needed in our view of computer science.
Article
Computing is a high-level process of a physical system. Recent interest in non-standard computing systems, including quantum and biological computers, has brought this physical basis of computing to the forefront. There has been, however, no consensus on how to tell if a given physical system is acting as a computer or not; leading to confusion over novel computational devices, and even claims that every physical event is a computation. In this paper we introduce a formal framework that can be used to determine whether or not a physical system is performing a computation. We demonstrate how the abstract computational level interacts with the physical device level, drawing the comparison with the use of mathematical models to represent physical objects in experimental science. This powerful formulation allows a precise description of the similarities between experiments, computation, simulation, and technology, leading to our central conclusion: physical computing is the use of a physical system to predict the outcome of an abstract evolution. We give conditions that must be satisfied in order for computation to be occurring, and illustrate these with a range of non-standard computing scenarios. The framework also covers broader computing contexts, where there is no obvious human computer user. We define the critical notion of a ‘computational entity’, and show the role this plays in defining when computing is taking place in physical systems.
Article
Putnam proved a theorem stating that, under very generic conditions, every physical system implements any finite state automaton. This challenges the whole field of unconventional computation in the most fundamental way. The present study is a response to this challenge. Putnam’s theorem is revisited in the context of unconventional computation where the emphasis is on the practical use of a system for a computation. The main goal of this study is to identify an algorithm that can solve the following (natural computability) problem: Given a physical system, identify the automaton (automata) that it naturally implements. A rigorous definition of the concept of natural computation is suggested: The natural implementation results in the largest computing power and has the lowest realisation cost. These ideas are formalized using rigorous mathematical reasoning. A generic algorithm for identifying the automata of interest is presented.
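To make the selection criterion concrete, here is a schematic sketch. The candidate automata and the integer/float proxies for power and cost are my invention; the paper's formal definitions and its generic identification algorithm are considerably richer.

```python
from dataclasses import dataclass

# Schematic only: candidate automata that a given physical system could be read
# as implementing, each scored by proxies for computing power and realisation
# cost.  The entries and scores below are invented for illustration.

@dataclass
class Candidate:
    name: str       # automaton the system might naturally implement
    power: int      # proxy for computing power (e.g. distinguishable states)
    cost: float     # proxy for realisation cost (e.g. read/write effort)

candidates = [
    Candidate("two-state flip-flop",  power=2, cost=1.0),
    Candidate("mod-4 counter",        power=4, cost=3.5),
    Candidate("mod-4 counter (alt.)", power=4, cost=2.0),
]

# "Natural" implementation: largest computing power, ties broken by lowest cost.
natural = max(candidates, key=lambda c: (c.power, -c.cost))
print(natural.name)   # -> mod-4 counter (alt.)
```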
Article
To compute is to execute an algorithm. More precisely, to say that a device or organ computes is to say that there exists a modelling relationship of a certain kind between it and a formal specification of an algorithm and supporting architecture. The key issue is to delimit the phrase ‘of a certain kind’. I call this the problem of distinguishing between standard and nonstandard models of computation. The successful drawing of this distinction guards Turing's 1936 analysis of computation against a difficulty that has persistently been raised against it, and undercuts various objections that have been made to the computational theory of mind.
Article
The Law of Requisite Variety is a mathematical theorem relating the number of control states of a system to the number of variations in control that is necessary for effective response. The Law of Requisite Variety does not consider the components of a system and how they must act together to respond effectively. Here we consider the additional requirement of scale of response and the effect of coordinated versus uncoordinated response as a key attribute of complex systems. The components of a system perform a task, with a number of such components needed to act in concert to perform subtasks. We apply the resulting generalization—a Multiscale Law of Requisite Variety—to understanding effective function of complex biological and social systems. This allows us to formalize an understanding of the limitations of hierarchical control structures and the inadequacy of central control and planning in the solution of many complex social problems and the functioning of complex social organizations, e.g., the military, healthcare, and education systems. © 2004 Wiley Periodicals, Inc. Complexity 9: 37–45, 2004
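For orientation, the classical single-scale law can be stated compactly in entropy form. This is the textbook Ashby inequality, not the multiscale generalization derived in the paper, which additionally indexes each term by scale of response:

```latex
% Classical Law of Requisite Variety, entropy form: the residual uncertainty in
% the outcome E cannot fall below the uncertainty of the disturbance D minus
% the variety available to the regulator R ("only variety can destroy variety").
\[
  H(E) \;\geq\; H(D) - H(R)
\]
```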
Article
Putnam (Representations and Reality. MIT Press, Cambridge, 1988) and Searle (The Rediscovery of the Mind. MIT Press, Cambridge, 1992) famously argue that almost every physical system implements every finite computation. This universal implementation claim, if correct, puts certain functional and computational views of the mind at risk of triviality. Several authors have offered theories of implementation that allegedly avoid the pitfalls of universal implementation. My aim in this paper is to suggest that these theories are still consistent with a weaker result, which is the nomological possibility of systems that simultaneously implement different complex automata. Elsewhere (Shagrir in J Cogn Sci, 2012) I argue that this simultaneous implementation result challenges a computational sufficiency thesis (articulated by Chalmers in J Cogn Sci, 2012). My focus here is on theories of implementation. After presenting the basic simultaneous implementation construction, I argue that these theories do not avoid the simultaneous implementation result. The conclusion is that the idea that the implementation of the right kind of automaton suffices for the possession of a mind is dubious.
Article
It took the author 30 years to develop the Viable System Model, which sets out to explain how systems are viable: that is, capable of independent existence. He wanted to elucidate the laws of viability in order to facilitate the management task, and did so in a stream of papers and two (of his seven) books. Much misunderstanding about the V.S.M. and its use seems to exist; especially, its methodological foundations have been largely forgotten, while its major results have hardly been noted. This paper reflects on the history, nature and present status of the V.S.M., without seeking once again to expound the model in detail or to demonstrate its validity. It does, however, provide a synopsis, present the methodology and confront some highly contentious issues about both the managerial and scientific paradigms.
Article
Interaction Computing (IC) aims to map the properties of integrable low-dimensional non-linear dynamical systems to the discrete domain of finite-state automata in an attempt to reproduce in software the self-organizing and dynamically stable properties of sub-cellular biochemical systems. As the work reported in this paper is still at the early stages of theory development it focuses on the analysis of a particularly simple chemical oscillator, the Belousov-Zhabotinsky (BZ) reaction. After retracing the rationale for IC developed over the past several years from the physical, biological, mathematical, and computer science points of view, the paper presents an elementary discussion of the Krohn-Rhodes decomposition of finite-state automata, including the holonomy decomposition of a simple automaton, and of its interpretation as an abstract positional number system. The method is then applied to the analysis of the algebraic properties of discrete finite-state automata derived from a simplified Petri Net model of the BZ reaction. In the simplest possible and symmetrical case the corresponding automaton is, not surprisingly, found to contain exclusively cyclic groups. In a second, asymmetrical case, the decomposition is much more complex and includes five different simple non-abelian groups whose potential relevance arises from their ability to encode functionally complete algebras. The possible computational relevance of these findings is discussed and possible conclusions are drawn.
Article
Medicine is a science of control and should, therefore, be a subject that is particularly open to cybernetic investigation and enlightenment, for cybernetics is concerned with control and communication. Yet, in previous columns, I have reported cases that originate in the cybernetic literature and with the grand old men (yes, I'm afraid, men) of cybernetics, who have discussed systems that are in essence uncontrollable.
Article
The application of basic cybernetic laws and information processing principles to the classroom situation suggests that traditional and modern teaching methods, regarded as control systems, are equivalent in terms of efficiency. As control structures, they embody different principles and are not decomposable. Examination of these principles reveals that the two methods are radically incompatible, in the sense that techniques developed in the one cannot be transferred to the other without dislocation of the system as a whole. Attempts to modernize the traditional method, or to formalize the modern method are ill-conceived. Such mixed methods violate basic laws of information and control, and cannot work. It is suggested that many of the problems underlying the Great Education Debate are a consequence of the impossible state of affairs created by the widespread introduction of mixed methods.
Article
“Triviality arguments” against functionalism in the philosophy of mind hold that the claim that some complex physical system exhibits a given functional organization is either trivial or has much less content than is usually supposed. I survey several earlier arguments of this kind, and present a new one that overcomes some limitations in the earlier arguments. Resisting triviality arguments is possible, but requires functionalists to revise popular views about the “autonomy” of functional description.
Article
Both Putnam and Searle have argued that every abstract automaton is realized by every physical system, a claim that leads to a reductio argument against Cognitivism or Strong AI: if it is possible for a computer to be conscious by virtue of realizing some abstract automaton, then by Putnam’s theorem every physical system also realizes that automaton, and so every physical system is conscious—a conclusion few supporters of Strong AI would be willing to accept. Dennett has suggested a criterion of reverse engineering for identifying “real patterns,” and I argue that this approach is also very effective at identifying “real realizations.” I focus on examples of real-world implementations of complex automata because previous attempts at answering Putnam’s challenge have been overly restrictive, ruling out some realizations that are in fact paradigmatic examples of practical automaton realization. I also argue that some previous approaches have at the same time been overly lenient in accepting counter-intuitive realizations of trivial automata. I argue that the reverse engineering approach avoids both of these flaws. Moreover, Dennett’s approach allows us to recognize that some realizations are better than others, and the line between real realizations and non-realizations is not sharp.
Article
When we are concerned with the logical form of a computation and its formal properties, then it can be theoretically described in terms of mathematical and logical functions and relations between abstract entities. However, actual computation is realised by some physical process, and the latter is of course subject to physical laws and the laws of thermodynamics in particular. An issue that has been the subject of much controversy is that of whether or not there are any systematic connections between the logical properties of computations considered abstractly and the thermodynamical properties of their concrete physical realizations. Landauer [R. Landauer, Irreversibility and heat generation in the computing process, IBM Journal of Research and Development 5 (1961) 183–191. Reprinted in Leff and Rex (1990)] proposed such a general connection, known as Landauer’s Principle. To resolve this matter an analysis of the notion of the implementation of a computation by a physical system is clearly required. Another issue that calls for an analysis of implementation is that of realism about computation. The account of implementation presented here is based on the notion of an L-machine. This is a hybrid physical-logical entity that combines a physical device, a specification of which physical states of that device correspond to various logical states, and an evolution of that device which corresponds to the logical transformation L. The most general form of Landauer’s Principle can be precisely stated in terms of L-machines, namely that the logical irreversibility of L implies the thermodynamic irreversibility of every corresponding L-machine.
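For reference, the familiar quantitative form of Landauer's Principle reads as follows. This is the standard textbook statement, included for orientation; the paper's contribution is the general L-machine formulation, not this specific bound:

```latex
% Landauer's Principle, quantitative form: erasing logical information costs heat.
% Q is the heat dissipated into the environment at temperature T, k_B is
% Boltzmann's constant, and \Delta H is the number of bits of logical
% information erased by the (logically irreversible) transformation L.
\[
  \langle Q \rangle \;\geq\; k_{\mathrm{B}}\, T \ln 2 \cdot \Delta H
\]
```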
Article
One can construct any finite-state machine as a cascade interconnection of machines whose inputs either permute the states or reset them all to one state. Each permutation group needed in the construction is a homomorphic image of a group generated by the action of a set of input sequences on a state subset of the original machine. Proofs of these facts will be given and their application to the Krohn-Rhodes theory described.
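The cascade (series) interconnection described above is easy to sketch in code. The component machines below are invented for illustration; the point is only the wiring: the front machine sees the external input, while the back machine's input is computed from the external input together with the front machine's current state.

```python
# Minimal sketch of cascade wiring (Krohn-Rhodes style): front machine driven by
# the external input, back machine driven by a function of that input and the
# front machine's state.  The component machines are invented for illustration.

def step_front(state, x):
    """A permutation machine on {0, 1}: input 0 holds, input 1 swaps."""
    return (state + x) % 2

def step_back(state, y):
    """A reset machine: 'hold' keeps the state, 'reset' forces it to 0."""
    return state if y == "hold" else 0

def wire(front_state, x):
    """Cascade connection: derive the back machine's input letter."""
    return "hold" if (front_state + x) % 2 == 0 else "reset"

def cascade_run(inputs, front=0, back=1):
    """Run the cascade over an input word, returning the joint final state."""
    for x in inputs:
        y = wire(front, x)                  # depends on front state AND input
        front, back = step_front(front, x), step_back(back, y)
    return front, back

print(cascade_run([1, 0, 1, 1]))
```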
Article
Phenotypic plasticity is the ability of a single genotype to produce more than one alternative form of morphology, physiological state and/or behaviour in response to environmental conditions. The scope of plasticity is described and its relations to natural selection and to initiation and amplification of change are noted. Plasticity is also considered in relation to speciation and macro-evolution; phenotypic plasticity may influence rate and direction of evolution. -S.J.Yates
Article
An abstract is not available.
Article
A number of factors have been proposed that may affect the capacity for an evolutionary system to generate adaptation. One that has received little recent attention among biologists is linkage patterns, or the ordering of genes on chromosomes. In this study, a simple model of genetic interactions, implemented in an evolutionary simulation, demonstrates that clustering of epistatically interacting genes increases the rate of adaptation. Moreover, long-term evolution with inversion can reorganize linkage patterns from random gene ordering into this more modular organization, thereby facilitating adaptation. These results are consistent with a large body of biological observations and some mathematical theory. Although linkage patterns are neutral with respect to individual fitness in this model, they are subject to lineage level selection for evolvability. At least two candidate mechanisms may contribute to improved evolvability under epistatic clustering: clustering may reduce interference between selection on different traits, and it may allow the simultaneous optimization of different recombination rates for gene pairs with additive and epistatic fitness effects.
Article
Some mathematical and natural objects (a random sequence, a sequence of zeros, a perfect crystal, a gas) are intuitively trivial, while others (e.g. the human body, the digits of π) contain internal evidence of a nontrivial causal history.
Article
Putnam has argued that computational functionalism cannot serve as a foundation for the study of the mind, as every ordinary open physical system implements every finite-state automaton. I argue that Putnam's argument fails, but that it points out the need for a better understanding of the bridge between the theory of computation and the theory of physical systems: the relation of implementation. It also raises questions about the classes of automata that can serve as a basis for understanding the mind. I develop an account of implementation, linked to an appropriate class of automata, such that the requirement that a system implement a given automaton places a very strong constraint on the system. This clears the way for computation to play a central role in the analysis of mind.
Egri-Nagy, A., C. L. Nehaniv, and J. D. Mitchell (2014). SgpDec: Hierarchical Decompositions and Coordinate Systems, Version 0.7.29. url: sgpdec.sf.net.
Horsman, Dominic, Susan Stepney, Viv Kendon, and J. P. W. Young (2017b). "Abstraction and representation in living organisms: when does a biological system compute?" In: Representation and Reality in Humans, Other Living Organisms and Intelligent Machines. Ed. by Gordana Dodig-Crnkovic and Raffaela Giovagnoli. Springer, pp. 91–116.
Kirby, K. (2009). "Putnamizing the Liquid State (extended abstract)". In: NACAP 2009.
Maler, O. (2010). "On the Krohn-Rhodes Cascaded Decomposition Theorem". In: Time for Verification: Essays in Memory of Amir Pnueli. Ed. by Z. Manna and D. Peled. Lecture Notes in Computer Science 6200. Berlin: Springer.
Nehaniv, Chrystopher L. (2005). "Self-replication, Evolvability and Asynchronicity in Stochastic Worlds". In: Stochastic Algorithms: Foundations and Applications. LNCS 3777, pp. 126–169.
Turing, A. (1936). "On Computable Numbers, with an Application to the Entscheidungsproblem". In: Proceedings of the London Mathematical Society (2) 42, pp. 230–265. A correction, ibid., 43 (1937), pp. 544–546.