Article · PDF available

Computationalism, The Church–Turing Thesis, and the Church–Turing Fallacy

Abstract

The Church–Turing Thesis (CTT) is often employed in arguments for computationalism. I scrutinize the most prominent such arguments in light of recent work on CTT and argue that they are unsound. Although CTT does nothing to support computationalism, it is not irrelevant to it. By eliminating misunderstandings about the relationship between CTT and computationalism, we deepen our appreciation of computationalism as an empirical hypothesis.
... For an overview and discussion of different versions of the Church-Turing thesis, see e.g. [40,41]. ...
... According to Lemma A.8, this inequality must then also be true for S, i.e. P(S; A) ≥ 2^{−K(µ;A)} µ(S; A), lower-bounding the probability of the stated event as claimed. If the event of Theorem 8.8 happens (for some simple measure µ), the situation will look to the observer as follows: ...
... We hope that the insights we gain by analyzing this strong convergence situation will also remain valid in the more general case. It is tempting to conjecture an alternative proof of Theorem 8.8 in the following way. Let p ∈ {0,1}* be a minimal program for µ in the sense of Theorem 5.12; in particular, ℓ(p) = K(µ; A). ...
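Read charitably, the excerpt is invoking the standard dominance property of algorithmic probability: the universal mixture sums 2^{−ℓ(p)} over all programs, so the minimal program for µ alone already accounts for a 2^{−K(µ;A)} share of the weight. The following is a reconstruction from the excerpt's notation, not a claim about the paper's actual proof:

```latex
% Dominance of algorithmic probability (reconstructed; the oracle A
% is carried along in every term). P is the universal mixture, mu a
% simple (computable) measure, p* a minimal program for mu, so that
% \ell(p^*) = K(\mu; A). Dropping all terms of the sum except p*:
P(S; A) \;=\; \sum_{p \,:\, U^{A}(p) \text{ computes } \nu_p} 2^{-\ell(p)}\, \nu_p(S; A)
        \;\ge\; 2^{-\ell(p^*)}\, \mu(S; A)
        \;=\; 2^{-K(\mu; A)}\, \mu(S; A)
```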
Article
According to the received conception of physics, a valid physical theory is presumed to describe the objective evolution of a unique external world. However, this assumption is challenged by quantum theory, which indicates that physical systems do not always have objective properties which are simply revealed by measurement. Furthermore, several other conceptual puzzles in the foundations of physics and related fields point to possible limitations of the received perspective and motivate the exploration of alternatives. Thus, here I propose an alternative approach which starts with the concept of "observation" as its primary notion, and does not from the outset assume the existence of a "world" or physical laws. It can be subsumed under a single postulate: Solomonoff induction correctly predicts future observations. I show that the resulting theory suggests a possible explanation for why there are simple computable probabilistic laws in the first place. It predicts the emergence of the notion of an objective external world that has begun in a state of low entropy. It also predicts that observers will typically see the violation of Bell inequalities despite the validity of the no-signalling principle. Moreover, it resolves cosmology's Boltzmann brain problem via a "principle of persistent regularities", and it makes the unusual prediction that the emergent notion of objective external world breaks down in certain extreme situations, yielding phenomena such as "probabilistic zombies". Additionally, it makes in principle concrete predictions for some fundamental conceptual problems relating to the computer simulation of observers. This paper does not claim to exactly describe "how the world works", but it dares to raise the question of whether the first-person perspective may be a more fruitful starting point from which to address certain longstanding fundamental issues.
... Theorem 2 gives us a method of relativizing the physical Church-Turing thesis [19]. Consider the scenario where our universe is constructed as the local machine M in a relative model. ...
Preprint
Beginning with Turing's seminal work in 1950, artificial intelligence has proposed that consciousness can be simulated by a Turing machine. This implies a potential theory of everything in which the universe is a simulation on a computer, which raises the question of whether we can prove we exist in a simulation. In this work, we construct a relative model of computation where a computable "local" machine is simulated by a "global", classical Turing machine. We show that the problem of the local machine computing simulation properties of its global simulator is undecidable in the same sense as the halting problem. Then, we show that computing the time, space, or error accumulated by the global simulator are simulation properties and therefore undecidable. These simulation properties give rise to special-relativistic effects in the relative model, which we use to construct a relative Church–Turing–Deutsch thesis where a global, classical Turing machine computes quantum mechanics for a local machine with the same constant-time local computational complexity as experienced in our universe.
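The abstract's undecidability claim is of the same kind as the halting problem: no algorithm decides it in general, yet every *step-bounded* version is trivially decidable. A minimal, hypothetical Python sketch (not from the preprint; the machine names and generator encoding are illustrative assumptions) of that decidable relative:

```python
# Step-bounded halting is decidable; unbounded halting is not.
# "Machines" are modeled as Python generator functions: each call
# to next() is one computation step, StopIteration means "halted".

def halts_within(machine, n):
    """Run `machine` for at most n steps.
    Returns True iff it halts within n steps."""
    it = machine()
    for _ in range(n):
        try:
            next(it)
        except StopIteration:
            return True  # halted inside the step budget
    return False  # inconclusive: it may halt later, or never

def halting_machine():
    yield from range(3)  # halts after 3 steps

def looping_machine():
    while True:
        yield  # never halts

# Any fixed bound can succeed on particular machines, but no single
# bound works for all machines -- which is why the unbounded problem
# is undecidable.
print(halts_within(halting_machine, 10))  # True
print(halts_within(looping_machine, 10))  # False
```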
... This is an ontological claim, not just a metaphor: the mind actually is a computer, where 'computer' is understood as a Turing Machine, a mechanism for algorithmic string transformation. The essence of this view of mental architecture, known as the Computational Theory of Mind (CTM) (Boone & Piccinini 2016; Piccinini 2007, 2016; Rescorla 2020), is this: the brain implements a Turing computational architecture. ...
Chapter
Full-text available
A long-standing problem in linguistics and cognitive science more generally is how natural language expressions come to possess, and how artificially intelligent systems can be endowed with, intrinsic meaning. There is a long tradition in Western thought whereby the meanings of linguistic expressions are their significations of mental concepts, and concepts are representations of the mind-external environment causally generated by the cognitive agent's interaction with that environment. This paper outlines this tradition with the aim of providing the intellectual context in which cognitive models with intrinsic meaning can be constructed.
Book
A long-standing problem in linguistics and cognitive science more generally is how natural language expressions come to possess, and how artificially intelligent systems can be endowed with, intrinsic meaning. There is a long tradition in Western thought whereby the meanings of linguistic expressions are their significations of mental concepts, and concepts are representations of the mind-external environment causally generated by the cognitive agent's interaction with that environment. This paper outlines this tradition with the aim of providing the intellectual context in which cognitive models with intrinsic meaning can be constructed.
... It has been analyzed in more detail by Gandy [37], who calls (something very similar to) it "Thesis M", and in the quantum context by Arrighi and Dowek [38]. For an overview and discussion of different versions of the Church-Turing thesis, see e.g. [39,40]. ...
Article
Full-text available
According to our current conception of physics, any valid physical theory is supposed to describe the objective evolution of a unique external world. However, this condition is challenged by quantum theory, which suggests that physical systems should not always be understood as having objective properties which are simply revealed by measurement. Furthermore, as argued below, several other conceptual puzzles in the foundations of physics and related fields point to limitations of our current perspective and motivate the exploration of an alternative: to start with the first-person (the observer) rather than the third-person perspective (the world). In this work, I propose a rigorous approach of this kind on the basis of algorithmic information theory. It is based on a single postulate: that universal induction determines the chances of what any observer sees next. That is, instead of a world or physical laws, it is the local state of the observer alone that determines those probabilities. Surprisingly, despite its solipsistic foundation, I show that the resulting theory recovers many features of our established physical worldview: it predicts that it appears to observers as if there was an external world that evolves according to simple, computable, probabilistic laws. In contrast to the standard view, objective reality is not assumed on this approach but rather provably emerges as an asymptotic statistical phenomenon. The resulting theory dissolves puzzles like cosmology's Boltzmann brain problem, makes concrete predictions for thought experiments like the computer simulation of agents, and suggests novel phenomena such as "probabilistic zombies" governed by observer-dependent probabilistic chances. It also suggests that some basic phenomena of quantum theory (Bell inequality violation and no-signalling) might be understood as consequences of this framework.
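The postulate that universal induction fixes an observer's predictive chances can be illustrated with a toy mixture. This is a hypothetical sketch, not the paper's construction: genuine Solomonoff induction sums over all programs of a universal machine and is incomputable; here "programs" are just repeating bit patterns, weighted by a 2^(-length) prior as a crude stand-in for the universal prior.

```python
from itertools import product

def hypotheses(max_len):
    """All repeating-pattern 'programs' up to max_len bits,
    each paired with its 2^-length prior weight."""
    for length in range(1, max_len + 1):
        for bits in product("01", repeat=length):
            yield "".join(bits), 2.0 ** (-length)

def predict_next(observed, max_len=6):
    """Mixture prediction: keep the patterns consistent with the
    observed prefix, weight them by their priors, and return the
    posterior probability that the next bit is '1'."""
    num = den = 0.0
    for pat, prior in hypotheses(max_len):
        # unroll the pattern far enough to cover observed + 1 bit
        stream = pat * (len(observed) // len(pat) + 2)
        if stream.startswith(observed):  # pattern explains the data
            den += prior
            if stream[len(observed)] == "1":
                num += prior
    return num / den if den else 0.5  # 0.5 if nothing fits

# A short, regular history is dominated by the simplest consistent
# pattern ("01"), which predicts the alternation continues:
print(predict_next("01010101"))  # -> 0.0 (next bit '1' ruled out)
print(predict_next("0101010"))   # -> 1.0 (next bit must be '1')
```

The 2^(-length) weighting is what makes "simple, computable, probabilistic laws" come out on top: every consistent hypothesis gets a vote, but shorter descriptions get exponentially more of it.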
... This view is consistent with the philosophical conclusions drawn by many analytical philosophers, including the Churchlands, Dennett, and Fodor. Both forms of broad physical interpretation of the Church-Turing Thesis have lately come under scrutiny, primarily by Piccinini (2007) and Copeland (2015). The main points of contention seem to pertain to the claims that (1) it is not true that all functions are Turing-computable (e.g., the halting function is not); and (2) neither Turing nor Church assumed such broad physical or philosophical implications of their claim. ...
Chapter
Full-text available
One day, we may build a perfect robotic lover. It would be completely indistinguishable from a human lover in all things relevant to its love-making ability. In such a case, should one care whether one's lover is human or robotic? This boils down to whether one cares about one's lover only insofar as his or her functionalities are involved, or whether one also cares how the lover feels. In the latter instance, we would need to also care whether a lover has any first-person perceptions or feelings at all. Many philosophers think that this is an intractable question. We are unable to communicate about our first-person feelings that are not fully expressed in our language: "Whereof one cannot speak, thereof one must be silent" (Wittgenstein 1922). It is even more hopeless to try to communicate those feelings that do not influence behavior in a unique way. Yet, for many people, it makes a major difference whether one's significant other can feel one's love, in all its emotional and perceptual specificities, or whether she or he is just faking it. This shows that one may have reasons to care about another person's first-person experience even if such experience was purely epiphenomenal.
Chapter
Full-text available
Church-Turing Lovers are sex robots that attain every functionality of a human lover, at the desired level of granularity. Yet they have no first-person consciousness-there is “nobody home.” When such a lover says, “I love you,” there are all the intentions to please you, even computer emotions. Would you care whether your significant other is a Church-Turing Lover? Does one care about one’s lover only insofar as his/her functionalities are involved, or does one care how the lover feels. Church-Turing Lovers demonstrate how even epiphenomenal experience provides reasons to care about other people’s first-person consciousness. In a related argument, I propose the notion of the Uncanny Valley of Perfection. I systematize the standards for humanoid robots as follows: minimally humanoid (teddy bears); bottom of the Uncanny Valley (repulsive sex dolls); Silver Standard (almost human-looking), Gold Standard (hard to distinguish from humans at the right level of granularity); Platinum Standard (slightly improved on humans); the Uncanny Valley of Perfection (too much better than humans); the Slope of the Angels (no longer humanoid, viewed with awe).
... See [1, Section 2] for more details and references, and [13] for a definition of the physical Church-Turing thesis. In a nutshell, the version that I am using here claims that there is an algorithm that yields a description of the probabilities of outcomes, given the description of any experiment. ...
Article
In physics, there is the prevailing intuition that we are part of a unique external world, and that the goal of physics is to understand and describe this world. This assumption of the fundamentality of objective reality is often seen as a major prerequisite of any kind of scientific reasoning. However, here I argue that we should consider relaxing this assumption in a specific way in some contexts. Namely, there is a collection of open questions in and around physics that can arguably be addressed in a substantially more consistent and rigorous way if we consider the possibility that the first-person perspective is ultimately more fundamental than our usual notion of external world. These are questions like: which probabilities should an observer assign to future experiences if she is told that she will be simulated on a computer? How should we think of cosmology's Boltzmann brain problem, and what can we learn from the fact that measurements in quantum theory seem to do more than just reveal preexisting properties? Why are there simple computable laws of physics in the first place? This note summarizes a longer companion paper which constructs a mathematically rigorous theory along those lines, suggesting a simple and unified framework (rooted in algorithmic information theory) to address questions like those above. It is not meant as a "theory of everything" (in fact, it predicts its own limitations), but it shows how a notion of objective external world, looking very much like our own, can provably emerge from a starting point in which the first-person perspective is primary, without apriori assumptions on the existence of "laws" or a "physical world". While the ideas here are perfectly compatible with physics as we know it, they imply some quite surprising predictions and suggest that we may want to substantially revise the way we think about some foundational questions.
Preprint
Full-text available
With Moore's law coming to a close, it is useful to look at other forms of computer hardware. In this paper we survey what is known about several modes of computation: Neuromorphic, Custom Logic, Quantum, Optical, Spintronics, Reversible, Many-Valued Logic, Chemical, DNA, Neurological, Fluidic, Amorphous, Thermodynamic, Peptide, and Membrane. For each of these modes of computing we discuss pros, cons, current work, and metrics. After surveying these alternative modes of computation, we discuss two areas where they may be useful: data analytics and graph processing.
Book
Full-text available
Computationalism says that brains are computing mechanisms, that is, mechanisms that perform computations. At present, there is no consensus on how to formulate computationalism precisely or adjudicate the dispute between computationalism and its foes, or between different versions of computationalism. An important reason for the current impasse is the lack of a satisfactory philosophical account of computing mechanisms. The main goal of this dissertation is to offer such an account. I also believe that the history of computationalism sheds light on the current debate. By tracing different versions of computationalism to their common historical origin, we can see how the current divisions originated and understand their motivation. Reconstructing debates over computationalism in the context of their own intellectual history can contribute to philosophical progress on the relation between brains and computing mechanisms and help determine how brains and computing mechanisms are alike, and how they differ. Accordingly, my dissertation is divided into a historical part, which traces the early history of computationalism up to 1946, and a philosophical part, which offers an account of computing mechanisms. The two main ideas developed in this dissertation are that (1) computational states are to be identified functionally not semantically, and (2) computing mechanisms are to be studied by functional analysis. The resulting account of computing mechanism, which I call the functional account of computing mechanisms, can be used to identify computing mechanisms and the functions they compute. I use the functional account of computing mechanisms to taxonomize computing mechanisms based on their different computing power, and I use this taxonomy of computing mechanisms to taxonomize different versions of computationalism based on the functional properties that they ascribe to brains. 
By doing so, I begin to tease out empirically testable statements about the functional organization of the brain that different versions of computationalism are committed to. I submit that when computationalism is reformulated in the more explicit and precise way I propose, the disputes about computationalism can be adjudicated on the grounds of empirical evidence from neuroscience.
Chapter
Models as Mediators discusses the ways in which models function in modern science, particularly in the fields of physics and economics. Models play a variety of roles in the sciences: they are used in the development, exploration and application of theories and in measurement methods. They also provide instruments for using scientific concepts and principles to intervene in the world. The editors provide a framework which covers the construction and function of scientific models, and explore the ways in which they enable us to learn about both theories and the world. The contributors to the volume offer their own individual theoretical perspectives to cover a wide range of examples of modelling, from physics, economics and chemistry. These papers provide ideal case-study material for understanding both the concepts and typical elements of modelling, using analytical approaches from the philosophy and history of science.
Article
According to some philosophers, there is no fact of the matter whether something is either a computer, or some other computing mechanism, or something that performs no computations at all. On the contrary, I argue that there is a fact of the matter whether something is a calculator or a computer: a computer is a calculator of large capacity, and a calculator is a mechanism whose function is to perform one out of several possible computations on inputs of nontrivial size at once. This paper is devoted to a detailed defense of these theses, including a specification of the relevant notion of “large capacity” and an explication of the notion of computer.