Article

Objective Computation Versus Subjective Computation


Abstract

The question ‘What is computation?’ might seem a trivial one to many, but there is no consensus on the answer in philosophy of mind, cognitive science or even physics. The lack of consensus leads to some interesting, yet contentious, claims, such as that cognition, or even the universe, is computational. Some have argued, though, that computation is a subjective phenomenon: whether or not a physical system is computational, and if so, which computation it performs, is entirely a matter of an observer choosing to view it as such. According to one view, which we dub bold anti-realist pancomputationalism, every physical object (can be said to) computes every computer program. According to another, more modest view, some computational systems can be ascribed multiple computational descriptions. We argue that the first view is misguided, and that the second view need not entail observer-relativity of computation. At least to a large extent, computation is an objective phenomenon. Construing computation as a form of information processing, we argue that information-processing considerations determine what type of computation takes place in physical systems.


... In the case of computation the function being performed is characterised by Piccinini as the systematic transformation of medium-independent vehicles (2015: 121), and the phenomenon of interest is just whatever application the system is being used for. Milkowski (2013) and Fresco (2014) also give mechanistic accounts of computation, but I focus here on Piccinini's account as it is probably the most popular and well-developed. Many of the points made here would apply equally well to Milkowski's and Fresco's accounts, insofar as any mechanistic account of computation must say something about how we determine the function of a mechanism. ...
... A full response would not be appropriate here, but in brief I think that it will be possible to fix a level of physical description where, at least relative to explanatory interests, we are able to identify the required computational equivalences. Fresco (2015) defends a similar (although less permissive) position, according to which multiple semantic interpretations can be made of a single computational system, while nonetheless remaining constrained by the physical structure of that system. ...
... An early word processing program. Fresco (2015: 1037) makes a similar suggestion, and Ladyman and Ross (2007) originally made the connection between Dennettian real patterns and informational complexity. ...
Article
Full-text available
The aim of this paper is to begin developing a version of Gualtiero Piccinini’s mechanistic account of computation that does not need to appeal to any notion of proper (or teleological) functions. The motivation for doing so is a general concern about the role played by proper functions in Piccinini’s account, which will be evaluated in the first part of the paper. I will then propose a potential alternative approach, where computing mechanisms are understood in terms of Carl Craver’s perspectival account of mechanistic functions. According to this approach, the mechanistic function of ‘performing a computation’ can only be attributed relative to an explanatory perspective, but such attributions are nonetheless constrained by the underlying physical structure of the system in question, thus avoiding unlimited pancomputationalism. If successful, this approach would carry with it fewer controversial assumptions than Piccinini’s original account, which requires a robust understanding of proper functions. Insofar as there are outstanding concerns about the status of proper functions, this approach would therefore be more generally acceptable.
... Various forms and flavours of the phenomenon that we call the indeterminacy of computation (Fresco, Wolf & Copeland (2016); the term is Copeland's) have been discussed by diverse authors. Pre-eminent among them is Shagrir (2001, 2020); also (in chronological order) Dennett (1978, 2013), Block (1990), Sorensen (1999), Bishop (2009), Sprevak (2010), Fresco (2015), Piccinini (2015), Coelho Mollo (2017), and Dewhurst (2018). Some, for example Shagrir (2020), Bishop (2009) and Sprevak (2010), appeal to semantic features of the system to render it determinate what computation is being performed. ...
... Some would say that Koch is overstating matters in his claim that this view is widely accepted in the neurosciences. There is, in fact, controversy about whether computational neuroscience is committed to the literalist view that the brain is an implemented computational system (Fresco, 2014). There is also considerable controversy about whether neuronal systems compute at all in any robust sense, and, indeed, about what it even means to say that a neuronal system computes. ...
Article
Full-text available
Do the dynamics of a physical system determine what function the system computes? Except in special cases, the answer is no: it is often indeterminate what function a given physical system computes. Accordingly, care should be taken when the question ‘What does a particular neural system do?’ is answered by hypothesising that the system computes a particular function. The phenomenon of the indeterminacy of computation has important implications for the development of computational explanations of biological systems. Additionally, the phenomenon lends some support to the idea that a single neural structure may perform multiple cognitive functions, each subserved by a different computation. We provide an overarching conceptual framework in order to further the philosophical debate on the nature of computational indeterminacy and computational explanation.
... We will also examine how our proposed accounts of the two kinds of indeterminacy relate to existing views on 'implementation' and 'computation', but take no specific stand on which view is preferable. Section 2 discusses the two kinds of indeterminacy in detail and examines aspects of the relations between them and computational implementation. Section 3 explores the interrelationships between the two indeterminacies, computational individuation and levels of organization. [Footnote 1: Some examples of such names include 'simultaneous implementation' (Shagrir 2001, 2020; Fresco 2015; Dewhurst 2018), 'the ambiguity of representation' (Maroney and Timpson, 2018), 'indeterminacy of computation' (Fresco et al., 2021), 'underdetermination of computation' (Duwell, 2018), 'multiple-computations theorem' (Hemmo and Shenker, 2019), 'multiplicity of computations', and various others. Footnote 2: In doing so, we may be seen as adopting the point of view of a scientist who employs computational characterizations for her studied systems and who may or may not have a particular theory of implementation in mind.] ...
Article
It is often indeterminate what function a given computational system computes. This phenomenon has been referred to as “computational indeterminacy” or “multiplicity of computations”. In this paper, we argue that what has typically been considered and referred to as the (unique) challenge of computational indeterminacy in fact subsumes two distinct phenomena, which are typically bundled together and should be teased apart. One kind of indeterminacy concerns a functional (or formal) characterization of the system’s relevant behavior (briefly: how its physical states are grouped together and corresponded to abstract states). Another kind concerns the manner in which the abstract (or computational) states are interpreted (briefly: what function the system computes). We discuss the similarities and differences between the two kinds of computational indeterminacy, their implications for certain accounts of “computational individuation” in the literature, and their relevance to different levels of description within the computational system. We also examine the interrelationships between our proposed accounts of the two kinds of indeterminacy and the main accounts of “computational implementation”.
... Here, we shall presuppose the correctness of mechanistic (non-semantic) accounts of computation [76][77][78][79]. Needless to say, there are other accounts of computation (for an overview, see [95]); furthermore, the very idea that computational properties are intrinsic properties of some physical systems is contested [96,97]. Our goal is not to defend mechanistic accounts of computation, but to show that one can coherently maintain a strong (conceptual) life-mind continuity thesis and a realist account of representation, by assuming the mechanistic account. ...
Article
Full-text available
A weak version of life-mind continuity thesis entails that every living system also has a basic mind (with a non-representational form of intentionality). The strong version entails that the same concepts that are sufficient to explain basic minds (with non-representational states) are also central to understanding non-basic minds (with representational states). We argue that recent work on the free energy principle supports the following claims with respect to the life-mind continuity thesis: (i) there is a strong continuity between life and mind; (ii) all living systems can be described as if they had representational states; (iii) the 'as-if representationality' entailed by the free energy principle is central to understanding both basic forms of intentionality and intentionality in non-basic minds. In addition to this, we argue that the free energy principle also renders realism about computation and representation compatible with a strong life-mind continuity thesis (although the free energy principle does not entail computational and representational realism). In particular, we show how representationality proper can be grounded in 'as-if representationality'.
Article
Is the mathematical function being computed by a given physical system determined by the system’s dynamics? This question is at the heart of the indeterminacy of computation phenomenon (Fresco et al. [unpublished]). A paradigmatic example is a conventional electrical AND-gate that is often said to compute conjunction, but it can just as well be used to compute disjunction. Despite the pervasiveness of this phenomenon in physical computational systems, it has been discussed in the philosophical literature only indirectly, mostly with reference to the debate over realism about physical computation and computationalism. A welcome exception is Dewhurst’s ([2018]) recent analysis of computational individuation under the mechanistic framework. He rejects the idea of appealing to semantic properties for determining the computational identity of a physical system. But Dewhurst seems to be too quick to pay the price of giving up the notion of computational equivalence. We aim to show that the mechanist need not pay this price. The mechanistic framework can, in principle, preserve the idea of computational equivalence even between two different enough kinds of physical systems, say, electrical and hydraulic ones.
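The AND-gate/OR-gate duality described in this abstract can be made concrete in a few lines. The following sketch (my illustration, not the authors' code) shows a single physical truth table over voltage levels that counts as computing conjunction under one labelling of the voltages and disjunction under the dual labelling:

```python
HIGH, LOW = 5.0, 0.0  # the gate's two voltage levels

def gate(a, b):
    """Physical behaviour only: output is HIGH exactly when both inputs are HIGH."""
    return HIGH if (a == HIGH and b == HIGH) else LOW

# Reading 1: HIGH means True, LOW means False  ->  the gate computes AND.
to_bool_1 = {HIGH: True, LOW: False}
# Reading 2: HIGH means False, LOW means True  ->  the same gate computes OR.
to_bool_2 = {HIGH: False, LOW: True}

for a in (HIGH, LOW):
    for b in (HIGH, LOW):
        out = gate(a, b)
        assert to_bool_1[out] == (to_bool_1[a] and to_bool_1[b])  # conjunction
        assert to_bool_2[out] == (to_bool_2[a] or to_bool_2[b])   # disjunction
```

Nothing about the gate's physics privileges one labelling over the other; that is exactly the indeterminacy at issue, and any account of computational identity must locate the determining factor elsewhere.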
Article
What is nontrivial digital computation? It is the processing of discrete data through discrete state transitions in accordance with finite instructional information. The motivation for our account is that many previous attempts to answer this question are inadequate, and also that this account accords with the common intuition that digital computation is a type of information processing. We use the notion of reachability in a graph to defend this characterization in memory-based systems and underscore the importance of instructional information for digital computation. We argue that our account evaluates positively against adequacy criteria for accounts of computation.
Article
Full-text available
I defend Piccinini's mechanistic account of computation against three related criticisms adapted from Sprevak's critique of non-representational computation. I then argue that this defence highlights a major problem with what Sprevak calls the received view; namely, that representation introduces observer-relativity into our account of computation. I conclude that if we want to retain an objective account of computation, we should reject the received view.
Article
Full-text available
How are we able to understand and anticipate each other in everyday life, in our daily interactions? Through the use of such "folk" concepts as belief, desire, intention, and expectation, asserts Daniel Dennett in this first full-scale presentation of a theory of intentionality that he has been developing for almost twenty years. We adopt a stance, he argues, a predictive strategy of interpretation that presupposes the rationality of the people - or other entities - we are hoping to understand and predict. These principles of radical interpretation have far-reaching implications for the metaphysical and scientific status of the processes referred to by the everyday terms of folk psychology and their corresponding terms in cognitive science. While Dennett's philosophical stance has been steadfast over the years, his views have undergone successive enrichments, refinements, and extensions. "The Intentional Stance" brings together both previously published and original material: four of the book's ten chapters - its first and the final three - appear here for the first time and push the theory into surprising new territory. The remaining six were published earlier in the 1980s but were not easily accessible; each is followed by a reflection - an essay reconsidering and extending the claims of the earlier work. These reflections and the new chapters represent the vanguard of Dennett's thought. They reveal fresh lines of inquiry into fundamental issues in psychology, artificial intelligence, and evolutionary theory as well as traditional issues in the philosophy of mind. Daniel C. Dennett is Distinguished Arts and Sciences Professor at Tufts University and the author of "Brainstorms" and "Elbow Room." "The Intentional Stance," along with these works, is a Bradford Book.
Article
Full-text available
This paper offers an account of what it is for a physical system to be a computing mechanism—a system that performs computations. A computing mechanism is a mech-anism whose function is to generate output strings from input strings and (possibly) internal states, in accordance with a general rule that applies to all relevant strings and depends on the input strings and (possibly) internal states for its application. This account is motivated by reasons endogenous to the philosophy of computing, namely, doing justice to the practices of computer scientists and computability theorists. It is also an application of recent literature on mechanisms, because it assimilates com-putational explanation to mechanistic explanation. The account can be used to indi-viduate computing mechanisms and the functions they compute and to taxonomize computing mechanisms based on their computing power.
Article
Full-text available
Some have suggested that there is no fact of the matter as to whether or not a particular physical system realizes a particular computational description. This suggestion has been taken to imply that computational states are not real, and cannot, for example, provide a foundation for the cognitive sciences. In particular, Putnam has argued that every ordinary open physical system realizes every abstract finite automaton, implying that the fact that a particular computational characterization applies to a physical system does not tell one anything about the nature of that system. Putnam's argument is scrutinized, and found inadequate because, among other things, it employs a notion of causation that is too weak. I argue that if one's view of computation involves embeddedness (inputs and outputs) and full causality, one can avoid the universal realizability results. Therefore, the fact that a particular system realizes a particular automaton is not a vacuous one, and is often explanatory. Furthermore, I claim that computation would not necessarily be an explanatorily vacuous notion even if it were universally realizable.
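Putnam-style universal realizability trades on mappings stipulated after the fact. A toy sketch (my illustration, assuming only that the physical system passes through distinguishable states; all names are hypothetical) shows how cheap such a mapping is when no causal or counterfactual constraints are imposed:

```python
# ANY physical system that passes through distinct states s_0, ..., s_n can be
# made to "realize" an arbitrary run q_0, ..., q_n of an automaton, simply by
# stipulating the realization map f(s_i) = q_i.
physical_trace = [f"rock-state-{i}" for i in range(6)]  # distinct rock states
automaton_run  = ["q0", "q1", "q0", "q2", "q1", "q2"]   # any run we please

f = dict(zip(physical_trace, automaton_run))  # the post hoc stipulated map

# Under f, the rock's actual state-to-state evolution mirrors the chosen run:
assert [f[s] for s in physical_trace] == automaton_run
```

The map f succeeds trivially because it only has to match one actual state sequence; it supports no counterfactuals and respects no input/output structure, which is precisely the weakness this abstract's appeal to embeddedness and full causality is meant to expose.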
Article
Full-text available
There is a prevalent notion among cognitive scientists and philosophers of mind that computers are merely formal symbol manipulators, performing the actions they do solely on the basis of the syntactic properties of the symbols they manipulate. This view of computers has allowed some philosophers to divorce semantics from computational explanations. Semantic content, then, becomes something one adds to computational explanations to get psychological explanations. Other philosophers, such as Stephen Stich, have taken a stronger view, advocating doing away with semantics entirely. This paper argues that a correct account of computation requires us to attribute content to computational processes in order to explain which functions are being computed. This entails that computational psychology must countenance mental representations. Since anti-semantic positions are incompatible with computational psychology thus construed, they ought to be rejected. Lastly, I argue that in an important sense, computers are not formal symbol manipulators.
Article
Full-text available
The journal of Cognitive Computation is defined in part by the notion that biologically inspired computational accounts are at the heart of cognitive processes in both natural and artificial systems. Many studies of various important aspects of cognition (memory, observational learning, decision making, reward prediction learning, attention control, etc.) have been made by modelling the various experimental results using ever-more sophisticated computer programs. In this manner progressive inroads have been made into gaining a better understanding of the many components of cognition. Concomitantly in both science and science fiction the hope is periodically re-ignited that a man-made system can be engineered to be fully cognitive and conscious purely in virtue of its execution of an appropriate computer program. However, whilst the usefulness of the computational metaphor in many areas of psychology and neuroscience is clear, it has not gone unchallenged and in this article I will review a group of philosophical arguments that suggest either such unequivocal optimism in computationalism is misplaced - computation is neither necessary nor sufficient for cognition - or panpsychism (the belief that the physical universe is fundamentally composed of elements each of which is conscious) is true. I conclude by highlighting an alternative metaphor for cognitive processes based on communication and interaction.
Article
Full-text available
Can quantum communication be more efficient than its classical counterpart? Holevo's theorem rules out the possibility of communicating more than n bits of classical information by the transmission of n quantum bits—unless the two parties are entangled, in which case twice as many classical bits can be communicated but no more. In apparent contradiction, there are distributed computational tasks for which quantum communication cannot be simulated efficiently by classical means. In some cases, the effect of transmitting quantum bits cannot be achieved classically short of transmitting an exponentially larger number of bits. In a similar vein, can entanglement be used to save on classical communication? It is well known that entanglement on its own is useless for the transmission of information. Yet, there are distributed tasks that cannot be accomplished at all in a classical world when communication is not allowed, but that become possible if the non-communicating parties share prior entanglement. This leads to the question of how expensive it is, in terms of classical communication, to provide an exact simulation of the spooky power of entanglement.
Article
Full-text available
Starting from first principles and general assumptions Newton's law of gravitation is shown to arise naturally and unavoidably in a theory in which space is emergent through a holographic scenario. Gravity is explained as an entropic force caused by changes in the information associated with the positions of material bodies. A relativistic generalization of the presented arguments directly leads to the Einstein equations. When space is emergent even Newton's law of inertia needs to be explained. The equivalence principle leads us to conclude that it is actually this law of inertia whose origin is entropic.
Chapter
Full-text available
The dialogue develops arguments for and against adopting a new world system, info-computationalist naturalism, that is poised to replace the traditional mechanistic world system. We try to figure out what the info-computational paradigm would mean, in particular its pancomputationalism. We make some steps towards developing the notion of computing that is necessary here, especially in relation to traditional notions. We investigate whether pancomputationalism can possibly provide the basic causal structure to the world, whether the overall research programme appears productive and whether it can reinvigorate computationalism in the philosophy of mind.
Article
Full-text available
Computational and information-theoretic research in philosophy has become increasingly fertile and pervasive, giving rise to a wealth of interesting results. Consequently, a new and vitally important field has emerged, the philosophy of information (PI). This paper introduces PI as the philosophical field concerned with (i) the critical investigation of the conceptual nature and basic principles of information, including its dynamics, utilisation and sciences, and with (ii) the elaboration and application of information-theoretic and computational methodologies to philosophical problems. It is argued that PI is a mature discipline for three reasons: it represents an autonomous field of research; it provides an innovative approach to both traditional and new philosophical topics; and it can stand beside other branches of philosophy, offering a systematic treatment of the conceptual foundations of the world of information and the information society.
Article
Full-text available
After briefly discussing the relevance of the notions "computation" and "implementation" for cognitive science, I summarize some of the problems that have been found in their most common interpretations. In particular, I argue that standard notions of computation together with a "state-to-state correspondence view of implementation" cannot overcome difficulties posed by Putnam's Realization Theorem and that, therefore, a different approach to implementation is required. The notion "realization of a function", developed out of physical theories, is then introduced as a replacement for the notional pair "computation-implementation". After gradual refinement, taking practical constraints into account, this notion gives rise to the notion "digital system", which singles out physical systems that could be actually used, and possibly even built.
Article
Full-text available
Two principles explain emergence. First, in the Receipt's reference frame, Deg(S) = 4/3 Deg(R), where Supply S is an isotropic radiative energy source, Receipt R receives S's energy, and Deg is a system's degrees of freedom based on its mean path length. S's 1/3 more degrees of freedom relative to R enables R's growth and increasing complexity. Second, rho(R) = Deg(R) times rho(r), where rho(R) represents the collective rate of R and rho(r) represents the rate of an individual in R: as Deg(R) increases due to the first principle, the multiplier effect of networking in R increases. A universe like ours with isotropic energy distribution, in which both principles are operative, is therefore predisposed to exhibit emergence, and, for reasons shown, a ubiquitous role for the natural logarithm.
Chapter
For at least half a century, it has been popular to compare brains and minds to computers and programs. Despite the continuing appeal of the computational model of the mind, however, it can be difficult to articulate precisely what the view commits one to. Indeed, critics such as John Searle and Hilary Putnam have argued that anything, even a rock, can be viewed as instantiating any computation we please, and this means that the claim that the mind is a computer is not merely false, but it is also deeply confused.
Article
It is argued that computing machines inevitably involve devices which perform logical functions that do not have a single-valued inverse. This logical irreversibility is associated with physical irreversibility and requires a minimal heat generation, per machine cycle, typically of the order of kT for each irreversible function. This dissipation serves the purpose of standardizing signals and making them independent of their exact logical history. Two simple, but representative, models of bistable devices are subjected to a more detailed analysis of switching kinetics to yield the relationship between speed and energy dissipation, and to estimate the effects of errors induced by thermal fluctuations.
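The "order of kT" figure in this abstract is Landauer's bound; for erasing one bit, the modern statement is E >= kT ln 2. A quick numerical check (my sketch, using the standard SI constant) at room temperature:

```python
import math

k_B = 1.380649e-23   # Boltzmann constant in J/K (exact SI 2019 value)
T = 300.0            # room temperature in kelvin

# Landauer bound: minimum heat dissipated when one bit of information is erased
E_min = k_B * T * math.log(2)

print(f"Minimum energy per erased bit at {T} K: {E_min:.3e} J")
# roughly 2.87e-21 J, i.e. about 0.018 eV
```

This is many orders of magnitude below the switching energies of practical devices, which is why the bound matters conceptually (for the thermodynamics of computation) rather than as an engineering constraint today.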
Article
To compute is to execute an algorithm. More precisely, to say that a device or organ computes is to say that there exists a modelling relationship of a certain kind between it and a formal specification of an algorithm and supporting architecture. The key issue is to delimit the phrase ‘of a certain kind’. I call this the problem of distinguishing between standard and nonstandard models of computation. The successful drawing of this distinction guards Turing's 1936 analysis of computation against a difficulty that has persistently been raised against it, and undercuts various objections that have been made to the computational theory of mind.
Article
This paper presents an extended argument for the claim that mental content impacts the computational individuation of a cognitive system (section 2). The argument starts with the observation that a cognitive system may simultaneously implement a variety of different syntactic structures, but that the computational identity of a cognitive system is given by only one of these implemented syntactic structures. It is then asked what are the features that determine which of the implemented syntactic structures is the computational structure of the system, and it is contended that these features are certain aspects of mental content. The argument helps (section 3) to reassess the thesis known as computational externalism, namely, the thesis that computational theories of cognition make essential reference to features in the individual's environment. It is suggested that the familiar arguments for computational externalism - which rest on thought experiments and on exegesis of Marr's theories of vision - are unconvincing, but that they can be improved. A reconstruction of the visex/audex thought experiment is offered in section 3.1. An outline of a novel interpretation of Marr's theories of vision is presented in section 3.2. The corrected arguments support the claim that computational theories of cognition are intentional. Computational externalism is still pending, however, upon the thesis that psychological content is extrinsic.
Article
In this chapter we investigate the relation between information and computation under time symmetry. We show that there is a class of nondeterministic automata, the quasi-reversible automata (QRTM) - that is, the class of classical deterministic Turing machines operating in negative time - that computes all the languages in NP. The class QRTM is isomorphic to the class of standard deterministic Turing machines TM, in the sense that for every M in TM there is an M^-1 in QRTM such that each computation on M is mirrored by a computation on M^-1 with the arrow of time reversed. This suggests that nondeterministic computing might be more aptly described as deterministic computing in negative time. If M_i is deterministic then M_i^-1 is nondeterministic. If M is information discarding then M^-1 "creates" information. The two fundamental complexities involved in a deterministic computation are program complexity and program counter complexity. Programs can be classified in terms of their "information signature", with pure counting programs and pure information discarding programs as two ends of the spectrum. The chapter provides a formal basis for a further analysis of such diverse domains as learning, creative processes, growth, and the study of the interaction between computational processes and thermodynamics.
Article
There has been an ongoing conflict regarding whether reality is fundamentally digital or analogue. Recently, Floridi has argued that this dichotomy is misapplied. For any attempt to analyse noumenal reality independently of any level of abstraction at which the analysis is conducted is mistaken. In the pars destruens of this paper, we argue that Floridi does not establish that it is only levels of abstraction that are analogue or digital, rather than noumenal reality. In the pars construens of this paper, we reject a classification of noumenal reality as a deterministic discrete computational system. We show, based on considerations from classical physics, why a deterministic computational view of the universe faces problems (e.g., a reversible computational universe cannot be strictly deterministic).
Article
The ‘received view’ about computation is that all computations must involve representational content. Egan and Piccinini argue against the received view. In this paper, I focus on Egan’s arguments, claiming that they fall short of establishing that computations do not involve representational content. I provide positive arguments explaining why computation has to involve representational content, and how that representational content may be of any type (distal, broad, etc.). I also argue (contra Egan and Fodor) that there is no need for computational psychology to be individualistic. Finally, I draw out a number of consequences for computational individuation, proposing necessary conditions on computational identity and necessary and sufficient conditions on computational I/O equivalence of physical systems.
Article
This book brings together the outcome of ten years of research. It is based on a simple project, which was begun towards the end of the 1990s: information is a crucial concept, which deserves a thorough philosophical investigation. So the book lays down the conceptual foundations of a new area of research: the philosophy of information. It does so systematically, by pursuing three goals. The first is metatheoretical. The book describes what the philosophy of information is, its problems, and its method of levels of abstraction. These are the topics of the first part, which comprises chapters one, two and three. The second goal is introductory. In chapters four and five, the book explores the complex and diverse nature of several informational concepts and phenomena. The third goal is constructive. In the remaining ten chapters, the book answers some classic philosophical questions in information-theoretical terms. As a result, the book provides the first, unified and coherent research programme for the philosophy of information, understood as a new, independent area of research, concerned with (1) the critical investigation of the conceptual nature and basic principles of information, including its dynamics, utilization, and sciences; and (2) the elaboration and application of information-theoretic and computational methodologies to philosophical problems.
Article
Uncertainty relations state that there exist certain incompatible measurements whose outcomes cannot be simultaneously predicted. While the exact incompatibility of quantum measurements dictated by such uncertainty relations can be inferred from the mathematical formalism of quantum theory, the question remains whether there is any more fundamental reason for the uncertainty relations to have this exact form. What, if any, would be the operational consequences if we were able to go beyond any of these uncertainty relations? Here we give a strong argument that justifies uncertainty relations in quantum theory by showing that violating them implies that it is also possible to violate the second law of thermodynamics. More precisely, we show that violating the uncertainty relations in quantum mechanics leads to a thermodynamic cycle with positive net work gain, which is very unlikely to exist in nature.
Article
Norbert Wiener’s cybernetic paradigm represents one of the seminal ideas of the twentieth century. Nevertheless, its full potential has yet to be realized. For instance, cybernetics is relatively little used as an analytical tool in the social sciences. One reason, it is argued here, is that Wiener’s framework lacks a crucial element – a functional definition of information. Although so-called Shannon information has made many valuable contributions and has many important uses, it is blind to the functional properties of information. Here a radically different approach to information theory is described. After briefly critiquing the literature in information theory, a new kind of cybernetic information will be proposed called “control information.” Control information is not a “thing” but an attribute of the relationships between things. It is defined as: the capacity (know-how) to control the acquisition, disposition and utilization of matter/energy in purposive (cybernetic) processes. This concept is briefly elucidated, and a formalization proposed in terms of a common unit of measurement, namely the quantity of “available energy” that can be controlled by a given unit of information in a given context. However, other metrics are also feasible, from money to allocations of human labor. Some illustrations are briefly provided and some of the implications are discussed.
Article
Although the structured programming movement has been with us for nearly fifteen years, it's not entirely clear where the term "structured" originated. If one had to choose a single source, the following landmark paper by Edsger Dijkstra would be it. The theme of the paper, which was presented at a 1969 conference sponsored, strangely enough, by the North Atlantic Treaty Organization Science Committee, is intriguing. As Dijkstra points out, exhaustive testing of a computer program is virtually impossible, and testing by "sampling" also is pointless. As he says, "Program testing can be used to show the presence of bugs, but never to show their absence!" And, while rigorous mathematical proofs of program correctness are possible, they are difficult to construct, often because the programs themselves are not well suited to such analysis. So, as Dijkstra explains, rather than first writing a program and then worrying about the difficult task of constructing a proof of program correctness, it makes far better sense to ask, "For what program structures can we give correctness proofs without undue labour..." and then, "How do we make, for a given task, such a well-structured program?" With this philosophy as a basis, Dijkstra concludes that program logic (or "sequencing," as he calls it) "should be controlled by alternative, conditional and repetitive clauses and procedure calls, rather than by statements transferring control to labelled points." Although he doesn't describe explicitly the IF-THEN-ELSE construct and the DO-WHILE construct, it is clear what he means. And, while he doesn't mention the implementation of these constructs in programming languages, one assumes that Dijkstra takes it for granted that any reasonable programming language would provide a direct implementation of the necessary control constructs. Perhaps the most interesting concept in this paper comes from Dijkstra's pearl imagery.
Dijkstra suggests that we visualize a program as a string of ordered pearls, in which a larger pearl describes the entire program in terms of concepts or capabilities implemented in lower-level pearls. It is clear that we are being shown, by means of a delightful, vivid analogy, the essential concepts of what is now called top-down design. The only thing that detracts from this paper is the repeated emphasis that it is based on experiments with small programs. In itself, this is not surprising. Throughout the literature in the computer field, we see that small programs and small projects are the only ones that can be conducted in a "laboratory" environment. However, the concluding sentence of Dijkstra's paper may have alienated the very people who most need to understand his ideas: " . . . I have given but little recognition to the requirements of program development such as is needed when one wishes to employ a large crowd; I have no experience with the Chinese Army approach, nor am I convinced of its virtues." This quote reminds me of an experience at a recent computer conference, at which one of the speakers described his own approach as a "bottom-up" strategy. What he meant, he said, was that we should begin by attacking very small problems, e.g., those requiring only twenty to thirty program statements, and learn how to solve them properly. Then we could build on that experience, and consider solving larger problems --- those requiring perhaps 100-200 statements; in a few years, with our accumulated wisdom, we might consider working on problems as large as 1,000-2,000 statements. He then gave a brilliant and eloquent presentation of the solution to a twenty-statement problem. When he finished, a member of the audience stood and announced in a loud voice that he had the misfortune of working on a project that probably would result in coding some three million program statements. What did the speaker suggest he do? "Punt!" was the reply. 
Do you still wonder why it took ten years for anyone to listen to the concepts of structured programming?
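The pearl imagery and the preferred control constructs can be made concrete with a small, hypothetical sketch (my example, not Dijkstra's): the top-level "pearl" states the whole program in terms of concepts implemented by lower-level pearls, and control flow uses only conditionals, loops, and procedure calls, never jumps to labelled points:

```python
def is_valid(record):
    # lower-level pearl: what counts as a well-formed record (assumed rule)
    return isinstance(record, int) and record >= 0

def transform(record):
    # lower-level pearl: the per-record work (doubling, chosen arbitrarily)
    return record * 2

def process(records):
    # top-level pearl: the whole program, readable at one level of abstraction
    results = []
    for record in records:        # repetitive clause (DO-WHILE spirit)
        if is_valid(record):      # conditional clause (IF-THEN-ELSE spirit)
            results.append(transform(record))
    return results

print(process([1, -2, 3]))  # [2, 6]
```

Each pearl can be understood, and argued correct, without opening the pearls below it, which is the point of top-down design.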
Article
When we act, cerebral events, movements of our body and events outside our body occur. Many philosophers of action identify our actions with some of these events. I shall argue that actions can be individuated in terms of subsets of these events but that they are not identical with the events in terms of which they can be individuated. The suggested principle of act-individuation is defended on its intrinsic merits and is shown to escape the counterintuitive implications of other principles. Furthermore, it solves some well-known problems encountered in philosophy of action.
Article
Charts the increasing 'levels' (embedding) of intentionality which may in principle underlie primate vocal behaviour, and suggests a simple method for picking out the real level; visits the very primatologists on whose data he theorized, and discusses the difficulties in executing this simple test in practice; considers intentional systems in cognitive ethology. (PsycINFO Database Record (c) 2012 APA, all rights reserved)
Article
I argue against a view of the individuation of actions endorsed most notably by Hornsby and Davidson. This is the view that in, for example, Anscombe’s case of the pumping man, we have a single action which can be described, variously, as a pumping, a poisoning and so on. I argue that, even in the area of the standard arguments against this view, such as that based on the logic of ‘by’ and the argument from temporal dimensions, the case against the Davidson-Hornsby view has not been made as strong as it ought to have been. I show how those standard arguments can be strengthened; and I argue that the principal considerations adduced in support of the view do not in fact lend it the support that they have been widely thought to lend it. I conclude that the view should be rejected.
Article
It is argued that computing machines inevitably involve devices which perform logical functions that do not have a single-valued inverse. This logical irreversibility is associated with physical irreversibility and requires a minimal heat generation, per machine cycle, typically of the order of kT for each irreversible function. This dissipation serves the purpose of standardizing signals and making them independent of their exact logical history. Two simple, but representative, models of bistable devices are subjected to a more detailed analysis of switching kinetics to yield the relationship between speed and energy dissipation, and to estimate the effects of errors induced by thermal fluctuations.
Article
This paper is about Cognitivism, and I had better say at the beginning what motivates it. If you read books about the brain (say Shepherd (1983) or Kuffler and Nicholls (1976)) you get a certain picture of what is going on in the brain. If you then turn to books about computation (say Boolos and Jeffrey, 1989) you get a picture of the logical structure of the theory of computation. If you then turn to books about cognitive science, (say Pylyshyn, 1985) they tell you that what the brain books describe is really the same as what the computability books were describing. Philosophically speaking, this does not smell right to me and I have learned, at least at the beginning of an investigation, to follow my sense of smell. II. The Primal Story I want to begin the discussion by trying to state as strongly as I can why cognitivism has seemed intuitively appealing. There is a story about the relation of human intelligence to computation that goes back at least to Turing's classic paper (1950), and I believe it is the foundation of the Cognitivist view. I will call it the Primal Story: We begin with two results in mathematical logic, the Church-Turing thesis (or equivalently, Church's thesis) and Turing's theorem. For our purposes, the Church-Turing thesis states that for any algorithm there is some Turing machine that can implement that algorithm. Turing's theorem says that there is a Universal Turing Machine which can simulate any Turing Machine. Now if we put these two together we have the result that a Universal Turing Machine can implement any algorithm whatever. But now, what made this result so exciting? What made it send shivers up and down the spines of a whole generation of young workers in artificial intelligence is the following thought: Suppose the brain is a Un...
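The universality claim in the Primal Story rests on the fact that a single interpreter can run any Turing machine supplied as data. A minimal sketch (my own toy interpreter, assuming a one-tape machine with blank symbol '_'):

```python
def run_tm(delta, state, tape, accept, max_steps=1000):
    """Run a Turing machine given as data: delta maps (state, symbol) to
    (new_state, written_symbol, move). One interpreter runs any machine
    so described -- the sense of universality the Primal Story invokes."""
    tape = dict(enumerate(tape))  # sparse tape; unvisited cells are blank
    head = 0
    for _ in range(max_steps):
        if state == accept:
            return True
        key = (state, tape.get(head, '_'))
        if key not in delta:
            return False  # no applicable rule: the machine halts and rejects
        state, tape[head], move = delta[key]
        head += 1 if move == 'R' else -1
    return False

# A toy machine that accepts a block of 1s by scanning right to the blank.
delta = {
    ('scan', '1'): ('scan', '1', 'R'),
    ('scan', '_'): ('done', '_', 'R'),
}
print(run_tm(delta, 'scan', "111", 'done'))  # True
print(run_tm(delta, 'scan', "101", 'done'))  # False
```

Swapping in a different `delta` changes which machine is simulated without touching the interpreter, which is exactly what lets one conclude that a Universal Turing Machine can implement any algorithm whatever.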
Article
We investigate a new research area: we are interested in the ultimate thermodynamic cost of computing from x to y. Other than its fundamental importance, such research has potential implications in miniaturization of VLSI chips and applications in pattern recognition. It turns out that the theory of thermodynamic cost of computation can be axiomatically developed. Our fundamental theorem connects physics to mathematics, providing the key that makes such a theory possible. It establishes optimal upper and lower bounds on the ultimate thermodynamic cost of computation. By computing longer and longer, the amount of dissipated energy gets closer to these limits. In fact, one can trade time for energy: there is a provable time-energy trade-off hierarchy. The fundamental theorem also induces a thermodynamic distance metric. The topological properties of this metric show that neighborhoods are sparse, and get even sparser if they are centered on random elements. The proofs use Symmetry of Inf...
Article
Putnam has argued that computational functionalism cannot serve as a foundation for the study of the mind, as every ordinary open physical system implements every finite-state automaton. I argue that Putnam's argument fails, but that it points out the need for a better understanding of the bridge between the theory of computation and the theory of physical systems: the relation of implementation. It also raises questions about the classes of automata that can serve as a basis for understanding the mind. I develop an account of implementation, linked to an appropriate class of automata, such that the requirement that a system implement a given automaton places a very strong constraint on the system. This clears the way for computation to play a central role in the analysis of mind. 1 Introduction The theory of computation is often thought to underwrite the theory of mind. In cognitive science, it is widely believed that intelligent behavior is enabled by the fact that the mind or the ...
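The strong implementation constraint described in this abstract can be sketched schematically (my own toy formulation, not Chalmers's exact definition): a mapping from physical states to automaton states counts as an implementation only if it commutes with the dynamics for every physical state and every input, so the mapping must support counterfactual runs rather than merely match one actual trajectory:

```python
def implements(fsa_delta, grouping, phys_delta):
    """Check that 'grouping' (physical state -> automaton state) commutes
    with the dynamics: for every physical state p and input i,
    grouping(phys_delta(p, i)) == fsa_delta(grouping(p), i).
    Quantifying over ALL (p, i) pairs is what makes the constraint strong."""
    return all(
        grouping[phys_delta[(p, i)]] == fsa_delta[(grouping[p], i)]
        for (p, i) in phys_delta
    )

# Toy example: four physical states grouped into two automaton states.
fsa_delta = {('A', 0): 'A', ('A', 1): 'B', ('B', 0): 'B', ('B', 1): 'A'}
grouping = {'p1': 'A', 'p2': 'A', 'p3': 'B', 'p4': 'B'}
phys_delta = {
    ('p1', 0): 'p2', ('p1', 1): 'p3',
    ('p2', 0): 'p1', ('p2', 1): 'p4',
    ('p3', 0): 'p4', ('p3', 1): 'p1',
    ('p4', 0): 'p3', ('p4', 1): 'p2',
}
print(implements(fsa_delta, grouping, phys_delta))  # True
```

An arbitrary relabelling of physical states will generally fail this check, which is why Putnam-style trivial implementations are blocked once the counterfactual transition structure is required.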
Searle’s arguments against cognitive science
  • N Block
The physics and metaphysics of computation and cognition. In: Philosophy and Theory of Artificial Intelligence
  • P Bokulich
Information processing and the structuring of data
  • N Fresco
  • M J Wolf
What is computation? Synthese
  • B J Copeland