Chapter

Your Brain Is Like a Computer: Function, Analogy, Simplification

Abstract

The relationship between brain and computer is a perennial theme in theoretical neuroscience, but it has received relatively little attention in the philosophy of neuroscience. This paper argues that much of the popularity of the brain-computer comparison (e.g. circuit models of neurons and brain areas since McCulloch and Pitts, Bull Math Biophys 5: 115–33, 1943) can be explained by their utility as ways of simplifying the brain. More specifically, by justifying a sharp distinction between aspects of neural anatomy and physiology that serve information-processing, and those that are ‘mere metabolic support,’ the computational framework provides a means of abstracting away from the complexities of cellular neurobiology, as those details come to be classified as irrelevant to the (computational) functions of the system. I argue that the relation between brain and computer should be understood as one of analogy, and consider the implications of this interpretation for notions of multiple realisation. I suggest some limitations of our understanding of the brain and cognition that may stem from the radical abstraction imposed by the computational framework.

... Elsewhere I discuss the rise of computationalism, arguing that its appeal rested not least in the simplifications that it offered to neurophysiologists (Chirimuuta, 2021). See e.g. Todes (2014, pp. ...
Article
Full-text available
This paper takes an integrated history and philosophy of science approach to the topic of "simplicity out of complexity". The reflex theory was a framework within early twentieth century psychology and neuroscience which aimed to decompose complex behaviours and neural responses into simple reflexes. It was controversial in its time, and did not live up to its own theoretical and empirical ambitions. Examination of this episode poses important questions about the limitations of simplifying strategies, and the relationship between simplification and the engineering approach to biology.
... Researchers also continue to examine the long-standing yet highly relevant question of how the human brain compares to the computer. For example, Chirimuuta (2021) holds that the brain-computer relationship is a perennial theme in theoretical neuroscience that has received relatively little attention in the philosophy of neuroscience, and that much of the popularity of the brain-computer comparison is due to its usefulness in simplifying the brain. Chirimuuta argues that the relationship between brain and computer should be understood as an analogy and considers the consequences of this interpretation for the concept of multiple realization. ...
Article
Objections to the computational theory of cognition, inspired by twentieth century phenomenology, have tended to fixate on the embodiment and embeddedness of intelligence. In this paper I reconstruct a line of argument that focusses primarily on the abstract nature of scientific models, of which computational models of the brain are one sort. I observe that the critique of scientific abstraction was rather commonplace in the philosophy of the 1920s and 30s and that attention to it aids the reading of The Organism ([1934] 1939) by the neurologist Kurt Goldstein. With this background in place, we see that some brief but spirited criticisms of cybernetics by two later thinkers much influenced by Goldstein, Georges Canguilhem (1963) and Maurice Merleau-Ponty (1961), show continuity with the earlier discussions of abstraction in science.
Article
Full-text available
The use of machine learning instead of traditional models in neuroscience raises significant questions about the epistemic benefits of the newer methods. I draw on the literature on model intelligibility in the philosophy of science to offer some benchmarks for the interpretability of artificial neural networks (ANNs) used as a predictive tool in neuroscience. Following two case studies on the use of ANNs to model motor cortex and the visual system, I argue that the benefit of providing the scientist with understanding of the brain trades off against the predictive accuracy of the models. This trade-off between prediction and understanding is better explained by a non-factivist account of scientific understanding.
Article
Full-text available
Synapses are the hallmark of brain complexity and have long been thought of as simple connectors between neurons. We are now in an era in which we know the full complement of synapse proteins and this has created an existential crisis because the molecular complexity far exceeds the requirements of most simple models of synaptic function. Studies of the organisation of proteome complexity and its evolution provide surprising new insights that challenge existing dogma and promote the development of new theories about the origins and role of synapses in behaviour. The postsynaptic proteome of excitatory synapses is a structure with high molecular complexity and sophisticated computational properties that is disrupted in over 130 brain diseases. A key goal of 21st-century neuroscience is to develop comprehensive molecular datasets on the brain and develop theories that explain the molecular basis of behaviour.
Article
Full-text available
In this paper, I argue that looking at the concept of neural function through the lens of cognition alone risks cognitive myopia: it leads neuroscientists to focus only on mechanisms with cognitive functions that process behaviorally relevant information when conceptualizing "neural function". Cognitive myopia tempts researchers to neglect neural mechanisms with noncognitive functions which do not process behaviorally relevant information but maintain and repair neural and other systems of the body. Cognitive myopia similarly affects philosophy of neuroscience because scholars overlook noncognitive functions when analyzing issues surrounding e.g., functional decomposition or the multifunctionality of neural structures. I argue that we can overcome cognitive myopia by adopting a patchwork approach that articulates cognitive and noncognitive "patches" of the concept of neural function. Cognitive patches describe mechanisms with causally specific effects on cognition and behavior which are likely operative in transforming sensory or other inputs into motor outputs. Noncognitive patches describe mechanisms that lack such specific effects; these mechanisms are enabling conditions for cognitive functions to occur. I use these distinctions to characterize two noncognitive functions at the mesoscale of neural circuits: subsistence functions like breathing are implemented by central pattern generators and are necessary to maintain the life of the organism. Infrastructural functions like gain control are implemented by canonical microcircuits and prevent neural system damage while cognitive processing occurs. By adding conceptual patches that describe these functions, a patchwork approach can overcome cognitive myopia and help us explain how the brain's capacities as an information processing device are constrained by its ability to maintain and repair itself as a physiological apparatus.
Article
Full-text available
In this paper, I argue that computationalism is a progressive research tradition. Its metaphysical assumptions are that nervous systems are computational, and that information processing is necessary for cognition to occur. First, the primary reasons why information processing should explain cognition are reviewed. Then I argue that early formulations of these reasons are outdated. However, by relying on the mechanistic account of physical computation, they can be recast in a compelling way. Next, I contrast two computational models of working memory to show how modeling has progressed over the years. The methodological assumptions of new modeling work are best understood in the mechanistic framework, which is evidenced by the way in which models are empirically validated. Moreover, the methodological and theoretical progress in computational neuroscience vindicates the new mechanistic approach to explanation, which, at the same time, justifies the best practices of computational modeling. Overall, computational modeling is deservedly successful in cognitive (neuro)science. Its successes are related to deep conceptual connections between cognition and computation. Computationalism is not only here to stay; it becomes stronger every year.
Article
Full-text available
New technologies in neuroscience generate reams of data at an exponentially increasing rate, spurring the design of very-large-scale data-mining initiatives. Several supranational ventures are contemplating the possibility of achieving, within the next decade(s), full simulation of the human brain.
Article
Full-text available
An underlying assumption in computational approaches in cognitive and brain sciences is that the nervous system is an input–output model of the world: Its input–output functions mirror certain relations in the target domains. I argue that the input–output modelling assumption plays distinct methodological and explanatory roles. Methodologically, input–output modelling serves to discover the computed function from environmental cues. Explanatorily, input–output modelling serves to account for the appropriateness of the computed function to the explanandum information-processing task. I compare very briefly the modelling explanation to mechanistic and optimality explanations, noting that in both cases the explanations can be seen as complementary rather than contrastive or competing.
Article
Full-text available
There is a popular belief in neuroscience that we are primarily data limited, and that producing large, multimodal, and complex datasets will, with the help of advanced data analysis algorithms, lead to fundamental insights into the way the brain processes information. These datasets do not yet exist, and if they did we would have no way of evaluating whether or not the algorithmically-generated insights were sufficient or even correct. To address this, here we take a classical microprocessor as a model organism, and use our ability to perform arbitrary experiments on it to see if popular data analysis methods from neuroscience can elucidate the way it processes information. Microprocessors are among those artificial information processing systems that are both complex and that we understand at all levels, from the overall logical flow, via logical gates, to the dynamics of transistors. We show that the approaches reveal interesting structure in the data but do not meaningfully describe the hierarchy of information processing in the microprocessor. This suggests current analytic approaches in neuroscience may fall short of producing meaningful understanding of neural systems, regardless of the amount of data. Additionally, we argue that scientists should use complex non-linear dynamical systems with known ground truth, such as the microprocessor, as a validation platform for time-series and structure discovery methods.
Article
Full-text available
This paper argues that we should take into account the process of historical transmission to enrich our understanding of material culture. More specifically, I want to show how the rewriting of history and the invention of tradition impact material objects and our beliefs about them. I focus here on the transmission history of the mechanical calculator invented by the German savant Gottfried Wilhelm Leibniz. Leibniz repeatedly described his machine as functional and wonderfully useful, but in reality it was never finished and didn't fully work. Its internal structure also remained unknown. In 1879, however, the machine re-emerged and was reinvented as the origin of all later calculating machines based on the stepped drum, to protect the priority of the German Leibniz against the Frenchman Thomas de Colmar as the father of mechanical calculation. The calculator was later replicated to demonstrate that it could function ‘after all’, in an effort to deepen this narrative and further enhance Leibniz's computing acumen.
Article
Full-text available
The picture of synthetic biology as a kind of engineering science has largely created the public understanding of this novel field, covering both its promises and risks. In this paper, we will argue that the actual situation is more nuanced and complex. Synthetic biology is a highly interdisciplinary field of research located at the interface of physics, chemistry, biology, and computational science. All of these fields provide concepts, metaphors, mathematical tools, and models, which are typically utilized by synthetic biologists by drawing analogies between the different fields of inquiry. We will study analogical reasoning in synthetic biology through the emergence of the functional meaning of noise, which marks an important shift in how engineering concepts are employed in this field. The notion of noise serves also to highlight the differences between the two branches of synthetic biology: the basic science-oriented branch and the engineering-oriented branch, which differ from each other in the way they draw analogies to various other fields of study. Moreover, we show that fixing the mapping between a source domain and the target domain seems not to be the goal of analogical reasoning in actual scientific practice.
Article
Full-text available
Despite its significance in neuroscience and computation, McCulloch and Pitts's celebrated 1943 paper has received little historical and philosophical attention. In 1943 there already existed a lively community of biophysicists doing mathematical work on neural networks. What was novel in McCulloch and Pitts's paper was their use of logic and computation to understand neural, and thus mental, activity. McCulloch and Pitts's contributions included (i) a formalism whose refinement and generalization led to the notion of finite automata (an important formalism in computability theory), (ii) a technique that inspired the notion of logic design (a fundamental part of modern computer design), (iii) the first use of computation to address the mind–body problem, and (iv) the first modern computational theory of mind and brain.
Article
Full-text available
Prefrontal cortex is thought to have a fundamental role in flexible, context-dependent behaviour, but the exact nature of the computations underlying this role remains largely unknown. In particular, individual prefrontal neurons often generate remarkably complex responses that defy deep understanding of their contribution to behaviour. Here we study prefrontal cortex activity in macaque monkeys trained to flexibly select and integrate noisy sensory inputs towards a choice. We find that the observed complexity and functional roles of single neurons are readily understood in the framework of a dynamical process unfolding at the level of the population. The population dynamics can be reproduced by a trained recurrent neural network, which suggests a previously unknown mechanism for selection and integration of task-relevant inputs. This mechanism indicates that selection and integration are two aspects of a single dynamical process unfolding within the same prefrontal circuits, and potentially provides a novel, general framework for understanding context-dependent computations.
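For readers unfamiliar with this class of model, the sketch below shows only the generic discrete-time update of a small recurrent network driven by noisy sensory evidence and a context cue. The weights are random and untrained, so it merely illustrates the kind of dynamical model referred to; it is not the trained network from the study, and all parameter values are hypothetical.

```python
# Illustrative sketch: discrete-time dynamics of a small recurrent network
# driven by noisy sensory evidence plus a context signal. Weights here are
# random and untrained; the cited study trains such networks so that they
# reproduce recorded population dynamics.
import numpy as np

rng = np.random.default_rng(0)
n_units, n_steps = 50, 100

W_rec = rng.normal(0, 1 / np.sqrt(n_units), (n_units, n_units))  # recurrent connectivity
W_in = rng.normal(0, 1.0, (n_units, 3))  # three inputs: two evidence streams and a context cue

x = np.zeros(n_units)  # hidden (population) state
trajectory = []
for t in range(n_steps):
    u = np.array([
        0.5 + rng.normal(0, 0.3),   # noisy sensory evidence, stream 1
        -0.2 + rng.normal(0, 0.3),  # noisy sensory evidence, stream 2
        1.0,                        # context cue (which stream is currently relevant)
    ])
    x = np.tanh(W_rec @ x + W_in @ u)  # one step of the population dynamics
    trajectory.append(x.copy())

trajectory = np.stack(trajectory)  # shape (n_steps, n_units), analysed at the population level
```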
Book
Though it did not yet exist as a discrete field of scientific inquiry, biology was at the heart of many of the most important debates in seventeenth-century philosophy. Nowhere is this more apparent than in the work of G. W. Leibniz. This book offers the first in-depth examination of Leibniz's deep and complex engagement with the empirical life sciences of his day, in areas as diverse as medicine, physiology, taxonomy, generation theory, and paleontology. The book shows how these wide-ranging pursuits were not only central to Leibniz's philosophical interests, but often provided the insights that led to some of his best-known philosophical doctrines. Presenting the clearest picture yet of the scope of Leibniz's theoretical interest in the life sciences, the book takes seriously the philosopher's own repeated claims that the world must be understood in fundamentally biological terms. Here it reveals a thinker who was immersed in the sciences of life, and looked to the living world for answers to vexing metaphysical problems. The book casts Leibniz's philosophy in an entirely new light, demonstrating how it radically departed from the prevailing models of mechanical philosophy and had an enduring influence on the history and development of the life sciences. Along the way, the book provides a fascinating glimpse into early modern debates about the nature and origins of organic life, and into how philosophers such as Leibniz engaged with the scientific dilemmas of their era.
Article
This article examines three candidate cases of non-causal explanation in computational neuroscience. I argue that there are instances of efficient coding explanation that are strongly analogous to examples of non-causal explanation in physics and biology, as presented by Batterman ([2002]), Woodward ([2003]), and Lange ([2013]). By integrating Lange’s and Woodward’s accounts, I offer a new way to elucidate the distinction between causal and non-causal explanation, and to address concerns about the explanatory sufficiency of non-mechanistic models in neuroscience. I also use this framework to shed light on the dispute over the interpretation of dynamical models of the brain.
Article
In this review essay on The Multiple Realization Book by Polger and Shapiro, I consider the prospects for a biologically grounded notion of multiple realization (MR) which has been given too little consideration in the philosophy of mind and cognitive science. Thinking about MR in the context of biological notions of function and robustness leads to a rethink of what would count as a viable functionalist theory of mind. I also discuss points of tension between Polger and Shapiro’s definition of MR and current explanatory practice in neuroscience.
Article
One might have thought that if something has two or more distinct realizations, then that thing is multiply realized. Nevertheless, some philosophers have claimed that two or more distinct realizations do not amount to multiple realization, unless those distinct realizations amount to multiple “ways” of realizing the thing. Corey Maley, Gualtiero Piccinini, Thomas Polger, and Lawrence Shapiro are among these philosophers. Unfortunately, they do not explain why multiple realization requires multiple “ways” of realizing. More significantly, their efforts to articulate multiple “ways” of realizing turn out to be problematic.
Article
The fields of neuroscience and artificial intelligence (AI) have a long and intertwined history. In more recent times, however, communication and collaboration between the two fields has become less commonplace. In this article, we argue that better understanding biological brains could play a vital role in building intelligent machines. We survey historical interactions between the AI and neuroscience fields and emphasize current advances in AI that have been inspired by the study of neural computation in humans and other animals. We conclude by highlighting shared themes that may be key for advancing future research in both fields.
Article
In this paper I discuss the concept of robustness in neuroscience. Various mechanisms for making systems robust have been discussed across biology and neuroscience (e.g. redundancy and fail-safes). Many of these notions originate from engineering. I argue that concepts borrowed from engineering aid neuroscientists in (1) operationalizing robustness; (2) formulating hypotheses about mechanisms for robustness; and (3) quantifying robustness. Furthermore, I argue that the significant disanalogies between brains and engineered artefacts raise important questions about the applicability of the engineering framework. I argue that the use of such concepts should be understood as a kind of simplifying idealization.
Chapter
As early as 1894, Cassirer began an in-depth study of the writings of Kant, Leibniz, and Descartes, and of Hermann Cohen as well. But it wasn’t until the spring of 1896 that he began to attend Hermann Cohen’s courses in Marburg. For Cohen, as for the young Cassirer, the philosophy of Leibniz constitutes one of the essential links in the chain of the history of idealism (with Plato leading the line-up); it is even a privileged link that elucidates “Kant’s relationship to his predecessors” (“Kants Verhältnis zu seinen Vorgängern”).
Article
Fueled by innovation in the computer vision and artificial intelligence communities, recent developments in computational neuroscience have used goal-driven hierarchical convolutional neural networks (HCNNs) to make strides in modeling neural single-unit and population responses in higher visual cortical areas. In this Perspective, we review the recent progress in a broader modeling context and describe some of the key technical innovations that have supported it. We then outline how the goal-driven HCNN approach can be used to delve even more deeply into understanding the development and organization of sensory cortical processing.
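As a rough illustration of what a goal-driven HCNN involves in practice, the sketch below (assuming PyTorch as a dependency) builds a toy hierarchical convolutional network and reads out an intermediate layer's activations; in the reviewed approach such a network would first be trained on a behavioural objective (e.g. object categorisation) and its activations then compared with recorded neural responses. The architecture and layer choices here are hypothetical, not taken from the cited work.

```python
# Illustrative sketch (assumes PyTorch): a tiny "goal-driven" hierarchical
# convolutional network with a hook that exposes an intermediate layer's
# activations for comparison with neural data.
import torch
import torch.nn as nn

hcnn = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=5), nn.ReLU(), nn.MaxPool2d(2),   # early layer
    nn.Conv2d(16, 32, kernel_size=3), nn.ReLU(), nn.MaxPool2d(2),  # intermediate layer
    nn.Flatten(),
    nn.LazyLinear(8),                                              # readout for an 8-way task
)

image = torch.randn(1, 3, 64, 64)  # a stand-in stimulus
features = {}

def save_activation(name):
    def hook(module, inputs, output):
        features[name] = output.detach()
    return hook

hcnn[3].register_forward_hook(save_activation("intermediate"))  # hook the second conv layer
logits = hcnn(image)
# features["intermediate"] could now be regressed against recorded neural responses.
```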
Article
Béatrice Longuenesse considers the three aspects of Kant's philosophy, his epistemology and metaphysics of nature, moral philosophy, and aesthetic theory, under one unifying standpoint: Kant's conception of our capacity to form judgments. She argues that the elements which make up our cognitive access to the world have an equally important role to play in our moral evaluations and our aesthetic judgments. Her book will appeal to all interested in Kant and his thought, ranging over Kant's account of our representations of space and time, his conception of the logical forms of judgments, sufficient reason, causality, community, God, freedom, morality, and beauty in nature and art.
Article
What are the functional units of the brain? If the function of the brain is to process information-carrying signals, then the functional units will be the senders and receivers of those signals. Neurons have been the default candidate, with action potentials as the signals. But there are alternatives: synapses fit the action potential picture more cleanly, and glial activities (e.g., in astrocytes) might also be characterized as signaling. Are synapses or nonneuronal cells better candidates to play the role of functional units? Will informational signaling still be the best model for brain function if we move beyond the neuron doctrine?
Article
In this article we argue for the existence of ‘analogue simulation’ as a novel form of scientific inference with the potential to be confirmatory. This notion is distinct from the modes of analogical reasoning detailed in the literature, and draws inspiration from fluid dynamical ‘dumb hole’ analogues to gravitational black holes. For that case, which is considered in detail, we defend the claim that the phenomena of gravitational Hawking radiation could be confirmed in the case that its counterpart is detected within experiments conducted on diverse realizations of the analogue model. A prospectus is given for further potential cases of analogue simulation in contemporary science.
Article
"I was precisely the first one, some thirty or more years ago, to confirm by physiological experiments and to define more closely that which Hughlings Jackson had concluded from clinical facts." (E. Hitzig, Brain 23:545, 1900.) "The objects I had in view in undertaking the present research were twofold: first to put to experimental proof the views entertained by Dr. Hughlings Jackson.... I regard these [researches] as an experimental confirmation of the views expressed by him. They are, as it were, an artificial reproduction of the experiments produced by disease, and the clinical conclusions which Dr. Jackson has arrived at from his observations of disease are in all essential particulars confirmed by the above experiments." (D. Ferrier, cf. Selected Writings of J. H. Jackson, 1931, 1, ix.)
Article
Advances in experimental techniques, including behavioral paradigms using rich stimuli under closed loop conditions and the interfacing of neural systems with external inputs and outputs, reveal complex dynamics in the neural code and require a revisiting of standard concepts of representation. High-throughput recording and imaging methods along with the ability to observe and control neuronal subpopulations allow increasingly detailed access to the neural circuitry that subserves neural representations and the computations they support. How do we harness theory to build biologically grounded models of complex neural function?
Article
The Copley Medal is awarded to Professor Albert Jan Kluyver who has held the chair of microbiology at Delft since 1921 and has become a world authority on general microbiology and the biological approach to its problems. In 1923 he made a survey of natural processes known to occur through the agency of micro-organisms, and this survey has been the cornerstone of his work for almost twenty years. Impressed by the bewildering variety of substances (both inorganic and organic) used by these organisms for their growth processes and by the equally great variety of substances formed as the end-products of metabolism, he sought some underlying uniformity in the basic types of chemical change which occurred. Devoting himself mainly to bacterial fermentations, he found this uniformity in the extension of the concepts of Wieland and Thunberg that biological oxidations occur by successive transfers of pairs of hydrogen atoms to a suitable acceptor. His views were confirmed by a series of studies covering all the principal bacterial fermentations of carbohydrate. These researches also led him to believe that, in spite of the many and varied products formed, the degradation of carbohydrate took place stepwise by a series of simple reactions leading to a limited number of common intermediates. The initial stages were alike and the variation came later through the differing enzymic constitution of the cells and the differing effects of changes in the environment on such enzyme reactions. These views have been amply confirmed and, apart from its broader implications, the work on fermentation remains a source of some of the most accurate data in the field.
Article
The above statement of what is meant by the behavioristic method of study omits the specific structure and the intrinsic organization of the object. This omission is fundamental because on it is based the distinction between the behavioristic and the alternative functional method of study. In a functional analysis, as opposed to a behavioristic approach, the main goal is the intrinsic organization of the entity studied, its structure and its properties; the relations between the object and the surroundings are relatively incidental. From this definition of the behavioristic method a broad definition of behavior ensues. By behavior is meant any change of an entity with respect to its surroundings. This change may be largely an output from the object, the input being then minimal, remote or irrelevant; or else the change may be immediately traceable to a certain input. Accordingly, any modification of an object, detectable externally, may be denoted as behavior. The term would be, therefore, too extensive for usefulness were it not that it may be restricted by apposite adjectives, i.e., that behavior may be classified. The consideration of the changes of energy involved in behavior affords a basis for classification. Active behavior is that in which the ...
Article
Our ability to move is central to everyday life. Investigating the neural control of movement in general, and the cortical control of volitional arm movements in particular, has been a major research focus in recent decades. Studies have involved primarily either trying to account for single-neuron responses in terms of tuning for movement parameters, or trying to decode movement parameters from populations of tuned neurons. While this focus on encoding and decoding has led to many seminal advances, it has not led to an agreed-upon conceptual framework. Recently, interest in understanding the underlying neural dynamics has increased, leading to questions such as how the current population response determines the future population response, and to what purpose? We review how a dynamical systems perspective may help us understand why neural activity evolves the way it does, how neural activity relates to movement parameters, and how a unified conceptual framework may result.
Article
Charts the increasing 'levels' (embedding) of intentionality which may in principle underlie primate vocal behaviour, and suggests a simple method for picking out the real level; visits the very primatologists on whose data he theorized, and discusses the difficulties in executing his simple test in practice; intentional systems in cognitive ethology.
Article
Because of the “all-or-none” character of nervous activity, neural events and the relations among them can be treated by means of propositional logic. It is found that the behavior of every net can be described in these terms, with the addition of more complicated logical means for nets containing circles; and that for any logical expression satisfying certain conditions, one can find a net behaving in the fashion it describes. It is shown that many particular choices among possible neurophysiological assumptions are equivalent, in the sense that for every net behaving under one assumption, there exists another net which behaves under the other and gives the same results, although perhaps not in the same time. Various applications of the calculus are discussed.
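To make the all-or-none formalism concrete: a McCulloch-Pitts unit is a threshold function over weighted binary inputs, and with suitable weights it realises propositional connectives. The sketch below is only an illustration of this idea; the weights and thresholds are hypothetical examples and are not taken from the 1943 paper.

```python
# Minimal sketch of a McCulloch-Pitts threshold unit (illustrative only).
# The unit fires (outputs 1) when the weighted sum of its binary inputs
# reaches its threshold; suitable weights yield propositional connectives.

def mp_unit(inputs, weights, threshold):
    """Return 1 if the weighted sum of binary inputs meets the threshold, else 0."""
    total = sum(w * x for w, x in zip(weights, inputs))
    return 1 if total >= threshold else 0

# Hypothetical gates built from two excitatory inputs with unit weights.
AND = lambda a, b: mp_unit([a, b], [1, 1], threshold=2)
OR = lambda a, b: mp_unit([a, b], [1, 1], threshold=1)

assert AND(1, 1) == 1 and AND(1, 0) == 0
assert OR(0, 1) == 1 and OR(0, 0) == 0
```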
Article
The paper examines philosophical issues that arise in contexts where one has many different models for treating the same system. I show why in some cases this appears relatively unproblematic (models of turbulence) while others represent genuine difficulties when attempting to interpret the information that models provide (nuclear models). What the examples show is that while complementary models needn’t be a hindrance to knowledge acquisition, the kind of inconsistency present in nuclear cases is, since it is indicative of a lack of genuine theoretical understanding. It is important to note that the differences in modeling do not result directly from the status of our knowledge of turbulent flows as opposed to nuclear dynamics: both face fundamental theoretical problems in the construction and application of models. However, as we shall see, the ‘problem context(s)’ in which the modeling takes place plays a decisive role in evaluating the epistemic merit of the models themselves. Moreover, the theoretical difficulties that give rise to inconsistent as opposed to complementary models (in the cases I discuss) impose epistemic and methodological burdens that cannot be overcome by invoking philosophical strategies like perspectivism, paraconsistency or partial structures.
Article
Computational neuroscientists not only employ computer models and simulations in studying brain functions. They also view the modeled nervous system itself as computing. What does it mean to say that the brain computes? And what is the utility of the ‘brain-as-computer’ assumption in studying brain functions? In previous work, I have argued that a structural conception of computation is not adequate to address these questions. Here I outline an alternative conception of computation, which I call the analog-model. The term ‘analog-model’ does not mean continuous, non-discrete or non-digital. It means that the functional performance of the system simulates mathematical relations in some other system, between what is being represented. The brain-as-computer view is invoked to demonstrate that the internal cellular activity is appropriate for the pertinent information-processing (often cognitive) task.