Chapter

Your Brain Is Like a Computer: Function, Analogy, Simplification


Abstract

The relationship between brain and computer is a perennial theme in theoretical neuroscience, but it has received relatively little attention in the philosophy of neuroscience. This paper argues that much of the popularity of the brain-computer comparison (e.g. circuit models of neurons and brain areas since McCulloch and Pitts, Bull Math Biophys 5: 115–33, 1943) can be explained by their utility as ways of simplifying the brain. More specifically, by justifying a sharp distinction between aspects of neural anatomy and physiology that serve information-processing, and those that are ‘mere metabolic support,’ the computational framework provides a means of abstracting away from the complexities of cellular neurobiology, as those details come to be classified as irrelevant to the (computational) functions of the system. I argue that the relation between brain and computer should be understood as one of analogy, and consider the implications of this interpretation for notions of multiple realisation. I suggest some limitations of our understanding of the brain and cognition that may stem from the radical abstraction imposed by the computational framework.


... Moreover, current defenses of multiple realization in the brain tend to focus on neuroscience itself as confirming that brains are computational devices, and that those computational states are multiply realizable. Now, if you think that computational neuroscience shouldn't be taken too seriously (Bickle, 2003), or shouldn't be taken too literally as making computational claims as suggested above (cf. Cao 2022; Chirimuuta 2021), then considerations about computational neuroscience will not carry weight for you. But we're trying to understand why many people think that multiple realization is obvious, so let's leave these reservations aside. ...
... Notice that the economic example also has the feature emphasized by Chirimuuta (2021): what counts as a "monetary exchange" or "financial transaction" is dependent on human beings and our interests. The fact that anything could be used as a currency is just the fact that anything can be used as a conventional or arbitrary representation, that is, used by us to represent, given the right circumstances. ...
Article
Full-text available
According to the multiple realization argument, mental states or processes can be realized in diverse and heterogeneous physical systems; and that fact implies that mental state or process kinds cannot be identified with particular kinds of physical states or processes. More specifically, mental processes cannot be identified with brain processes. Moreover, the argument provides a general model for the autonomy of the special sciences. The multiple realization argument is widely influential, but over the last thirty years it has also faced serious objections. Despite those objections, most philosophers regard the fact of multiple realization and the cogency of the multiple realization argument as plainly correct. Why is that? What is it about the multiple realization argument that makes it so resilient? One reason is that the multiple realization argument is deeply intertwined with a view that minds are, in some sense, computational. But we argue that the sense in which minds are computational does not support the conclusion that they are ipso facto multiply realized. We argue that the sense in which brains compute does not imply that brains implement multiply realizable computational processes, and it does not provide a general model for the autonomy of the special sciences.
... Many suppose that the structure represented in the model mathematically as a computation is there in the brain activity as a series of physical state transitions, to the extent that the model is accurate. The traditional realist invokes a structure in common (a homomorphism) claimed to exist in model and target, a view I have called formal realism (Chirimuuta, 2021). This stands in contrast to an alternative way to cash out the ontological status of the model called formal idealism. ...
Article
Full-text available
Recent work in philosophy of science has shown how the challenges posed by extremely complex systems require that scientists employ a range of modelling strategies, leading to partial perspectives that make apparently conflicting claims about the target (Mitchell 2009b, Longino 2013). The brain is of course extremely complex, and the same arguments apply here. In this paper I present a variety of perspectivism called haptic realism. This account foregrounds the process by which the instrumental goals of neuroscience shape the way that objects of investigation are probed experimentally and conceptualised through modelling. Because such models do not aim to represent their targets exactly as they are, but in ways that are most useful for the investigators, the models should not be interpreted as literal descriptions of neural systems. Scientific realism traditionally involves a semantic commitment to interpreting theories and models as literal descriptions of human-independent nature. Haptic realism makes a significant departure from this tradition. Haptic realism also calls us to reassess the ontological commitments and knowledge claims of neuroscientific models and theories.
... Descartes was one of the earliest scientists (natural philosophers) whose explanatory programme presupposed the denial of any fundamental difference between natural and artefactual objects. This is a simplifying assumption, because machines and other technological entities are less complex than organisms, but they afford a model, a simplifying lens through which to view the works of nature (Chirimuuta 2021). We can call this a 'Cartesian idealization', and recognize that Simon is employing it in his promotion of the computer as the model for the mind. ...
Article
Full-text available
This paper examines the dispute between Burge and McDowell over methodology in the philosophy of perception. Burge (2005, 2011) has argued that the disjunctivism posited by naive perceptual realists is incompatible with the results of current perceptual science, while McDowell (2010, 2013) defends his disjunctivism by claiming an autonomous field of enquiry for perceptual epistemology, one that does not employ the classificatory schemes of the science. Here it is argued that the crucial point at issue in the dispute is Burge’s acceptance, and McDowell’s rejection, of the ‘Cartesian idealization’ of mind as a self-contained system. Burge’s case against disjunctivism rests on the assumption of a clearly demarcated boundary between mind and world, a picture of the mind that McDowell’s philosophy reacts against. This boundary is required for scientific, causal explanations of perceptual processing because it is a simplifying assumption that helps present scientists with a clearly demarcated object of investigation. Concurring with McDowell, I conclude that philosophers need not carve up their objects of investigation in the same way.
... Elsewhere I discuss the rise of computationalism, arguing that its appeal rested not least in the simplifications that it offered to neurophysiologists (Chirimuuta, 2021). See e.g. Todes (2014, pp. ...
Article
Full-text available
This paper takes an integrated history and philosophy of science approach to the topic of "simplicity out of complexity". The reflex theory was a framework within early twentieth century psychology and neuroscience which aimed to decompose complex behaviours and neural responses into simple reflexes. It was controversial in its time, and did not live up to its own theoretical and empirical ambitions. Examination of this episode poses important questions about the limitations of simplifying strategies, and the relationship between simplification and the engineering approach to biology.
... Scientists also continue to study the long-standing but still highly relevant question of how the human brain compares with the computer. For example, Chirimuuta (2021) observes that the relationship between brain and computer is a perennial theme in theoretical neuroscience that has received relatively little attention in the philosophy of neuroscience, and argues that much of the popularity of the brain-computer comparison is due to its usefulness in simplifying the brain. Chirimuuta holds that the relationship between the brain and the computer should be understood as an analogy, and considers the consequences of this interpretation for the concepts of multiple realization. ...
Article
Full-text available
If qualia are mental, and if the mental is functional, then so are qualia. But, arguably, qualia are not functional. A resolution of this is offered based on a formal similarity between qualia and numbers. Just as certain sets “play the role of” the number 3 in Peano’s axioms, so a certain physical implementation of a color plays the role of, say, red in a (computational) cognitive agent’s “cognitive economy”.
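The set-theoretic comparison can be made concrete. As a hedged illustration (a standard textbook construction, not material from the article itself), the two definitions below both "play the role of" the number 3, since each satisfies Peano's axioms under its own successor function; the analogy is that a colour quale would likewise be whatever physical implementation occupies the relevant role in the agent's cognitive economy.

```latex
% Two standard set-theoretic realizations of the number 3 (illustrative only):
% Zermelo numerals, with successor S(n) = {n}, and von Neumann ordinals,
% with successor S(n) = n \cup {n}. Both structures satisfy Peano's axioms.
\[
  3_{\mathrm{Zermelo}} \;=\; \{\{\{\varnothing\}\}\}
  \qquad\qquad
  3_{\mathrm{von\ Neumann}} \;=\; \bigl\{\varnothing,\ \{\varnothing\},\ \{\varnothing,\{\varnothing\}\}\bigr\}
\]
```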
Article
Full-text available
The purpose of this article is to show how the comparison or analogy with artifacts (i.e., systems engineered by humans) is foundational for the idea that complex neuro-cognitive systems are amenable to explanation at distinct levels, which is a central simplifying strategy for modeling the brain. The most salient source of analogy is of course the digital computer, but I will discuss how some more general comparisons with the processes of design and engineering also play a significant role. I will show how the analogies, and the subsequent notion of a distinct computational level, have engendered common ideas about how safely to abstract away from the complexity of concrete neural systems, yielding explanations of how neural processes give rise to cognitive functions. I also raise worries about the limitations of these explanations, due to neglected differences between the human-made devices and biological organs.
Article
Objections to the computational theory of cognition, inspired by twentieth century phenomenology, have tended to fixate on the embodiment and embeddedness of intelligence. In this paper I reconstruct a line of argument that focusses primarily on the abstract nature of scientific models, of which computational models of the brain are one sort. I observe that the critique of scientific abstraction was rather commonplace in the philosophy of the 1920s and 30s and that attention to it aids the reading of The Organism ([1934] 1939) by the neurologist Kurt Goldstein. With this background in place, we see that some brief but spirited criticisms of cybernetics by two later thinkers much influenced by Goldstein, Georges Canguilhem (1963) and Maurice Merleau-Ponty (1961), show continuity with the earlier discussions of abstraction in science.
Article
Full-text available
The use of machine learning instead of traditional models in neuroscience raises significant questions about the epistemic benefits of the newer methods. I draw on the literature on model intelligibility in the philosophy of science to offer some benchmarks for the interpretability of artificial neural networks (ANNs) used as a predictive tool in neuroscience. Following two case studies on the use of ANNs to model motor cortex and the visual system, I argue that the benefit of providing the scientist with understanding of the brain trades off against the predictive accuracy of the models. This trade-off between prediction and understanding is better explained by a non-factivist account of scientific understanding.
Article
Full-text available
Synapses are the hallmark of brain complexity and have long been thought of as simple connectors between neurons. We are now in an era in which we know the full complement of synapse proteins and this has created an existential crisis because the molecular complexity far exceeds the requirements of most simple models of synaptic function. Studies of the organisation of proteome complexity and its evolution provide surprising new insights that challenge existing dogma and promote the development of new theories about the origins and role of synapses in behaviour. The postsynaptic proteome of excitatory synapses is a structure with high molecular complexity and sophisticated computational properties that is disrupted in over 130 brain diseases. A key goal of 21st-century neuroscience is to develop comprehensive molecular datasets on the brain and develop theories that explain the molecular basis of behaviour.
Article
Full-text available
In this paper, I argue that looking at the concept of neural function through the lens of cognition alone risks cognitive myopia: it leads neuroscientists to focus only on mechanisms with cognitive functions that process behaviorally relevant information when conceptualizing "neural function". Cognitive myopia tempts researchers to neglect neural mechanisms with noncognitive functions which do not process behaviorally relevant information but maintain and repair neural and other systems of the body. Cognitive myopia similarly affects philosophy of neuroscience because scholars overlook noncognitive functions when analyzing issues surrounding, e.g., functional decomposition or the multifunctionality of neural structures. I argue that we can overcome cognitive myopia by adopting a patchwork approach that articulates cognitive and noncognitive "patches" of the concept of neural function. Cognitive patches describe mechanisms with causally specific effects on cognition and behavior which are likely operative in transforming sensory or other inputs into motor outputs. Noncognitive patches describe mechanisms that lack such specific effects; these mechanisms are enabling conditions for cognitive functions to occur. I use these distinctions to characterize two noncognitive functions at the mesoscale of neural circuits: subsistence functions like breathing are implemented by central pattern generators and are necessary to maintain the life of the organism. Infrastructural functions like gain control are implemented by canonical microcircuits and prevent neural system damage while cognitive processing occurs. By adding conceptual patches that describe these functions, a patchwork approach can overcome cognitive myopia and help us explain how the brain's capacities as an information processing device are constrained by its ability to maintain and repair itself as a physiological apparatus.
Article
Full-text available
In this paper, I argue that computationalism is a progressive research tradition. Its metaphysical assumptions are that nervous systems are computational, and that information processing is necessary for cognition to occur. First, the primary reasons why information processing should explain cognition are reviewed. Then I argue that early formulations of these reasons are outdated. However, by relying on the mechanistic account of physical computation, they can be recast in a compelling way. Next, I contrast two computational models of working memory to show how modeling has progressed over the years. The methodological assumptions of new modeling work are best understood in the mechanistic framework, which is evidenced by the way in which models are empirically validated. Moreover, the methodological and theoretical progress in computational neuroscience vindicates the new mechanistic approach to explanation, which, at the same time, justifies the best practices of computational modeling. Overall, computational modeling is deservedly successful in cognitive (neuro)science. Its successes are related to deep conceptual connections between cognition and computation. Computationalism is not only here to stay, it becomes stronger every year.
Article
Full-text available
New technologies in neuroscience generate reams of data at an exponentially increasing rate, spurring the design of very-large-scale data-mining initiatives. Several supranational ventures are contemplating the possibility of achieving, within the next decade(s), full simulation of the human brain.
Article
Full-text available
An underlying assumption in computational approaches in cognitive and brain sciences is that the nervous system is an input–output model of the world: Its input–output functions mirror certain relations in the target domains. I argue that the input–output modelling assumption plays distinct methodological and explanatory roles. Methodologically, input–output modelling serves to discover the computed function from environmental cues. Explanatorily, input–output modelling serves to account for the appropriateness of the computed function to the explanandum information-processing task. I compare very briefly the modelling explanation to mechanistic and optimality explanations, noting that in both cases the explanations can be seen as complementary rather than contrastive or competing.
Article
Full-text available
There is a popular belief in neuroscience that we are primarily data limited, and that producing large, multimodal, and complex datasets will, with the help of advanced data analysis algorithms, lead to fundamental insights into the way the brain processes information. These datasets do not yet exist, and if they did we would have no way of evaluating whether or not the algorithmically-generated insights were sufficient or even correct. To address this, here we take a classical microprocessor as a model organism, and use our ability to perform arbitrary experiments on it to see if popular data analysis methods from neuroscience can elucidate the way it processes information. Microprocessors are among those artificial information processing systems that are both complex and understood at all levels, from the overall logical flow, via logical gates, to the dynamics of transistors. We show that the approaches reveal interesting structure in the data but do not meaningfully describe the hierarchy of information processing in the microprocessor. This suggests current analytic approaches in neuroscience may fall short of producing meaningful understanding of neural systems, regardless of the amount of data. Additionally, we argue for scientists using complex non-linear dynamical systems with known ground truth, such as the microprocessor, as a validation platform for time-series and structure discovery methods.
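The logic of the study can be illustrated with a toy example. The sketch below is a hedged analogue, not the authors' code: it applies the same "lesion each element and record which behaviours break" strategy to a circuit whose ground truth is fully known, here a one-bit full adder standing in for the microprocessor's transistors.

```python
# Toy analogue of a lesion analysis on a circuit with known ground truth
# (illustrative only, not the authors' code): knock out each gate of a
# one-bit full adder and record which output "behaviours" (sum, carry) break.

from itertools import product

GATES = ["xor1", "xor2", "and1", "and2", "or1"]

def full_adder(a, b, cin, lesioned=None):
    """Standard full adder; a lesioned gate's output is forced to 0."""
    g = lambda name, val: 0 if name == lesioned else val
    x1 = g("xor1", a ^ b)
    s  = g("xor2", x1 ^ cin)        # sum bit
    a1 = g("and1", a & b)
    a2 = g("and2", x1 & cin)
    c  = g("or1",  a1 | a2)         # carry bit
    return s, c

for gate in GATES:
    broken = []
    for idx, name in enumerate(("sum", "carry")):
        if any(full_adder(a, b, c)[idx] != full_adder(a, b, c, lesioned=gate)[idx]
               for a, b, c in product((0, 1), repeat=3)):
            broken.append(name)
    print(f"lesion {gate}: breaks {broken or 'nothing'}")
```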
Article
Full-text available
Recent progress in artificial intelligence (AI) has renewed interest in building systems that learn and think like people. Many advances have come from using deep neural networks trained end-to-end in tasks such as object recognition, video games, and board games, achieving performance that equals or even beats humans in some respects. Despite their biological inspiration and performance achievements, these systems differ from human intelligence in crucial ways. We review progress in cognitive science suggesting that truly human-like learning and thinking machines will have to reach beyond current engineering trends in both what they learn, and how they learn it. Specifically, we argue that these machines should (a) build causal models of the world that support explanation and understanding, rather than merely solving pattern recognition problems; (b) ground learning in intuitive theories of physics and psychology, to support and enrich the knowledge that is learned; and (c) harness compositionality and learning-to-learn to rapidly acquire and generalize knowledge to new tasks and situations. We suggest concrete challenges and promising routes towards these goals that can combine the strengths of recent neural network advances with more structured cognitive models.
Article
Full-text available
This paper argues that we should take into account the process of historical transmission to enrich our understanding of material culture. More specifically, I want to show how the rewriting of history and the invention of tradition impact material objects and our beliefs about them. I focus here on the transmission history of the mechanical calculator invented by the German savant Gottfried Wilhelm Leibniz. Leibniz repeatedly described his machine as functional and wonderfully useful, but in reality it was never finished and didn't fully work. Its internal structure also remained unknown. In 1879, however, the machine re-emerged and was reinvented as the origin of all later calculating machines based on the stepped drum, to protect the priority of the German Leibniz against the Frenchman Thomas de Colmar as the father of mechanical calculation. The calculator was later replicated to demonstrate that it could function ‘after all’, in an effort to deepen this narrative and further enhance Leibniz's computing acumen.
Article
Full-text available
The picture of synthetic biology as a kind of engineering science has largely created the public understanding of this novel field, covering both its promises and risks. In this paper, we will argue that the actual situation is more nuanced and complex. Synthetic biology is a highly interdisciplinary field of research located at the interface of physics, chemistry, biology, and computational science. All of these fields provide concepts, metaphors, mathematical tools, and models, which are typically utilized by synthetic biologists by drawing analogies between the different fields of inquiry. We will study analogical reasoning in synthetic biology through the emergence of the functional meaning of noise, which marks an important shift in how engineering concepts are employed in this field. The notion of noise serves also to highlight the differences between the two branches of synthetic biology: the basic science-oriented branch and the engineering-oriented branch, which differ from each other in the way they draw analogies to various other fields of study. Moreover, we show that fixing the mapping between a source domain and the target domain seems not to be the goal of analogical reasoning in actual scientific practice.
Article
Full-text available
Despite its significance in neuroscience and computation, McCulloch and Pitts's celebrated 1943 paper has received little historical and philosophical attention. In 1943 there already existed a lively community of biophysicists doing mathematical work on neural networks. What was novel in McCulloch and Pitts's paper was their use of logic and computation to understand neural, and thus mental, activity. McCulloch and Pitts's contributions included (i) a formalism whose refinement and generalization led to the notion of finite automata (an important formalism in computability theory), (ii) a technique that inspired the notion of logic design (a fundamental part of modern computer design), (iii) the first use of computation to address the mind–body problem, and (iv) the first modern computational theory of mind and brain.
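The formalism at the centre of the paper is easy to state in modern terms. The sketch below is a hedged illustration, not code from the 1943 paper: a McCulloch-Pitts unit is a binary threshold element, and small networks of such units compute the Boolean functions that later figured in logic design (the weights and thresholds chosen here are illustrative).

```python
# Minimal sketch of a McCulloch-Pitts threshold unit (illustrative, not from
# the 1943 paper): the unit outputs 1 iff the weighted sum of its binary
# inputs reaches the threshold.

def mp_neuron(inputs, weights, threshold):
    """Binary threshold unit: 1 if sum(w * x) >= threshold, else 0."""
    return int(sum(w * x for w, x in zip(weights, inputs)) >= threshold)

# Single units implement the basic gates of logic design:
AND = lambda x, y: mp_neuron((x, y), (1, 1), 2)
OR  = lambda x, y: mp_neuron((x, y), (1, 1), 1)
NOT = lambda x:    mp_neuron((x,),   (-1,),  0)

# XOR requires a small two-layer network; no single threshold unit computes it.
XOR = lambda x, y: OR(AND(x, NOT(y)), AND(NOT(x), y))

if __name__ == "__main__":
    for x in (0, 1):
        for y in (0, 1):
            print(x, y, "->", "AND", AND(x, y), "OR", OR(x, y), "XOR", XOR(x, y))
```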
Article
Full-text available
Prefrontal cortex is thought to have a fundamental role in flexible, context-dependent behaviour, but the exact nature of the computations underlying this role remains largely unknown. In particular, individual prefrontal neurons often generate remarkably complex responses that defy deep understanding of their contribution to behaviour. Here we study prefrontal cortex activity in macaque monkeys trained to flexibly select and integrate noisy sensory inputs towards a choice. We find that the observed complexity and functional roles of single neurons are readily understood in the framework of a dynamical process unfolding at the level of the population. The population dynamics can be reproduced by a trained recurrent neural network, which suggests a previously unknown mechanism for selection and integration of task-relevant inputs. This mechanism indicates that selection and integration are two aspects of a single dynamical process unfolding within the same prefrontal circuits, and potentially provides a novel, general framework for understanding context-dependent computations.
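The population-level picture described here lends itself to a compact sketch. The code below is a hedged, generic illustration of recurrent population dynamics integrating noisy evidence under a context cue; the weights are random rather than trained, so it is not the network reported in the study.

```python
# Generic sketch of recurrent population dynamics (illustrative only; the
# weights are random rather than trained as in the study): a population of N
# units evolves under recurrent connections while noisy evidence and a
# context cue are fed in, and a decision variable is read out linearly.

import numpy as np

rng = np.random.default_rng(0)
N, T = 100, 50                                  # units, time steps
W     = rng.normal(0, 1 / np.sqrt(N), (N, N))   # recurrent weights
w_in  = rng.normal(0, 1, (N, 2))                # input weights: [evidence, context]
w_out = rng.normal(0, 1 / np.sqrt(N), N)        # linear readout

def run_trial(mean_evidence, context, noise=0.5):
    """Integrate noisy evidence over T steps; return a scalar decision variable."""
    x = np.zeros(N)
    for _ in range(T):
        u = np.array([mean_evidence + noise * rng.normal(), context])
        x = np.tanh(W @ x + w_in @ u)           # population state update
    return w_out @ x

print(run_trial(mean_evidence=+1.0, context=1.0))
print(run_trial(mean_evidence=-1.0, context=1.0))
```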
Book
Though it did not yet exist as a discrete field of scientific inquiry, biology was at the heart of many of the most important debates in seventeenth-century philosophy. Nowhere is this more apparent than in the work of G. W. Leibniz. This book offers the first in-depth examination of Leibniz's deep and complex engagement with the empirical life sciences of his day, in areas as diverse as medicine, physiology, taxonomy, generation theory, and paleontology. The book shows how these wide-ranging pursuits were not only central to Leibniz's philosophical interests, but often provided the insights that led to some of his best-known philosophical doctrines. Presenting the clearest picture yet of the scope of Leibniz's theoretical interest in the life sciences, the book takes seriously the philosopher's own repeated claims that the world must be understood in fundamentally biological terms. Here it reveals a thinker who was immersed in the sciences of life, and looked to the living world for answers to vexing metaphysical problems. The book casts Leibniz's philosophy in an entirely new light, demonstrating how it radically departed from the prevailing models of mechanical philosophy and had an enduring influence on the history and development of the life sciences. Along the way, the book provides a fascinating glimpse into early modern debates about the nature and origins of organic life, and into how philosophers such as Leibniz engaged with the scientific dilemmas of their era.
Article
This article examines three candidate cases of non-causal explanation in computational neuroscience. I argue that there are instances of efficient coding explanation that are strongly analogous to examples of non-causal explanation in physics and biology, as presented by Batterman ([2002]), Woodward ([2003]), and Lange ([2013]). By integrating Lange’s and Woodward’s accounts, I offer a new way to elucidate the distinction between causal and non-causal explanation, and to address concerns about the explanatory sufficiency of non-mechanistic models in neuroscience. I also use this framework to shed light on the dispute over the interpretation of dynamical models of the brain.
Article
In this review essay on The Multiple Realization Book by Polger and Shapiro, I consider the prospects for a biologically grounded notion of multiple realization (MR) which has been given too little consideration in the philosophy of mind and cognitive science. Thinking about MR in the context of biological notions of function and robustness leads to a rethink of what would count as a viable functionalist theory of mind. I also discuss points of tension between Polger and Shapiro’s definition of MR and current explanatory practice in neuroscience.
Article
One might have thought that if something has two or more distinct realizations, then that thing is multiply realized. Nevertheless, some philosophers have claimed that two or more distinct realizations do not amount to multiple realization, unless those distinct realizations amount to multiple “ways” of realizing the thing. Corey Maley, Gualtiero Piccinini, Thomas Polger, and Lawrence Shapiro are among these philosophers. Unfortunately, they do not explain why multiple realization requires multiple “ways” of realizing. More significantly, their efforts to articulate multiple “ways” of realizing turn out to be problematic.
Article
The fields of neuroscience and artificial intelligence (AI) have a long and intertwined history. In more recent times, however, communication and collaboration between the two fields has become less commonplace. In this article, we argue that better understanding biological brains could play a vital role in building intelligent machines. We survey historical interactions between the AI and neuroscience fields and emphasize current advances in AI that have been inspired by the study of neural computation in humans and other animals. We conclude by highlighting shared themes that may be key for advancing future research in both fields.
Article
In this paper I discuss the concept of robustness in neuroscience. Various mechanisms for making systems robust have been discussed across biology and neuroscience (e.g. redundancy and fail-safes). Many of these notions originate from engineering. I argue that concepts borrowed from engineering aid neuroscientists in (1) operationalizing robustness; (2) formulating hypotheses about mechanisms for robustness; and (3) quantifying robustness. Furthermore, I argue that the significant disanalogies between brains and engineered artefacts raise important questions about the applicability of the engineering framework. I argue that the use of such concepts should be understood as a kind of simplifying idealization.
Chapter
As early as 1894, Cassirer began an in-depth study of the writings of Kant, Leibniz, and Descartes, and of Hermann Cohen as well. But it wasn’t until the spring of 1896 that he began to attend Hermann Cohen’s courses in Marburg. For Cohen, as for the young Cassirer, the philosophy of Leibniz constitutes one of the essential links in the chain of the history of idealism (with Plato leading the line-up); it is even a privileged link that elucidates “Kant’s relationship to his predecessors” (“Kants Verhältnis zu seinen Vorgängern”).
Chapter
This book presents twenty essays on various aspects of Aristotle’s De Anima. These cover topics such as the relation between the body and soul, functionalism, sense-perception, imagination, memory, intellect, and desire. It includes an introduction that provides a description of the manuscripts of the De Anima, commentaries, and its links with other works.
Article
Fueled by innovation in the computer vision and artificial intelligence communities, recent developments in computational neuroscience have used goal-driven hierarchical convolutional neural networks (HCNNs) to make strides in modeling neural single-unit and population responses in higher visual cortical areas. In this Perspective, we review the recent progress in a broader modeling context and describe some of the key technical innovations that have supported it. We then outline how the goal-driven HCNN approach can be used to delve even more deeply into understanding the development and organization of sensory cortical processing.
Article
Béatrice Longuenesse considers the three aspects of Kant's philosophy, his epistemology and metaphysics of nature, moral philosophy, and aesthetic theory, under one unifying standpoint: Kant's conception of our capacity to form judgments. She argues that the elements which make up our cognitive access to the world have an equally important role to play in our moral evaluations and our aesthetic judgments. Her book will appeal to all interested in Kant and his thought, ranging over Kant's account of our representations of space and time, his conception of the logical forms of judgments, sufficient reason, causality, community, God, freedom, morality, and beauty in nature and art.
Article
What are the functional units of the brain? If the function of the brain is to process information-carrying signals, then the functional units will be the senders and receivers of those signals. Neurons have been the default candidate, with action potentials as the signals. But there are alternatives: synapses fit the action potential picture more cleanly, and glial activities (e.g., in astrocytes) might also be characterized as signaling. Are synapses or nonneuronal cells better candidates to play the role of functional units? Will informational signaling still be the best model for brain function if we move beyond the neuron doctrine?
Article
In this article we argue for the existence of ‘analogue simulation’ as a novel form of scientific inference with the potential to be confirmatory. This notion is distinct from the modes of analogical reasoning detailed in the literature, and draws inspiration from fluid dynamical ‘dumb hole’ analogues to gravitational black holes. For that case, which is considered in detail, we defend the claim that the phenomena of gravitational Hawking radiation could be confirmed in the case that its counterpart is detected within experiments conducted on diverse realizations of the analogue model. A prospectus is given for further potential cases of analogue simulation in contemporary science.
Article
"I was precisely the first one, some thirty or more years ago, to confirm by physiological experiments and to define more closely that which Hughlings Jackson had concluded from clinical facts." (E. Hitzig, Brain 23:545, 1900.) "The objects I had in view in undertaking the present research were twofold: first to put to experimental proof the views entertained by Dr. Hughlings Jackson.... I regard these [researches] as an experimental confirmation of the views expressed by him. They are, as it were, an artificial reproduction of the experiments produced by disease, and the clinical conclusions which Dr. Jackson has arrived at from his observations of disease are in all essential particulars confirmed by the above experiments." (D. Ferrier, cf. Selected Writings of J. H. Jackson, 1931, 1, ix.)
Article
Advances in experimental techniques, including behavioral paradigms using rich stimuli under closed loop conditions and the interfacing of neural systems with external inputs and outputs, reveal complex dynamics in the neural code and require a revisiting of standard concepts of representation. High-throughput recording and imaging methods along with the ability to observe and control neuronal subpopulations allow increasingly detailed access to the neural circuitry that subserves neural representations and the computations they support. How do we harness theory to build biologically grounded models of complex neural function?
Article
The Copley Medal is awarded to Professor Albert Jan Kluyver, who has held the chair of microbiology at Delft since 1921 and has become a world authority on general microbiology and the biological approach to its problems. In 1923 he made a survey of natural processes known to occur through the agency of micro-organisms, and this survey has been the cornerstone of his work for almost twenty years. Impressed by the bewildering variety of substances (both inorganic and organic) used by these organisms for their growth processes and by the equally great variety of substances formed as the end-products of metabolism, he sought some underlying uniformity in the basic types of chemical change which occurred. Devoting himself mainly to bacterial fermentations, he found this uniformity in the extension of the concepts of Wieland and Thunberg that biological oxidations occur by successive transfers of pairs of hydrogen atoms to a suitable acceptor. His views were confirmed by a series of studies covering all the principal bacterial fermentations of carbohydrate. These researches also led him to believe that, in spite of the many and varied products formed, the degradation of carbohydrate took place stepwise by a series of simple reactions leading to a limited number of common intermediates. The initial stages were alike and the variation came later through the differing enzymic constitution of the cells and the differing effects of changes in the environment on such enzyme reactions. These views have been amply confirmed and, apart from its broader implications, the work on fermentation remains a source of some of the most accurate data in the field.
Article
This essay has two goals. The first is to define the behavioristic study of natural events and to classify behavior. The second is to stress the importance of the concept of purpose. Given any object, relatively abstracted from its surroundings for study, the behavioristic approach consists in the examination of the output of the object and of the relations of this output to the input. By output is meant any change produced in the surroundings by the object. By input, conversely, is meant any event external to the object that modifies this object in any manner.
Article
Our ability to move is central to everyday life. Investigating the neural control of movement in general, and the cortical control of volitional arm movements in particular, has been a major research focus in recent decades. Studies have involved primarily either trying to account for single-neuron responses in terms of tuning for movement parameters, or trying to decode movement parameters from populations of tuned neurons. While this focus on encoding and decoding has led to many seminal advances, it has not led to an agreed-upon conceptual framework. Recently, interest in understanding the underlying neural dynamics has increased, leading to questions such as how the current population response determines the future population response, and to what purpose? We review how a dynamical systems perspective may help us understand why neural activity evolves the way it does, how neural activity relates to movement parameters, and how a unified conceptual framework may result.
Article
Concerns intentional systems in cognitive ethology. Charts the increasing 'levels' (embedding) of intentionality which may in principle underlie primate vocal behaviour, and suggests a simple method for picking out the real level; visits the very primatologists on whose data he theorized, and discusses the difficulties in executing his simple test in practice.