Article

Abstract

From the perspective of cognitive robotics, this paper presents a modern interpretation of Newell’s (1973) reasoning and suggestions for why and how cognitive psychologists should develop models of cognitive phenomena. We argue that the shortcomings of current cognitive modelling approaches are due in significant part to a lack of exactly the kind of integration required for the development of embodied autonomous robotics. Moreover we suggest that considerations of embodiment, situatedness, and autonomy, intrinsic to cognitive robotics, provide an appropriate basis for the integration and theoretic cumulation that Newell argued was necessary for psychology to mature. From this perspective we analyse the role of embodiment and modes of situatedness in terms of integration, cognition, emotion, and autonomy. Four complementary perspectives on embodied and situated cognitive science are considered in terms of their potential to contribute to cognitive robotics, cognitive science, and psychological theorizing: minimal cognition and organization, enactive perception and sensorimotor contingency, homeostasis and emotion, and social embedding. In combination these perspectives provide a framework for cognitive robotics, not only wholly compatible with the original aims of cognitive modelling, but as a more appropriate methodology than those currently in common use within psychology.




... Detailed taxonomies of different replies to the CRA, together with rebuttals, have been presented elsewhere [10,45,46]. Instead of providing yet another one here, we wish to focus on a specific kind of systems reply, the so-called 'robot reply', which, although considered in Searle's original paper, has more recently gained particular momentum thanks to the links between cognitive robotics and a new movement in cognitive science called enactivism [32,41]. ...
... Moreover, various forms of cognitive robotics stress to different degrees the importance of embodiment for cognition, with some placing more emphasis on the actual body and its affordances than on the nitty-gritty of the central 'computational' processing unit [43,44]. In fact, this modern successor of GOFAI has been proposed to provide a fertile experimental ground for cognitive science [32]. Considered a radical departure from GOFAI by its enthusiasts, it has found itself in a mutually beneficial symbiosis with some forms of enactivism [29,41]. ...
... However, although the latter two notions seem intimately linked, with autopoiesis being the minimal organisation of (unicellular) living systems, organisational closure is broader, as it also characterises further autonomous systems such as multicellular organisms, as well as nervous or even social systems [18]. Our discussion specifically addresses the particular interpretation of the sensorimotor account derived from the early works of Noë and O'Regan on this subject, which seems to have been adopted within the cognitive robotics community [29,32,43,44]. It is important to note, though, that both authors have since developed their accounts in separate and increasingly divergent directions [38,42]. ...
Chapter
Full-text available
The Chinese Room Argument (CRA) purports to show that 'syntax is not sufficient for semantics'; an argument which led John Searle to conclude that 'programs are not minds' and hence that no computational device can ever exhibit true understanding. Yet, although this controversial argument has received a series of criticisms, it has so far withstood all attempts at decisive rebuttal. One of the classical responses to the CRA has been based on equipping a purely computational device with a physical robot body. This response, although partially addressed in one of Searle's original counterarguments, the 'robot reply', has more recently gained traction with the development of embodiment and enactivism, two novel approaches to cognitive science that have excited roboticists and philosophers alike. Furthermore, recent technological advances blending biological beings with computational systems have begun to appear which superficially suggest that mind may be instantiated in computing devices after all. This paper will argue (a) that embodiment alone does not provide any leverage for cognitive robotics with respect to the CRA when based on a weak form of embodiment, and (b) that unless they take the body seriously into account, hybrid bio-computer devices will share the fate of their disembodied or robotic predecessors in failing to escape from Searle's Chinese room.
... These steps towards a robotics which integrates research from developmental psychology as well as neuroscience are being interwoven with a strand of cognitive systems research whose roots lie in the biological sciences and phenomenology rather than the computationalist/functionalist tradition that was the main voice of 20th century cognitive science. While sensorimotor research in robotics and philosophy of cognitive science has often come to be labelled as "enactive" (principally through Alva Noë's use of the term "enactivism" to describe his sensorimotor theory of consciousness: see Noë, 2004, 2009), there is more to enactivism than sensorimotor skills, and as such, enactive cognitive science should not be conflated with sensorimotor cognitive science (for extended arguments on this see Di Paolo, 2009; Morse, Herrera, Clowes, Montebelli, & Ziemke, 2011; Ziemke, 2007, 2008; Ward & Stapleton, 2012). ...
... But while haptics is now a common facet of robotics, interoception has yet to become an orthodox part of cognitive systems research. Work from research groups such as those led by Tom Ziemke (in particular, the Integrating Cognition, Emotion, and Autonomy (ICEA) project) and Ezequiel Di Paolo, who have been working on developing the insights from physiology and neuroscience outlined in Section 2 and modelling these in robotic agents, is changing this, however, and robotic modelling is beginning to integrate processes beyond sensorimotor interaction (see Morse et al., 2011). ...
... This includes both traditional architectures such as SOAR (Laird, Newell, & Rosenbloom, 1987), which has its roots in Newell's research, and ACT-R (Anderson, 2007), which remains influential in psychology, as well as architectures that strive for neuroscientific plausibility such as Leabra (O'Reilly & Munakata, 2000) and SPAUN (Eliasmith et al., 2012). Another example of this kind is research on cognitive robotic architectures as unifying cognition (Morse et al., 2011). Thus, to study developmental processes, one may use one of the robotic platforms of so-called epigenetic robotics, such as the iCub (Metta et al., 2010). ...
... One could reply that this is because explanatory models are in some way special, i.e., they need not be unified to be satisfying, whereas there are some models, in particular physical ones, that stand in need of genuine unification. This is the kind of argument that was put forward by proponents of robotic architectures of cognition (Morse et al., 2011): To make a cognitive robot work, one needs a unified and complete model of its cognitive capacities. But this argument is not valid. ...
Article
Full-text available
Cognitive science is an interdisciplinary conglomerate of various research fields and disciplines, which increases the risk of fragmentation of cognitive theories. However, while most previous work has focused on theoretical integration, some kinds of integration may turn out to be monstrous, or result in superficially lumped and unrelated bodies of knowledge. In this paper, I distinguish theoretical integration from theoretical unification, and propose some analyses of theoretical unification dimensions. Moreover, two research strategies that are supposed to lead to unification are analyzed in terms of the mechanistic account of explanation. Finally, I argue that theoretical unification is not an absolute requirement from the mechanistic perspective, and that strategies aiming at unification may be premature in fields where there are multiple conflicting explanatory models.
... The jumping robot model for the landing simulation is based on that of the MACE space robotics team. A simulation model is an analog of the simulated object or its structural form, and it can be a physical model or a mathematical model [18]. In this paper, the simulation model referred to is a physical model. ...
Article
Full-text available
In recent years, research on planetary exploration robots has become an active field, and the jumping robot has become a hot spot within it. This paper presents the modelling and simulation of a three-legged jumping robot with powerful force output, high leaping performance, and good flexibility. In particular, the jumping of the robot was simulated and the landing buffer of the robot was analyzed. Because this jumping robot lacks a landing buffer, this paper verifies in simulation a method of absorbing landing kinetic energy to improve landing stability and storing it as energy for the next jump. Through the landing simulation, the factors affecting landing energy absorption are identified. Moreover, the simulation experiment verifies that applying the intermediate axis theorem helps to absorb more energy and adjust the landing attitude of the robot. The simulation results in this paper can be applied to the optimal design of robot prototypes and provide a theoretical basis for subsequent research.
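The idea of partitioning landing kinetic energy into a recoverable (stored) part and a dissipated part can be illustrated with a toy sketch. This is our own illustration under assumed parameters, not the paper's three-legged model: a point mass lands on a single linear spring-damper "leg", and we integrate the motion to separate spring-stored energy from damper losses.

```python
# Illustrative landing-buffer sketch (hypothetical linear spring-damper
# leg; the paper's actual robot model and parameters are not reproduced).
# Equation of motion for leg compression x (downward positive):
#   m*a = -k*x - c*v + m*g
# Spring energy is recoverable for the next jump; damper energy is lost.

def simulate_landing(m=2.0, k=800.0, c=20.0, v0=3.0, g=9.81,
                     dt=1e-4, t_max=1.0):
    x, v = 0.0, v0          # compression and compression rate at touchdown
    dissipated = 0.0
    t = 0.0
    while t < t_max:
        a = (-k * x - c * v + m * g) / m
        dissipated += c * v * v * dt   # damper power integrated over time
        v += a * dt
        x += v * dt
        t += dt
    stored = 0.5 * k * x * x           # spring energy once motion settles
    initial = 0.5 * m * v0 * v0        # touchdown kinetic energy
    return initial, stored, dissipated

initial, stored, dissipated = simulate_landing()
```

With an underdamped leg the mass settles near its static equilibrium, so most of the touchdown kinetic energy ends up dissipated while a smaller fraction remains stored in the compressed spring; varying `k` and `c` shows how stiffness and damping trade off stability against energy recovery.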
... This new approach links the internal computational system to the external world to emulate human level skills, which is a benchmark for intelligence in all areas related to the understanding and acquisition of intelligent behaviour in machines/artificial systems. Cognitive robots are gaining a special place as a research platform for the study of cognition because they provide a way of "understanding through building" in relevant fields (D'Mello & Franklin, 2011;Morse, Herrera, Clowes, Montebelli, & Ziemke, 2011;Pezzulo et al., 2012). In addition, the requirements of robotic applications have also changed since the old days of generic industrial manipulator robots. ...
Article
This review concentrates on the issue of the acquisition of abstract words in a cognitive robot under the grounding principle, from relevant theories to practical models of agents and robots. Most cognitive robotics models developed for the grounding of language take inspiration from the findings of neuroscience and psychology to form the theoretical skeleton of these models. To better understand these modelling approaches, it is indispensable to work from the base (theoretical accounts) to the top (computational models). Therefore, in this paper, a succinct definition of abstract words is presented first, followed by the symbol grounding issue and accounts of grounded cognition for abstract words. The next section discusses computational modelling approaches to the abstract-word grounding phenomenon. Finally, important cognitive robotics models are reviewed. This paper also points out the strengths and weaknesses of relevant hypotheses and models for the representation of abstract words in the grounded cognition framework, and clarifies where and why modelling efforts stand in addressing this problem in comparison with theoretical findings.
... Robotics makes clear the evolutionary feat that is biological intelligence [1-4]. Smooth and effective action in a constantly changing physical world requires the continuous coupling of sensors and effectors to those changing physical realities [2,5-8]. However, an intelligent system that does more than react also needs stable cognitive products, such as categories and decisions, that are at least partially decoupled from the here-and-now on which sensing and acting are so dependent [9]. ...
Article
Full-text available
For infants, the first problem in learning a word is to map the word to its referent; a second problem is to remember that mapping when the word and/or referent are again encountered. Recent infant studies suggest that spatial location plays a key role in how infants solve both problems. Here we provide a new theoretical model and new empirical evidence on how the body, and its momentary posture, may be central to these processes. The present study uses a name-object mapping task in which names are either encountered in the absence of their target (experiments 1-3, 6 & 7), or when their target is present but in a location previously associated with a foil (experiments 4, 5, 8 & 9). A humanoid robot model (experiments 1-5) is used to instantiate and test the hypothesis that body-centric spatial location, and thus the body's momentary posture, is used to centrally bind the multimodal features of heard names and visual objects. The robot model is shown to replicate existing infant data and then to generate novel predictions, which are tested in new infant studies (experiments 6-9). Despite spatial location being task-irrelevant in this second set of experiments, infants use body-centric spatial contingency over temporal contingency to map the name to the object. Both infants and the robot remember the name-object mapping even in new spatial locations. However, the robot model shows how this memory can emerge not by separating bodily information from the word-object mapping, as proposed in previous models of the role of space in word-object mapping, but through the body's momentary disposition in space.
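The binding idea described above can be sketched in a few lines. This is our own toy illustration of the principle, not the authors' robot architecture: visual features and heard names are each tagged with the body-centric location the agent is oriented toward, so a name heard toward a location (even with the object absent) inherits the object features previously experienced there. The word "modi" and the feature names are hypothetical examples.

```python
# Toy sketch of body-centric binding (illustrative only): multimodal
# features are linked through the spatial location the body is directed
# toward, rather than through direct word-object co-occurrence.

from collections import defaultdict

class BodyCentricBinder:
    def __init__(self):
        self.visual = defaultdict(set)          # location -> object features seen there
        self.word_locations = defaultdict(set)  # word -> locations it was heard toward

    def see(self, location, object_features):
        # store visual features at the body-centric location of the object
        self.visual[location] |= set(object_features)

    def hear(self, word, location):
        # a heard word binds to whatever location the body is oriented toward
        self.word_locations[word].add(location)

    def referent_features(self, word):
        # the word's referent is recovered via the shared spatial index
        feats = set()
        for loc in self.word_locations[word]:
            feats |= self.visual[loc]
        return feats

binder = BodyCentricBinder()
binder.see("left", {"red", "round"})
binder.see("right", {"blue", "square"})
# name heard while oriented left, with the object absent
binder.hear("modi", "left")
```

Because the mapping is keyed by location rather than by temporal co-occurrence, `binder.referent_features("modi")` recovers the features of the object previously seen on the left, echoing the spatial-over-temporal contingency result described above.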
... In his opinion, a cognitive architecture can be used for creating multiple microtheories and offers a unifying perspective on how the mind works. In a more contemporary context, researchers from the field of cognitive robotics suggested that unified cognitive-robotic architectures could be used to unify research efforts [8]. ...
Conference Paper
Full-text available
Is there a field of social intelligence? Many different disciplines approach the subject, and it may seem only natural to suppose that different fields of study aim at explaining different phenomena; in other words, that there is no special field of study of social intelligence. In this paper, I argue for the opposite claim. Namely, there is a way to integrate research on social intelligence, as long as one accepts the mechanistic account of explanation. Mechanistic integration of different explanations, however, comes at a cost: mechanism requires explanatory models to be fairly complete and realistic, and this does not seem to be the case for many models concerning social intelligence, especially models of economic behavior. Such models must either be made more realistic, or they will not count as contributing to the same field. I stress that the focus on integration does not lead to ruthless reductionism; on the contrary, mechanistic explanations are best understood as explanatorily pluralistic.

1 From disunity of science to unified models

The social sciences, and in particular research on social intelligence, are today highly fragmented, and different disciplines are sometimes highly disconnected from each other. Although some theorists of the social sciences find this situation commendable, I do not. For one, there is a danger of duplicating effort across different sciences. For another, high-level abstract explanations in the social sciences, when not properly constrained, or deepened [1], remain superficial and explanatorily weak. In particular, they run the danger of positing entities that play no causal role, if descriptions are not at all constrained by lower-level evidence. To this, one may reply that such deepening is desired in the physical or biological sciences, as these are mostly observable, while the social sciences are "constructed."1 I disagree.
All sciences, and in particular the cognitive sciences, are highly theoretical [2], positing multiple non-observable entities for explanatory purposes. Mere observation is rarely ever explanatory. A similar point applies to "social construction": even if one agrees that some social sciences are busy with normative questions, their job is not

1 I owe this observation to one of the reviewers of this paper.
... Conceding that robotic research serves as a source for investigating human cognition and behavior (Oudeyer, 2010; Morse et al., 2011), we use robotic experiments for identifying SA. While action in general is a long-known candidate for integrating different modalities (Gallese, 2000), SA, compared to SoA, suggests a basically different type of action consciousness, which in turn asks for a different explanation. ...
Article
Full-text available
This paper investigates subjective agency (SA) as a special type of efficacious action consciousness. Our central claims are, firstly, that SA is a conscious act of voluntarily initiating bodily motion and, secondly, that SA is a case of multifunctional integration of behavioral functions, analogous to multisensory integration of sensory modalities. This is based on new perspectives on the initiation of action opened up by recent advancements in robot-assisted neuro-rehabilitation, which depends on the active participation of the patient and yields experimental evidence that there is SA in terms of a conscious act of voluntarily initiating bodily motion (phenomenal performance). Conventionally, action consciousness has been considered a sense of agency (SoA). According to this view, the conscious subject merely echoes motor performance and does not cause bodily motion. Depending on sensory input, SoA is implemented by means of unifunctional integration (binding) and inevitably results in non-efficacious action consciousness. In contrast, SA comes as a phenomenal performance which causes motion and builds on multifunctional integration. Therefore, the common conception of the brain should be shifted toward multifunctional integration in order to allow for efficacious action consciousness. For this purpose, we suggest the heterarchic principle of asymmetric reciprocity and neural operators underlying SA. The general idea is that multifunctional integration allows conscious acts to be implemented simultaneously with motor behavior, so that the resulting behavior (SA) comes as efficacious action consciousness. Regarding the neural implementation, multifunctional integration relies on operators rather than on modular functions. A robotic case study and possible experimental setups with testable hypotheses building on SA are presented.
... Namely, if one is busy replicating mechanisms, one also may not see the forest for the trees. This is not an inherent danger, especially if the methodology involves the search for invariant principles, and cognitive robotics, contrary to appearances, might be used to integrate and unify different theories of cognition by requiring that decisions about the cognitive architecture be made (D'Mello and Franklin 2011; Morse et al. 2011). ...
Chapter
Full-text available
I discuss whether there are some lessons for philosophical inquiry over the nature of simulation to be learnt from the practical methodology of reengineering. I will argue that reengineering serves a similar purpose as simulations in theoretical science such as computational neuroscience or neurorobotics, and that the procedures and heuristics of reengineering help to develop solutions to outstanding problems of simulation.
... mere bringing together of several disciplines. This is reflected in the variety of disciplines in which the enactivist paradigm is used to guide research, such as artificial life and robotics (Di Paolo 2003; Barandiaran et al. 2009; Morse et al. 2011), social and developmental psychology (Reddy 2008; De Jaegher et al. 2010; Froese and Fuchs 2012), psychiatry (de Haan et al. 2013), sociology (Protevi 2009), and philosophy of mind (Thompson 2007; Gallagher and Zahavi 2008). ...
Chapter
Full-text available
Whether collective agency is a coherent concept depends on the theory of agency that we choose to adopt. We argue that the enactive theory of agency developed by Barandiaran, Di Paolo and Rohde (2009) provides a principled way of grounding agency in biological organisms. However the importance of biological embodiment for the enactive approach might lead one to be skeptical as to whether artificial systems or collectives of individuals could instantiate genuine agency. To explore this issue we contrast the concept of collective agency with multi-agent systems and multi-system agents, and argue that genuinely collective agents instantiate agency at both the collective level and at the level of the component parts. Developing the enactive model, we propose understanding agency – both at the level of the individual and of the collective – as spectra that are constituted by dimensions that vary across time. Finally, we consider whether collectives that are not merely metaphorically ‘agents’ but rather are genuinely agentive also instantiate subjectivity at the collective level. We propose that investigations using the perceptual crossing paradigm suggest that a shared lived perspective can indeed emerge but this should not be conflated with a collective first-person perspective, for which material integration in a living body may be required.
... In contrast, Gounaris & Elkheir (2018), Morse et al. (2011), Penny & Dreyfus (1994), Hoffman (2012), Zambak (2013) and Janlert (1985) contest the value of EAI in developing RHRs with multitasking and sensory capabilities in real-world environments, as sensorimotor data only constitutes the ability to instinctively respond and react to an external stimulus, as opposed to understanding the relationship between the action and the stimulus. For example, Dennett (1984a) suggests that burning one's hand instigates an automatic, instinctive physical response; this mechanism is replicable in RHR/EAI design. ...
Thesis
Full-text available
The human face is the most natural interface for face-to-face communication, and the human form is the most effective design for traversing the human-made areas of the planet. Thus, developing realistic humanoid robots (RHRs) with artificial intelligence (AI) permits humans to interact with technology and integrate it into society in a naturalistic manner unmatched by any other form of non-biological human emulation. However, RHRs have yet to attain a level of emulation that is indistinguishable from the human condition, and fall into the uncanny valley (UV). The UV represents a dip in human perception, where affinity decreases with heightened levels of human likeness. According to qualified research into the UV, artificial eyes and mouths are the primary propagators of the uncanny valley effect (UVE) and reduce human likeness and affinity towards RHRs. In consideration, this thesis introduces, tests and comparatively assesses a pair of novel robotic eye prototypes with dilating pupils capable of simultaneously replicating the natural pupillary responses of the human eyes to light and emotion. The robotic pupil systems act as visual signifiers of sentience and emotion to enhance eye-contact interfacing in human-robot interaction (HRI). Secondly, this study presents the design, development and application of a novel robotic mouth system with buccinator actuators and custom machine learning (ML) speech synthesis for mouth articulation, forming complex lip shapes (visemes) to emulate human mouth and lip patterns for vowel and consonant sounds. The robotic eyes and mouth system were installed in two RHRs named 'Euclid' and 'Baudi' and tested for accuracy and processing rate against a living human counterpart.
The results of these experiments indicated that the robotic eyes are capable of dilating within the average pupil range of the human eyes in response to light and emotion, and the robotic mouth operated with an 86.7% accuracy rating when compared against the lip movement of a human mouth during verbal communication. An HRI experiment was conducted using the RHRs and biometric sensors to monitor the physiological responses of test subjects for cross-analysis with a questionnaire. The sample consists of twenty individuals with experience in AI, robotics and related fields, recruited to examine the authenticity, accuracy and functionality of the prototypes. The robotic mouth prototype achieved 20/20 for aesthetic and lip-synchronisation accuracy compared to a robotic mouth with the buccinator actuators deactivated, heightening the potential application of the system in future RHR design. However, to reduce influential factors, test subjects were not informed of the dilating eye system, which resulted in only 2/20 of test subjects noticing the pupil dilation sequences in response to emotive facial expressions (FEs) and light. Moreover, the eye-contact behaviours of the RHRs were more significant than pupil dilation FEs and eye aesthetics during HRI, counter to previous research on the UVE in HRI. Finally, this study outlines a novel theoretical evaluation framework founded on the 1950 Turing Test (TT) for AI, named the Multimodal Turing Test (MTT), for evaluating human-likeness and interpersonal and intrapersonal intelligence in the design of RHRs and realistic virtual humanoids (RVHs) with embodied artificial intelligence (EAI). The MTT is significant in RHR development as current methods of evaluation, such as the Total Turing Test (TTT), Truly Total Turing Test (TTTT), Robot Turing Test (RTT), Turing Handshake Test (THT), Handshake Turing Test (HTT) and TT, are not nuanced and comprehensive enough to evaluate the functions of an RHR/RVH simultaneously and pinpoint the causes of the UVE.
Furthermore, unlike previous methods, the MTT provides engineers with a developmental framework to assess degrees of human-likeness in RHR and RVH design towards more advanced and accurate modes of human emulation.
... The major open questions with respect to "innate" perception, attention, and sociality are the nature of bilateral interactions in the human neonate and the implications of this physiological organization on behavioral and ontogenic timescales. The former can be addressed experimentally in neonates, while the latter can be explored via longitudinal empirical designs, mathematical modeling and developmental robotics (Morse et al., 2011). The formal model and agent-based simulations presented here are intended in a purely pedagogical sense, to specify and clarify our argument and demonstrate its consequences, and provide no evidence that newborn perception employs similar mechanisms. ...
Article
Full-text available
Empirical studies have revealed remarkable perceptual organization in neonates. Newborn behavioral distinctions have often been interpreted as implying functionally specific modular adaptations, and are widely cited as evidence supporting the nativist agenda. In this theoretical paper, we approach newborn perception and attention from an embodied, developmental perspective. At the mechanistic level, we argue that a generative mechanism based on mutual gain control between bilaterally corresponding points may underlie a number of functionally defined "innate predispositions" related to spatial-configural perception. At the computational level, bilateral gain control implements beamforming, which enables spatial-configural tuning at the front-end sampling stage. At the psychophysical level, we predict that selective attention in newborns will favor contrast energy which projects to bilaterally corresponding points on the neonate subject's sensor array. The current work extends and generalizes previous work to formalize the bilateral correlation model of newborn attention at a high level, and demonstrates in minimal agent-based simulations how bilateral gain control can enable a simple, robust and "social" attentional bias.
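The computational-level claim, that mutual gain control favors contrast energy projecting to bilaterally corresponding points, can be illustrated with a toy sketch. This is our own minimal illustration, not the paper's model; the arrays and values are hypothetical. Each point's response is scaled by the activity at its mirrored partner, so bilaterally symmetric stimuli win the attention bias even at equal total contrast energy.

```python
# Toy sketch of mutual gain control between bilaterally corresponding
# sensor points (illustrative only). Each element of `left` is paired
# with the mirror-corresponding element of `right`; multiplying the
# pair implements mutual gain: a point's response survives only if its
# bilateral partner is also active.

def bilateral_salience(left, right):
    # sum of mutually gain-controlled responses across corresponding points
    return sum(l * r for l, r in zip(left, right))

# two stimuli with different bilateral symmetry
symmetric = bilateral_salience([1.0, 0.5, 0.0], [1.0, 0.5, 0.0])
one_sided = bilateral_salience([1.0, 0.5, 0.0], [0.0, 0.0, 1.5])
```

The symmetric stimulus yields a positive salience while the one-sided stimulus yields none, illustrating how a front-end bilateral correlation can bias attention toward midline-symmetric (for example, face-like) input without any face-specific machinery.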
... Robots are, no doubt, considered 'embodied' by most AI researchers, and are in fact the obvious AI approach to modeling natural embodied cognition or synthesizing artificial equivalents thereof (e.g. Morse et al., 2011). Much embodied AI research, however, also makes use of simulated robots or other types of non-physical agents, e.g. ...
Article
Full-text available
Embodied cognition is a hot topic in both cognitive science and AI, despite the fact that there is still relatively little consensus regarding what exactly constitutes 'embodiment'. While most embodied AI research views the body as the physical/sensorimotor interface that allows computational cognitive processes to be grounded in sensorimotor interactions with the environment, more biologically-based notions of embodied cognition emphasize the fundamental role that the living body, and more specifically its homeostatic/allostatic self-regulation, plays in grounding both sensorimotor interactions and embodied cognitive processes. Adopting the latter position, a multi-tiered affectively embodied view of cognition in living systems, it is further argued that synthetic biology research can make significant contributions to modeling and understanding the nature of organisms as layered networks of bodily self-regulation mechanisms.
... Humanoid robotics enables the extension of computational neuroscience to ecologically embedded context, analogous to the in-vitro vs. in-vivo distinction in neuroscience. This facilitates proof-of-concept numerical simulations of neuroethological theories which depend fundamentally on embodiment and situation, as well as neural computations (Morse, Herrera, Clowes, Montebelli & Ziemke, 2011). In the following section, we first briefly review related empirical and theoretical work on NFP and newborn binocularity. ...
Article
Human expertise in face perception grows over development, but even within minutes of birth, infants exhibit an extraordinary sensitivity to face-like stimuli. The dominant theory accounts for innate face detection by proposing that the neonate brain contains an innate face detection device, dubbed 'Conspec'. Newborn face preference has been promoted as some of the strongest evidence for innate knowledge, and forms a canonical stage for the modern form of the nature-nurture debate in psychology. Interpretation of newborn face preference results has concentrated on monocular stimulus properties, with little mention or focused investigation of potential binocular involvement. However, the question of whether and how newborns integrate the binocular visual streams bears directly on the generation of observable visual preferences. In this theoretical paper, we employ a synthetic approach utilizing robotic and computational models to draw together the threads of binocular integration and face preference in newborns, and demonstrate cases where the former may explain the latter. We suggest that a system-level view considering the binocular embodiment of newborn vision may offer a mutually satisfying resolution to some long-running arguments in the polarizing debate surrounding the existence and causal structure of newborns' 'innate knowledge' of faces.
... It is not then that we just need modeling to provide greater detail in proposed theories; we also need to demonstrate integration between or across phenomena. We need the same model (not just the same methodology or approach) to account for a variety of phenomena and thus to link them to the same underlying mechanisms and dynamics (Morse, DeGreeff, Belpaeme, & Cangelosi, 2010; Morse, Herrera, Clowes, Montebelli, & Ziemke, 2011; Newell, 1990). Developmental robotics takes this a step further, forcing not only the integration of cognitive faculties but of perceptual and behavioral abilities too, as an autonomous robot that cannot integrate all the way from sensory stimuli to behavioral output is simply insufficient. ...
Article
Most theories of learning would predict a gradual acquisition and refinement of skills as learning progresses, and while some highlight exponential growth, this fails to explain why natural cognitive development typically progresses in stages. Models that do span multiple developmental stages typically have parameters to “switch” between stages. We argue that by taking an embodied view, the interaction between learning mechanisms, the resulting behavior of the agent, and the opportunities for learning that the environment provides can account for the stage-wise development of cognitive abilities. We summarize work relevant to this hypothesis and suggest two simple mechanisms that account for some developmental transitions: neural readiness focuses on changes in the neural substrate resulting from ongoing learning, and perceptual readiness focuses on the perceptual requirements for learning new tasks. Previous work has demonstrated these mechanisms in replications of a wide variety of infant language experiments, spanning multiple developmental stages. Here we piece this work together as a single model of ongoing learning with no parameter changes at all. The model, an instance of the Epigenetic Robotics Architecture (Morse et al., 2010) embodied on the iCub humanoid robot, exhibits ongoing multi-stage development while learning pre-linguistic and then basic language skills.
... A robot, whose own body would ground these concepts in (possibly subtly) different ways, could make use of this information in interaction with human beings. Overall, then, the take-home message is that using robots in vocal interaction requires the researcher to be explicit about all aspects of the necessary model [see Morse et al. (2011), for a similar point]. ...
Article
Full-text available
Almost all animals exploit vocal signals for a range of ecologically motivated purposes: detecting predators/prey and marking territory, expressing emotions, establishing social relations, and sharing information. Whether it is a bird raising an alarm, a whale calling to potential partners, a dog responding to human commands, a parent reading a story with a child, or a business-person accessing stock prices using Siri, vocalization provides a valuable communication channel through which behavior may be coordinated and controlled, and information may be distributed and acquired. Indeed, the ubiquity of vocal interaction has led to research across an extremely diverse array of fields, from assessing animal welfare, to understanding the precursors of human language, to developing voice-based human–machine interaction. Opportunities for cross-fertilization between these fields abound; for example, using artificial cognitive agents to investigate contemporary theories of language grounding, using machine learning to analyze different habitats or adding vocal expressivity to the next generation of language-enabled autonomous social agents. However, much of the research is conducted within well-defined disciplinary boundaries, and many fundamental issues remain. This paper attempts to redress the balance by presenting a comparative review of vocal interaction within-and-between humans, animals, and artificial agents (such as robots), and it identifies a rich set of open research questions that may benefit from an interdisciplinary analysis.
... In his opinion, a cognitive architecture can be used for creating multiple micro-theories and offers a unifying perspective on how the mind works. In a more contemporary context, researchers from the field of cognitive robotics suggested that unified cognitive-robotic architectures could be used to unify research efforts (Morse et al. 2011). ...
Article
Full-text available
Is there a field of social intelligence? Many different disciplines approach the subject, and it may seem natural to suppose that different fields of study aim at explaining different phenomena; in other words, that there is no special field of study of social intelligence. In this paper, I argue for the opposite claim. Namely, there is a way to integrate research on social intelligence, as long as one accepts the mechanistic account of explanation. Mechanistic integration of different explanations, however, comes at a cost: mechanism requires explanatory models to be fairly complete and realistic, and this does not seem to be the case for many models concerning social intelligence, especially models of economic behavior. Such models must either be made more realistic, or they do not count as contributing to the same field. I stress that the focus on integration does not lead to ruthless reductionism; on the contrary, mechanistic explanations are best understood as explanatorily pluralistic.
... As for the case of decision making, inspiration may be sought from the neuroscientific data, although given that specific cognitive competencies in humans (be it memory, decision making, planning, etc.) are also typically studied in relative isolation, this may provide only an incomplete solution. This is therefore a domain in which work on synthetic cognitive architectures such as DAIM can feed back into the biological literature, by providing models for generating new hypotheses for empirical investigation with humans (Morse et al., 2011). ...
Article
With increasingly competent robotic systems desired and required for social human–robot interaction comes the necessity for more complex means of control. Cognitive architectures (specifically the perspective where principles of structure and function are sought to account for multiple cognitive competencies) have only relatively recently been considered for application to this domain. In this paper, we describe one such set of architectural principles – activation dynamics over a developmental distributed associative substrate – and show how this enables an account of a fundamental competence for social cognition: multi-modal behavioural alignment. Data from real human–robot interactions is modelled using a computational system based on this set of principles to demonstrate how this competence can therefore be considered as embedded in wider cognitive processing. It is shown that the proposed system can model the behavioural characteristics of human subjects. While this study is a simulation using real interaction data, the results obtained validate the application of the proposed approach to this issue.
... These models may be the best answer to the triviality challenge so far, as they demonstrate non-trivial bodily capacities of robots that explain cognition in a biologically plausible way. Cognitive roboticists believe that the physical machinery of the robot can help explain cognitive processes in an integrated fashion (Morse et al., 2011). I discuss two issues related to these parts of the research tradition of embodied cognition: how robots may explain behavior and cognition and how the body computes morphologically. ...
Chapter
Does embodied cognition clash with the computational theory of mind? The critics of computational modeling claim that computational models cannot account for the bodily foundation of cognition, and hence miss essential features of cognition. In this chapter, I argue that it is natural to integrate computational modeling with bodily explanations of cognition. Such explanations may include factors suggested by proponents of embodied cognition. Not only is there no conflict between embodied cognition and computationalism, but embodied cognition alone turns out to be fairly limited in its explanatory scope because it does not track proper invariant generalizations of all cognitive phenomena; some phenomena do not depend straightforwardly on embodiment alone but also on temperamental differences, individual learning history, cultural factors, and so on. This is why it works best when accompanied with other explanatory tools.
... Perceiving something as a table means that we perceive it as something upon which we can exercise a particular skill. On the other hand, knowledge of tables does not merely correspond to an understanding that a table can afford a place to eat or to read, but a further understanding of how the agent's relation to a table will change according to various actions that are context-dependent. What matters is not the intrinsic character of an agent's sensations or the information it gathers while listening to or looking at its environment, but the implicit understanding of the structure of its own sensations (Morse, Herrera, Clowes, Montebelli, & Ziemke, 2011; Noë, 2004) and the ways this structure changes and reconfigures its architecture in the context of various actions. In that sense, modal representations might count as embodied in the strict sense, but they cannot enable the artificial agent to truly interact with its own environment. ...
Conference Paper
Full-text available
In recent years, Artificial Intelligence (AI) has concentrated its research interest on the philosophical theories of embodied cognition (EC). Seeking a way out of GOFAI's dead-end attempt to develop intelligent robots with the ability to perform complex tasks in unknown and changing environments, AI adopted basic principles of EC, like the body's direct interactions with the world. This view inspired AI researchers to abandon traditional "sense-plan-act" architectures in favor of bottom-up approaches that focused on the integration of action and perception. However, the embodied AI community tried to integrate these concepts by encapsulating them into frameworks that relied heavily on fundamental assumptions and mechanisms that violated core principles of EC, both ontologically and practically: the utilization of sensorimotor knowledge in the sense of information extraction, instead of in the sense of a meaningful "know-how"; the utilization of internal states and representations coupled with computational information-processing-based architectures; and the conceptualization of affordances grounded in categorization and internal representations. In this paper our objective is to identify and classify the fundamental principles and properties by which embodied AI and EC differ, in a philosophical as well as a technical context. In our view, grasping these fundamental ontological bounds, apart from being philosophically interesting, will contribute to understanding the limits of the capabilities of embodied AI compared to the concept of embodied cognition.
... Combining computer science and information science (Wang, 2011). Cognition results from the interaction between an individual and environmental and social factors and may not be immediately obvious or conscious (Morse et al., 2011). ...
... On large-scale adaptive analog technology and Very-Large-Scale-Integration (VLSI) digital circuits see also Mahowald & Douglas (1991). On robotic modeling in cognitive science see Morse et al. (2011). On the "biological" difficulty to scale the complexity of living beings to build robots at a human-cognition level in situated robotics see the worries by Brooks dating back to 1997: "Perhaps it is the case that all the approaches to building intelligent systems are just completely off-base, and are doomed to fail. ...
Article
Full-text available
In this article, I deal with a conceptual issue concerning the framework of two special sciences, artificial intelligence and synthetic biology: the distinction between the natural and the artificial (a long-standing topic in the history of scientific thought since ancient philosophy). My claim is that the standard definition of the “artificial” is no longer useful to describe some present-day artificial sciences, as the boundary between the natural and the artificial is not as sharp and clear-cut as it was in the past. Artificial intelligence and synthetic biology, two disciplines with new technologies, new experimental methods, and new theoretical frameworks, both need a new, more specific, and refined definition of the “artificial”, one also related to the use of the synthetic method to build real-world entities in open-ended (real or virtual) environments. The necessity of a new definition of the artificial is due to the close relationship of AI and synthetic biology with biology itself. Both are engineering sciences that are moving closer and closer, at least apparently, towards (natural) biology, although from different and opposite directions. I show how the new concept of the artificial is, therefore, the result of a new view of biology from an engineering and synthetic point of view, where the boundary between the natural and the artificial is far more blurred. From this, I try to formulate a brand-new, more useful definition of the artificial for the future understanding, practical, and epistemological purposes of these two artificial sciences.
... Although detractors insist that this latter thesis only holds with respect to a highly restricted notion of representation (e.g., Clark & Toribio, 1994;Susswein & Racine, 2009), these considerations seem to have culminated in an interest in testing robotic implementations of developmental theories. On some understandings, embodiment is seen as incompatible with the thesis that classical divisions of mental architecture (into sensory and motor systems, for instance) are unjustified or problematic (Brooks, 1990;Morse, Herrera, Clowes, Montebelli, & Ziemke, 2011;Pfeifer & Scheier, 1999). Cognition, on such accounts, is best seen as a property of an entire dynamical system and cognition is typically thought of, to use Clark and Chalmers' (1998) term of art, as extendedthe form of an organism's body is thought of as a functional constituent of the system's cognitive capacities. ...
Article
We present an approach to subjective computing for the design of future robots that exhibit more adaptive and flexible behavior in terms of subjective intelligence. Instead of encapsulating subjectivity into higher order states, we show by means of a relational approach how subjective intelligence can be implemented in terms of the reciprocity of autonomous self-referentiality and direct world-coupling. Subjectivity concerns the relational arrangement of an agent's cognitive space. This theoretical concept is narrowed down to the problem of coaching a reinforcement learning agent by means of binary feedback. Algorithms are presented that implement subjective computing. The relational characteristic of subjectivity is further confirmed by a questionnaire on human perception of the robot's behavior. The results imply that subjective intelligence cannot be externally observed. In sum, we conclude that subjective intelligence in relational terms is fully tractable and therefore implementable in artificial agents.
Thesis
Full-text available
This study concerns the creation of a Robot Autonomy Perception Scale (Echelle de Perception d’Autonomie du Robot, EPAR). Accepting the presence of the three factors drawn from the literature (related to the individual, to the robot, and to the context) that influence the formation of attitudes towards robots, we believe there is a fourth, more important factor, playing a mediating role that influences the feeling of anxiety towards the robot. Following the presentation of two robots (autonomous versus non-autonomous), each participant chose the one they preferred and then completed the EPAR and the NARS. Our results validate this scale across 5 dimensions of autonomy (development, functionality, safety, motor ability, energy). We can say that the degree of perceived autonomy explains part of the attitudes towards robots, and that women, like people preferring a non-autonomous robot (versus an autonomous one), have more negative attitudes. Finally, we propose a general profile of the autonomous robot, as well as an explanatory model of the links between the dimensions of this degree of perceived autonomy and attitudes, with safety as the main dimension (which positively impacts the three dimensions of the NARS).
Article
In this paper, we model a relational notion of subjectivity by means of two experiments in subjective computing. The goal is to determine to what extent a cognitive and social robot can be regarded as acting subjectively. The system was implemented as a reinforcement learning agent with a coaching function. To analyze the robotic agent we used the method of levels of abstraction, describing the agent at four levels of abstraction. At one level the agent is described in mentalistic or subjective language respectively. By mapping this mentalistic level to an algorithmic, functional, and relational level, we can show to what extent the agent behaves subjectively, as we make use of a relational concept of subjectivity that draws upon the relations that hold between the agent and its environment. According to a relational notion of subjectivity, an agent is supposed to be subjective if it exhibits autonomous relations to itself and others, i.e., the agent is not fully determined by a given input but is able to operate on its input and decide what to do with it. This theoretical notion is confirmed by the technical implementation of self-referentiality and social interaction in that the agent shows improved behavior compared to agents without the ability of subjective computing. On the one hand, a relational concept of subjectivity is confirmed, whereas on the other hand, the technical framework of subjective computing is being theoretically founded.
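The coaching function this abstract describes can be sketched, purely as an illustration, with a tabular Q-learning agent whose environment reward is shaped by a binary coach signal. The toy chain world, parameter values, and the name `coached_q_learning` are assumptions made for the sketch, not details from the paper:

```python
import random

def coached_q_learning(n_states=5, n_actions=2, episodes=200,
                       alpha=0.5, gamma=0.9, epsilon=0.1, seed=0):
    """Tabular Q-learning in a toy chain world where a 'coach' adds
    binary feedback (+1/-1) to the environment reward. Illustrative
    sketch only; task and parameters are hypothetical."""
    rng = random.Random(seed)
    q = [[0.0] * n_actions for _ in range(n_states)]
    for _ in range(episodes):
        s = 0
        while s < n_states - 1:
            # epsilon-greedy action selection
            a = (rng.randrange(n_actions) if rng.random() < epsilon
                 else max(range(n_actions), key=lambda a_: q[s][a_]))
            # action 1 moves right, action 0 moves left along the chain
            s2 = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
            env_reward = 1.0 if s2 == n_states - 1 else 0.0
            coach = 1.0 if a == 1 else -1.0   # binary coaching signal
            r = env_reward + 0.1 * coach      # coach shapes, not replaces, reward
            q[s][a] += alpha * (r + gamma * max(q[s2]) - q[s][a])
            s = s2
    return q
```

Here the coach signal only shapes the reward rather than replacing it, so the agent is still driven by its own value estimates, in the spirit of an agent that "operates on its input" rather than being fully determined by it.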
Chapter
Sensorimotor theories of perception are highly appealing to A.I. due to their apparent simplicity and power; however, they are not problem free either. This paper presents a frank appraisal of sensorimotor perception, discussing and highlighting the good, the bad, and the ugly with respect to a potential sensorimotor A.I.
Thesis
Full-text available
Hierarchical Dirichlet Process mixture models (HDPMM) have recently been introduced not only in cognitive psychology but also in cognitive robotics. It was postulated in the context of categorization research that HDPMMs not only combine the strengths of all previous rational categorization procedures, but also unify the two prominent theories of categorization, the exemplar and prototype views, in a common categorization model. Consequently, findings were successfully modeled for which the striking properties of an HDPMM, namely the dynamic adaptation of model complexity to the available data and the ability to share clusters, appeared to be key mechanisms. Researchers from cognitive robotics interpreted these successes as evidence for a good model of human categorization. In fact, however, HDPMMs are a group of rational models that have so far hardly been explored. Hence, in this thesis the HDPMM introduced in cognitive psychology by Griffiths, Canini, Sanborn and Navarro (2007) is extensively evaluated. The study investigates the predictive power of the model regarding seven classical findings on human categorization, which pose serious challenges for many current categorization models. It is demonstrated that two modifications to the model of Griffiths et al. (2007) are sufficient to successfully predict the majority of the findings. Furthermore, the HDPMM is compared with a prominent and well-established categorization model, the Supervised and Unsupervised Stratified Adaptive Incremental Network (SUSTAIN), with regard to data fitting and model flexibility, where the modified model of Griffiths and colleagues holds its own in the majority of the experiments. Finally, further possibilities for improvement, especially with regard to application in the robotics context, are discussed.
Article
According to Noë's enactive theory of perception sensorimotor knowledge allows us to predict the sensory outcomes of our actions. This paper suggests that tuning input filters with such predictions may be the cause of sustained inattentional blindness. Most models of learning capture statistically salient regularities in and between data streams. Such analysis is, however, severely limited by both the problem of marginal regularity and the credit assignment problem. A neurocomputational reservoir system can be used to alleviate these problems without training by enhancing the separability of regularities in input streams. However, as the regularities made separable vary with the state of the reservoir, feedback in the form of predictions of future sensory input, can both enhance expected discriminations and hinder unanticipated ones. This renders the model blind to features not made separable in the regions of state space the reservoir is manipulated toward. This is demonstrated in a computational model of sustained inattentional blindness, leading to predictions about human behaviour that have yet to be tested.
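The reservoir mechanism this abstract appeals to can be illustrated with a minimal echo-state-style network: a fixed random recurrent layer driven by the input stream, plus an optional top-down prediction signal that pushes the reservoir toward expected regions of state space (and so changes which input regularities remain separable). The network size, weight ranges, and feedback scheme below are illustrative assumptions, not the paper's actual model:

```python
import math
import random

def reservoir_states(inputs, feedback=None, n=50, seed=1, leak=0.5):
    """Drive a fixed random reservoir with a scalar input stream.
    'feedback' models top-down predictions injected into the same
    state; all sizes and scalings are illustrative assumptions."""
    rng = random.Random(seed)
    w_in = [rng.uniform(-1, 1) for _ in range(n)]   # input weights
    w_fb = [rng.uniform(-1, 1) for _ in range(n)]   # feedback weights
    # sparse random recurrent weights, small scale for stability
    w = [[rng.uniform(-0.1, 0.1) if rng.random() < 0.1 else 0.0
          for _ in range(n)] for _ in range(n)]
    x = [0.0] * n
    states = []
    for t, u in enumerate(inputs):
        fb = feedback[t] if feedback is not None else 0.0
        pre = [w_in[i] * u + w_fb[i] * fb +
               sum(w[i][j] * x[j] for j in range(n)) for i in range(n)]
        # leaky-integrator update of the reservoir state
        x = [(1 - leak) * x[i] + leak * math.tanh(pre[i]) for i in range(n)]
        states.append(x)
    return states
```

In this sketch the reservoir is never trained: different input streams land in different regions of the high-dimensional state space, and the feedback stream shifts those regions, which is the sense in which predictions can enhance some discriminations while hindering unanticipated ones.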
Article
Full-text available
The development of the robot EcoBot II, which exhibits some partial form of energetic autonomy, is reported. Microbial Fuel Cells were used as the onboard self-sustaining power supply, which incorporated bacterial cultures from sewage sludge and employed oxygen from free air for oxidation in the cathode. This robot was able to perform phototaxis, temperature sensing and wireless transmission of sensed data when fed (amongst other substrates) with flies. This is the first robot in the world to utilise unrefined substrate and oxygen from free air, and to perform (three) different token tasks. The work presented in this paper focuses on the combination of flies (substrate) and oxygen (cathode) to power the EcoBot II.
Article
Full-text available
When looking at a scene, observers feel that they see its entire structure in great detail and can immediately notice any changes in it. However, when brief blank fields are placed between alternating displays of an original and a modified scene, a striking failure of perception is induced: identification of changes becomes extremely difficult, even when changes are large and made repeatedly. Identification is much faster when a verbal cue is provided, showing that poor visibility is not the cause of this difficulty. Identification is also faster for objects mentioned in brief verbal descriptions of the scene. These results support the idea that observers never form a complete, detailed representation of their surroundings. In addition, results also indicate that attention is required to perceive change, and that in the absence of localized motion signals it is guided on the basis of high-level interest.
Article
Full-text available
Dynamicism has provided cognitive science with important tools to understand some aspects of "how cognitive agents work" but the issue of "what makes something cognitive" has not been sufficiently addressed yet and, we argue, the former will never be complete without the latter. Behavioristic characterizations of cognitive properties are criticized in favor of an organizational approach focused on the internal dynamic relationships that constitute cognitive systems. A definition of cognition as adaptive-autonomy in the embodied and situated neurodynamic domain is provided: the compensatory regulation of a web of stability dependencies between sensorimotor structures is created and preserved during a historical/developmental process. We highlight the functional role of emotional embodiment: internal bioregulatory processes coupled to the formation and adaptive regulation of neurodynamic autonomy. Finally, we discuss a "minimally cognitive behavior program" in evolutionary simulation modeling suggesting that much is to be learned from a complementary "minimally cognitive organization program".
Article
Full-text available
A proposal for the biological grounding of intrinsic teleology and sense-making through the theory of autopoiesis is critically evaluated. Autopoiesis provides a systemic language for speaking about intrinsic teleology but its original formulation needs to be elaborated further in order to explain sense-making. This is done by introducing adaptivity, a many-layered property that allows organisms to regulate themselves with respect to their conditions of viability. Adaptivity leads to more articulated concepts of behaviour, agency, sense-construction, health, and temporality than those given so far by autopoiesis and enaction. These and other implications for understanding the organismic generation of values are explored.
Chapter
Full-text available
Emphasizing that word-forms are culturally selected, the paper takes a distributed view of language. This is used to frame evidence that, in ontogenesis, language emerges under dual control by adult and child. Since parties gear to each other’s biomechanics, norm-based behaviour prompts affective processes that drive prepared learning. This, it is argued, explains early stages in learning to talk. Next, this approach to external symbol grounding (ESG) is contrasted with ones where a similar problem is treated as internal to the agent. Then, turning to synthetic models, I indicate how the ESG can be used to model either populations of agents or dyads who, using complex signals, transform each other’s agency.
Article
Full-text available
The focus in blending theory on the dynamics of meaning construction makes it a productive tool for analysing psychological processes in a developmental perspective. However, blending theory has largely preserved the traditionally mentalist and individualist assumptions of classical cognitive science. This article argues for an extension of the range of both theory and data, to encompass the socially collaborative, culturally and materially grounded nature of the human mind. An approach to young children's symbolic play in terms of conceptual blending is presented, together with an analysis of an episode of sociodramatic play which highlights the role of cultural material objects as crucial meaning-bearing elements in the blend. From a developmental perspective, conceptual blending can be viewed as a microgenetic process, in which not only cognitive strategies, but social roles, relationships and identities are negotiated by participants in social and communicative interactions.
Article
Full-text available
Many researchers in embodied cognitive science and AI, and evolutionary robotics in particular, emphasize the interaction of brain, body and environment as crucial to the emergence of intelligent, adaptive behavior. Accordingly, the interaction between agent and environment, as well as the co-adaptation of artificial brains and bodies has been the focus of much research in evolutionary robotics. Hence, there are plenty of studies of robotic agents/species adapting to a given environment. Many animals, on the other hand, in particular humans, to some extent can choose to adapt the environment to their own needs instead of adapting (only) themselves. That alternative has been studied relatively little in robot experiments. This paper therefore presents some simple initial simulation experiments, in a delayed response task setting, that illustrate how the evolution of environment adaptation can serve to provide cognitive scaffolding that reduces the requirements for individual agents. Furthermore, theoretical implications, open questions and future research directions for evolutionary robotics are discussed.
Article
Full-text available
The ultimate goal of work in cognitive architecture is to provide the foundation for a system capable of general intelligent behavior. That is, the goal is to provide the underlying structure that would enable a system to perform the full range of cognitive tasks, employ the full range of problem solving methods and representations appropriate for the tasks, and learn about all aspects of the tasks and its performance on them. In this article we present SOAR, an implemented proposal for such an architecture. We describe its organizational principles, the system as currently implemented, and demonstrations of its capabilities.
Article
Full-text available
We describe a set of experiments investigating the role of natural language symbols in scaffolding situated action. Agents are evolved to respond appropriately to commands in order to perform simple tasks. We explore three different conditions, which show a significant advantage to the re-use of a public symbol system, through self-cueing leading to qualitative changes in performance. This is modelled by looping spoken output via the environment back to heard input. We argue this work can be linked to, and sheds new light on, the account of self-directed speech advanced by the developmental psychologist Vygotsky in his model of the development of higher cognitive function.
Article
Full-text available
this paper for subsumption programs have physical embodiments as copper wires which provide the medium to support the serial sensing of messages. Herbert has 30 infrared proximity sensors for local obstacle avoidance, an onboard manipulator with a number of simple sensors attached to the hand, and a laser light striping system to collect three dimensional depth data in a 60 degree wide swath in front of the robot out to a range of about 12 feet. A 256 pixel wide by 32 pixel high depth image is collected every second. Through a special purpose distributed serpentine memory, some number of the onboard 8 bit processors are able to expend about 30 instructions on each data pixel. By linking the processors in a chain we are able to implement quite good performance vision algorithms. Connell (1988b) is programming Herbert to wander around office areas, go into people's offices and steal empty soda cans from their desks. He has demonstrated obstacle avoidance and wall following, real-time recognition of soda-can-like objects and desk-like objects, and a set of 15 behaviors (Connell, 1988b) which drive the arm to physically search for a soda can in front of the robot, locate it and pick it up. These fifteen behaviors are shown as fifteen separate finite state machines in Figure 8.7. Herbert shows many instances of using the world as its own best model and as a communication medium. The laser-based table-like-object finder initiates a behavior which drives the robot closer to a table. It doesn't communicate with any other subsumption layers. However when the robot is close to a table there is a better chance that the laser-based
Article
Robotic cognitive modeling in the real world requires a level of integration and grounding rarely seen in more abstract modeling. However, like Newell we believe this is exactly the kind of integration needed to promote scientific cumulation in the cognitive sciences. We present a neural model of learning compatible with Noë's account of enactive perception. We highlight that accounts of enactive perception tend to oversimplify the problem of identifying contingent relationships and introduce a novel way to address the problem of marginal regularities. Finally, we describe a general (non-task-specific) model and present a number of real-world robotic experiments demonstrating a wide range of integrated psychological phenomena.
Article
Embodiment has become an important concept in many areas of cognitive science. There are, however, very different notions of exactly what embodiment is and what kind of body is required for what type of embodied cognition. Hence, while many nowadays would agree that humans are embodied cognizers, there is much less agreement on what kind of artifact could be considered embodied. This paper identifies and contrasts six different notions of embodiment which can roughly be characterized as (1) structural coupling between agent and environment, (2) historical embodiment as the result of a history of structural coupling, (3) physical embodiment, (4) organismoid embodiment, i.e. organism-like bodily form (e.g., humanoid robots), (5) organismic embodiment of autopoietic, living systems, and (6) social embodiment.
Article
How can the semantic interpretation of a formal symbol system be made intrinsic to the system, rather than just parasitic on the meanings in our heads? How can the meanings of the meaningless symbol tokens, manipulated solely on the basis of their (arbitrary) shapes, be grounded in anything but other meaningless symbols? The problem is analogous to trying to learn Chinese from a Chinese/Chinese dictionary alone. A candidate solution is sketched: Symbolic representations must be grounded bottom-up in nonsymbolic representations of two kinds: (1) "iconic representations," which are analogs of the proximal sensory projections of distal objects and events, and (2) "categorical representations," which are learned and innate feature-detectors that pick out the invariant features of object and event categories from their sensory projections. Elementary symbols are the names of these object and event categories, assigned on the basis of their (nonsymbolic) categorical representations. Higher-order (3) "symbolic representations," grounded in these elementary symbols, consist of symbol strings describing category membership relations (e.g., "An X is a Y that is Z").
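The grounding scheme sketched in this abstract, feature detectors that yield categorical representations, elementary symbols named after those categories, and higher-order symbol strings composed from them, can be illustrated with a toy example. The detectors, categories, and the zebra example ("a zebra is a horse that is striped") are illustrative inventions, not part of Harnad's proposal beyond its general shape.

```python
# "Categorical representations": feature detectors that pick out invariant
# features of (here, simulated) sensory projections.
detectors = {
    "horse":   lambda percept: percept["legs"] == 4 and percept["mane"],
    "striped": lambda percept: percept["stripes"],
}

def ground(percept):
    """Assign elementary symbols (category names) from nonsymbolic input."""
    return {name for name, detect in detectors.items() if detect(percept)}

def is_zebra(percept):
    """Higher-order symbolic representation: "an X is a Y that is Z"
    composed from elementary symbols that are themselves grounded."""
    return {"horse", "striped"} <= ground(percept)
```

The point of the sketch is that `is_zebra` bottoms out in detectors over sensory input rather than in further ungrounded symbols, so its interpretation is intrinsic to the system rather than parasitic on ours.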
Article
Emotions can be regarded as the manifestations of a system that realises multiple concerns and operates in an uncertain environment. Taking the concern realisation function as a starting point, it is argued that the major phenomena of emotion follow from considerations of what properties a subsystem implementing that function should have. The major phenomena are: the existence of the feelings of pleasure and pain, the importance of cognitive or appraisal variables, the presence of innate, pre-programmed behaviours as well as of complex constructed plans for achieving emotion goals, and the occurrence of behavioural interruption, disturbance and impulse-like priority of emotional goals. The system properties underlying these phenomena are facilities for relevance detection of events with regard to the multiple concerns, availability of relevance signals that can be recognised by the action system, and facilities for control precedence, or flexible goal priority ordering and shift. A computer program, ACRES, is described that is built upon the specifications provided by this emotion theory. It operates in an operator-machine interaction involving the task of executing a knowledge manipulation task (the knowledge domain happens to be about emotions). ACRES responds emotionally when one of his concerns (e.g. errorless input, being kept busy, receiving varied input, not being killed) is touched upon. Responses are social signals, shifts in resource allocations to the operator, interruption of current task-directed processing, and refusal to accept instructions. His flow of behaviour shows much of the preference-based predictability, response interference, goal shifts, and social signalling of human and animal emotional behaviour.
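The two system properties the abstract names, relevance detection against multiple concerns and control precedence over the current task, can be sketched as a small appraisal loop. The concern names echo those listed for ACRES, but the weights, severity scale, and threshold are invented for illustration and are not Frijda and Swagerman's actual mechanism.

```python
# Hypothetical concern weights (importance of each concern to the system).
concerns = {
    "errorless_input":  0.8,
    "being_kept_busy":  0.4,
    "not_being_killed": 1.0,
}

def appraise(event, concerns):
    """Relevance detection: signal strength for each concern the event touches."""
    return [(c, concerns[c] * event["severity"])
            for c in event["touches"] if c in concerns]

def control_precedence(current_task, event, concerns, threshold=0.5):
    """Interrupt task-directed processing when a relevance signal is strong
    enough; otherwise the current task keeps priority."""
    signals = appraise(event, concerns)
    if signals and max(strength for _, strength in signals) > threshold:
        top = max(signals, key=lambda cs: cs[1])[0]
        return f"handle:{top}"
    return current_task
```

A threat to a heavily weighted concern wins precedence even mid-task, which is the "impulse-like priority of emotional goals" the theory describes; weak signals leave task processing undisturbed.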
Article
A major problem in the field of emotion has been the wide variety of definitions that have been proposed. In an attempt to resolve the resulting terminological confusion, 92 definitions and 9 skeptical statements were compiled from a variety of sources in the literature of emotion. These definitions and statements were classified into an outline of 11 categories, on the basis of the emotional phenomena or theoretical issues emphasized. There are two traditional experiential categories of affect and cognition; three physical categories of external emotional stimuli, physiological mechanisms, and emotional/expressive behavior; definitions that emphasize disruptive or adaptive effects; definitions that emphasize the multiaspect nature of emotional phenomena, those that distinguish emotion from other processes, and those that emphasize the overlap between emotion and motivation; and skeptical or disparaging statements about the usefulness of the concept of emotion. The definitions are evaluated, trends are identified, and a model definition is proposed.
Article
The mechanisms for electron transfer from the microorganisms found in anaerobic sludge to the anode electrode in microbial fuel cells (MFCs) have been investigated. In doing so, both energy accumulation and improved performance were observed as a result of the addition of exogenous Na2SO4. Treatment of anaerobic sludge by centrifugation and washing can provide samples devoid of sulphide/sulphate. Addition of exogenous sulphate can give matched samples of S-deplete and S-replete suspensions. When these are compared in an experimental MFC, the power output of the S-deplete is only 20% that of the S-replete system. Moreover, repeated washing of the anodic chamber to remove suspended cells (leaving only cells attached to the electrode) and addition of buffer substrate gives MFCs that produce an output between 10 and 20% that of the control. We conclude that anaerobic sludge MFCs are a hybrid incorporating both natural mediator and anodophilic properties. We have also shown that disconnected MFCs (open circuit) continue to produce sulphide and, when reconnected, give an initial burst of power output, demonstrating accumulator-type activity.
Article
Where does the mind stop and the rest of the world begin? The question invites two standard replies. Some accept the intuitive demarcations of skin and skull, and say that what is outside the body is outside the mind. Others are impressed by arguments suggesting that the meaning of our words "just ain't in the head", and hold that this externalism about meaning carries over into an externalism about mind. We propose to pursue a third position. We will advocate an externalism about mind, but one that is in no way grounded in the debatable role of external reference in fixing the contents of our mental states. Rather, we advocate an *active externalism*, based on the active role of the environment in driving cognitive processes.
Article
This paper explores differences between Connectionist proposals for cognitive architecture and the sorts of models that have traditionally been assumed in cognitive science. We claim that the major distinction is that, while both Connectionist and Classical architectures postulate representational mental states, the latter but not the former are committed to a symbol-level of representation, or to a 'language of thought': i.e., to representational states that have combinatorial syntactic and semantic structure. Several arguments for combinatorial structure in mental representations are then reviewed. These include arguments based on the 'systematicity' of mental representation: i.e., on the fact that cognitive capacities always exhibit certain symmetries, so that the ability to entertain a given thought implies the ability to entertain thoughts with semantically related contents. We claim that such arguments make a powerful case that mind/brain architecture is not Connectionist at the cognitive level. We then consider the possibility that Connectionism may provide an account of the neural (or 'abstract neurological') structures in which Classical cognitive architecture is implemented. We survey a number of the standard arguments that have been offered in favor of Connectionism, and conclude that they are coherent only on this interpretation.
Article
Many current neurophysiological, psychophysical, and psychological approaches to vision rest on the idea that when we see, the brain produces an internal representation of the world. The activation of this internal representation is assumed to give rise to the experience of seeing. The problem with this kind of approach is that it leaves unexplained how the existence of such a detailed internal representation might produce visual consciousness. An alternative proposal is made here. We propose that seeing is a way of acting. It is a particular way of exploring the environment. Activity in internal representations does not generate the experience of seeing. The outside world serves as its own, external, representation. The experience of seeing occurs when the organism masters what we call the governing laws of sensorimotor contingency. The advantage of this approach is that it provides a natural and principled way of accounting for visual consciousness, and for the differences in the perceived quality of sensory experience in the different sensory modalities. Several lines of empirical evidence are brought forward in support of the theory, in particular: evidence from experiments in sensorimotor adaptation, visual "filling in," visual stability despite eye movements, change blindness, sensory substitution, and color perception.
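The core claim here, that seeing consists in mastering lawful relations between movement and sensory change rather than in activating an internal picture, lends itself to a tiny worked sketch: an agent that records, for each action, the sensory change it reliably produces. The one-dimensional world, the action names, and the averaging rule are all invented for illustration.

```python
from collections import defaultdict

def learn_contingencies(trials):
    """Learn sensorimotor contingencies from (action, before, after) trials:
    the lawful sensory change each motor action produces."""
    changes = defaultdict(list)
    for action, before, after in trials:
        changes[action].append(after - before)
    # Average over trials to extract the regularity.
    return {action: sum(ds) / len(ds) for action, ds in changes.items()}

def predict(contingencies, action, sensation):
    """Anticipate the sensation an action would yield: no stored picture of
    the world, only mastery of how sensation varies with movement."""
    return sensation + contingencies[action]

# A one-dimensional toy world: position is the only sensation.
trials = [("step_right", 0, 1), ("step_right", 3, 4), ("step_left", 2, 1)]
contingencies = learn_contingencies(trials)
```

Note that `predict` needs only the current sensation and the learned law; the world itself serves as the external representation the abstract describes.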
Article
The emerging viewpoint of embodied cognition holds that cognitive processes are deeply rooted in the body's interactions with the world. This position actually houses a number of distinct claims, some of which are more controversial than others. This paper distinguishes and evaluates the following six claims: (1) cognition is situated; (2) cognition is time-pressured; (3) we off-load cognitive work onto the environment; (4) the environment is part of the cognitive system; (5) cognition is for action; (6) off-line cognition is body based. Of these, the first three and the fifth appear to be at least partially true, and their usefulness is best evaluated in terms of the range of their applicability. The fourth claim, I argue, is deeply problematic. The sixth claim has received the least attention in the literature on embodied cognition, but it may in fact be the best documented and most powerful of the six claims.
Neurochemical networks encoding emotion and motivation: an evolutionary perspective Who needs emotions?: The brain meets the robot
  • A E Kelley
Kelley, A. E. (2005). Neurochemical networks encoding emotion and motivation: an evolutionary perspective. In J. M. Fellous, & M. A. Arbib (Eds.), Who needs emotions?: The brain meets the robot (pp. 29–77). Oxford University Press.
Six views of embodied cognition
  • M Wilson
Wilson, M. (2002). Six views of embodied cognition. Psychonomic Bulletin & Review, 9(4), 625–636.
The complex vehicles of human thought and the role of scaffolding, internalisation and semiotics in human representation. Paper presented at Adaptation and Representation, The Institute for Cognitive Sciences
  • R W Clowes
Clowes, R. W. (2007). The complex vehicles of human thought and the role of scaffolding, internalisation and semiotics in human representation. Paper presented at Adaptation and Representation, The Institute for Cognitive Sciences, Lyon, France.
Looking for Spinoza: Joy, sorrow, and the feeling brain
  • A Damasio
Damasio, A. (2003). Looking for Spinoza: Joy, sorrow, and the feeling brain. Harcourt.
Di Paolo, E. (2005). Autopoiesis, adaptivity, teleology, agency. Phenomenology and the Cognitive Sciences, 4(4), 429–452.
A theoretical exploration of mechanisms for coding the stimulus. Paper presented at Coding Processes in Human Memory
  • A Newell
Newell, A. (1972). A theoretical exploration of mechanisms for coding the stimulus. Paper presented at Coding Processes in Human Memory, Washington, D.C.